Dec 06 05:42:01 localhost kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec 06 05:42:01 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 06 05:42:01 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 06 05:42:01 localhost kernel: BIOS-provided physical RAM map:
Dec 06 05:42:01 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 06 05:42:01 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 06 05:42:01 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 06 05:42:01 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 06 05:42:01 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 06 05:42:01 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 06 05:42:01 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 06 05:42:01 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 06 05:42:01 localhost kernel: NX (Execute Disable) protection: active
Dec 06 05:42:01 localhost kernel: APIC: Static calls initialized
Dec 06 05:42:01 localhost kernel: SMBIOS 2.8 present.
Dec 06 05:42:01 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 06 05:42:01 localhost kernel: Hypervisor detected: KVM
Dec 06 05:42:01 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 06 05:42:01 localhost kernel: kvm-clock: using sched offset of 4197818485 cycles
Dec 06 05:42:01 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 06 05:42:01 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 06 05:42:01 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 06 05:42:01 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 06 05:42:01 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 06 05:42:01 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 06 05:42:01 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 06 05:42:01 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 06 05:42:01 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 06 05:42:01 localhost kernel: Using GB pages for direct mapping
Dec 06 05:42:01 localhost kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec 06 05:42:01 localhost kernel: ACPI: Early table checksum verification disabled
Dec 06 05:42:01 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 06 05:42:01 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 05:42:01 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 05:42:01 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 05:42:01 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 06 05:42:01 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 05:42:01 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 05:42:01 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 06 05:42:01 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 06 05:42:01 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 06 05:42:01 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 06 05:42:01 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 06 05:42:01 localhost kernel: No NUMA configuration found
Dec 06 05:42:01 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 06 05:42:01 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 06 05:42:01 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 06 05:42:01 localhost kernel: Zone ranges:
Dec 06 05:42:01 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 06 05:42:01 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 06 05:42:01 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 06 05:42:01 localhost kernel:   Device   empty
Dec 06 05:42:01 localhost kernel: Movable zone start for each node
Dec 06 05:42:01 localhost kernel: Early memory node ranges
Dec 06 05:42:01 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 06 05:42:01 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 06 05:42:01 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 06 05:42:01 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 06 05:42:01 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 06 05:42:01 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 06 05:42:01 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 06 05:42:01 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 06 05:42:01 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 06 05:42:01 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 06 05:42:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 06 05:42:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 06 05:42:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 06 05:42:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 06 05:42:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 06 05:42:01 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 06 05:42:01 localhost kernel: TSC deadline timer available
Dec 06 05:42:01 localhost kernel: CPU topo: Max. logical packages:   8
Dec 06 05:42:01 localhost kernel: CPU topo: Max. logical dies:       8
Dec 06 05:42:01 localhost kernel: CPU topo: Max. dies per package:   1
Dec 06 05:42:01 localhost kernel: CPU topo: Max. threads per core:   1
Dec 06 05:42:01 localhost kernel: CPU topo: Num. cores per package:     1
Dec 06 05:42:01 localhost kernel: CPU topo: Num. threads per package:   1
Dec 06 05:42:01 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 06 05:42:01 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 06 05:42:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 06 05:42:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 06 05:42:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 06 05:42:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 06 05:42:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 06 05:42:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 06 05:42:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 06 05:42:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 06 05:42:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 06 05:42:01 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 06 05:42:01 localhost kernel: Booting paravirtualized kernel on KVM
Dec 06 05:42:01 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 06 05:42:01 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 06 05:42:01 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 06 05:42:01 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 06 05:42:01 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 06 05:42:01 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 06 05:42:01 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 06 05:42:01 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec 06 05:42:01 localhost kernel: random: crng init done
Dec 06 05:42:01 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 06 05:42:01 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 06 05:42:01 localhost kernel: Fallback order for Node 0: 0 
Dec 06 05:42:01 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 06 05:42:01 localhost kernel: Policy zone: Normal
Dec 06 05:42:01 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 06 05:42:01 localhost kernel: software IO TLB: area num 8.
Dec 06 05:42:01 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 06 05:42:01 localhost kernel: ftrace: allocating 49335 entries in 193 pages
Dec 06 05:42:01 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 06 05:42:01 localhost kernel: Dynamic Preempt: voluntary
Dec 06 05:42:01 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 06 05:42:01 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 06 05:42:01 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 06 05:42:01 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 06 05:42:01 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 06 05:42:01 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 06 05:42:01 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 06 05:42:01 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 06 05:42:01 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 06 05:42:01 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 06 05:42:01 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 06 05:42:01 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 06 05:42:01 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 06 05:42:01 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 06 05:42:01 localhost kernel: Console: colour VGA+ 80x25
Dec 06 05:42:01 localhost kernel: printk: console [ttyS0] enabled
Dec 06 05:42:01 localhost kernel: ACPI: Core revision 20230331
Dec 06 05:42:01 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 06 05:42:01 localhost kernel: x2apic enabled
Dec 06 05:42:01 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 06 05:42:01 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 06 05:42:01 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 06 05:42:01 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 06 05:42:01 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 06 05:42:01 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 06 05:42:01 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 06 05:42:01 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 06 05:42:01 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 06 05:42:01 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 06 05:42:01 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 06 05:42:01 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 06 05:42:01 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 06 05:42:01 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 06 05:42:01 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 06 05:42:01 localhost kernel: x86/bugs: return thunk changed
Dec 06 05:42:01 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 06 05:42:01 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 06 05:42:01 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 06 05:42:01 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 06 05:42:01 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 06 05:42:01 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 06 05:42:01 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 06 05:42:01 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 06 05:42:01 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 06 05:42:01 localhost kernel: landlock: Up and running.
Dec 06 05:42:01 localhost kernel: Yama: becoming mindful.
Dec 06 05:42:01 localhost kernel: SELinux:  Initializing.
Dec 06 05:42:01 localhost kernel: LSM support for eBPF active
Dec 06 05:42:01 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 06 05:42:01 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 06 05:42:01 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 06 05:42:01 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 06 05:42:01 localhost kernel: ... version:                0
Dec 06 05:42:01 localhost kernel: ... bit width:              48
Dec 06 05:42:01 localhost kernel: ... generic registers:      6
Dec 06 05:42:01 localhost kernel: ... value mask:             0000ffffffffffff
Dec 06 05:42:01 localhost kernel: ... max period:             00007fffffffffff
Dec 06 05:42:01 localhost kernel: ... fixed-purpose events:   0
Dec 06 05:42:01 localhost kernel: ... event mask:             000000000000003f
Dec 06 05:42:01 localhost kernel: signal: max sigframe size: 1776
Dec 06 05:42:01 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 06 05:42:01 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 06 05:42:01 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 06 05:42:01 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 06 05:42:01 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 06 05:42:01 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 06 05:42:01 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 06 05:42:01 localhost kernel: node 0 deferred pages initialised in 10ms
Dec 06 05:42:01 localhost kernel: Memory: 7763984K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec 06 05:42:01 localhost kernel: devtmpfs: initialized
Dec 06 05:42:01 localhost kernel: x86/mm: Memory block size: 128MB
Dec 06 05:42:01 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 06 05:42:01 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 06 05:42:01 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 06 05:42:01 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 06 05:42:01 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 06 05:42:01 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 06 05:42:01 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 06 05:42:01 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 06 05:42:01 localhost kernel: audit: type=2000 audit(1764999720.206:1): state=initialized audit_enabled=0 res=1
Dec 06 05:42:01 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 06 05:42:01 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 06 05:42:01 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 06 05:42:01 localhost kernel: cpuidle: using governor menu
Dec 06 05:42:01 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 06 05:42:01 localhost kernel: PCI: Using configuration type 1 for base access
Dec 06 05:42:01 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 06 05:42:01 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 06 05:42:01 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 06 05:42:01 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 06 05:42:01 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 06 05:42:01 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 06 05:42:01 localhost kernel: Demotion targets for Node 0: null
Dec 06 05:42:01 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 06 05:42:01 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 06 05:42:01 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 06 05:42:01 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 06 05:42:01 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 06 05:42:01 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 06 05:42:01 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 06 05:42:01 localhost kernel: ACPI: Interpreter enabled
Dec 06 05:42:01 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 06 05:42:01 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 06 05:42:01 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 06 05:42:01 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 06 05:42:01 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 06 05:42:01 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 06 05:42:01 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [3] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [4] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [5] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [6] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [7] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [8] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [9] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [10] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [11] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [12] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [13] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [14] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [15] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [16] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [17] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [18] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [19] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [20] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [21] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [22] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [23] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [24] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [25] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [26] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [27] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [28] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [29] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [30] registered
Dec 06 05:42:01 localhost kernel: acpiphp: Slot [31] registered
Dec 06 05:42:01 localhost kernel: PCI host bridge to bus 0000:00
Dec 06 05:42:01 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 06 05:42:01 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 06 05:42:01 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 06 05:42:01 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 06 05:42:01 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 06 05:42:01 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 06 05:42:01 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 06 05:42:01 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 06 05:42:01 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 06 05:42:01 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 06 05:42:01 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 06 05:42:01 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 06 05:42:01 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 06 05:42:01 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 06 05:42:01 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 06 05:42:01 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 06 05:42:01 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 06 05:42:01 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 06 05:42:01 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 06 05:42:01 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 06 05:42:01 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 06 05:42:01 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 06 05:42:01 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 06 05:42:01 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 06 05:42:01 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 06 05:42:01 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 06 05:42:01 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 06 05:42:01 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 06 05:42:01 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 06 05:42:01 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 06 05:42:01 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 06 05:42:01 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 06 05:42:01 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 06 05:42:01 localhost kernel: iommu: Default domain type: Translated
Dec 06 05:42:01 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 06 05:42:01 localhost kernel: SCSI subsystem initialized
Dec 06 05:42:01 localhost kernel: ACPI: bus type USB registered
Dec 06 05:42:01 localhost kernel: usbcore: registered new interface driver usbfs
Dec 06 05:42:01 localhost kernel: usbcore: registered new interface driver hub
Dec 06 05:42:01 localhost kernel: usbcore: registered new device driver usb
Dec 06 05:42:01 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 06 05:42:01 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 06 05:42:01 localhost kernel: PTP clock support registered
Dec 06 05:42:01 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 06 05:42:01 localhost kernel: NetLabel: Initializing
Dec 06 05:42:01 localhost kernel: NetLabel:  domain hash size = 128
Dec 06 05:42:01 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 06 05:42:01 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 06 05:42:01 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 06 05:42:01 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 06 05:42:01 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 06 05:42:01 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 06 05:42:01 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 06 05:42:01 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 06 05:42:01 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 06 05:42:01 localhost kernel: vgaarb: loaded
Dec 06 05:42:01 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 06 05:42:01 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 06 05:42:01 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 06 05:42:01 localhost kernel: pnp: PnP ACPI init
Dec 06 05:42:01 localhost kernel: pnp 00:03: [dma 2]
Dec 06 05:42:01 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 06 05:42:01 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 06 05:42:01 localhost kernel: NET: Registered PF_INET protocol family
Dec 06 05:42:01 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 06 05:42:01 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 06 05:42:01 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 06 05:42:01 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 06 05:42:01 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 06 05:42:01 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 06 05:42:01 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 06 05:42:01 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 06 05:42:01 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 06 05:42:01 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 06 05:42:01 localhost kernel: NET: Registered PF_XDP protocol family
Dec 06 05:42:01 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 06 05:42:01 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 06 05:42:01 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 06 05:42:01 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 06 05:42:01 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 06 05:42:01 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 06 05:42:01 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 06 05:42:01 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72517 usecs
Dec 06 05:42:01 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 06 05:42:01 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 06 05:42:01 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 06 05:42:01 localhost kernel: ACPI: bus type thunderbolt registered
Dec 06 05:42:01 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 06 05:42:01 localhost kernel: Initialise system trusted keyrings
Dec 06 05:42:01 localhost kernel: Key type blacklist registered
Dec 06 05:42:01 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 06 05:42:01 localhost kernel: zbud: loaded
Dec 06 05:42:01 localhost kernel: integrity: Platform Keyring initialized
Dec 06 05:42:01 localhost kernel: integrity: Machine keyring initialized
Dec 06 05:42:01 localhost kernel: Freeing initrd memory: 87804K
Dec 06 05:42:01 localhost kernel: NET: Registered PF_ALG protocol family
Dec 06 05:42:01 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 06 05:42:01 localhost kernel: Key type asymmetric registered
Dec 06 05:42:01 localhost kernel: Asymmetric key parser 'x509' registered
Dec 06 05:42:01 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 06 05:42:01 localhost kernel: io scheduler mq-deadline registered
Dec 06 05:42:01 localhost kernel: io scheduler kyber registered
Dec 06 05:42:01 localhost kernel: io scheduler bfq registered
Dec 06 05:42:01 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 06 05:42:01 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 06 05:42:01 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 06 05:42:01 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 06 05:42:01 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 06 05:42:01 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 06 05:42:01 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 06 05:42:01 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 06 05:42:01 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 06 05:42:01 localhost kernel: Non-volatile memory driver v1.3
Dec 06 05:42:01 localhost kernel: rdac: device handler registered
Dec 06 05:42:01 localhost kernel: hp_sw: device handler registered
Dec 06 05:42:01 localhost kernel: emc: device handler registered
Dec 06 05:42:01 localhost kernel: alua: device handler registered
Dec 06 05:42:01 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 06 05:42:01 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 06 05:42:01 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 06 05:42:01 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 06 05:42:01 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 06 05:42:01 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 06 05:42:01 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 06 05:42:01 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec 06 05:42:01 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 06 05:42:01 localhost kernel: hub 1-0:1.0: USB hub found
Dec 06 05:42:01 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 06 05:42:01 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 06 05:42:01 localhost kernel: usbserial: USB Serial support registered for generic
Dec 06 05:42:01 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 06 05:42:01 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 06 05:42:01 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 06 05:42:01 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 06 05:42:01 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 06 05:42:01 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 06 05:42:01 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 06 05:42:01 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-06T05:42:00 UTC (1764999720)
Dec 06 05:42:01 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 06 05:42:01 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 06 05:42:01 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 06 05:42:01 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 06 05:42:01 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 06 05:42:01 localhost kernel: usbcore: registered new interface driver usbhid
Dec 06 05:42:01 localhost kernel: usbhid: USB HID core driver
Dec 06 05:42:01 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 06 05:42:01 localhost kernel: Initializing XFRM netlink socket
Dec 06 05:42:01 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 06 05:42:01 localhost kernel: Segment Routing with IPv6
Dec 06 05:42:01 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 06 05:42:01 localhost kernel: mpls_gso: MPLS GSO support
Dec 06 05:42:01 localhost kernel: IPI shorthand broadcast: enabled
Dec 06 05:42:01 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 06 05:42:01 localhost kernel: AES CTR mode by8 optimization enabled
Dec 06 05:42:01 localhost kernel: sched_clock: Marking stable (1203008953, 150654028)->(1470619971, -116956990)
Dec 06 05:42:01 localhost kernel: registered taskstats version 1
Dec 06 05:42:01 localhost kernel: Loading compiled-in X.509 certificates
Dec 06 05:42:01 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 06 05:42:01 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 06 05:42:01 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 06 05:42:01 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 06 05:42:01 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 06 05:42:01 localhost kernel: Demotion targets for Node 0: null
Dec 06 05:42:01 localhost kernel: page_owner is disabled
Dec 06 05:42:01 localhost kernel: Key type .fscrypt registered
Dec 06 05:42:01 localhost kernel: Key type fscrypt-provisioning registered
Dec 06 05:42:01 localhost kernel: Key type big_key registered
Dec 06 05:42:01 localhost kernel: Key type encrypted registered
Dec 06 05:42:01 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 06 05:42:01 localhost kernel: Loading compiled-in module X.509 certificates
Dec 06 05:42:01 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 06 05:42:01 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 06 05:42:01 localhost kernel: ima: No architecture policies found
Dec 06 05:42:01 localhost kernel: evm: Initialising EVM extended attributes:
Dec 06 05:42:01 localhost kernel: evm: security.selinux
Dec 06 05:42:01 localhost kernel: evm: security.SMACK64 (disabled)
Dec 06 05:42:01 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 06 05:42:01 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 06 05:42:01 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 06 05:42:01 localhost kernel: evm: security.apparmor (disabled)
Dec 06 05:42:01 localhost kernel: evm: security.ima
Dec 06 05:42:01 localhost kernel: evm: security.capability
Dec 06 05:42:01 localhost kernel: evm: HMAC attrs: 0x1
Dec 06 05:42:01 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 06 05:42:01 localhost kernel: Running certificate verification RSA selftest
Dec 06 05:42:01 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 06 05:42:01 localhost kernel: Running certificate verification ECDSA selftest
Dec 06 05:42:01 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 06 05:42:01 localhost kernel: clk: Disabling unused clocks
Dec 06 05:42:01 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 06 05:42:01 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec 06 05:42:01 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 06 05:42:01 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec 06 05:42:01 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 06 05:42:01 localhost kernel: Run /init as init process
Dec 06 05:42:01 localhost kernel:   with arguments:
Dec 06 05:42:01 localhost kernel:     /init
Dec 06 05:42:01 localhost kernel:   with environment:
Dec 06 05:42:01 localhost kernel:     HOME=/
Dec 06 05:42:01 localhost kernel:     TERM=linux
Dec 06 05:42:01 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64
Dec 06 05:42:01 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 06 05:42:01 localhost systemd[1]: Detected virtualization kvm.
Dec 06 05:42:01 localhost systemd[1]: Detected architecture x86-64.
Dec 06 05:42:01 localhost systemd[1]: Running in initrd.
Dec 06 05:42:01 localhost systemd[1]: No hostname configured, using default hostname.
Dec 06 05:42:01 localhost systemd[1]: Hostname set to <localhost>.
Dec 06 05:42:01 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 06 05:42:01 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 06 05:42:01 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 06 05:42:01 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 06 05:42:01 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 06 05:42:01 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 06 05:42:01 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 06 05:42:01 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 06 05:42:01 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 06 05:42:01 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 06 05:42:01 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 06 05:42:01 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 06 05:42:01 localhost systemd[1]: Reached target Local File Systems.
Dec 06 05:42:01 localhost systemd[1]: Reached target Path Units.
Dec 06 05:42:01 localhost systemd[1]: Reached target Slice Units.
Dec 06 05:42:01 localhost systemd[1]: Reached target Swaps.
Dec 06 05:42:01 localhost systemd[1]: Reached target Timer Units.
Dec 06 05:42:01 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 06 05:42:01 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 06 05:42:01 localhost systemd[1]: Listening on Journal Socket.
Dec 06 05:42:01 localhost systemd[1]: Listening on udev Control Socket.
Dec 06 05:42:01 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 06 05:42:01 localhost systemd[1]: Reached target Socket Units.
Dec 06 05:42:01 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 06 05:42:01 localhost systemd[1]: Starting Journal Service...
Dec 06 05:42:01 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 06 05:42:01 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 06 05:42:01 localhost systemd[1]: Starting Create System Users...
Dec 06 05:42:01 localhost systemd[1]: Starting Setup Virtual Console...
Dec 06 05:42:01 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 06 05:42:01 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 06 05:42:01 localhost systemd-journald[305]: Journal started
Dec 06 05:42:01 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/effe0b74d2bb436fb6215e7c5f665fb5) is 8.0M, max 153.6M, 145.6M free.
Dec 06 05:42:01 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Dec 06 05:42:01 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Dec 06 05:42:01 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 06 05:42:01 localhost systemd[1]: Started Journal Service.
Dec 06 05:42:01 localhost systemd[1]: Finished Create System Users.
Dec 06 05:42:01 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 06 05:42:01 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 06 05:42:01 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 06 05:42:01 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 06 05:42:01 localhost systemd[1]: Finished Setup Virtual Console.
Dec 06 05:42:01 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 06 05:42:01 localhost systemd[1]: Starting dracut cmdline hook...
Dec 06 05:42:01 localhost dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Dec 06 05:42:01 localhost dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 06 05:42:01 localhost systemd[1]: Finished dracut cmdline hook.
Dec 06 05:42:01 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 06 05:42:01 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 06 05:42:01 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 06 05:42:01 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 06 05:42:01 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 06 05:42:01 localhost kernel: RPC: Registered udp transport module.
Dec 06 05:42:01 localhost kernel: RPC: Registered tcp transport module.
Dec 06 05:42:01 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 06 05:42:01 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 06 05:42:01 localhost rpc.statd[440]: Version 2.5.4 starting
Dec 06 05:42:01 localhost rpc.statd[440]: Initializing NSM state
Dec 06 05:42:01 localhost rpc.idmapd[445]: Setting log level to 0
Dec 06 05:42:01 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 06 05:42:01 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 06 05:42:01 localhost systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Dec 06 05:42:01 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 06 05:42:01 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 06 05:42:01 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 06 05:42:01 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 06 05:42:01 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 06 05:42:01 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 06 05:42:01 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 06 05:42:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 06 05:42:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 06 05:42:01 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 06 05:42:01 localhost systemd[1]: Reached target Network.
Dec 06 05:42:01 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 06 05:42:01 localhost systemd[1]: Starting dracut initqueue hook...
Dec 06 05:42:01 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 06 05:42:01 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 06 05:42:01 localhost kernel:  vda: vda1
Dec 06 05:42:01 localhost systemd-udevd[459]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 05:42:01 localhost kernel: libata version 3.00 loaded.
Dec 06 05:42:01 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 06 05:42:01 localhost kernel: scsi host0: ata_piix
Dec 06 05:42:01 localhost kernel: scsi host1: ata_piix
Dec 06 05:42:01 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 06 05:42:01 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 06 05:42:01 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 06 05:42:02 localhost systemd[1]: Reached target Initrd Root Device.
Dec 06 05:42:02 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 06 05:42:02 localhost kernel: ata1: found unknown device (class 0)
Dec 06 05:42:02 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 06 05:42:02 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 06 05:42:02 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 06 05:42:02 localhost systemd[1]: Reached target System Initialization.
Dec 06 05:42:02 localhost systemd[1]: Reached target Basic System.
Dec 06 05:42:02 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 06 05:42:02 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 06 05:42:02 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 06 05:42:02 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 06 05:42:02 localhost systemd[1]: Finished dracut initqueue hook.
Dec 06 05:42:02 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 06 05:42:02 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 06 05:42:02 localhost systemd[1]: Reached target Remote File Systems.
Dec 06 05:42:02 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 06 05:42:02 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 06 05:42:02 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 06 05:42:02 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Dec 06 05:42:02 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 06 05:42:02 localhost systemd[1]: Mounting /sysroot...
Dec 06 05:42:02 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 06 05:42:02 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 06 05:42:02 localhost kernel: XFS (vda1): Ending clean mount
Dec 06 05:42:02 localhost systemd[1]: Mounted /sysroot.
Dec 06 05:42:02 localhost systemd[1]: Reached target Initrd Root File System.
Dec 06 05:42:02 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 06 05:42:02 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 06 05:42:02 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 06 05:42:02 localhost systemd[1]: Reached target Initrd File Systems.
Dec 06 05:42:02 localhost systemd[1]: Reached target Initrd Default Target.
Dec 06 05:42:02 localhost systemd[1]: Starting dracut mount hook...
Dec 06 05:42:02 localhost systemd[1]: Finished dracut mount hook.
Dec 06 05:42:02 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 06 05:42:02 localhost rpc.idmapd[445]: exiting on signal 15
Dec 06 05:42:03 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 06 05:42:03 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 06 05:42:03 localhost systemd[1]: Stopped target Network.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Timer Units.
Dec 06 05:42:03 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 06 05:42:03 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Basic System.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Path Units.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Remote File Systems.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Slice Units.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Socket Units.
Dec 06 05:42:03 localhost systemd[1]: Stopped target System Initialization.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Local File Systems.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Swaps.
Dec 06 05:42:03 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped dracut mount hook.
Dec 06 05:42:03 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 06 05:42:03 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 06 05:42:03 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 06 05:42:03 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 06 05:42:03 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 06 05:42:03 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 06 05:42:03 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 06 05:42:03 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 06 05:42:03 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 06 05:42:03 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 06 05:42:03 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 06 05:42:03 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Closed udev Control Socket.
Dec 06 05:42:03 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Closed udev Kernel Socket.
Dec 06 05:42:03 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 06 05:42:03 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 06 05:42:03 localhost systemd[1]: Starting Cleanup udev Database...
Dec 06 05:42:03 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 06 05:42:03 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 06 05:42:03 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped Create System Users.
Dec 06 05:42:03 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Finished Cleanup udev Database.
Dec 06 05:42:03 localhost systemd[1]: Reached target Switch Root.
Dec 06 05:42:03 localhost systemd[1]: Starting Switch Root...
Dec 06 05:42:03 localhost systemd[1]: Switching root.
Dec 06 05:42:03 localhost systemd-journald[305]: Journal stopped
Dec 06 05:42:03 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Dec 06 05:42:03 localhost kernel: audit: type=1404 audit(1764999723.229:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 06 05:42:03 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 05:42:03 localhost kernel: SELinux:  policy capability open_perms=1
Dec 06 05:42:03 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 05:42:03 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 06 05:42:03 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 05:42:03 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 05:42:03 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 05:42:03 localhost kernel: audit: type=1403 audit(1764999723.353:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 06 05:42:03 localhost systemd[1]: Successfully loaded SELinux policy in 126.239ms.
Dec 06 05:42:03 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.070ms.
Dec 06 05:42:03 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 06 05:42:03 localhost systemd[1]: Detected virtualization kvm.
Dec 06 05:42:03 localhost systemd[1]: Detected architecture x86-64.
Dec 06 05:42:03 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 05:42:03 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped Switch Root.
Dec 06 05:42:03 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 06 05:42:03 localhost systemd[1]: Created slice Slice /system/getty.
Dec 06 05:42:03 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 06 05:42:03 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 06 05:42:03 localhost systemd[1]: Created slice User and Session Slice.
Dec 06 05:42:03 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 06 05:42:03 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 06 05:42:03 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 06 05:42:03 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Switch Root.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 06 05:42:03 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 06 05:42:03 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 06 05:42:03 localhost systemd[1]: Reached target Path Units.
Dec 06 05:42:03 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 06 05:42:03 localhost systemd[1]: Reached target Slice Units.
Dec 06 05:42:03 localhost systemd[1]: Reached target Swaps.
Dec 06 05:42:03 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 06 05:42:03 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 06 05:42:03 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 06 05:42:03 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 06 05:42:03 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 06 05:42:03 localhost systemd[1]: Listening on udev Control Socket.
Dec 06 05:42:03 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 06 05:42:03 localhost systemd[1]: Mounting Huge Pages File System...
Dec 06 05:42:03 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 06 05:42:03 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 06 05:42:03 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 06 05:42:03 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 06 05:42:03 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 06 05:42:03 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 06 05:42:03 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 06 05:42:03 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 06 05:42:03 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 06 05:42:03 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 06 05:42:03 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 06 05:42:03 localhost systemd[1]: Stopped Journal Service.
Dec 06 05:42:03 localhost systemd[1]: Starting Journal Service...
Dec 06 05:42:03 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 06 05:42:03 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 06 05:42:03 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 06 05:42:03 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 06 05:42:03 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 06 05:42:03 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 06 05:42:03 localhost kernel: fuse: init (API version 7.37)
Dec 06 05:42:03 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 06 05:42:03 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 06 05:42:03 localhost systemd-journald[676]: Journal started
Dec 06 05:42:03 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 06 05:42:03 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 06 05:42:03 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Mounted Huge Pages File System.
Dec 06 05:42:03 localhost systemd[1]: Started Journal Service.
Dec 06 05:42:03 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 06 05:42:03 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 06 05:42:03 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 06 05:42:03 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 06 05:42:03 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 06 05:42:03 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 06 05:42:03 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 06 05:42:03 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 06 05:42:03 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 06 05:42:03 localhost kernel: ACPI: bus type drm_connector registered
Dec 06 05:42:03 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 06 05:42:03 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 06 05:42:03 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 06 05:42:03 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 06 05:42:03 localhost systemd[1]: Mounting FUSE Control File System...
Dec 06 05:42:03 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 06 05:42:03 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 06 05:42:03 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 06 05:42:03 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 06 05:42:03 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 06 05:42:03 localhost systemd[1]: Starting Create System Users...
Dec 06 05:42:03 localhost systemd[1]: Mounted FUSE Control File System.
Dec 06 05:42:03 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 06 05:42:03 localhost systemd-journald[676]: Received client request to flush runtime journal.
Dec 06 05:42:03 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 06 05:42:03 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 06 05:42:03 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 06 05:42:03 localhost systemd[1]: Finished Create System Users.
Dec 06 05:42:03 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 06 05:42:03 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 06 05:42:03 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 06 05:42:03 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 06 05:42:03 localhost systemd[1]: Reached target Local File Systems.
Dec 06 05:42:03 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 06 05:42:03 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 06 05:42:03 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 06 05:42:03 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 06 05:42:03 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 06 05:42:03 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 06 05:42:03 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 06 05:42:03 localhost bootctl[693]: Couldn't find EFI system partition, skipping.
Dec 06 05:42:03 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 06 05:42:04 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 06 05:42:04 localhost systemd[1]: Starting Security Auditing Service...
Dec 06 05:42:04 localhost systemd[1]: Starting RPC Bind...
Dec 06 05:42:04 localhost auditd[698]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 06 05:42:04 localhost auditd[698]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 06 05:42:04 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 06 05:42:05 localhost augenrules[704]: /sbin/augenrules: No change
Dec 06 05:42:05 localhost systemd[1]: Started RPC Bind.
Dec 06 05:42:05 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 06 05:42:05 localhost augenrules[719]: No rules
Dec 06 05:42:05 localhost augenrules[719]: enabled 1
Dec 06 05:42:05 localhost augenrules[719]: failure 1
Dec 06 05:42:05 localhost augenrules[719]: pid 698
Dec 06 05:42:05 localhost augenrules[719]: rate_limit 0
Dec 06 05:42:05 localhost augenrules[719]: backlog_limit 8192
Dec 06 05:42:05 localhost augenrules[719]: lost 0
Dec 06 05:42:05 localhost augenrules[719]: backlog 0
Dec 06 05:42:05 localhost augenrules[719]: backlog_wait_time 60000
Dec 06 05:42:05 localhost augenrules[719]: backlog_wait_time_actual 0
Dec 06 05:42:05 localhost augenrules[719]: enabled 1
Dec 06 05:42:05 localhost augenrules[719]: failure 1
Dec 06 05:42:05 localhost augenrules[719]: pid 698
Dec 06 05:42:05 localhost augenrules[719]: rate_limit 0
Dec 06 05:42:05 localhost augenrules[719]: backlog_limit 8192
Dec 06 05:42:05 localhost augenrules[719]: lost 0
Dec 06 05:42:05 localhost augenrules[719]: backlog 0
Dec 06 05:42:05 localhost augenrules[719]: backlog_wait_time 60000
Dec 06 05:42:05 localhost augenrules[719]: backlog_wait_time_actual 0
Dec 06 05:42:05 localhost augenrules[719]: enabled 1
Dec 06 05:42:05 localhost augenrules[719]: failure 1
Dec 06 05:42:05 localhost augenrules[719]: pid 698
Dec 06 05:42:05 localhost augenrules[719]: rate_limit 0
Dec 06 05:42:05 localhost augenrules[719]: backlog_limit 8192
Dec 06 05:42:05 localhost augenrules[719]: lost 0
Dec 06 05:42:05 localhost augenrules[719]: backlog 0
Dec 06 05:42:05 localhost augenrules[719]: backlog_wait_time 60000
Dec 06 05:42:05 localhost augenrules[719]: backlog_wait_time_actual 0
Dec 06 05:42:05 localhost systemd[1]: Started Security Auditing Service.
Dec 06 05:42:05 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 06 05:42:05 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 06 05:42:07 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 06 05:42:07 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 06 05:42:07 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 06 05:42:07 localhost systemd[1]: Starting Update is Completed...
Dec 06 05:42:07 localhost systemd[1]: Finished Update is Completed.
Dec 06 05:42:07 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Dec 06 05:42:07 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 06 05:42:07 localhost systemd[1]: Reached target System Initialization.
Dec 06 05:42:07 localhost systemd[1]: Started dnf makecache --timer.
Dec 06 05:42:07 localhost systemd[1]: Started Daily rotation of log files.
Dec 06 05:42:07 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 06 05:42:07 localhost systemd[1]: Reached target Timer Units.
Dec 06 05:42:07 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 06 05:42:07 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 06 05:42:07 localhost systemd[1]: Reached target Socket Units.
Dec 06 05:42:07 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 06 05:42:07 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 06 05:42:07 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 06 05:42:07 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 06 05:42:07 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 06 05:42:07 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 06 05:42:07 localhost systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 05:42:07 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 06 05:42:07 localhost systemd[1]: Reached target Basic System.
Dec 06 05:42:07 localhost dbus-broker-lau[757]: Ready
Dec 06 05:42:07 localhost systemd[1]: Starting NTP client/server...
Dec 06 05:42:07 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 06 05:42:07 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 06 05:42:07 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 06 05:42:07 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 06 05:42:07 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 06 05:42:07 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 06 05:42:07 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 06 05:42:07 localhost systemd[1]: Started irqbalance daemon.
Dec 06 05:42:07 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 06 05:42:07 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 05:42:07 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 05:42:07 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 05:42:07 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 06 05:42:07 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 06 05:42:07 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 06 05:42:07 localhost chronyd[786]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 06 05:42:07 localhost chronyd[786]: Loaded 0 symmetric keys
Dec 06 05:42:07 localhost chronyd[786]: Using right/UTC timezone to obtain leap second data
Dec 06 05:42:07 localhost chronyd[786]: Loaded seccomp filter (level 2)
Dec 06 05:42:07 localhost systemd[1]: Starting User Login Management...
Dec 06 05:42:07 localhost systemd[1]: Started NTP client/server.
Dec 06 05:42:07 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 06 05:42:07 localhost systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 06 05:42:07 localhost systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 06 05:42:07 localhost systemd-logind[788]: New seat seat0.
Dec 06 05:42:07 localhost systemd[1]: Started User Login Management.
Dec 06 05:42:07 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 06 05:42:07 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 06 05:42:07 localhost kernel: kvm_amd: TSC scaling supported
Dec 06 05:42:07 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 06 05:42:07 localhost kernel: kvm_amd: Nested Paging enabled
Dec 06 05:42:07 localhost kernel: kvm_amd: LBR virtualization supported
Dec 06 05:42:07 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 06 05:42:07 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 06 05:42:07 localhost kernel: Console: switching to colour dummy device 80x25
Dec 06 05:42:07 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 06 05:42:07 localhost kernel: [drm] features: -context_init
Dec 06 05:42:07 localhost kernel: [drm] number of scanouts: 1
Dec 06 05:42:07 localhost kernel: [drm] number of cap sets: 0
Dec 06 05:42:07 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 06 05:42:07 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 06 05:42:07 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 06 05:42:07 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 06 05:42:07 localhost iptables.init[778]: iptables: Applying firewall rules: [  OK  ]
Dec 06 05:42:07 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 06 05:42:07 localhost cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 06 Dec 2025 05:42:07 +0000. Up 8.47 seconds.
Dec 06 05:42:08 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 06 05:42:08 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 06 05:42:08 localhost systemd[1]: run-cloud\x2dinit-tmp-tmplb51rjoq.mount: Deactivated successfully.
Dec 06 05:42:08 localhost systemd[1]: Starting Hostname Service...
Dec 06 05:42:08 localhost systemd[1]: Started Hostname Service.
Dec 06 05:42:08 np0005548730.novalocal systemd-hostnamed[855]: Hostname set to <np0005548730.novalocal> (static)
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Reached target Preparation for Network.
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Starting Network Manager...
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3639] NetworkManager (version 1.54.1-1.el9) is starting... (boot:51b7fc77-de53-449b-a623-3e379c4f8c52)
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3643] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3703] manager[0x55da7cb5a080]: monitoring kernel firmware directory '/lib/firmware'.
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3739] hostname: hostname: using hostnamed
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3739] hostname: static hostname changed from (none) to "np0005548730.novalocal"
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3742] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3876] manager[0x55da7cb5a080]: rfkill: Wi-Fi hardware radio set enabled
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3877] manager[0x55da7cb5a080]: rfkill: WWAN hardware radio set enabled
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3916] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3916] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3917] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3917] manager: Networking is enabled by state file
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3919] settings: Loaded settings plugin: keyfile (internal)
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3927] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3944] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3955] dhcp: init: Using DHCP client 'internal'
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3957] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3968] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3975] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3982] device (lo): Activation: starting connection 'lo' (db243730-cbc6-4646-8e89-af902689878a)
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3990] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.3993] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4047] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4051] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4054] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4056] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4059] device (eth0): carrier: link connected
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4062] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4069] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4077] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4082] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4083] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4085] manager: NetworkManager state is now CONNECTING
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4086] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4094] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4097] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4146] dhcp4 (eth0): state changed new lease, address=38.102.83.204
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4156] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4178] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Started Network Manager.
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Reached target Network.
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4399] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4402] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4409] device (lo): Activation: successful, device activated.
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4433] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4434] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4437] manager: NetworkManager state is now CONNECTED_SITE
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4440] device (eth0): Activation: successful, device activated.
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4444] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 06 05:42:08 np0005548730.novalocal NetworkManager[859]: <info>  [1764999728.4449] manager: startup complete
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Reached target NFS client services.
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Reached target Remote File Systems.
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 06 05:42:08 np0005548730.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 06 Dec 2025 05:42:09 +0000. Up 9.84 seconds.
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.204         | 255.255.255.0 | global | fa:16:3e:df:f7:f0 |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fedf:f7f0/64 |       .       |  link  | fa:16:3e:df:f7:f0 |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 06 05:42:09 np0005548730.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 06 05:42:13 np0005548730.novalocal chronyd[786]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Dec 06 05:42:13 np0005548730.novalocal chronyd[786]: System clock wrong by 1.085662 seconds
Dec 06 05:42:14 np0005548730.novalocal chronyd[786]: System clock was stepped by 1.085662 seconds
Dec 06 05:42:14 np0005548730.novalocal chronyd[786]: System clock TAI offset set to 37 seconds
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: Cannot change IRQ 35 affinity: Operation not permitted
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: IRQ 35 affinity is now unmanaged
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: Cannot change IRQ 33 affinity: Operation not permitted
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: IRQ 33 affinity is now unmanaged
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: IRQ 31 affinity is now unmanaged
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: IRQ 28 affinity is now unmanaged
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: Cannot change IRQ 34 affinity: Operation not permitted
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: IRQ 34 affinity is now unmanaged
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: IRQ 32 affinity is now unmanaged
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: IRQ 30 affinity is now unmanaged
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 06 05:42:18 np0005548730.novalocal irqbalance[783]: IRQ 29 affinity is now unmanaged
Dec 06 05:42:20 np0005548730.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 05:42:26 np0005548730.novalocal useradd[992]: new group: name=cloud-user, GID=1001
Dec 06 05:42:26 np0005548730.novalocal useradd[992]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 06 05:42:26 np0005548730.novalocal useradd[992]: add 'cloud-user' to group 'adm'
Dec 06 05:42:26 np0005548730.novalocal useradd[992]: add 'cloud-user' to group 'systemd-journal'
Dec 06 05:42:26 np0005548730.novalocal useradd[992]: add 'cloud-user' to shadow group 'adm'
Dec 06 05:42:26 np0005548730.novalocal useradd[992]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: Generating public/private rsa key pair.
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: The key fingerprint is:
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: SHA256:H0jeodJbAdECsB4jHZQWSv3SKWvCvmyPTIVz2wvzhts root@np0005548730.novalocal
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: The key's randomart image is:
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: +---[RSA 3072]----+
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |  .o=+..oo       |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: | . o+o  ...      |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |  o.=o ...o      |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |   +oo++ + o     |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: | .o ++. S +      |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |  o+oo . + .     |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: | ..o+.. . .      |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: | +o..=..         |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: | .=oooE          |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: +----[SHA256]-----+
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: Generating public/private ecdsa key pair.
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: The key fingerprint is:
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: SHA256:fCKjYTUQ8qA5qOPRG/CU4dC8r1imxYkcJwWtMU8XQ9U root@np0005548730.novalocal
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: The key's randomart image is:
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: +---[ECDSA 256]---+
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: | oB.+=o..        |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |.=oXoo.  E       |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |=.B++ o          |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |.==+ . o         |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |+.*+= o S .      |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |.+.Bo+ o o       |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: | .*.o            |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: | o .             |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |                 |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: +----[SHA256]-----+
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: Generating public/private ed25519 key pair.
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: The key fingerprint is:
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: SHA256:Yp58sB5Jc0U+yJJGzzK9UbhQOJywVthz+3CBHM93Bks root@np0005548730.novalocal
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: The key's randomart image is:
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: +--[ED25519 256]--+
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |    .=o=o+o E    |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |    .+O*=B.. o   |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |    o *=Bo*.o o  |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |   . . +++.o o   |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |      * S+       |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |     = O  .      |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |      B .        |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |     . o         |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: |      .          |
Dec 06 05:42:26 np0005548730.novalocal cloud-init[923]: +----[SHA256]-----+
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Reached target Network is Online.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Starting System Logging Service...
Dec 06 05:42:26 np0005548730.novalocal sm-notify[1009]: Version 2.5.4 starting
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Starting Permit User Sessions...
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Finished Permit User Sessions.
Dec 06 05:42:26 np0005548730.novalocal sshd[1011]: Server listening on 0.0.0.0 port 22.
Dec 06 05:42:26 np0005548730.novalocal sshd[1011]: Server listening on :: port 22.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Started Command Scheduler.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Started Getty on tty1.
Dec 06 05:42:26 np0005548730.novalocal crond[1015]: (CRON) STARTUP (1.5.7)
Dec 06 05:42:26 np0005548730.novalocal crond[1015]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 06 05:42:26 np0005548730.novalocal crond[1015]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 98% if used.)
Dec 06 05:42:26 np0005548730.novalocal crond[1015]: (CRON) INFO (running with inotify support)
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Reached target Login Prompts.
Dec 06 05:42:26 np0005548730.novalocal rsyslogd[1010]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1010" x-info="https://www.rsyslog.com"] start
Dec 06 05:42:26 np0005548730.novalocal rsyslogd[1010]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Started System Logging Service.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Reached target Multi-User System.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 06 05:42:26 np0005548730.novalocal sshd-session[1028]: Unable to negotiate with 38.102.83.114 port 42200: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 06 05:42:26 np0005548730.novalocal sshd-session[1041]: Unable to negotiate with 38.102.83.114 port 42220: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 06 05:42:26 np0005548730.novalocal sshd-session[1054]: Unable to negotiate with 38.102.83.114 port 42230: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 06 05:42:26 np0005548730.novalocal rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 05:42:26 np0005548730.novalocal sshd-session[1014]: Connection closed by 38.102.83.114 port 42184 [preauth]
Dec 06 05:42:26 np0005548730.novalocal sshd-session[1082]: Unable to negotiate with 38.102.83.114 port 42262: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 06 05:42:26 np0005548730.novalocal sshd-session[1034]: Connection closed by 38.102.83.114 port 42206 [preauth]
Dec 06 05:42:26 np0005548730.novalocal kdumpctl[1024]: kdump: No kdump initial ramdisk found.
Dec 06 05:42:26 np0005548730.novalocal kdumpctl[1024]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec 06 05:42:26 np0005548730.novalocal sshd-session[1086]: Unable to negotiate with 38.102.83.114 port 42278: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 06 05:42:26 np0005548730.novalocal sshd-session[1061]: Connection closed by 38.102.83.114 port 42246 [preauth]
Dec 06 05:42:26 np0005548730.novalocal sshd-session[1079]: Connection closed by 38.102.83.114 port 42260 [preauth]
Dec 06 05:42:26 np0005548730.novalocal cloud-init[1162]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 06 Dec 2025 05:42:26 +0000. Up 26.41 seconds.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 06 05:42:26 np0005548730.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 06 05:42:27 np0005548730.novalocal dracut[1288]: dracut-057-102.git20250818.el9
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec 06 05:42:27 np0005548730.novalocal cloud-init[1327]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 06 Dec 2025 05:42:27 +0000. Up 26.81 seconds.
Dec 06 05:42:27 np0005548730.novalocal cloud-init[1360]: #############################################################
Dec 06 05:42:27 np0005548730.novalocal cloud-init[1361]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 06 05:42:27 np0005548730.novalocal cloud-init[1364]: 256 SHA256:fCKjYTUQ8qA5qOPRG/CU4dC8r1imxYkcJwWtMU8XQ9U root@np0005548730.novalocal (ECDSA)
Dec 06 05:42:27 np0005548730.novalocal cloud-init[1368]: 256 SHA256:Yp58sB5Jc0U+yJJGzzK9UbhQOJywVthz+3CBHM93Bks root@np0005548730.novalocal (ED25519)
Dec 06 05:42:27 np0005548730.novalocal cloud-init[1370]: 3072 SHA256:H0jeodJbAdECsB4jHZQWSv3SKWvCvmyPTIVz2wvzhts root@np0005548730.novalocal (RSA)
Dec 06 05:42:27 np0005548730.novalocal cloud-init[1371]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 06 05:42:27 np0005548730.novalocal cloud-init[1374]: #############################################################
Dec 06 05:42:27 np0005548730.novalocal cloud-init[1327]: Cloud-init v. 24.4-7.el9 finished at Sat, 06 Dec 2025 05:42:27 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 27.01 seconds
Dec 06 05:42:27 np0005548730.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 06 05:42:27 np0005548730.novalocal systemd[1]: Reached target Cloud-init target.
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 06 05:42:27 np0005548730.novalocal dracut[1290]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: memstrack is not available
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: memstrack is not available
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: *** Including module: systemd ***
Dec 06 05:42:28 np0005548730.novalocal dracut[1290]: *** Including module: fips ***
Dec 06 05:42:29 np0005548730.novalocal dracut[1290]: *** Including module: systemd-initrd ***
Dec 06 05:42:29 np0005548730.novalocal dracut[1290]: *** Including module: i18n ***
Dec 06 05:42:29 np0005548730.novalocal dracut[1290]: *** Including module: drm ***
Dec 06 05:42:29 np0005548730.novalocal dracut[1290]: *** Including module: prefixdevname ***
Dec 06 05:42:29 np0005548730.novalocal dracut[1290]: *** Including module: kernel-modules ***
Dec 06 05:42:29 np0005548730.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 06 05:42:30 np0005548730.novalocal dracut[1290]: *** Including module: kernel-modules-extra ***
Dec 06 05:42:30 np0005548730.novalocal dracut[1290]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 06 05:42:30 np0005548730.novalocal dracut[1290]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 06 05:42:30 np0005548730.novalocal dracut[1290]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 06 05:42:30 np0005548730.novalocal dracut[1290]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 06 05:42:30 np0005548730.novalocal dracut[1290]: *** Including module: qemu ***
Dec 06 05:42:30 np0005548730.novalocal dracut[1290]: *** Including module: fstab-sys ***
Dec 06 05:42:30 np0005548730.novalocal dracut[1290]: *** Including module: rootfs-block ***
Dec 06 05:42:30 np0005548730.novalocal dracut[1290]: *** Including module: terminfo ***
Dec 06 05:42:30 np0005548730.novalocal dracut[1290]: *** Including module: udev-rules ***
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]: Skipping udev rule: 91-permissions.rules
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]: *** Including module: virtiofs ***
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]: *** Including module: dracut-systemd ***
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]: *** Including module: usrmount ***
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]: *** Including module: base ***
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]: *** Including module: fs-lib ***
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]: *** Including module: kdumpbase ***
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]:   microcode_ctl module: mangling fw_dir
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]:     microcode_ctl: configuration "intel" is ignored
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 06 05:42:31 np0005548730.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]: *** Including module: openssl ***
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]: *** Including module: shutdown ***
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]: *** Including module: squash ***
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]: *** Including modules done ***
Dec 06 05:42:32 np0005548730.novalocal dracut[1290]: *** Installing kernel module dependencies ***
Dec 06 05:42:33 np0005548730.novalocal dracut[1290]: *** Installing kernel module dependencies done ***
Dec 06 05:42:33 np0005548730.novalocal dracut[1290]: *** Resolving executable dependencies ***
Dec 06 05:42:33 np0005548730.novalocal sshd-session[2297]: Received disconnect from 45.78.222.195 port 49996:11: Bye Bye [preauth]
Dec 06 05:42:33 np0005548730.novalocal sshd-session[2297]: Disconnected from authenticating user root 45.78.222.195 port 49996 [preauth]
Dec 06 05:42:34 np0005548730.novalocal dracut[1290]: *** Resolving executable dependencies done ***
Dec 06 05:42:34 np0005548730.novalocal dracut[1290]: *** Generating early-microcode cpio image ***
Dec 06 05:42:34 np0005548730.novalocal dracut[1290]: *** Store current command line parameters ***
Dec 06 05:42:34 np0005548730.novalocal dracut[1290]: Stored kernel commandline:
Dec 06 05:42:34 np0005548730.novalocal dracut[1290]: No dracut internal kernel commandline stored in the initramfs
Dec 06 05:42:37 np0005548730.novalocal dracut[1290]: *** Install squash loader ***
Dec 06 05:42:38 np0005548730.novalocal dracut[1290]: *** Squashing the files inside the initramfs ***
Dec 06 05:42:39 np0005548730.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: *** Squashing the files inside the initramfs done ***
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: *** Hardlinking files ***
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: Mode:           real
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: Files:          50
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: Linked:         0 files
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: Compared:       0 xattrs
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: Compared:       0 files
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: Saved:          0 B
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: Duration:       0.000502 seconds
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: *** Hardlinking files done ***
Dec 06 05:42:40 np0005548730.novalocal dracut[1290]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec 06 05:42:42 np0005548730.novalocal kdumpctl[1024]: kdump: kexec: loaded kdump kernel
Dec 06 05:42:42 np0005548730.novalocal kdumpctl[1024]: kdump: Starting kdump: [OK]
Dec 06 05:42:42 np0005548730.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 06 05:42:42 np0005548730.novalocal systemd[1]: Startup finished in 1.542s (kernel) + 2.342s (initrd) + 37.888s (userspace) = 41.773s.
Dec 06 05:42:43 np0005548730.novalocal sshd-session[4303]: Received disconnect from 43.252.229.25 port 49156:11: Bye Bye [preauth]
Dec 06 05:42:43 np0005548730.novalocal sshd-session[4303]: Disconnected from authenticating user root 43.252.229.25 port 49156 [preauth]
Dec 06 05:42:59 np0005548730.novalocal sshd-session[4305]: Accepted publickey for zuul from 38.102.83.114 port 40350 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 06 05:42:59 np0005548730.novalocal systemd-logind[788]: New session 1 of user zuul.
Dec 06 05:42:59 np0005548730.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 06 05:42:59 np0005548730.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 06 05:42:59 np0005548730.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 06 05:42:59 np0005548730.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Queued start job for default target Main User Target.
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Created slice User Application Slice.
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Reached target Paths.
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Reached target Timers.
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Starting D-Bus User Message Bus Socket...
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Starting Create User's Volatile Files and Directories...
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Listening on D-Bus User Message Bus Socket.
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Reached target Sockets.
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Finished Create User's Volatile Files and Directories.
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Reached target Basic System.
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Reached target Main User Target.
Dec 06 05:42:59 np0005548730.novalocal systemd[4309]: Startup finished in 103ms.
Dec 06 05:42:59 np0005548730.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 06 05:42:59 np0005548730.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 06 05:42:59 np0005548730.novalocal sshd-session[4305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 05:43:01 np0005548730.novalocal python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 05:43:04 np0005548730.novalocal sshd-session[4396]: Received disconnect from 103.98.176.164 port 52122:11: Bye Bye [preauth]
Dec 06 05:43:04 np0005548730.novalocal sshd-session[4396]: Disconnected from authenticating user root 103.98.176.164 port 52122 [preauth]
Dec 06 05:43:10 np0005548730.novalocal sshd-session[4398]: Received disconnect from 38.105.25.194 port 33484:11: Bye Bye [preauth]
Dec 06 05:43:10 np0005548730.novalocal sshd-session[4398]: Disconnected from authenticating user root 38.105.25.194 port 33484 [preauth]
Dec 06 05:43:14 np0005548730.novalocal python3[4425]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 05:43:21 np0005548730.novalocal python3[4483]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 05:43:22 np0005548730.novalocal python3[4523]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 06 05:43:25 np0005548730.novalocal python3[4549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqdoyMw63SttI9iIXXKReZrn7TtV6srqHUtRSMkVNLk+dttRn/ilDYbFQHqzJ/xo15je18vaR7Bv7vbkUWqoBvXv2qGlsSjPC8W+dR3ltNvw5kJVACfyidVN58/yDoI48cYWeA9FNh/mxTYsU+lJxB8yKvctjItgaTD57DHx0/1ZJwntbGd1eTTg2+WCcTHiojfKmh1KhZm+lPGhLU3lPYya1xkVQLbgRIYnzXs5f2uKHRgoP7Rje0ttuq/2/FepxsVGWoB5DY6o7A0KrVtSPjJSA16aQO+X8om8LuAjPea4s5dF3x0mFmyEg6pCqgXrtGHUTxGDTcrOAaFB8COX0bxRJ84pfPcT9UeJN29I9iIrqrNo+IPgx+m02+DjoqYXWQ5Ri7hJNdlCVXBBJOHCLOeDCnjQole1OSsjiV6vnzBMpreymud5D4dPMOHC3WHrXXsect8s9TUQFtFTQcn3YBnqyNniUqfq5GyL9PMxw+QmtE5vsrAraDXvXeGqjun0E= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:31 np0005548730.novalocal python3[4573]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:31 np0005548730.novalocal python3[4672]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 05:43:31 np0005548730.novalocal python3[4743]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764999811.2244575-252-59821104459126/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=18bb717f21a643ea9e23f475ff82c0e9_id_rsa follow=False checksum=c804b4c6be78ad90794c8d3b586f074243741373 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:33 np0005548730.novalocal python3[4866]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 05:43:33 np0005548730.novalocal sshd-session[4867]: Received disconnect from 194.107.115.2 port 12576:11: Bye Bye [preauth]
Dec 06 05:43:33 np0005548730.novalocal sshd-session[4867]: Disconnected from authenticating user root 194.107.115.2 port 12576 [preauth]
Dec 06 05:43:33 np0005548730.novalocal python3[4939]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764999812.1892695-307-174357342152857/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=18bb717f21a643ea9e23f475ff82c0e9_id_rsa.pub follow=False checksum=167f5c866ddcddcef28b6690fafd0cb8b4c5cda9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:35 np0005548730.novalocal python3[4987]: ansible-ping Invoked with data=pong
Dec 06 05:43:36 np0005548730.novalocal python3[5011]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 05:43:38 np0005548730.novalocal python3[5069]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 06 05:43:39 np0005548730.novalocal python3[5101]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:40 np0005548730.novalocal python3[5125]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:40 np0005548730.novalocal python3[5149]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:41 np0005548730.novalocal python3[5173]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:41 np0005548730.novalocal python3[5197]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:42 np0005548730.novalocal python3[5221]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:43 np0005548730.novalocal sudo[5245]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpefhkchyhpatbrmfkoyeogevfsltvzp ; /usr/bin/python3'
Dec 06 05:43:43 np0005548730.novalocal sudo[5245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:43:43 np0005548730.novalocal python3[5247]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:43 np0005548730.novalocal sudo[5245]: pam_unix(sudo:session): session closed for user root
Dec 06 05:43:44 np0005548730.novalocal sudo[5323]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyupzknvsivpuspahfyrtabibumxcikt ; /usr/bin/python3'
Dec 06 05:43:44 np0005548730.novalocal sudo[5323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:43:44 np0005548730.novalocal python3[5325]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 05:43:44 np0005548730.novalocal sudo[5323]: pam_unix(sudo:session): session closed for user root
Dec 06 05:43:44 np0005548730.novalocal sudo[5396]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucicdvxdvzblbifckccdaptjkznywbcs ; /usr/bin/python3'
Dec 06 05:43:44 np0005548730.novalocal sudo[5396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:43:46 np0005548730.novalocal python3[5398]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764999823.905958-32-156022682058664/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:46 np0005548730.novalocal sudo[5396]: pam_unix(sudo:session): session closed for user root
Dec 06 05:43:46 np0005548730.novalocal python3[5446]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:47 np0005548730.novalocal python3[5470]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:47 np0005548730.novalocal python3[5494]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:47 np0005548730.novalocal python3[5518]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:47 np0005548730.novalocal python3[5542]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:48 np0005548730.novalocal python3[5566]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:48 np0005548730.novalocal python3[5590]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:48 np0005548730.novalocal python3[5614]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:48 np0005548730.novalocal python3[5638]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:49 np0005548730.novalocal python3[5662]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:49 np0005548730.novalocal python3[5686]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:50 np0005548730.novalocal python3[5710]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:50 np0005548730.novalocal python3[5734]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:50 np0005548730.novalocal python3[5758]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:50 np0005548730.novalocal python3[5782]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:51 np0005548730.novalocal python3[5806]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:51 np0005548730.novalocal python3[5830]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:51 np0005548730.novalocal python3[5854]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:52 np0005548730.novalocal python3[5878]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:52 np0005548730.novalocal python3[5902]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:52 np0005548730.novalocal python3[5926]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:52 np0005548730.novalocal python3[5950]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:53 np0005548730.novalocal python3[5974]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:53 np0005548730.novalocal python3[5998]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:53 np0005548730.novalocal python3[6022]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:54 np0005548730.novalocal python3[6048]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:43:54 np0005548730.novalocal sshd-session[6023]: Received disconnect from 43.252.229.25 port 47236:11: Bye Bye [preauth]
Dec 06 05:43:54 np0005548730.novalocal sshd-session[6023]: Disconnected from authenticating user root 43.252.229.25 port 47236 [preauth]
Dec 06 05:43:58 np0005548730.novalocal sudo[6072]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xstujxixywrzgdythjffznjnogdrlqbk ; /usr/bin/python3'
Dec 06 05:43:58 np0005548730.novalocal sudo[6072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:43:58 np0005548730.novalocal python3[6074]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 06 05:43:58 np0005548730.novalocal systemd[1]: Starting Time & Date Service...
Dec 06 05:43:58 np0005548730.novalocal systemd[1]: Started Time & Date Service.
Dec 06 05:43:58 np0005548730.novalocal systemd-timedated[6076]: Changed time zone to 'UTC' (UTC).
Dec 06 05:43:58 np0005548730.novalocal sudo[6072]: pam_unix(sudo:session): session closed for user root
Dec 06 05:43:58 np0005548730.novalocal sudo[6103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxaazuzvjuropdibcqtokohfwbuydzrf ; /usr/bin/python3'
Dec 06 05:43:58 np0005548730.novalocal sudo[6103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:43:58 np0005548730.novalocal python3[6105]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:43:58 np0005548730.novalocal sudo[6103]: pam_unix(sudo:session): session closed for user root
Dec 06 05:44:00 np0005548730.novalocal python3[6181]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 05:44:00 np0005548730.novalocal python3[6252]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764999839.717964-253-46419613315968/source _original_basename=tmpi8s29whb follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:44:00 np0005548730.novalocal python3[6352]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 05:44:01 np0005548730.novalocal python3[6423]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764999840.714831-302-259777807905461/source _original_basename=tmp2p4h3clw follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:44:02 np0005548730.novalocal sudo[6523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jefzlfzygfjopgmjfnvtzwvdjnysxbpi ; /usr/bin/python3'
Dec 06 05:44:02 np0005548730.novalocal sudo[6523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:44:03 np0005548730.novalocal python3[6525]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 05:44:03 np0005548730.novalocal sudo[6523]: pam_unix(sudo:session): session closed for user root
Dec 06 05:44:03 np0005548730.novalocal sudo[6596]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhbzndxmtwwsifntdoevqcwzeurfgyep ; /usr/bin/python3'
Dec 06 05:44:03 np0005548730.novalocal sudo[6596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:44:03 np0005548730.novalocal python3[6598]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764999842.8153706-382-111336076607193/source _original_basename=tmplygn8km_ follow=False checksum=01954034105cdb65b42722894a5c1036808c70c7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:44:03 np0005548730.novalocal sudo[6596]: pam_unix(sudo:session): session closed for user root
Dec 06 05:44:04 np0005548730.novalocal python3[6646]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:44:04 np0005548730.novalocal python3[6672]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:44:04 np0005548730.novalocal sudo[6750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tymugzzckqhhapbqapczmrjquunblyce ; /usr/bin/python3'
Dec 06 05:44:04 np0005548730.novalocal sudo[6750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:44:04 np0005548730.novalocal python3[6752]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 05:44:04 np0005548730.novalocal sudo[6750]: pam_unix(sudo:session): session closed for user root
Dec 06 05:44:05 np0005548730.novalocal sudo[6823]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuwbkxffmgnveqcsokcxlucyujlyjzde ; /usr/bin/python3'
Dec 06 05:44:05 np0005548730.novalocal sudo[6823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:44:05 np0005548730.novalocal python3[6825]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764999844.7098122-453-226338261639713/source _original_basename=tmpz5jh0vro follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:44:05 np0005548730.novalocal sudo[6823]: pam_unix(sudo:session): session closed for user root
Dec 06 05:44:05 np0005548730.novalocal sudo[6874]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwnygrytzuybphmklxtyakqufqjndces ; /usr/bin/python3'
Dec 06 05:44:05 np0005548730.novalocal sudo[6874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:44:05 np0005548730.novalocal python3[6876]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-fae9-4e7d-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:44:05 np0005548730.novalocal sudo[6874]: pam_unix(sudo:session): session closed for user root
Dec 06 05:44:07 np0005548730.novalocal python3[6904]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-fae9-4e7d-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 06 05:44:08 np0005548730.novalocal python3[6933]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:44:19 np0005548730.novalocal sshd-session[6934]: Received disconnect from 38.105.25.194 port 57906:11: Bye Bye [preauth]
Dec 06 05:44:19 np0005548730.novalocal sshd-session[6934]: Disconnected from authenticating user root 38.105.25.194 port 57906 [preauth]
Dec 06 05:44:22 np0005548730.novalocal sshd-session[6936]: Received disconnect from 103.98.176.164 port 49812:11: Bye Bye [preauth]
Dec 06 05:44:22 np0005548730.novalocal sshd-session[6936]: Disconnected from authenticating user root 103.98.176.164 port 49812 [preauth]
Dec 06 05:44:28 np0005548730.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 05:44:32 np0005548730.novalocal sudo[6963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huxjuswpazyzjjjmpljatondtbifnnvz ; /usr/bin/python3'
Dec 06 05:44:32 np0005548730.novalocal sudo[6963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:44:32 np0005548730.novalocal python3[6965]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:44:32 np0005548730.novalocal sudo[6963]: pam_unix(sudo:session): session closed for user root
Dec 06 05:44:48 np0005548730.novalocal sshd-session[6966]: Received disconnect from 194.107.115.2 port 39422:11: Bye Bye [preauth]
Dec 06 05:44:48 np0005548730.novalocal sshd-session[6966]: Disconnected from authenticating user root 194.107.115.2 port 39422 [preauth]
Dec 06 05:45:10 np0005548730.novalocal sshd[1011]: Timeout before authentication for connection from 45.78.201.60 to 38.102.83.204, pid = 4400
Dec 06 05:45:16 np0005548730.novalocal sshd-session[6968]: Received disconnect from 43.252.229.25 port 45328:11: Bye Bye [preauth]
Dec 06 05:45:16 np0005548730.novalocal sshd-session[6968]: Disconnected from authenticating user root 43.252.229.25 port 45328 [preauth]
Dec 06 05:45:26 np0005548730.novalocal systemd[4309]: Starting Mark boot as successful...
Dec 06 05:45:26 np0005548730.novalocal systemd[4309]: Finished Mark boot as successful.
Dec 06 05:45:29 np0005548730.novalocal chronyd[786]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Dec 06 05:45:31 np0005548730.novalocal sshd-session[6971]: Received disconnect from 38.105.25.194 port 51548:11: Bye Bye [preauth]
Dec 06 05:45:31 np0005548730.novalocal sshd-session[6971]: Disconnected from authenticating user root 38.105.25.194 port 51548 [preauth]
Dec 06 05:45:33 np0005548730.novalocal sshd-session[4318]: Received disconnect from 38.102.83.114 port 40350:11: disconnected by user
Dec 06 05:45:33 np0005548730.novalocal sshd-session[4318]: Disconnected from user zuul 38.102.83.114 port 40350
Dec 06 05:45:33 np0005548730.novalocal sshd-session[4305]: pam_unix(sshd:session): session closed for user zuul
Dec 06 05:45:33 np0005548730.novalocal systemd-logind[788]: Session 1 logged out. Waiting for processes to exit.
Dec 06 05:45:34 np0005548730.novalocal sshd[1011]: drop connection #0 from [45.78.201.60]:60306 on [38.102.83.204]:22 penalty: exceeded LoginGraceTime
Dec 06 05:45:39 np0005548730.novalocal sshd-session[6973]: Received disconnect from 103.98.176.164 port 55730:11: Bye Bye [preauth]
Dec 06 05:45:39 np0005548730.novalocal sshd-session[6973]: Disconnected from authenticating user root 103.98.176.164 port 55730 [preauth]
Dec 06 05:45:39 np0005548730.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 06 05:45:39 np0005548730.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 06 05:45:39 np0005548730.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 06 05:45:39 np0005548730.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 06 05:45:39 np0005548730.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 06 05:45:39 np0005548730.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 06 05:45:39 np0005548730.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 06 05:45:39 np0005548730.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 06 05:45:39 np0005548730.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 06 05:45:39 np0005548730.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 06 05:45:39 np0005548730.novalocal NetworkManager[859]: <info>  [1764999939.8376] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 06 05:45:39 np0005548730.novalocal systemd-udevd[6976]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 05:45:39 np0005548730.novalocal NetworkManager[859]: <info>  [1764999939.8543] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 05:45:39 np0005548730.novalocal NetworkManager[859]: <info>  [1764999939.8569] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 06 05:45:39 np0005548730.novalocal NetworkManager[859]: <info>  [1764999939.8581] device (eth1): carrier: link connected
Dec 06 05:45:39 np0005548730.novalocal NetworkManager[859]: <info>  [1764999939.8583] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 06 05:45:39 np0005548730.novalocal NetworkManager[859]: <info>  [1764999939.8589] policy: auto-activating connection 'Wired connection 1' (8f9bc9fc-8e7d-33b0-a6ce-5a8609487289)
Dec 06 05:45:39 np0005548730.novalocal NetworkManager[859]: <info>  [1764999939.8594] device (eth1): Activation: starting connection 'Wired connection 1' (8f9bc9fc-8e7d-33b0-a6ce-5a8609487289)
Dec 06 05:45:39 np0005548730.novalocal NetworkManager[859]: <info>  [1764999939.8595] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 05:45:39 np0005548730.novalocal NetworkManager[859]: <info>  [1764999939.8597] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 05:45:39 np0005548730.novalocal NetworkManager[859]: <info>  [1764999939.8600] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 05:45:39 np0005548730.novalocal NetworkManager[859]: <info>  [1764999939.8603] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 06 05:45:40 np0005548730.novalocal sshd-session[6979]: Accepted publickey for zuul from 38.102.83.114 port 33446 ssh2: RSA SHA256:aHyVcRaDK3hfZLzaCXAUf9WeLucbkCfDdQjLc4/bZwE
Dec 06 05:45:40 np0005548730.novalocal systemd-logind[788]: New session 3 of user zuul.
Dec 06 05:45:40 np0005548730.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 06 05:45:40 np0005548730.novalocal sshd-session[6979]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 05:45:40 np0005548730.novalocal python3[7006]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-0d69-987a-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:45:50 np0005548730.novalocal sudo[7084]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgiwyfskkgsbklufrvpjxngaknqsjcse ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 06 05:45:50 np0005548730.novalocal sudo[7084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:45:50 np0005548730.novalocal python3[7086]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 05:45:50 np0005548730.novalocal sudo[7084]: pam_unix(sudo:session): session closed for user root
Dec 06 05:45:51 np0005548730.novalocal sudo[7157]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkkrvaophhxympxibajhegwjmnfegvhg ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 06 05:45:51 np0005548730.novalocal sudo[7157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:45:51 np0005548730.novalocal python3[7159]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764999950.679098-155-2524117487780/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=fddfa7124c69688574afd59002dfcf8204902fba backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:45:51 np0005548730.novalocal sudo[7157]: pam_unix(sudo:session): session closed for user root
Dec 06 05:45:51 np0005548730.novalocal sudo[7207]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezyhotoefqeachyhujplxxagoencmgjv ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 06 05:45:51 np0005548730.novalocal sudo[7207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:45:51 np0005548730.novalocal python3[7209]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 05:45:51 np0005548730.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 06 05:45:51 np0005548730.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 06 05:45:51 np0005548730.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 06 05:45:51 np0005548730.novalocal systemd[1]: Stopping Network Manager...
Dec 06 05:45:51 np0005548730.novalocal NetworkManager[859]: <info>  [1764999951.9709] caught SIGTERM, shutting down normally.
Dec 06 05:45:51 np0005548730.novalocal NetworkManager[859]: <info>  [1764999951.9723] dhcp4 (eth0): canceled DHCP transaction
Dec 06 05:45:51 np0005548730.novalocal NetworkManager[859]: <info>  [1764999951.9723] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 05:45:51 np0005548730.novalocal NetworkManager[859]: <info>  [1764999951.9723] dhcp4 (eth0): state changed no lease
Dec 06 05:45:51 np0005548730.novalocal NetworkManager[859]: <info>  [1764999951.9727] manager: NetworkManager state is now CONNECTING
Dec 06 05:45:51 np0005548730.novalocal NetworkManager[859]: <info>  [1764999951.9924] dhcp4 (eth1): canceled DHCP transaction
Dec 06 05:45:51 np0005548730.novalocal NetworkManager[859]: <info>  [1764999951.9925] dhcp4 (eth1): state changed no lease
Dec 06 05:45:51 np0005548730.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 05:45:52 np0005548730.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[859]: <info>  [1764999952.2634] exiting (success)
Dec 06 05:45:52 np0005548730.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 06 05:45:52 np0005548730.novalocal systemd[1]: Stopped Network Manager.
Dec 06 05:45:52 np0005548730.novalocal systemd[1]: NetworkManager.service: Consumed 1.731s CPU time, 10.2M memory peak.
Dec 06 05:45:52 np0005548730.novalocal systemd[1]: Starting Network Manager...
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.3207] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:51b7fc77-de53-449b-a623-3e379c4f8c52)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.3208] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.3258] manager[0x559f6eb01070]: monitoring kernel firmware directory '/lib/firmware'.
Dec 06 05:45:52 np0005548730.novalocal systemd[1]: Starting Hostname Service...
Dec 06 05:45:52 np0005548730.novalocal systemd[1]: Started Hostname Service.
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4007] hostname: hostname: using hostnamed
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4008] hostname: static hostname changed from (none) to "np0005548730.novalocal"
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4012] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4016] manager[0x559f6eb01070]: rfkill: Wi-Fi hardware radio set enabled
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4016] manager[0x559f6eb01070]: rfkill: WWAN hardware radio set enabled
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4040] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4041] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4041] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4041] manager: Networking is enabled by state file
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4043] settings: Loaded settings plugin: keyfile (internal)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4054] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4085] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4094] dhcp: init: Using DHCP client 'internal'
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4097] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4102] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4107] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4115] device (lo): Activation: starting connection 'lo' (db243730-cbc6-4646-8e89-af902689878a)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4122] device (eth0): carrier: link connected
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4125] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4130] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4130] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4136] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4141] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4146] device (eth1): carrier: link connected
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4149] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4152] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (8f9bc9fc-8e7d-33b0-a6ce-5a8609487289) (indicated)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4152] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4156] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4160] device (eth1): Activation: starting connection 'Wired connection 1' (8f9bc9fc-8e7d-33b0-a6ce-5a8609487289)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4164] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 06 05:45:52 np0005548730.novalocal systemd[1]: Started Network Manager.
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4167] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4169] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4170] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4171] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4173] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4177] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4179] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4181] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4190] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4192] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4207] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4216] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4239] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4245] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4253] device (lo): Activation: successful, device activated.
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4261] dhcp4 (eth0): state changed new lease, address=38.102.83.204
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.4269] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 06 05:45:52 np0005548730.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 06 05:45:52 np0005548730.novalocal sudo[7207]: pam_unix(sudo:session): session closed for user root
Dec 06 05:45:52 np0005548730.novalocal python3[7275]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-0d69-987a-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.8518] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.8938] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.8940] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.8944] manager: NetworkManager state is now CONNECTED_SITE
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.8947] device (eth0): Activation: successful, device activated.
Dec 06 05:45:52 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999952.8952] manager: NetworkManager state is now CONNECTED_GLOBAL
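
The restart above replaces NetworkManager PID 859 with PID 7226 and was driven by the ansible systemd task at 05:45:51. A minimal shell equivalent, assuming direct root access to the host rather than Ansible:

    systemctl restart NetworkManager
    nmcli general status    # should progress to "connected (site)" and then "connected"

The new instance also warns that the ifcfg-rh settings plugin is deprecated; NetworkManager's own suggested fix, "nmcli connection migrate", rewrites /etc/sysconfig/network-scripts profiles into keyfile format.
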
Dec 06 05:46:01 np0005548730.novalocal sshd-session[7297]: Received disconnect from 194.107.115.2 port 9758:11: Bye Bye [preauth]
Dec 06 05:46:01 np0005548730.novalocal sshd-session[7297]: Disconnected from authenticating user root 194.107.115.2 port 9758 [preauth]
Dec 06 05:46:02 np0005548730.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 05:46:22 np0005548730.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 05:46:27 np0005548730.novalocal sshd-session[7301]: Received disconnect from 43.252.229.25 port 43426:11: Bye Bye [preauth]
Dec 06 05:46:27 np0005548730.novalocal sshd-session[7301]: Disconnected from authenticating user root 43.252.229.25 port 43426 [preauth]
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4295] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 06 05:46:37 np0005548730.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 05:46:37 np0005548730.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4635] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4638] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4645] device (eth1): Activation: successful, device activated.
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4651] manager: startup complete
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4652] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <warn>  [1764999997.4657] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4663] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 06 05:46:37 np0005548730.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4795] dhcp4 (eth1): canceled DHCP transaction
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4796] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4796] dhcp4 (eth1): state changed no lease
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4810] policy: auto-activating connection 'ci-private-network' (a71610cf-c796-5ff3-8929-fe296ca655c2)
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4814] device (eth1): Activation: starting connection 'ci-private-network' (a71610cf-c796-5ff3-8929-fe296ca655c2)
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4815] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4817] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4826] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4833] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4879] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4881] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 05:46:37 np0005548730.novalocal NetworkManager[7226]: <info>  [1764999997.4888] device (eth1): Activation: successful, device activated.
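
eth1 never obtained a DHCP lease under the assumed 'Wired connection 1' profile, so once startup completed NetworkManager failed that activation (reason 'ip-config-unavailable') and auto-activated 'ci-private-network' instead, which comes up immediately and apparently without DHCP (no dhcp4 transaction is logged for it). A sketch of how one might verify the result by hand, assuming nmcli access on the host:

    nmcli device status                      # eth1 should show 'ci-private-network'
    nmcli -f GENERAL,IP4 device show eth1    # inspect the resulting IPv4 config
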
Dec 06 05:46:39 np0005548730.novalocal sshd-session[7326]: Received disconnect from 38.105.25.194 port 50332:11: Bye Bye [preauth]
Dec 06 05:46:39 np0005548730.novalocal sshd-session[7326]: Disconnected from authenticating user root 38.105.25.194 port 50332 [preauth]
Dec 06 05:46:47 np0005548730.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 05:46:52 np0005548730.novalocal sshd-session[6982]: Received disconnect from 38.102.83.114 port 33446:11: disconnected by user
Dec 06 05:46:52 np0005548730.novalocal sshd-session[6982]: Disconnected from user zuul 38.102.83.114 port 33446
Dec 06 05:46:52 np0005548730.novalocal sshd-session[6979]: pam_unix(sshd:session): session closed for user zuul
Dec 06 05:46:52 np0005548730.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 06 05:46:52 np0005548730.novalocal systemd[1]: session-3.scope: Consumed 1.541s CPU time.
Dec 06 05:46:52 np0005548730.novalocal systemd-logind[788]: Session 3 logged out. Waiting for processes to exit.
Dec 06 05:46:52 np0005548730.novalocal systemd-logind[788]: Removed session 3.
Dec 06 05:46:56 np0005548730.novalocal sshd-session[7328]: Received disconnect from 103.98.176.164 port 48638:11: Bye Bye [preauth]
Dec 06 05:46:56 np0005548730.novalocal sshd-session[7328]: Disconnected from authenticating user root 103.98.176.164 port 48638 [preauth]
Dec 06 05:47:16 np0005548730.novalocal sshd-session[7330]: Received disconnect from 194.107.115.2 port 36604:11: Bye Bye [preauth]
Dec 06 05:47:16 np0005548730.novalocal sshd-session[7330]: Disconnected from authenticating user root 194.107.115.2 port 36604 [preauth]
Dec 06 05:47:22 np0005548730.novalocal sshd-session[7332]: Received disconnect from 188.166.104.136 port 34122:11: Bye Bye [preauth]
Dec 06 05:47:22 np0005548730.novalocal sshd-session[7332]: Disconnected from authenticating user root 188.166.104.136 port 34122 [preauth]
Dec 06 05:47:31 np0005548730.novalocal sshd-session[7334]: Accepted publickey for zuul from 38.102.83.114 port 50452 ssh2: RSA SHA256:aHyVcRaDK3hfZLzaCXAUf9WeLucbkCfDdQjLc4/bZwE
Dec 06 05:47:31 np0005548730.novalocal systemd-logind[788]: New session 4 of user zuul.
Dec 06 05:47:31 np0005548730.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 06 05:47:31 np0005548730.novalocal sshd-session[7334]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 05:47:31 np0005548730.novalocal sudo[7413]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbwiderxhjtpezihzhxcebytqonsyupm ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 06 05:47:31 np0005548730.novalocal sudo[7413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:47:31 np0005548730.novalocal python3[7415]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 05:47:31 np0005548730.novalocal sudo[7413]: pam_unix(sudo:session): session closed for user root
Dec 06 05:47:31 np0005548730.novalocal sudo[7486]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrcdjqqkpkdaqqqjoasocoipaxjindcm ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 06 05:47:31 np0005548730.novalocal sudo[7486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:47:31 np0005548730.novalocal python3[7488]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765000051.27744-373-1216440777277/source _original_basename=tmpiwq_kq3r follow=False checksum=9aebb8635d8af9bb3da7e243651834426f96779b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:47:31 np0005548730.novalocal sudo[7486]: pam_unix(sudo:session): session closed for user root
Dec 06 05:47:35 np0005548730.novalocal sshd-session[7337]: Connection closed by 38.102.83.114 port 50452
Dec 06 05:47:35 np0005548730.novalocal sshd-session[7334]: pam_unix(sshd:session): session closed for user zuul
Dec 06 05:47:35 np0005548730.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 06 05:47:35 np0005548730.novalocal systemd-logind[788]: Session 4 logged out. Waiting for processes to exit.
Dec 06 05:47:35 np0005548730.novalocal systemd-logind[788]: Removed session 4.
Dec 06 05:47:37 np0005548730.novalocal sshd-session[7514]: Received disconnect from 43.252.229.25 port 41496:11: Bye Bye [preauth]
Dec 06 05:47:37 np0005548730.novalocal sshd-session[7514]: Disconnected from authenticating user root 43.252.229.25 port 41496 [preauth]
Dec 06 05:47:47 np0005548730.novalocal sshd-session[7516]: Received disconnect from 38.105.25.194 port 45390:11: Bye Bye [preauth]
Dec 06 05:47:47 np0005548730.novalocal sshd-session[7516]: Disconnected from authenticating user root 38.105.25.194 port 45390 [preauth]
Dec 06 05:48:04 np0005548730.novalocal sshd-session[7518]: Received disconnect from 45.78.201.60 port 48492:11: Bye Bye [preauth]
Dec 06 05:48:04 np0005548730.novalocal sshd-session[7518]: Disconnected from authenticating user root 45.78.201.60 port 48492 [preauth]
Dec 06 05:48:05 np0005548730.novalocal sshd-session[7519]: Connection closed by 45.78.222.195 port 34478 [preauth]
Dec 06 05:48:11 np0005548730.novalocal sshd-session[7522]: Received disconnect from 103.98.176.164 port 50434:11: Bye Bye [preauth]
Dec 06 05:48:11 np0005548730.novalocal sshd-session[7522]: Disconnected from authenticating user root 103.98.176.164 port 50434 [preauth]
Dec 06 05:48:25 np0005548730.novalocal systemd[4309]: Created slice User Background Tasks Slice.
Dec 06 05:48:26 np0005548730.novalocal systemd[4309]: Starting Cleanup of User's Temporary Files and Directories...
Dec 06 05:48:26 np0005548730.novalocal systemd[4309]: Finished Cleanup of User's Temporary Files and Directories.
Dec 06 05:48:30 np0005548730.novalocal sshd-session[7526]: Received disconnect from 194.107.115.2 port 63448:11: Bye Bye [preauth]
Dec 06 05:48:30 np0005548730.novalocal sshd-session[7526]: Disconnected from authenticating user root 194.107.115.2 port 63448 [preauth]
Dec 06 05:48:58 np0005548730.novalocal sshd-session[7528]: Received disconnect from 38.105.25.194 port 36472:11: Bye Bye [preauth]
Dec 06 05:48:58 np0005548730.novalocal sshd-session[7528]: Disconnected from authenticating user root 38.105.25.194 port 36472 [preauth]
Dec 06 05:49:03 np0005548730.novalocal sshd-session[7530]: Received disconnect from 188.166.104.136 port 55882:11: Bye Bye [preauth]
Dec 06 05:49:03 np0005548730.novalocal sshd-session[7530]: Disconnected from authenticating user root 188.166.104.136 port 55882 [preauth]
Dec 06 05:49:05 np0005548730.novalocal sshd-session[7532]: Received disconnect from 43.252.229.25 port 39654:11: Bye Bye [preauth]
Dec 06 05:49:05 np0005548730.novalocal sshd-session[7532]: Disconnected from authenticating user root 43.252.229.25 port 39654 [preauth]
Dec 06 05:49:30 np0005548730.novalocal sshd-session[7534]: Received disconnect from 103.98.176.164 port 57690:11: Bye Bye [preauth]
Dec 06 05:49:30 np0005548730.novalocal sshd-session[7534]: Disconnected from authenticating user root 103.98.176.164 port 57690 [preauth]
Dec 06 05:49:45 np0005548730.novalocal sshd-session[7536]: Received disconnect from 194.107.115.2 port 33800:11: Bye Bye [preauth]
Dec 06 05:49:45 np0005548730.novalocal sshd-session[7536]: Disconnected from authenticating user root 194.107.115.2 port 33800 [preauth]
Dec 06 05:50:23 np0005548730.novalocal sshd-session[7538]: Received disconnect from 43.252.229.25 port 37764:11: Bye Bye [preauth]
Dec 06 05:50:23 np0005548730.novalocal sshd-session[7538]: Disconnected from authenticating user root 43.252.229.25 port 37764 [preauth]
Dec 06 05:50:30 np0005548730.novalocal sshd-session[7540]: Received disconnect from 45.78.201.60 port 43542:11: Bye Bye [preauth]
Dec 06 05:50:30 np0005548730.novalocal sshd-session[7540]: Disconnected from authenticating user root 45.78.201.60 port 43542 [preauth]
Dec 06 05:50:40 np0005548730.novalocal sshd-session[7542]: Received disconnect from 45.78.222.195 port 55876:11: Bye Bye [preauth]
Dec 06 05:50:40 np0005548730.novalocal sshd-session[7542]: Disconnected from authenticating user root 45.78.222.195 port 55876 [preauth]
Dec 06 05:50:51 np0005548730.novalocal sshd-session[7544]: Received disconnect from 103.98.176.164 port 39092:11: Bye Bye [preauth]
Dec 06 05:50:51 np0005548730.novalocal sshd-session[7544]: Disconnected from authenticating user root 103.98.176.164 port 39092 [preauth]
Dec 06 05:51:37 np0005548730.novalocal sshd-session[7546]: Received disconnect from 43.252.229.25 port 35832:11: Bye Bye [preauth]
Dec 06 05:51:37 np0005548730.novalocal sshd-session[7546]: Disconnected from authenticating user root 43.252.229.25 port 35832 [preauth]
Dec 06 05:52:14 np0005548730.novalocal sshd-session[7548]: Received disconnect from 103.98.176.164 port 46852:11: Bye Bye [preauth]
Dec 06 05:52:14 np0005548730.novalocal sshd-session[7548]: Disconnected from authenticating user root 103.98.176.164 port 46852 [preauth]
Dec 06 05:52:45 np0005548730.novalocal sshd-session[7550]: Received disconnect from 14.63.196.175 port 35950:11: Bye Bye [preauth]
Dec 06 05:52:45 np0005548730.novalocal sshd-session[7550]: Disconnected from authenticating user root 14.63.196.175 port 35950 [preauth]
Dec 06 05:52:56 np0005548730.novalocal sshd-session[7554]: Accepted publickey for zuul from 38.102.83.114 port 40722 ssh2: RSA SHA256:aHyVcRaDK3hfZLzaCXAUf9WeLucbkCfDdQjLc4/bZwE
Dec 06 05:52:56 np0005548730.novalocal systemd-logind[788]: New session 5 of user zuul.
Dec 06 05:52:56 np0005548730.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 06 05:52:56 np0005548730.novalocal sshd-session[7554]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 05:52:56 np0005548730.novalocal sudo[7581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkygtkhvvkagyxqqchqqcciwzdlgplqi ; /usr/bin/python3'
Dec 06 05:52:56 np0005548730.novalocal sudo[7581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:52:56 np0005548730.novalocal python3[7583]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ec2-ffbe-7dd8-fec6-000000000ca2-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:52:56 np0005548730.novalocal sudo[7581]: pam_unix(sudo:session): session closed for user root
Dec 06 05:52:56 np0005548730.novalocal sudo[7610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlduzkcjbkczmbtbqaihzppvtnicywdb ; /usr/bin/python3'
Dec 06 05:52:56 np0005548730.novalocal sudo[7610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:52:56 np0005548730.novalocal python3[7612]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:52:56 np0005548730.novalocal sudo[7610]: pam_unix(sudo:session): session closed for user root
Dec 06 05:52:56 np0005548730.novalocal sudo[7636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofhihnzwgnghxnbtszzwrcwedaksjvxc ; /usr/bin/python3'
Dec 06 05:52:56 np0005548730.novalocal sudo[7636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:52:57 np0005548730.novalocal python3[7638]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:52:57 np0005548730.novalocal sudo[7636]: pam_unix(sudo:session): session closed for user root
Dec 06 05:52:57 np0005548730.novalocal sudo[7662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjonkfkkuqnfsuvcmhvwficxxyxtbehk ; /usr/bin/python3'
Dec 06 05:52:57 np0005548730.novalocal sudo[7662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:52:57 np0005548730.novalocal python3[7664]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:52:57 np0005548730.novalocal sudo[7662]: pam_unix(sudo:session): session closed for user root
Dec 06 05:52:57 np0005548730.novalocal sudo[7688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmuplzpxxvctckaqsqslwywwhxgunoah ; /usr/bin/python3'
Dec 06 05:52:57 np0005548730.novalocal sudo[7688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:52:57 np0005548730.novalocal python3[7690]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:52:57 np0005548730.novalocal sudo[7688]: pam_unix(sudo:session): session closed for user root
Dec 06 05:52:58 np0005548730.novalocal sudo[7714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxaabycvwzzhjcvjgefuhqjrtgmaztvn ; /usr/bin/python3'
Dec 06 05:52:58 np0005548730.novalocal sudo[7714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:52:58 np0005548730.novalocal python3[7716]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:52:58 np0005548730.novalocal sudo[7714]: pam_unix(sudo:session): session closed for user root
Dec 06 05:52:58 np0005548730.novalocal sudo[7792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npkjwexezjwflsjrmhfmpoohevqsqhfu ; /usr/bin/python3'
Dec 06 05:52:58 np0005548730.novalocal sudo[7792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:52:58 np0005548730.novalocal python3[7794]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 05:52:58 np0005548730.novalocal sudo[7792]: pam_unix(sudo:session): session closed for user root
Dec 06 05:52:58 np0005548730.novalocal sudo[7865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgshwcifhwcqhjuxkdeqqrydiuwgbxgu ; /usr/bin/python3'
Dec 06 05:52:58 np0005548730.novalocal sudo[7865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:52:59 np0005548730.novalocal python3[7867]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765000378.4717822-364-270766170703159/source _original_basename=tmpqspf9xtr follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:52:59 np0005548730.novalocal sudo[7865]: pam_unix(sudo:session): session closed for user root
Dec 06 05:53:01 np0005548730.novalocal sudo[7916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dllvaldahewzjoidigybufadgtowghvm ; /usr/bin/python3'
Dec 06 05:53:01 np0005548730.novalocal sudo[7916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:53:01 np0005548730.novalocal python3[7918]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 05:53:01 np0005548730.novalocal systemd[1]: Reloading.
Dec 06 05:53:01 np0005548730.novalocal systemd-rc-local-generator[7938]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 05:53:01 np0005548730.novalocal sudo[7916]: pam_unix(sudo:session): session closed for user root
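
The three tasks above create /etc/systemd/system.conf.d, install an override.conf whose contents are not logged (content=NOT_LOGGING_PARAMETER), and trigger a daemon reload. Given the io.max writes that follow, a plausible reconstruction is an override enabling I/O accounting; the [Manager] key below is an assumption, not taken from the log:

    install -d -m 0755 /etc/systemd/system.conf.d
    cat > /etc/systemd/system.conf.d/override.conf <<'EOF'
    [Manager]
    DefaultIOAccounting=yes
    EOF
    # assumed contents above; the real override.conf was only logged as a checksum
    systemctl daemon-reload
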
Dec 06 05:53:03 np0005548730.novalocal sudo[7971]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czhsxhiqhwvsilfuaqbiqjueqreenwoh ; /usr/bin/python3'
Dec 06 05:53:03 np0005548730.novalocal sudo[7971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:53:03 np0005548730.novalocal python3[7973]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 06 05:53:03 np0005548730.novalocal sudo[7971]: pam_unix(sudo:session): session closed for user root
Dec 06 05:53:04 np0005548730.novalocal sudo[7997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exkztvapimeuamemawiydtvbgljblexg ; /usr/bin/python3'
Dec 06 05:53:04 np0005548730.novalocal sudo[7997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:53:04 np0005548730.novalocal python3[7999]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:53:05 np0005548730.novalocal sudo[7997]: pam_unix(sudo:session): session closed for user root
Dec 06 05:53:05 np0005548730.novalocal sudo[8025]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofqggnpdlutidfagltvxnxspoknlhkoa ; /usr/bin/python3'
Dec 06 05:53:05 np0005548730.novalocal sudo[8025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:53:05 np0005548730.novalocal python3[8027]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:53:05 np0005548730.novalocal sudo[8025]: pam_unix(sudo:session): session closed for user root
Dec 06 05:53:05 np0005548730.novalocal sudo[8053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sagjmeqzlshrqntxikzqcjpyfrqxxknz ; /usr/bin/python3'
Dec 06 05:53:05 np0005548730.novalocal sudo[8053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:53:05 np0005548730.novalocal python3[8055]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:53:05 np0005548730.novalocal sudo[8053]: pam_unix(sudo:session): session closed for user root
Dec 06 05:53:05 np0005548730.novalocal sudo[8081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgwcizbyrhgywyhffhgpaducpofeepmm ; /usr/bin/python3'
Dec 06 05:53:05 np0005548730.novalocal sudo[8081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:53:05 np0005548730.novalocal python3[8083]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:53:05 np0005548730.novalocal sudo[8081]: pam_unix(sudo:session): session closed for user root
Dec 06 05:53:07 np0005548730.novalocal python3[8110]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ec2-ffbe-7dd8-fec6-000000000ca9-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:53:08 np0005548730.novalocal python3[8140]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
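
Taken together, the 05:53:04-05:53:05 loop throttles block I/O on /dev/vda (major:minor 252:0, obtained from the earlier lsblk) for each top-level cgroup, then reads the files back to verify. A minimal shell reconstruction of the same steps, assuming cgroup v2 mounted at /sys/fs/cgroup:

    DEV=$(lsblk -nd -o MAJ:MIN /dev/vda)    # "252:0" on this host
    for cg in init.scope machine.slice system.slice user.slice; do
        echo "$DEV riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > "/sys/fs/cgroup/$cg/io.max"
    done
    cat /sys/fs/cgroup/system.slice/io.max  # verify, as the playbook does

The final stat on kubepods.slice/io.max presumably checks whether a kubelet cgroup exists before applying the same limit there.
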
Dec 06 05:53:11 np0005548730.novalocal sshd-session[7557]: Connection closed by 38.102.83.114 port 40722
Dec 06 05:53:11 np0005548730.novalocal sshd-session[7554]: pam_unix(sshd:session): session closed for user zuul
Dec 06 05:53:11 np0005548730.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 06 05:53:11 np0005548730.novalocal systemd[1]: session-5.scope: Consumed 4.304s CPU time.
Dec 06 05:53:11 np0005548730.novalocal systemd-logind[788]: Session 5 logged out. Waiting for processes to exit.
Dec 06 05:53:11 np0005548730.novalocal systemd-logind[788]: Removed session 5.
Dec 06 05:53:13 np0005548730.novalocal sshd-session[8146]: Accepted publickey for zuul from 38.102.83.114 port 40370 ssh2: RSA SHA256:aHyVcRaDK3hfZLzaCXAUf9WeLucbkCfDdQjLc4/bZwE
Dec 06 05:53:13 np0005548730.novalocal systemd-logind[788]: New session 6 of user zuul.
Dec 06 05:53:13 np0005548730.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 06 05:53:13 np0005548730.novalocal sshd-session[8146]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 05:53:13 np0005548730.novalocal sudo[8173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbzxzrlhptejcngfufddvnjhgklcbxdh ; /usr/bin/python3'
Dec 06 05:53:13 np0005548730.novalocal sudo[8173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:53:14 np0005548730.novalocal python3[8175]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
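
Installing podman and buildah pulls in container-selinux, whose scriptlets reload the SELinux policy; that likely accounts for the repeated "Converting ... SID table entries" blocks and the setsebool changes that follow. The task is equivalent to:

    dnf -y install podman buildah
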
Dec 06 05:53:17 np0005548730.novalocal sshd-session[8182]: Received disconnect from 45.78.222.195 port 43752:11: Bye Bye [preauth]
Dec 06 05:53:17 np0005548730.novalocal sshd-session[8182]: Disconnected from authenticating user root 45.78.222.195 port 43752 [preauth]
Dec 06 05:53:30 np0005548730.novalocal sshd-session[7552]: Connection closed by 45.78.201.60 port 38856 [preauth]
Dec 06 05:53:41 np0005548730.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 06 05:53:41 np0005548730.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 05:53:41 np0005548730.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 06 05:53:41 np0005548730.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 05:53:41 np0005548730.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 06 05:53:41 np0005548730.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 05:53:41 np0005548730.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 05:53:41 np0005548730.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 05:53:53 np0005548730.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 06 05:53:53 np0005548730.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 05:53:53 np0005548730.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 06 05:53:53 np0005548730.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 05:53:53 np0005548730.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 06 05:53:53 np0005548730.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 05:53:53 np0005548730.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 05:53:53 np0005548730.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 05:54:05 np0005548730.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 06 05:54:05 np0005548730.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 05:54:05 np0005548730.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 06 05:54:05 np0005548730.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 05:54:05 np0005548730.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 06 05:54:05 np0005548730.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 05:54:05 np0005548730.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 05:54:05 np0005548730.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 05:54:09 np0005548730.novalocal setsebool[8247]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 06 05:54:09 np0005548730.novalocal setsebool[8247]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
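
Both booleans are flipped persistently (note the additional policy conversion right after), which by hand would be:

    setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1
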
Dec 06 05:54:22 np0005548730.novalocal kernel: SELinux:  Converting 388 SID table entries...
Dec 06 05:54:22 np0005548730.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 05:54:22 np0005548730.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 06 05:54:22 np0005548730.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 05:54:22 np0005548730.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 06 05:54:22 np0005548730.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 05:54:22 np0005548730.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 05:54:22 np0005548730.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 05:54:59 np0005548730.novalocal dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 06 05:55:00 np0005548730.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 05:55:00 np0005548730.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 06 05:55:00 np0005548730.novalocal systemd[1]: Reloading.
Dec 06 05:55:00 np0005548730.novalocal systemd-rc-local-generator[9004]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 05:55:00 np0005548730.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 05:55:04 np0005548730.novalocal sudo[8173]: pam_unix(sudo:session): session closed for user root
Dec 06 05:55:07 np0005548730.novalocal python3[13852]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ec2-ffbe-70f1-899a-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 05:55:10 np0005548730.novalocal sshd-session[12751]: Received disconnect from 45.78.201.60 port 58296:11: Bye Bye [preauth]
Dec 06 05:55:10 np0005548730.novalocal sshd-session[12751]: Disconnected from authenticating user root 45.78.201.60 port 58296 [preauth]
Dec 06 05:55:15 np0005548730.novalocal kernel: evm: overlay not supported
Dec 06 05:55:17 np0005548730.novalocal systemd[4309]: Starting D-Bus User Message Bus...
Dec 06 05:55:17 np0005548730.novalocal dbus-broker-launch[15060]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 06 05:55:17 np0005548730.novalocal dbus-broker-launch[15060]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 06 05:55:17 np0005548730.novalocal systemd[4309]: Started D-Bus User Message Bus.
Dec 06 05:55:17 np0005548730.novalocal dbus-broker-lau[15060]: Ready
Dec 06 05:55:17 np0005548730.novalocal systemd[4309]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 06 05:55:17 np0005548730.novalocal systemd[4309]: Created slice Slice /user.
Dec 06 05:55:17 np0005548730.novalocal systemd[4309]: podman-14456.scope: unit configures an IP firewall, but not running as root.
Dec 06 05:55:17 np0005548730.novalocal systemd[4309]: (This warning is only shown for the first unit using IP firewalling.)
Dec 06 05:55:17 np0005548730.novalocal systemd[4309]: Started podman-14456.scope.
Dec 06 05:55:17 np0005548730.novalocal systemd[4309]: Started podman-pause-8aa4a700.scope.
Dec 06 05:55:18 np0005548730.novalocal sudo[15389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhvzbdqhtxbapixcbhljpcwxrkejwnvc ; /usr/bin/python3'
Dec 06 05:55:18 np0005548730.novalocal sudo[15389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:55:18 np0005548730.novalocal python3[15401]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.182:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.182:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:55:18 np0005548730.novalocal python3[15401]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 06 05:55:18 np0005548730.novalocal sudo[15389]: pam_unix(sudo:session): session closed for user root
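
Reconstructed from the logged blockinfile parameters, the block appended to /etc/containers/registries.conf is the TOML below; the heredoc is just an illustration of the same edit done by hand:

    cat >> /etc/containers/registries.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.182:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
    EOF

The [WARNING] about /root/.ansible/tmp is informational: the module created root's remote_tmp with mode 0700 on first use.
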
Dec 06 05:55:18 np0005548730.novalocal sshd-session[8149]: Connection closed by 38.102.83.114 port 40370
Dec 06 05:55:18 np0005548730.novalocal sshd-session[8146]: pam_unix(sshd:session): session closed for user zuul
Dec 06 05:55:18 np0005548730.novalocal systemd-logind[788]: Session 6 logged out. Waiting for processes to exit.
Dec 06 05:55:18 np0005548730.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Dec 06 05:55:18 np0005548730.novalocal systemd[1]: session-6.scope: Consumed 1min 8.810s CPU time.
Dec 06 05:55:18 np0005548730.novalocal systemd-logind[788]: Removed session 6.
Dec 06 05:55:38 np0005548730.novalocal sshd-session[22656]: Connection closed by 38.102.83.248 port 50288 [preauth]
Dec 06 05:55:38 np0005548730.novalocal sshd-session[22658]: Connection closed by 38.102.83.248 port 50290 [preauth]
Dec 06 05:55:38 np0005548730.novalocal sshd-session[22659]: Unable to negotiate with 38.102.83.248 port 50298: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 06 05:55:38 np0005548730.novalocal sshd-session[22662]: Unable to negotiate with 38.102.83.248 port 50310: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 06 05:55:38 np0005548730.novalocal sshd-session[22660]: Unable to negotiate with 38.102.83.248 port 50312: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 06 05:55:42 np0005548730.novalocal sshd-session[24029]: Accepted publickey for zuul from 38.102.83.114 port 50626 ssh2: RSA SHA256:aHyVcRaDK3hfZLzaCXAUf9WeLucbkCfDdQjLc4/bZwE
Dec 06 05:55:42 np0005548730.novalocal systemd-logind[788]: New session 7 of user zuul.
Dec 06 05:55:42 np0005548730.novalocal systemd[1]: Started Session 7 of User zuul.
Dec 06 05:55:42 np0005548730.novalocal sshd-session[24029]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 05:55:43 np0005548730.novalocal python3[24129]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO334parmk7cxOcI6a0PFnZ27TSOTPbpmTBoe8YEeQQ0LR7/5AS2zaJjFwcqLsniM8KyvvJoyYEK8iW2BwzxAhE= zuul@np0005548728.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:55:43 np0005548730.novalocal sudo[24351]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slvxgtgkygfzveblxjjropvybzlktgel ; /usr/bin/python3'
Dec 06 05:55:43 np0005548730.novalocal sudo[24351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:55:43 np0005548730.novalocal python3[24359]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO334parmk7cxOcI6a0PFnZ27TSOTPbpmTBoe8YEeQQ0LR7/5AS2zaJjFwcqLsniM8KyvvJoyYEK8iW2BwzxAhE= zuul@np0005548728.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:55:43 np0005548730.novalocal sudo[24351]: pam_unix(sudo:session): session closed for user root
Dec 06 05:55:44 np0005548730.novalocal sudo[24711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjsreagrgguuohnnemiekuxryxtdvlbz ; /usr/bin/python3'
Dec 06 05:55:44 np0005548730.novalocal sudo[24711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:55:44 np0005548730.novalocal python3[24719]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548730.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 06 05:55:44 np0005548730.novalocal useradd[24796]: new group: name=cloud-admin, GID=1002
Dec 06 05:55:44 np0005548730.novalocal useradd[24796]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 06 05:55:44 np0005548730.novalocal sudo[24711]: pam_unix(sudo:session): session closed for user root
Dec 06 05:55:45 np0005548730.novalocal sudo[25151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvnljemdryvbknlmnoktcbavelltzkff ; /usr/bin/python3'
Dec 06 05:55:45 np0005548730.novalocal sudo[25151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:55:45 np0005548730.novalocal python3[25159]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO334parmk7cxOcI6a0PFnZ27TSOTPbpmTBoe8YEeQQ0LR7/5AS2zaJjFwcqLsniM8KyvvJoyYEK8iW2BwzxAhE= zuul@np0005548728.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 05:55:45 np0005548730.novalocal sudo[25151]: pam_unix(sudo:session): session closed for user root
Dec 06 05:55:46 np0005548730.novalocal sudo[25416]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzkepoaxezjbqfbiyipudbxqazgjwrdf ; /usr/bin/python3'
Dec 06 05:55:46 np0005548730.novalocal sudo[25416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:55:46 np0005548730.novalocal python3[25425]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 05:55:46 np0005548730.novalocal sudo[25416]: pam_unix(sudo:session): session closed for user root
Dec 06 05:55:46 np0005548730.novalocal sudo[25669]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srltbhezmjyktnkuccaqkqnxsqcyqkwt ; /usr/bin/python3'
Dec 06 05:55:46 np0005548730.novalocal sudo[25669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:55:46 np0005548730.novalocal python3[25678]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765000546.0308132-168-71646305816701/source _original_basename=tmpb081hykv follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 05:55:46 np0005548730.novalocal sudo[25669]: pam_unix(sudo:session): session closed for user root
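
These tasks provision a cloud-admin account: the user is created with UID/GID 1002, the same zuul ECDSA key authorized earlier for zuul and root is added for it, and /etc/sudoers.d/cloud-admin is installed with mode 0640 (its contents are not logged). A hand-run sketch; the NOPASSWD rule is an assumption, since the copied file's content is NOT_LOGGING_PARAMETER:

    useradd -m -s /bin/bash cloud-admin
    install -d -m 0700 -o cloud-admin -g cloud-admin /home/cloud-admin/.ssh
    # full ecdsa-sha2-nistp256 key string appears in the log entries above
    echo 'ecdsa-sha2-nistp256 AAAA... zuul@np0005548728.novalocal' \
        >> /home/cloud-admin/.ssh/authorized_keys
    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cloud-admin  # assumed rule
    chmod 0640 /etc/sudoers.d/cloud-admin
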
Dec 06 05:55:47 np0005548730.novalocal sudo[25995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrrfmbcfryswpgkppqkvsscaknxkrnyy ; /usr/bin/python3'
Dec 06 05:55:47 np0005548730.novalocal sudo[25995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 05:55:47 np0005548730.novalocal python3[26003]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Dec 06 05:55:47 np0005548730.novalocal systemd[1]: Starting Hostname Service...
Dec 06 05:55:47 np0005548730.novalocal systemd[1]: Started Hostname Service.
Dec 06 05:55:47 np0005548730.novalocal systemd-hostnamed[26104]: Changed pretty hostname to 'compute-1'
Dec 06 05:55:47 compute-1 systemd-hostnamed[26104]: Hostname set to <compute-1> (static)
Dec 06 05:55:47 compute-1 NetworkManager[7226]: <info>  [1765000547.9170] hostname: static hostname changed from "np0005548730.novalocal" to "compute-1"
Dec 06 05:55:47 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 05:55:47 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
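
The ansible hostname module (use=systemd) goes through systemd-hostnamed, which is why both the pretty and static hostnames change and NetworkManager immediately picks up the new name. The manual equivalent:

    hostnamectl set-hostname compute-1
    hostnamectl status    # Static hostname: compute-1
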
Dec 06 05:55:47 compute-1 sudo[25995]: pam_unix(sudo:session): session closed for user root
Dec 06 05:55:48 compute-1 sshd-session[24073]: Connection closed by 38.102.83.114 port 50626
Dec 06 05:55:48 compute-1 sshd-session[24029]: pam_unix(sshd:session): session closed for user zuul
Dec 06 05:55:48 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Dec 06 05:55:48 compute-1 systemd[1]: session-7.scope: Consumed 2.334s CPU time.
Dec 06 05:55:48 compute-1 systemd-logind[788]: Session 7 logged out. Waiting for processes to exit.
Dec 06 05:55:48 compute-1 systemd-logind[788]: Removed session 7.
Dec 06 05:55:57 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 05:55:58 compute-1 sshd-session[28192]: Connection closed by 45.78.222.195 port 54910 [preauth]
Dec 06 05:56:02 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 05:56:02 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 06 05:56:02 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1min 402ms CPU time.
Dec 06 05:56:02 compute-1 systemd[1]: run-rf8a52161a4e647b7bc7a3ad4d8537a15.service: Deactivated successfully.
Dec 06 05:56:10 compute-1 sshd-session[30000]: Received disconnect from 188.166.104.136 port 54650:11: Bye Bye [preauth]
Dec 06 05:56:10 compute-1 sshd-session[30000]: Disconnected from authenticating user root 188.166.104.136 port 54650 [preauth]
Dec 06 05:56:17 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 05:57:26 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 06 05:57:26 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 06 05:57:26 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 06 05:57:26 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 06 05:59:43 compute-1 sshd-session[30010]: Received disconnect from 188.166.104.136 port 36206:11: Bye Bye [preauth]
Dec 06 05:59:43 compute-1 sshd-session[30010]: Disconnected from authenticating user root 188.166.104.136 port 36206 [preauth]
Dec 06 05:59:59 compute-1 sshd-session[30012]: Connection closed by 45.78.201.60 port 58936 [preauth]
Dec 06 06:00:07 compute-1 sshd-session[30014]: Accepted publickey for zuul from 38.102.83.248 port 40802 ssh2: RSA SHA256:aHyVcRaDK3hfZLzaCXAUf9WeLucbkCfDdQjLc4/bZwE
Dec 06 06:00:07 compute-1 systemd-logind[788]: New session 8 of user zuul.
Dec 06 06:00:07 compute-1 systemd[1]: Started Session 8 of User zuul.
Dec 06 06:00:07 compute-1 sshd-session[30014]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:00:08 compute-1 python3[30090]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:00:10 compute-1 sudo[30204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdfemxjcspwnvqmljyykjsvbsvnavzgf ; /usr/bin/python3'
Dec 06 06:00:10 compute-1 sudo[30204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:10 compute-1 python3[30206]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:00:10 compute-1 sudo[30204]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:11 compute-1 sudo[30277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drqpgvajvqyvkxukbihrqolbamtulpyp ; /usr/bin/python3'
Dec 06 06:00:11 compute-1 sudo[30277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:11 compute-1 python3[30279]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.4847004-34059-248273232684798/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:00:11 compute-1 sudo[30277]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:11 compute-1 sudo[30303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxojnfpbjibbbcixrbgzxnsaoneqkrlo ; /usr/bin/python3'
Dec 06 06:00:11 compute-1 sudo[30303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:11 compute-1 python3[30305]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:00:11 compute-1 sudo[30303]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:11 compute-1 sudo[30376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oawrzjjpogzlkdztxjfxqmhxbccbpihe ; /usr/bin/python3'
Dec 06 06:00:11 compute-1 sudo[30376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:11 compute-1 python3[30378]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.4847004-34059-248273232684798/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:00:11 compute-1 sudo[30376]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:11 compute-1 sudo[30402]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjqhfaqcaeihdkaojfxrwfomckmmwqnn ; /usr/bin/python3'
Dec 06 06:00:11 compute-1 sudo[30402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:12 compute-1 python3[30404]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:00:12 compute-1 sudo[30402]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:12 compute-1 sudo[30475]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhcbatfakhjajeqvkzyvpjewvakqhikj ; /usr/bin/python3'
Dec 06 06:00:12 compute-1 sudo[30475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:12 compute-1 python3[30477]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.4847004-34059-248273232684798/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:00:12 compute-1 sudo[30475]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:12 compute-1 sudo[30501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hapvilhsqekdvgfdvrfeglehdoxvsydt ; /usr/bin/python3'
Dec 06 06:00:12 compute-1 sudo[30501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:12 compute-1 python3[30503]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:00:12 compute-1 sudo[30501]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:12 compute-1 sudo[30574]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unabxgzizmeaueciavfufgjogzzsndqk ; /usr/bin/python3'
Dec 06 06:00:12 compute-1 sudo[30574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:12 compute-1 python3[30576]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.4847004-34059-248273232684798/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:00:13 compute-1 sudo[30574]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:13 compute-1 sudo[30600]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niftqlstutvuazwbleyoewwqadrywyev ; /usr/bin/python3'
Dec 06 06:00:13 compute-1 sudo[30600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:13 compute-1 python3[30602]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:00:13 compute-1 sudo[30600]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:13 compute-1 sudo[30673]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jutqgsjguobcnpfuymzsarculoylwyah ; /usr/bin/python3'
Dec 06 06:00:13 compute-1 sudo[30673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:13 compute-1 python3[30675]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.4847004-34059-248273232684798/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:00:13 compute-1 sudo[30673]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:13 compute-1 sudo[30699]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxvntrawdxovboirkmsvonbbcbtfwry ; /usr/bin/python3'
Dec 06 06:00:13 compute-1 sudo[30699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:13 compute-1 python3[30701]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:00:13 compute-1 sudo[30699]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:13 compute-1 sudo[30772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiypnxgbbnniavsockzlfgncctdpkuem ; /usr/bin/python3'
Dec 06 06:00:13 compute-1 sudo[30772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:14 compute-1 python3[30774]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.4847004-34059-248273232684798/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:00:14 compute-1 sudo[30772]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:14 compute-1 sudo[30798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jydceazfcgjcxzjbvjariqsqbhkwtely ; /usr/bin/python3'
Dec 06 06:00:14 compute-1 sudo[30798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:14 compute-1 python3[30800]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:00:14 compute-1 sudo[30798]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:14 compute-1 sudo[30871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvepipnwpweylyeifamwtibrkjxdsatj ; /usr/bin/python3'
Dec 06 06:00:14 compute-1 sudo[30871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:00:14 compute-1 python3[30873]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765000810.4847004-34059-248273232684798/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:00:14 compute-1 sudo[30871]: pam_unix(sudo:session): session closed for user root
Dec 06 06:00:27 compute-1 python3[30922]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:01:01 compute-1 CROND[30925]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 06:01:01 compute-1 run-parts[30928]: (/etc/cron.hourly) starting 0anacron
Dec 06 06:01:01 compute-1 anacron[30936]: Anacron started on 2025-12-06
Dec 06 06:01:01 compute-1 anacron[30936]: Will run job `cron.daily' in 10 min.
Dec 06 06:01:01 compute-1 anacron[30936]: Will run job `cron.weekly' in 30 min.
Dec 06 06:01:01 compute-1 anacron[30936]: Will run job `cron.monthly' in 50 min.
Dec 06 06:01:01 compute-1 anacron[30936]: Jobs will be executed sequentially
Dec 06 06:01:01 compute-1 run-parts[30938]: (/etc/cron.hourly) finished 0anacron
Dec 06 06:01:01 compute-1 CROND[30924]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 06:01:11 compute-1 sshd-session[30939]: Connection closed by 45.78.222.195 port 36822 [preauth]
Dec 06 06:01:27 compute-1 sshd-session[30941]: Received disconnect from 188.166.104.136 port 47744:11: Bye Bye [preauth]
Dec 06 06:01:27 compute-1 sshd-session[30941]: Disconnected from authenticating user root 188.166.104.136 port 47744 [preauth]
Dec 06 06:02:21 compute-1 sshd-session[30944]: Connection closed by 45.78.201.60 port 54552 [preauth]
Dec 06 06:03:22 compute-1 sshd-session[30946]: Connection closed by 3.19.14.190 port 47618
Dec 06 06:03:38 compute-1 sshd-session[30947]: Received disconnect from 14.63.196.175 port 46820:11: Bye Bye [preauth]
Dec 06 06:03:38 compute-1 sshd-session[30947]: Disconnected from authenticating user root 14.63.196.175 port 46820 [preauth]
Dec 06 06:03:48 compute-1 sshd-session[30949]: Received disconnect from 45.78.222.195 port 38378:11: Bye Bye [preauth]
Dec 06 06:03:48 compute-1 sshd-session[30949]: Disconnected from authenticating user root 45.78.222.195 port 38378 [preauth]
Dec 06 06:04:44 compute-1 sshd-session[30952]: Connection closed by 45.78.201.60 port 34118 [preauth]
Dec 06 06:05:01 compute-1 sshd-session[30954]: Received disconnect from 188.166.104.136 port 37818:11: Bye Bye [preauth]
Dec 06 06:05:01 compute-1 sshd-session[30954]: Disconnected from authenticating user root 188.166.104.136 port 37818 [preauth]
Dec 06 06:05:27 compute-1 sshd-session[30017]: Received disconnect from 38.102.83.248 port 40802:11: disconnected by user
Dec 06 06:05:27 compute-1 sshd-session[30017]: Disconnected from user zuul 38.102.83.248 port 40802
Dec 06 06:05:27 compute-1 sshd-session[30014]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:05:27 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Dec 06 06:05:27 compute-1 systemd[1]: session-8.scope: Consumed 4.979s CPU time.
Dec 06 06:05:27 compute-1 systemd-logind[788]: Session 8 logged out. Waiting for processes to exit.
Dec 06 06:05:27 compute-1 systemd-logind[788]: Removed session 8.
Dec 06 06:09:32 compute-1 sshd-session[30958]: Received disconnect from 45.78.201.60 port 47350:11: Bye Bye [preauth]
Dec 06 06:09:32 compute-1 sshd-session[30958]: Disconnected from authenticating user root 45.78.201.60 port 47350 [preauth]
Dec 06 06:11:01 compute-1 anacron[30936]: Job `cron.daily' started
Dec 06 06:11:01 compute-1 anacron[30936]: Job `cron.daily' terminated
Dec 06 06:12:38 compute-1 sshd-session[30965]: Connection reset by authenticating user root 45.135.232.92 port 53628 [preauth]
Dec 06 06:12:39 compute-1 sshd-session[30967]: Invalid user admin from 45.135.232.92 port 53636
Dec 06 06:12:41 compute-1 sshd-session[30967]: Connection reset by invalid user admin 45.135.232.92 port 53636 [preauth]
Dec 06 06:12:43 compute-1 sshd-session[30969]: Invalid user test from 45.135.232.92 port 53652
Dec 06 06:12:43 compute-1 sshd-session[30969]: Connection reset by invalid user test 45.135.232.92 port 53652 [preauth]
Dec 06 06:12:45 compute-1 sshd-session[30971]: Connection reset by authenticating user root 45.135.232.92 port 53656 [preauth]
Dec 06 06:12:47 compute-1 sshd-session[30973]: Invalid user admin from 45.135.232.92 port 21556
Dec 06 06:12:48 compute-1 sshd-session[30973]: Connection reset by invalid user admin 45.135.232.92 port 21556 [preauth]
Dec 06 06:13:52 compute-1 sshd[1011]: Timeout before authentication for connection from 45.78.201.60 to 38.102.83.204, pid = 30963
Dec 06 06:14:27 compute-1 sshd-session[30975]: Received disconnect from 14.63.196.175 port 39726:11: Bye Bye [preauth]
Dec 06 06:14:27 compute-1 sshd-session[30975]: Disconnected from authenticating user root 14.63.196.175 port 39726 [preauth]
Dec 06 06:14:41 compute-1 sshd-session[30977]: Accepted publickey for zuul from 192.168.122.30 port 38492 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:14:41 compute-1 systemd-logind[788]: New session 9 of user zuul.
Dec 06 06:14:41 compute-1 systemd[1]: Started Session 9 of User zuul.
Dec 06 06:14:41 compute-1 sshd-session[30977]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:14:42 compute-1 python3.9[31130]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:14:43 compute-1 sudo[31309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-affautcscstdznnkadqoowgkpvysecqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001682.8864336-62-22327637728644/AnsiballZ_command.py'
Dec 06 06:14:43 compute-1 sudo[31309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:14:43 compute-1 python3.9[31311]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:14:51 compute-1 sudo[31309]: pam_unix(sudo:session): session closed for user root
Dec 06 06:15:17 compute-1 sshd-session[30980]: Connection closed by 192.168.122.30 port 38492
Dec 06 06:15:17 compute-1 sshd-session[30977]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:15:17 compute-1 systemd[1]: Starting dnf makecache...
Dec 06 06:15:17 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Dec 06 06:15:17 compute-1 systemd[1]: session-9.scope: Consumed 8.182s CPU time.
Dec 06 06:15:17 compute-1 systemd-logind[788]: Session 9 logged out. Waiting for processes to exit.
Dec 06 06:15:17 compute-1 systemd-logind[788]: Removed session 9.
Dec 06 06:15:17 compute-1 dnf[31370]: Failed determining last makecache time.
Dec 06 06:15:17 compute-1 dnf[31370]: delorean-openstack-barbican-42b4c41831408a8e323 187 kB/s |  13 kB     00:00
Dec 06 06:15:17 compute-1 dnf[31370]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 1.4 MB/s |  65 kB     00:00
Dec 06 06:15:17 compute-1 dnf[31370]: delorean-openstack-cinder-1c00d6490d88e436f26ef 764 kB/s |  32 kB     00:00
Dec 06 06:15:17 compute-1 dnf[31370]: delorean-python-stevedore-c4acc5639fd2329372142 2.8 MB/s | 131 kB     00:00
Dec 06 06:15:17 compute-1 dnf[31370]: delorean-python-cloudkitty-tests-tempest-2c80f8 694 kB/s |  32 kB     00:00
Dec 06 06:15:17 compute-1 dnf[31370]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 6.0 MB/s | 349 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 939 kB/s |  42 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-python-designate-tests-tempest-347fdbc 401 kB/s |  18 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-openstack-glance-1fd12c29b339f30fe823e 372 kB/s |  18 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 637 kB/s |  29 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-openstack-manila-3c01b7181572c95dac462 610 kB/s |  25 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-python-whitebox-neutron-tests-tempest- 2.9 MB/s | 154 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-openstack-octavia-ba397f07a7331190208c 489 kB/s |  26 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-openstack-watcher-c014f81a8647287f6dcc 327 kB/s |  16 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-ansible-config_template-5ccaa22121a7ff 139 kB/s | 7.4 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 2.8 MB/s | 144 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-openstack-swift-dc98a8463506ac520c469a 322 kB/s |  14 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-python-tempestconf-8515371b7cceebd4282 1.1 MB/s |  53 kB     00:00
Dec 06 06:15:18 compute-1 dnf[31370]: delorean-openstack-heat-ui-013accbfd179753bc3f0 2.0 MB/s |  96 kB     00:00
Dec 06 06:15:19 compute-1 dnf[31370]: CentOS Stream 9 - BaseOS                         16 MB/s | 8.8 MB     00:00
Dec 06 06:15:21 compute-1 dnf[31370]: CentOS Stream 9 - AppStream                      38 MB/s |  25 MB     00:00
Dec 06 06:15:28 compute-1 dnf[31370]: CentOS Stream 9 - CRB                           8.2 MB/s | 7.3 MB     00:00
Dec 06 06:15:30 compute-1 dnf[31370]: CentOS Stream 9 - Extras packages                45 kB/s |  20 kB     00:00
Dec 06 06:15:30 compute-1 dnf[31370]: dlrn-antelope-testing                            17 MB/s | 1.1 MB     00:00
Dec 06 06:15:30 compute-1 dnf[31370]: dlrn-antelope-build-deps                        9.1 MB/s | 461 kB     00:00
Dec 06 06:15:31 compute-1 dnf[31370]: centos9-rabbitmq                                823 kB/s | 123 kB     00:00
Dec 06 06:15:31 compute-1 dnf[31370]: centos9-storage                                 2.8 MB/s | 415 kB     00:00
Dec 06 06:15:31 compute-1 dnf[31370]: centos9-opstools                                338 kB/s |  51 kB     00:00
Dec 06 06:15:32 compute-1 dnf[31370]: NFV SIG OpenvSwitch                             2.8 MB/s | 456 kB     00:00
Dec 06 06:15:32 compute-1 dnf[31370]: repo-setup-centos-appstream                      68 MB/s |  25 MB     00:00
Dec 06 06:15:33 compute-1 sshd-session[31485]: Accepted publickey for zuul from 192.168.122.30 port 33884 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:15:33 compute-1 systemd-logind[788]: New session 10 of user zuul.
Dec 06 06:15:33 compute-1 systemd[1]: Started Session 10 of User zuul.
Dec 06 06:15:33 compute-1 sshd-session[31485]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:15:33 compute-1 python3.9[31638]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 06 06:15:35 compute-1 python3.9[31812]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:15:35 compute-1 sudo[31962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgpvtgfjcluskttswxvmgzuydbeysevp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001735.6003768-100-189792438023107/AnsiballZ_command.py'
Dec 06 06:15:35 compute-1 sudo[31962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:15:36 compute-1 python3.9[31964]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:15:36 compute-1 sudo[31962]: pam_unix(sudo:session): session closed for user root
Dec 06 06:15:37 compute-1 sudo[32115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyqofnjfkrgnoqzcogazowpzgeiadugk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001736.6457744-135-276218877686537/AnsiballZ_stat.py'
Dec 06 06:15:37 compute-1 sudo[32115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:15:37 compute-1 python3.9[32117]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:15:37 compute-1 sudo[32115]: pam_unix(sudo:session): session closed for user root
Dec 06 06:15:37 compute-1 sudo[32267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eusgxsplqoqdxdigyubvvirjueidbglw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001737.5165572-159-280522976406544/AnsiballZ_file.py'
Dec 06 06:15:37 compute-1 sudo[32267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:15:38 compute-1 python3.9[32269]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:15:38 compute-1 sudo[32267]: pam_unix(sudo:session): session closed for user root
Dec 06 06:15:38 compute-1 dnf[31370]: repo-setup-centos-baseos                         30 MB/s | 8.8 MB     00:00
Dec 06 06:15:38 compute-1 sudo[32427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quqomjfdgmjvzmdhymcnijupfebnipna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001738.2948809-183-170340315676467/AnsiballZ_stat.py'
Dec 06 06:15:38 compute-1 sudo[32427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:15:38 compute-1 python3.9[32429]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:15:38 compute-1 sudo[32427]: pam_unix(sudo:session): session closed for user root
Dec 06 06:15:39 compute-1 sudo[32550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prxwncnhxuldsfofhpcktffnhtdnmozo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001738.2948809-183-170340315676467/AnsiballZ_copy.py'
Dec 06 06:15:39 compute-1 sudo[32550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:15:39 compute-1 python3.9[32552]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765001738.2948809-183-170340315676467/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:15:39 compute-1 sudo[32550]: pam_unix(sudo:session): session closed for user root
Dec 06 06:15:40 compute-1 sudo[32710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drodwjayumfwqczjdxtspknywbiucphv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001739.7925353-228-9764608040966/AnsiballZ_setup.py'
Dec 06 06:15:40 compute-1 sudo[32710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:15:40 compute-1 python3.9[32712]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:15:40 compute-1 sudo[32710]: pam_unix(sudo:session): session closed for user root
Dec 06 06:15:40 compute-1 dnf[31370]: repo-setup-centos-highavailability              691 kB/s | 744 kB     00:01
Dec 06 06:15:40 compute-1 sudo[32866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqxdvblnxkymcbiwyuabtqgjuqytawuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001740.7123923-252-51580706312452/AnsiballZ_file.py'
Dec 06 06:15:40 compute-1 sudo[32866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:15:41 compute-1 python3.9[32868]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:15:41 compute-1 sudo[32866]: pam_unix(sudo:session): session closed for user root
Dec 06 06:15:41 compute-1 dnf[31370]: repo-setup-centos-powertools                     43 MB/s | 7.3 MB     00:00
Dec 06 06:15:41 compute-1 sudo[33026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-howytmdxokydrpxweejxxdpujxswdhzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001741.413491-279-177213419201728/AnsiballZ_file.py'
Dec 06 06:15:41 compute-1 sudo[33026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:15:41 compute-1 python3.9[33028]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:15:41 compute-1 sudo[33026]: pam_unix(sudo:session): session closed for user root
Dec 06 06:15:42 compute-1 python3.9[33178]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:15:44 compute-1 dnf[31370]: Extra Packages for Enterprise Linux 9 - x86_64   15 MB/s |  20 MB     00:01
Dec 06 06:15:48 compute-1 python3.9[33436]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:15:49 compute-1 python3.9[33586]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:15:50 compute-1 python3.9[33740]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:15:51 compute-1 sudo[33896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-momwxumoebvtqxowhlxwifhmpnpctvbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001750.865778-423-229527443329907/AnsiballZ_setup.py'
Dec 06 06:15:51 compute-1 sudo[33896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:15:51 compute-1 python3.9[33898]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:15:51 compute-1 sudo[33896]: pam_unix(sudo:session): session closed for user root
Dec 06 06:15:52 compute-1 sudo[33980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byeuikjqwavbqzcooroizjdgdyjowilr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001750.865778-423-229527443329907/AnsiballZ_dnf.py'
Dec 06 06:15:52 compute-1 sudo[33980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:15:52 compute-1 python3.9[33982]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:16:01 compute-1 dnf[31370]: Metadata cache created.
Dec 06 06:16:01 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 06 06:16:01 compute-1 systemd[1]: Finished dnf makecache.
Dec 06 06:16:01 compute-1 systemd[1]: dnf-makecache.service: Consumed 35.600s CPU time.
Dec 06 06:16:02 compute-1 systemd[1]: Reloading.
Dec 06 06:16:03 compute-1 systemd-rc-local-generator[34037]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:16:03 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 06 06:16:03 compute-1 systemd[1]: Reloading.
Dec 06 06:16:03 compute-1 systemd-rc-local-generator[34080]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:16:03 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 06 06:16:03 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 06 06:16:03 compute-1 systemd[1]: Reloading.
Dec 06 06:16:03 compute-1 systemd-rc-local-generator[34116]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:16:03 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 06 06:16:05 compute-1 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Dec 06 06:16:05 compute-1 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Dec 06 06:16:05 compute-1 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Dec 06 06:16:37 compute-1 sshd-session[34224]: Received disconnect from 45.78.201.60 port 45042:11: Bye Bye [preauth]
Dec 06 06:16:37 compute-1 sshd-session[34224]: Disconnected from authenticating user root 45.78.201.60 port 45042 [preauth]
Dec 06 06:17:15 compute-1 kernel: SELinux:  Converting 2719 SID table entries...
Dec 06 06:17:15 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 06:17:15 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 06 06:17:15 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 06:17:15 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 06 06:17:15 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 06:17:15 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 06:17:15 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 06:17:16 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 06 06:17:16 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 06:17:16 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 06 06:17:16 compute-1 systemd[1]: Reloading.
Dec 06 06:17:16 compute-1 systemd-rc-local-generator[34456]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:17:16 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 06:17:16 compute-1 sudo[33980]: pam_unix(sudo:session): session closed for user root
Dec 06 06:17:17 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 06:17:17 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 06 06:17:17 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.011s CPU time.
Dec 06 06:17:17 compute-1 systemd[1]: run-r867c12f8bc144c45b0b4cf9acfa21677.service: Deactivated successfully.
Dec 06 06:18:26 compute-1 sudo[35366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ailayxtthugimlpleasoqtchyodznhkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001905.7322426-460-1966139990112/AnsiballZ_command.py'
Dec 06 06:18:26 compute-1 sudo[35366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:26 compute-1 python3.9[35368]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:18:26 compute-1 sudo[35366]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:27 compute-1 sudo[35647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuannuotrioegptyosmsyfmaowtvtgfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001907.3333013-483-201054625692070/AnsiballZ_selinux.py'
Dec 06 06:18:27 compute-1 sudo[35647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:28 compute-1 python3.9[35649]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 06 06:18:28 compute-1 sudo[35647]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:28 compute-1 sudo[35799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcdjwqswqhyoserpwypoybvvtgoojqpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001908.6882644-516-108723045062716/AnsiballZ_command.py'
Dec 06 06:18:28 compute-1 sudo[35799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:29 compute-1 python3.9[35801]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 06 06:18:30 compute-1 sudo[35799]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:32 compute-1 sudo[35955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gycuxefahufjpjigdrlrdtghtmpbfbvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001912.6036015-540-28947269200271/AnsiballZ_file.py'
Dec 06 06:18:32 compute-1 sudo[35955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:33 compute-1 python3.9[35957]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:18:33 compute-1 sudo[35955]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:34 compute-1 sudo[36107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdkgahhozqxvfmjzwejdhldsavnbznez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001913.9501247-564-57259379899324/AnsiballZ_mount.py'
Dec 06 06:18:34 compute-1 sudo[36107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:34 compute-1 python3.9[36109]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 06 06:18:34 compute-1 sudo[36107]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:35 compute-1 sudo[36259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjzpzccobbokwhcanxptomgpfjqpqstr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001915.5802238-648-15241498816390/AnsiballZ_file.py'
Dec 06 06:18:35 compute-1 sudo[36259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:36 compute-1 python3.9[36261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:18:36 compute-1 sudo[36259]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:41 compute-1 sudo[36411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjnfkdvuaadjueiwidvmjrtutuajdfft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001921.335685-672-131545176658415/AnsiballZ_stat.py'
Dec 06 06:18:41 compute-1 sudo[36411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:45 compute-1 python3.9[36413]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:18:45 compute-1 sudo[36411]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:45 compute-1 sudo[36534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzdauphhwnxmmfjqvukrhpyfkdfzbbnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001921.335685-672-131545176658415/AnsiballZ_copy.py'
Dec 06 06:18:45 compute-1 sudo[36534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:45 compute-1 python3.9[36536]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765001921.335685-672-131545176658415/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:18:45 compute-1 sudo[36534]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:46 compute-1 sudo[36686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqtawiqlypmwqgtlibzinuplvvwtekrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001926.6092641-744-59121154397509/AnsiballZ_stat.py'
Dec 06 06:18:46 compute-1 sudo[36686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:47 compute-1 python3.9[36688]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:18:47 compute-1 sudo[36686]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:47 compute-1 sudo[36838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxmtgkorlsdcbxbhyvmagixmtfpupmfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001927.2886908-768-235697281272197/AnsiballZ_command.py'
Dec 06 06:18:47 compute-1 sudo[36838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:47 compute-1 python3.9[36840]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:18:47 compute-1 sudo[36838]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:48 compute-1 sudo[36991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypymmkykbvapaetchlnjrwgdwtegkulx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001928.3526156-794-262450269036000/AnsiballZ_file.py'
Dec 06 06:18:48 compute-1 sudo[36991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:48 compute-1 python3.9[36993]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:18:48 compute-1 sudo[36991]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:49 compute-1 sudo[37143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfhgwvjqgcqufsvjryjkzqirvxrpybfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001929.348771-825-265967603928854/AnsiballZ_getent.py'
Dec 06 06:18:49 compute-1 sudo[37143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:49 compute-1 python3.9[37145]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 06 06:18:50 compute-1 sudo[37143]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:50 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:18:50 compute-1 sudo[37297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvddyyqqmfjiohkpyspcodjbbpmbrwqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001930.2271435-849-274551655489402/AnsiballZ_group.py'
Dec 06 06:18:50 compute-1 sudo[37297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:50 compute-1 python3.9[37299]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 06:18:50 compute-1 groupadd[37300]: group added to /etc/group: name=qemu, GID=107
Dec 06 06:18:50 compute-1 groupadd[37300]: group added to /etc/gshadow: name=qemu
Dec 06 06:18:50 compute-1 groupadd[37300]: new group: name=qemu, GID=107
Dec 06 06:18:50 compute-1 sudo[37297]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:51 compute-1 sudo[37455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcymwqhytfaobtweoouprctjfodfcbwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001931.1500976-873-45266552954656/AnsiballZ_user.py'
Dec 06 06:18:51 compute-1 sudo[37455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:51 compute-1 python3.9[37457]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 06:18:52 compute-1 useradd[37459]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 06 06:18:53 compute-1 sudo[37455]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:53 compute-1 sudo[37615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftslvqdaedisuwnbwdfccdeyzanfeexb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001933.383693-897-89049381229452/AnsiballZ_getent.py'
Dec 06 06:18:53 compute-1 sudo[37615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:53 compute-1 python3.9[37617]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 06 06:18:53 compute-1 sudo[37615]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:54 compute-1 sudo[37768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvpcxclfybtsgxuvcluabuuymnljpykc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001934.1958294-921-11684603982975/AnsiballZ_group.py'
Dec 06 06:18:54 compute-1 sudo[37768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:54 compute-1 python3.9[37770]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 06:18:54 compute-1 groupadd[37771]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 06 06:18:54 compute-1 groupadd[37771]: group added to /etc/gshadow: name=hugetlbfs
Dec 06 06:18:54 compute-1 groupadd[37771]: new group: name=hugetlbfs, GID=42477
Dec 06 06:18:54 compute-1 sudo[37768]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:55 compute-1 sudo[37926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjypfjckoiidtwkxzvoydnksybuvvmaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001935.118873-948-222185155279208/AnsiballZ_file.py'
Dec 06 06:18:55 compute-1 sudo[37926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:55 compute-1 python3.9[37928]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 06 06:18:55 compute-1 sudo[37926]: pam_unix(sudo:session): session closed for user root
Dec 06 06:18:56 compute-1 sudo[38078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofybrdstoqqkhxtbllpmguzxvuevczat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001936.2595897-981-183018677590724/AnsiballZ_dnf.py'
Dec 06 06:18:56 compute-1 sudo[38078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:18:56 compute-1 python3.9[38080]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:18:59 compute-1 sudo[38078]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:02 compute-1 sudo[38233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kudecrqlroezwlsgojsinonuakwpftng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001941.8749988-1005-14935147750387/AnsiballZ_file.py'
Dec 06 06:19:02 compute-1 sudo[38233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:02 compute-1 python3.9[38235]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:19:02 compute-1 sudo[38233]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:02 compute-1 sshd-session[38082]: Connection closed by 45.78.201.60 port 47080 [preauth]
Dec 06 06:19:02 compute-1 sudo[38385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjeyxqxzxffxqfuwsnnmrfetomdjughg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001942.5767937-1029-105160914151774/AnsiballZ_stat.py'
Dec 06 06:19:02 compute-1 sudo[38385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:02 compute-1 python3.9[38387]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:19:03 compute-1 sudo[38385]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:03 compute-1 sudo[38508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvmfjfztrsjaeeowpcgwjtfkyvyqiebg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001942.5767937-1029-105160914151774/AnsiballZ_copy.py'
Dec 06 06:19:03 compute-1 sudo[38508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:03 compute-1 python3.9[38510]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765001942.5767937-1029-105160914151774/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:19:03 compute-1 sudo[38508]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:04 compute-1 sudo[38660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjpgvovoxhzcsnxeilmisqzmyzfxpiyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001943.9335892-1074-152578620008421/AnsiballZ_systemd.py'
Dec 06 06:19:04 compute-1 sudo[38660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:04 compute-1 python3.9[38662]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:19:04 compute-1 systemd[1]: Starting Load Kernel Modules...
Dec 06 06:19:04 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 06 06:19:04 compute-1 kernel: Bridge firewalling registered
Dec 06 06:19:04 compute-1 systemd-modules-load[38666]: Inserted module 'br_netfilter'
Dec 06 06:19:04 compute-1 systemd[1]: Finished Load Kernel Modules.
Dec 06 06:19:04 compute-1 sudo[38660]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:05 compute-1 sudo[38819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-movhrzoyllpmeipmktgdwisbyelfmmha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001945.2820961-1098-281094115648194/AnsiballZ_stat.py'
Dec 06 06:19:05 compute-1 sudo[38819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:05 compute-1 python3.9[38821]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:19:05 compute-1 sudo[38819]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:06 compute-1 sudo[38942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzgcodzttjvwywbjvkxvpcrtgbkvorqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001945.2820961-1098-281094115648194/AnsiballZ_copy.py'
Dec 06 06:19:06 compute-1 sudo[38942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:06 compute-1 python3.9[38944]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765001945.2820961-1098-281094115648194/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:19:06 compute-1 sudo[38942]: pam_unix(sudo:session): session closed for user root
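
The same stat-then-copy pattern deploys /etc/sysctl.d/99-edpm.conf; the matching restart of systemd-sysctl.service that applies it is logged a little later (06:19:31). Again only the checksum of the rendered edpm-sysctl.conf.j2 is recorded, so the keys below are placeholders, not the real template:

  - name: Deploy EDPM sysctl settings (keys are hypothetical examples)
    ansible.builtin.copy:
      dest: /etc/sysctl.d/99-edpm.conf
      content: |
        # example only; the real content comes from edpm-sysctl.conf.j2
        net.ipv4.ip_forward = 1
      owner: root
      group: root
      mode: '0644'
      setype: etc_t

  - name: Re-apply kernel variables from /etc/sysctl.d
    ansible.builtin.systemd:
      name: systemd-sysctl.service
      state: restarted
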
Dec 06 06:19:07 compute-1 sudo[39094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdzjtmwtnxsimlhwwretqoiujfrgwfov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001946.895353-1152-47291994189969/AnsiballZ_dnf.py'
Dec 06 06:19:07 compute-1 sudo[39094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:07 compute-1 python3.9[39096]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:19:11 compute-1 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Dec 06 06:19:12 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 06:19:12 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 06 06:19:12 compute-1 systemd[1]: Reloading.
Dec 06 06:19:12 compute-1 systemd-rc-local-generator[39159]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:19:12 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 06:19:13 compute-1 sudo[39094]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:15 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 06:19:15 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 06 06:19:15 compute-1 systemd[1]: man-db-cache-update.service: Consumed 4.287s CPU time.
Dec 06 06:19:15 compute-1 systemd[1]: run-rad78503d6003472e859098d1f4ef0c4d.service: Deactivated successfully.
Dec 06 06:19:16 compute-1 python3.9[42849]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:19:16 compute-1 python3.9[43002]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 06 06:19:17 compute-1 python3.9[43152]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:19:18 compute-1 sudo[43302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foioxmrdokagzdygbwrjlxrkdrgxlxtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001958.0169032-1269-2646614215348/AnsiballZ_command.py'
Dec 06 06:19:18 compute-1 sudo[43302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:18 compute-1 python3.9[43304]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:19:18 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 06 06:19:18 compute-1 systemd[1]: Starting Authorization Manager...
Dec 06 06:19:18 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 06 06:19:18 compute-1 polkitd[43521]: Started polkitd version 0.117
Dec 06 06:19:18 compute-1 polkitd[43521]: Loading rules from directory /etc/polkit-1/rules.d
Dec 06 06:19:18 compute-1 polkitd[43521]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 06 06:19:18 compute-1 polkitd[43521]: Finished loading, compiling and executing 2 rules
Dec 06 06:19:18 compute-1 polkitd[43521]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 06 06:19:18 compute-1 systemd[1]: Started Authorization Manager.
Dec 06 06:19:18 compute-1 sudo[43302]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:19 compute-1 sudo[43689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owlverxggjaobgfmupqfejvgkqupputi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001959.474199-1296-118992582507014/AnsiballZ_systemd.py'
Dec 06 06:19:19 compute-1 sudo[43689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:20 compute-1 python3.9[43691]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:19:20 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 06 06:19:20 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Dec 06 06:19:20 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 06 06:19:20 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 06 06:19:20 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 06 06:19:20 compute-1 sudo[43689]: pam_unix(sudo:session): session closed for user root
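
This sequence installs tuned plus the cpu-partitioning profile package, inspects the active profile, switches to throughput-performance with tuned-adm, then enables and restarts the daemon (hence the stop/start pair above). A sketch of the tasks as logged; the register name and the when-guard are assumptions, since only the module arguments appear in the journal:

  - name: Install tuned and the cpu-partitioning profiles
    ansible.builtin.dnf:
      name:
        - tuned
        - tuned-profiles-cpu-partitioning
      state: present

  - name: Read the currently active tuned profile
    ansible.builtin.slurp:
      src: /etc/tuned/active_profile
    register: tuned_active    # hypothetical name

  - name: Switch profiles (guard assumed, to keep the task idempotent)
    ansible.builtin.command: /usr/sbin/tuned-adm profile throughput-performance
    when: "'throughput-performance' not in (tuned_active.content | b64decode)"

  - name: Enable tuned and restart it so the profile sticks across boots
    ansible.builtin.systemd:
      name: tuned
      enabled: true
      state: restarted
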
Dec 06 06:19:20 compute-1 python3.9[43852]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 06 06:19:24 compute-1 sudo[44002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voeawjaiwibpyolzbprhgymyycuvpjea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001964.0310693-1467-178071089688493/AnsiballZ_systemd.py'
Dec 06 06:19:24 compute-1 sudo[44002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:24 compute-1 python3.9[44004]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:19:24 compute-1 systemd[1]: Reloading.
Dec 06 06:19:24 compute-1 systemd-rc-local-generator[44033]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:19:24 compute-1 sudo[44002]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:25 compute-1 sudo[44190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kekwnaipmcbmiaovcgtaghdlzhnjyuip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001964.9723713-1467-152223438043071/AnsiballZ_systemd.py'
Dec 06 06:19:25 compute-1 sudo[44190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:25 compute-1 python3.9[44192]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:19:25 compute-1 systemd[1]: Reloading.
Dec 06 06:19:25 compute-1 systemd-rc-local-generator[44221]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:19:25 compute-1 sudo[44190]: pam_unix(sudo:session): session closed for user root
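
Kernel samepage merging is then switched off at the service level: ksm.service and ksmtuned.service are both stopped and disabled (each disablement triggers the systemd reload seen above). The journal shows two separate invocations; the loop here is purely a compact rendering:

  - name: Stop and disable the KSM services
    ansible.builtin.systemd:
      name: "{{ item }}"
      enabled: false
      state: stopped
    loop:
      - ksm.service
      - ksmtuned.service
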
Dec 06 06:19:26 compute-1 sudo[44378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzrrriioenmotxayxqkujcgnpqexverl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001966.7116983-1515-180383418337383/AnsiballZ_command.py'
Dec 06 06:19:26 compute-1 sudo[44378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:27 compute-1 python3.9[44380]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:19:27 compute-1 sudo[44378]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:27 compute-1 sudo[44531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynczqdwiehfsewprkiuzdwhrsbmdbloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001967.5213356-1539-143372982553608/AnsiballZ_command.py'
Dec 06 06:19:27 compute-1 sudo[44531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:27 compute-1 python3.9[44533]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:19:27 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 06 06:19:27 compute-1 sudo[44531]: pam_unix(sudo:session): session closed for user root
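
Swap is set up with two plain command tasks, and the kernel confirms a 1048572 KiB (1 GiB) swap area on /swap. Both invocations are logged with creates=None, so neither is guarded for idempotence; re-running mkswap over active swap, or swapon over already-enabled swap, will typically fail if the play is replayed:

  - name: Format the swap file (no creates guard in the logged invocation)
    ansible.builtin.command: mkswap "/swap"

  - name: Enable the swap file
    ansible.builtin.command: swapon "/swap"
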
Dec 06 06:19:28 compute-1 sudo[44684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-theddvujgcesdmjliutrysfzyapocciu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001968.2135437-1563-260491745003699/AnsiballZ_command.py'
Dec 06 06:19:28 compute-1 sudo[44684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:28 compute-1 python3.9[44686]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:19:30 compute-1 sudo[44684]: pam_unix(sudo:session): session closed for user root
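
update-ca-trust rebuilds the consolidated trust stores under /etc/pki/ca-trust/extracted from the configured anchor sources, which is why the task takes a couple of seconds here. As a task it is a bare command invocation:

  - name: Regenerate the consolidated CA trust bundles
    ansible.builtin.command: /usr/bin/update-ca-trust
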
Dec 06 06:19:30 compute-1 sudo[44846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uitanvpylvtxfanrefxxtusnwtdyquug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001970.3089201-1587-280623553286833/AnsiballZ_command.py'
Dec 06 06:19:30 compute-1 sudo[44846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:30 compute-1 python3.9[44848]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:19:30 compute-1 sudo[44846]: pam_unix(sudo:session): session closed for user root
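
Writing 2 to /sys/kernel/mm/ksm/run tells the kernel to stop KSM and un-merge any already-shared pages, complementing the service shutdown earlier. Note that the invocation is logged with _uses_shell=False, which means the '>' is passed to echo as a literal argument rather than being interpreted as a redirection; if the intent is really to write the sysfs file, a shell task is the usual form, as in this sketch:

  - name: Disable KSM and un-merge shared pages (2 = stop and unmerge)
    ansible.builtin.shell: echo 2 > /sys/kernel/mm/ksm/run
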
Dec 06 06:19:31 compute-1 sudo[44999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyqhnemtpowqpofagbmtpmmobiisbqgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001970.98406-1611-178296358465201/AnsiballZ_systemd.py'
Dec 06 06:19:31 compute-1 sudo[44999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:31 compute-1 python3.9[45001]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:19:31 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 06 06:19:31 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Dec 06 06:19:31 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Dec 06 06:19:31 compute-1 systemd[1]: Starting Apply Kernel Variables...
Dec 06 06:19:31 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 06 06:19:31 compute-1 systemd[1]: Finished Apply Kernel Variables.
Dec 06 06:19:31 compute-1 sudo[44999]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:32 compute-1 sshd-session[31488]: Connection closed by 192.168.122.30 port 33884
Dec 06 06:19:32 compute-1 sshd-session[31485]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:19:32 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Dec 06 06:19:32 compute-1 systemd[1]: session-10.scope: Consumed 1min 40.082s CPU time.
Dec 06 06:19:32 compute-1 systemd-logind[788]: Session 10 logged out. Waiting for processes to exit.
Dec 06 06:19:32 compute-1 systemd-logind[788]: Removed session 10.
Dec 06 06:19:37 compute-1 sshd-session[45031]: Accepted publickey for zuul from 192.168.122.30 port 48724 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:19:37 compute-1 systemd-logind[788]: New session 11 of user zuul.
Dec 06 06:19:37 compute-1 systemd[1]: Started Session 11 of User zuul.
Dec 06 06:19:37 compute-1 sshd-session[45031]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:19:38 compute-1 python3.9[45184]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:19:39 compute-1 sudo[45338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjyxdoibxelmvujtsadzlvteazbxakir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001978.9594026-74-223003350906366/AnsiballZ_getent.py'
Dec 06 06:19:39 compute-1 sudo[45338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:39 compute-1 python3.9[45340]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 06 06:19:39 compute-1 sudo[45338]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:40 compute-1 sudo[45491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baevbpoepehfjziqkkbmmjawaxwnggox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001979.7809515-98-75751024465884/AnsiballZ_group.py'
Dec 06 06:19:40 compute-1 sudo[45491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:40 compute-1 python3.9[45493]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 06:19:40 compute-1 groupadd[45494]: group added to /etc/group: name=openvswitch, GID=42476
Dec 06 06:19:40 compute-1 groupadd[45494]: group added to /etc/gshadow: name=openvswitch
Dec 06 06:19:40 compute-1 groupadd[45494]: new group: name=openvswitch, GID=42476
Dec 06 06:19:40 compute-1 sudo[45491]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:41 compute-1 sudo[45649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyexcjnaldmdpkdacbsdarekbbsotjzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001980.7509391-122-210746032213241/AnsiballZ_user.py'
Dec 06 06:19:41 compute-1 sudo[45649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:41 compute-1 python3.9[45651]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 06:19:41 compute-1 useradd[45653]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 06 06:19:41 compute-1 useradd[45653]: add 'openvswitch' to group 'hugetlbfs'
Dec 06 06:19:41 compute-1 useradd[45653]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 06 06:19:41 compute-1 sudo[45649]: pam_unix(sudo:session): session closed for user root
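
After a getent probe for an existing passwd entry, the play pins the openvswitch account to a fixed UID/GID (42476) and adds it to hugetlbfs (relevant when ovs-vswitchd uses hugepages, e.g. with DPDK). Reconstructed from the logged arguments:

  - name: Create the openvswitch group with a fixed GID
    ansible.builtin.group:
      name: openvswitch
      gid: 42476
      state: present

  - name: Create the openvswitch service account
    ansible.builtin.user:
      name: openvswitch
      uid: 42476
      group: openvswitch
      groups:
        - hugetlbfs
      comment: openvswitch user
      shell: /sbin/nologin
      state: present
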
Dec 06 06:19:42 compute-1 sudo[45809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndjznhowjfmgpbanwrpnpgjysxwbhuzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001982.0057926-152-74931503921946/AnsiballZ_setup.py'
Dec 06 06:19:42 compute-1 sudo[45809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:42 compute-1 python3.9[45811]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:19:42 compute-1 sudo[45809]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:43 compute-1 sudo[45893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pymgnwedtfjpwonastzjsrzlxtdoihzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001982.0057926-152-74931503921946/AnsiballZ_dnf.py'
Dec 06 06:19:43 compute-1 sudo[45893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:43 compute-1 python3.9[45895]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 06:19:45 compute-1 sudo[45893]: pam_unix(sudo:session): session closed for user root
Dec 06 06:19:46 compute-1 sudo[46056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fidqeagkhivbgitlbecuikorewbhgrqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765001986.5764115-194-192432497516126/AnsiballZ_dnf.py'
Dec 06 06:19:46 compute-1 sudo[46056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:19:47 compute-1 python3.9[46058]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:20:01 compute-1 kernel: SELinux:  Converting 2731 SID table entries...
Dec 06 06:20:01 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 06:20:01 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 06 06:20:01 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 06:20:01 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 06 06:20:01 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 06:20:01 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 06:20:01 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 06:20:01 compute-1 groupadd[46081]: group added to /etc/group: name=unbound, GID=993
Dec 06 06:20:01 compute-1 groupadd[46081]: group added to /etc/gshadow: name=unbound
Dec 06 06:20:01 compute-1 groupadd[46081]: new group: name=unbound, GID=993
Dec 06 06:20:01 compute-1 useradd[46088]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 06 06:20:01 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 06 06:20:01 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 06 06:20:03 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 06:20:03 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 06 06:20:03 compute-1 systemd[1]: Reloading.
Dec 06 06:20:03 compute-1 systemd-rc-local-generator[46586]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:20:03 compute-1 systemd-sysv-generator[46591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:20:03 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 06:20:05 compute-1 sudo[46056]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:07 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 06:20:07 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 06 06:20:07 compute-1 systemd[1]: run-r93552a818f464ec583c7658cae5f55dc.service: Deactivated successfully.
Dec 06 06:20:07 compute-1 sudo[47155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvntrnnspgouxqvjuzlkkpmtyuejzzhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002007.1839085-218-12182879237758/AnsiballZ_systemd.py'
Dec 06 06:20:07 compute-1 sudo[47155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:08 compute-1 python3.9[47157]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:20:08 compute-1 systemd[1]: Reloading.
Dec 06 06:20:08 compute-1 systemd-rc-local-generator[47185]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:20:08 compute-1 systemd-sysv-generator[47188]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:20:08 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Dec 06 06:20:08 compute-1 chown[47198]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 06 06:20:08 compute-1 ovs-ctl[47203]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 06 06:20:08 compute-1 ovs-ctl[47203]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 06 06:20:08 compute-1 ovs-ctl[47203]: Starting ovsdb-server [  OK  ]
Dec 06 06:20:08 compute-1 ovs-vsctl[47253]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 06 06:20:08 compute-1 ovs-vsctl[47269]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"03fe054d-d727-4af3-9c5e-92e57505f242\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 06 06:20:08 compute-1 ovs-ctl[47203]: Configuring Open vSwitch system IDs [  OK  ]
Dec 06 06:20:08 compute-1 ovs-vsctl[47279]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Dec 06 06:20:08 compute-1 ovs-ctl[47203]: Enabling remote OVSDB managers [  OK  ]
Dec 06 06:20:08 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Dec 06 06:20:08 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 06 06:20:08 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 06 06:20:08 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 06 06:20:08 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Dec 06 06:20:08 compute-1 ovs-ctl[47324]: Inserting openvswitch module [  OK  ]
Dec 06 06:20:08 compute-1 ovs-ctl[47292]: Starting ovs-vswitchd [  OK  ]
Dec 06 06:20:08 compute-1 ovs-vsctl[47342]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Dec 06 06:20:08 compute-1 ovs-ctl[47292]: Enabling remote OVSDB managers [  OK  ]
Dec 06 06:20:08 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 06 06:20:08 compute-1 systemd[1]: Starting Open vSwitch...
Dec 06 06:20:08 compute-1 systemd[1]: Finished Open vSwitch.
Dec 06 06:20:09 compute-1 sudo[47155]: pam_unix(sudo:session): session closed for user root
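
Open vSwitch is installed in two dnf passes, download_only first and then the real install, and the service is enabled and started. The unit's first boot is visible verbatim: ovs-ctl creates an empty /etc/openvswitch/conf.db, starts ovsdb-server, seeds the system IDs, loads the openvswitch kernel module, and brings up ovs-vswitchd. The chown warning about /run/openvswitch appears harmless here; the directory simply does not exist yet and the unit continues. A sketch of the controlling tasks:

  - name: Pre-fetch the openvswitch package
    ansible.builtin.dnf:
      name: openvswitch
      download_only: true

  - name: Install openvswitch
    ansible.builtin.dnf:
      name: openvswitch
      state: present

  - name: Enable and start Open vSwitch
    ansible.builtin.systemd:
      name: openvswitch.service
      enabled: true
      masked: false
      state: started
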
Dec 06 06:20:09 compute-1 python3.9[47493]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:20:10 compute-1 sudo[47643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skucfabpwpizvvjskrudjyffwozslqox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002010.09693-272-180436822814613/AnsiballZ_sefcontext.py'
Dec 06 06:20:10 compute-1 sudo[47643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:10 compute-1 python3.9[47645]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 06 06:20:12 compute-1 kernel: SELinux:  Converting 2745 SID table entries...
Dec 06 06:20:12 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 06:20:12 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 06 06:20:12 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 06:20:12 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 06 06:20:12 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 06:20:12 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 06:20:12 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 06:20:12 compute-1 sudo[47643]: pam_unix(sudo:session): session closed for user root
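
The sefcontext task registers a persistent file-context rule mapping /var/lib/edpm-config(/.*)? to container_file_t; with reload=True the policy store is committed, which is what the second burst of kernel "SELinux: Converting ... SID table entries" lines reflects. The equivalent task, straight from the logged arguments:

  - name: Label /var/lib/edpm-config for container access
    community.general.sefcontext:
      target: '/var/lib/edpm-config(/.*)?'
      setype: container_file_t
      selevel: s0
      state: present
      reload: true
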
Dec 06 06:20:13 compute-1 python3.9[47801]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:20:14 compute-1 sudo[47957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcstnutydbdhxzaooqpiontwlmeyudgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002014.05122-326-72476967410526/AnsiballZ_dnf.py'
Dec 06 06:20:14 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 06 06:20:14 compute-1 sudo[47957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:14 compute-1 python3.9[47959]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:20:15 compute-1 sudo[47957]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:16 compute-1 sudo[48110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enwukuhrqmmhyucpsbnlcyjwatvevkse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002016.1853669-350-191959740903504/AnsiballZ_command.py'
Dec 06 06:20:16 compute-1 sudo[48110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:16 compute-1 python3.9[48112]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:20:17 compute-1 sudo[48110]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:18 compute-1 sudo[48397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbsfgcjsfgbyzqzjdmuxrgocohboqhhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002017.7199626-374-240614568183300/AnsiballZ_file.py'
Dec 06 06:20:18 compute-1 sudo[48397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:18 compute-1 python3.9[48399]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 06:20:18 compute-1 sudo[48397]: pam_unix(sudo:session): session closed for user root
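
Next comes the base package set for an EDPM node, an rpm -V pass over the same list, and creation of the /var/lib/edpm-config state directory with the context registered above. rpm -V exits non-zero whenever any file deviates from the package metadata, and the journal does not show how the play treats that return code, so the failed_when below is an assumption:

  - name: Install the EDPM host dependencies
    ansible.builtin.dnf:
      name:
        - driverctl
        - lvm2
        - crudini
        - jq
        - nftables
        - NetworkManager
        - openstack-selinux
        - python3-libselinux
        - python3-pyyaml
        - rsync
        - tmpwatch
        - sysstat
        - iproute-tc
        - ksmtuned
        - systemd-container
        - crypto-policies-scripts
        - grubby
        - sos
      state: present

  - name: Verify the packages against the rpm database
    ansible.builtin.command: >-
      rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux
      python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned
      systemd-container crypto-policies-scripts grubby sos
    register: rpm_verify     # hypothetical name
    failed_when: false       # assumed handling

  - name: Create the edpm-config state directory with the container label
    ansible.builtin.file:
      path: /var/lib/edpm-config
      state: directory
      mode: '0750'
      setype: container_file_t
      selevel: s0
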
Dec 06 06:20:19 compute-1 python3.9[48549]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:20:19 compute-1 sudo[48701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znoyvajjzzhxvoajxslhmpuiazifntxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002019.3798075-422-18914195526302/AnsiballZ_dnf.py'
Dec 06 06:20:19 compute-1 sudo[48701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:19 compute-1 python3.9[48703]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:20:23 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 06:20:23 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 06 06:20:23 compute-1 systemd[1]: Reloading.
Dec 06 06:20:23 compute-1 systemd-rc-local-generator[48741]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:20:23 compute-1 systemd-sysv-generator[48744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:20:23 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 06:20:23 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 06:20:23 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 06 06:20:23 compute-1 systemd[1]: run-r8f2c90ca49174560b8e8d0e905937629.service: Deactivated successfully.
Dec 06 06:20:23 compute-1 sudo[48701]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:39 compute-1 sudo[49017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmxpvwvywnkxpjrkaiahbqadlspqeish ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002039.446891-446-209982238056224/AnsiballZ_systemd.py'
Dec 06 06:20:39 compute-1 sudo[49017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:40 compute-1 python3.9[49019]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:20:40 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 06 06:20:40 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Dec 06 06:20:40 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Dec 06 06:20:40 compute-1 systemd[1]: Stopping Network Manager...
Dec 06 06:20:40 compute-1 NetworkManager[7226]: <info>  [1765002040.1270] caught SIGTERM, shutting down normally.
Dec 06 06:20:40 compute-1 NetworkManager[7226]: <info>  [1765002040.1286] dhcp4 (eth0): canceled DHCP transaction
Dec 06 06:20:40 compute-1 NetworkManager[7226]: <info>  [1765002040.1286] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 06:20:40 compute-1 NetworkManager[7226]: <info>  [1765002040.1287] dhcp4 (eth0): state changed no lease
Dec 06 06:20:40 compute-1 NetworkManager[7226]: <info>  [1765002040.1288] manager: NetworkManager state is now CONNECTED_SITE
Dec 06 06:20:40 compute-1 NetworkManager[7226]: <info>  [1765002040.1341] exiting (success)
Dec 06 06:20:40 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 06:20:40 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 06:20:40 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 06 06:20:40 compute-1 systemd[1]: Stopped Network Manager.
Dec 06 06:20:40 compute-1 systemd[1]: NetworkManager.service: Consumed 15.369s CPU time, 4.1M memory peak, read 0B from disk, written 38.0K to disk.
Dec 06 06:20:40 compute-1 systemd[1]: Starting Network Manager...
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.2051] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:51b7fc77-de53-449b-a623-3e379c4f8c52)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.2052] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.2119] manager[0x564c5ae16090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 06 06:20:40 compute-1 systemd[1]: Starting Hostname Service...
Dec 06 06:20:40 compute-1 systemd[1]: Started Hostname Service.
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.2983] hostname: hostname: using hostnamed
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.2983] hostname: static hostname changed from (none) to "compute-1"
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.2989] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.2993] manager[0x564c5ae16090]: rfkill: Wi-Fi hardware radio set enabled
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.2993] manager[0x564c5ae16090]: rfkill: WWAN hardware radio set enabled
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3013] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3022] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3022] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3023] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3023] manager: Networking is enabled by state file
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3025] settings: Loaded settings plugin: keyfile (internal)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3028] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3051] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3059] dhcp: init: Using DHCP client 'internal'
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3061] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3065] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3070] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3078] device (lo): Activation: starting connection 'lo' (db243730-cbc6-4646-8e89-af902689878a)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3084] device (eth0): carrier: link connected
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3090] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3094] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3095] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3101] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3108] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3113] device (eth1): carrier: link connected
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3117] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3121] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (a71610cf-c796-5ff3-8929-fe296ca655c2) (indicated)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3121] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3126] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3131] device (eth1): Activation: starting connection 'ci-private-network' (a71610cf-c796-5ff3-8929-fe296ca655c2)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3137] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 06 06:20:40 compute-1 systemd[1]: Started Network Manager.
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3145] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3149] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3150] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3153] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3156] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3158] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3160] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3162] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3167] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3169] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3176] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3196] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3204] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3207] dhcp4 (eth0): state changed new lease, address=38.102.83.204
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3210] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3216] device (lo): Activation: successful, device activated.
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3238] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 06 06:20:40 compute-1 systemd[1]: Starting Network Manager Wait Online...
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3299] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3303] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3304] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3307] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3310] device (eth1): Activation: successful, device activated.
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3317] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3319] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3321] manager: NetworkManager state is now CONNECTED_SITE
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3323] device (eth0): Activation: successful, device activated.
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3326] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 06 06:20:40 compute-1 NetworkManager[49031]: <info>  [1765002040.3328] manager: startup complete
Dec 06 06:20:40 compute-1 sudo[49017]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:40 compute-1 systemd[1]: Finished Network Manager Wait Online.
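
Installing NetworkManager-ovs and restarting NetworkManager lets the daemon pick up the OVS device plugin, and the restart log confirms it: NMOvsFactory is loaded alongside the team plugin, eth0 re-acquires its DHCP lease (38.102.83.204), and both NICs are re-assumed without the controlling SSH session dropping. The controlling tasks are simply:

  - name: Install the NetworkManager OVS plugin
    ansible.builtin.dnf:
      name: NetworkManager-ovs
      state: present

  - name: Restart NetworkManager so it loads the plugin
    ansible.builtin.systemd:
      name: NetworkManager
      state: restarted
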
Dec 06 06:20:41 compute-1 sudo[49243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfcdsharlgpwywmscfpolauhcmdneaou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002040.7642014-470-252342361293783/AnsiballZ_dnf.py'
Dec 06 06:20:41 compute-1 sudo[49243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:41 compute-1 python3.9[49245]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:20:45 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 06:20:45 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 06 06:20:45 compute-1 systemd[1]: Reloading.
Dec 06 06:20:45 compute-1 systemd-sysv-generator[49301]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:20:45 compute-1 systemd-rc-local-generator[49298]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:20:45 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 06:20:46 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 06:20:46 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 06 06:20:46 compute-1 systemd[1]: run-rdc0de7df1df7493bb8cd10ffb82fe03d.service: Deactivated successfully.
Dec 06 06:20:47 compute-1 sudo[49243]: pam_unix(sudo:session): session closed for user root
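
os-net-config, the tool that renders and applies the node's network layout from a YAML description, is installed next; everything from here to the end of the section prepares for running it. The install is a single dnf task:

  - name: Install os-net-config
    ansible.builtin.dnf:
      name: os-net-config
      state: present
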
Dec 06 06:20:50 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 06:20:51 compute-1 sudo[49701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blolzlhccphbnlytkgethvmhkglnvjnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002050.753553-506-257267877954751/AnsiballZ_stat.py'
Dec 06 06:20:51 compute-1 sudo[49701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:51 compute-1 python3.9[49703]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:20:51 compute-1 sudo[49701]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:51 compute-1 sudo[49853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iatwxenkiodvnpfhqglhogxoogqsqzyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002051.4877336-533-196559573590275/AnsiballZ_ini_file.py'
Dec 06 06:20:51 compute-1 sudo[49853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:52 compute-1 python3.9[49855]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:20:52 compute-1 sudo[49853]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:52 compute-1 sudo[50007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwpnvnotrivdrwqyustakkrayypxjntp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002052.5146632-563-114429561698477/AnsiballZ_ini_file.py'
Dec 06 06:20:52 compute-1 sudo[50007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:52 compute-1 python3.9[50009]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:20:53 compute-1 sudo[50007]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:53 compute-1 sudo[50159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdriszamjkuepuucllgqbvauqkelzasq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002053.1615272-563-251382157675999/AnsiballZ_ini_file.py'
Dec 06 06:20:53 compute-1 sudo[50159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:53 compute-1 python3.9[50161]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:20:53 compute-1 sudo[50159]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:54 compute-1 sudo[50311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkbuuazudhmytlxlxaqpvldvkxnuubhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002053.7875583-608-195046337413995/AnsiballZ_ini_file.py'
Dec 06 06:20:54 compute-1 sudo[50311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:54 compute-1 python3.9[50313]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:20:54 compute-1 sudo[50311]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:54 compute-1 sudo[50463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcoruhthuttelwqypsycoktxfybusozz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002054.3710163-608-121487782334942/AnsiballZ_ini_file.py'
Dec 06 06:20:54 compute-1 sudo[50463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:54 compute-1 python3.9[50465]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:20:54 compute-1 sudo[50463]: pam_unix(sudo:session): session closed for user root
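
The five ini_file invocations adjust NetworkManager for os-net-config: no-auto-default=* is set in the [main] section so NM stops generating auto-default wired connections, and any dns=none / rc-manager=unmanaged overrides are stripped from both NetworkManager.conf and cloud-init's drop-in so NM resumes managing resolv.conf. The journal shows five separate tasks; the loop below is just a compact rendering of the four removals:

  - name: Keep NetworkManager from auto-creating default connections
    community.general.ini_file:
      path: /etc/NetworkManager/NetworkManager.conf
      section: main
      option: no-auto-default
      value: '*'
      no_extra_spaces: true
      backup: true
      mode: '0644'

  - name: Drop dns/rc-manager overrides so NM manages resolv.conf
    community.general.ini_file:
      path: "{{ item.path }}"
      section: main
      option: "{{ item.option }}"
      value: "{{ item.value }}"
      state: absent
      backup: true
      mode: '0644'
    loop:
      - { path: /etc/NetworkManager/NetworkManager.conf, option: dns, value: none }
      - { path: /etc/NetworkManager/conf.d/99-cloud-init.conf, option: dns, value: none }
      - { path: /etc/NetworkManager/NetworkManager.conf, option: rc-manager, value: unmanaged }
      - { path: /etc/NetworkManager/conf.d/99-cloud-init.conf, option: rc-manager, value: unmanaged }
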
Dec 06 06:20:55 compute-1 sudo[50615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkgdisqxowlwxpettsshkppdukrelvmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002055.0060022-653-185388041982421/AnsiballZ_stat.py'
Dec 06 06:20:55 compute-1 sudo[50615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:55 compute-1 python3.9[50617]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:20:55 compute-1 sudo[50615]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:55 compute-1 sudo[50738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgvfmhwotkpugicpqppmqmnvmgajmhsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002055.0060022-653-185388041982421/AnsiballZ_copy.py'
Dec 06 06:20:55 compute-1 sudo[50738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:56 compute-1 python3.9[50740]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002055.0060022-653-185388041982421/.source _original_basename=.t03ehky1 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:20:56 compute-1 sudo[50738]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:56 compute-1 sudo[50890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjcikuyrgkrlzrznryanijilobixrjyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002056.3270812-698-235025551797225/AnsiballZ_file.py'
Dec 06 06:20:56 compute-1 sudo[50890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:56 compute-1 python3.9[50892]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:20:56 compute-1 sudo[50890]: pam_unix(sudo:session): session closed for user root
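
Two small filesystem tasks follow: a dhclient enter-hook is installed (the journal records only its checksum, f6278a40..., so the src below is a placeholder) and the /etc/os-net-config directory is created to receive the rendered configuration:

  - name: Install the dhclient enter hook (src is hypothetical; content not logged)
    ansible.builtin.copy:
      dest: /etc/dhcp/dhclient-enter-hooks
      src: dhclient-enter-hooks
      mode: '0755'

  - name: Ensure the os-net-config configuration directory exists
    ansible.builtin.file:
      path: /etc/os-net-config
      state: directory
      mode: '0755'
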
Dec 06 06:20:57 compute-1 sudo[51042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaeyvtwrrbrpwojqqbisyfxowpmvfdhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002057.0427418-722-235392280793585/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 06 06:20:57 compute-1 sudo[51042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:57 compute-1 python3.9[51044]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 06 06:20:57 compute-1 sudo[51042]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:58 compute-1 sudo[51194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynqdwwkmcpigslgwwzgotkzastqmlgkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002057.9024484-749-143848543474845/AnsiballZ_file.py'
Dec 06 06:20:58 compute-1 sudo[51194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:58 compute-1 python3.9[51196]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:20:58 compute-1 sudo[51194]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:59 compute-1 sudo[51346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agvqqbpzhvudxweyobptsxomccrlitej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002058.7374966-779-231848325745176/AnsiballZ_stat.py'
Dec 06 06:20:59 compute-1 sudo[51346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:20:59 compute-1 sudo[51346]: pam_unix(sudo:session): session closed for user root
Dec 06 06:20:59 compute-1 sudo[51469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cngzfntbyunttyviayqtdhdbwqegaruv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002058.7374966-779-231848325745176/AnsiballZ_copy.py'
Dec 06 06:20:59 compute-1 sudo[51469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:00 compute-1 sudo[51469]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:00 compute-1 sudo[51621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eunsftbtkrajthghdktawuugsbfgendz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002060.265958-824-165161418473318/AnsiballZ_slurp.py'
Dec 06 06:21:00 compute-1 sudo[51621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:00 compute-1 python3.9[51623]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 06 06:21:00 compute-1 sudo[51621]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:01 compute-1 sudo[51796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilfswdhgmcsnuxdldpsrgjbtqnvejwtb ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002061.182966-851-148467521885036/async_wrapper.py j470571262334 300 /home/zuul/.ansible/tmp/ansible-tmp-1765002061.182966-851-148467521885036/AnsiballZ_edpm_os_net_config.py _'
Dec 06 06:21:01 compute-1 sudo[51796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:02 compute-1 ansible-async_wrapper.py[51798]: Invoked with j470571262334 300 /home/zuul/.ansible/tmp/ansible-tmp-1765002061.182966-851-148467521885036/AnsiballZ_edpm_os_net_config.py _
Dec 06 06:21:02 compute-1 ansible-async_wrapper.py[51801]: Starting module and watcher
Dec 06 06:21:02 compute-1 ansible-async_wrapper.py[51801]: Start watching 51802 (300)
Dec 06 06:21:02 compute-1 ansible-async_wrapper.py[51802]: Start module (51802)
Dec 06 06:21:02 compute-1 ansible-async_wrapper.py[51798]: Return async_wrapper task started.
Dec 06 06:21:02 compute-1 sudo[51796]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:02 compute-1 python3.9[51803]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 06 06:21:02 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 06 06:21:02 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 06 06:21:02 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 06 06:21:02 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 06 06:21:02 compute-1 kernel: cfg80211: failed to load regulatory.db
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.1340] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.1365] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2028] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2029] audit: op="connection-add" uuid="db91256b-1d5a-4f68-b926-4a6b5a4c0288" name="br-ex-br" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2043] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2044] audit: op="connection-add" uuid="23043eff-f8f9-452e-a689-a7c6c0ffd5dd" name="br-ex-port" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2055] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2056] audit: op="connection-add" uuid="18217bbd-c7b5-4b35-8932-e24a75257c56" name="eth1-port" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2067] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2068] audit: op="connection-add" uuid="ed599d17-b0ab-4266-8705-30146d7bedee" name="vlan20-port" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2080] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2081] audit: op="connection-add" uuid="f96dd994-3677-4d73-969f-f65862f47597" name="vlan21-port" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2090] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2092] audit: op="connection-add" uuid="ecd643ea-66e9-450d-aa07-663da260e32c" name="vlan22-port" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2103] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2104] audit: op="connection-add" uuid="6a60cd49-ecd5-41cb-9cee-8563f7292a45" name="vlan23-port" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2123] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2139] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.2141] audit: op="connection-add" uuid="3a2da376-ebee-4e5d-8f22-ad95c5bda05b" name="br-ex-if" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9560] audit: op="connection-update" uuid="a71610cf-c796-5ff3-8929-fe296ca655c2" name="ci-private-network" args="ovs-interface.type,ovs-external-ids.data,connection.timestamp,connection.master,connection.port-type,connection.controller,connection.slave-type,ipv6.addr-gen-mode,ipv6.routing-rules,ipv6.dns,ipv6.addresses,ipv6.routes,ipv6.method,ipv4.routing-rules,ipv4.never-default,ipv4.dns,ipv4.addresses,ipv4.routes,ipv4.method" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9606] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9609] audit: op="connection-add" uuid="c172978c-09cd-48de-9442-7fcc95359223" name="vlan20-if" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9642] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9646] audit: op="connection-add" uuid="bd65283a-4b3b-45de-9035-e8ab183ede27" name="vlan21-if" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9674] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9677] audit: op="connection-add" uuid="6689592b-cbec-4749-a565-51831f47d91a" name="vlan22-if" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9707] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9709] audit: op="connection-add" uuid="9f2de67b-80e9-419b-b743-1ea1e934625f" name="vlan23-if" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9730] audit: op="connection-delete" uuid="8f9bc9fc-8e7d-33b0-a6ce-5a8609487289" name="Wired connection 1" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9750] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9769] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9782] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (db91256b-1d5a-4f68-b926-4a6b5a4c0288)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9784] audit: op="connection-activate" uuid="db91256b-1d5a-4f68-b926-4a6b5a4c0288" name="br-ex-br" pid=51804 uid=0 result="success"
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9788] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9805] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9814] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (23043eff-f8f9-452e-a689-a7c6c0ffd5dd)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9818] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9831] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9840] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (18217bbd-c7b5-4b35-8932-e24a75257c56)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9844] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9859] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9869] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (ed599d17-b0ab-4266-8705-30146d7bedee)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9873] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9889] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9898] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (f96dd994-3677-4d73-969f-f65862f47597)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9901] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9917] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9925] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ecd643ea-66e9-450d-aa07-663da260e32c)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9929] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9944] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9953] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (6a60cd49-ecd5-41cb-9cee-8563f7292a45)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9955] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9961] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9965] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9978] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9988] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9998] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (3a2da376-ebee-4e5d-8f22-ad95c5bda05b)
Dec 06 06:21:04 compute-1 NetworkManager[49031]: <info>  [1765002064.9999] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0007] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0011] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0013] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0017] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0039] device (eth1): disconnecting for new activation request.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0041] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0048] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0052] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0055] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0061] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0071] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0080] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c172978c-09cd-48de-9442-7fcc95359223)
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0083] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0090] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0095] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0097] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0103] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0113] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0123] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (bd65283a-4b3b-45de-9035-e8ab183ede27)
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0125] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0132] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0136] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0139] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0146] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0156] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0166] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (6689592b-cbec-4749-a565-51831f47d91a)
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0167] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0174] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0178] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0181] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0187] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0199] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0210] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (9f2de67b-80e9-419b-b743-1ea1e934625f)
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0212] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0218] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0222] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0225] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0229] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0256] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method" pid=51804 uid=0 result="success"
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0259] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0267] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0271] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0285] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 kernel: ovs-system: entered promiscuous mode
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0317] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0323] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0327] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0329] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0334] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 systemd-udevd[51808]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0339] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0344] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0346] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0351] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0356] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0360] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0361] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0370] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0375] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0380] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0383] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0389] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0394] dhcp4 (eth0): canceled DHCP transaction
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0394] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0394] dhcp4 (eth0): state changed no lease
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.0396] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1165] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1168] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51804 uid=0 result="fail" reason="Device is not activated"
Dec 06 06:21:05 compute-1 kernel: Timeout policy base is empty
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1177] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1263] device (eth1): Activation: starting connection 'ci-private-network' (a71610cf-c796-5ff3-8929-fe296ca655c2)
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1268] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1269] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1270] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1272] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1274] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1276] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1277] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1280] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1289] dhcp4 (eth0): state changed new lease, address=38.102.83.204
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1296] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1303] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1310] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1313] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1317] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1323] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1327] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1330] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1335] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1338] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1343] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1346] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1350] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1353] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1357] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1361] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1365] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1368] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.1372] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 kernel: br-ex: entered promiscuous mode
Dec 06 06:21:05 compute-1 kernel: vlan22: entered promiscuous mode
Dec 06 06:21:05 compute-1 systemd-udevd[51810]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:21:05 compute-1 kernel: vlan20: entered promiscuous mode
Dec 06 06:21:05 compute-1 kernel: vlan21: entered promiscuous mode
Dec 06 06:21:05 compute-1 kernel: vlan23: entered promiscuous mode
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5697] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5701] device (eth1): released from controller device eth1
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5707] device (eth1): disconnecting for new activation request.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5708] audit: op="connection-activate" uuid="a71610cf-c796-5ff3-8929-fe296ca655c2" name="ci-private-network" pid=51804 uid=0 result="success"
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5732] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5741] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5747] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5753] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5760] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5795] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51804 uid=0 result="success"
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5796] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5803] device (eth1): Activation: starting connection 'ci-private-network' (a71610cf-c796-5ff3-8929-fe296ca655c2)
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5833] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5836] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5846] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5852] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5860] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5867] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5875] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5888] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5896] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5897] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5902] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5912] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 06 06:21:05 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.5915] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6793] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6796] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6798] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6799] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6801] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6815] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6821] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6827] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6832] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6838] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6842] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6848] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6852] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6858] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.6863] device (eth1): Activation: successful, device activated.
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.8452] checkpoint[0x564c5adec950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 06 06:21:05 compute-1 NetworkManager[49031]: <info>  [1765002065.8454] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51804 uid=0 result="success"
Dec 06 06:21:06 compute-1 NetworkManager[49031]: <info>  [1765002066.1691] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51804 uid=0 result="success"
Dec 06 06:21:06 compute-1 NetworkManager[49031]: <info>  [1765002066.1704] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51804 uid=0 result="success"
Dec 06 06:21:06 compute-1 NetworkManager[49031]: <info>  [1765002066.3742] audit: op="networking-control" arg="global-dns-configuration" pid=51804 uid=0 result="success"
Dec 06 06:21:06 compute-1 NetworkManager[49031]: <info>  [1765002066.3779] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 06 06:21:06 compute-1 NetworkManager[49031]: <info>  [1765002066.3815] audit: op="networking-control" arg="global-dns-configuration" pid=51804 uid=0 result="success"
Dec 06 06:21:06 compute-1 NetworkManager[49031]: <info>  [1765002066.3845] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51804 uid=0 result="success"
Dec 06 06:21:06 compute-1 sudo[52171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tejwvrkjazwubwsmebnjqcrvhtpqinzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002066.1366918-851-192824266691483/AnsiballZ_async_status.py'
Dec 06 06:21:06 compute-1 sudo[52171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:06 compute-1 NetworkManager[49031]: <info>  [1765002066.5459] checkpoint[0x564c5adeca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 06 06:21:06 compute-1 NetworkManager[49031]: <info>  [1765002066.5466] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51804 uid=0 result="success"
Dec 06 06:21:06 compute-1 ansible-async_wrapper.py[51802]: Module complete (51802)
Dec 06 06:21:06 compute-1 python3.9[52173]: ansible-ansible.legacy.async_status Invoked with jid=j470571262334.51798 mode=status _async_dir=/root/.ansible_async
Dec 06 06:21:06 compute-1 sudo[52171]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:06 compute-1 sudo[52270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmopfnyudtcwmytazzffymjcbizqrkdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002066.1366918-851-192824266691483/AnsiballZ_async_status.py'
Dec 06 06:21:06 compute-1 sudo[52270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:07 compute-1 ansible-async_wrapper.py[51801]: Done in kid B.
Dec 06 06:21:07 compute-1 python3.9[52272]: ansible-ansible.legacy.async_status Invoked with jid=j470571262334.51798 mode=cleanup _async_dir=/root/.ansible_async
Dec 06 06:21:07 compute-1 sudo[52270]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:10 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 06:21:10 compute-1 sudo[52426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sctpwoenoqikwubzgmwhueawdkijhlmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002069.9891145-927-70822702883375/AnsiballZ_stat.py'
Dec 06 06:21:10 compute-1 sudo[52426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:10 compute-1 python3.9[52428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:21:10 compute-1 sudo[52426]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:11 compute-1 sudo[52549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifohbypdsbffagqfjpgtjibxaysnrjch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002069.9891145-927-70822702883375/AnsiballZ_copy.py'
Dec 06 06:21:11 compute-1 sudo[52549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:11 compute-1 python3.9[52551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002069.9891145-927-70822702883375/.source.returncode _original_basename=.w_tndbi4 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:21:11 compute-1 sudo[52549]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:11 compute-1 sudo[52703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmjxzwwijjsdfrrhzsiezotlayakqqrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002071.5149474-975-200054921263090/AnsiballZ_stat.py'
Dec 06 06:21:11 compute-1 sudo[52703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:12 compute-1 python3.9[52705]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:21:12 compute-1 sudo[52703]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:12 compute-1 sudo[52826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnyeomezfiusxyhirgehxmsllxxatpsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002071.5149474-975-200054921263090/AnsiballZ_copy.py'
Dec 06 06:21:12 compute-1 sudo[52826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:12 compute-1 python3.9[52828]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002071.5149474-975-200054921263090/.source.cfg _original_basename=.prz9jhme follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:21:12 compute-1 sudo[52826]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:13 compute-1 sudo[52979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vemiymdyzqknyzuupyzvlqijmglmffmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002073.1072564-1020-94831323340041/AnsiballZ_systemd.py'
Dec 06 06:21:13 compute-1 sudo[52979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:13 compute-1 sshd-session[52662]: Received disconnect from 150.95.85.24 port 42858:11:  [preauth]
Dec 06 06:21:13 compute-1 sshd-session[52662]: Disconnected from authenticating user root 150.95.85.24 port 42858 [preauth]
Dec 06 06:21:13 compute-1 python3.9[52981]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:21:13 compute-1 systemd[1]: Reloading Network Manager...
Dec 06 06:21:13 compute-1 NetworkManager[49031]: <info>  [1765002073.7967] audit: op="reload" arg="0" pid=52985 uid=0 result="success"
Dec 06 06:21:13 compute-1 NetworkManager[49031]: <info>  [1765002073.7976] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 06 06:21:13 compute-1 systemd[1]: Reloaded Network Manager.
Dec 06 06:21:13 compute-1 sudo[52979]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:14 compute-1 sshd-session[45034]: Connection closed by 192.168.122.30 port 48724
Dec 06 06:21:14 compute-1 sshd-session[45031]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:21:14 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Dec 06 06:21:14 compute-1 systemd[1]: session-11.scope: Consumed 50.333s CPU time.
Dec 06 06:21:14 compute-1 systemd-logind[788]: Session 11 logged out. Waiting for processes to exit.
Dec 06 06:21:14 compute-1 systemd-logind[788]: Removed session 11.
Dec 06 06:21:19 compute-1 sshd-session[53016]: Accepted publickey for zuul from 192.168.122.30 port 56258 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:21:19 compute-1 systemd-logind[788]: New session 12 of user zuul.
Dec 06 06:21:19 compute-1 systemd[1]: Started Session 12 of User zuul.
Dec 06 06:21:19 compute-1 sshd-session[53016]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:21:20 compute-1 python3.9[53169]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:21:21 compute-1 python3.9[53323]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:21:23 compute-1 python3.9[53517]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:21:23 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 06:21:24 compute-1 sshd-session[53019]: Connection closed by 192.168.122.30 port 56258
Dec 06 06:21:24 compute-1 sshd-session[53016]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:21:24 compute-1 systemd-logind[788]: Session 12 logged out. Waiting for processes to exit.
Dec 06 06:21:24 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Dec 06 06:21:24 compute-1 systemd[1]: session-12.scope: Consumed 2.327s CPU time.
Dec 06 06:21:24 compute-1 systemd-logind[788]: Removed session 12.
Dec 06 06:21:29 compute-1 sshd-session[53547]: Accepted publickey for zuul from 192.168.122.30 port 41440 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:21:29 compute-1 systemd-logind[788]: New session 13 of user zuul.
Dec 06 06:21:29 compute-1 systemd[1]: Started Session 13 of User zuul.
Dec 06 06:21:29 compute-1 sshd-session[53547]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:21:30 compute-1 python3.9[53700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:21:31 compute-1 python3.9[53854]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:21:32 compute-1 sudo[54009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyugxfjodmjdjcymjcmoybcqlgcbffvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002091.5470054-86-64967967176024/AnsiballZ_setup.py'
Dec 06 06:21:32 compute-1 sudo[54009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:32 compute-1 python3.9[54011]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:21:32 compute-1 sudo[54009]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:32 compute-1 sudo[54093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nemqnwcaursluhtomciehqnznfhoypha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002091.5470054-86-64967967176024/AnsiballZ_dnf.py'
Dec 06 06:21:32 compute-1 sudo[54093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:33 compute-1 python3.9[54095]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:21:34 compute-1 sudo[54093]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:35 compute-1 sudo[54247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubviwgdhufsgplzkfjedmccvbuyocoty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002094.7965927-122-55402327134542/AnsiballZ_setup.py'
Dec 06 06:21:35 compute-1 sudo[54247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:35 compute-1 python3.9[54249]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:21:35 compute-1 sudo[54247]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:36 compute-1 sudo[54442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wputomftcorjwnbhlterwhpdqhalqdmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002096.1631548-155-19263322633208/AnsiballZ_file.py'
Dec 06 06:21:36 compute-1 sudo[54442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:36 compute-1 python3.9[54444]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:21:36 compute-1 sudo[54442]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:37 compute-1 sudo[54594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqlcdpchkazxtiwcikematemrqxniefy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002097.0306048-179-237744055977322/AnsiballZ_command.py'
Dec 06 06:21:37 compute-1 sudo[54594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:37 compute-1 python3.9[54596]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:21:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat4146759607-merged.mount: Deactivated successfully.
Dec 06 06:21:37 compute-1 podman[54597]: 2025-12-06 06:21:37.79509183 +0000 UTC m=+0.141461761 system refresh
Dec 06 06:21:37 compute-1 sudo[54594]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:38 compute-1 sudo[54757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owxopdgtjhovehnagfrqdgfjiokkhgwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002098.1547835-203-263207022558778/AnsiballZ_stat.py'
Dec 06 06:21:38 compute-1 sudo[54757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:38 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:21:38 compute-1 python3.9[54759]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:21:38 compute-1 sudo[54757]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:39 compute-1 sudo[54880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfadeqcyqdcnbpljwozoaoftlgokoadf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002098.1547835-203-263207022558778/AnsiballZ_copy.py'
Dec 06 06:21:39 compute-1 sudo[54880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:39 compute-1 python3.9[54882]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002098.1547835-203-263207022558778/.source.json follow=False _original_basename=podman_network_config.j2 checksum=5823f7454ceac3936f5594462678d83307e0e5b6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:21:39 compute-1 sudo[54880]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:39 compute-1 sudo[55032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrerkzwqtwurigbihbblsguvioqvgmnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002099.654031-248-25766752302193/AnsiballZ_stat.py'
Dec 06 06:21:39 compute-1 sudo[55032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:40 compute-1 python3.9[55034]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:21:40 compute-1 sudo[55032]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:40 compute-1 sudo[55155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqbzbnypvndqqxummtarwhnshilefowa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002099.654031-248-25766752302193/AnsiballZ_copy.py'
Dec 06 06:21:40 compute-1 sudo[55155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:40 compute-1 python3.9[55157]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765002099.654031-248-25766752302193/.source.conf follow=False _original_basename=registries.conf.j2 checksum=6a7a92c6689685dca24f397866405caa5861defb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:21:40 compute-1 sudo[55155]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:41 compute-1 sudo[55307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyognjjftjgwniwsnvojlpnwysxzyiqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002101.0040812-296-115932811829368/AnsiballZ_ini_file.py'
Dec 06 06:21:41 compute-1 sudo[55307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:41 compute-1 python3.9[55309]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:21:41 compute-1 sudo[55307]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:41 compute-1 sudo[55459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcstxfbhijaqetlgiuynvwhhlbacoxal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002101.7054496-296-154558874013221/AnsiballZ_ini_file.py'
Dec 06 06:21:41 compute-1 sudo[55459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:42 compute-1 python3.9[55461]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:21:42 compute-1 sudo[55459]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:42 compute-1 sudo[55611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqucllqbiuqbmiyutrubqltyugzecvdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002102.3015234-296-247015950051205/AnsiballZ_ini_file.py'
Dec 06 06:21:42 compute-1 sudo[55611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:42 compute-1 python3.9[55613]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:21:42 compute-1 sudo[55611]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:43 compute-1 sudo[55763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zclfevzmepsrxndruuyxadpiycntizeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002102.9143922-296-183628198072838/AnsiballZ_ini_file.py'
Dec 06 06:21:43 compute-1 sudo[55763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:43 compute-1 python3.9[55765]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:21:43 compute-1 sudo[55763]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:44 compute-1 sudo[55915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkmdwzbxbpiulkqbeldlcjytgwxfromx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002103.8444488-389-156373844409050/AnsiballZ_dnf.py'
Dec 06 06:21:44 compute-1 sudo[55915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:44 compute-1 python3.9[55917]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:21:45 compute-1 sudo[55915]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:46 compute-1 sudo[56068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhesttsglfyvoorcdzimefomnuzlrxxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002106.6005452-422-150544908719654/AnsiballZ_setup.py'
Dec 06 06:21:46 compute-1 sudo[56068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:47 compute-1 python3.9[56070]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:21:47 compute-1 sudo[56068]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:47 compute-1 sudo[56222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhifgxwyryppoxfsisnhjeyejnvvqkyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002107.511593-446-16620488933638/AnsiballZ_stat.py'
Dec 06 06:21:47 compute-1 sudo[56222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:47 compute-1 python3.9[56224]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:21:48 compute-1 sudo[56222]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:48 compute-1 sudo[56374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoepitgqeroboihlpfhsauulfhodlrma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002108.247913-473-55152513666619/AnsiballZ_stat.py'
Dec 06 06:21:48 compute-1 sudo[56374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:48 compute-1 python3.9[56376]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:21:48 compute-1 sudo[56374]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:50 compute-1 sudo[56526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qolcmuuupkwpvbfhnnffjgdcmrhfdukz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002109.7374802-503-205914844233987/AnsiballZ_command.py'
Dec 06 06:21:50 compute-1 sudo[56526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:50 compute-1 python3.9[56528]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:21:50 compute-1 sudo[56526]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:50 compute-1 sudo[56679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btwthpilzektmnyuewwmvhuwjvqkjkhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002110.5493453-533-12433503275818/AnsiballZ_service_facts.py'
Dec 06 06:21:50 compute-1 sudo[56679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:51 compute-1 python3.9[56681]: ansible-service_facts Invoked
Dec 06 06:21:51 compute-1 network[56698]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:21:51 compute-1 network[56699]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:21:51 compute-1 network[56700]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:21:54 compute-1 sudo[56679]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:55 compute-1 sudo[56983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyiuakmsgsjwtlfppwpcgofbgiqmzvdi ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765002114.877907-578-31554284181760/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765002114.877907-578-31554284181760/args'
Dec 06 06:21:55 compute-1 sudo[56983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:55 compute-1 sudo[56983]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:55 compute-1 sudo[57150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnpdbdjzcbeinujwgnedasquwlkualqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002115.6273887-611-252788237287317/AnsiballZ_dnf.py'
Dec 06 06:21:55 compute-1 sudo[57150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:56 compute-1 python3.9[57152]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:21:58 compute-1 sudo[57150]: pam_unix(sudo:session): session closed for user root
Dec 06 06:21:59 compute-1 sudo[57303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brkkhdydbuwcrjwyuqcgahlasdqggyzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002118.7186935-650-107425681853950/AnsiballZ_package_facts.py'
Dec 06 06:21:59 compute-1 sudo[57303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:21:59 compute-1 python3.9[57305]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 06 06:21:59 compute-1 sudo[57303]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:00 compute-1 sudo[57455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvymdnlzwwwxtbeiffipzxijvljarsgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002120.430123-680-139173177376071/AnsiballZ_stat.py'
Dec 06 06:22:00 compute-1 sudo[57455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:00 compute-1 python3.9[57457]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:00 compute-1 sudo[57455]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:01 compute-1 sudo[57580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdukinxaznotmyzbckrsszyopchwuwca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002120.430123-680-139173177376071/AnsiballZ_copy.py'
Dec 06 06:22:01 compute-1 sudo[57580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:01 compute-1 python3.9[57582]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002120.430123-680-139173177376071/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:01 compute-1 sudo[57580]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:02 compute-1 sudo[57734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rulzaopeckdrfpoadcrcbwzcginylrei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002121.853552-726-147176418537797/AnsiballZ_stat.py'
Dec 06 06:22:02 compute-1 sudo[57734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:02 compute-1 python3.9[57736]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:02 compute-1 sudo[57734]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:02 compute-1 sudo[57859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riwmjkfqdrymlpyhfvvenbeygocfvcyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002121.853552-726-147176418537797/AnsiballZ_copy.py'
Dec 06 06:22:02 compute-1 sudo[57859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:02 compute-1 python3.9[57861]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002121.853552-726-147176418537797/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:02 compute-1 sudo[57859]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:04 compute-1 sudo[58013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhoeyglnzodjtrkrrfijshocnwtdappi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002123.9599288-789-212279272751259/AnsiballZ_lineinfile.py'
Dec 06 06:22:04 compute-1 sudo[58013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:04 compute-1 python3.9[58015]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:04 compute-1 sudo[58013]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:06 compute-1 sudo[58167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sayskqtgnsjppamxnihnedxgtulihrje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002125.7149885-834-87725319855309/AnsiballZ_setup.py'
Dec 06 06:22:06 compute-1 sudo[58167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:06 compute-1 python3.9[58169]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:22:06 compute-1 sudo[58167]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:07 compute-1 sudo[58251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuiamrpdmtbkjlwjagfjbzzxmxdsuhcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002125.7149885-834-87725319855309/AnsiballZ_systemd.py'
Dec 06 06:22:07 compute-1 sudo[58251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:07 compute-1 python3.9[58253]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:22:07 compute-1 sudo[58251]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:08 compute-1 sudo[58405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uknzzcjwpxywsewxkijjwugzxjzcrhcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002128.3156042-881-279033770414116/AnsiballZ_setup.py'
Dec 06 06:22:08 compute-1 sudo[58405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:08 compute-1 python3.9[58407]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:22:09 compute-1 sudo[58405]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:09 compute-1 sudo[58489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzzgcezdgzdjmxhibcdorlrlokqhkctj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002128.3156042-881-279033770414116/AnsiballZ_systemd.py'
Dec 06 06:22:09 compute-1 sudo[58489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:09 compute-1 python3.9[58491]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:22:09 compute-1 chronyd[786]: chronyd exiting
Dec 06 06:22:09 compute-1 systemd[1]: Stopping NTP client/server...
Dec 06 06:22:09 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Dec 06 06:22:09 compute-1 systemd[1]: Stopped NTP client/server.
Dec 06 06:22:09 compute-1 systemd[1]: Starting NTP client/server...
Dec 06 06:22:09 compute-1 chronyd[58500]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 06 06:22:09 compute-1 chronyd[58500]: Frequency -26.415 +/- 0.253 ppm read from /var/lib/chrony/drift
Dec 06 06:22:09 compute-1 chronyd[58500]: Loaded seccomp filter (level 2)
Dec 06 06:22:09 compute-1 systemd[1]: Started NTP client/server.
Dec 06 06:22:09 compute-1 sudo[58489]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:10 compute-1 sshd-session[53550]: Connection closed by 192.168.122.30 port 41440
Dec 06 06:22:10 compute-1 sshd-session[53547]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:22:10 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Dec 06 06:22:10 compute-1 systemd[1]: session-13.scope: Consumed 24.650s CPU time.
Dec 06 06:22:10 compute-1 systemd-logind[788]: Session 13 logged out. Waiting for processes to exit.
Dec 06 06:22:10 compute-1 systemd-logind[788]: Removed session 13.
Dec 06 06:22:16 compute-1 sshd-session[58526]: Accepted publickey for zuul from 192.168.122.30 port 50350 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:22:16 compute-1 systemd-logind[788]: New session 14 of user zuul.
Dec 06 06:22:16 compute-1 systemd[1]: Started Session 14 of User zuul.
Dec 06 06:22:16 compute-1 sshd-session[58526]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:22:16 compute-1 sudo[58679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkphgrdkrfurrdquxlteyajadoifjmgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002136.2035022-32-38460592020234/AnsiballZ_file.py'
Dec 06 06:22:16 compute-1 sudo[58679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:16 compute-1 python3.9[58681]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:16 compute-1 sudo[58679]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:17 compute-1 sudo[58831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ricpjovoujxiwiggesrwglosreuqjobk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002137.1693616-68-130510717487245/AnsiballZ_stat.py'
Dec 06 06:22:17 compute-1 sudo[58831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:17 compute-1 python3.9[58833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:17 compute-1 sudo[58831]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:18 compute-1 sudo[58954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pubitqtofodtelllorpyqxgsgslbehib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002137.1693616-68-130510717487245/AnsiballZ_copy.py'
Dec 06 06:22:18 compute-1 sudo[58954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:18 compute-1 python3.9[58956]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002137.1693616-68-130510717487245/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:18 compute-1 sudo[58954]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:18 compute-1 sshd-session[58529]: Connection closed by 192.168.122.30 port 50350
Dec 06 06:22:18 compute-1 sshd-session[58526]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:22:18 compute-1 systemd-logind[788]: Session 14 logged out. Waiting for processes to exit.
Dec 06 06:22:18 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Dec 06 06:22:18 compute-1 systemd[1]: session-14.scope: Consumed 1.616s CPU time.
Dec 06 06:22:18 compute-1 systemd-logind[788]: Removed session 14.
Dec 06 06:22:25 compute-1 sshd-session[58981]: Accepted publickey for zuul from 192.168.122.30 port 53532 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:22:25 compute-1 systemd-logind[788]: New session 15 of user zuul.
Dec 06 06:22:25 compute-1 systemd[1]: Started Session 15 of User zuul.
Dec 06 06:22:25 compute-1 sshd-session[58981]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:22:26 compute-1 python3.9[59134]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:22:27 compute-1 sudo[59288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldcwofausbvmyrifmejplrobznvxgefn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002146.6342425-65-105930173326623/AnsiballZ_file.py'
Dec 06 06:22:27 compute-1 sudo[59288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:27 compute-1 python3.9[59290]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:27 compute-1 sudo[59288]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:27 compute-1 sudo[59463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfshmjlaynextnhnnaiqrdcvzmqibgva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002147.3971999-89-158681844448942/AnsiballZ_stat.py'
Dec 06 06:22:27 compute-1 sudo[59463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:28 compute-1 python3.9[59465]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:28 compute-1 sudo[59463]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:28 compute-1 sudo[59586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nspwhdnbjtqnwukmozrdlfxxqzucndyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002147.3971999-89-158681844448942/AnsiballZ_copy.py'
Dec 06 06:22:28 compute-1 sudo[59586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:28 compute-1 python3.9[59588]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765002147.3971999-89-158681844448942/.source.json _original_basename=.hk6stg04 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:28 compute-1 sudo[59586]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:29 compute-1 sudo[59738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pogjghhwffmdnfgxhplyckjdhwjdhlgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002149.3374555-158-30134235438153/AnsiballZ_stat.py'
Dec 06 06:22:29 compute-1 sudo[59738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:30 compute-1 python3.9[59740]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:30 compute-1 sudo[59738]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:30 compute-1 sudo[59861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfvyuxmsyhemaofefroqrpkhdtmuxpvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002149.3374555-158-30134235438153/AnsiballZ_copy.py'
Dec 06 06:22:30 compute-1 sudo[59861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:30 compute-1 python3.9[59863]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002149.3374555-158-30134235438153/.source _original_basename=.bsgd04al follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:30 compute-1 sudo[59861]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:31 compute-1 sudo[60013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuwcrudkspexinazgzcltngexukzzjkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002150.9353166-206-55394628002215/AnsiballZ_file.py'
Dec 06 06:22:31 compute-1 sudo[60013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:31 compute-1 python3.9[60015]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:22:31 compute-1 sudo[60013]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:31 compute-1 sudo[60165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbkmglwhfbpvpbcdipuecihztjmpabkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002151.6583753-230-87640287535149/AnsiballZ_stat.py'
Dec 06 06:22:31 compute-1 sudo[60165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:32 compute-1 python3.9[60167]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:32 compute-1 sudo[60165]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:32 compute-1 sudo[60288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojvjttlflcgnfjxgdhkddrkxslvirwrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002151.6583753-230-87640287535149/AnsiballZ_copy.py'
Dec 06 06:22:32 compute-1 sudo[60288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:32 compute-1 python3.9[60290]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765002151.6583753-230-87640287535149/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:22:32 compute-1 sudo[60288]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:33 compute-1 sudo[60440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aebsfhnkmdtfwjnkjproxsmmeoxjyvqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002152.7809305-230-23393530177756/AnsiballZ_stat.py'
Dec 06 06:22:33 compute-1 sudo[60440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:33 compute-1 python3.9[60442]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:33 compute-1 sudo[60440]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:33 compute-1 sudo[60563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nejqpvgvncsdpirlbbmpwwpsdhpybeet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002152.7809305-230-23393530177756/AnsiballZ_copy.py'
Dec 06 06:22:33 compute-1 sudo[60563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:33 compute-1 python3.9[60565]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765002152.7809305-230-23393530177756/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:22:33 compute-1 sudo[60563]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:34 compute-1 sudo[60715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sayknwqbckngoswyfcrfiiiljipgwqzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002154.067822-317-66597323131641/AnsiballZ_file.py'
Dec 06 06:22:34 compute-1 sudo[60715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:34 compute-1 python3.9[60717]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:34 compute-1 sudo[60715]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:35 compute-1 sudo[60867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffijyidmpihuoviujexhxaclgxlpoixp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002154.724771-341-168766063101555/AnsiballZ_stat.py'
Dec 06 06:22:35 compute-1 sudo[60867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:35 compute-1 python3.9[60869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:35 compute-1 sudo[60867]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:35 compute-1 sudo[60990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wobygsranteawcfyctuzxymmjtrxwogy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002154.724771-341-168766063101555/AnsiballZ_copy.py'
Dec 06 06:22:35 compute-1 sudo[60990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:35 compute-1 python3.9[60992]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002154.724771-341-168766063101555/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:35 compute-1 sudo[60990]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:36 compute-1 sudo[61142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzkpfqelnyoqwptzkbvgvhaeuoovzpfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002156.0315468-387-10355445464779/AnsiballZ_stat.py'
Dec 06 06:22:36 compute-1 sudo[61142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:36 compute-1 python3.9[61144]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:36 compute-1 sudo[61142]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:36 compute-1 sudo[61265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avmnjzrvcknivhlqyhnterwdyfikqhus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002156.0315468-387-10355445464779/AnsiballZ_copy.py'
Dec 06 06:22:36 compute-1 sudo[61265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:37 compute-1 python3.9[61267]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002156.0315468-387-10355445464779/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:37 compute-1 sudo[61265]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:38 compute-1 sudo[61417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avcghaltxugwdajxzmlspqyruhvyibjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002157.4465067-431-102505590340575/AnsiballZ_systemd.py'
Dec 06 06:22:38 compute-1 sudo[61417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:38 compute-1 python3.9[61419]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:22:38 compute-1 systemd[1]: Reloading.
Dec 06 06:22:38 compute-1 systemd-rc-local-generator[61447]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:22:38 compute-1 systemd-sysv-generator[61451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:22:38 compute-1 systemd[1]: Reloading.
Dec 06 06:22:38 compute-1 systemd-sysv-generator[61488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:22:38 compute-1 systemd-rc-local-generator[61485]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:22:38 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Dec 06 06:22:39 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Dec 06 06:22:39 compute-1 sudo[61417]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:39 compute-1 sudo[61645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivlmgogjteiarmjhchrmgmukiutondjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002159.2245793-455-217675644908303/AnsiballZ_stat.py'
Dec 06 06:22:39 compute-1 sudo[61645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:39 compute-1 python3.9[61647]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:39 compute-1 sudo[61645]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:40 compute-1 sudo[61768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwwfowwdmtruzbxbqhtshgkovptwztzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002159.2245793-455-217675644908303/AnsiballZ_copy.py'
Dec 06 06:22:40 compute-1 sudo[61768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:40 compute-1 python3.9[61770]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002159.2245793-455-217675644908303/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:40 compute-1 sudo[61768]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:40 compute-1 sudo[61920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnyywfjeawhpghluxkglttzoqhzotoat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002160.7221239-500-205621716830265/AnsiballZ_stat.py'
Dec 06 06:22:40 compute-1 sudo[61920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:41 compute-1 python3.9[61922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:41 compute-1 sudo[61920]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:41 compute-1 sudo[62043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdcpvqxglmcrltgnfyevjgnivahrapny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002160.7221239-500-205621716830265/AnsiballZ_copy.py'
Dec 06 06:22:41 compute-1 sudo[62043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:41 compute-1 python3.9[62045]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002160.7221239-500-205621716830265/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:41 compute-1 sudo[62043]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:42 compute-1 sudo[62195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjodsyvagizrvdirnlluhzbjlwygqfro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002162.110378-545-190476906923586/AnsiballZ_systemd.py'
Dec 06 06:22:42 compute-1 sudo[62195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:42 compute-1 python3.9[62197]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:22:42 compute-1 systemd[1]: Reloading.
Dec 06 06:22:42 compute-1 systemd-sysv-generator[62227]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:22:42 compute-1 systemd-rc-local-generator[62223]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:22:43 compute-1 systemd[1]: Reloading.
Dec 06 06:22:43 compute-1 systemd-sysv-generator[62268]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:22:43 compute-1 systemd-rc-local-generator[62265]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:22:44 compute-1 systemd[1]: Starting Create netns directory...
Dec 06 06:22:44 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 06:22:44 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 06:22:44 compute-1 systemd[1]: Finished Create netns directory.
Dec 06 06:22:44 compute-1 sudo[62195]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:44 compute-1 python3.9[62425]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:22:44 compute-1 network[62442]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:22:44 compute-1 network[62443]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:22:44 compute-1 network[62444]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:22:50 compute-1 sudo[62704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-togylaynmukcsfnifokpazounztzippp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002169.8440254-593-238882887901761/AnsiballZ_systemd.py'
Dec 06 06:22:50 compute-1 sudo[62704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:50 compute-1 python3.9[62706]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:22:50 compute-1 systemd[1]: Reloading.
Dec 06 06:22:50 compute-1 systemd-rc-local-generator[62735]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:22:50 compute-1 systemd-sysv-generator[62740]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:22:51 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 06 06:22:51 compute-1 iptables.init[62746]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 06 06:22:51 compute-1 iptables.init[62746]: iptables: Flushing firewall rules: [  OK  ]
Dec 06 06:22:51 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Dec 06 06:22:51 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 06 06:22:51 compute-1 sudo[62704]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:51 compute-1 sudo[62942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiwtntquzmqrtkdujzzyllvhtocscure ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002171.6457455-593-105125848262693/AnsiballZ_systemd.py'
Dec 06 06:22:51 compute-1 sudo[62942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:52 compute-1 python3.9[62944]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:22:52 compute-1 sudo[62942]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:52 compute-1 sudo[63096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdjyuzitxfbfdnefifewbtoludirbzba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002172.5797336-641-21273143863092/AnsiballZ_systemd.py'
Dec 06 06:22:52 compute-1 sudo[63096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:53 compute-1 python3.9[63098]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:22:53 compute-1 systemd[1]: Reloading.
Dec 06 06:22:53 compute-1 systemd-sysv-generator[63131]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:22:53 compute-1 systemd-rc-local-generator[63128]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:22:53 compute-1 systemd[1]: Starting Netfilter Tables...
Dec 06 06:22:53 compute-1 systemd[1]: Finished Netfilter Tables.
Dec 06 06:22:53 compute-1 sudo[63096]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:54 compute-1 sudo[63288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgwlhxkkgyofichheqtqxomxibtfdxtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002173.8665895-665-65083935354174/AnsiballZ_command.py'
Dec 06 06:22:54 compute-1 sudo[63288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:54 compute-1 python3.9[63290]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:22:54 compute-1 sudo[63288]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:55 compute-1 sudo[63441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfdxuvehnuujekhylwkgqqpijjpqelob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002175.125262-707-12750554759051/AnsiballZ_stat.py'
Dec 06 06:22:55 compute-1 sudo[63441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:55 compute-1 python3.9[63443]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:55 compute-1 sudo[63441]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:56 compute-1 sudo[63566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chyxlpmtkjtxxtnjhqfgeeqxvubblliw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002175.125262-707-12750554759051/AnsiballZ_copy.py'
Dec 06 06:22:56 compute-1 sudo[63566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:56 compute-1 python3.9[63568]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002175.125262-707-12750554759051/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:56 compute-1 sudo[63566]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:56 compute-1 sudo[63719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkbsewnsrgeaakybvrbiseuaofpjdwof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002176.48483-752-115803351191978/AnsiballZ_systemd.py'
Dec 06 06:22:56 compute-1 sudo[63719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:57 compute-1 python3.9[63721]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:22:57 compute-1 systemd[1]: Reloading OpenSSH server daemon...
Dec 06 06:22:57 compute-1 sshd[1011]: Received SIGHUP; restarting.
Dec 06 06:22:57 compute-1 systemd[1]: Reloaded OpenSSH server daemon.
Dec 06 06:22:57 compute-1 sshd[1011]: Server listening on 0.0.0.0 port 22.
Dec 06 06:22:57 compute-1 sshd[1011]: Server listening on :: port 22.
Dec 06 06:22:57 compute-1 sudo[63719]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:57 compute-1 sudo[63875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrsgbpldyidqpxakcrkclermpwjaoxhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002177.3436553-776-162395940009632/AnsiballZ_file.py'
Dec 06 06:22:57 compute-1 sudo[63875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:57 compute-1 python3.9[63877]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:57 compute-1 sudo[63875]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:58 compute-1 sudo[64027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvfzsocfuypvhluvcpcfxvawjbscmnct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002178.0677133-800-133594748650220/AnsiballZ_stat.py'
Dec 06 06:22:58 compute-1 sudo[64027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:58 compute-1 python3.9[64029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:22:58 compute-1 sudo[64027]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:58 compute-1 sudo[64150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtwjghmmhtymwdxcptvbewkxaxqvhftf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002178.0677133-800-133594748650220/AnsiballZ_copy.py'
Dec 06 06:22:58 compute-1 sudo[64150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:22:58 compute-1 python3.9[64152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002178.0677133-800-133594748650220/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:22:59 compute-1 sudo[64150]: pam_unix(sudo:session): session closed for user root
Dec 06 06:22:59 compute-1 sudo[64302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esbqtpyrroqqikvockbrynohsttlshvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002179.5254333-854-3177875655553/AnsiballZ_timezone.py'
Dec 06 06:22:59 compute-1 sudo[64302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:00 compute-1 python3.9[64304]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 06 06:23:00 compute-1 systemd[1]: Starting Time & Date Service...
Dec 06 06:23:00 compute-1 systemd[1]: Started Time & Date Service.
Dec 06 06:23:01 compute-1 sudo[64302]: pam_unix(sudo:session): session closed for user root
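[editor's note] The community.general.timezone task above talks to systemd-timedated, which is why the Time & Date Service starts right after it. The same effect from the CLI, assuming timedatectl is available:

    import subprocess

    # Set the system timezone to UTC via systemd-timedated.
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)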
Dec 06 06:23:01 compute-1 sudo[64458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emytulwpkuvzloclbfcxvarqhnoarede ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002181.510282-882-40880295742457/AnsiballZ_file.py'
Dec 06 06:23:01 compute-1 sudo[64458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:01 compute-1 python3.9[64460]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:02 compute-1 sudo[64458]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:02 compute-1 sudo[64610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdrlsmlhpsynpqzogtevpxkgckiqyfev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002182.185951-905-279039453730317/AnsiballZ_stat.py'
Dec 06 06:23:02 compute-1 sudo[64610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:02 compute-1 python3.9[64612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:23:02 compute-1 sudo[64610]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:02 compute-1 sudo[64733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esdchkkxukqsxemqcqbohpinnhxcizpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002182.185951-905-279039453730317/AnsiballZ_copy.py'
Dec 06 06:23:02 compute-1 sudo[64733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:03 compute-1 python3.9[64735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002182.185951-905-279039453730317/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:03 compute-1 sudo[64733]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:03 compute-1 sudo[64885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nipdoevqzsvfobepilkzkgeyqibjuxwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002183.3253686-950-219259516333086/AnsiballZ_stat.py'
Dec 06 06:23:03 compute-1 sudo[64885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:03 compute-1 python3.9[64887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:23:03 compute-1 sudo[64885]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:04 compute-1 sudo[65008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qinwmezxpvltayynucwgoyqtadtimpqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002183.3253686-950-219259516333086/AnsiballZ_copy.py'
Dec 06 06:23:04 compute-1 sudo[65008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:04 compute-1 python3.9[65010]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765002183.3253686-950-219259516333086/.source.yaml _original_basename=.brp8rto1 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:04 compute-1 sudo[65008]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:04 compute-1 sudo[65160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yigrruuhhovxvqkwmjxlxplirpkomebb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002184.5014637-995-248557401903687/AnsiballZ_stat.py'
Dec 06 06:23:04 compute-1 sudo[65160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:04 compute-1 python3.9[65162]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:23:04 compute-1 sudo[65160]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:05 compute-1 sudo[65283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvdppqbbxbqqdspzfzxadardcvicvfgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002184.5014637-995-248557401903687/AnsiballZ_copy.py'
Dec 06 06:23:05 compute-1 sudo[65283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:05 compute-1 python3.9[65285]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002184.5014637-995-248557401903687/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:05 compute-1 sudo[65283]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:05 compute-1 sudo[65435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntjdcqlkqnyndicdspwwcxtcdwyvhemp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002185.64274-1040-43090031394211/AnsiballZ_command.py'
Dec 06 06:23:05 compute-1 sudo[65435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:06 compute-1 python3.9[65437]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:23:06 compute-1 sudo[65435]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:06 compute-1 sudo[65588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyhybquvlqrojrzdckbkhygjjbxvkiva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002186.367316-1064-232098099393950/AnsiballZ_command.py'
Dec 06 06:23:06 compute-1 sudo[65588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:06 compute-1 python3.9[65590]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:23:06 compute-1 sudo[65588]: pam_unix(sudo:session): session closed for user root
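[editor's note] The "nft -j list ruleset" task above captures the live ruleset as JSON. One way to post-process that output, e.g. counting rules per table (schema as emitted by nft's JSON mode):

    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    doc = json.loads(out)

    # Top level is {"nftables": [...]}; rule objects carry their
    # family and table, so tally rules per (family, table) pair.
    tables = {}
    for item in doc.get("nftables", []):
        if "rule" in item:
            key = (item["rule"]["family"], item["rule"]["table"])
            tables[key] = tables.get(key, 0) + 1
    for (family, table), count in sorted(tables.items()):
        print(f"{family} {table}: {count} rules")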
Dec 06 06:23:07 compute-1 sudo[65741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yejqxkqfrhrkumuodwjfmnpkhrfszpif ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765002187.084751-1088-184022703788911/AnsiballZ_edpm_nftables_from_files.py'
Dec 06 06:23:07 compute-1 sudo[65741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:07 compute-1 python3[65743]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 06:23:07 compute-1 sudo[65741]: pam_unix(sudo:session): session closed for user root
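[editor's note] edpm_nftables_from_files is a custom module whose internals are not shown in this log; all the invocation reveals is src=/var/lib/edpm-config/firewall, the directory the earlier tasks populated with YAML snippets. A rough sketch of the general shape such a module can take — load every YAML file under src and merge the entries — where the per-file structure (a YAML list of rule dicts) is an assumption:

    import glob
    import os
    import yaml  # PyYAML

    def load_rules(src: str = "/var/lib/edpm-config/firewall") -> list:
        rules = []
        for path in sorted(glob.glob(os.path.join(src, "*.yaml"))):
            with open(path) as fh:
                data = yaml.safe_load(fh) or []
                rules.extend(data)  # assumes each file is a YAML list
        return rules

    print(len(load_rules()), "rule entries loaded")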
Dec 06 06:23:08 compute-1 sudo[65893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otnqwtcaigmnxqnauvcfrbkbwvgvcdol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002188.252949-1112-159185578611687/AnsiballZ_stat.py'
Dec 06 06:23:08 compute-1 sudo[65893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:08 compute-1 python3.9[65895]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:23:08 compute-1 sudo[65893]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:08 compute-1 sudo[66016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxscuqazqyxatuyxbdnepsqjbfllcgqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002188.252949-1112-159185578611687/AnsiballZ_copy.py'
Dec 06 06:23:08 compute-1 sudo[66016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:09 compute-1 python3.9[66018]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002188.252949-1112-159185578611687/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:09 compute-1 sudo[66016]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:10 compute-1 sudo[66168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftexykchpnhfsqyghrmvzbmcszwaxrbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002189.7685735-1157-93735994275716/AnsiballZ_stat.py'
Dec 06 06:23:10 compute-1 sudo[66168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:10 compute-1 python3.9[66170]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:23:10 compute-1 sudo[66168]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:10 compute-1 sudo[66291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppicselecvlbfgazktdbjzkywfxknpmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002189.7685735-1157-93735994275716/AnsiballZ_copy.py'
Dec 06 06:23:10 compute-1 sudo[66291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:10 compute-1 python3.9[66293]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002189.7685735-1157-93735994275716/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:10 compute-1 sudo[66291]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:11 compute-1 sudo[66443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyryekojiyxazrlsodmyctmwcrlwdeaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002191.050047-1202-198354305496255/AnsiballZ_stat.py'
Dec 06 06:23:11 compute-1 sudo[66443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:11 compute-1 python3.9[66445]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:23:11 compute-1 sudo[66443]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:11 compute-1 sudo[66566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnrmyigzmoittiuqxdeaayvmdketheva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002191.050047-1202-198354305496255/AnsiballZ_copy.py'
Dec 06 06:23:11 compute-1 sudo[66566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:12 compute-1 python3.9[66568]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002191.050047-1202-198354305496255/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:12 compute-1 sudo[66566]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:12 compute-1 sudo[66718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frvncftgjakhupfszgcnzcrtdioasidh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002192.3225954-1247-114595988177347/AnsiballZ_stat.py'
Dec 06 06:23:12 compute-1 sudo[66718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:12 compute-1 python3.9[66720]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:23:12 compute-1 sudo[66718]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:13 compute-1 sudo[66841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxymkmhpqbhsbnpocjsnucwqywenckbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002192.3225954-1247-114595988177347/AnsiballZ_copy.py'
Dec 06 06:23:13 compute-1 sudo[66841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:13 compute-1 python3.9[66843]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002192.3225954-1247-114595988177347/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:13 compute-1 sudo[66841]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:13 compute-1 sudo[66993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxplriwtayrproeebgpiilijvgbbpbcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002193.5112119-1292-10312202254839/AnsiballZ_stat.py'
Dec 06 06:23:13 compute-1 sudo[66993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:14 compute-1 python3.9[66995]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:23:14 compute-1 sudo[66993]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:14 compute-1 sudo[67116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrvisxpapoiwwjoxuxgrceortqpuyxmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002193.5112119-1292-10312202254839/AnsiballZ_copy.py'
Dec 06 06:23:14 compute-1 sudo[67116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:14 compute-1 python3.9[67118]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765002193.5112119-1292-10312202254839/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:14 compute-1 sudo[67116]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:15 compute-1 sudo[67268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmfycqgxqxetlyvkrxspxyxsvvxtnjvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002194.8114388-1337-165194798315725/AnsiballZ_file.py'
Dec 06 06:23:15 compute-1 sudo[67268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:15 compute-1 python3.9[67270]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:15 compute-1 sudo[67268]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:15 compute-1 sudo[67420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqqrahhigumplmxeuajrauwxxdwdlhsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002195.4892519-1361-247835302537189/AnsiballZ_command.py'
Dec 06 06:23:15 compute-1 sudo[67420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:15 compute-1 python3.9[67422]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:23:16 compute-1 sudo[67420]: pam_unix(sudo:session): session closed for user root
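[editor's note] The shell task above is a dry run: the five generated .nft files are concatenated and fed to "nft -c -f -", which parses them without applying anything. The equivalent in Python, file list taken verbatim from the command:

    import subprocess

    FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    # -c = check only; reading from "-" takes the ruleset on stdin.
    payload = "".join(open(f).read() for f in FILES)
    subprocess.run(["nft", "-c", "-f", "-"],
                   input=payload, text=True, check=True)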
Dec 06 06:23:16 compute-1 sudo[67579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiivdnysgqmjzmtachdxrcsppkcmbnrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002196.2391882-1385-119364510207171/AnsiballZ_blockinfile.py'
Dec 06 06:23:16 compute-1 sudo[67579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:16 compute-1 python3.9[67581]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:16 compute-1 sudo[67579]: pam_unix(sudo:session): session closed for user root
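[editor's note] The blockinfile task above inserts the four include lines between "# BEGIN ANSIBLE MANAGED BLOCK" / "# END ANSIBLE MANAGED BLOCK" markers in /etc/sysconfig/nftables.conf. A simplified sketch of that marker-based editing (the real module also validates with "nft -c -f %s", which this skips):

    import re

    BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
    END = "# END ANSIBLE MANAGED BLOCK"

    def set_block(path: str, block: str) -> None:
        text = open(path).read()
        new = f"{BEGIN}\n{block}\n{END}\n"
        # Replace an existing managed block in place, else append one.
        pattern = re.compile(re.escape(BEGIN) + r".*?" + re.escape(END) + r"\n?",
                             re.DOTALL)
        text = pattern.sub(new, text) if pattern.search(text) else text + new
        open(path, "w").write(text)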
Dec 06 06:23:17 compute-1 sudo[67732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feblezyxfkmwrpyuicksduizqonswdhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002197.140932-1412-79813984376126/AnsiballZ_file.py'
Dec 06 06:23:17 compute-1 sudo[67732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:17 compute-1 python3.9[67734]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:17 compute-1 sudo[67732]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:18 compute-1 sudo[67884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gawssojuhnzwitghmlnkrckdyrgfdjjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002197.8070254-1412-189855659539788/AnsiballZ_file.py'
Dec 06 06:23:18 compute-1 sudo[67884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:18 compute-1 python3.9[67886]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:18 compute-1 sudo[67884]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:18 compute-1 sudo[68036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsjymujhwrzgfjajunpibrbjdnajpgds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002198.4916825-1457-211022182885668/AnsiballZ_mount.py'
Dec 06 06:23:18 compute-1 sudo[68036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:19 compute-1 python3.9[68038]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 06:23:19 compute-1 sudo[68036]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:19 compute-1 sudo[68189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyswxbovemvaqvupzpqxtbdptdgzfmni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002199.2852905-1457-163089403579898/AnsiballZ_mount.py'
Dec 06 06:23:19 compute-1 sudo[68189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:19 compute-1 python3.9[68191]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 06:23:19 compute-1 sudo[68189]: pam_unix(sudo:session): session closed for user root
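[editor's note] The two ansible.posix.mount tasks above mount hugetlbfs instances with explicit page sizes on the directories created just before them (state=mounted also persists matching /etc/fstab lines, which this sketch omits). The bare mount calls, root assumed:

    import subprocess

    for path, size in (("/dev/hugepages1G", "1G"), ("/dev/hugepages2M", "2M")):
        # src is "none" for pseudo-filesystems; pagesize selects the
        # huge page pool backing this mount.
        subprocess.run(["mount", "-t", "hugetlbfs",
                        "-o", f"pagesize={size}", "none", path], check=True)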
Dec 06 06:23:20 compute-1 sshd-session[58984]: Connection closed by 192.168.122.30 port 53532
Dec 06 06:23:20 compute-1 sshd-session[58981]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:23:20 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Dec 06 06:23:20 compute-1 systemd[1]: session-15.scope: Consumed 33.589s CPU time.
Dec 06 06:23:20 compute-1 systemd-logind[788]: Session 15 logged out. Waiting for processes to exit.
Dec 06 06:23:20 compute-1 systemd-logind[788]: Removed session 15.
Dec 06 06:23:25 compute-1 sshd-session[68217]: Accepted publickey for zuul from 192.168.122.30 port 39226 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:23:25 compute-1 systemd-logind[788]: New session 16 of user zuul.
Dec 06 06:23:25 compute-1 systemd[1]: Started Session 16 of User zuul.
Dec 06 06:23:25 compute-1 sshd-session[68217]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:23:26 compute-1 sudo[68370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppcvcausbhkriccmkfutnbazkmksujnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002206.069208-23-164937057448979/AnsiballZ_tempfile.py'
Dec 06 06:23:26 compute-1 sudo[68370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:26 compute-1 python3.9[68372]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 06 06:23:26 compute-1 sudo[68370]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:27 compute-1 sudo[68522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmsyajqalihsdtotykuurkpspewfroqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002207.0145378-59-265365201920370/AnsiballZ_stat.py'
Dec 06 06:23:27 compute-1 sudo[68522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:27 compute-1 python3.9[68524]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:23:27 compute-1 sudo[68522]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:28 compute-1 sudo[68674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmowchxlrdmtiyqywwvdslhmglmnghhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002207.9946303-89-197502061305957/AnsiballZ_setup.py'
Dec 06 06:23:28 compute-1 sudo[68674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:28 compute-1 python3.9[68676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:23:28 compute-1 sudo[68674]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:29 compute-1 sudo[68826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhqxezwvfygjxoqsollvggewhagdxzbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002209.1749086-114-97966755853070/AnsiballZ_blockinfile.py'
Dec 06 06:23:29 compute-1 sudo[68826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:29 compute-1 python3.9[68828]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTKFyfmCvsG9hwBkDHhSMH95Mc80Ub24C1l2ydnJGHzY4+vYnN+ZopDHd6HVKXgsP9msqkZAEdNXjbQn0sbWYw0v02CY6OcbMq306Rwo9N1fcSO6QVC6w79bGbRJasTA8jgAoGm1VSg3XpzU9C2Delv2ginn7LqCUou48j9w9jyaklDA2EV0anjvZ6hGLjcFaMQSlFPO8rr2pGS5nfNk2Re6GtYYWF4SPkd5xfecWi9szdT+tnG8VrwRX440/Pe3eV5UyVyHQzIEvxJK6DbTgtieOn0PVz3yHI3Uo8VatpsXahO8FsABY1GaI5QAj3qUudWz4YWsiV/qy0G5Wm27CB69LVPGWRr7y4+pVz0HxWiYGyYbRZdxHVZ3jqfGNBdMXJb2shp9BlIo+lpjEydkorHn66AKIpFCZFaGpHzkFPocaTP9yMPAxo/0YPllct7AqO/4CCBNhz4E6/0aMx2lsilFN6Oo/Mj0azpEQOHvuTEwqKJ5BK3MJCWgR11ccN7PM=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEDAV7OqwkHgR6GxlfPQDRQoPSdwQxyp1ILKzyPaTiD9
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeZVQBWQAXkYJ2DA26L5Jq1a5s2ScFbJ/Q/8jiRPf43wzW24IvQwAq99mI5t4QhVhmTRCbptw5L79elvFEyDY8=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXnm8ofk0+O7jBFjA8fasReq5BkfwaahdsMJZLiqB71W18faenPK2mMCw6/Lyxaif1wFQKBGWK21bUZXsQguqG7iZV8rSLfQmLlKel/CnEgi2ekXmEhhYL/5GAB8Hq0UxChaI6YQUu0gWku4cPruBw6/+Lz36/PvLLwKqQupEi8npPR3O7a4jF6Px433cpBkZ/hgwG2m5+61NMAcNSCjjNj1cdXLugpDN9+05k6A3QV1sDXS2Zx6zdxPhgmLDKZLBGesQaz+glwYPo/2KfwAwlU4tAuY5eSV2BPX04PqKqexy3iziex/q3pFmtD6f1cRmqFZiyNs+kOfsxwABOVKQ6GG1iKKgzHMsK/paqNWMoHBj0lrRIJoX88Fd2A5DdPs2UPHwy3iUxLYekNcgiigT3O/4x92cFRritKJ8i8j83J6wJOQ0DnpyWxu4WFCjI4mBSKeA0NQzqMPICgkmtmtYKfSlzSdaL9W56FqnfE5JHkSrspcV9xnX3D/ijnD/8PxU=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE4NkvA88mf0HvkHx7766e1aduefm45OK4uK2xW0LF1S
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIy6uH9XhIotH/UH4KICHfHUvzEiJMGjuOaC3xgcK45R/4kFK8w4At6C/G8bcf1l2+wNZCsHSuKrF09EzQCKCOU=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXhoI1gGk2X98AQB4B5ZyJDup3CVjjMbiB6L30cKGocfIwEEwBz5d1xDpwA7euANP32L9+ddZ3Cn0VVuebREE5y184Yi1sdS+2O6H8M7BUT+RANGW4sY7jPXbTTJt6Bp2WWZu+AKxIRGMoo0UfvvFdscomysN+yxWB/KZ/niGARJyw61l1eO1/8shGJiP1LBuA4mdwHMTBYwXiYjk6LgI/i5m6zQk5ggmw2nKJqCwwPGyf2Xf7/LbRDgnryAatph9gA4JZ+QXULUJ8U+ILis30MPOGNA7vJ07ovYFAVwsoKYRCsxrEpg8AxMeRikU+CERKL2QQPABlbuJKnDZFrW2kY/L+B2g+i8FDWpaug4GQ6ZO7REu47ARhAUnuaIuJrhJgLrDq43vTqCgagXFz7UHhLI6KXLayNe3B/4It4UaZIVv3X8K+bZiI0zMWNhyjIBAU5VFZd0QjZDjt+Wv5WMYEFiWDyil+NVEHCxdSl46yd68mUvgMxWiv2Z57ICY9i+k=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF08740jXaMkiBlZr9+3kjjW/VDtcxAKNNm3eT7v4C7q
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJUJnc2vyZ7uGIoKmOtL2ok+zmjLSq/3vZNdtT52cNcj41FV66OIff0lT2r5neBPmMGSlOfqKMRY8iTu1fJs+/c=
                                             create=True mode=0644 path=/tmp/ansible.2f8xi2pd state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:29 compute-1 sudo[68826]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:30 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 06:23:30 compute-1 sudo[68980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guxgygfjikzzwepbuybenktjqlwwtvxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002209.9925454-138-51306095813459/AnsiballZ_command.py'
Dec 06 06:23:30 compute-1 sudo[68980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:30 compute-1 python3.9[68982]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.2f8xi2pd' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:23:30 compute-1 sudo[68980]: pam_unix(sudo:session): session closed for user root
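[editor's note] The sequence above stages host keys for all three compute nodes into a temp file via blockinfile, then copies it over /etc/ssh/ssh_known_hosts. A sketch of the same stage-then-replace idea done atomically; the entry below is a placeholder, the real keys appear in the blockinfile task:

    import os
    import tempfile

    ENTRIES = [
        # placeholder; real entries look like the ones logged above
        "compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAA...",
    ]

    def write_known_hosts(dest: str = "/etc/ssh/ssh_known_hosts") -> None:
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
        with os.fdopen(fd, "w") as fh:
            fh.write("\n".join(ENTRIES) + "\n")
        os.chmod(tmp, 0o644)
        os.replace(tmp, dest)  # atomic on the same filesystem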
Dec 06 06:23:31 compute-1 sudo[69134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otbyygcjtscnnipoptrqexpealerpwhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002210.83975-162-263212140723864/AnsiballZ_file.py'
Dec 06 06:23:31 compute-1 sudo[69134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:31 compute-1 python3.9[69136]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.2f8xi2pd state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:31 compute-1 sudo[69134]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:31 compute-1 sshd-session[68220]: Connection closed by 192.168.122.30 port 39226
Dec 06 06:23:31 compute-1 sshd-session[68217]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:23:31 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Dec 06 06:23:31 compute-1 systemd[1]: session-16.scope: Consumed 3.353s CPU time.
Dec 06 06:23:31 compute-1 systemd-logind[788]: Session 16 logged out. Waiting for processes to exit.
Dec 06 06:23:31 compute-1 systemd-logind[788]: Removed session 16.
Dec 06 06:23:37 compute-1 sshd-session[69161]: Accepted publickey for zuul from 192.168.122.30 port 43614 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:23:37 compute-1 systemd-logind[788]: New session 17 of user zuul.
Dec 06 06:23:37 compute-1 systemd[1]: Started Session 17 of User zuul.
Dec 06 06:23:37 compute-1 sshd-session[69161]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:23:38 compute-1 python3.9[69314]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:23:39 compute-1 sudo[69468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxmuccikyowsppzrwbbcncynpvwtvbku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002219.3725562-62-196924490027815/AnsiballZ_systemd.py'
Dec 06 06:23:39 compute-1 sudo[69468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:40 compute-1 python3.9[69470]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 06:23:40 compute-1 sudo[69468]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:40 compute-1 sudo[69622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldedockqgocgzqdsysbljcisytzpcnqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002220.6018116-86-35858180331186/AnsiballZ_systemd.py'
Dec 06 06:23:40 compute-1 sudo[69622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:41 compute-1 python3.9[69624]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:23:41 compute-1 sudo[69622]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:41 compute-1 sudo[69775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymybuzhqnooxkemctcsilvrrgynykoiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002221.5185332-113-15385826202725/AnsiballZ_command.py'
Dec 06 06:23:41 compute-1 sudo[69775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:42 compute-1 python3.9[69777]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:23:42 compute-1 sudo[69775]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:42 compute-1 sudo[69928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgixmnjdenwopvjvpeuoikenuqygfpuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002222.348546-137-275871230940595/AnsiballZ_stat.py'
Dec 06 06:23:42 compute-1 sudo[69928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:43 compute-1 python3.9[69930]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:23:43 compute-1 sudo[69928]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:43 compute-1 sudo[70082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdewiskxzbzsrnxkhbrmgcenrdtkypyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002223.219631-161-99002672117672/AnsiballZ_command.py'
Dec 06 06:23:43 compute-1 sudo[70082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:43 compute-1 python3.9[70084]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:23:43 compute-1 sudo[70082]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:44 compute-1 sudo[70239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmrmxbbvgsywqyiyganyagtyfanowerx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002224.5961008-185-271721465914209/AnsiballZ_file.py'
Dec 06 06:23:44 compute-1 sudo[70239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:45 compute-1 python3.9[70241]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:23:45 compute-1 sudo[70239]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:45 compute-1 sshd-session[69164]: Connection closed by 192.168.122.30 port 43614
Dec 06 06:23:45 compute-1 sshd-session[69161]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:23:45 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Dec 06 06:23:45 compute-1 systemd[1]: session-17.scope: Consumed 4.289s CPU time.
Dec 06 06:23:45 compute-1 systemd-logind[788]: Session 17 logged out. Waiting for processes to exit.
Dec 06 06:23:45 compute-1 systemd-logind[788]: Removed session 17.
Dec 06 06:23:46 compute-1 sshd-session[70211]: Received disconnect from 45.78.201.60 port 60762:11: Bye Bye [preauth]
Dec 06 06:23:46 compute-1 sshd-session[70211]: Disconnected from authenticating user root 45.78.201.60 port 60762 [preauth]
Dec 06 06:23:50 compute-1 sshd-session[70266]: Accepted publickey for zuul from 192.168.122.30 port 56430 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:23:50 compute-1 systemd-logind[788]: New session 18 of user zuul.
Dec 06 06:23:50 compute-1 systemd[1]: Started Session 18 of User zuul.
Dec 06 06:23:50 compute-1 sshd-session[70266]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:23:52 compute-1 python3.9[70419]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:23:52 compute-1 sudo[70573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhshupbzmaahzhwazjthszxexqnoqkfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002232.4964306-68-27533381050097/AnsiballZ_setup.py'
Dec 06 06:23:52 compute-1 sudo[70573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:53 compute-1 python3.9[70575]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:23:53 compute-1 sudo[70573]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:53 compute-1 sudo[70657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxbcpxlboaupdsztofrhxoettayhskux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002232.4964306-68-27533381050097/AnsiballZ_dnf.py'
Dec 06 06:23:53 compute-1 sudo[70657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:23:53 compute-1 python3.9[70659]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 06:23:56 compute-1 sudo[70657]: pam_unix(sudo:session): session closed for user root
Dec 06 06:23:56 compute-1 python3.9[70810]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:23:58 compute-1 python3.9[70961]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
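[editor's note] "needs-restarting -r" (from yum-utils, installed two tasks earlier) signals via its exit status: 0 means no reboot is needed, 1 means core packages changed and a reboot is required. That is how the play interprets the command run above:

    import subprocess

    rc = subprocess.run(["needs-restarting", "-r"]).returncode
    print("reboot required" if rc == 1 else "no reboot needed")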
Dec 06 06:23:58 compute-1 python3.9[71111]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:23:58 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:23:59 compute-1 python3.9[71262]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:24:00 compute-1 sshd-session[70269]: Connection closed by 192.168.122.30 port 56430
Dec 06 06:24:00 compute-1 sshd-session[70266]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:24:00 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Dec 06 06:24:00 compute-1 systemd[1]: session-18.scope: Consumed 5.807s CPU time.
Dec 06 06:24:00 compute-1 systemd-logind[788]: Session 18 logged out. Waiting for processes to exit.
Dec 06 06:24:00 compute-1 systemd-logind[788]: Removed session 18.
Dec 06 06:24:09 compute-1 sshd-session[71287]: Accepted publickey for zuul from 38.102.83.248 port 33838 ssh2: RSA SHA256:aHyVcRaDK3hfZLzaCXAUf9WeLucbkCfDdQjLc4/bZwE
Dec 06 06:24:09 compute-1 systemd-logind[788]: New session 19 of user zuul.
Dec 06 06:24:09 compute-1 systemd[1]: Started Session 19 of User zuul.
Dec 06 06:24:09 compute-1 sshd-session[71287]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:24:10 compute-1 sudo[71363]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svngwfpqprggdvkadyvtwracdgkltflr ; /usr/bin/python3'
Dec 06 06:24:10 compute-1 sudo[71363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:11 compute-1 useradd[71367]: new group: name=ceph-admin, GID=42478
Dec 06 06:24:11 compute-1 useradd[71367]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 06 06:24:12 compute-1 sudo[71363]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:13 compute-1 sudo[71449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifmyeagjikjirlxrsdlbzvphnshbmepk ; /usr/bin/python3'
Dec 06 06:24:13 compute-1 sudo[71449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:13 compute-1 sudo[71449]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:13 compute-1 sudo[71522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdpuxpweqntpmsiickevpusqgynqqszy ; /usr/bin/python3'
Dec 06 06:24:13 compute-1 sudo[71522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:13 compute-1 sudo[71522]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:14 compute-1 sudo[71572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iithkzzchirvddvfgpnvisvvvwooptsd ; /usr/bin/python3'
Dec 06 06:24:14 compute-1 sudo[71572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:14 compute-1 sudo[71572]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:14 compute-1 sudo[71598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yofwfhxfssefmjijzvzicsszwfvvbqba ; /usr/bin/python3'
Dec 06 06:24:14 compute-1 sudo[71598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:14 compute-1 sudo[71598]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:14 compute-1 sudo[71624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksihqtlxteapzjhwbtejpzurqoilfekt ; /usr/bin/python3'
Dec 06 06:24:14 compute-1 sudo[71624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:14 compute-1 sudo[71624]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:15 compute-1 sudo[71650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzbhpyvueihvarzlymnjjmofmijhxhef ; /usr/bin/python3'
Dec 06 06:24:15 compute-1 sudo[71650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:15 compute-1 sudo[71650]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:15 compute-1 sudo[71728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srvfyhyhzlsjcubbbpmqrpunojzpjodn ; /usr/bin/python3'
Dec 06 06:24:15 compute-1 sudo[71728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:16 compute-1 sudo[71728]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:16 compute-1 sudo[71801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkmnhyqpydzlmfetunnjpeygekdcgadi ; /usr/bin/python3'
Dec 06 06:24:16 compute-1 sudo[71801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:16 compute-1 sudo[71801]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:16 compute-1 sudo[71903]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbimkvslpocgjbmzcxsfnkrknibjrmwk ; /usr/bin/python3'
Dec 06 06:24:16 compute-1 sudo[71903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:17 compute-1 sudo[71903]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:17 compute-1 sudo[71976]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgatuvzeryamqqflgzhzxewrisnaiqhd ; /usr/bin/python3'
Dec 06 06:24:17 compute-1 sudo[71976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:17 compute-1 sudo[71976]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:17 compute-1 sudo[72026]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhebhhzrrfofdbbkgofgwwcbjrgfnfdc ; /usr/bin/python3'
Dec 06 06:24:17 compute-1 sudo[72026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:18 compute-1 python3[72028]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:24:18 compute-1 sudo[72026]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:19 compute-1 sudo[72121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbqsfunhxnybfskkuevktfbsmmsljmrm ; /usr/bin/python3'
Dec 06 06:24:19 compute-1 sudo[72121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:19 compute-1 python3[72123]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 06:24:20 compute-1 chronyd[58500]: Selected source 216.232.132.102 (pool.ntp.org)
Dec 06 06:24:21 compute-1 sudo[72121]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:21 compute-1 sudo[72148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qebnsvifgnylnxxcrlibdoikrcrillfc ; /usr/bin/python3'
Dec 06 06:24:21 compute-1 sudo[72148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:21 compute-1 python3[72150]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 06:24:21 compute-1 sudo[72148]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:22 compute-1 sudo[72174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usyllfariwhagvazmcggzvfsmfhjikbc ; /usr/bin/python3'
Dec 06 06:24:22 compute-1 sudo[72174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:22 compute-1 python3[72176]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:24:22 compute-1 kernel: loop: module loaded
Dec 06 06:24:22 compute-1 kernel: loop3: detected capacity change from 0 to 14680064
Dec 06 06:24:22 compute-1 sudo[72174]: pam_unix(sudo:session): session closed for user root
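[editor's note] The shell task above creates a sparse 7 GiB backing file (dd with count=0 seek=7G writes nothing, it only extends the file) and attaches it to /dev/loop3 — hence the kernel's "detected capacity change" line (14680064 sectors x 512 B = 7 GiB). The same steps in Python, using "losetup -f --show" to pick a free device instead of hard-coding loop3:

    import subprocess

    IMG = "/var/lib/ceph-osd-0.img"
    with open(IMG, "wb") as fh:
        fh.truncate(7 * 1024**3)  # sparse file, same as dd seek=7G count=0

    dev = subprocess.run(["losetup", "-f", "--show", IMG],
                         check=True, capture_output=True, text=True).stdout.strip()
    print("attached", dev)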
Dec 06 06:24:22 compute-1 sudo[72210]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpomnynwedhzdyqmpfpsxxfnfabyxxqt ; /usr/bin/python3'
Dec 06 06:24:22 compute-1 sudo[72210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:22 compute-1 python3[72212]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:24:22 compute-1 lvm[72215]: PV /dev/loop3 not used.
Dec 06 06:24:22 compute-1 lvm[72224]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 06:24:22 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 06 06:24:22 compute-1 sudo[72210]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:22 compute-1 lvm[72226]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 06 06:24:22 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
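
The next command builds the LVM stack on that loop device, which is what the autoactivation lines above react to: lvm sees the new PV come online, reports ceph_vg0 complete, and the transient lvm-activate-ceph_vg0.service activates its single logical volume. Condensed from the logged command:

    pvcreate /dev/loop3                           # initialize the loop device as an LVM physical volume
    vgcreate ceph_vg0 /dev/loop3                  # volume group backed by that single PV
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0    # one LV consuming every free extent
    lvs                                           # confirm ceph_vg0/ceph_lv0 is active
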
Dec 06 06:24:26 compute-1 sudo[72302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpfsymexokmfpberempdskbrmyawgejn ; /usr/bin/python3'
Dec 06 06:24:26 compute-1 sudo[72302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:26 compute-1 python3[72304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:24:26 compute-1 sudo[72302]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:26 compute-1 sudo[72375]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maiuwghmanbxvzwbmeqnorzvvathuldv ; /usr/bin/python3'
Dec 06 06:24:26 compute-1 sudo[72375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:26 compute-1 python3[72377]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765002266.2192714-36994-168064023314330/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:24:26 compute-1 sudo[72375]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:27 compute-1 sudo[72425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyopdofuuicmrikfrmxoigjtxyqutrpb ; /usr/bin/python3'
Dec 06 06:24:27 compute-1 sudo[72425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:27 compute-1 python3[72427]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:24:27 compute-1 systemd[1]: Reloading.
Dec 06 06:24:27 compute-1 systemd-rc-local-generator[72455]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:24:27 compute-1 systemd-sysv-generator[72458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:24:27 compute-1 systemd[1]: Starting Ceph OSD losetup...
Dec 06 06:24:27 compute-1 bash[72466]: /dev/loop3: [64513]:4327948 (/var/lib/ceph-osd-0.img)
Dec 06 06:24:27 compute-1 systemd[1]: Finished Ceph OSD losetup.
Dec 06 06:24:28 compute-1 sudo[72425]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:28 compute-1 lvm[72468]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 06:24:28 compute-1 lvm[72468]: VG ceph_vg0 finished
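
ceph-osd-losetup-0.service, templated from ceph-osd-losetup.service.j2 a moment earlier, exists so the loop device is re-attached after a reboot; the bash output at 06:24:27 is losetup's listing format, i.e. the device was already attached and the unit only had to report it. The journal does not capture the unit file itself; a plausible reconstruction, in which everything beyond the unit name, its "Ceph OSD losetup" description, and the device/backing-file pairing is an assumption, is:

    # Hypothetical contents of /etc/systemd/system/ceph-osd-losetup-0.service
    # (only the unit name, the "Ceph OSD losetup" description, and the
    # /dev/loop3 <-> /var/lib/ceph-osd-0.img pairing are confirmed by the log).
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Query first; if the device is not attached (e.g. after a reboot), attach it.
    ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 || /sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service
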
Dec 06 06:24:29 compute-1 sshd-session[72469]: Connection closed by 80.82.70.133 port 60000
Dec 06 06:24:30 compute-1 python3[72493]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:25:17 compute-1 sshd-session[72538]: Connection reset by authenticating user root 91.202.233.33 port 52686 [preauth]
Dec 06 06:25:20 compute-1 sshd-session[72541]: Connection reset by authenticating user root 91.202.233.33 port 52698 [preauth]
Dec 06 06:25:22 compute-1 sshd-session[72544]: Connection reset by authenticating user root 91.202.233.33 port 52702 [preauth]
Dec 06 06:25:25 compute-1 sshd-session[72546]: Connection reset by authenticating user root 91.202.233.33 port 59678 [preauth]
Dec 06 06:25:27 compute-1 sshd-session[72548]: Connection reset by authenticating user root 91.202.233.33 port 59690 [preauth]
Dec 06 06:25:31 compute-1 sshd-session[72540]: Received disconnect from 14.63.196.175 port 52336:11: Bye Bye [preauth]
Dec 06 06:25:31 compute-1 sshd-session[72540]: Disconnected from authenticating user root 14.63.196.175 port 52336 [preauth]
Dec 06 06:26:44 compute-1 sshd-session[72550]: Accepted publickey for ceph-admin from 192.168.122.100 port 35600 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:44 compute-1 systemd-logind[788]: New session 20 of user ceph-admin.
Dec 06 06:26:44 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Dec 06 06:26:44 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 06 06:26:44 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 06 06:26:44 compute-1 systemd[1]: Starting User Manager for UID 42477...
Dec 06 06:26:44 compute-1 systemd[72554]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:44 compute-1 systemd[72554]: Queued start job for default target Main User Target.
Dec 06 06:26:44 compute-1 systemd[72554]: Created slice User Application Slice.
Dec 06 06:26:44 compute-1 systemd[72554]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 06:26:44 compute-1 systemd[72554]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 06:26:44 compute-1 systemd[72554]: Reached target Paths.
Dec 06 06:26:44 compute-1 systemd[72554]: Reached target Timers.
Dec 06 06:26:44 compute-1 systemd[72554]: Starting D-Bus User Message Bus Socket...
Dec 06 06:26:44 compute-1 systemd[72554]: Starting Create User's Volatile Files and Directories...
Dec 06 06:26:44 compute-1 systemd[72554]: Finished Create User's Volatile Files and Directories.
Dec 06 06:26:44 compute-1 systemd[72554]: Listening on D-Bus User Message Bus Socket.
Dec 06 06:26:44 compute-1 systemd[72554]: Reached target Sockets.
Dec 06 06:26:44 compute-1 systemd[72554]: Reached target Basic System.
Dec 06 06:26:44 compute-1 systemd[72554]: Reached target Main User Target.
Dec 06 06:26:44 compute-1 systemd[72554]: Startup finished in 101ms.
Dec 06 06:26:44 compute-1 systemd[1]: Started User Manager for UID 42477.
Dec 06 06:26:44 compute-1 systemd[1]: Started Session 20 of User ceph-admin.
Dec 06 06:26:44 compute-1 sshd-session[72550]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:44 compute-1 sshd-session[72569]: Accepted publickey for ceph-admin from 192.168.122.100 port 35606 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:44 compute-1 systemd-logind[788]: New session 22 of user ceph-admin.
Dec 06 06:26:44 compute-1 systemd[1]: Started Session 22 of User ceph-admin.
Dec 06 06:26:44 compute-1 sshd-session[72569]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:44 compute-1 sudo[72575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:44 compute-1 sudo[72575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:44 compute-1 sudo[72575]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:44 compute-1 sudo[72600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:44 compute-1 sudo[72600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:45 compute-1 sudo[72600]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:45 compute-1 sshd-session[72625]: Accepted publickey for ceph-admin from 192.168.122.100 port 35618 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:45 compute-1 systemd-logind[788]: New session 23 of user ceph-admin.
Dec 06 06:26:45 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Dec 06 06:26:45 compute-1 sshd-session[72625]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:45 compute-1 sudo[72629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:45 compute-1 sudo[72629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:45 compute-1 sudo[72629]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:45 compute-1 sudo[72654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-1
Dec 06 06:26:45 compute-1 sudo[72654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:45 compute-1 sudo[72654]: pam_unix(sudo:session): session closed for user root
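
From here on every orchestrator action follows the same three-step shape: /bin/true probes that passwordless sudo works, which python3 locates an interpreter, and then the per-cluster copy of cephadm under /var/lib/ceph/<fsid>/ is run directly with python3. check-host validates node prerequisites (a container engine, time synchronization, and so on), and --expect-hostname additionally asserts that the node identifies as compute-1. The full invocation, exactly as logged:

    sudo /bin/python3 \
        /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --timeout 895 check-host --expect-hostname compute-1
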
Dec 06 06:26:45 compute-1 sshd-session[72679]: Accepted publickey for ceph-admin from 192.168.122.100 port 35628 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:45 compute-1 systemd-logind[788]: New session 24 of user ceph-admin.
Dec 06 06:26:45 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Dec 06 06:26:45 compute-1 sshd-session[72679]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:45 compute-1 sudo[72683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:45 compute-1 sudo[72683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:45 compute-1 sudo[72683]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:45 compute-1 sudo[72708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Dec 06 06:26:45 compute-1 sudo[72708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:45 compute-1 sudo[72708]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:45 compute-1 sshd-session[72733]: Accepted publickey for ceph-admin from 192.168.122.100 port 35630 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:45 compute-1 systemd-logind[788]: New session 25 of user ceph-admin.
Dec 06 06:26:45 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Dec 06 06:26:45 compute-1 sshd-session[72733]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:46 compute-1 sudo[72737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:46 compute-1 sudo[72737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:46 compute-1 sudo[72737]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:46 compute-1 sudo[72762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:46 compute-1 sudo[72762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:46 compute-1 sudo[72762]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:46 compute-1 sshd-session[72787]: Accepted publickey for ceph-admin from 192.168.122.100 port 35642 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:46 compute-1 systemd-logind[788]: New session 26 of user ceph-admin.
Dec 06 06:26:46 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Dec 06 06:26:46 compute-1 sshd-session[72787]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:46 compute-1 sudo[72791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:46 compute-1 sudo[72791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:46 compute-1 sudo[72791]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:46 compute-1 sudo[72816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:46 compute-1 sudo[72816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:46 compute-1 sudo[72816]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:46 compute-1 sshd-session[72841]: Accepted publickey for ceph-admin from 192.168.122.100 port 35648 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:46 compute-1 systemd-logind[788]: New session 27 of user ceph-admin.
Dec 06 06:26:46 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Dec 06 06:26:46 compute-1 sshd-session[72841]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:46 compute-1 sudo[72845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:46 compute-1 sudo[72845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:46 compute-1 sudo[72845]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:46 compute-1 sudo[72870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Dec 06 06:26:46 compute-1 sudo[72870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:46 compute-1 sudo[72870]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:47 compute-1 sshd-session[72895]: Accepted publickey for ceph-admin from 192.168.122.100 port 35652 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:47 compute-1 systemd-logind[788]: New session 28 of user ceph-admin.
Dec 06 06:26:47 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Dec 06 06:26:47 compute-1 sshd-session[72895]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:47 compute-1 sudo[72899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:47 compute-1 sudo[72899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:47 compute-1 sudo[72899]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:47 compute-1 sudo[72924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:47 compute-1 sudo[72924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:47 compute-1 sudo[72924]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:47 compute-1 sshd-session[72949]: Accepted publickey for ceph-admin from 192.168.122.100 port 35658 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:47 compute-1 systemd-logind[788]: New session 29 of user ceph-admin.
Dec 06 06:26:47 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Dec 06 06:26:47 compute-1 sshd-session[72949]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:47 compute-1 sudo[72953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:47 compute-1 sudo[72953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:47 compute-1 sudo[72953]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:47 compute-1 sudo[72978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Dec 06 06:26:47 compute-1 sudo[72978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:47 compute-1 sudo[72978]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:47 compute-1 sshd-session[73003]: Accepted publickey for ceph-admin from 192.168.122.100 port 35660 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:47 compute-1 systemd-logind[788]: New session 30 of user ceph-admin.
Dec 06 06:26:47 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Dec 06 06:26:47 compute-1 sshd-session[73003]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:48 compute-1 sshd-session[73030]: Accepted publickey for ceph-admin from 192.168.122.100 port 35674 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:48 compute-1 systemd-logind[788]: New session 31 of user ceph-admin.
Dec 06 06:26:48 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Dec 06 06:26:48 compute-1 sshd-session[73030]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:48 compute-1 sudo[73034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:48 compute-1 sudo[73034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:48 compute-1 sudo[73034]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:48 compute-1 sudo[73059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Dec 06 06:26:48 compute-1 sudo[73059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:48 compute-1 sudo[73059]: pam_unix(sudo:session): session closed for user root
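
The mkdir/touch/chown/chmod/mv ladder above is cephadm's file-distribution pattern: the destination tree is mirrored under /tmp/cephadm-<fsid>/, ownership of the staging area is handed to ceph-admin so the payload can be written over SSH without root, the final mode is applied to the staged copy, and only the finished file is moved into place under its real name. Condensed into shell with the fsid and filename from the log:

    fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb
    f=cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
    stage=/tmp/cephadm-$fsid/var/lib/ceph/$fsid
    sudo mkdir -p "/var/lib/ceph/$fsid" "$stage"
    sudo touch "$stage/$f.new"
    sudo chown -R ceph-admin "/tmp/cephadm-$fsid"     # let the SSH user write the payload into the staged file
    sudo chmod 644 "$stage/$f.new"                    # final permissions before publication
    sudo mv "$stage/$f.new" "/var/lib/ceph/$fsid/$f"  # move the completed file to its destination

The same ladder repeats below for /etc/ceph/ceph.conf, the cluster-local config copy, and the admin keyring.
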
Dec 06 06:26:48 compute-1 sshd-session[73084]: Accepted publickey for ceph-admin from 192.168.122.100 port 35682 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:48 compute-1 systemd-logind[788]: New session 32 of user ceph-admin.
Dec 06 06:26:48 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Dec 06 06:26:48 compute-1 sshd-session[73084]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:48 compute-1 sudo[73088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:48 compute-1 sudo[73088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:48 compute-1 sudo[73088]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:48 compute-1 sudo[73113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-1
Dec 06 06:26:48 compute-1 sudo[73113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:49 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:26:49 compute-1 sudo[73113]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:49 compute-1 sudo[73159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:49 compute-1 sudo[73159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:49 compute-1 sudo[73159]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:49 compute-1 sudo[73184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:49 compute-1 sudo[73184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:49 compute-1 sudo[73184]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:49 compute-1 sudo[73209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:49 compute-1 sudo[73209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:49 compute-1 sudo[73209]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:49 compute-1 sudo[73234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 06:26:49 compute-1 sudo[73234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:49 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:26:49 compute-1 sudo[73234]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:49 compute-1 sudo[73279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:49 compute-1 sudo[73279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:49 compute-1 sudo[73279]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:49 compute-1 sudo[73304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:49 compute-1 sudo[73304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:49 compute-1 sudo[73304]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:49 compute-1 sudo[73329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:49 compute-1 sudo[73329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:49 compute-1 sudo[73329]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:49 compute-1 sudo[73354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:26:49 compute-1 sudo[73354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:49 compute-1 sudo[73354]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:50 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:26:50 compute-1 sudo[73415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:50 compute-1 sudo[73415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:50 compute-1 sudo[73415]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:50 compute-1 sudo[73440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:50 compute-1 sudo[73440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:50 compute-1 sudo[73440]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:50 compute-1 sudo[73465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:50 compute-1 sudo[73465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:50 compute-1 sudo[73465]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:50 compute-1 sudo[73490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:26:50 compute-1 sudo[73490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:50 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:26:50 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73527 (sysctl)
Dec 06 06:26:50 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 06 06:26:50 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 06 06:26:50 compute-1 sudo[73490]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:50 compute-1 sudo[73549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:50 compute-1 sudo[73549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:50 compute-1 sudo[73549]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:50 compute-1 sudo[73574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:50 compute-1 sudo[73574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:50 compute-1 sudo[73574]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:50 compute-1 sudo[73599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:50 compute-1 sudo[73599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:50 compute-1 sudo[73599]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:50 compute-1 sudo[73624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 06 06:26:50 compute-1 sudo[73624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:51 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:26:51 compute-1 sudo[73624]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:52 compute-1 sudo[73668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:52 compute-1 sudo[73668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:52 compute-1 sudo[73668]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:52 compute-1 sudo[73693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:52 compute-1 sudo[73693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:52 compute-1 sudo[73693]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:52 compute-1 sudo[73718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:52 compute-1 sudo[73718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:52 compute-1 sudo[73718]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:52 compute-1 sudo[73743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 06:26:52 compute-1 sudo[73743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:52 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:26:52 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:26:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat211633920-lower\x2dmapped.mount: Deactivated successfully.
Dec 06 06:27:31 compute-1 podman[73804]: 2025-12-06 06:27:31.057929669 +0000 UTC m=+38.103993590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:27:31 compute-1 podman[73804]: 2025-12-06 06:27:31.502764298 +0000 UTC m=+38.548828209 container create eab734a12bd1dc1682a0bba11d97d68d4002c9c24ac0f8685b53500afb7d259d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_babbage, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Dec 06 06:27:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2900127704-merged.mount: Deactivated successfully.
Dec 06 06:27:36 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 06 06:27:36 compute-1 systemd[1]: Started libpod-conmon-eab734a12bd1dc1682a0bba11d97d68d4002c9c24ac0f8685b53500afb7d259d.scope.
Dec 06 06:27:36 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:27:36 compute-1 podman[73804]: 2025-12-06 06:27:36.610290134 +0000 UTC m=+43.656354155 container init eab734a12bd1dc1682a0bba11d97d68d4002c9c24ac0f8685b53500afb7d259d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:27:36 compute-1 podman[73804]: 2025-12-06 06:27:36.62404621 +0000 UTC m=+43.670110141 container start eab734a12bd1dc1682a0bba11d97d68d4002c9c24ac0f8685b53500afb7d259d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:27:36 compute-1 interesting_babbage[73864]: 167 167
Dec 06 06:27:36 compute-1 systemd[1]: libpod-eab734a12bd1dc1682a0bba11d97d68d4002c9c24ac0f8685b53500afb7d259d.scope: Deactivated successfully.
Dec 06 06:27:37 compute-1 podman[73804]: 2025-12-06 06:27:37.727364996 +0000 UTC m=+44.773428997 container attach eab734a12bd1dc1682a0bba11d97d68d4002c9c24ac0f8685b53500afb7d259d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_babbage, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:27:37 compute-1 podman[73804]: 2025-12-06 06:27:37.728012613 +0000 UTC m=+44.774076554 container died eab734a12bd1dc1682a0bba11d97d68d4002c9c24ac0f8685b53500afb7d259d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:27:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-8aa5f905bb4968f547f377bcbcf5faed613b054ee3f57b725c041a16991e18d7-merged.mount: Deactivated successfully.
Dec 06 06:27:49 compute-1 podman[73804]: 2025-12-06 06:27:49.127597595 +0000 UTC m=+56.173661506 container remove eab734a12bd1dc1682a0bba11d97d68d4002c9c24ac0f8685b53500afb7d259d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_babbage, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:27:49 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:27:49 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:27:49 compute-1 systemd[1]: libpod-conmon-eab734a12bd1dc1682a0bba11d97d68d4002c9c24ac0f8685b53500afb7d259d.scope: Deactivated successfully.
Dec 06 06:27:49 compute-1 podman[73893]: 2025-12-06 06:27:49.267299379 +0000 UTC m=+0.023716470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:27:49 compute-1 podman[73893]: 2025-12-06 06:27:49.813690311 +0000 UTC m=+0.570107372 container create 8c761fb5b23841d5c2c198571d1ff2ebec44c43cbb0c6bc37113363437b533bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cori, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:27:49 compute-1 systemd[1]: Started libpod-conmon-8c761fb5b23841d5c2c198571d1ff2ebec44c43cbb0c6bc37113363437b533bc.scope.
Dec 06 06:27:49 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:27:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac60f124424ba6675746b3c5728045e94a57900b077f23d46b9cb6603bb9c1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:27:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac60f124424ba6675746b3c5728045e94a57900b077f23d46b9cb6603bb9c1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:27:50 compute-1 podman[73893]: 2025-12-06 06:27:50.577835924 +0000 UTC m=+1.334253005 container init 8c761fb5b23841d5c2c198571d1ff2ebec44c43cbb0c6bc37113363437b533bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:27:50 compute-1 podman[73893]: 2025-12-06 06:27:50.586882391 +0000 UTC m=+1.343299442 container start 8c761fb5b23841d5c2c198571d1ff2ebec44c43cbb0c6bc37113363437b533bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 06 06:27:50 compute-1 podman[73893]: 2025-12-06 06:27:50.722277097 +0000 UTC m=+1.478694178 container attach 8c761fb5b23841d5c2c198571d1ff2ebec44c43cbb0c6bc37113363437b533bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cori, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:27:51 compute-1 affectionate_cori[73909]: [
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:     {
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:         "available": false,
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:         "ceph_device": false,
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:         "lsm_data": {},
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:         "lvs": [],
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:         "path": "/dev/sr0",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:         "rejected_reasons": [
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "Has a FileSystem",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "Insufficient space (<5GB)"
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:         ],
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:         "sys_api": {
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "actuators": null,
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "device_nodes": "sr0",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "devname": "sr0",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "human_readable_size": "482.00 KB",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "id_bus": "ata",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "model": "QEMU DVD-ROM",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "nr_requests": "2",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "parent": "/dev/sr0",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "partitions": {},
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "path": "/dev/sr0",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "removable": "1",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "rev": "2.5+",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "ro": "0",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "rotational": "1",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "sas_address": "",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "sas_device_handle": "",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "scheduler_mode": "mq-deadline",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "sectors": 0,
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "sectorsize": "2048",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "size": 493568.0,
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "support_discard": "2048",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "type": "disk",
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:             "vendor": "QEMU"
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:         }
Dec 06 06:27:51 compute-1 affectionate_cori[73909]:     }
Dec 06 06:27:51 compute-1 affectionate_cori[73909]: ]
Dec 06 06:27:51 compute-1 systemd[1]: libpod-8c761fb5b23841d5c2c198571d1ff2ebec44c43cbb0c6bc37113363437b533bc.scope: Deactivated successfully.
Dec 06 06:27:51 compute-1 podman[73893]: 2025-12-06 06:27:51.686893195 +0000 UTC m=+2.443310256 container died 8c761fb5b23841d5c2c198571d1ff2ebec44c43cbb0c6bc37113363437b533bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cori, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:27:51 compute-1 systemd[1]: libpod-8c761fb5b23841d5c2c198571d1ff2ebec44c43cbb0c6bc37113363437b533bc.scope: Consumed 1.100s CPU time.
Dec 06 06:27:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-dac60f124424ba6675746b3c5728045e94a57900b077f23d46b9cb6603bb9c1d-merged.mount: Deactivated successfully.
Dec 06 06:27:52 compute-1 podman[73893]: 2025-12-06 06:27:52.448492338 +0000 UTC m=+3.204909409 container remove 8c761fb5b23841d5c2c198571d1ff2ebec44c43cbb0c6bc37113363437b533bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cori, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 06:27:52 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:27:52 compute-1 sudo[73743]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:52 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:27:52 compute-1 systemd[1]: libpod-conmon-8c761fb5b23841d5c2c198571d1ff2ebec44c43cbb0c6bc37113363437b533bc.scope: Deactivated successfully.
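
Both podman containers above belong to the single ceph-volume inventory call started at 06:26:52: the first (interesting_babbage) spends about 38 s pulling quay.io/ceph/ceph (per podman's own m=+38.1 timestamp) and is evidently a uid/gid probe, its "167 167" output matching the ceph user and group ids inside the image; the second (affectionate_cori) runs the actual inventory. The JSON reports a single device, the QEMU DVD-ROM /dev/sr0, rejected for carrying a filesystem and for being under 5 GB; /dev/loop3 does not show up as available, presumably because it already carries the ceph_vg0/ceph_lv0 stack built at 06:24:22 and --filter-for-batch drops such devices. For ad-hoc inspection the inventory can be summarized with jq (installed earlier in this run); a sketch, assuming the JSON is the command's only stdout as it appears above:

    sudo /bin/python3 \
        /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty \
        | jq -r '.[] | "\(.path) available=\(.available) rejected=\(.rejected_reasons | join(", "))"'
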
Dec 06 06:27:52 compute-1 sudo[74902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:52 compute-1 sudo[74902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:52 compute-1 sudo[74902]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:52 compute-1 sudo[74927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 06:27:52 compute-1 sudo[74927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:52 compute-1 sudo[74927]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:52 compute-1 sudo[74952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:52 compute-1 sudo[74952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:52 compute-1 sudo[74952]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[74977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph
Dec 06 06:27:53 compute-1 sudo[74977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[74977]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[75002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:53 compute-1 sudo[75002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[75002]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[75027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:27:53 compute-1 sudo[75027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[75027]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[75052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:53 compute-1 sudo[75052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[75052]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[75077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:27:53 compute-1 sudo[75077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[75077]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[75102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:53 compute-1 sudo[75102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[75102]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[75127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:27:53 compute-1 sudo[75127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[75127]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[75175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:53 compute-1 sudo[75175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[75175]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[75200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:27:53 compute-1 sudo[75200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[75200]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[75225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:53 compute-1 sudo[75225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[75225]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[75250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:27:53 compute-1 sudo[75250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[75250]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:53 compute-1 sudo[75275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:53 compute-1 sudo[75275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:53 compute-1 sudo[75275]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 06:27:54 compute-1 sudo[75300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75300]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:54 compute-1 sudo[75325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75325]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:27:54 compute-1 sudo[75350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75350]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:54 compute-1 sudo[75375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75375]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:27:54 compute-1 sudo[75400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75400]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:54 compute-1 sudo[75425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75425]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:27:54 compute-1 sudo[75450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75450]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:54 compute-1 sudo[75475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75475]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:27:54 compute-1 sudo[75500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75500]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:54 compute-1 sudo[75525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75525]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:27:54 compute-1 sudo[75550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75550]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:54 compute-1 sudo[75598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75598]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:27:54 compute-1 sudo[75623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75623]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:54 compute-1 sudo[75648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:54 compute-1 sudo[75648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:54 compute-1 sudo[75648]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:27:55 compute-1 sudo[75673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75673]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:55 compute-1 sudo[75698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75698]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:27:55 compute-1 sudo[75723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75723]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:55 compute-1 sudo[75748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75748]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 06:27:55 compute-1 sudo[75773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75773]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:55 compute-1 sudo[75798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75798]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph
Dec 06 06:27:55 compute-1 sudo[75823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75823]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:55 compute-1 sudo[75848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75848]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.client.admin.keyring.new
Dec 06 06:27:55 compute-1 sudo[75873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75873]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:55 compute-1 sudo[75898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75898]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:27:55 compute-1 sudo[75923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75923]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:55 compute-1 sudo[75948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75948]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:55 compute-1 sudo[75973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.client.admin.keyring.new
Dec 06 06:27:55 compute-1 sudo[75973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:55 compute-1 sudo[75973]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:56 compute-1 sudo[76021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76021]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.client.admin.keyring.new
Dec 06 06:27:56 compute-1 sudo[76046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76046]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:56 compute-1 sudo[76071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76071]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.client.admin.keyring.new
Dec 06 06:27:56 compute-1 sudo[76096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76096]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:56 compute-1 sudo[76121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76121]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 06:27:56 compute-1 sudo[76146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76146]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:56 compute-1 sudo[76171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76171]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:27:56 compute-1 sudo[76196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76196]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:56 compute-1 sudo[76221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76221]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:27:56 compute-1 sudo[76246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76246]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:56 compute-1 sudo[76271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76271]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring.new
Dec 06 06:27:56 compute-1 sudo[76296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76296]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:56 compute-1 sudo[76321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76321]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:27:56 compute-1 sudo[76346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76346]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:56 compute-1 sudo[76371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76371]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:56 compute-1 sudo[76396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring.new
Dec 06 06:27:56 compute-1 sudo[76396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:56 compute-1 sudo[76396]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:57 compute-1 sudo[76444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:57 compute-1 sudo[76444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:57 compute-1 sudo[76444]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:57 compute-1 sudo[76469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring.new
Dec 06 06:27:57 compute-1 sudo[76469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:57 compute-1 sudo[76469]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:57 compute-1 sudo[76494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:57 compute-1 sudo[76494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:57 compute-1 sudo[76494]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:57 compute-1 sudo[76520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring.new
Dec 06 06:27:57 compute-1 sudo[76520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:57 compute-1 sudo[76520]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:57 compute-1 sudo[76545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:57 compute-1 sudo[76545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:57 compute-1 sudo[76545]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:57 compute-1 sudo[76570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring.new /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:27:57 compute-1 sudo[76570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:57 compute-1 sudo[76570]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:57 compute-1 sudo[76595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:57 compute-1 sudo[76595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:57 compute-1 sudo[76595]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:57 compute-1 sudo[76620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:27:57 compute-1 sudo[76620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:57 compute-1 sudo[76620]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:57 compute-1 sudo[76645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:27:57 compute-1 sudo[76645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:57 compute-1 sudo[76645]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:57 compute-1 sudo[76670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:27:57 compute-1 sudo[76670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:27:57 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:27:57 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:27:58 compute-1 podman[76735]: 2025-12-06 06:27:58.065759117 +0000 UTC m=+0.093283055 container create 34af14521c6c03483c9aa3115218de228b28e5c39634fe4e3fca5542b6ae7ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:27:58 compute-1 podman[76735]: 2025-12-06 06:27:57.994458195 +0000 UTC m=+0.021982193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:27:58 compute-1 systemd[1]: Started libpod-conmon-34af14521c6c03483c9aa3115218de228b28e5c39634fe4e3fca5542b6ae7ca8.scope.
Dec 06 06:27:58 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:27:58 compute-1 podman[76735]: 2025-12-06 06:27:58.131707961 +0000 UTC m=+0.159231919 container init 34af14521c6c03483c9aa3115218de228b28e5c39634fe4e3fca5542b6ae7ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:27:58 compute-1 podman[76735]: 2025-12-06 06:27:58.137761976 +0000 UTC m=+0.165285914 container start 34af14521c6c03483c9aa3115218de228b28e5c39634fe4e3fca5542b6ae7ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:27:58 compute-1 affectionate_fermi[76751]: 167 167
Dec 06 06:27:58 compute-1 systemd[1]: libpod-34af14521c6c03483c9aa3115218de228b28e5c39634fe4e3fca5542b6ae7ca8.scope: Deactivated successfully.
Dec 06 06:27:58 compute-1 conmon[76751]: conmon 34af14521c6c03483c9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34af14521c6c03483c9aa3115218de228b28e5c39634fe4e3fca5542b6ae7ca8.scope/container/memory.events
Dec 06 06:27:58 compute-1 podman[76735]: 2025-12-06 06:27:58.142845726 +0000 UTC m=+0.170369664 container attach 34af14521c6c03483c9aa3115218de228b28e5c39634fe4e3fca5542b6ae7ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:27:58 compute-1 podman[76735]: 2025-12-06 06:27:58.143187855 +0000 UTC m=+0.170711793 container died 34af14521c6c03483c9aa3115218de228b28e5c39634fe4e3fca5542b6ae7ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 06:27:58 compute-1 podman[76735]: 2025-12-06 06:27:58.183894749 +0000 UTC m=+0.211418687 container remove 34af14521c6c03483c9aa3115218de228b28e5c39634fe4e3fca5542b6ae7ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_fermi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 06:27:58 compute-1 systemd[1]: libpod-conmon-34af14521c6c03483c9aa3115218de228b28e5c39634fe4e3fca5542b6ae7ca8.scope: Deactivated successfully.
Dec 06 06:27:58 compute-1 systemd[1]: Reloading.
Dec 06 06:27:58 compute-1 systemd-sysv-generator[76799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:27:58 compute-1 systemd-rc-local-generator[76793]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:27:58 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:27:58 compute-1 systemd[1]: Reloading.
Dec 06 06:27:58 compute-1 systemd-rc-local-generator[76835]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:27:58 compute-1 systemd-sysv-generator[76838]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:27:59 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Dec 06 06:27:59 compute-1 systemd[1]: Reloading.
Dec 06 06:27:59 compute-1 systemd-rc-local-generator[76876]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:27:59 compute-1 systemd-sysv-generator[76879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:27:59 compute-1 systemd[1]: Reached target Ceph cluster 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:27:59 compute-1 systemd[1]: Reloading.
Dec 06 06:27:59 compute-1 systemd-rc-local-generator[76912]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:27:59 compute-1 systemd-sysv-generator[76915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:27:59 compute-1 systemd[1]: Reloading.
Dec 06 06:27:59 compute-1 systemd-rc-local-generator[76951]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:27:59 compute-1 systemd-sysv-generator[76955]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:27:59 compute-1 systemd[1]: Created slice Slice /system/ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:27:59 compute-1 systemd[1]: Reached target System Time Set.
Dec 06 06:27:59 compute-1 systemd[1]: Reached target System Time Synchronized.
Dec 06 06:27:59 compute-1 systemd[1]: Starting Ceph crash.compute-1 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:28:00 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:28:00 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:28:00 compute-1 podman[77009]: 2025-12-06 06:28:00.172869882 +0000 UTC m=+0.040196712 container create 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 06 06:28:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d92487ca8a07fd461354de96ef0404c855b97c7a06817a107859872ee4ea235/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d92487ca8a07fd461354de96ef0404c855b97c7a06817a107859872ee4ea235/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d92487ca8a07fd461354de96ef0404c855b97c7a06817a107859872ee4ea235/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:00 compute-1 podman[77009]: 2025-12-06 06:28:00.220886186 +0000 UTC m=+0.088213036 container init 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 06:28:00 compute-1 podman[77009]: 2025-12-06 06:28:00.225931753 +0000 UTC m=+0.093258603 container start 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:00 compute-1 bash[77009]: 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644
Dec 06 06:28:00 compute-1 podman[77009]: 2025-12-06 06:28:00.154764306 +0000 UTC m=+0.022091156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:00 compute-1 systemd[1]: Started Ceph crash.compute-1 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:28:00 compute-1 sudo[76670]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:00 compute-1 sudo[77029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:00 compute-1 sudo[77029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:00 compute-1 sudo[77029]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:00 compute-1 sudo[77054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:00 compute-1 sudo[77054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:00 compute-1 sudo[77054]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1[77024]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 06 06:28:00 compute-1 sudo[77079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:00 compute-1 sudo[77079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:00 compute-1 sudo[77079]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:00 compute-1 sudo[77106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:28:00 compute-1 sudo[77106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1[77024]: 2025-12-06T06:28:00.628+0000 7f035045a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 06 06:28:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1[77024]: 2025-12-06T06:28:00.628+0000 7f035045a640 -1 AuthRegistry(0x7f0348067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 06 06:28:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1[77024]: 2025-12-06T06:28:00.629+0000 7f035045a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 06 06:28:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1[77024]: 2025-12-06T06:28:00.629+0000 7f035045a640 -1 AuthRegistry(0x7f0350459000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 06 06:28:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1[77024]: 2025-12-06T06:28:00.631+0000 7f034e1cf640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 06 06:28:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1[77024]: 2025-12-06T06:28:00.631+0000 7f035045a640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 06 06:28:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1[77024]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 06 06:28:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1[77024]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 06 06:28:00 compute-1 podman[77182]: 2025-12-06 06:28:00.814695206 +0000 UTC m=+0.024631485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:01 compute-1 podman[77182]: 2025-12-06 06:28:01.233922629 +0000 UTC m=+0.443858868 container create 4f4a30db60cdb991b13f981de1960271e554cb5b400cffe51d1329510304d0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_newton, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 06:28:01 compute-1 systemd[1]: Started libpod-conmon-4f4a30db60cdb991b13f981de1960271e554cb5b400cffe51d1329510304d0d8.scope.
Dec 06 06:28:01 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:28:01 compute-1 podman[77182]: 2025-12-06 06:28:01.305130698 +0000 UTC m=+0.515066947 container init 4f4a30db60cdb991b13f981de1960271e554cb5b400cffe51d1329510304d0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_newton, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:01 compute-1 podman[77182]: 2025-12-06 06:28:01.312204971 +0000 UTC m=+0.522141210 container start 4f4a30db60cdb991b13f981de1960271e554cb5b400cffe51d1329510304d0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_newton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:01 compute-1 quirky_newton[77198]: 167 167
Dec 06 06:28:01 compute-1 systemd[1]: libpod-4f4a30db60cdb991b13f981de1960271e554cb5b400cffe51d1329510304d0d8.scope: Deactivated successfully.
Dec 06 06:28:01 compute-1 podman[77182]: 2025-12-06 06:28:01.401827584 +0000 UTC m=+0.611763813 container attach 4f4a30db60cdb991b13f981de1960271e554cb5b400cffe51d1329510304d0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_newton, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:01 compute-1 podman[77182]: 2025-12-06 06:28:01.402609116 +0000 UTC m=+0.612545345 container died 4f4a30db60cdb991b13f981de1960271e554cb5b400cffe51d1329510304d0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:01 compute-1 systemd[1]: var-lib-containers-storage-overlay-0b0c50bdd6b198943dcd7a68603981ae6964f13037b05cd7faf5f79d398ac217-merged.mount: Deactivated successfully.
Dec 06 06:28:01 compute-1 podman[77182]: 2025-12-06 06:28:01.61688977 +0000 UTC m=+0.826825999 container remove 4f4a30db60cdb991b13f981de1960271e554cb5b400cffe51d1329510304d0d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:01 compute-1 systemd[1]: libpod-conmon-4f4a30db60cdb991b13f981de1960271e554cb5b400cffe51d1329510304d0d8.scope: Deactivated successfully.
Dec 06 06:28:01 compute-1 podman[77222]: 2025-12-06 06:28:01.772778596 +0000 UTC m=+0.045849526 container create a2100287073a12ed6debfdd45d695a5c6527f01ed09b6fb454294622942a4af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 06:28:01 compute-1 systemd[1]: Started libpod-conmon-a2100287073a12ed6debfdd45d695a5c6527f01ed09b6fb454294622942a4af9.scope.
Dec 06 06:28:01 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:28:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109a8c9f10cb2c12a8fcceadcd45119bee69c16e2f4451b8b0a9c1c85c373ea4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109a8c9f10cb2c12a8fcceadcd45119bee69c16e2f4451b8b0a9c1c85c373ea4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109a8c9f10cb2c12a8fcceadcd45119bee69c16e2f4451b8b0a9c1c85c373ea4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109a8c9f10cb2c12a8fcceadcd45119bee69c16e2f4451b8b0a9c1c85c373ea4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109a8c9f10cb2c12a8fcceadcd45119bee69c16e2f4451b8b0a9c1c85c373ea4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:01 compute-1 podman[77222]: 2025-12-06 06:28:01.750486956 +0000 UTC m=+0.023557906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:01 compute-1 podman[77222]: 2025-12-06 06:28:01.851100199 +0000 UTC m=+0.124171149 container init a2100287073a12ed6debfdd45d695a5c6527f01ed09b6fb454294622942a4af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:28:01 compute-1 podman[77222]: 2025-12-06 06:28:01.857946067 +0000 UTC m=+0.131016977 container start a2100287073a12ed6debfdd45d695a5c6527f01ed09b6fb454294622942a4af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:01 compute-1 podman[77222]: 2025-12-06 06:28:01.861542486 +0000 UTC m=+0.134613426 container attach a2100287073a12ed6debfdd45d695a5c6527f01ed09b6fb454294622942a4af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 06:28:02 compute-1 silly_williams[77238]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:28:02 compute-1 silly_williams[77238]: --> relative data size: 1.0
Dec 06 06:28:02 compute-1 silly_williams[77238]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 06:28:02 compute-1 silly_williams[77238]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e647688b-053d-4d67-9db1-a787df62bd8a
Dec 06 06:28:03 compute-1 silly_williams[77238]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 06:28:03 compute-1 lvm[77286]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 06:28:03 compute-1 lvm[77286]: VG ceph_vg0 finished
Dec 06 06:28:03 compute-1 silly_williams[77238]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 06 06:28:03 compute-1 silly_williams[77238]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 06 06:28:03 compute-1 silly_williams[77238]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 06:28:03 compute-1 silly_williams[77238]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:03 compute-1 silly_williams[77238]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 06 06:28:03 compute-1 silly_williams[77238]:  stderr: got monmap epoch 1
Dec 06 06:28:03 compute-1 silly_williams[77238]: --> Creating keyring file for osd.1
Dec 06 06:28:03 compute-1 silly_williams[77238]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 06 06:28:03 compute-1 silly_williams[77238]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 06 06:28:03 compute-1 silly_williams[77238]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid e647688b-053d-4d67-9db1-a787df62bd8a --setuser ceph --setgroup ceph
Dec 06 06:28:06 compute-1 silly_williams[77238]:  stderr: 2025-12-06T06:28:03.761+0000 7ffb1b65b740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 06 06:28:06 compute-1 silly_williams[77238]:  stderr: 2025-12-06T06:28:03.761+0000 7ffb1b65b740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 06 06:28:06 compute-1 silly_williams[77238]:  stderr: 2025-12-06T06:28:03.761+0000 7ffb1b65b740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 06 06:28:06 compute-1 silly_williams[77238]:  stderr: 2025-12-06T06:28:03.761+0000 7ffb1b65b740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 06 06:28:06 compute-1 silly_williams[77238]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 06 06:28:06 compute-1 silly_williams[77238]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 06:28:06 compute-1 silly_williams[77238]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 06 06:28:06 compute-1 silly_williams[77238]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:06 compute-1 silly_williams[77238]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:06 compute-1 silly_williams[77238]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 06:28:06 compute-1 silly_williams[77238]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 06:28:06 compute-1 silly_williams[77238]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 06 06:28:06 compute-1 silly_williams[77238]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 06 06:28:06 compute-1 systemd[1]: libpod-a2100287073a12ed6debfdd45d695a5c6527f01ed09b6fb454294622942a4af9.scope: Deactivated successfully.
Dec 06 06:28:06 compute-1 systemd[1]: libpod-a2100287073a12ed6debfdd45d695a5c6527f01ed09b6fb454294622942a4af9.scope: Consumed 2.330s CPU time.
Dec 06 06:28:06 compute-1 conmon[77238]: conmon a2100287073a12ed6deb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a2100287073a12ed6debfdd45d695a5c6527f01ed09b6fb454294622942a4af9.scope/container/memory.events
Dec 06 06:28:06 compute-1 podman[78197]: 2025-12-06 06:28:06.434948765 +0000 UTC m=+0.026286840 container died a2100287073a12ed6debfdd45d695a5c6527f01ed09b6fb454294622942a4af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-109a8c9f10cb2c12a8fcceadcd45119bee69c16e2f4451b8b0a9c1c85c373ea4-merged.mount: Deactivated successfully.
Dec 06 06:28:06 compute-1 podman[78197]: 2025-12-06 06:28:06.512287832 +0000 UTC m=+0.103625857 container remove a2100287073a12ed6debfdd45d695a5c6527f01ed09b6fb454294622942a4af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_williams, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 06:28:06 compute-1 systemd[1]: libpod-conmon-a2100287073a12ed6debfdd45d695a5c6527f01ed09b6fb454294622942a4af9.scope: Deactivated successfully.
Dec 06 06:28:06 compute-1 sudo[77106]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:06 compute-1 sudo[78213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:06 compute-1 sudo[78213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:06 compute-1 sudo[78213]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:06 compute-1 sudo[78238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:06 compute-1 sudo[78238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:06 compute-1 sudo[78238]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:06 compute-1 sudo[78263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:06 compute-1 sudo[78263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:06 compute-1 sudo[78263]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:06 compute-1 sudo[78288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:28:06 compute-1 sudo[78288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:07 compute-1 podman[78355]: 2025-12-06 06:28:07.103358217 +0000 UTC m=+0.090362723 container create 549e001aa984aa2b2baebdc421f377562a60804ac07677e0750a731ee46edebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:07 compute-1 podman[78355]: 2025-12-06 06:28:07.034881873 +0000 UTC m=+0.021886409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:07 compute-1 systemd[1]: Started libpod-conmon-549e001aa984aa2b2baebdc421f377562a60804ac07677e0750a731ee46edebb.scope.
Dec 06 06:28:07 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:28:07 compute-1 podman[78355]: 2025-12-06 06:28:07.188322543 +0000 UTC m=+0.175327069 container init 549e001aa984aa2b2baebdc421f377562a60804ac07677e0750a731ee46edebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:28:07 compute-1 podman[78355]: 2025-12-06 06:28:07.194654677 +0000 UTC m=+0.181659183 container start 549e001aa984aa2b2baebdc421f377562a60804ac07677e0750a731ee46edebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:07 compute-1 inspiring_pasteur[78371]: 167 167
Dec 06 06:28:07 compute-1 systemd[1]: libpod-549e001aa984aa2b2baebdc421f377562a60804ac07677e0750a731ee46edebb.scope: Deactivated successfully.
Dec 06 06:28:07 compute-1 podman[78355]: 2025-12-06 06:28:07.210579722 +0000 UTC m=+0.197584238 container attach 549e001aa984aa2b2baebdc421f377562a60804ac07677e0750a731ee46edebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 06:28:07 compute-1 podman[78355]: 2025-12-06 06:28:07.211551509 +0000 UTC m=+0.198556015 container died 549e001aa984aa2b2baebdc421f377562a60804ac07677e0750a731ee46edebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:07 compute-1 systemd[1]: var-lib-containers-storage-overlay-197a0525e7ec0ce8bb233cdda7cb22fe64d2ad6423e9c9bdbceffe5ca928f405-merged.mount: Deactivated successfully.
Dec 06 06:28:07 compute-1 podman[78355]: 2025-12-06 06:28:07.293338947 +0000 UTC m=+0.280343453 container remove 549e001aa984aa2b2baebdc421f377562a60804ac07677e0750a731ee46edebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:28:07 compute-1 systemd[1]: libpod-conmon-549e001aa984aa2b2baebdc421f377562a60804ac07677e0750a731ee46edebb.scope: Deactivated successfully.
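
The inspiring_pasteur container above lives for well under a second and emits only "167 167". That is consistent with cephadm probing the uid/gid of the ceph user baked into the image (uid and gid 167 in these builds) before touching OSD state. A minimal sketch of such a probe, assuming podman is on PATH and stat as the entrypoint; the exact probe command cephadm used is not recorded in this log:

    # Sketch only: one way to read the ceph uid/gid out of the image.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    uid, gid = out.split()  # expect ("167", "167"), matching the line above
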
Dec 06 06:28:07 compute-1 podman[78397]: 2025-12-06 06:28:07.456678357 +0000 UTC m=+0.057120404 container create 6db39eb4005b0b3e89bfc59b0d8e8e73be5e29ccc67b6f75b979ea60ddfb97be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 06:28:07 compute-1 systemd[1]: Started libpod-conmon-6db39eb4005b0b3e89bfc59b0d8e8e73be5e29ccc67b6f75b979ea60ddfb97be.scope.
Dec 06 06:28:07 compute-1 podman[78397]: 2025-12-06 06:28:07.425296878 +0000 UTC m=+0.025738945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:07 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:28:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78796a43e3596895436b9d847f6c3a2d9da4c1e667b7205c95a9824be8ad9286/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78796a43e3596895436b9d847f6c3a2d9da4c1e667b7205c95a9824be8ad9286/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78796a43e3596895436b9d847f6c3a2d9da4c1e667b7205c95a9824be8ad9286/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78796a43e3596895436b9d847f6c3a2d9da4c1e667b7205c95a9824be8ad9286/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:07 compute-1 podman[78397]: 2025-12-06 06:28:07.57190543 +0000 UTC m=+0.172347487 container init 6db39eb4005b0b3e89bfc59b0d8e8e73be5e29ccc67b6f75b979ea60ddfb97be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_darwin, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:07 compute-1 podman[78397]: 2025-12-06 06:28:07.579731875 +0000 UTC m=+0.180173922 container start 6db39eb4005b0b3e89bfc59b0d8e8e73be5e29ccc67b6f75b979ea60ddfb97be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_darwin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:07 compute-1 podman[78397]: 2025-12-06 06:28:07.587264541 +0000 UTC m=+0.187706608 container attach 6db39eb4005b0b3e89bfc59b0d8e8e73be5e29ccc67b6f75b979ea60ddfb97be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_darwin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]: {
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:     "1": [
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:         {
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             "devices": [
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "/dev/loop3"
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             ],
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             "lv_name": "ceph_lv0",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             "lv_size": "7511998464",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=AwlNRs-g8fG-8V5k-egVD-Y1RS-UG1l-kfzKaS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e647688b-053d-4d67-9db1-a787df62bd8a,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             "lv_uuid": "AwlNRs-g8fG-8V5k-egVD-Y1RS-UG1l-kfzKaS",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             "name": "ceph_lv0",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             "tags": {
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.block_uuid": "AwlNRs-g8fG-8V5k-egVD-Y1RS-UG1l-kfzKaS",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.cluster_name": "ceph",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.crush_device_class": "",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.encrypted": "0",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.osd_fsid": "e647688b-053d-4d67-9db1-a787df62bd8a",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.osd_id": "1",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.type": "block",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:                 "ceph.vdo": "0"
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             },
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             "type": "block",
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:             "vg_name": "ceph_vg0"
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:         }
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]:     ]
Dec 06 06:28:08 compute-1 unruffled_darwin[78413]: }
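
The block above is the ceph-volume lvm list --format json payload relayed through the container's stdout: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags present both as a raw string (lv_tags) and parsed (tags). A short parsing sketch, where raw_output stands in for the captured JSON text:

    # Sketch: extract some fields of interest from the payload above.
    import json

    payload = json.loads(raw_output)  # raw_output: the JSON logged above
    for osd_id, lvs in payload.items():
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"], lv["lv_size"])
    # -> 1 /dev/ceph_vg0/ceph_lv0 e647688b-053d-4d67-9db1-a787df62bd8a 7511998464
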
Dec 06 06:28:08 compute-1 systemd[1]: libpod-6db39eb4005b0b3e89bfc59b0d8e8e73be5e29ccc67b6f75b979ea60ddfb97be.scope: Deactivated successfully.
Dec 06 06:28:08 compute-1 podman[78397]: 2025-12-06 06:28:08.350111908 +0000 UTC m=+0.950553955 container died 6db39eb4005b0b3e89bfc59b0d8e8e73be5e29ccc67b6f75b979ea60ddfb97be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 06:28:08 compute-1 systemd[1]: var-lib-containers-storage-overlay-78796a43e3596895436b9d847f6c3a2d9da4c1e667b7205c95a9824be8ad9286-merged.mount: Deactivated successfully.
Dec 06 06:28:08 compute-1 podman[78397]: 2025-12-06 06:28:08.517964151 +0000 UTC m=+1.118406198 container remove 6db39eb4005b0b3e89bfc59b0d8e8e73be5e29ccc67b6f75b979ea60ddfb97be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 06:28:08 compute-1 systemd[1]: libpod-conmon-6db39eb4005b0b3e89bfc59b0d8e8e73be5e29ccc67b6f75b979ea60ddfb97be.scope: Deactivated successfully.
Dec 06 06:28:08 compute-1 sudo[78288]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:08 compute-1 sudo[78437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:08 compute-1 sudo[78437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:08 compute-1 sudo[78437]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:08 compute-1 sudo[78462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:08 compute-1 sudo[78462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:08 compute-1 sudo[78462]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:08 compute-1 sudo[78487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:08 compute-1 sudo[78487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:08 compute-1 sudo[78487]: pam_unix(sudo:session): session closed for user root
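
The /bin/true and /bin/which python3 round trips that bracket every cephadm call in this log look like the orchestrator's cheap preflight checks: confirm sudo works at all, then confirm a python3 interpreter exists before shipping the cephadm script to it. The local equivalent of the interpreter check, as a sketch:

    # Sketch of the `which python3` check, standard library only.
    import shutil

    python3 = shutil.which("python3")
    if python3 is None:
        raise RuntimeError("no python3 on this host; cephadm cannot run here")
    print(python3)  # e.g. /usr/bin/python3
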
Dec 06 06:28:08 compute-1 sudo[78512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:28:08 compute-1 sudo[78512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:09 compute-1 podman[78578]: 2025-12-06 06:28:09.112580494 +0000 UTC m=+0.085748268 container create f67c02a2827990aca8f75fbae08d63f5df6e5c84a7678f19a474aa0b4218ec07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:28:09 compute-1 podman[78578]: 2025-12-06 06:28:09.045046516 +0000 UTC m=+0.018214320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:09 compute-1 systemd[1]: Started libpod-conmon-f67c02a2827990aca8f75fbae08d63f5df6e5c84a7678f19a474aa0b4218ec07.scope.
Dec 06 06:28:09 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:28:09 compute-1 podman[78578]: 2025-12-06 06:28:09.2768839 +0000 UTC m=+0.250051704 container init f67c02a2827990aca8f75fbae08d63f5df6e5c84a7678f19a474aa0b4218ec07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 06:28:09 compute-1 podman[78578]: 2025-12-06 06:28:09.283673046 +0000 UTC m=+0.256840820 container start f67c02a2827990aca8f75fbae08d63f5df6e5c84a7678f19a474aa0b4218ec07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:09 compute-1 compassionate_mccarthy[78594]: 167 167
Dec 06 06:28:09 compute-1 systemd[1]: libpod-f67c02a2827990aca8f75fbae08d63f5df6e5c84a7678f19a474aa0b4218ec07.scope: Deactivated successfully.
Dec 06 06:28:09 compute-1 podman[78578]: 2025-12-06 06:28:09.463709244 +0000 UTC m=+0.436877068 container attach f67c02a2827990aca8f75fbae08d63f5df6e5c84a7678f19a474aa0b4218ec07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:09 compute-1 podman[78578]: 2025-12-06 06:28:09.46467145 +0000 UTC m=+0.437839234 container died f67c02a2827990aca8f75fbae08d63f5df6e5c84a7678f19a474aa0b4218ec07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 06:28:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-f431e52834bfe79d12cdd61d1705f8948ae837648c5368e9a7cdb25a4489557d-merged.mount: Deactivated successfully.
Dec 06 06:28:09 compute-1 podman[78578]: 2025-12-06 06:28:09.659622165 +0000 UTC m=+0.632789939 container remove f67c02a2827990aca8f75fbae08d63f5df6e5c84a7678f19a474aa0b4218ec07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 06:28:09 compute-1 systemd[1]: libpod-conmon-f67c02a2827990aca8f75fbae08d63f5df6e5c84a7678f19a474aa0b4218ec07.scope: Deactivated successfully.
Dec 06 06:28:10 compute-1 podman[78626]: 2025-12-06 06:28:09.938165158 +0000 UTC m=+0.023019711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:11 compute-1 podman[78626]: 2025-12-06 06:28:11.079945505 +0000 UTC m=+1.164800038 container create 5e970d29eb99882771a6f7ba6fd2353c0a1c3052b3a36dba58af50947e658c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:11 compute-1 systemd[1]: Started libpod-conmon-5e970d29eb99882771a6f7ba6fd2353c0a1c3052b3a36dba58af50947e658c4f.scope.
Dec 06 06:28:11 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:28:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca4c779dd54524d6c99743757375f2b41d69447ea854e2c511fbf4e035ed431/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca4c779dd54524d6c99743757375f2b41d69447ea854e2c511fbf4e035ed431/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca4c779dd54524d6c99743757375f2b41d69447ea854e2c511fbf4e035ed431/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca4c779dd54524d6c99743757375f2b41d69447ea854e2c511fbf4e035ed431/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca4c779dd54524d6c99743757375f2b41d69447ea854e2c511fbf4e035ed431/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:11 compute-1 podman[78626]: 2025-12-06 06:28:11.251588182 +0000 UTC m=+1.336442725 container init 5e970d29eb99882771a6f7ba6fd2353c0a1c3052b3a36dba58af50947e658c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:11 compute-1 podman[78626]: 2025-12-06 06:28:11.258209604 +0000 UTC m=+1.343064137 container start 5e970d29eb99882771a6f7ba6fd2353c0a1c3052b3a36dba58af50947e658c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:28:11 compute-1 podman[78626]: 2025-12-06 06:28:11.279827235 +0000 UTC m=+1.364681768 container attach 5e970d29eb99882771a6f7ba6fd2353c0a1c3052b3a36dba58af50947e658c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate-test, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:11 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate-test[78643]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 06 06:28:11 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate-test[78643]:                             [--no-systemd] [--no-tmpfs]
Dec 06 06:28:11 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate-test[78643]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 06 06:28:11 compute-1 systemd[1]: libpod-5e970d29eb99882771a6f7ba6fd2353c0a1c3052b3a36dba58af50947e658c4f.scope: Deactivated successfully.
Dec 06 06:28:11 compute-1 podman[78626]: 2025-12-06 06:28:11.98270278 +0000 UTC m=+2.067557313 container died 5e970d29eb99882771a6f7ba6fd2353c0a1c3052b3a36dba58af50947e658c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate-test, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-fca4c779dd54524d6c99743757375f2b41d69447ea854e2c511fbf4e035ed431-merged.mount: Deactivated successfully.
Dec 06 06:28:12 compute-1 podman[78626]: 2025-12-06 06:28:12.135309347 +0000 UTC m=+2.220163880 container remove 5e970d29eb99882771a6f7ba6fd2353c0a1c3052b3a36dba58af50947e658c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate-test, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:12 compute-1 systemd[1]: libpod-conmon-5e970d29eb99882771a6f7ba6fd2353c0a1c3052b3a36dba58af50947e658c4f.scope: Deactivated successfully.
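
The osd-1-activate-test container exits on an argparse error for a flag literally named --bad-option. That reads as a deliberate capability probe rather than a bug: an existing subcommand rejects an unknown flag with "unrecognized arguments", while a missing subcommand fails differently, so the error text itself tells the caller that a bare `ceph-volume activate` is available before the real activation is attempted. A self-contained demonstration of the distinction:

    # argparse separates the two failures such a probe can rely on:
    #   unknown flag on a real subcommand -> "unrecognized arguments: --bad-option"
    #   unknown subcommand                -> "invalid choice: ..."
    import argparse

    parser = argparse.ArgumentParser(prog="ceph-volume")
    sub = parser.add_subparsers(dest="cmd")
    activate = sub.add_parser("activate")
    activate.add_argument("--osd-id")

    try:
        parser.parse_args(["activate", "--bad-option"])
    except SystemExit:
        pass  # stderr ends with: error: unrecognized arguments: --bad-option
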
Dec 06 06:28:12 compute-1 systemd[1]: Reloading.
Dec 06 06:28:12 compute-1 systemd-rc-local-generator[78707]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:28:12 compute-1 systemd-sysv-generator[78710]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:28:12 compute-1 systemd[1]: Reloading.
Dec 06 06:28:12 compute-1 systemd-rc-local-generator[78745]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:28:12 compute-1 systemd-sysv-generator[78749]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:28:12 compute-1 systemd[1]: Starting Ceph osd.1 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:28:13 compute-1 podman[78804]: 2025-12-06 06:28:13.063729415 +0000 UTC m=+0.042205586 container create ba279261d16fcd445015c19bef0e9b9f63e4a61782e971c4314c6c834e495f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:13 compute-1 podman[78804]: 2025-12-06 06:28:13.04525535 +0000 UTC m=+0.023731541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:13 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:28:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ef8d3ec9720d5af80417dcbebbb14f3e34b0ec3834c5dbb7ebe88d8e2634e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ef8d3ec9720d5af80417dcbebbb14f3e34b0ec3834c5dbb7ebe88d8e2634e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ef8d3ec9720d5af80417dcbebbb14f3e34b0ec3834c5dbb7ebe88d8e2634e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ef8d3ec9720d5af80417dcbebbb14f3e34b0ec3834c5dbb7ebe88d8e2634e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ef8d3ec9720d5af80417dcbebbb14f3e34b0ec3834c5dbb7ebe88d8e2634e8/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:13 compute-1 podman[78804]: 2025-12-06 06:28:13.226065557 +0000 UTC m=+0.204541748 container init ba279261d16fcd445015c19bef0e9b9f63e4a61782e971c4314c6c834e495f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 06:28:13 compute-1 podman[78804]: 2025-12-06 06:28:13.233050669 +0000 UTC m=+0.211526830 container start ba279261d16fcd445015c19bef0e9b9f63e4a61782e971c4314c6c834e495f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:28:13 compute-1 podman[78804]: 2025-12-06 06:28:13.276545129 +0000 UTC m=+0.255021330 container attach ba279261d16fcd445015c19bef0e9b9f63e4a61782e971c4314c6c834e495f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 06:28:14 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate[78819]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 06:28:14 compute-1 bash[78804]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 06:28:14 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate[78819]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 06:28:14 compute-1 bash[78804]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 06:28:14 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate[78819]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 06:28:14 compute-1 bash[78804]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 06:28:14 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate[78819]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 06:28:14 compute-1 bash[78804]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 06:28:14 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate[78819]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:14 compute-1 bash[78804]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:14 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate[78819]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 06:28:14 compute-1 bash[78804]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 06 06:28:14 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate[78819]: --> ceph-volume raw activate successful for osd ID: 1
Dec 06 06:28:14 compute-1 bash[78804]: --> ceph-volume raw activate successful for osd ID: 1
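
The -osd-1-activate container replays a fixed recipe, logged twice because both the container's log tag and the unit's bash wrapper capture the same stdout: chown the OSD directory, rebuild its contents from the bluestore label with ceph-bluestore-tool prime-osd-dir, hand the device nodes to ceph:ceph, and symlink block into place. The same six commands, lifted straight from the log into a sketch:

    # Reconstruction of the activation steps logged above (paths from the log);
    # a sketch of the sequence, not a substitute for `ceph-volume raw activate`.
    import subprocess

    osd_dir = "/var/lib/ceph/osd/ceph-1"
    dev = "/dev/mapper/ceph_vg0-ceph_lv0"

    subprocess.run(["chown", "-R", "ceph:ceph", osd_dir], check=True)
    subprocess.run(["ceph-bluestore-tool", "prime-osd-dir", "--path", osd_dir,
                    "--no-mon-config", "--dev", dev], check=True)
    subprocess.run(["chown", "-h", "ceph:ceph", dev], check=True)
    subprocess.run(["chown", "-R", "ceph:ceph", "/dev/dm-0"], check=True)
    subprocess.run(["ln", "-s", dev, osd_dir + "/block"], check=True)
    subprocess.run(["chown", "-R", "ceph:ceph", osd_dir], check=True)
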
Dec 06 06:28:14 compute-1 systemd[1]: libpod-ba279261d16fcd445015c19bef0e9b9f63e4a61782e971c4314c6c834e495f57.scope: Deactivated successfully.
Dec 06 06:28:14 compute-1 podman[78804]: 2025-12-06 06:28:14.22184247 +0000 UTC m=+1.200318651 container died ba279261d16fcd445015c19bef0e9b9f63e4a61782e971c4314c6c834e495f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 06:28:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-88ef8d3ec9720d5af80417dcbebbb14f3e34b0ec3834c5dbb7ebe88d8e2634e8-merged.mount: Deactivated successfully.
Dec 06 06:28:14 compute-1 podman[78804]: 2025-12-06 06:28:14.547841371 +0000 UTC m=+1.526317542 container remove ba279261d16fcd445015c19bef0e9b9f63e4a61782e971c4314c6c834e495f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 06:28:14 compute-1 podman[78982]: 2025-12-06 06:28:14.695239615 +0000 UTC m=+0.018187499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:15 compute-1 podman[78982]: 2025-12-06 06:28:15.197672674 +0000 UTC m=+0.520620548 container create ebce93884e6320c2dff74506dc8940c20112d86fe8463499957c15da8139d6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 06:28:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d2bab198749d906c76f1e6ff22225ad9a48b75d00f2e8eaf5c51b5a1626f66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d2bab198749d906c76f1e6ff22225ad9a48b75d00f2e8eaf5c51b5a1626f66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d2bab198749d906c76f1e6ff22225ad9a48b75d00f2e8eaf5c51b5a1626f66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d2bab198749d906c76f1e6ff22225ad9a48b75d00f2e8eaf5c51b5a1626f66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d2bab198749d906c76f1e6ff22225ad9a48b75d00f2e8eaf5c51b5a1626f66/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:15 compute-1 podman[78982]: 2025-12-06 06:28:15.598883915 +0000 UTC m=+0.921831789 container init ebce93884e6320c2dff74506dc8940c20112d86fe8463499957c15da8139d6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:15 compute-1 podman[78982]: 2025-12-06 06:28:15.60456414 +0000 UTC m=+0.927512004 container start ebce93884e6320c2dff74506dc8940c20112d86fe8463499957c15da8139d6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 06:28:15 compute-1 ceph-osd[79002]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:28:15 compute-1 ceph-osd[79002]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec 06 06:28:15 compute-1 ceph-osd[79002]: pidfile_write: ignore empty --pid-file
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bdev(0x55b5510f7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bdev(0x55b5510f7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bdev(0x55b5510f7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bdev(0x55b5510f7800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bdev(0x55b551f39800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bdev(0x55b551f39800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bdev(0x55b551f39800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bdev(0x55b551f39800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bdev(0x55b551f39800 /var/lib/ceph/osd/ceph-1/block) close
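
Two recurring messages in the ceph-osd startup above are informational rather than failures: F_SET_FILE_RW_HINT returning EINVAL just means the write-hint ioctl is not supported on this device, and "st_blksize 512 ... using bdev_block_size 4096 anyway" records that the LV (carved from /dev/loop3 per the lvm list output earlier) advertises 512-byte logical blocks while BlueStore keeps its own 4 KiB block size. The reported st_blksize can be checked directly; a sketch that needs read access to the device:

    # Sketch: fstat the opened device, roughly the way the bdev layer does.
    import os

    fd = os.open("/dev/dm-0", os.O_RDONLY)  # device path from the log
    try:
        print(os.fstat(fd).st_blksize)  # expected 512 here, per the log
    finally:
        os.close(fd)
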
Dec 06 06:28:15 compute-1 bash[78982]: ebce93884e6320c2dff74506dc8940c20112d86fe8463499957c15da8139d6ae
Dec 06 06:28:15 compute-1 systemd[1]: Started Ceph osd.1 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
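
The bash[78982] line printing a bare container id just before "Started Ceph osd.1 ..." has the shape of a detached start: the unit's wrapper launches the OSD container in the background and systemd marks the service up once the id comes back. One way to confirm the container is running, using the name from the container start event above:

    # Sketch: list the OSD container by the name podman logged above.
    import subprocess

    subprocess.run(
        ["podman", "ps",
         "--filter", "name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1",
         "--format", "{{.ID}} {{.Names}} {{.Status}}"],
        check=True,
    )
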
Dec 06 06:28:15 compute-1 sudo[78512]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:15 compute-1 sudo[79015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:15 compute-1 ceph-osd[79002]: bdev(0x55b5510f7800 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 06:28:15 compute-1 sudo[79015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:15 compute-1 sudo[79015]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:15 compute-1 sudo[79040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:15 compute-1 sudo[79040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:15 compute-1 sudo[79040]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:16 compute-1 sudo[79067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:16 compute-1 sudo[79067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:16 compute-1 sudo[79067]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:16 compute-1 sudo[79092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:28:16 compute-1 sudo[79092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:16 compute-1 ceph-osd[79002]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 06 06:28:16 compute-1 ceph-osd[79002]: load: jerasure load: lrc 
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 06:28:16 compute-1 podman[79164]: 2025-12-06 06:28:16.40112962 +0000 UTC m=+0.020885823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:16 compute-1 podman[79164]: 2025-12-06 06:28:16.58453525 +0000 UTC m=+0.204291433 container create 6f6a0996b6497e243d14d367945b71eb1fe2f0b4bdb52d68644d692213c4cbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 06:28:16 compute-1 ceph-osd[79002]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 06 06:28:16 compute-1 ceph-osd[79002]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
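
The two mClock numbers above are internally consistent: 157286400 bytes/second is exactly 150 MiB/s, and dividing it by the per-IO cost gives an IOPS capacity of about 315, which lines up with the stock HDD IOPS capacity in this release (plausibly osd_mclock_max_capacity_iops_hdd; the log does not print the option names). A worked check:

    # Worked check of the set_osd_capacity_params_from_config line above.
    capacity_bps = 157286400        # bytes/second per shard, from the log
    cost_per_io = 499321.90         # bytes/io, from the log

    print(capacity_bps / 2**20)               # 150.0 -> 150 MiB/s
    print(round(capacity_bps / cost_per_io))  # 315   -> implied IOPS capacity
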
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbac00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbb400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbb400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbb400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbb400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluefs mount
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluefs mount shared_bdev_used = 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
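
The db_paths sizes above are derived, not configured: 7136398540 is 95% of the 7511998464-byte block device reported by the bdev open lines, truncated to an integer, which is consistent with BlueStore advertising roughly a 95% share of the shared device to RocksDB for both db and db.slow. The arithmetic:

    # db_paths size versus the device size from the bdev open lines above.
    block_bytes = 7511998464
    print(int(block_bytes * 0.95))  # 7136398540, matching db,7136398540
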
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: RocksDB version: 7.9.2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Git sha 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: DB SUMMARY
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: DB Session ID:  QHT2TG46J4J3OIF2UBET
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: CURRENT file:  CURRENT
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                         Options.error_if_exists: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.create_if_missing: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                                     Options.env: 0x55b551f8bd50
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                                Options.info_log: 0x55b551174b60
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                              Options.statistics: (nil)
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.use_fsync: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                              Options.db_log_dir: 
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                                 Options.wal_dir: db.wal
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.write_buffer_manager: 0x55b552094460
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.unordered_write: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.row_cache: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                              Options.wal_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.two_write_queues: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.wal_compression: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.atomic_flush: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.max_background_jobs: 4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.max_background_compactions: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.max_subcompactions: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.max_open_files: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Compression algorithms supported:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kZSTD supported: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kXpressCompression supported: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kBZip2Compression supported: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kLZ4Compression supported: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kZlibCompression supported: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kSnappyCompression supported: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b5511745c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b5511745c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b5511745c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b5511745c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
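The level geometry implied by the values above can be checked by hand: with level_compaction_dynamic_level_bytes disabled, each level's byte target is the previous one times max_bytes_for_level_multiplier (the addtl factors are all 1 here), starting from max_bytes_for_level_base = 1 GiB; max_compaction_bytes = 1677721600 is likewise just 25 x target_file_size_base. A minimal Python sketch of that geometry, using only numbers from this dump (standard RocksDB leveled-compaction semantics, not Ceph-specific):

    # Reproduce the per-level byte budgets from the logged options:
    # base = 1 GiB, multiplier = 8, addtl factors all 1, num_levels = 7.
    base = 1073741824          # Options.max_bytes_for_level_base
    mult = 8.0                 # Options.max_bytes_for_level_multiplier
    addtl = [1] * 7            # Options.max_bytes_for_level_multiplier_addtl[i]

    size = base
    for level in range(1, 7):  # L1..L6; L0 is triggered by file count, not bytes
        print(f"L{level}: {size / 2**30:.0f} GiB")
        size = int(size * mult * addtl[level])

This prints L1 = 1 GiB through L6 = 32768 GiB, i.e. the tree can hold roughly 36 TiB of SST data before the bottom level's target is exceeded.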
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b5511745c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
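Under these settings a flush is batched until min_write_buffer_number_to_merge memtables accumulate, and writes can queue up to max_write_buffer_number memtables in the worst case, so the per-column-family memory envelope follows directly from the numbers logged above. A back-of-the-envelope sketch (standard RocksDB memtable accounting, assumed here rather than taken from Ceph code):

    # Memtable budget per column family, values copied from the dump above.
    write_buffer_size = 16 * 2**20   # Options.write_buffer_size (16 MiB)
    max_write_buffer_number = 64     # Options.max_write_buffer_number
    min_merge = 6                    # Options.min_write_buffer_number_to_merge

    flush_batch = min_merge * write_buffer_size            # merged before flush
    ceiling = max_write_buffer_number * write_buffer_size  # worst-case backlog
    print(f"flush batch: {flush_batch / 2**20:.0f} MiB, "
          f"worst case: {ceiling / 2**30:.0f} GiB per CF")

That is a ~96 MiB flush unit with a 1 GiB hard ceiling per column family, which is why BlueStore keeps the individual write_buffer_size small.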
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b5511745c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
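The three level-0 thresholds repeated in each dump encode RocksDB's standard write-throttle ladder: compaction is scheduled at 8 L0 files, foreground writes are slowed at 20, and stalled outright at 36. A small illustrative classifier (hypothetical helper, not part of any shipped tool; thresholds taken verbatim from the options above):

    # Classify the L0 state for a given SST file count, using the logged
    # triggers: 8 (compact), 20 (slowdown), 36 (stop).
    def l0_state(l0_files: int) -> str:
        if l0_files >= 36:   # level0_stop_writes_trigger
            return "writes stopped"
        if l0_files >= 20:   # level0_slowdown_writes_trigger
            return "writes slowed"
        if l0_files >= 8:    # level0_file_num_compaction_trigger
            return "compaction pending"
        return "normal"

    for n in (4, 8, 20, 36):
        print(n, "->", l0_state(n))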
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b5511745c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
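Note that all p-* column families share one BinnedLRUCache (0x55b55116add0, capacity 483,183,820 bytes, about 461 MiB), while the O-* families that follow share a second one (0x55b55116a430, 536,870,912 bytes = 512 MiB); BinnedLRUCache is Ceph's variant of RocksDB's sharded LRU cache. With num_shard_bits = 4 each cache is split into 2**4 = 16 shards, so per-shard capacity works out as below (plain arithmetic on the logged values):

    # Per-shard sizing for the two BinnedLRUCache instances in this dump.
    for name, capacity in (("p-*", 483183820), ("O-*", 536870912)):
        shards = 2 ** 4   # num_shard_bits : 4
        print(f"{name}: {shards} shards x {capacity / shards / 2**20:.1f} MiB")

Each shard holds its own LRU list and lock, so 16 shards mainly buys lock-striping under concurrent OSD reads rather than extra capacity.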
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b5511745a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
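Because the same option block is printed once per column family, diffing the dumps by eye is tedious. A hypothetical helper (not part of Ceph or RocksDB; it assumes journal lines in exactly the format shown above, and deliberately skips the indented table_factory sub-options, which lack the Options. prefix) that folds the Options.<key>: <value> lines into one dict per column family:

    # Fold repeated "Options.<key>: <value>" journal lines into a dict per
    # column family so the per-CF dumps can be compared programmatically.
    import re
    from collections import OrderedDict

    CF_RE  = re.compile(r"Options for column family \[(.+?)\]")
    OPT_RE = re.compile(r"Options\.([\w.\[\]]+):\s*(.+)$")

    def parse_dump(lines):
        cfs, current = OrderedDict(), None
        for line in lines:
            if (m := CF_RE.search(line)):
                current = cfs.setdefault(m.group(1), {})
            elif current is not None and (m := OPT_RE.search(line)):
                current[m.group(1)] = m.group(2).strip()
        return cfs

    # Usage sketch: cfs = parse_dump(open("osd.journal"))
    # then e.g. cfs["p-0"] == cfs["O-0"] to confirm the dumps really match.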
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b5511745a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 systemd[1]: Started libpod-conmon-6f6a0996b6497e243d14d367945b71eb1fe2f0b4bdb52d68644d692213c4cbcc.scope.
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b5511745a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3510fcac-bbe5-41ab-8cb5-136284b949ec
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002496769715, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002496769995, "job": 1, "event": "recovery_finished"}
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: freelist init
Dec 06 06:28:16 compute-1 ceph-osd[79002]: freelist _read_cfg
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluefs umount
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbb400 /var/lib/ceph/osd/ceph-1/block) close
Dec 06 06:28:16 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbb400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbb400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbb400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bdev(0x55b551fbb400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluefs mount
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluefs mount shared_bdev_used = 4718592
Dec 06 06:28:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: RocksDB version: 7.9.2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Git sha 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: DB SUMMARY
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: DB Session ID:  QHT2TG46J4J3OIF2UBES
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: CURRENT file:  CURRENT
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                         Options.error_if_exists: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.create_if_missing: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                                     Options.env: 0x55b5512c23f0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                                Options.info_log: 0x55b551f87700
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                              Options.statistics: (nil)
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.use_fsync: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                              Options.db_log_dir: 
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                                 Options.wal_dir: db.wal
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.write_buffer_manager: 0x55b552094960
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.unordered_write: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.row_cache: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                              Options.wal_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.two_write_queues: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.wal_compression: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.atomic_flush: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.max_background_jobs: 4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.max_background_compactions: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.max_subcompactions: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.max_open_files: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Compression algorithms supported:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kZSTD supported: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kXpressCompression supported: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kBZip2Compression supported: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kLZ4Compression supported: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kZlibCompression supported: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         kSnappyCompression supported: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b55117e780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116b610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b55117e780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116b610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
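[editor's sketch, not part of the log] The table_factory block printed for [m-0] maps onto rocksdb::BlockBasedTableOptions. A sketch that reproduces the dumped values; note the log's BinnedLRUCache is Ceph's own cache implementation, for which stock rocksdb::NewLRUCache stands in below, and the 10 bits/key for the bloom filter is an assumption (the dump only prints "bloomfilter"):

#include <memory>
#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

std::shared_ptr<rocksdb::TableFactory> make_table_factory() {
  rocksdb::BlockBasedTableOptions t;
  t.block_size = 4096;                          // block_size: 4096
  t.cache_index_and_filter_blocks = true;       // cache_index_and_filter_blocks: 1
  t.pin_top_level_index_and_filter = true;      // pin_top_level_index_and_filter: 1
  t.whole_key_filtering = true;                 // whole_key_filtering: 1
  t.format_version = 5;                         // format_version: 5
  t.checksum = rocksdb::kXXH3;                  // checksum: 4 == kXXH3
  t.filter_policy.reset(
      rocksdb::NewBloomFilterPolicy(10.0));     // "bloomfilter"; bits/key assumed
  t.block_cache =
      rocksdb::NewLRUCache(483183820, 4);       // capacity ~461 MiB, num_shard_bits: 4
  return std::shared_ptr<rocksdb::TableFactory>(
      rocksdb::NewBlockBasedTableFactory(t));
}

The 4 KiB block_size matches BlueStore's preference for small metadata reads, and caching index/filter blocks inside the shared 461 MiB cache keeps all per-SST metadata accounted against one budget.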
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b55117e780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116b610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
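[editor's sketch, not part of the log] The dumps for [m-0], [m-1], [m-2] and the p-* shards are byte-identical, including the block_cache pointer 0x55b55116b610, because one options object (with one shared table factory) is reused for every column family. A sketch of how such a sharded open looks in the RocksDB C++ API; the helper name open_sharded is illustrative, and in Ceph the shard list is actually derived from bluestore_rocksdb_cfs rather than hard-coded:

#include <rocksdb/db.h>
#include <string>
#include <vector>

rocksdb::Status open_sharded(const std::string& path,
                             const rocksdb::Options& opts,
                             const rocksdb::ColumnFamilyOptions& shared_cf,
                             rocksdb::DB** db) {
  // One ColumnFamilyOptions instance per shard -> identical per-CF dumps;
  // the shared table factory -> identical block_cache pointer in each dump.
  std::vector<rocksdb::ColumnFamilyDescriptor> cfs{
      {rocksdb::kDefaultColumnFamilyName, shared_cf},
      {"m-0", shared_cf}, {"m-1", shared_cf}, {"m-2", shared_cf},
      {"p-0", shared_cf}, {"p-1", shared_cf}};
  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  return rocksdb::DB::Open(opts, path, cfs, &handles, db);
}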
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b55117e780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116b610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:16 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
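[editor's sketch, not part of the log] With level_compaction_dynamic_level_bytes: 0, the level targets follow directly from the dumped values: L1 = max_bytes_for_level_base = 1 GiB and each deeper level is 8x the previous (max_bytes_for_level_multiplier: 8.000000), so L2 = 8 GiB, L3 = 64 GiB, and so on up to num_levels: 7. A small standalone program working through the same arithmetic:

#include <cstdint>
#include <cstdio>

int main() {
  const std::uint64_t base = 1073741824ULL;  // max_bytes_for_level_base: 1 GiB
  const double mult = 8.0;                   // max_bytes_for_level_multiplier
  std::uint64_t target = base;
  for (int level = 1; level <= 6; ++level) { // num_levels: 7; L0 is file-count driven
    std::printf("L%d target: %.0f GiB\n", level,
                target / (1024.0 * 1024.0 * 1024.0));
    target = static_cast<std::uint64_t>(target * mult);
  }
  // Memtable ceiling per CF: write_buffer_size (16 MiB) * max_write_buffer_number
  // (64) = 1 GiB, flushed in merged groups of min_write_buffer_number_to_merge
  // (6), i.e. roughly 96 MiB per flush.
  return 0;
}

L0, by contrast, is governed by file counts: compaction starts at 8 files, writes are slowed at 20, and stopped at 36, per the level0_* triggers in each dump.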
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b55117e780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116b610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
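[editor's sketch, not part of the log] Per-CF values like these are typically not set field by field; Ceph assembles them from the bluestore_rocksdb_options string, which RocksDB parses with its option-string machinery. A hedged sketch using a subset of the dumped values, assuming a RocksDB release that ships the ConfigOptions overload of GetColumnFamilyOptionsFromString (the exact option-string keys below mirror the struct field names, which is RocksDB's convention):

#include <rocksdb/convenience.h>
#include <rocksdb/options.h>
#include <string>

rocksdb::Status parse_cf_options(rocksdb::ColumnFamilyOptions* out) {
  rocksdb::ConfigOptions cfg;
  const std::string opt_str =
      "write_buffer_size=16777216;"
      "max_write_buffer_number=64;"
      "min_write_buffer_number_to_merge=6;"
      "compression=kLZ4Compression;"          // Options.compression: LZ4
      "level0_file_num_compaction_trigger=8;"
      "max_bytes_for_level_base=1073741824;"
      "max_bytes_for_level_multiplier=8;"
      "ttl=2592000";                          // Options.ttl: 30 days
  return rocksdb::GetColumnFamilyOptionsFromString(
      cfg, rocksdb::ColumnFamilyOptions(), opt_str, out);
}

Whatever the string does not mention stays at the RocksDB default, which is consistent with the long tail of default-looking values in each dump above.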
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b55117e780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116b610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b55117e780)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116b610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b55117e740)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116b770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b55117e740)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116b770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b55117e740)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b55116b770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3510fcac-bbe5-41ab-8cb5-136284b949ec
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002497021078, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 06 06:28:17 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 06 06:28:18 compute-1 podman[79164]: 2025-12-06 06:28:18.013108374 +0000 UTC m=+1.632864567 container init 6f6a0996b6497e243d14d367945b71eb1fe2f0b4bdb52d68644d692213c4cbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noether, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:28:18 compute-1 podman[79164]: 2025-12-06 06:28:18.021755661 +0000 UTC m=+1.641511834 container start 6f6a0996b6497e243d14d367945b71eb1fe2f0b4bdb52d68644d692213c4cbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 06:28:18 compute-1 jolly_noether[79315]: 167 167
Dec 06 06:28:18 compute-1 systemd[1]: libpod-6f6a0996b6497e243d14d367945b71eb1fe2f0b4bdb52d68644d692213c4cbcc.scope: Deactivated successfully.
Dec 06 06:28:18 compute-1 ceph-osd[79002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002498030703, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002497, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3510fcac-bbe5-41ab-8cb5-136284b949ec", "db_session_id": "QHT2TG46J4J3OIF2UBES", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:28:18 compute-1 podman[79164]: 2025-12-06 06:28:18.204625036 +0000 UTC m=+1.824381209 container attach 6f6a0996b6497e243d14d367945b71eb1fe2f0b4bdb52d68644d692213c4cbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:28:18 compute-1 podman[79164]: 2025-12-06 06:28:18.206508857 +0000 UTC m=+1.826265050 container died 6f6a0996b6497e243d14d367945b71eb1fe2f0b4bdb52d68644d692213c4cbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noether, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:18 compute-1 ceph-osd[79002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002498645953, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002498, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3510fcac-bbe5-41ab-8cb5-136284b949ec", "db_session_id": "QHT2TG46J4J3OIF2UBES", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:28:19 compute-1 ceph-osd[79002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002499153584, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002498, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3510fcac-bbe5-41ab-8cb5-136284b949ec", "db_session_id": "QHT2TG46J4J3OIF2UBES", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:28:20 compute-1 ceph-osd[79002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002500118696, "job": 1, "event": "recovery_finished"}
Dec 06 06:28:20 compute-1 ceph-osd[79002]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 06 06:28:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-13f09d436aa717a491cb9521a7557621cb92277a0a2dcaad25ced8ee5c18ad4c-merged.mount: Deactivated successfully.
Dec 06 06:28:21 compute-1 podman[79164]: 2025-12-06 06:28:21.082286129 +0000 UTC m=+4.702042302 container remove 6f6a0996b6497e243d14d367945b71eb1fe2f0b4bdb52d68644d692213c4cbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:21 compute-1 systemd[1]: libpod-conmon-6f6a0996b6497e243d14d367945b71eb1fe2f0b4bdb52d68644d692213c4cbcc.scope: Deactivated successfully.
Dec 06 06:28:21 compute-1 podman[79585]: 2025-12-06 06:28:21.22373066 +0000 UTC m=+0.024497761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b55215bc00
Dec 06 06:28:22 compute-1 ceph-osd[79002]: rocksdb: DB pointer 0x55b55207da00
Dec 06 06:28:22 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 06 06:28:22 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 06 06:28:22 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 06 06:28:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:28:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5.4 total, 5.4 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 06:28:22 compute-1 podman[79585]: 2025-12-06 06:28:22.435544955 +0000 UTC m=+1.236312046 container create 76c46bbc2fd8c52dba1a03492e9f1d045f966396be8e3c631d9bf6bbabca3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:28:22 compute-1 ceph-osd[79002]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 06 06:28:22 compute-1 ceph-osd[79002]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 06 06:28:22 compute-1 ceph-osd[79002]: _get_class not permitted to load lua
Dec 06 06:28:22 compute-1 ceph-osd[79002]: _get_class not permitted to load sdk
Dec 06 06:28:22 compute-1 ceph-osd[79002]: _get_class not permitted to load test_remote_reads
Dec 06 06:28:22 compute-1 ceph-osd[79002]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 06 06:28:22 compute-1 ceph-osd[79002]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 06 06:28:22 compute-1 ceph-osd[79002]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 06 06:28:22 compute-1 ceph-osd[79002]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 06 06:28:22 compute-1 ceph-osd[79002]: osd.1 0 load_pgs
Dec 06 06:28:22 compute-1 ceph-osd[79002]: osd.1 0 load_pgs opened 0 pgs
Dec 06 06:28:22 compute-1 ceph-osd[79002]: osd.1 0 log_to_monitors true
Dec 06 06:28:22 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1[78998]: 2025-12-06T06:28:22.443+0000 7f89b65ef740 -1 osd.1 0 log_to_monitors true
Dec 06 06:28:22 compute-1 systemd[1]: Started libpod-conmon-76c46bbc2fd8c52dba1a03492e9f1d045f966396be8e3c631d9bf6bbabca3cef.scope.
Dec 06 06:28:22 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:28:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b90d67afa8d61b72d23f5fac5efddbc2fbb78a7164d0f649425934642d60e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b90d67afa8d61b72d23f5fac5efddbc2fbb78a7164d0f649425934642d60e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b90d67afa8d61b72d23f5fac5efddbc2fbb78a7164d0f649425934642d60e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b90d67afa8d61b72d23f5fac5efddbc2fbb78a7164d0f649425934642d60e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:23 compute-1 podman[79585]: 2025-12-06 06:28:23.253625173 +0000 UTC m=+2.054392284 container init 76c46bbc2fd8c52dba1a03492e9f1d045f966396be8e3c631d9bf6bbabca3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:23 compute-1 podman[79585]: 2025-12-06 06:28:23.262685461 +0000 UTC m=+2.063452552 container start 76c46bbc2fd8c52dba1a03492e9f1d045f966396be8e3c631d9bf6bbabca3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 06:28:23 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 06 06:28:23 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 06 06:28:23 compute-1 podman[79585]: 2025-12-06 06:28:23.805109636 +0000 UTC m=+2.605876757 container attach 76c46bbc2fd8c52dba1a03492e9f1d045f966396be8e3c631d9bf6bbabca3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:24 compute-1 brave_nash[79634]: {
Dec 06 06:28:24 compute-1 brave_nash[79634]:     "e647688b-053d-4d67-9db1-a787df62bd8a": {
Dec 06 06:28:24 compute-1 brave_nash[79634]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:28:24 compute-1 brave_nash[79634]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:28:24 compute-1 brave_nash[79634]:         "osd_id": 1,
Dec 06 06:28:24 compute-1 brave_nash[79634]:         "osd_uuid": "e647688b-053d-4d67-9db1-a787df62bd8a",
Dec 06 06:28:24 compute-1 brave_nash[79634]:         "type": "bluestore"
Dec 06 06:28:24 compute-1 brave_nash[79634]:     }
Dec 06 06:28:24 compute-1 brave_nash[79634]: }
Dec 06 06:28:24 compute-1 systemd[1]: libpod-76c46bbc2fd8c52dba1a03492e9f1d045f966396be8e3c631d9bf6bbabca3cef.scope: Deactivated successfully.
Dec 06 06:28:24 compute-1 conmon[79634]: conmon 76c46bbc2fd8c52dba1a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76c46bbc2fd8c52dba1a03492e9f1d045f966396be8e3c631d9bf6bbabca3cef.scope/container/memory.events
Dec 06 06:28:24 compute-1 podman[79585]: 2025-12-06 06:28:24.091449711 +0000 UTC m=+2.892216822 container died 76c46bbc2fd8c52dba1a03492e9f1d045f966396be8e3c631d9bf6bbabca3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:24 compute-1 ceph-osd[79002]: osd.1 0 done with init, starting boot process
Dec 06 06:28:24 compute-1 ceph-osd[79002]: osd.1 0 start_boot
Dec 06 06:28:24 compute-1 ceph-osd[79002]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 06 06:28:24 compute-1 ceph-osd[79002]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 06 06:28:24 compute-1 ceph-osd[79002]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 06 06:28:24 compute-1 ceph-osd[79002]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 06 06:28:24 compute-1 ceph-osd[79002]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 06 06:28:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-c01b90d67afa8d61b72d23f5fac5efddbc2fbb78a7164d0f649425934642d60e-merged.mount: Deactivated successfully.
Dec 06 06:28:31 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.633696079s, txc = 0x55b552274f00
Dec 06 06:28:31 compute-1 podman[79585]: 2025-12-06 06:28:31.369747146 +0000 UTC m=+10.170514267 container remove 76c46bbc2fd8c52dba1a03492e9f1d045f966396be8e3c631d9bf6bbabca3cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:31 compute-1 sudo[79092]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:31 compute-1 systemd[1]: libpod-conmon-76c46bbc2fd8c52dba1a03492e9f1d045f966396be8e3c631d9bf6bbabca3cef.scope: Deactivated successfully.
Dec 06 06:28:31 compute-1 sudo[79668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:31 compute-1 sudo[79668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:31 compute-1 sudo[79668]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:31 compute-1 sudo[79693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:28:31 compute-1 sudo[79693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:31 compute-1 sudo[79693]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:31 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.344602585s, txc = 0x55b552275200
Dec 06 06:28:31 compute-1 sudo[79718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:31 compute-1 sudo[79718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:31 compute-1 sudo[79718]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:31 compute-1 sudo[79743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:31 compute-1 sudo[79743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:31 compute-1 sudo[79743]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:31 compute-1 sudo[79768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:31 compute-1 sudo[79768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:31 compute-1 sudo[79768]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:32 compute-1 sudo[79793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:28:32 compute-1 sudo[79793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:32 compute-1 podman[79888]: 2025-12-06 06:28:32.520213411 +0000 UTC m=+0.102687371 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:34 compute-1 podman[79888]: 2025-12-06 06:28:34.345985346 +0000 UTC m=+1.928459296 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:34 compute-1 sudo[79793]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:34 compute-1 sudo[79939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:34 compute-1 sudo[79939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:34 compute-1 sudo[79939]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:34 compute-1 sudo[79964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:34 compute-1 sudo[79964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:34 compute-1 sudo[79964]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:34 compute-1 sudo[79989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:34 compute-1 sudo[79989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:35 compute-1 sudo[79989]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:35 compute-1 sudo[80014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:28:35 compute-1 sudo[80014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:35 compute-1 sudo[80014]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:35 compute-1 sudo[80071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:35 compute-1 sudo[80071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:35 compute-1 sudo[80071]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:35 compute-1 sudo[80096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:35 compute-1 sudo[80096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:35 compute-1 sudo[80096]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:35 compute-1 sudo[80121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:35 compute-1 sudo[80121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:35 compute-1 sudo[80121]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:35 compute-1 sudo[80146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 06:28:35 compute-1 sudo[80146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:36 compute-1 podman[80212]: 2025-12-06 06:28:36.053919227 +0000 UTC m=+0.026000852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:37 compute-1 podman[80212]: 2025-12-06 06:28:37.337325161 +0000 UTC m=+1.309406766 container create ac980a86cfba4deb8c00263d4638903f30965a0a566cea643ff404234bbc5dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:39 compute-1 systemd[1]: Started libpod-conmon-ac980a86cfba4deb8c00263d4638903f30965a0a566cea643ff404234bbc5dc0.scope.
Dec 06 06:28:39 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:28:41 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.906738758s, txc = 0x55b552275500
Dec 06 06:28:41 compute-1 podman[80212]: 2025-12-06 06:28:41.814244793 +0000 UTC m=+5.786326428 container init ac980a86cfba4deb8c00263d4638903f30965a0a566cea643ff404234bbc5dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:41 compute-1 podman[80212]: 2025-12-06 06:28:41.821519117 +0000 UTC m=+5.793600722 container start ac980a86cfba4deb8c00263d4638903f30965a0a566cea643ff404234bbc5dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:41 compute-1 interesting_khayyam[80229]: 167 167
Dec 06 06:28:41 compute-1 systemd[1]: libpod-ac980a86cfba4deb8c00263d4638903f30965a0a566cea643ff404234bbc5dc0.scope: Deactivated successfully.
Dec 06 06:28:42 compute-1 podman[80212]: 2025-12-06 06:28:42.085670838 +0000 UTC m=+6.057752453 container attach ac980a86cfba4deb8c00263d4638903f30965a0a566cea643ff404234bbc5dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 06:28:42 compute-1 podman[80212]: 2025-12-06 06:28:42.086675135 +0000 UTC m=+6.058756750 container died ac980a86cfba4deb8c00263d4638903f30965a0a566cea643ff404234bbc5dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-88774f17f77f1e84e79809098c56ef6c4fa6c6f1be26a39e4379690ba32fbe53-merged.mount: Deactivated successfully.
Dec 06 06:28:43 compute-1 podman[80212]: 2025-12-06 06:28:43.292644764 +0000 UTC m=+7.264726399 container remove ac980a86cfba4deb8c00263d4638903f30965a0a566cea643ff404234bbc5dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:43 compute-1 systemd[1]: libpod-conmon-ac980a86cfba4deb8c00263d4638903f30965a0a566cea643ff404234bbc5dc0.scope: Deactivated successfully.
Dec 06 06:28:43 compute-1 podman[80255]: 2025-12-06 06:28:43.413483972 +0000 UTC m=+0.024397661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:44 compute-1 podman[80255]: 2025-12-06 06:28:44.905679819 +0000 UTC m=+1.516593458 container create b5dedd00d8bbd20e62d810ab2929eefd9cee052cdc5d9edfe8c62040f1039890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 06:28:44 compute-1 systemd[72554]: Starting Mark boot as successful...
Dec 06 06:28:44 compute-1 systemd[72554]: Finished Mark boot as successful.
Dec 06 06:28:46 compute-1 systemd[1]: Started libpod-conmon-b5dedd00d8bbd20e62d810ab2929eefd9cee052cdc5d9edfe8c62040f1039890.scope.
Dec 06 06:28:46 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:28:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b3966a2b7d6714f5a215bdbb8997b97a69253edfd6a4d56720eb28becc756a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b3966a2b7d6714f5a215bdbb8997b97a69253edfd6a4d56720eb28becc756a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b3966a2b7d6714f5a215bdbb8997b97a69253edfd6a4d56720eb28becc756a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b3966a2b7d6714f5a215bdbb8997b97a69253edfd6a4d56720eb28becc756a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:47 compute-1 podman[80255]: 2025-12-06 06:28:47.82260468 +0000 UTC m=+4.433518459 container init b5dedd00d8bbd20e62d810ab2929eefd9cee052cdc5d9edfe8c62040f1039890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:47 compute-1 podman[80255]: 2025-12-06 06:28:47.832008719 +0000 UTC m=+4.442922378 container start b5dedd00d8bbd20e62d810ab2929eefd9cee052cdc5d9edfe8c62040f1039890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 06:28:48 compute-1 podman[80255]: 2025-12-06 06:28:48.779481569 +0000 UTC m=+5.390395258 container attach b5dedd00d8bbd20e62d810ab2929eefd9cee052cdc5d9edfe8c62040f1039890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:49 compute-1 musing_borg[80272]: [
Dec 06 06:28:49 compute-1 musing_borg[80272]:     {
Dec 06 06:28:49 compute-1 musing_borg[80272]:         "available": false,
Dec 06 06:28:49 compute-1 musing_borg[80272]:         "ceph_device": false,
Dec 06 06:28:49 compute-1 musing_borg[80272]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 06:28:49 compute-1 musing_borg[80272]:         "lsm_data": {},
Dec 06 06:28:49 compute-1 musing_borg[80272]:         "lvs": [],
Dec 06 06:28:49 compute-1 musing_borg[80272]:         "path": "/dev/sr0",
Dec 06 06:28:49 compute-1 musing_borg[80272]:         "rejected_reasons": [
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "Insufficient space (<5GB)",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "Has a FileSystem"
Dec 06 06:28:49 compute-1 musing_borg[80272]:         ],
Dec 06 06:28:49 compute-1 musing_borg[80272]:         "sys_api": {
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "actuators": null,
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "device_nodes": "sr0",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "devname": "sr0",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "human_readable_size": "482.00 KB",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "id_bus": "ata",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "model": "QEMU DVD-ROM",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "nr_requests": "2",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "parent": "/dev/sr0",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "partitions": {},
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "path": "/dev/sr0",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "removable": "1",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "rev": "2.5+",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "ro": "0",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "rotational": "1",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "sas_address": "",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "sas_device_handle": "",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "scheduler_mode": "mq-deadline",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "sectors": 0,
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "sectorsize": "2048",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "size": 493568.0,
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "support_discard": "2048",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "type": "disk",
Dec 06 06:28:49 compute-1 musing_borg[80272]:             "vendor": "QEMU"
Dec 06 06:28:49 compute-1 musing_borg[80272]:         }
Dec 06 06:28:49 compute-1 musing_borg[80272]:     }
Dec 06 06:28:49 compute-1 musing_borg[80272]: ]
Dec 06 06:28:49 compute-1 systemd[1]: libpod-b5dedd00d8bbd20e62d810ab2929eefd9cee052cdc5d9edfe8c62040f1039890.scope: Deactivated successfully.
Dec 06 06:28:49 compute-1 podman[80255]: 2025-12-06 06:28:49.037496496 +0000 UTC m=+5.648410135 container died b5dedd00d8bbd20e62d810ab2929eefd9cee052cdc5d9edfe8c62040f1039890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:28:49 compute-1 systemd[1]: libpod-b5dedd00d8bbd20e62d810ab2929eefd9cee052cdc5d9edfe8c62040f1039890.scope: Consumed 1.218s CPU time.
Dec 06 06:28:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.209106445s, txc = 0x55b5533c1800
Dec 06 06:28:51 compute-1 sshd-session[81302]: Connection reset by authenticating user root 45.140.17.124 port 48026 [preauth]
Dec 06 06:28:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-c7b3966a2b7d6714f5a215bdbb8997b97a69253edfd6a4d56720eb28becc756a-merged.mount: Deactivated successfully.
Dec 06 06:28:54 compute-1 sshd-session[81305]: Invalid user sysadmin from 45.140.17.124 port 48030
Dec 06 06:28:55 compute-1 podman[80255]: 2025-12-06 06:28:55.429248907 +0000 UTC m=+12.040162586 container remove b5dedd00d8bbd20e62d810ab2929eefd9cee052cdc5d9edfe8c62040f1039890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_borg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:55 compute-1 sshd-session[81305]: Connection reset by invalid user sysadmin 45.140.17.124 port 48030 [preauth]
Dec 06 06:28:55 compute-1 sudo[80146]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:55 compute-1 systemd[1]: libpod-conmon-b5dedd00d8bbd20e62d810ab2929eefd9cee052cdc5d9edfe8c62040f1039890.scope: Deactivated successfully.
Dec 06 06:28:57 compute-1 sshd-session[81307]: Connection reset by authenticating user root 45.140.17.124 port 24468 [preauth]
Dec 06 06:29:01 compute-1 sshd-session[81309]: Connection reset by authenticating user root 45.140.17.124 port 24472 [preauth]
Dec 06 06:29:03 compute-1 sshd-session[81311]: Connection reset by authenticating user root 45.140.17.124 port 24482 [preauth]
Dec 06 06:29:18 compute-1 ceph-osd[79002]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 6.474 iops: 1657.429 elapsed_sec: 1.810
Dec 06 06:29:18 compute-1 ceph-osd[79002]: log_channel(cluster) log [WRN] : OSD bench result of 1657.429499 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 06 06:29:18 compute-1 ceph-osd[79002]: osd.1 0 waiting for initial osdmap
Dec 06 06:29:18 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1[78998]: 2025-12-06T06:29:18.021+0000 7f89b256f640 -1 osd.1 0 waiting for initial osdmap
Dec 06 06:29:18 compute-1 ceph-osd[79002]: osd.1 34 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 06 06:29:18 compute-1 ceph-osd[79002]: osd.1 34 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 06 06:29:18 compute-1 ceph-osd[79002]: osd.1 34 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 06 06:29:18 compute-1 ceph-osd[79002]: osd.1 34 check_osdmap_features require_osd_release unknown -> reef
Dec 06 06:29:18 compute-1 ceph-osd[79002]: osd.1 34 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 06 06:29:18 compute-1 ceph-osd[79002]: osd.1 34 set_numa_affinity not setting numa affinity
Dec 06 06:29:18 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-1[78998]: 2025-12-06T06:29:18.544+0000 7f89adb97640 -1 osd.1 34 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 06 06:29:18 compute-1 ceph-osd[79002]: osd.1 34 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec 06 06:29:19 compute-1 ceph-osd[79002]: osd.1 34 tick checking mon for new map
Dec 06 06:29:19 compute-1 ceph-osd[79002]: osd.1 36 state: booting -> active
Dec 06 06:29:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 36 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 pi=[13,36)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 36 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 pi=[22,36)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:20 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 37 pg[2.0( empty local-lis/les=36/37 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 pi=[13,36)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:20 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=36/37 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 pi=[22,36)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:22 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 38 pg[1.0( empty local-lis/les=0/0 n=0 ec=9/9 lis/c=9/9 les/c/f=10/10/0 sis=36) [1] r=0 lpr=38 pi=[9,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:23 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 38 pg[1.0( v 10'32 lc 10'30 (0'0,10'32] local-lis/les=36/38 n=2 ec=9/9 lis/c=9/9 les/c/f=10/10/0 sis=36) [1] r=0 lpr=38 pi=[9,36)/1 crt=10'32 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:25 compute-1 sudo[81315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:29:25 compute-1 sudo[81315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:29:25 compute-1 sudo[81315]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:25 compute-1 sudo[81340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:29:25 compute-1 sudo[81340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:29:25 compute-1 sudo[81340]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:25 compute-1 sudo[81365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:29:25 compute-1 sudo[81365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:29:25 compute-1 sudo[81365]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:25 compute-1 sudo[81390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:29:25 compute-1 sudo[81390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:29:25 compute-1 podman[81455]: 2025-12-06 06:29:25.494500512 +0000 UTC m=+0.054056649 container create 3733bf8aae79383356f107de085ca3ca44bad104e02180ea8f8f6dd846ba9a74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec 06 06:29:25 compute-1 podman[81455]: 2025-12-06 06:29:25.459882162 +0000 UTC m=+0.019438309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:29:25 compute-1 systemd[1]: Started libpod-conmon-3733bf8aae79383356f107de085ca3ca44bad104e02180ea8f8f6dd846ba9a74.scope.
Dec 06 06:29:25 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:29:25 compute-1 podman[81455]: 2025-12-06 06:29:25.669440589 +0000 UTC m=+0.228996756 container init 3733bf8aae79383356f107de085ca3ca44bad104e02180ea8f8f6dd846ba9a74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 06:29:25 compute-1 podman[81455]: 2025-12-06 06:29:25.676522687 +0000 UTC m=+0.236078824 container start 3733bf8aae79383356f107de085ca3ca44bad104e02180ea8f8f6dd846ba9a74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:29:25 compute-1 podman[81455]: 2025-12-06 06:29:25.679734474 +0000 UTC m=+0.239290631 container attach 3733bf8aae79383356f107de085ca3ca44bad104e02180ea8f8f6dd846ba9a74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:29:25 compute-1 lucid_kowalevski[81471]: 167 167
Dec 06 06:29:25 compute-1 podman[81455]: 2025-12-06 06:29:25.682381473 +0000 UTC m=+0.241937620 container died 3733bf8aae79383356f107de085ca3ca44bad104e02180ea8f8f6dd846ba9a74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:29:25 compute-1 systemd[1]: libpod-3733bf8aae79383356f107de085ca3ca44bad104e02180ea8f8f6dd846ba9a74.scope: Deactivated successfully.
Dec 06 06:29:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-5f78dde21ec113a5b77e8166bb84b3f73e2e79f02c8f5a3887cdb0dc5a1e87ad-merged.mount: Deactivated successfully.
Dec 06 06:29:25 compute-1 podman[81455]: 2025-12-06 06:29:25.715877885 +0000 UTC m=+0.275434022 container remove 3733bf8aae79383356f107de085ca3ca44bad104e02180ea8f8f6dd846ba9a74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec 06 06:29:25 compute-1 systemd[1]: libpod-conmon-3733bf8aae79383356f107de085ca3ca44bad104e02180ea8f8f6dd846ba9a74.scope: Deactivated successfully.
Dec 06 06:29:25 compute-1 podman[81490]: 2025-12-06 06:29:25.758031778 +0000 UTC m=+0.019433059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:29:26 compute-1 podman[81490]: 2025-12-06 06:29:26.172058187 +0000 UTC m=+0.433459448 container create 94eb52aed467accb4911d706c366033b3cbef33a72e05ede1ead2ffc169d7fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 06:29:26 compute-1 systemd[1]: Started libpod-conmon-94eb52aed467accb4911d706c366033b3cbef33a72e05ede1ead2ffc169d7fcd.scope.
Dec 06 06:29:26 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:29:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e5acb7a505576678be4fcc877b373594aab90e8a45c01a79c3af7f78dd4de0/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e5acb7a505576678be4fcc877b373594aab90e8a45c01a79c3af7f78dd4de0/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e5acb7a505576678be4fcc877b373594aab90e8a45c01a79c3af7f78dd4de0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38e5acb7a505576678be4fcc877b373594aab90e8a45c01a79c3af7f78dd4de0/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:26 compute-1 podman[81490]: 2025-12-06 06:29:26.562671385 +0000 UTC m=+0.824072736 container init 94eb52aed467accb4911d706c366033b3cbef33a72e05ede1ead2ffc169d7fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 06:29:26 compute-1 podman[81490]: 2025-12-06 06:29:26.574206831 +0000 UTC m=+0.835608132 container start 94eb52aed467accb4911d706c366033b3cbef33a72e05ede1ead2ffc169d7fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:29:26 compute-1 podman[81490]: 2025-12-06 06:29:26.68198744 +0000 UTC m=+0.943388741 container attach 94eb52aed467accb4911d706c366033b3cbef33a72e05ede1ead2ffc169d7fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:29:26 compute-1 systemd[1]: libpod-94eb52aed467accb4911d706c366033b3cbef33a72e05ede1ead2ffc169d7fcd.scope: Deactivated successfully.
Dec 06 06:29:26 compute-1 podman[81490]: 2025-12-06 06:29:26.784894909 +0000 UTC m=+1.046296180 container died 94eb52aed467accb4911d706c366033b3cbef33a72e05ede1ead2ffc169d7fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:29:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-38e5acb7a505576678be4fcc877b373594aab90e8a45c01a79c3af7f78dd4de0-merged.mount: Deactivated successfully.
Dec 06 06:29:26 compute-1 podman[81490]: 2025-12-06 06:29:26.827072732 +0000 UTC m=+1.088474013 container remove 94eb52aed467accb4911d706c366033b3cbef33a72e05ede1ead2ffc169d7fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 06:29:26 compute-1 systemd[1]: libpod-conmon-94eb52aed467accb4911d706c366033b3cbef33a72e05ede1ead2ffc169d7fcd.scope: Deactivated successfully.
Dec 06 06:29:27 compute-1 systemd[1]: Reloading.
Dec 06 06:29:27 compute-1 systemd-sysv-generator[81578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:29:27 compute-1 systemd-rc-local-generator[81575]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:29:28 compute-1 systemd[1]: Reloading.
Dec 06 06:29:28 compute-1 systemd-sysv-generator[81614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:29:28 compute-1 systemd-rc-local-generator[81607]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:29:28 compute-1 systemd[1]: Starting Ceph mon.compute-1 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:29:28 compute-1 podman[81669]: 2025-12-06 06:29:28.671231738 +0000 UTC m=+0.026030253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:29:29 compute-1 podman[81669]: 2025-12-06 06:29:29.216266715 +0000 UTC m=+0.571065220 container create 885cacdcc95787c7319f9b264b79b49beed0eac651923a7f7f0f1b4c70a6c782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:29:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0efc969704c7412c29de5db10320e187a9f52a8be1c21f867d19249e9b24fcc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0efc969704c7412c29de5db10320e187a9f52a8be1c21f867d19249e9b24fcc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0efc969704c7412c29de5db10320e187a9f52a8be1c21f867d19249e9b24fcc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0efc969704c7412c29de5db10320e187a9f52a8be1c21f867d19249e9b24fcc9/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:29 compute-1 podman[81669]: 2025-12-06 06:29:29.788094987 +0000 UTC m=+1.142893472 container init 885cacdcc95787c7319f9b264b79b49beed0eac651923a7f7f0f1b4c70a6c782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 06:29:29 compute-1 podman[81669]: 2025-12-06 06:29:29.798121343 +0000 UTC m=+1.152919798 container start 885cacdcc95787c7319f9b264b79b49beed0eac651923a7f7f0f1b4c70a6c782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-1, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:29:29 compute-1 ceph-mon[81689]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:29:29 compute-1 ceph-mon[81689]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec 06 06:29:29 compute-1 ceph-mon[81689]: pidfile_write: ignore empty --pid-file
Dec 06 06:29:29 compute-1 ceph-mon[81689]: load: jerasure load: lrc 
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: RocksDB version: 7.9.2
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Git sha 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: DB SUMMARY
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: DB Session ID:  SLV0S33CGVISHGWW623C
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: CURRENT file:  CURRENT
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                         Options.error_if_exists: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                       Options.create_if_missing: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                                     Options.env: 0x558b74c01c40
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                                Options.info_log: 0x558b75ae0fc0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                              Options.statistics: (nil)
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                               Options.use_fsync: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                              Options.db_log_dir: 
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                                 Options.wal_dir: 
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                    Options.write_buffer_manager: 0x558b75af0b40
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                  Options.unordered_write: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                               Options.row_cache: None
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                              Options.wal_filter: None
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.two_write_queues: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.wal_compression: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.atomic_flush: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.max_background_jobs: 2
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.max_background_compactions: -1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.max_subcompactions: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.max_total_wal_size: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                          Options.max_open_files: -1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:       Options.compaction_readahead_size: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Compression algorithms supported:
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         kZSTD supported: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         kXpressCompression supported: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         kBZip2Compression supported: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         kLZ4Compression supported: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         kZlibCompression supported: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         kSnappyCompression supported: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:           Options.merge_operator: 
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:        Options.compaction_filter: None
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b75ae0c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558b75ad91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:        Options.write_buffer_size: 33554432
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:  Options.max_write_buffer_number: 2
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:          Options.compression: NoCompression
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.num_levels: 7
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002569856474, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002569904317, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002569904576, "job": 1, "event": "recovery_finished"}
Dec 06 06:29:29 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 06 06:29:29 compute-1 bash[81669]: 885cacdcc95787c7319f9b264b79b49beed0eac651923a7f7f0f1b4c70a6c782
Dec 06 06:29:29 compute-1 systemd[1]: Started Ceph mon.compute-1 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:29:30 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:29:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558b75b02e00
Dec 06 06:29:30 compute-1 ceph-mon[81689]: rocksdb: DB pointer 0x558b75b8c000
Dec 06 06:29:30 compute-1 ceph-mon[81689]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Dec 06 06:29:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:29:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 06:29:30 compute-1 ceph-mon[81689]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Dec 06 06:29:30 compute-1 ceph-mon[81689]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:29:30 compute-1 ceph-mon[81689]: mon.compute-1@-1(???) e0 preinit fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:29:30 compute-1 sudo[81390]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:31 compute-1 sshd-session[71290]: Received disconnect from 38.102.83.248 port 33838:11: disconnected by user
Dec 06 06:29:31 compute-1 sshd-session[71290]: Disconnected from user zuul 38.102.83.248 port 33838
Dec 06 06:29:31 compute-1 sshd-session[71287]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:29:31 compute-1 systemd[1]: session-19.scope: Deactivated successfully.
Dec 06 06:29:31 compute-1 systemd[1]: session-19.scope: Consumed 8.703s CPU time.
Dec 06 06:29:31 compute-1 systemd-logind[788]: Session 19 logged out. Waiting for processes to exit.
Dec 06 06:29:31 compute-1 systemd-logind[788]: Removed session 19.
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).mds e2 new map
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:29:04.228395+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e17 e17: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e18 e18: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e19 e19: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e20 e20: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e21 e21: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e22 e22: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e23 e23: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e24 e24: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e25 e25: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e26 e26: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e27 e27: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e28 e28: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e29 e29: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e30 e30: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e31 e31: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e32 e32: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e33 e33: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e34 e34: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e35 e35: 2 total, 1 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e36 e36: 2 total, 2 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e37 e37: 2 total, 2 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e38 e38: 2 total, 2 up, 2 in
Dec 06 06:29:32 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e39 e39: 2 total, 2 up, 2 in
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.18( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.1a( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.1b( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.1c( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.1b( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.1f( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.1c( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.e( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[6.d( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.1( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[6.3( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[6.2( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[6.5( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.10( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.15( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.13( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.15( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.14( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.13( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.10( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.1f( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.5( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[6.7( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.4( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.2( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.7( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[6.a( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[6.8( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.a( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.d( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[6.e( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.c( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.d( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[4.e( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=0/0 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=0/0 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[2.0( empty local-lis/les=36/37 n=0 ec=13/13 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=11.692206383s) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 82.235534668s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=36/37 n=0 ec=22/22 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=11.692127228s) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 82.235908508s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[2.0( empty local-lis/les=36/37 n=0 ec=13/13 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=11.692206383s) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown pruub 82.235534668s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:32 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=36/37 n=0 ec=22/22 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=11.692127228s) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown pruub 82.235908508s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:33 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e39 crush map has features 3314933000852226048, adjusting msgr requires
Dec 06 06:29:33 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e39 crush map has features 288514051259236352, adjusting msgr requires
Dec 06 06:29:33 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e39 crush map has features 288514051259236352, adjusting msgr requires
Dec 06 06:29:33 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e39 crush map has features 288514051259236352, adjusting msgr requires
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: pgmap v112: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:33 compute-1 ceph-mon[81689]: osdmap e30: 2 total, 1 up, 2 in
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:33 compute-1 ceph-mon[81689]: osdmap e31: 2 total, 1 up, 2 in
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:33 compute-1 ceph-mon[81689]: osdmap e32: 2 total, 1 up, 2 in
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: pgmap v115: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: Updating compute-2:/etc/ceph/ceph.conf
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:33 compute-1 ceph-mon[81689]: osdmap e33: 2 total, 1 up, 2 in
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:33 compute-1 ceph-mon[81689]: osdmap e34: 2 total, 1 up, 2 in
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: 3.1 scrub starts
Dec 06 06:29:33 compute-1 ceph-mon[81689]: 3.1 scrub ok
Dec 06 06:29:33 compute-1 ceph-mon[81689]: pgmap v118: 38 pgs: 1 peering, 3 active+clean, 34 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:33 compute-1 ceph-mon[81689]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 06:29:33 compute-1 ceph-mon[81689]: OSD bench result of 1657.429499 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:33 compute-1 ceph-mon[81689]: osdmap e35: 2 total, 1 up, 2 in
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 06 06:29:33 compute-1 ceph-mon[81689]: osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420] boot
Dec 06 06:29:33 compute-1 ceph-mon[81689]: osdmap e36: 2 total, 2 up, 2 in
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: 3.2 scrub starts
Dec 06 06:29:33 compute-1 ceph-mon[81689]: 3.2 scrub ok
Dec 06 06:29:33 compute-1 ceph-mon[81689]: pgmap v121: 100 pgs: 1 peering, 3 active+clean, 96 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-1 ceph-mon[81689]: pgmap v123: 115 pgs: 1 peering, 3 active+clean, 111 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:33 compute-1 ceph-mon[81689]: Deploying daemon mon.compute-2 on compute-2
Dec 06 06:29:33 compute-1 ceph-mon[81689]: osdmap e37: 2 total, 2 up, 2 in
Dec 06 06:29:33 compute-1 ceph-mon[81689]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 06 06:29:33 compute-1 ceph-mon[81689]: osdmap e38: 2 total, 2 up, 2 in
Dec 06 06:29:33 compute-1 ceph-mon[81689]: pgmap v126: 115 pgs: 2 creating+peering, 97 active+clean, 16 unknown; 0 B data, 454 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:33 compute-1 ceph-mon[81689]: 3.3 scrub starts
Dec 06 06:29:33 compute-1 ceph-mon[81689]: 3.3 scrub ok
Dec 06 06:29:33 compute-1 ceph-mon[81689]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
Dec 06 06:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-1 ceph-mon[81689]: osdmap e39: 2 total, 2 up, 2 in
Dec 06 06:29:33 compute-1 ceph-mon[81689]: pgmap v128: 115 pgs: 2 creating+peering, 97 active+clean, 16 unknown; 0 B data, 454 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:33 compute-1 sudo[81728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:29:33 compute-1 sudo[81728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:29:33 compute-1 sudo[81728]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:33 compute-1 sudo[81753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:29:33 compute-1 sudo[81753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:29:33 compute-1 sudo[81753]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:33 compute-1 sudo[81778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:29:33 compute-1 sudo[81778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:29:33 compute-1 sudo[81778]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:33 compute-1 sudo[81803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:29:33 compute-1 sudo[81803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:29:34 compute-1 podman[81869]: 2025-12-06 06:29:34.069292541 +0000 UTC m=+0.024011051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:29:34 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3
Dec 06 06:29:37 compute-1 podman[81869]: 2025-12-06 06:29:37.305693495 +0000 UTC m=+3.260412005 container create 2e728bac4accf411637a15b96775dd51072969dbb05f03c50469934d8506069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elbakyan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:29:37 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e40 e40: 2 total, 2 up, 2 in
Dec 06 06:29:37 compute-1 ceph-mon[81689]: mon.compute-1@-1(synchronizing).osd e41 e41: 2 total, 2 up, 2 in
Dec 06 06:29:37 compute-1 systemd[1]: Started libpod-conmon-2e728bac4accf411637a15b96775dd51072969dbb05f03c50469934d8506069c.scope.
Dec 06 06:29:37 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1a( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.8( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.b( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.c( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.f( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.e( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.16( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.19( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.14( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.12( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.13( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.11( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.10( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.2( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:37 compute-1 ceph-mon[81689]: 3.4 scrub starts
Dec 06 06:29:37 compute-1 ceph-mon[81689]: 3.4 scrub ok
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:29:37 compute-1 ceph-mon[81689]: 3.5 scrub starts
Dec 06 06:29:37 compute-1 ceph-mon[81689]: 3.5 scrub ok
Dec 06 06:29:37 compute-1 ceph-mon[81689]: pgmap v129: 115 pgs: 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 06:29:37 compute-1 ceph-mon[81689]: pgmap v130: 115 pgs: 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: 3.6 scrub starts
Dec 06 06:29:37 compute-1 ceph-mon[81689]: 3.6 scrub ok
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: pgmap v131: 115 pgs: 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:29:37 compute-1 ceph-mon[81689]: fsmap cephfs:0
Dec 06 06:29:37 compute-1 ceph-mon[81689]: osdmap e39: 2 total, 2 up, 2 in
Dec 06 06:29:37 compute-1 ceph-mon[81689]: mgrmap e9: compute-0.sfzyix(active, since 3m)
Dec 06 06:29:37 compute-1 ceph-mon[81689]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds; Reduced data availability: 1 pg inactive
Dec 06 06:29:37 compute-1 ceph-mon[81689]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Dec 06 06:29:37 compute-1 ceph-mon[81689]:     fs cephfs is offline because no MDS is active for it.
Dec 06 06:29:37 compute-1 ceph-mon[81689]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:37 compute-1 ceph-mon[81689]:     fs cephfs has 0 MDS online, but wants 1
Dec 06 06:29:37 compute-1 ceph-mon[81689]: [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
Dec 06 06:29:37 compute-1 ceph-mon[81689]:     pg 1.0 is stuck inactive for 62s, current state unknown, last acting []
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:37 compute-1 ceph-mon[81689]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive)
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: osdmap e40: 2 total, 2 up, 2 in
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: Deploying daemon mgr.compute-2.ytlehq on compute-2
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/570367341' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: pgmap v133: 177 pgs: 62 unknown, 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:37 compute-1 ceph-mon[81689]: osdmap e41: 2 total, 2 up, 2 in
Dec 06 06:29:38 compute-1 podman[81869]: 2025-12-06 06:29:38.363805608 +0000 UTC m=+4.318524178 container init 2e728bac4accf411637a15b96775dd51072969dbb05f03c50469934d8506069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.18( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1d( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1f( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=36/37 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1e( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:38 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1c( empty local-lis/les=36/37 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:38 compute-1 podman[81869]: 2025-12-06 06:29:38.378382066 +0000 UTC m=+4.333100596 container start 2e728bac4accf411637a15b96775dd51072969dbb05f03c50469934d8506069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:29:38 compute-1 beautiful_elbakyan[81886]: 167 167
Dec 06 06:29:38 compute-1 systemd[1]: libpod-2e728bac4accf411637a15b96775dd51072969dbb05f03c50469934d8506069c.scope: Deactivated successfully.
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.1b( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.1c( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.1f( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.e( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[6.d( empty local-lis/les=40/41 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[6.3( empty local-lis/les=40/41 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.1c( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[6.2( empty local-lis/les=40/41 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.13( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[6.5( empty local-lis/les=40/41 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.15( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.14( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.1f( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.10( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[6.7( empty local-lis/les=40/41 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[6.a( empty local-lis/les=40/41 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[6.8( empty local-lis/les=40/41 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.c( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[6.e( empty local-lis/les=40/41 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[3.10( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.8( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1a( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.b( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.c( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.f( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.e( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.0( empty local-lis/les=40/41 n=0 ec=13/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=40/41 n=0 ec=22/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.16( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.19( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.14( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.13( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.12( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.11( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 podman[81869]: 2025-12-06 06:29:39.250223362 +0000 UTC m=+5.204941892 container attach 2e728bac4accf411637a15b96775dd51072969dbb05f03c50469934d8506069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:29:39 compute-1 podman[81869]: 2025-12-06 06:29:39.25127075 +0000 UTC m=+5.205989350 container died 2e728bac4accf411637a15b96775dd51072969dbb05f03c50469934d8506069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.10( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.2( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.18( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1d( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1f( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1c( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 41 pg[2.1e( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=36/36 les/c/f=37/37/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:39 compute-1 systemd[1]: var-lib-containers-storage-overlay-a98ba22e008b94eb3bf3ea06c5a0fa1d63daefc208b7db301d66875faac18596-merged.mount: Deactivated successfully.
Dec 06 06:29:39 compute-1 podman[81869]: 2025-12-06 06:29:39.524340139 +0000 UTC m=+5.479058639 container remove 2e728bac4accf411637a15b96775dd51072969dbb05f03c50469934d8506069c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 06:29:39 compute-1 systemd[1]: libpod-conmon-2e728bac4accf411637a15b96775dd51072969dbb05f03c50469934d8506069c.scope: Deactivated successfully.
Dec 06 06:29:39 compute-1 systemd[1]: Reloading.
Dec 06 06:29:39 compute-1 systemd-rc-local-generator[81928]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:29:39 compute-1 systemd-sysv-generator[81933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:29:40 compute-1 systemd[1]: Reloading.
Dec 06 06:29:40 compute-1 systemd-rc-local-generator[81973]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:29:40 compute-1 systemd-sysv-generator[81977]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:29:40 compute-1 ceph-mon[81689]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Dec 06 06:29:40 compute-1 systemd[1]: Starting Ceph mgr.compute-1.nmklwp for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:29:40 compute-1 ceph-mon[81689]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Dec 06 06:29:40 compute-1 ceph-mon[81689]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 06 06:29:40 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:29:40 compute-1 podman[82029]: 2025-12-06 06:29:40.635823753 +0000 UTC m=+0.097052154 container create 65c07498f6713506784cadbdfc6d31a974ee86438da42315f86a37c409cb1b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Dec 06 06:29:40 compute-1 podman[82029]: 2025-12-06 06:29:40.560155269 +0000 UTC m=+0.021383710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:29:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e1929247a8cd7cd93b795159455b62f27e56181430283acaeba97132a89f53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e1929247a8cd7cd93b795159455b62f27e56181430283acaeba97132a89f53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e1929247a8cd7cd93b795159455b62f27e56181430283acaeba97132a89f53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e1929247a8cd7cd93b795159455b62f27e56181430283acaeba97132a89f53/merged/var/lib/ceph/mgr/ceph-compute-1.nmklwp supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:40 compute-1 podman[82029]: 2025-12-06 06:29:40.843530073 +0000 UTC m=+0.304758454 container init 65c07498f6713506784cadbdfc6d31a974ee86438da42315f86a37c409cb1b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:29:40 compute-1 podman[82029]: 2025-12-06 06:29:40.849574093 +0000 UTC m=+0.310802454 container start 65c07498f6713506784cadbdfc6d31a974ee86438da42315f86a37c409cb1b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 06:29:40 compute-1 bash[82029]: 65c07498f6713506784cadbdfc6d31a974ee86438da42315f86a37c409cb1b4c
Dec 06 06:29:40 compute-1 systemd[1]: Started Ceph mgr.compute-1.nmklwp for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:29:40 compute-1 sudo[81803]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:42 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 06 06:29:42 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 06 06:29:43 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Dec 06 06:29:43 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Dec 06 06:29:43 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:29:44 compute-1 ceph-mgr[82049]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:29:44 compute-1 ceph-mgr[82049]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec 06 06:29:44 compute-1 ceph-mgr[82049]: pidfile_write: ignore empty --pid-file
Dec 06 06:29:44 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 06:29:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 06 06:29:44 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'alerts'
Dec 06 06:29:44 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 06 06:29:44 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 06 06:29:44 compute-1 ceph-mgr[82049]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 06:29:44 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'balancer'
Dec 06 06:29:44 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:44.552+0000 7fae1addd140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 06:29:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 06 06:29:44 compute-1 ceph-mgr[82049]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 06:29:44 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'cephadm'
Dec 06 06:29:44 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:44.798+0000 7fae1addd140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 06:29:45 compute-1 ceph-mon[81689]: Deploying daemon mgr.compute-1.nmklwp on compute-1
Dec 06 06:29:45 compute-1 ceph-mon[81689]: pgmap v135: 177 pgs: 62 unknown, 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:45 compute-1 ceph-mon[81689]: 3.7 scrub starts
Dec 06 06:29:45 compute-1 ceph-mon[81689]: 3.7 scrub ok
Dec 06 06:29:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:45 compute-1 ceph-mon[81689]: pgmap v136: 177 pgs: 57 peering, 62 unknown, 58 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:29:45 compute-1 ceph-mon[81689]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2025-12-06T06:29:26.682139Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec 06 06:29:46 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 06 06:29:46 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 06 06:29:46 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'crash'
Dec 06 06:29:47 compute-1 ceph-mgr[82049]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 06:29:47 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'dashboard'
Dec 06 06:29:47 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:47.070+0000 7fae1addd140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 06:29:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e42 e42: 2 total, 2 up, 2 in
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:29:48 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: 3.b scrub starts
Dec 06 06:29:48 compute-1 ceph-mon[81689]: 3.b scrub ok
Dec 06 06:29:48 compute-1 ceph-mon[81689]: pgmap v138: 177 pgs: 57 peering, 62 unknown, 58 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: pgmap v139: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: 2.1 scrub starts
Dec 06 06:29:48 compute-1 ceph-mon[81689]: 2.1 scrub ok
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: 2.2 deep-scrub starts
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: 2.2 deep-scrub ok
Dec 06 06:29:48 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:29:48 compute-1 ceph-mon[81689]: pgmap v140: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: 2.3 scrub starts
Dec 06 06:29:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:48 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:29:48 compute-1 ceph-mon[81689]: fsmap cephfs:0
Dec 06 06:29:48 compute-1 ceph-mon[81689]: osdmap e41: 2 total, 2 up, 2 in
Dec 06 06:29:48 compute-1 ceph-mon[81689]: mgrmap e9: compute-0.sfzyix(active, since 3m)
Dec 06 06:29:48 compute-1 ceph-mon[81689]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:48 compute-1 ceph-mon[81689]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Dec 06 06:29:48 compute-1 ceph-mon[81689]:     fs cephfs is offline because no MDS is active for it.
Dec 06 06:29:48 compute-1 ceph-mon[81689]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:48 compute-1 ceph-mon[81689]:     fs cephfs has 0 MDS online, but wants 1
Dec 06 06:29:48 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 06 06:29:48 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'devicehealth'
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633359909s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.687965393s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633323669s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.687950134s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633421898s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.687950134s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633502007s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688201904s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633271217s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.687965393s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633262634s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.687950134s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633479118s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688201904s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633143425s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.687950134s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633011818s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.687965393s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633004189s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.687965393s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632991791s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.687965393s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632975578s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.687965393s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.c( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632949829s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688018799s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.c( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632931709s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688018799s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633093834s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688247681s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633070946s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688247681s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632801056s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688018799s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632781029s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688018799s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.e( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632738113s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688117981s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.e( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632709503s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688117981s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633028984s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688468933s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.633007050s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688468933s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632685661s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688247681s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632667542s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688247681s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.19( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632705688s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688354492s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.19( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632681847s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688354492s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632709503s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688468933s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632688522s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688468933s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632620811s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688476562s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632591248s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688476562s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632535934s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688598633s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.13( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632462502s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688537598s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.13( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632438660s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688537598s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632505417s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688598633s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.10( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.769130707s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.825401306s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.10( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.769106865s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.825401306s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.769064903s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.825393677s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.769016266s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.825393677s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.1( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632225037s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.688606262s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.768942833s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.825386047s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.1( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.632185936s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.688606262s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.768933296s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.825416565s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.768926620s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.825386047s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.768911362s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.825416565s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.768808365s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.825408936s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.768790245s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.825408936s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.771302223s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active+scrubbing pruub 100.827980042s@ [ 2.4:  ]  mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.790810585s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.847511292s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.790791512s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.847511292s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.790768623s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.847572327s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.790749550s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.847572327s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.790653229s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.847572327s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.790691376s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.847640991s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.790670395s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.847640991s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.790624619s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.847572327s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.1f( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.790907860s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.847946167s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.1f( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.790888786s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.847946167s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.770881653s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.828010559s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.770864487s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.828010559s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.1e( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.791406631s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.848571777s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.1e( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.791352272s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.848571777s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.791460991s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 100.848556519s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.791245461s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.848556519s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=14.771255493s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.827980042s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:48 compute-1 ceph-mgr[82049]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 06:29:48 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'diskprediction_local'
Dec 06 06:29:48 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:48.733+0000 7fae1addd140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 06:29:49 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 06 06:29:49 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 06 06:29:49 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]:   from numpy import show_config as show_numpy_config
Dec 06 06:29:49 compute-1 ceph-mgr[82049]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 06:29:49 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:49.316+0000 7fae1addd140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 06:29:49 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'influx'
Dec 06 06:29:49 compute-1 ceph-mgr[82049]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 06:29:49 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:49.569+0000 7fae1addd140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 06:29:49 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'insights'
Dec 06 06:29:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1019915529 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:29:49 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'iostat'
Dec 06 06:29:50 compute-1 ceph-mgr[82049]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 06:29:50 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'k8sevents'
Dec 06 06:29:50 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:50.080+0000 7fae1addd140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 06:29:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e43 e43: 2 total, 2 up, 2 in
Dec 06 06:29:50 compute-1 ceph-mon[81689]: mon.compute-1 calling monitor election
Dec 06 06:29:50 compute-1 ceph-mon[81689]: 2.3 scrub ok
Dec 06 06:29:50 compute-1 ceph-mon[81689]: 3.12 scrub starts
Dec 06 06:29:50 compute-1 ceph-mon[81689]: 3.12 scrub ok
Dec 06 06:29:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:50 compute-1 ceph-mon[81689]: pgmap v141: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:50 compute-1 ceph-mon[81689]: 5.18 scrub starts
Dec 06 06:29:50 compute-1 ceph-mon[81689]: 5.18 scrub ok
Dec 06 06:29:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:50 compute-1 ceph-mon[81689]: osdmap e42: 2 total, 2 up, 2 in
Dec 06 06:29:50 compute-1 ceph-mon[81689]: pgmap v143: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:50 compute-1 ceph-mon[81689]: 2.4 scrub starts
Dec 06 06:29:51 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'localpool'
Dec 06 06:29:52 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'mds_autoscaler'
Dec 06 06:29:52 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 06 06:29:52 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'mirroring'
Dec 06 06:29:52 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 06 06:29:52 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'nfs'
Dec 06 06:29:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:53 compute-1 ceph-mon[81689]: osdmap e43: 2 total, 2 up, 2 in
Dec 06 06:29:53 compute-1 ceph-mon[81689]: pgmap v145: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:53 compute-1 ceph-mgr[82049]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 06:29:53 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'orchestrator'
Dec 06 06:29:53 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:53.743+0000 7fae1addd140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 06:29:54 compute-1 ceph-mgr[82049]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 06:29:54 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:54.401+0000 7fae1addd140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 06:29:54 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'osd_perf_query'
Dec 06 06:29:54 compute-1 ceph-mgr[82049]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 06:29:54 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:54.655+0000 7fae1addd140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 06:29:54 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'osd_support'
Dec 06 06:29:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020052970 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:29:54 compute-1 ceph-mgr[82049]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 06:29:54 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'pg_autoscaler'
Dec 06 06:29:54 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:54.874+0000 7fae1addd140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 06:29:55 compute-1 ceph-mgr[82049]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 06:29:55 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:55.142+0000 7fae1addd140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 06:29:55 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'progress'
Dec 06 06:29:55 compute-1 ceph-mgr[82049]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 06:29:55 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'prometheus'
Dec 06 06:29:55 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:55.384+0000 7fae1addd140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 06:29:56 compute-1 ceph-mgr[82049]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 06:29:56 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:56.319+0000 7fae1addd140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 06:29:56 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'rbd_support'
Dec 06 06:29:56 compute-1 ceph-mgr[82049]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 06:29:56 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:56.606+0000 7fae1addd140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 06:29:56 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'restful'
Dec 06 06:29:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:57 compute-1 ceph-mon[81689]: pgmap v146: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:57 compute-1 ceph-mon[81689]: 3.1d scrub starts
Dec 06 06:29:57 compute-1 ceph-mon[81689]: 3.1d scrub ok
Dec 06 06:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1508508610' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:29:57 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'rgw'
Dec 06 06:29:58 compute-1 ceph-mgr[82049]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 06:29:58 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:29:58.056+0000 7fae1addd140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 06:29:58 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'rook'
Dec 06 06:30:00 compute-1 ceph-mgr[82049]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 06:30:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:30:00.179+0000 7fae1addd140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 06:30:00 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'selftest'
Dec 06 06:30:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:00 compute-1 ceph-mgr[82049]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 06:30:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:30:00.426+0000 7fae1addd140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 06:30:00 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'snap_schedule'
Dec 06 06:30:00 compute-1 ceph-mgr[82049]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 06:30:00 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'stats'
Dec 06 06:30:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:30:00.684+0000 7fae1addd140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 06:30:00 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.042203426s, txc = 0x55b5535e5200
Dec 06 06:30:00 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.042327404s, txc = 0x55b552efb500
Dec 06 06:30:00 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.042197227s, txc = 0x55b552c7f800
Dec 06 06:30:00 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.041730881s, txc = 0x55b55352af00
Dec 06 06:30:00 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.041583061s, txc = 0x55b5535eac00
Dec 06 06:30:00 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'status'
Dec 06 06:30:01 compute-1 ceph-mgr[82049]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 06:30:01 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'telegraf'
Dec 06 06:30:01 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:30:01.209+0000 7fae1addd140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 06:30:01 compute-1 ceph-mgr[82049]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 06:30:01 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'telemetry'
Dec 06 06:30:01 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:30:01.447+0000 7fae1addd140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 06:30:01 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.5 deep-scrub starts
Dec 06 06:30:01 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 06 06:30:02 compute-1 ceph-mgr[82049]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 06:30:02 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'test_orchestrator'
Dec 06 06:30:02 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:30:02.059+0000 7fae1addd140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 06:30:02 compute-1 ceph-mon[81689]: 3.17 deep-scrub starts
Dec 06 06:30:02 compute-1 ceph-mon[81689]: 3.17 deep-scrub ok
Dec 06 06:30:02 compute-1 ceph-mon[81689]: 3.18 scrub starts
Dec 06 06:30:02 compute-1 ceph-mon[81689]: 3.18 scrub ok
Dec 06 06:30:02 compute-1 ceph-mon[81689]: pgmap v147: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:30:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2371049915' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:30:02 compute-1 ceph-mon[81689]: Standby manager daemon compute-2.ytlehq started
Dec 06 06:30:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/740520273' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 06 06:30:02 compute-1 ceph-mon[81689]: pgmap v148: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:02 compute-1 ceph-mgr[82049]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 06:30:02 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'volumes'
Dec 06 06:30:02 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:30:02.729+0000 7fae1addd140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 06:30:03 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 06 06:30:03 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.5 deep-scrub ok
Dec 06 06:30:03 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 06 06:30:03 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 06 06:30:03 compute-1 ceph-mgr[82049]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 06:30:03 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:30:03.430+0000 7fae1addd140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 06:30:03 compute-1 ceph-mgr[82049]: mgr[py] Loading python module 'zabbix'
Dec 06 06:30:03 compute-1 ceph-mgr[82049]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 06:30:03 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-1-nmklwp[82045]: 2025-12-06T06:30:03.701+0000 7fae1addd140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 06:30:03 compute-1 ceph-mgr[82049]: ms_deliver_dispatch: unhandled message 0x56438121b1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec 06 06:30:03 compute-1 ceph-mgr[82049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 06:30:04 compute-1 ceph-mgr[82049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 06:30:05 compute-1 ceph-mon[81689]: 3.19 deep-scrub starts
Dec 06 06:30:05 compute-1 ceph-mon[81689]: 3.19 deep-scrub ok
Dec 06 06:30:05 compute-1 ceph-mon[81689]: 3.1b scrub starts
Dec 06 06:30:05 compute-1 ceph-mon[81689]: 3.1b scrub ok
Dec 06 06:30:05 compute-1 ceph-mon[81689]: pgmap v149: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:05 compute-1 ceph-mon[81689]: from='client.14283 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:30:05 compute-1 ceph-mon[81689]: overall HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:30:05 compute-1 ceph-mon[81689]: pgmap v150: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 06:30:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:05 compute-1 ceph-mon[81689]: Deploying daemon crash.compute-2 on compute-2
Dec 06 06:30:05 compute-1 ceph-mon[81689]: from='client.14289 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:30:05 compute-1 ceph-mon[81689]: 2.5 deep-scrub starts
Dec 06 06:30:05 compute-1 ceph-mon[81689]: 4.1a scrub starts
Dec 06 06:30:05 compute-1 ceph-mon[81689]: pgmap v151: 177 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 175 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:05 compute-1 ceph-mon[81689]: mgrmap e10: compute-0.sfzyix(active, since 3m), standbys: compute-2.ytlehq
Dec 06 06:30:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ytlehq", "id": "compute-2.ytlehq"}]: dispatch
Dec 06 06:30:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:06 compute-1 ceph-mon[81689]: from='client.14295 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:30:06 compute-1 ceph-mon[81689]: 3.1c scrub starts
Dec 06 06:30:06 compute-1 ceph-mon[81689]: 2.5 deep-scrub ok
Dec 06 06:30:06 compute-1 ceph-mon[81689]: 3.1c scrub ok
Dec 06 06:30:06 compute-1 ceph-mon[81689]: 4.1a scrub ok
Dec 06 06:30:06 compute-1 ceph-mon[81689]: Standby manager daemon compute-1.nmklwp started
Dec 06 06:30:06 compute-1 ceph-mon[81689]: pgmap v152: 177 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 175 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:06 compute-1 ceph-mon[81689]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:30:06 compute-1 ceph-mon[81689]: 3.1e scrub starts
Dec 06 06:30:06 compute-1 ceph-mon[81689]: 3.1e scrub ok
Dec 06 06:30:06 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 3m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:30:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr metadata", "who": "compute-1.nmklwp", "id": "compute-1.nmklwp"}]: dispatch
Dec 06 06:30:07 compute-1 ceph-mon[81689]: pgmap v153: 177 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 175 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/416954814' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:30:10 compute-1 ceph-mon[81689]: pgmap v154: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1169795211' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:30:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:10 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 06 06:30:11 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 06 06:30:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:11 compute-1 ceph-mon[81689]: 3.1f scrub starts
Dec 06 06:30:11 compute-1 ceph-mon[81689]: 3.1f scrub ok
Dec 06 06:30:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:11 compute-1 ceph-mon[81689]: pgmap v155: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:30:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:30:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:30:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/404259740' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 06 06:30:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:12 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 06 06:30:12 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 06 06:30:12 compute-1 ceph-mon[81689]: 5.3 scrub starts
Dec 06 06:30:12 compute-1 ceph-mon[81689]: 5.3 scrub ok
Dec 06 06:30:12 compute-1 ceph-mon[81689]: 7.1 scrub starts
Dec 06 06:30:12 compute-1 ceph-mon[81689]: 7.1 scrub ok
Dec 06 06:30:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e44 e44: 3 total, 2 up, 3 in
Dec 06 06:30:15 compute-1 ceph-mon[81689]: 4.1b scrub starts
Dec 06 06:30:15 compute-1 ceph-mon[81689]: 4.1b scrub ok
Dec 06 06:30:15 compute-1 ceph-mon[81689]: pgmap v156: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3873500984' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 06 06:30:15 compute-1 ceph-mon[81689]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]: dispatch
Dec 06 06:30:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1558451655' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]: dispatch
Dec 06 06:30:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:15 compute-1 ceph-mon[81689]: 5.5 scrub starts
Dec 06 06:30:15 compute-1 ceph-mon[81689]: 5.5 scrub ok
Dec 06 06:30:15 compute-1 ceph-mon[81689]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]': finished
Dec 06 06:30:15 compute-1 ceph-mon[81689]: osdmap e44: 3 total, 2 up, 3 in
Dec 06 06:30:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:15 compute-1 ceph-mon[81689]: pgmap v158: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1378465590' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 06 06:30:17 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Dec 06 06:30:17 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Dec 06 06:30:17 compute-1 ceph-mon[81689]: 5.6 deep-scrub starts
Dec 06 06:30:17 compute-1 ceph-mon[81689]: 5.6 deep-scrub ok
Dec 06 06:30:17 compute-1 ceph-mon[81689]: pgmap v159: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:18 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 06 06:30:18 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 06 06:30:18 compute-1 ceph-mon[81689]: 5.1a deep-scrub starts
Dec 06 06:30:18 compute-1 ceph-mon[81689]: 5.1a deep-scrub ok
Dec 06 06:30:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:19 compute-1 ceph-mon[81689]: 3.1a scrub starts
Dec 06 06:30:19 compute-1 ceph-mon[81689]: 3.1a scrub ok
Dec 06 06:30:19 compute-1 ceph-mon[81689]: pgmap v160: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:20 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 06 06:30:20 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 06 06:30:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:21 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 06 06:30:21 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 06 06:30:21 compute-1 ceph-mon[81689]: 4.2 scrub starts
Dec 06 06:30:21 compute-1 ceph-mon[81689]: 4.2 scrub ok
Dec 06 06:30:21 compute-1 ceph-mon[81689]: pgmap v161: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:21 compute-1 ceph-mon[81689]: 2.7 scrub starts
Dec 06 06:30:21 compute-1 ceph-mon[81689]: 2.7 scrub ok
Dec 06 06:30:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 06 06:30:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:22 compute-1 ceph-mon[81689]: Deploying daemon osd.2 on compute-2
Dec 06 06:30:22 compute-1 ceph-mon[81689]: 4.3 scrub starts
Dec 06 06:30:22 compute-1 ceph-mon[81689]: 4.3 scrub ok
Dec 06 06:30:22 compute-1 ceph-mon[81689]: 5.e scrub starts
Dec 06 06:30:22 compute-1 ceph-mon[81689]: 5.e scrub ok
Dec 06 06:30:22 compute-1 ceph-mon[81689]: pgmap v162: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:25 compute-1 ceph-mon[81689]: pgmap v163: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:25 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.d deep-scrub starts
Dec 06 06:30:25 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.d deep-scrub ok
Dec 06 06:30:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:26 compute-1 ceph-mon[81689]: 6.d deep-scrub starts
Dec 06 06:30:26 compute-1 ceph-mon[81689]: 6.d deep-scrub ok
Dec 06 06:30:26 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts
Dec 06 06:30:26 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok
Dec 06 06:30:27 compute-1 ceph-mon[81689]: 5.8 scrub starts
Dec 06 06:30:27 compute-1 ceph-mon[81689]: 5.8 scrub ok
Dec 06 06:30:27 compute-1 ceph-mon[81689]: pgmap v164: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:27 compute-1 ceph-mon[81689]: 4.4 deep-scrub starts
Dec 06 06:30:27 compute-1 ceph-mon[81689]: 4.4 deep-scrub ok
Dec 06 06:30:27 compute-1 ceph-mon[81689]: 2.8 deep-scrub starts
Dec 06 06:30:27 compute-1 ceph-mon[81689]: 2.8 deep-scrub ok
Dec 06 06:30:30 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 06 06:30:30 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 06 06:30:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:31 compute-1 ceph-mon[81689]: pgmap v165: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:33 compute-1 ceph-mon[81689]: 5.a scrub starts
Dec 06 06:30:33 compute-1 ceph-mon[81689]: 5.a scrub ok
Dec 06 06:30:33 compute-1 ceph-mon[81689]: pgmap v166: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:33 compute-1 ceph-mon[81689]: 5.1c scrub starts
Dec 06 06:30:33 compute-1 ceph-mon[81689]: 5.1c scrub ok
Dec 06 06:30:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:33 compute-1 ceph-mon[81689]: from='osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 06 06:30:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:33 compute-1 ceph-mon[81689]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 06 06:30:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e45 e45: 3 total, 2 up, 3 in
Dec 06 06:30:35 compute-1 ceph-mon[81689]: pgmap v167: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:35 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 06 06:30:35 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 06 06:30:36 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 06 06:30:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:36 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 06 06:30:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e46 e46: 3 total, 2 up, 3 in
Dec 06 06:30:37 compute-1 ceph-mon[81689]: purged_snaps scrub starts
Dec 06 06:30:37 compute-1 ceph-mon[81689]: purged_snaps scrub ok
Dec 06 06:30:37 compute-1 ceph-mon[81689]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 06 06:30:37 compute-1 ceph-mon[81689]: osdmap e45: 3 total, 2 up, 3 in
Dec 06 06:30:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:37 compute-1 ceph-mon[81689]: from='osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 06 06:30:37 compute-1 ceph-mon[81689]: pgmap v169: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:37 compute-1 ceph-mon[81689]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 06 06:30:37 compute-1 ceph-mon[81689]: 4.1 scrub starts
Dec 06 06:30:37 compute-1 ceph-mon[81689]: 4.1 scrub ok
Dec 06 06:30:37 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.b( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968671799s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active+scrubbing pruub 149.370117188s@ [ 2.b:  ]  mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968488693s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.369949341s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968466759s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.369949341s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968466759s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369949341s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.b( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968671799s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370117188s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968488693s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369949341s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967803001s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.369628906s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967716217s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.369552612s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967803001s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369628906s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.f( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968182564s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.370040894s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967716217s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369552612s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.f( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968182564s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370040894s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967565536s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.369537354s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968008041s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.370056152s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967565536s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369537354s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967920303s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.370025635s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968008041s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370056152s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967920303s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370025635s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[5.4( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967132568s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.369369507s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[5.4( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967132568s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369369507s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[4.1f( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.966769218s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.369064331s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968071938s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.370452881s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[4.1f( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.966769218s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369064331s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968071938s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370452881s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.12( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967986107s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.370422363s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.12( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967986107s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370422363s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967891693s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.370437622s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[4.15( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968369484s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.370925903s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967891693s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370437622s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[4.15( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.968369484s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370925903s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967000008s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.369613647s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967000008s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369613647s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[4.1( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.965954781s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.368698120s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.966578484s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.369323730s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.966578484s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369323730s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[4.1( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.965954781s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.368698120s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.966227531s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.369049072s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[5.e( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.965972900s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.368820190s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.966227531s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369049072s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.18( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967878342s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.370788574s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[5.e( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.965972900s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.368820190s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.18( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967878342s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370788574s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.1d( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967774391s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.370758057s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.966074944s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.369094849s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.1c( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967734337s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.370788574s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.1d( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967774391s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370758057s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.966074944s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369094849s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[2.1c( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.967734337s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370788574s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.965488434s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 149.368591309s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=13.965488434s) [] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.368591309s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:38 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Dec 06 06:30:40 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Dec 06 06:30:41 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 06 06:30:41 compute-1 ceph-mon[81689]: pgmap v170: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:41 compute-1 ceph-mon[81689]: 4.6 scrub starts
Dec 06 06:30:41 compute-1 ceph-mon[81689]: 4.6 scrub ok
Dec 06 06:30:41 compute-1 ceph-mon[81689]: 5.1f scrub starts
Dec 06 06:30:41 compute-1 ceph-mon[81689]: 5.1f scrub ok
Dec 06 06:30:41 compute-1 ceph-mon[81689]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Dec 06 06:30:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:41 compute-1 ceph-mon[81689]: osdmap e46: 3 total, 2 up, 3 in
Dec 06 06:30:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oieczf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 06:30:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:41 compute-1 ceph-mon[81689]: 2.b scrub starts
Dec 06 06:30:41 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 06 06:30:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:43 compute-1 ceph-mon[81689]: pgmap v172: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oieczf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 06:30:43 compute-1 ceph-mon[81689]: 6.2 deep-scrub starts
Dec 06 06:30:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:43 compute-1 ceph-mon[81689]: pgmap v173: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:43 compute-1 ceph-mon[81689]: 6.2 deep-scrub ok
Dec 06 06:30:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:43 compute-1 ceph-mon[81689]: 6.5 scrub starts
Dec 06 06:30:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Dec 06 06:30:44 compute-1 ceph-mon[81689]: 6.5 scrub ok
Dec 06 06:30:44 compute-1 ceph-mon[81689]: pgmap v174: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:44 compute-1 ceph-mon[81689]: 5.c scrub starts
Dec 06 06:30:44 compute-1 ceph-mon[81689]: 5.c scrub ok
Dec 06 06:30:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:44 compute-1 ceph-mon[81689]: Deploying daemon rgw.rgw.compute-2.oieczf on compute-2
Dec 06 06:30:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:44 compute-1 ceph-mon[81689]: OSD bench result of 5358.078177 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 06 06:30:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671799183s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369949341s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.b( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671891689s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370117188s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.b( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671836853s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370117188s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.f( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671664715s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370040894s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.f( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671582699s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370040894s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671404839s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369949341s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[4.9( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671010017s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369628906s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.9( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671288013s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369949341s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[4.9( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670946121s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369628906s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[4.8( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670828342s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369537354s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[4.8( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670803547s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369537354s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671273708s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370056152s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.e( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670790672s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369552612s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671251297s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370056152s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.5( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671178341s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370025635s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.5( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671156883s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370025635s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.e( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670693874s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369552612s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[5.4( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670410156s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369369507s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[5.4( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670388699s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369369507s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[4.1f( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670014858s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369064331s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.15( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670165062s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369323730s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[4.1f( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.669985294s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369064331s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671268940s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370452881s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.15( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670136452s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369323730s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671244144s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370452881s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.12( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671096325s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370422363s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.12( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671054363s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370422363s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671031475s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370437622s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671010494s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370437622s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[4.15( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671486855s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370925903s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[4.15( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671455383s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370925903s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.11( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670048237s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369613647s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.11( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670021534s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369613647s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[4.1( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.669030190s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.368698120s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.9( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.671235561s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369949341s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[4.1( empty local-lis/les=40/41 n=0 ec=34/16 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.669006824s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.368698120s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[5.e( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.668996334s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.368820190s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[5.e( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.668976307s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.368820190s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.1a( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.669094563s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369049072s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.18( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670814514s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370788574s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.1d( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670812130s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370758057s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[5.1a( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.669115067s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369094849s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.1d( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670682907s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370758057s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[5.1a( empty local-lis/les=40/41 n=0 ec=34/18 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.668960094s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369094849s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.18( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670722961s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370788574s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.1a( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.669041634s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.369049072s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.1c( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670577526s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370788574s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[2.1c( empty local-lis/les=40/41 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.670553684s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.370788574s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.1d( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.668305874s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.368591309s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:47 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 47 pg[3.1d( empty local-lis/les=40/41 n=0 ec=32/14 lis/c=40/40 les/c/f=41/41/0 sis=47 pruub=4.668283463s) [2] r=-1 lpr=47 pi=[40,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 149.368591309s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:48 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 06 06:30:48 compute-1 ceph-mon[81689]: pgmap v175: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:48 compute-1 ceph-mon[81689]: 4.7 scrub starts
Dec 06 06:30:48 compute-1 ceph-mon[81689]: 4.7 scrub ok
Dec 06 06:30:48 compute-1 ceph-mon[81689]: osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493] boot
Dec 06 06:30:48 compute-1 ceph-mon[81689]: osdmap e47: 3 total, 3 up, 3 in
Dec 06 06:30:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:48 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 06 06:30:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Dec 06 06:30:51 compute-1 ceph-mon[81689]: pgmap v177: 177 pgs: 27 peering, 150 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:51 compute-1 ceph-mon[81689]: 4.b scrub starts
Dec 06 06:30:51 compute-1 ceph-mon[81689]: 4.b scrub ok
Dec 06 06:30:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:51 compute-1 ceph-mon[81689]: 6.3 scrub starts
Dec 06 06:30:51 compute-1 ceph-mon[81689]: pgmap v178: 177 pgs: 27 peering, 150 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:52 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 06 06:30:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Dec 06 06:30:53 compute-1 ceph-mon[81689]: 6.3 scrub ok
Dec 06 06:30:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:53 compute-1 ceph-mon[81689]: 5.14 scrub starts
Dec 06 06:30:53 compute-1 ceph-mon[81689]: pgmap v179: 177 pgs: 52 peering, 125 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:53 compute-1 ceph-mon[81689]: 5.14 scrub ok
Dec 06 06:30:53 compute-1 ceph-mon[81689]: osdmap e48: 3 total, 3 up, 3 in
Dec 06 06:30:53 compute-1 sudo[82086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:30:53 compute-1 sudo[82086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:53 compute-1 sudo[82086]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:53 compute-1 sudo[82111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:30:53 compute-1 sudo[82111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:53 compute-1 sudo[82111]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:53 compute-1 sudo[82136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:30:53 compute-1 sudo[82136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:53 compute-1 sudo[82136]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:53 compute-1 sudo[82161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:30:53 compute-1 sudo[82161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:54 compute-1 podman[82224]: 2025-12-06 06:30:54.184553035 +0000 UTC m=+0.020658644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:30:54 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 06 06:30:55 compute-1 podman[82224]: 2025-12-06 06:30:55.661577212 +0000 UTC m=+1.497682791 container create b97b2b5203746c48dc18337492b48c6ec4914367d6d9f6b9f6456781ff87c4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Dec 06 06:30:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 06 06:30:55 compute-1 ceph-mon[81689]: 5.17 scrub starts
Dec 06 06:30:55 compute-1 ceph-mon[81689]: 5.17 scrub ok
Dec 06 06:30:55 compute-1 ceph-mon[81689]: 3.9 scrub starts
Dec 06 06:30:55 compute-1 ceph-mon[81689]: pgmap v181: 178 pgs: 1 unknown, 52 peering, 125 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:55 compute-1 ceph-mon[81689]: 3.9 scrub ok
Dec 06 06:30:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dmyhav", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 06:30:55 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 06 06:30:55 compute-1 ceph-mon[81689]: 3.10 scrub starts
Dec 06 06:30:55 compute-1 ceph-mon[81689]: 4.f scrub starts
Dec 06 06:30:55 compute-1 ceph-mon[81689]: 4.f scrub ok
Dec 06 06:30:55 compute-1 ceph-mon[81689]: osdmap e49: 3 total, 3 up, 3 in
Dec 06 06:30:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dmyhav", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 06:30:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:55 compute-1 ceph-mon[81689]: 4.1c scrub starts
Dec 06 06:30:55 compute-1 ceph-mon[81689]: 4.1c scrub ok
Dec 06 06:30:55 compute-1 systemd[1]: Started libpod-conmon-b97b2b5203746c48dc18337492b48c6ec4914367d6d9f6b9f6456781ff87c4d3.scope.
Dec 06 06:30:55 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:30:55 compute-1 podman[82224]: 2025-12-06 06:30:55.755774089 +0000 UTC m=+1.591879688 container init b97b2b5203746c48dc18337492b48c6ec4914367d6d9f6b9f6456781ff87c4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:30:55 compute-1 podman[82224]: 2025-12-06 06:30:55.764566928 +0000 UTC m=+1.600672507 container start b97b2b5203746c48dc18337492b48c6ec4914367d6d9f6b9f6456781ff87c4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:30:55 compute-1 podman[82224]: 2025-12-06 06:30:55.768636109 +0000 UTC m=+1.604741718 container attach b97b2b5203746c48dc18337492b48c6ec4914367d6d9f6b9f6456781ff87c4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:30:55 compute-1 beautiful_wing[82240]: 167 167
Dec 06 06:30:55 compute-1 systemd[1]: libpod-b97b2b5203746c48dc18337492b48c6ec4914367d6d9f6b9f6456781ff87c4d3.scope: Deactivated successfully.
Dec 06 06:30:55 compute-1 podman[82224]: 2025-12-06 06:30:55.771984641 +0000 UTC m=+1.608090220 container died b97b2b5203746c48dc18337492b48c6ec4914367d6d9f6b9f6456781ff87c4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:30:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-7fdbe10cf17b76f89ee1abcc6d2b20180f7d58c36cb6ae68c2c9e36d9c5d754d-merged.mount: Deactivated successfully.
Dec 06 06:30:56 compute-1 podman[82224]: 2025-12-06 06:30:56.226924497 +0000 UTC m=+2.063030076 container remove b97b2b5203746c48dc18337492b48c6ec4914367d6d9f6b9f6456781ff87c4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:30:56 compute-1 systemd[1]: libpod-conmon-b97b2b5203746c48dc18337492b48c6ec4914367d6d9f6b9f6456781ff87c4d3.scope: Deactivated successfully.
Dec 06 06:30:56 compute-1 systemd[1]: Reloading.
Dec 06 06:30:56 compute-1 systemd-sysv-generator[82287]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:30:56 compute-1 systemd-rc-local-generator[82282]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:30:56 compute-1 systemd[1]: Reloading.
Dec 06 06:30:56 compute-1 systemd-rc-local-generator[82326]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:30:56 compute-1 systemd-sysv-generator[82330]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:30:56 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 06 06:30:56 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 06 06:30:56 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.dmyhav for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:30:57 compute-1 ceph-mon[81689]: Deploying daemon rgw.rgw.compute-1.dmyhav on compute-1
Dec 06 06:30:57 compute-1 ceph-mon[81689]: pgmap v183: 178 pgs: 1 unknown, 52 peering, 125 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:57 compute-1 ceph-mon[81689]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:30:57 compute-1 ceph-mon[81689]: 3.10 scrub ok
Dec 06 06:30:57 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 06 06:30:57 compute-1 ceph-mon[81689]: osdmap e50: 3 total, 3 up, 3 in
Dec 06 06:30:57 compute-1 ceph-mon[81689]: pgmap v185: 178 pgs: 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s
Dec 06 06:30:57 compute-1 podman[82384]: 2025-12-06 06:30:57.14008768 +0000 UTC m=+0.020958302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:30:57 compute-1 podman[82384]: 2025-12-06 06:30:57.51440431 +0000 UTC m=+0.395274912 container create 27a3b6c61032b948d800b083ac7f0738de51b00e95da2dcd87d12eeaa6180954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-1-dmyhav, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:30:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Dec 06 06:30:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a616f134320f93df2d57bd2a361f1ad1f59b0e2574937d2afaf9e6f2ee96623b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a616f134320f93df2d57bd2a361f1ad1f59b0e2574937d2afaf9e6f2ee96623b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a616f134320f93df2d57bd2a361f1ad1f59b0e2574937d2afaf9e6f2ee96623b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a616f134320f93df2d57bd2a361f1ad1f59b0e2574937d2afaf9e6f2ee96623b/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.dmyhav supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:57 compute-1 podman[82384]: 2025-12-06 06:30:57.849195622 +0000 UTC m=+0.730066244 container init 27a3b6c61032b948d800b083ac7f0738de51b00e95da2dcd87d12eeaa6180954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-1-dmyhav, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:30:57 compute-1 podman[82384]: 2025-12-06 06:30:57.85350627 +0000 UTC m=+0.734376872 container start 27a3b6c61032b948d800b083ac7f0738de51b00e95da2dcd87d12eeaa6180954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-1-dmyhav, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:30:57 compute-1 bash[82384]: 27a3b6c61032b948d800b083ac7f0738de51b00e95da2dcd87d12eeaa6180954
Dec 06 06:30:57 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.dmyhav for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:30:57 compute-1 sudo[82161]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:57 compute-1 radosgw[82404]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:30:57 compute-1 radosgw[82404]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Dec 06 06:30:57 compute-1 radosgw[82404]: framework: beast
Dec 06 06:30:57 compute-1 radosgw[82404]: framework conf key: endpoint, val: 192.168.122.101:8082
Dec 06 06:30:57 compute-1 radosgw[82404]: init_numa not setting numa affinity
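radosgw logs its effective frontend at startup: the beast listener binds 192.168.122.101:8082, and init_numa is a no-op because no NUMA affinity was configured. The frontend string comes from the rgw_frontends option cephadm stores for this daemon; it can be inspected with (entity name taken from the log, option name is the standard one):

    ceph config get client.rgw.rgw.compute-1.dmyhav rgw_frontends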
Dec 06 06:30:58 compute-1 ceph-mon[81689]: 4.10 scrub starts
Dec 06 06:30:58 compute-1 ceph-mon[81689]: 4.10 scrub ok
Dec 06 06:30:58 compute-1 ceph-mon[81689]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:30:58 compute-1 ceph-mon[81689]: 7.7 scrub starts
Dec 06 06:30:58 compute-1 ceph-mon[81689]: 7.7 scrub ok
Dec 06 06:30:58 compute-1 ceph-mon[81689]: 5.19 scrub starts
Dec 06 06:30:58 compute-1 ceph-mon[81689]: 5.19 scrub ok
Dec 06 06:30:58 compute-1 ceph-mon[81689]: osdmap e51: 3 total, 3 up, 3 in
Dec 06 06:30:58 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 06 06:30:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 06 06:30:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Dec 06 06:30:59 compute-1 ceph-mon[81689]: pgmap v187: 179 pgs: 1 unknown, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s
Dec 06 06:30:59 compute-1 ceph-mon[81689]: 4.11 scrub starts
Dec 06 06:30:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:59 compute-1 ceph-mon[81689]: 4.11 scrub ok
Dec 06 06:30:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wqlami", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 06:30:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wqlami", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 06:30:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:59 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 06 06:30:59 compute-1 ceph-mon[81689]: osdmap e52: 3 total, 3 up, 3 in
Dec 06 06:30:59 compute-1 ceph-mon[81689]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:30:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Dec 06 06:30:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Dec 06 06:30:59 compute-1 ceph-mon[81689]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/558176623' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
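RGW creates its pools lazily, and each new pool raises POOL_APP_NOT_ENABLED until an application tag is attached; the gateways send the enable command themselves, which is why the same mon_command arrives from several client.rgw.* identities at once. The equivalent manual command:

    ceph osd pool application enable default.rgw.control rgw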
Dec 06 06:30:59 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 53 pg[10.0( empty local-lis/les=0/0 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:01 compute-1 anacron[30936]: Job `cron.weekly' started
Dec 06 06:31:01 compute-1 anacron[30936]: Job `cron.weekly' terminated
Dec 06 06:31:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Dec 06 06:31:02 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 06 06:31:02 compute-1 ceph-mon[81689]: Deploying daemon rgw.rgw.compute-0.wqlami on compute-0
Dec 06 06:31:02 compute-1 ceph-mon[81689]: 4.19 scrub starts
Dec 06 06:31:02 compute-1 ceph-mon[81689]: 4.19 scrub ok
Dec 06 06:31:02 compute-1 ceph-mon[81689]: osdmap e53: 3 total, 3 up, 3 in
Dec 06 06:31:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1768129791' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:31:02 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:31:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/558176623' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:31:02 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:31:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:31:02 compute-1 ceph-mon[81689]: pgmap v190: 180 pgs: 2 unknown, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 922 B/s rd, 922 B/s wr, 1 op/s
Dec 06 06:31:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:02 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 06 06:31:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
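The recurring _set_new_cache_sizes lines are the monitor re-splitting its cache memory budget (cache_size ≈ 0.95 GiB here) between incremental-map, full-map and key-value allocations. Attributing this to the mon memory autotuner and its mon_memory_target option is an inference from the numbers, not something the log states; the option can be read with:

    ceph config get mon mon_memory_target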
Dec 06 06:31:02 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 06 06:31:03 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 54 pg[10.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:03 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 06 06:31:04 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 06 06:31:04 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 06 06:31:04 compute-1 ceph-mon[81689]: 4.12 scrub starts
Dec 06 06:31:04 compute-1 ceph-mon[81689]: 4.12 scrub ok
Dec 06 06:31:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1768129791' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 06:31:04 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 06:31:04 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 06:31:04 compute-1 ceph-mon[81689]: osdmap e54: 3 total, 3 up, 3 in
Dec 06 06:31:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Dec 06 06:31:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Dec 06 06:31:04 compute-1 ceph-mon[81689]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/758001210' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:04 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Dec 06 06:31:05 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Dec 06 06:31:05 compute-1 ceph-mon[81689]: 3.13 scrub starts
Dec 06 06:31:05 compute-1 ceph-mon[81689]: pgmap v192: 180 pgs: 1 creating+peering, 1 active+clean+scrubbing, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 2 op/s
Dec 06 06:31:05 compute-1 ceph-mon[81689]: 5.1d scrub starts
Dec 06 06:31:05 compute-1 ceph-mon[81689]: 5.1d scrub ok
Dec 06 06:31:05 compute-1 ceph-mon[81689]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:31:05 compute-1 ceph-mon[81689]: 3.13 scrub ok
Dec 06 06:31:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:05 compute-1 ceph-mon[81689]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 06:31:05 compute-1 ceph-mon[81689]: 4.1d scrub starts
Dec 06 06:31:05 compute-1 ceph-mon[81689]: 4.1d scrub ok
Dec 06 06:31:05 compute-1 ceph-mon[81689]: 5.15 scrub starts
Dec 06 06:31:05 compute-1 ceph-mon[81689]: 5.15 scrub ok
Dec 06 06:31:05 compute-1 ceph-mon[81689]: 3.14 scrub starts
Dec 06 06:31:05 compute-1 ceph-mon[81689]: 3.14 scrub ok
Dec 06 06:31:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:05 compute-1 ceph-mon[81689]: pgmap v193: 180 pgs: 1 creating+peering, 1 active+clean+scrubbing, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 1 op/s
Dec 06 06:31:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:05 compute-1 ceph-mon[81689]: osdmap e55: 3 total, 3 up, 3 in
Dec 06 06:31:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/758001210' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:05 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Dec 06 06:31:07 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 06 06:31:07 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 06 06:31:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:09 compute-1 ceph-mon[81689]: 4.18 deep-scrub starts
Dec 06 06:31:09 compute-1 ceph-mon[81689]: 4.18 deep-scrub ok
Dec 06 06:31:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.tjfgow", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 06:31:09 compute-1 ceph-mon[81689]: pgmap v195: 181 pgs: 2 creating+peering, 1 active+clean+scrubbing, 178 active+clean; 453 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 5.3 KiB/s wr, 130 op/s
Dec 06 06:31:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Dec 06 06:31:09 compute-1 ceph-mon[81689]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/758001210' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
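pg_autoscale_bias tells the PG autoscaler to give a pool more placement groups than its byte count alone would justify; RGW requests a bias of 4 for its metadata pools because they are small but omap-heavy. By hand the same change would be:

    ceph osd pool set default.rgw.meta pg_autoscale_bias 4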
Dec 06 06:31:10 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 06 06:31:10 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 06 06:31:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Dec 06 06:31:10 compute-1 ceph-mon[81689]: 2.1b scrub starts
Dec 06 06:31:10 compute-1 ceph-mon[81689]: 2.1b scrub ok
Dec 06 06:31:10 compute-1 ceph-mon[81689]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:31:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 06:31:10 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 06:31:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 06:31:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.tjfgow", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 06:31:10 compute-1 ceph-mon[81689]: osdmap e56: 3 total, 3 up, 3 in
Dec 06 06:31:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 06:31:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 06:31:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:31:10 compute-1 ceph-mon[81689]: Deploying daemon mds.cephfs.compute-2.tjfgow on compute-2
Dec 06 06:31:10 compute-1 ceph-mon[81689]: 4.13 scrub starts
Dec 06 06:31:10 compute-1 ceph-mon[81689]: 4.13 scrub ok
Dec 06 06:31:10 compute-1 ceph-mon[81689]: pgmap v197: 181 pgs: 1 creating+peering, 180 active+clean; 453 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 5.0 KiB/s wr, 120 op/s
Dec 06 06:31:10 compute-1 ceph-mon[81689]: 4.16 scrub starts
Dec 06 06:31:10 compute-1 ceph-mon[81689]: 4.16 scrub ok
Dec 06 06:31:10 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.11 deep-scrub starts
Dec 06 06:31:10 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.11 deep-scrub ok
Dec 06 06:31:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:12 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 06 06:31:13 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 06 06:31:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Dec 06 06:31:13 compute-1 radosgw[82404]: LDAP not started since no server URIs were provided in the configuration.
Dec 06 06:31:13 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-1-dmyhav[82400]: 2025-12-06T06:31:13.747+0000 7fc86aa71940 -1 LDAP not started since no server URIs were provided in the configuration.
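The LDAP notice is benign: RGW only starts its LDAP auth engine when server URIs are configured, and none are here. The relevant option (standard name; the empty value is inferred from the message) can be checked with:

    ceph config get client.rgw.rgw.compute-1.dmyhav rgw_ldap_uri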
Dec 06 06:31:13 compute-1 radosgw[82404]: framework: beast
Dec 06 06:31:13 compute-1 radosgw[82404]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 06 06:31:13 compute-1 radosgw[82404]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 06 06:31:13 compute-1 radosgw[82404]: starting handler: beast
Dec 06 06:31:13 compute-1 radosgw[82404]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:31:13 compute-1 radosgw[82404]: mgrc service_daemon_register rgw.24131 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.dmyhav,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=22f3b99e-039b-412d-b524-6d79a2ee4dad,zone_name=default,zonegroup_id=3605fdfe-bab9-40f4-83c5-5b52927c1749,zonegroup_name=default}
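service_daemon_register is the gateway announcing itself to the mgr, which is what makes the RGW, its default zone/zonegroup and the beast endpoint visible in the cluster service map. The registered metadata can be dumped with:

    ceph service dump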
Dec 06 06:31:14 compute-1 ceph-mon[81689]: 6.1 scrub starts
Dec 06 06:31:14 compute-1 ceph-mon[81689]: 6.1 scrub ok
Dec 06 06:31:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/758001210' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 06:31:14 compute-1 ceph-mon[81689]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:31:14 compute-1 ceph-mon[81689]: 5.1 scrub starts
Dec 06 06:31:14 compute-1 ceph-mon[81689]: 5.1 scrub ok
Dec 06 06:31:14 compute-1 ceph-mon[81689]: pgmap v198: 181 pgs: 1 creating+peering, 180 active+clean; 453 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 4.2 KiB/s wr, 103 op/s
Dec 06 06:31:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 06:31:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 06:31:14 compute-1 ceph-mon[81689]: osdmap e57: 3 total, 3 up, 3 in
Dec 06 06:31:14 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 06:31:14 compute-1 ceph-mon[81689]: 2.11 deep-scrub starts
Dec 06 06:31:14 compute-1 ceph-mon[81689]: 2.11 deep-scrub ok
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 5.0 deep-scrub starts
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 5.0 deep-scrub ok
Dec 06 06:31:15 compute-1 ceph-mon[81689]: pgmap v200: 181 pgs: 181 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 105 KiB/s rd, 4.4 KiB/s wr, 211 op/s
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 5.1e scrub starts
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 5.1e scrub ok
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 3.0 scrub starts
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 5.1b scrub starts
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 5.1b scrub ok
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 4.17 deep-scrub starts
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 4.17 deep-scrub ok
Dec 06 06:31:15 compute-1 ceph-mon[81689]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 06:31:15 compute-1 ceph-mon[81689]: osdmap e58: 3 total, 3 up, 3 in
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 3.0 scrub ok
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 5.d scrub starts
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 5.d scrub ok
Dec 06 06:31:15 compute-1 ceph-mon[81689]: pgmap v202: 181 pgs: 181 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 111 op/s
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 5.b scrub starts
Dec 06 06:31:15 compute-1 ceph-mon[81689]: 5.b scrub ok
Dec 06 06:31:15 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 06 06:31:15 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 06 06:31:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e3 new map
Dec 06 06:31:17 compute-1 ceph-mon[81689]: 5.16 scrub starts
Dec 06 06:31:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:29:04.228395+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.tjfgow{-1:24157} state up:standby seq 1 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
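The peon prints the full FSMap for each MDS map epoch it learns; e3 shows filesystem 'cephfs' with no ranks assigned yet and the newly deployed mds.cephfs.compute-2.tjfgow still up:standby. The same map is available on demand:

    ceph fs dump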
Dec 06 06:31:17 compute-1 ceph-mon[81689]: 5.16 scrub ok
Dec 06 06:31:17 compute-1 ceph-mon[81689]: pgmap v203: 181 pgs: 181 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 171 KiB/s rd, 255 B/s wr, 280 op/s
Dec 06 06:31:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e4 new map
Dec 06 06:31:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:31:17.652602+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.tjfgow{0:24157} state up:creating seq 1 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Dec 06 06:31:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:17 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 06 06:31:17 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 06 06:31:18 compute-1 ceph-mon[81689]: 2.a scrub starts
Dec 06 06:31:18 compute-1 ceph-mon[81689]: 2.a scrub ok
Dec 06 06:31:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qqwnku", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 06:31:18 compute-1 ceph-mon[81689]: mds.? [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] up:boot
Dec 06 06:31:18 compute-1 ceph-mon[81689]: daemon mds.cephfs.compute-2.tjfgow assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 06 06:31:18 compute-1 ceph-mon[81689]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 06 06:31:18 compute-1 ceph-mon[81689]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 06 06:31:18 compute-1 ceph-mon[81689]: Cluster is now healthy
Dec 06 06:31:18 compute-1 ceph-mon[81689]: fsmap cephfs:0 1 up:standby
Dec 06 06:31:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.tjfgow"}]: dispatch
Dec 06 06:31:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qqwnku", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 06:31:18 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:creating}
Dec 06 06:31:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:31:18 compute-1 ceph-mon[81689]: Deploying daemon mds.cephfs.compute-0.qqwnku on compute-0
Dec 06 06:31:18 compute-1 ceph-mon[81689]: daemon mds.cephfs.compute-2.tjfgow is now active in filesystem cephfs as rank 0
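With max_mds 1 and the standby promoted through up:creating to rank 0, the filesystem is serviceable, which is what clears MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX above. A compact view of ranks and standbys:

    ceph fs status cephfs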
Dec 06 06:31:18 compute-1 ceph-mon[81689]: 3.16 scrub starts
Dec 06 06:31:18 compute-1 ceph-mon[81689]: 3.16 scrub ok
Dec 06 06:31:18 compute-1 ceph-mon[81689]: pgmap v204: 181 pgs: 181 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 171 KiB/s rd, 255 B/s wr, 280 op/s
Dec 06 06:31:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e5 new map
Dec 06 06:31:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:31:18.697695+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Dec 06 06:31:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Dec 06 06:31:18 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 06 06:31:18 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 06 06:31:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Dec 06 06:31:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:20 compute-1 ceph-mon[81689]: osdmap e59: 3 total, 3 up, 3 in
Dec 06 06:31:20 compute-1 ceph-mon[81689]: mds.? [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] up:active
Dec 06 06:31:20 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active}
Dec 06 06:31:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:20 compute-1 ceph-mon[81689]: 5.10 scrub starts
Dec 06 06:31:20 compute-1 ceph-mon[81689]: 5.10 scrub ok
Dec 06 06:31:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:22 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 06 06:31:22 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 06 06:31:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e6 new map
Dec 06 06:31:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:31:18.697695+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.qqwnku{-1:14385} state up:standby seq 1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:31:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:23 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 06 06:31:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e7 new map
Dec 06 06:31:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:31:18.697695+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.qqwnku{-1:14385} state up:standby seq 1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
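The only delta between epochs e6 and e7 is standby_count_wanted moving from 0 to 1, recorded as the orchestrator asks the cluster to keep one warm standby for cephfs. The equivalent CLI is a reconstruction; the log does not show the originating command:

    ceph fs set cephfs standby_count_wanted 1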
Dec 06 06:31:24 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 06 06:31:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Dec 06 06:31:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:24 compute-1 ceph-mon[81689]: pgmap v206: 181 pgs: 181 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 255 B/s wr, 178 op/s
Dec 06 06:31:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:24 compute-1 ceph-mon[81689]: 4.1e scrub starts
Dec 06 06:31:24 compute-1 ceph-mon[81689]: 4.1e scrub ok
Dec 06 06:31:24 compute-1 ceph-mon[81689]: 2.d scrub starts
Dec 06 06:31:24 compute-1 ceph-mon[81689]: 2.d scrub ok
Dec 06 06:31:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:24 compute-1 ceph-mon[81689]: osdmap e60: 3 total, 3 up, 3 in
Dec 06 06:31:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:24 compute-1 ceph-mon[81689]: mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:boot
Dec 06 06:31:24 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active} 1 up:standby
Dec 06 06:31:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.qqwnku"}]: dispatch
Dec 06 06:31:24 compute-1 sudo[83020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:31:24 compute-1 sudo[83020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:31:24 compute-1 sudo[83020]: pam_unix(sudo:session): session closed for user root
Dec 06 06:31:24 compute-1 sudo[83045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:31:24 compute-1 sudo[83045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:31:24 compute-1 sudo[83045]: pam_unix(sudo:session): session closed for user root
Dec 06 06:31:24 compute-1 sudo[83070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:31:24 compute-1 sudo[83070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:31:24 compute-1 sudo[83070]: pam_unix(sudo:session): session closed for user root
Dec 06 06:31:24 compute-1 sudo[83095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:31:24 compute-1 sudo[83095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
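These sudo entries show the mgr's cephadm module driving the host as ceph-admin: probe commands (/bin/true, which python3) followed by the content-addressed cephadm binary under /var/lib/ceph/<fsid>/ run as root with _orch deploy. The host-side daemon inventory it maintains can be listed with:

    sudo cephadm ls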
Dec 06 06:31:24 compute-1 podman[83158]: 2025-12-06 06:31:24.564597747 +0000 UTC m=+0.025808355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:31:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Dec 06 06:31:25 compute-1 podman[83158]: 2025-12-06 06:31:25.44769379 +0000 UTC m=+0.908904368 container create 675d863e194d8e247be180c62e4ce975bb4275248667e8da5a24ce0303ce3fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 06:31:26 compute-1 ceph-mon[81689]: 2.c scrub starts
Dec 06 06:31:26 compute-1 ceph-mon[81689]: 2.c scrub ok
Dec 06 06:31:26 compute-1 ceph-mon[81689]: pgmap v208: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 2.0 KiB/s wr, 184 op/s
Dec 06 06:31:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:26 compute-1 ceph-mon[81689]: 7.1d deep-scrub starts
Dec 06 06:31:26 compute-1 ceph-mon[81689]: 7.1d deep-scrub ok
Dec 06 06:31:26 compute-1 ceph-mon[81689]: 5.11 scrub starts
Dec 06 06:31:26 compute-1 ceph-mon[81689]: 5.11 scrub ok
Dec 06 06:31:26 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active} 1 up:standby
Dec 06 06:31:26 compute-1 ceph-mon[81689]: 2.14 scrub starts
Dec 06 06:31:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:26 compute-1 ceph-mon[81689]: osdmap e61: 3 total, 3 up, 3 in
Dec 06 06:31:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vsxbzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 06:31:26 compute-1 ceph-mon[81689]: 2.14 scrub ok
Dec 06 06:31:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vsxbzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 06:31:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:31:26 compute-1 ceph-mon[81689]: Deploying daemon mds.cephfs.compute-1.vsxbzt on compute-1
Dec 06 06:31:26 compute-1 ceph-mon[81689]: pgmap v210: 212 pgs: 31 unknown, 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s wr, 7 op/s
Dec 06 06:31:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
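In this audit stream, pg_num is the target the autoscaler set, while pg_num_actual is the value the mgr walks upward in steps so PG splits happen gradually; both appear per pool while the counts converge. Current and target PG counts are shown side by side in:

    ceph osd pool ls detail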
Dec 06 06:31:26 compute-1 ceph-mon[81689]: 6.4 scrub starts
Dec 06 06:31:26 compute-1 ceph-mon[81689]: 6.4 scrub ok
Dec 06 06:31:26 compute-1 systemd[1]: Started libpod-conmon-675d863e194d8e247be180c62e4ce975bb4275248667e8da5a24ce0303ce3fe3.scope.
Dec 06 06:31:26 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 62 pg[10.0( v 57'96 (0'0,57'96] local-lis/les=53/54 n=8 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=8.585104942s) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 57'95 mlcod 57'95 active pruub 192.594329834s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:26 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 62 pg[10.0( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=8.585104942s) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 57'95 mlcod 0'0 unknown pruub 192.594329834s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:31:26 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Dec 06 06:31:27 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Dec 06 06:31:27 compute-1 podman[83158]: 2025-12-06 06:31:27.650189515 +0000 UTC m=+3.111400113 container init 675d863e194d8e247be180c62e4ce975bb4275248667e8da5a24ce0303ce3fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 06:31:27 compute-1 podman[83158]: 2025-12-06 06:31:27.660236469 +0000 UTC m=+3.121447047 container start 675d863e194d8e247be180c62e4ce975bb4275248667e8da5a24ce0303ce3fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:31:27 compute-1 adoring_babbage[83174]: 167 167
Dec 06 06:31:27 compute-1 systemd[1]: libpod-675d863e194d8e247be180c62e4ce975bb4275248667e8da5a24ce0303ce3fe3.scope: Deactivated successfully.
Dec 06 06:31:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.1b( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.7( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.12( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.3( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.5( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.4( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.6( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.8( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.d( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.b( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.11( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.1c( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.2( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.1f( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.1a( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.1d( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.19( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.9( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.18( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.e( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.f( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.c( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.a( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.1( v 57'96 (0'0,57'96] local-lis/les=53/54 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.13( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.15( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.10( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.14( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.17( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.16( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 63 pg[10.1e( v 57'96 lc 0'0 (0'0,57'96] local-lis/les=53/54 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:29 compute-1 podman[83158]: 2025-12-06 06:31:29.938701645 +0000 UTC m=+5.399912243 container attach 675d863e194d8e247be180c62e4ce975bb4275248667e8da5a24ce0303ce3fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:31:29 compute-1 podman[83158]: 2025-12-06 06:31:29.94035401 +0000 UTC m=+5.401564598 container died 675d863e194d8e247be180c62e4ce975bb4275248667e8da5a24ce0303ce3fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:31:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:32 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 06 06:31:32 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 06 06:31:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-fef9c8147ccbed291492f073090cbe07ef09d62c3035633ab97d8bc7917b706f-merged.mount: Deactivated successfully.
Dec 06 06:31:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Dec 06 06:31:33 compute-1 ceph-mon[81689]: 7.a deep-scrub starts
Dec 06 06:31:33 compute-1 ceph-mon[81689]: 7.a deep-scrub ok
Dec 06 06:31:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:33 compute-1 ceph-mon[81689]: osdmap e62: 3 total, 3 up, 3 in
Dec 06 06:31:33 compute-1 ceph-mon[81689]: pgmap v212: 274 pgs: 1 peering, 62 unknown, 211 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s wr, 6 op/s
Dec 06 06:31:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:33 compute-1 ceph-mon[81689]: 6.6 scrub starts
Dec 06 06:31:33 compute-1 ceph-mon[81689]: 6.6 scrub ok
Dec 06 06:31:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:33 compute-1 ceph-mon[81689]: osdmap e63: 3 total, 3 up, 3 in
Dec 06 06:31:33 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 06 06:31:34 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 06 06:31:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:35 compute-1 sshd-session[83190]: Accepted publickey for zuul from 192.168.122.30 port 53074 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:31:35 compute-1 systemd-logind[788]: New session 33 of user zuul.
Dec 06 06:31:35 compute-1 systemd[1]: Started Session 33 of User zuul.
Dec 06 06:31:35 compute-1 sshd-session[83190]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:31:36 compute-1 python3.9[83343]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:31:38 compute-1 sudo[83555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkzhxaodgrjhtaurxqwyqsgevuauhynq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002697.8483284-62-42458996994242/AnsiballZ_command.py'
Dec 06 06:31:38 compute-1 sudo[83555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:31:38 compute-1 python3.9[83557]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:31:38 compute-1 podman[83158]: 2025-12-06 06:31:38.76309689 +0000 UTC m=+14.224307468 container remove 675d863e194d8e247be180c62e4ce975bb4275248667e8da5a24ce0303ce3fe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_babbage, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:31:38 compute-1 systemd[1]: libpod-conmon-675d863e194d8e247be180c62e4ce975bb4275248667e8da5a24ce0303ce3fe3.scope: Deactivated successfully.
Dec 06 06:31:38 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 06 06:31:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 8.108765602s
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 8.108765602s
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.440337181s, txc = 0x55b552ca7200
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.440299988s, txc = 0x55b552ca7500
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.440257072s, txc = 0x55b552ca6600
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.440155029s, txc = 0x55b552ca7800
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.1b( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.440061569s, txc = 0x55b5549e6000
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.440093994s, txc = 0x55b55480c600
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.440052986s, txc = 0x55b552ca7b00
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.440041542s, txc = 0x55b553cbec00
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439811707s, txc = 0x55b5549e8000
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439757347s, txc = 0x55b55480c300
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439725876s, txc = 0x55b5549e6300
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439704895s, txc = 0x55b553ccd800
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439626694s, txc = 0x55b5549e8300
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439620972s, txc = 0x55b554848900
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439710617s, txc = 0x55b5545fd800
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439510345s, txc = 0x55b5549e6600
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439471245s, txc = 0x55b554827500
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439308167s, txc = 0x55b5549e8600
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439258575s, txc = 0x55b554825200
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439270973s, txc = 0x55b5549e6900
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.0( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 57'95 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439286232s, txc = 0x55b554802900
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439222336s, txc = 0x55b5549e8900
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439149857s, txc = 0x55b553cbe300
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.439002037s, txc = 0x55b55480c000
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.438950539s, txc = 0x55b554b23200
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.438962936s, txc = 0x55b5549e6c00
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.438947678s, txc = 0x55b5549e8c00
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.438837051s, txc = 0x55b5549e8f00
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.438841820s, txc = 0x55b5547e3b00
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.438699722s, txc = 0x55b5549e6f00
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.438553810s, txc = 0x55b5547e3800
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.438546181s, txc = 0x55b5545fdb00
Dec 06 06:31:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.437354088s, txc = 0x55b554827800
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.7( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.5( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.2( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.3( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.6( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.4( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.8( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.d( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.1e( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.b( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.11( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.12( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.1d( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.1c( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.1a( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.18( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.19( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.9( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.e( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.c( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.f( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.a( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.1( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.13( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.1f( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.17( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.10( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.16( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.15( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 64 pg[10.14( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=53/53 les/c/f=54/54/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:46 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 06 06:31:47 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 06 06:31:47 compute-1 ceph-mon[81689]: 7.c deep-scrub starts
Dec 06 06:31:47 compute-1 ceph-mon[81689]: 7.c deep-scrub ok
Dec 06 06:31:47 compute-1 ceph-mon[81689]: 5.12 scrub starts
Dec 06 06:31:47 compute-1 ceph-mon[81689]: 5.12 scrub ok
Dec 06 06:31:47 compute-1 ceph-mon[81689]: pgmap v214: 305 pgs: 2 peering, 1 active+clean+scrubbing+deep, 93 unknown, 209 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:47 compute-1 ceph-mon[81689]: 2.10 scrub starts
Dec 06 06:31:47 compute-1 ceph-mon[81689]: 2.10 scrub ok
Dec 06 06:31:47 compute-1 ceph-mon[81689]: pgmap v215: 305 pgs: 2 peering, 1 active+clean+scrubbing+deep, 93 unknown, 209 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:47 compute-1 ceph-mon[81689]: 7.14 scrub starts
Dec 06 06:31:47 compute-1 ceph-mon[81689]: pgmap v216: 305 pgs: 2 peering, 1 active+clean+scrubbing+deep, 62 unknown, 240 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:47 compute-1 ceph-mon[81689]: 7.d scrub starts
Dec 06 06:31:47 compute-1 ceph-mon[81689]: 7.d scrub ok
Dec 06 06:31:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:47 compute-1 ceph-mon[81689]: osdmap e64: 3 total, 3 up, 3 in
Dec 06 06:31:47 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.938028336s, txc = 0x55b5549e9500
Dec 06 06:31:47 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 06 06:31:48 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.138540268s, txc = 0x55b552ca6c00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.138397217s, txc = 0x55b5549e7200
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.138360977s, txc = 0x55b554b29b00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.138159752s, txc = 0x55b554643800
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.138154030s, txc = 0x55b553cbe900
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.138115406s, txc = 0x55b5549e9800
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.138099194s, txc = 0x55b554643b00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137994766s, txc = 0x55b554aff800
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137989044s, txc = 0x55b5546ae000
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137961388s, txc = 0x55b5549e9b00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137943268s, txc = 0x55b554849500
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137934685s, txc = 0x55b5546ae300
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137898922s, txc = 0x55b5546b0000
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137896061s, txc = 0x55b553974000
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137842655s, txc = 0x55b553974600
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137809753s, txc = 0x55b5546ae600
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137813091s, txc = 0x55b554849200
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137771130s, txc = 0x55b5546ae900
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137814999s, txc = 0x55b5547e3500
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137815952s, txc = 0x55b554b29500
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137787819s, txc = 0x55b5546aec00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137712479s, txc = 0x55b5546aef00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137709618s, txc = 0x55b5549e7500
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137654305s, txc = 0x55b5547e3200
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137476444s, txc = 0x55b5546af200
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137439251s, txc = 0x55b5546af500
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137431145s, txc = 0x55b5549e7800
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137393951s, txc = 0x55b5549e7b00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137385368s, txc = 0x55b554b28600
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137379169s, txc = 0x55b554802600
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137393951s, txc = 0x55b5546af800
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137362003s, txc = 0x55b554a70000
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137306213s, txc = 0x55b5546afb00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137299061s, txc = 0x55b554a70300
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137308598s, txc = 0x55b554825800
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137298107s, txc = 0x55b5546b0300
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137268066s, txc = 0x55b554a70600
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137239456s, txc = 0x55b554a70900
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137231350s, txc = 0x55b554827b00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137217045s, txc = 0x55b554a70c00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137237072s, txc = 0x55b5546b0600
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137168407s, txc = 0x55b554a70f00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137143135s, txc = 0x55b554a71200
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137151718s, txc = 0x55b554a72000
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137145519s, txc = 0x55b5547e2f00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137128353s, txc = 0x55b554a71500
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137147427s, txc = 0x55b554848f00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137094975s, txc = 0x55b554619b00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137070179s, txc = 0x55b554a71800
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137076855s, txc = 0x55b553cbe000
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137059212s, txc = 0x55b554802300
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137044430s, txc = 0x55b554a71b00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137018681s, txc = 0x55b554b2f500
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137017727s, txc = 0x55b554a7c000
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.137005806s, txc = 0x55b554a72300
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136978149s, txc = 0x55b554619800
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136966228s, txc = 0x55b553974300
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136981487s, txc = 0x55b554a7c300
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136962414s, txc = 0x55b554a72600
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136891365s, txc = 0x55b5547e2c00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136835098s, txc = 0x55b5547f9b00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136823177s, txc = 0x55b554a7c600
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136808395s, txc = 0x55b554a72900
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136781216s, txc = 0x55b554802000
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136728287s, txc = 0x55b5549e6f00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136620998s, txc = 0x55b5546b0900
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136569023s, txc = 0x55b5547e3b00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136485100s, txc = 0x55b5549e8f00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.136405468s, txc = 0x55b554a72c00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.135546207s, txc = 0x55b554a72f00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.135321140s, txc = 0x55b554a73200
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.135256767s, txc = 0x55b554a73500
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.134649277s, txc = 0x55b554a73800
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.134492397s, txc = 0x55b554a73b00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.134412766s, txc = 0x55b554a8a000
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.134067535s, txc = 0x55b5549e8c00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.133919716s, txc = 0x55b5546b0c00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.133848667s, txc = 0x55b5546b0f00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.133532524s, txc = 0x55b5546b1200
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.133458138s, txc = 0x55b5546b1500
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.133402824s, txc = 0x55b5546b1800
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.133126259s, txc = 0x55b5546b1b00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.133025169s, txc = 0x55b554a90000
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.132923603s, txc = 0x55b554a90300
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.132711887s, txc = 0x55b554a90600
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.132628441s, txc = 0x55b554a90900
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.132644653s, txc = 0x55b554a90c00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.132375717s, txc = 0x55b554a90f00
Dec 06 06:31:51 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.128331661s, txc = 0x55b554a7c900
Dec 06 06:31:51 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 06 06:31:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:53 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.622981071s, txc = 0x55b554a7cf00
Dec 06 06:31:53 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.306416035s, txc = 0x55b554a8a600
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 7.14 scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 5.13 scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 5.13 scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 4.5 scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: pgmap v218: 305 pgs: 33 peering, 31 unknown, 241 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 4.5 scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: pgmap v219: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 2.13 scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 2.13 scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: pgmap v220: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 6.7 scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 6.9 scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 6.9 scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 4.14 deep-scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 4.14 deep-scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 6.b scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 6.b scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: pgmap v221: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 2.15 scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 2.15 scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 6.c scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 6.c scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: pgmap v222: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 2.b scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 2.b scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 6.f scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 6.f scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: pgmap v223: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 4.9 scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 4.9 scrub ok
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 2.19 scrub starts
Dec 06 06:31:54 compute-1 ceph-mon[81689]: 2.19 scrub ok
Dec 06 06:31:54 compute-1 systemd[1]: Reloading.
Dec 06 06:31:54 compute-1 systemd-rc-local-generator[83611]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:31:54 compute-1 systemd-sysv-generator[83614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:31:54 compute-1 systemd[1]: Reloading.
Dec 06 06:31:54 compute-1 systemd-sysv-generator[83654]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:31:54 compute-1 systemd-rc-local-generator[83648]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:31:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[8.14( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.14( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[8.17( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[8.12( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.12( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[8.8( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.f( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[8.1b( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.1b( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[8.18( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.1c( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.1d( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.1e( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.7( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[8.4( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.5( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.4( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.1( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[8.10( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[8.19( empty local-lis/les=0/0 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[11.1a( empty local-lis/les=0/0 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.1b( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.801244736s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.583389282s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.1b( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.801207542s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.583389282s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.12( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.801875114s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.584259033s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.12( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.801847458s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.584259033s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.3( v 64'99 (0'0,64'99] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.801097870s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 64'98 mlcod 64'98 active pruub 227.583938599s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.3( v 64'99 (0'0,64'99] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.801046371s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 64'98 mlcod 0'0 unknown NOTIFY pruub 227.583938599s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.5( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800869942s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.583801270s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.5( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800834656s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.583801270s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.4( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800970078s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.584030151s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.8( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800874710s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.584030151s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.8( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800854683s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.584030151s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.4( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800922394s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.584030151s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.1e( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800997734s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.584197998s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.1e( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800977707s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.584197998s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.2( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800512314s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.583862305s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.2( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800484657s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.583862305s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.11( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800785065s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.584243774s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.11( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800756454s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.584243774s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.19( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800792694s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.584487915s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.19( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800772667s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.584487915s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.18( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800700188s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.584487915s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.18( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800652504s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.584487915s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.f( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800658226s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.584579468s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.f( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800600052s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.584579468s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.1( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800491333s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.584701538s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.1( v 57'96 (0'0,57'96] local-lis/les=62/64 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800475121s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.584701538s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.13( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800395966s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.584808350s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.13( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800378799s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.584808350s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.10( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800336838s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active pruub 227.584884644s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.10( v 57'96 (0'0,57'96] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800321579s) [2] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.584884644s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.15( v 64'99 (0'0,64'99] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800361633s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 64'98 mlcod 64'98 active pruub 227.585052490s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.15( v 64'99 (0'0,64'99] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800333023s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 64'98 mlcod 0'0 unknown NOTIFY pruub 227.585052490s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.14( v 64'99 (0'0,64'99] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.800011635s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 64'98 mlcod 64'98 active pruub 227.584854126s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 65 pg[10.14( v 64'99 (0'0,64'99] local-lis/les=62/64 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65 pruub=14.799990654s) [0] r=-1 lpr=65 pi=[62,65)/1 crt=57'96 lcod 64'98 mlcod 0'0 unknown NOTIFY pruub 227.584854126s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.vsxbzt for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:31:55 compute-1 podman[83708]: 2025-12-06 06:31:55.515432349 +0000 UTC m=+0.021161898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 7.1f deep-scrub starts
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 7.1f deep-scrub ok
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 6.7 scrub ok
Dec 06 06:31:55 compute-1 ceph-mon[81689]: pgmap v224: 305 pgs: 27 active, 2 active+clean+scrubbing, 2 activating, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 2.16 scrub starts
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 2.16 scrub ok
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 2.f scrub starts
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 2.f scrub ok
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 2.17 scrub starts
Dec 06 06:31:55 compute-1 ceph-mon[81689]: pgmap v225: 305 pgs: 27 active, 2 active+clean+scrubbing, 2 activating, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:55 compute-1 ceph-mon[81689]: pgmap v226: 305 pgs: 27 active, 1 active+clean+scrubbing, 2 activating, 275 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 7.5 scrub starts
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 7.5 scrub ok
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 2.17 scrub ok
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 7.13 deep-scrub starts
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 7.13 deep-scrub ok
Dec 06 06:31:55 compute-1 ceph-mon[81689]: pgmap v227: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 7.10 deep-scrub starts
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 7.10 deep-scrub ok
Dec 06 06:31:55 compute-1 ceph-mon[81689]: pgmap v228: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 2.e scrub starts
Dec 06 06:31:55 compute-1 ceph-mon[81689]: 2.e scrub ok
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 06 06:31:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:55 compute-1 ceph-mon[81689]: osdmap e65: 3 total, 3 up, 3 in
Dec 06 06:31:55 compute-1 podman[83708]: 2025-12-06 06:31:55.693171303 +0000 UTC m=+0.198900822 container create 14a5434bcc3b0a5a3ec74433e5fd04113d0c6446cd7fb369fc52060578c96dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-1-vsxbzt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:31:55 compute-1 systemd[72554]: Created slice User Background Tasks Slice.
Dec 06 06:31:55 compute-1 systemd[72554]: Starting Cleanup of User's Temporary Files and Directories...
Dec 06 06:31:55 compute-1 systemd[72554]: Finished Cleanup of User's Temporary Files and Directories.
Dec 06 06:31:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62bada6f2b4829952b9818da39df5810521ec04aa7536139a01bcb4be2723c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62bada6f2b4829952b9818da39df5810521ec04aa7536139a01bcb4be2723c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62bada6f2b4829952b9818da39df5810521ec04aa7536139a01bcb4be2723c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62bada6f2b4829952b9818da39df5810521ec04aa7536139a01bcb4be2723c7/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.vsxbzt supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:56 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 06 06:31:58 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 06 06:31:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Dec 06 06:31:59 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 06 06:32:00 compute-1 podman[83708]: 2025-12-06 06:32:00.356475384 +0000 UTC m=+4.862204923 container init 14a5434bcc3b0a5a3ec74433e5fd04113d0c6446cd7fb369fc52060578c96dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-1-vsxbzt, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 06:32:00 compute-1 podman[83708]: 2025-12-06 06:32:00.377888948 +0000 UTC m=+4.883618467 container start 14a5434bcc3b0a5a3ec74433e5fd04113d0c6446cd7fb369fc52060578c96dfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-1-vsxbzt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:32:00 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 06 06:32:00 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 06 06:32:00 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 06 06:32:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:00 compute-1 ceph-mds[83729]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:32:00 compute-1 ceph-mds[83729]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Dec 06 06:32:00 compute-1 ceph-mds[83729]: main not setting numa affinity
Dec 06 06:32:00 compute-1 ceph-mds[83729]: pidfile_write: ignore empty --pid-file
Dec 06 06:32:00 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-1-vsxbzt[83725]: starting mds.cephfs.compute-1.vsxbzt at 
Dec 06 06:32:00 compute-1 bash[83708]: 14a5434bcc3b0a5a3ec74433e5fd04113d0c6446cd7fb369fc52060578c96dfe
Dec 06 06:32:00 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.vsxbzt for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.14( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[8.12( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[8.14( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.12( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[8.17( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=50'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.f( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[8.1b( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.1b( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[8.8( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.1c( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.1d( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[8.4( v 50'4 (0'0,50'4] local-lis/les=65/66 n=1 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.7( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.5( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.4( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[8.18( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.1( v 58'3 (0'0,58'3] local-lis/les=65/66 n=1 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.1e( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[8.10( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=50'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[8.19( v 50'4 (0'0,50'4] local-lis/les=65/66 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65) [1] r=0 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 66 pg[11.1a( v 58'3 (0'0,58'3] local-lis/les=65/66 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65) [1] r=0 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:01 compute-1 ceph-mon[81689]: 5.4 scrub starts
Dec 06 06:32:01 compute-1 ceph-mon[81689]: 5.4 scrub ok
Dec 06 06:32:01 compute-1 ceph-mon[81689]: 7.b scrub starts
Dec 06 06:32:01 compute-1 ceph-mon[81689]: 7.b scrub ok
Dec 06 06:32:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:32:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:32:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 06 06:32:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:32:01 compute-1 ceph-mon[81689]: osdmap e66: 3 total, 3 up, 3 in
Dec 06 06:32:01 compute-1 sudo[83095]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:01 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Updating MDS map to version 7 from mon.2
Dec 06 06:32:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e8 new map
Dec 06 06:32:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:31:18.697695+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.qqwnku{-1:14385} state up:standby seq 1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:32:03 compute-1 ceph-mon[81689]: pgmap v231: 305 pgs: 23 peering, 1 active+clean+scrubbing, 281 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 7.8 scrub starts
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 7.8 scrub ok
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 7.12 scrub starts
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 3.15 scrub starts
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 3.15 scrub ok
Dec 06 06:32:03 compute-1 ceph-mon[81689]: pgmap v232: 305 pgs: 44 peering, 261 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 5.7 scrub starts
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 2.1a scrub starts
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 7.9 scrub starts
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 7.9 scrub ok
Dec 06 06:32:03 compute-1 ceph-mon[81689]: pgmap v233: 305 pgs: 21 peering, 284 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 11 B/s, 0 objects/s recovering
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 2.1a scrub ok
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 7.12 scrub ok
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 5.7 scrub ok
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 4.1f scrub starts
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 4.1f scrub ok
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 7.f scrub starts
Dec 06 06:32:03 compute-1 ceph-mon[81689]: 7.f scrub ok
Dec 06 06:32:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:03 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Updating MDS map to version 8 from mon.2
Dec 06 06:32:03 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Monitors have assigned me to become a standby.
Dec 06 06:32:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:06 compute-1 ceph-mon[81689]: pgmap v234: 305 pgs: 1 active+recovering+degraded, 1 active+recovery_wait+degraded, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1/216 objects degraded (0.463%); 1/216 objects misplaced (0.463%); 11 B/s, 0 objects/s recovering
Dec 06 06:32:06 compute-1 ceph-mon[81689]: 7.11 deep-scrub starts
Dec 06 06:32:06 compute-1 ceph-mon[81689]: 7.11 deep-scrub ok
Dec 06 06:32:06 compute-1 ceph-mon[81689]: 7.e scrub starts
Dec 06 06:32:06 compute-1 ceph-mon[81689]: 7.e scrub ok
Dec 06 06:32:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:06 compute-1 ceph-mon[81689]: mds.? [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] up:boot
Dec 06 06:32:06 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active} 2 up:standby
Dec 06 06:32:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vsxbzt"}]: dispatch
Dec 06 06:32:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:06 compute-1 ceph-mon[81689]: pgmap v235: 305 pgs: 1 active+recovering+degraded, 1 active+recovery_wait+degraded, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1/216 objects degraded (0.463%); 1/216 objects misplaced (0.463%); 9 B/s, 0 objects/s recovering
Dec 06 06:32:06 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 06 06:32:06 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 06 06:32:07 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 06 06:32:07 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 06 06:32:07 compute-1 ceph-mon[81689]: 7.16 scrub starts
Dec 06 06:32:07 compute-1 ceph-mon[81689]: 7.16 scrub ok
Dec 06 06:32:07 compute-1 ceph-mon[81689]: Health check failed: Degraded data redundancy: 1/216 objects degraded (0.463%), 2 pgs degraded (PG_DEGRADED)
Dec 06 06:32:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:07 compute-1 ceph-mon[81689]: 7.6 scrub starts
Dec 06 06:32:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:07 compute-1 ceph-mon[81689]: 7.6 scrub ok
Dec 06 06:32:07 compute-1 ceph-mon[81689]: pgmap v236: 305 pgs: 1 active+recovering+degraded, 1 active+recovery_wait+degraded, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1/216 objects degraded (0.463%); 1/216 objects misplaced (0.463%); 8 B/s, 0 objects/s recovering
Dec 06 06:32:07 compute-1 ceph-mon[81689]: 7.15 scrub starts
Dec 06 06:32:07 compute-1 ceph-mon[81689]: 7.15 scrub ok
Dec 06 06:32:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e9 new map
Dec 06 06:32:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:32:07.852539+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        67
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14385}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-0.qqwnku{0:14385} state up:replay seq 13 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:32:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Dec 06 06:32:08 compute-1 ceph-mon[81689]: Deploying daemon haproxy.rgw.default.compute-0.ybrwqj on compute-0
Dec 06 06:32:08 compute-1 ceph-mon[81689]: 2.12 scrub starts
Dec 06 06:32:08 compute-1 ceph-mon[81689]: 2.12 scrub ok
Dec 06 06:32:08 compute-1 ceph-mon[81689]: 3.f scrub starts
Dec 06 06:32:08 compute-1 ceph-mon[81689]: 3.f scrub ok
Dec 06 06:32:08 compute-1 ceph-mon[81689]: Dropping low affinity active daemon mds.cephfs.compute-2.tjfgow in favor of higher affinity standby.
Dec 06 06:32:08 compute-1 ceph-mon[81689]: Replacing daemon mds.cephfs.compute-2.tjfgow as rank 0 with standby daemon mds.cephfs.compute-0.qqwnku
Dec 06 06:32:08 compute-1 ceph-mon[81689]: Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Dec 06 06:32:08 compute-1 ceph-mon[81689]: osdmap e67: 3 total, 3 up, 3 in
Dec 06 06:32:08 compute-1 ceph-mon[81689]: mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:standby
Dec 06 06:32:08 compute-1 ceph-mon[81689]: fsmap cephfs:1/1 {0=cephfs.compute-0.qqwnku=up:replay} 1 up:standby
Dec 06 06:32:08 compute-1 ceph-mon[81689]: pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 138 B/s, 0 objects/s recovering
Dec 06 06:32:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 06 06:32:08 compute-1 sudo[83555]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:08 compute-1 sshd-session[83193]: Connection closed by 192.168.122.30 port 53074
Dec 06 06:32:08 compute-1 sshd-session[83190]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:32:08 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Dec 06 06:32:08 compute-1 systemd[1]: session-33.scope: Consumed 8.635s CPU time.
Dec 06 06:32:08 compute-1 systemd-logind[788]: Session 33 logged out. Waiting for processes to exit.
Dec 06 06:32:08 compute-1 systemd-logind[788]: Removed session 33.
Dec 06 06:32:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Dec 06 06:32:09 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 06 06:32:09 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 06 06:32:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e10 new map
Dec 06 06:32:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e10 print_map
                                           e10
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        10
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:32:09.094595+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        67
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14385}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-0.qqwnku{0:14385} state up:reconnect seq 14 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-2.tjfgow{-1:24166} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:32:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:10 compute-1 ceph-mon[81689]: 3.11 deep-scrub starts
Dec 06 06:32:10 compute-1 ceph-mon[81689]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/216 objects degraded (0.463%), 2 pgs degraded)
Dec 06 06:32:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 06 06:32:10 compute-1 ceph-mon[81689]: osdmap e68: 3 total, 3 up, 3 in
Dec 06 06:32:10 compute-1 ceph-mon[81689]: mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:reconnect
Dec 06 06:32:10 compute-1 ceph-mon[81689]: mds.? [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] up:boot
Dec 06 06:32:10 compute-1 ceph-mon[81689]: fsmap cephfs:1/1 {0=cephfs.compute-0.qqwnku=up:reconnect} 2 up:standby
Dec 06 06:32:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.tjfgow"}]: dispatch
Dec 06 06:32:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.003000087s ======
Dec 06 06:32:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:11.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000087s
Dec 06 06:32:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e11 new map
Dec 06 06:32:12 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Updating MDS map to version 11 from mon.2
Dec 06 06:32:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e11 print_map
                                           e11
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        11
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:32:10.364755+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        67
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14385}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-0.qqwnku{0:14385} state up:rejoin seq 15 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-2.tjfgow{-1:24166} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:32:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Dec 06 06:32:12 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.9 deep-scrub starts
Dec 06 06:32:12 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.9 deep-scrub ok
Dec 06 06:32:12 compute-1 ceph-mon[81689]: 3.11 deep-scrub ok
Dec 06 06:32:12 compute-1 ceph-mon[81689]: 7.17 scrub starts
Dec 06 06:32:12 compute-1 ceph-mon[81689]: 7.17 scrub ok
Dec 06 06:32:12 compute-1 ceph-mon[81689]: 7.4 scrub starts
Dec 06 06:32:12 compute-1 ceph-mon[81689]: 7.4 scrub ok
Dec 06 06:32:12 compute-1 ceph-mon[81689]: pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 161 B/s, 0 objects/s recovering
Dec 06 06:32:12 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 06 06:32:13 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 06 06:32:13 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 06 06:32:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:13.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Dec 06 06:32:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e70 crush map has features 3314933000854323200, adjusting msgr requires
Dec 06 06:32:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e70 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 06:32:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e70 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 06:32:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e70 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 06:32:14 compute-1 ceph-osd[79002]: osd.1 70 crush map has features 432629239337189376, adjusting msgr requires for clients
Dec 06 06:32:14 compute-1 ceph-osd[79002]: osd.1 70 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Dec 06 06:32:14 compute-1 ceph-osd[79002]: osd.1 70 crush map has features 3314933000854323200, adjusting msgr requires for osds
Dec 06 06:32:14 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 70 pg[9.14( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [1] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 70 pg[9.2( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [1] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 70 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [1] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 70 pg[9.8( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [1] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 70 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [1] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [1] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e12 new map
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 06 06:32:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).mds e12 print_map
                                           e12
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        12
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:32:13.099122+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        67
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14385}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-0.qqwnku{0:14385} state up:active seq 16 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-2.tjfgow{-1:24166} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:32:15 compute-1 ceph-mon[81689]: osdmap e69: 3 total, 3 up, 3 in
Dec 06 06:32:15 compute-1 ceph-mon[81689]: mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:rejoin
Dec 06 06:32:15 compute-1 ceph-mon[81689]: mds.? [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] up:standby
Dec 06 06:32:15 compute-1 ceph-mon[81689]: fsmap cephfs:1/1 {0=cephfs.compute-0.qqwnku=up:rejoin} 2 up:standby
Dec 06 06:32:15 compute-1 ceph-mon[81689]: daemon mds.cephfs.compute-0.qqwnku is now active in filesystem cephfs as rank 0
Dec 06 06:32:15 compute-1 ceph-mon[81689]: pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 2 op/s; 215 B/s, 0 objects/s recovering
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:15 compute-1 ceph-mon[81689]: 5.9 deep-scrub starts
Dec 06 06:32:15 compute-1 ceph-mon[81689]: 5.9 deep-scrub ok
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.2", "id": [0, 1]}]: dispatch
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.8", "id": [0, 1]}]: dispatch
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [0, 1]}]: dispatch
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]: dispatch
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 1]}]: dispatch
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 1]}]: dispatch
Dec 06 06:32:15 compute-1 ceph-mon[81689]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Dec 06 06:32:15 compute-1 ceph-mon[81689]: Cluster is now healthy
Dec 06 06:32:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.14( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.2( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.14( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.2( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:15.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:15 compute-1 ceph-mon[81689]: Deploying daemon haproxy.rgw.default.compute-2.nyemkw on compute-2
Dec 06 06:32:15 compute-1 ceph-mon[81689]: 5.2 scrub starts
Dec 06 06:32:15 compute-1 ceph-mon[81689]: 5.2 scrub ok
Dec 06 06:32:15 compute-1 ceph-mon[81689]: 4.15 deep-scrub starts
Dec 06 06:32:15 compute-1 ceph-mon[81689]: 4.15 deep-scrub ok
Dec 06 06:32:15 compute-1 ceph-mon[81689]: pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 2 op/s
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.2", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.8", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-1 ceph-mon[81689]: osdmap e70: 3 total, 3 up, 3 in
Dec 06 06:32:15 compute-1 ceph-mon[81689]: 7.3 scrub starts
Dec 06 06:32:15 compute-1 ceph-mon[81689]: 7.3 scrub ok
Dec 06 06:32:15 compute-1 ceph-mon[81689]: mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:active
Dec 06 06:32:15 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:32:15 compute-1 ceph-mon[81689]: 2.6 scrub starts
Dec 06 06:32:15 compute-1 ceph-mon[81689]: 2.6 scrub ok
Dec 06 06:32:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 06 06:32:15 compute-1 ceph-mon[81689]: osdmap e71: 3 total, 3 up, 3 in
Dec 06 06:32:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Dec 06 06:32:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:16.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:17 compute-1 ceph-mon[81689]: pgmap v246: 305 pgs: 8 active+remapped, 6 remapped+peering, 291 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 8 op/s; 201 B/s, 6 objects/s recovering
Dec 06 06:32:17 compute-1 ceph-mon[81689]: 7.2 deep-scrub starts
Dec 06 06:32:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:17 compute-1 ceph-mon[81689]: 7.2 deep-scrub ok
Dec 06 06:32:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:17 compute-1 ceph-mon[81689]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 06:32:17 compute-1 ceph-mon[81689]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 06:32:17 compute-1 ceph-mon[81689]: Deploying daemon keepalived.rgw.default.compute-2.cossgt on compute-2
Dec 06 06:32:17 compute-1 ceph-mon[81689]: osdmap e72: 3 total, 3 up, 3 in
Dec 06 06:32:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Dec 06 06:32:17 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 73 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=73) [1] r=0 lpr=73 pi=[62,73)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:17 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 73 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=73) [1] r=0 lpr=73 pi=[62,73)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:17.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Dec 06 06:32:18 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 74 pg[9.2( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:18 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 74 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:18 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 74 pg[9.2( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:18 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 74 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:18 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 74 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:18 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 74 pg[9.e( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:18 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 74 pg[9.e( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:18 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 74 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:18 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 74 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:18 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 74 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:18 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 74 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=73/74 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=73) [1] r=0 lpr=73 pi=[62,73)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:18 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 06 06:32:18 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 06 06:32:18 compute-1 ceph-mon[81689]: osdmap e73: 3 total, 3 up, 3 in
Dec 06 06:32:18 compute-1 ceph-mon[81689]: osdmap e74: 3 total, 3 up, 3 in
Dec 06 06:32:18 compute-1 ceph-mon[81689]: pgmap v250: 305 pgs: 8 active+remapped, 6 remapped+peering, 291 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 16 op/s; 302 B/s, 9 objects/s recovering
Dec 06 06:32:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:32:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:18.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:32:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Dec 06 06:32:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 75 pg[9.e( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 75 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 75 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 75 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 75 pg[9.2( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74) [1] r=0 lpr=74 pi=[62,74)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:32:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:19.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:32:19 compute-1 ceph-mon[81689]: 6.8 scrub starts
Dec 06 06:32:19 compute-1 ceph-mon[81689]: 6.8 scrub ok
Dec 06 06:32:19 compute-1 ceph-mon[81689]: osdmap e75: 3 total, 3 up, 3 in
Dec 06 06:32:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000058s ======
Dec 06 06:32:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:20.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Dec 06 06:32:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:21 compute-1 ceph-mon[81689]: 2.18 scrub starts
Dec 06 06:32:21 compute-1 ceph-mon[81689]: 2.18 scrub ok
Dec 06 06:32:21 compute-1 ceph-mon[81689]: pgmap v252: 305 pgs: 6 remapped+peering, 299 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 6 op/s
Dec 06 06:32:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:21.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:22.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:23 compute-1 ceph-mon[81689]: pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 96 B/s, 5 objects/s recovering
Dec 06 06:32:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 06 06:32:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Dec 06 06:32:23 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 76 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=5 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=76 pruub=12.210833549s) [2] r=-1 lpr=76 pi=[74,76)/1 crt=58'1159 mlcod 0'0 active pruub 252.897613525s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:23 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 76 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=5 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=76 pruub=12.210776329s) [2] r=-1 lpr=76 pi=[74,76)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 252.897613525s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:32:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:23.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:32:23 compute-1 sshd-session[83782]: Accepted publickey for zuul from 192.168.122.30 port 55614 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:32:23 compute-1 systemd-logind[788]: New session 34 of user zuul.
Dec 06 06:32:23 compute-1 systemd[1]: Started Session 34 of User zuul.
Dec 06 06:32:23 compute-1 sshd-session[83782]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:32:24 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 06 06:32:24 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 06 06:32:24 compute-1 ceph-mon[81689]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 06:32:24 compute-1 ceph-mon[81689]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 06:32:24 compute-1 ceph-mon[81689]: Deploying daemon keepalived.rgw.default.compute-0.fknpoc on compute-0
Dec 06 06:32:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 06 06:32:24 compute-1 ceph-mon[81689]: osdmap e76: 3 total, 3 up, 3 in
Dec 06 06:32:24 compute-1 ceph-mon[81689]: 2.1d deep-scrub starts
Dec 06 06:32:24 compute-1 ceph-mon[81689]: 2.1d deep-scrub ok
Dec 06 06:32:24 compute-1 python3.9[83935]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 06 06:32:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:24.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Dec 06 06:32:25 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 77 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=5 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=77) [2]/[1] r=0 lpr=77 pi=[74,77)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:25 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 77 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=5 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=77) [2]/[1] r=0 lpr=77 pi=[74,77)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:25 compute-1 ceph-mon[81689]: 4.a scrub starts
Dec 06 06:32:25 compute-1 ceph-mon[81689]: 4.a scrub ok
Dec 06 06:32:25 compute-1 ceph-mon[81689]: pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 90 B/s, 5 objects/s recovering
Dec 06 06:32:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 06 06:32:25 compute-1 ceph-mon[81689]: osdmap e77: 3 total, 3 up, 3 in
Dec 06 06:32:25 compute-1 ceph-mon[81689]: 3.e scrub starts
Dec 06 06:32:25 compute-1 ceph-mon[81689]: 3.e scrub ok
Dec 06 06:32:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:25.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:25 compute-1 python3.9[84109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:32:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:32:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:26.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:32:26 compute-1 sudo[84263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uammtzetriraurixpcduaikdlogokbat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002746.514257-99-80698534584968/AnsiballZ_command.py'
Dec 06 06:32:26 compute-1 sudo[84263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:27 compute-1 python3.9[84265]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:32:27 compute-1 sudo[84263]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:27 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 06 06:32:27 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 06 06:32:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:27.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:27 compute-1 sudo[84416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gibiudnhjdvmeehiivfqyntwoohvjoel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002747.528917-135-262744393769186/AnsiballZ_stat.py'
Dec 06 06:32:27 compute-1 sudo[84416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:28 compute-1 python3.9[84418]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:32:28 compute-1 sudo[84416]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Dec 06 06:32:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 78 pg[9.16( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [1] r=0 lpr=78 pi=[62,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 78 pg[9.1e( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [1] r=0 lpr=78 pi=[62,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 78 pg[9.6( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [1] r=0 lpr=78 pi=[62,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:28 compute-1 ceph-mon[81689]: pgmap v257: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 107 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 4 objects/s recovering
Dec 06 06:32:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 78 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=77/78 n=5 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[74,77)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:28.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:28 compute-1 sudo[84520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:28 compute-1 sudo[84520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:28 compute-1 sudo[84520]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:28 compute-1 sudo[84570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:32:28 compute-1 sudo[84570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:28 compute-1 sudo[84618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyqlimgytxzcbbqvdrtolzoafawdpojk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002748.5593822-168-20171951915270/AnsiballZ_file.py'
Dec 06 06:32:28 compute-1 sudo[84618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:28 compute-1 sudo[84570]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Dec 06 06:32:29 compute-1 python3.9[84622]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:32:29 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 79 pg[9.6( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=-1 lpr=79 pi=[62,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:29 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 79 pg[9.6( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=-1 lpr=79 pi=[62,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:29 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 79 pg[9.1e( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=-1 lpr=79 pi=[62,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:29 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 79 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=77/78 n=5 ec=62/51 lis/c=77/74 les/c/f=78/75/0 sis=79 pruub=15.459033966s) [2] async=[2] r=-1 lpr=79 pi=[74,79)/1 crt=58'1159 mlcod 58'1159 active pruub 262.177001953s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:29 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 79 pg[9.1e( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=-1 lpr=79 pi=[62,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:29 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 79 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=77/78 n=5 ec=62/51 lis/c=77/74 les/c/f=78/75/0 sis=79 pruub=15.458874702s) [2] r=-1 lpr=79 pi=[74,79)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 262.177001953s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:29 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 79 pg[9.16( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=-1 lpr=79 pi=[62,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:29 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 79 pg[9.16( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=-1 lpr=79 pi=[62,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:29 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 06 06:32:29 compute-1 sudo[84623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:29 compute-1 sudo[84623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:29 compute-1 sudo[84623]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:29 compute-1 sudo[84618]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:29 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 06 06:32:29 compute-1 sudo[84648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:29 compute-1 sudo[84648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:29 compute-1 sudo[84648]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:29 compute-1 sudo[84697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:29 compute-1 sudo[84697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:29 compute-1 sudo[84697]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:29 compute-1 sudo[84722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:32:29 compute-1 sudo[84722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:29 compute-1 ceph-mon[81689]: 2.1c scrub starts
Dec 06 06:32:29 compute-1 ceph-mon[81689]: 2.1c scrub ok
Dec 06 06:32:29 compute-1 ceph-mon[81689]: 3.d scrub starts
Dec 06 06:32:29 compute-1 ceph-mon[81689]: 3.d scrub ok
Dec 06 06:32:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 06 06:32:29 compute-1 ceph-mon[81689]: osdmap e78: 3 total, 3 up, 3 in
Dec 06 06:32:29 compute-1 ceph-mon[81689]: pgmap v259: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 107 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:29 compute-1 ceph-mon[81689]: 2.9 deep-scrub starts
Dec 06 06:32:29 compute-1 ceph-mon[81689]: 2.9 deep-scrub ok
Dec 06 06:32:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:29 compute-1 ceph-mon[81689]: osdmap e79: 3 total, 3 up, 3 in
Dec 06 06:32:29 compute-1 sudo[84915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mocaayfzicsjvhxflkkfzboheenpzdwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002749.3953328-195-250067114998029/AnsiballZ_file.py'
Dec 06 06:32:29 compute-1 sudo[84915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:29.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:29 compute-1 podman[84948]: 2025-12-06 06:32:29.828562973 +0000 UTC m=+0.077618591 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 06:32:29 compute-1 python3.9[84920]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:32:29 compute-1 sudo[84915]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:29 compute-1 podman[84948]: 2025-12-06 06:32:29.931784131 +0000 UTC m=+0.180839729 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:32:30 compute-1 sudo[84722]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:30 compute-1 python3.9[85222]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:32:30 compute-1 network[85239]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:32:30 compute-1 network[85240]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:32:30 compute-1 network[85241]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:32:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:30.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:31 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 06 06:32:31 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 06 06:32:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:31.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:32 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 06 06:32:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Dec 06 06:32:32 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 06 06:32:32 compute-1 ceph-mon[81689]: 3.c scrub starts
Dec 06 06:32:32 compute-1 ceph-mon[81689]: 3.c scrub ok
Dec 06 06:32:32 compute-1 ceph-mon[81689]: pgmap v261: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 107 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:32.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:33 compute-1 sudo[85312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:33 compute-1 sudo[85312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:33 compute-1 sudo[85312]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:33 compute-1 sudo[85340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:33 compute-1 sudo[85340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:33 compute-1 sudo[85340]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:33.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:33 compute-1 sudo[85368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:33 compute-1 sudo[85368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:33 compute-1 sudo[85368]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:33 compute-1 sudo[85397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:32:33 compute-1 sudo[85397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:34 compute-1 sudo[85397]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Dec 06 06:32:34 compute-1 ceph-mon[81689]: 6.a scrub starts
Dec 06 06:32:34 compute-1 ceph-mon[81689]: 6.a scrub ok
Dec 06 06:32:34 compute-1 ceph-mon[81689]: osdmap e80: 3 total, 3 up, 3 in
Dec 06 06:32:34 compute-1 ceph-mon[81689]: pgmap v263: 305 pgs: 3 remapped+peering, 3 active+remapped, 1 peering, 298 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 8.8 KiB/s rd, 170 B/s wr, 15 op/s; 98 B/s, 3 objects/s recovering
Dec 06 06:32:34 compute-1 ceph-mon[81689]: 7.19 scrub starts
Dec 06 06:32:34 compute-1 ceph-mon[81689]: 7.19 scrub ok
Dec 06 06:32:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:34.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:35 compute-1 python3.9[85630]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:32:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:35 compute-1 ceph-mon[81689]: pgmap v264: 305 pgs: 3 remapped+peering, 3 active+remapped, 1 peering, 298 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 163 B/s wr, 15 op/s; 94 B/s, 3 objects/s recovering
Dec 06 06:32:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:35 compute-1 ceph-mon[81689]: osdmap e81: 3 total, 3 up, 3 in
Dec 06 06:32:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:35.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Dec 06 06:32:35 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 82 pg[9.6( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:35 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 82 pg[9.6( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:35 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 82 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:35 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 82 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:35 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 82 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:35 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 82 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:36 compute-1 python3.9[85780]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:32:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:32:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:36.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:32:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:37 compute-1 python3.9[85934]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:32:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:37.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Dec 06 06:32:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 83 pg[9.6( v 58'1159 (0'0,58'1159] local-lis/les=82/83 n=6 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 83 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=82/83 n=5 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:37 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 83 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=82/83 n=5 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:38 compute-1 ceph-mon[81689]: osdmap e82: 3 total, 3 up, 3 in
Dec 06 06:32:38 compute-1 ceph-mon[81689]: pgmap v267: 305 pgs: 3 remapped+peering, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 98 B/s, 3 objects/s recovering
Dec 06 06:32:38 compute-1 ceph-mon[81689]: 7.1e scrub starts
Dec 06 06:32:38 compute-1 ceph-mon[81689]: 7.1e scrub ok
Dec 06 06:32:38 compute-1 sudo[86017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-1 sudo[86017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86017]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 06:32:38 compute-1 sudo[86064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86064]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-1 sudo[86096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86096]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayazfoywsfqkbcrddutxwujfwhofcdal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002758.0202916-339-190841560435446/AnsiballZ_setup.py'
Dec 06 06:32:38 compute-1 sudo[86146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph
Dec 06 06:32:38 compute-1 sudo[86188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:38 compute-1 sudo[86146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86146]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-1 sudo[86193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86193]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:32:38 compute-1 sudo[86218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86218]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-1 sudo[86243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86243]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:38 compute-1 sudo[86268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86268]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-1 sudo[86293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86293]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 python3.9[86191]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:32:38 compute-1 sudo[86318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:32:38 compute-1 sudo[86318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86318]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-1 sudo[86374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86374]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:32:38 compute-1 sudo[86399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86399]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86188]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:32:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:38.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:32:38 compute-1 sudo[86424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-1 sudo[86424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86424]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-1 sudo[86449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:32:38 compute-1 sudo[86449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-1 sudo[86449]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 ceph-mon[81689]: 4.8 scrub starts
Dec 06 06:32:39 compute-1 ceph-mon[81689]: 4.8 scrub ok
Dec 06 06:32:39 compute-1 ceph-mon[81689]: 11.17 scrub starts
Dec 06 06:32:39 compute-1 ceph-mon[81689]: 11.17 scrub ok
Dec 06 06:32:39 compute-1 ceph-mon[81689]: osdmap e83: 3 total, 3 up, 3 in
Dec 06 06:32:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:32:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:32:39 compute-1 ceph-mon[81689]: Updating compute-0:/etc/ceph/ceph.conf
Dec 06 06:32:39 compute-1 ceph-mon[81689]: Updating compute-1:/etc/ceph/ceph.conf
Dec 06 06:32:39 compute-1 ceph-mon[81689]: Updating compute-2:/etc/ceph/ceph.conf
Dec 06 06:32:39 compute-1 ceph-mon[81689]: pgmap v269: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 3 objects/s recovering
Dec 06 06:32:39 compute-1 sudo[86474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-1 sudo[86474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86474]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 06:32:39 compute-1 sudo[86499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86499]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-1 sudo[86544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86544]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:32:39 compute-1 sudo[86592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86592]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctjxlkzqeyreuxpottgnjhzlbulfldxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002758.0202916-339-190841560435446/AnsiballZ_dnf.py'
Dec 06 06:32:39 compute-1 sudo[86652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:39 compute-1 sudo[86645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-1 sudo[86645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86645]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:32:39 compute-1 sudo[86675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86675]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-1 sudo[86700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86700]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:32:39 compute-1 sudo[86725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86725]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 python3.9[86672]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:32:39 compute-1 sudo[86750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-1 sudo[86750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86750]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:39 compute-1 sudo[86776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86776]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-1 sudo[86801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86801]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:32:39 compute-1 sudo[86826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86826]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-1 sudo[86877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86877]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:39.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:39 compute-1 sudo[86905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:32:39 compute-1 sudo[86905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86905]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-1 sudo[86931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-1 sudo[86931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-1 sudo[86931]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:40 compute-1 ceph-mon[81689]: 8.2 scrub starts
Dec 06 06:32:40 compute-1 ceph-mon[81689]: 8.2 scrub ok
Dec 06 06:32:40 compute-1 ceph-mon[81689]: Updating compute-1:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:40 compute-1 ceph-mon[81689]: Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:40 compute-1 ceph-mon[81689]: Updating compute-0:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:40 compute-1 sudo[86958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:32:40 compute-1 sudo[86958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:40 compute-1 sudo[86958]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:40 compute-1 sudo[86986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:40 compute-1 sudo[86986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:40 compute-1 sudo[86986]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:40 compute-1 sudo[87011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:40 compute-1 sudo[87011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:40 compute-1 sudo[87011]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:40.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:41 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 06 06:32:41 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 06 06:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-1 ceph-mon[81689]: pgmap v270: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 3 objects/s recovering
Dec 06 06:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:41 compute-1 ceph-mon[81689]: 2.1f scrub starts
Dec 06 06:32:41 compute-1 ceph-mon[81689]: 2.1f scrub ok
Dec 06 06:32:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:41.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Dec 06 06:32:42 compute-1 ceph-mon[81689]: 7.1a scrub starts
Dec 06 06:32:42 compute-1 ceph-mon[81689]: 7.1a scrub ok
Dec 06 06:32:42 compute-1 ceph-mon[81689]: 8.16 scrub starts
Dec 06 06:32:42 compute-1 ceph-mon[81689]: 8.16 scrub ok
Dec 06 06:32:42 compute-1 ceph-mon[81689]: pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 42 B/s, 2 objects/s recovering
Dec 06 06:32:42 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 06 06:32:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:42.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:43 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Dec 06 06:32:43 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Dec 06 06:32:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 06 06:32:43 compute-1 ceph-mon[81689]: osdmap e84: 3 total, 3 up, 3 in
Dec 06 06:32:43 compute-1 ceph-mon[81689]: 7.18 scrub starts
Dec 06 06:32:43 compute-1 ceph-mon[81689]: 7.18 scrub ok
Dec 06 06:32:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:43.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:44.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Dec 06 06:32:45 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 85 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=6 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=85 pruub=14.067381859s) [2] r=-1 lpr=85 pi=[74,85)/1 crt=58'1159 mlcod 0'0 active pruub 276.898193359s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:45 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 85 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=6 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=85 pruub=14.067333221s) [2] r=-1 lpr=85 pi=[74,85)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 276.898193359s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:45 compute-1 ceph-mon[81689]: 7.1c deep-scrub starts
Dec 06 06:32:45 compute-1 ceph-mon[81689]: 7.1c deep-scrub ok
Dec 06 06:32:45 compute-1 ceph-mon[81689]: pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 1 objects/s recovering
Dec 06 06:32:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 06 06:32:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:45.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:46 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 06 06:32:46 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 06 06:32:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:32:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:46.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:32:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:47.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 06 06:32:49 compute-1 ceph-mon[81689]: osdmap e85: 3 total, 3 up, 3 in
Dec 06 06:32:49 compute-1 ceph-mon[81689]: 8.9 scrub starts
Dec 06 06:32:49 compute-1 ceph-mon[81689]: 8.9 scrub ok
Dec 06 06:32:49 compute-1 ceph-mon[81689]: pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 06 06:32:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Dec 06 06:32:49 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 86 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=6 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=86) [2]/[1] r=0 lpr=86 pi=[74,86)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:49 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 86 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=6 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=86) [2]/[1] r=0 lpr=86 pi=[74,86)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:49 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 86 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=5 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=86 pruub=9.904769897s) [2] r=-1 lpr=86 pi=[74,86)/1 crt=58'1159 mlcod 0'0 active pruub 276.898132324s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:49 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 86 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=5 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=86 pruub=9.904736519s) [2] r=-1 lpr=86 pi=[74,86)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 276.898132324s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:49 compute-1 sudo[87091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:49 compute-1 sudo[87091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:49 compute-1 sudo[87091]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:49 compute-1 sudo[87116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:32:49 compute-1 sudo[87116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:49 compute-1 sudo[87116]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:49.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:50 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 06 06:32:50 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 06 06:32:50 compute-1 ceph-mon[81689]: 3.a scrub starts
Dec 06 06:32:50 compute-1 ceph-mon[81689]: 3.a scrub ok
Dec 06 06:32:50 compute-1 ceph-mon[81689]: pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 06 06:32:50 compute-1 ceph-mon[81689]: 7.1b scrub starts
Dec 06 06:32:50 compute-1 ceph-mon[81689]: 7.1b scrub ok
Dec 06 06:32:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 06 06:32:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:50 compute-1 ceph-mon[81689]: osdmap e86: 3 total, 3 up, 3 in
Dec 06 06:32:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:32:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:32:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:50 compute-1 ceph-mon[81689]: 11.a scrub starts
Dec 06 06:32:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:50 compute-1 ceph-mon[81689]: 11.a scrub ok
Dec 06 06:32:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Dec 06 06:32:50 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 87 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=5 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[74,87)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:50 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 87 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=74/75 n=5 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[74,87)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:50 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 87 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=86/87 n=6 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=86) [2]/[1] async=[2] r=0 lpr=86 pi=[74,86)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:50.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Dec 06 06:32:51 compute-1 ceph-mon[81689]: Reconfiguring mon.compute-0 (monmap changed)...
Dec 06 06:32:51 compute-1 ceph-mon[81689]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 06:32:51 compute-1 ceph-mon[81689]: 6.e scrub starts
Dec 06 06:32:51 compute-1 ceph-mon[81689]: 6.e scrub ok
Dec 06 06:32:51 compute-1 ceph-mon[81689]: pgmap v278: 305 pgs: 1 active+clean+scrubbing, 2 unknown, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 06 06:32:51 compute-1 ceph-mon[81689]: osdmap e87: 3 total, 3 up, 3 in
Dec 06 06:32:51 compute-1 ceph-mon[81689]: 8.d scrub starts
Dec 06 06:32:51 compute-1 ceph-mon[81689]: 8.d scrub ok
Dec 06 06:32:51 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 88 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=86/87 n=6 ec=62/51 lis/c=86/74 les/c/f=87/75/0 sis=88 pruub=14.729565620s) [2] async=[2] r=-1 lpr=88 pi=[74,88)/1 crt=58'1159 mlcod 58'1159 active pruub 284.020111084s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:51 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 88 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=86/87 n=6 ec=62/51 lis/c=86/74 les/c/f=87/75/0 sis=88 pruub=14.729423523s) [2] r=-1 lpr=88 pi=[74,88)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 284.020111084s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:51 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 88 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=87/88 n=5 ec=62/51 lis/c=74/74 les/c/f=75/75/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[74,87)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:32:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:51.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:32:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:32:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:32:53 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 06 06:32:53 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 06 06:32:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Dec 06 06:32:53 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 89 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=87/88 n=5 ec=62/51 lis/c=87/74 les/c/f=88/75/0 sis=89 pruub=14.176657677s) [2] async=[2] r=-1 lpr=89 pi=[74,89)/1 crt=58'1159 mlcod 58'1159 active pruub 285.329559326s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:53 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 89 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=87/88 n=5 ec=62/51 lis/c=87/74 les/c/f=88/75/0 sis=89 pruub=14.176554680s) [2] r=-1 lpr=89 pi=[74,89)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 285.329559326s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:53 compute-1 ceph-mon[81689]: osdmap e88: 3 total, 3 up, 3 in
Dec 06 06:32:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:53 compute-1 ceph-mon[81689]: Reconfiguring mgr.compute-0.sfzyix (monmap changed)...
Dec 06 06:32:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.sfzyix", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:32:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:32:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:53 compute-1 ceph-mon[81689]: Reconfiguring daemon mgr.compute-0.sfzyix on compute-0
Dec 06 06:32:53 compute-1 ceph-mon[81689]: pgmap v281: 305 pgs: 2 active+remapped, 1 active+clean+scrubbing, 2 unknown, 300 active+clean; 456 KiB data, 125 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Dec 06 06:32:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 06:32:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:53.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 06:32:54 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 06 06:32:54 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 06 06:32:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Dec 06 06:32:54 compute-1 ceph-mon[81689]: 11.e scrub starts
Dec 06 06:32:54 compute-1 ceph-mon[81689]: 11.e scrub ok
Dec 06 06:32:54 compute-1 ceph-mon[81689]: 4.c scrub starts
Dec 06 06:32:54 compute-1 ceph-mon[81689]: 4.c scrub ok
Dec 06 06:32:54 compute-1 ceph-mon[81689]: 2.1e scrub starts
Dec 06 06:32:54 compute-1 ceph-mon[81689]: 2.1e scrub ok
Dec 06 06:32:54 compute-1 ceph-mon[81689]: osdmap e89: 3 total, 3 up, 3 in
Dec 06 06:32:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:54 compute-1 ceph-mon[81689]: Reconfiguring crash.compute-0 (monmap changed)...
Dec 06 06:32:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:32:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:54 compute-1 ceph-mon[81689]: Reconfiguring daemon crash.compute-0 on compute-0
Dec 06 06:32:54 compute-1 ceph-mon[81689]: 8.f scrub starts
Dec 06 06:32:54 compute-1 ceph-mon[81689]: 8.f scrub ok
Dec 06 06:32:54 compute-1 ceph-mon[81689]: pgmap v283: 305 pgs: 2 active+remapped, 1 active+clean+scrubbing, 2 unknown, 300 active+clean; 456 KiB data, 125 MiB used, 21 GiB / 21 GiB avail; 23 B/s, 0 objects/s recovering
Dec 06 06:32:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 06 06:32:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:32:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:54.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:32:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:55.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:56 compute-1 ceph-mon[81689]: 4.e scrub starts
Dec 06 06:32:56 compute-1 ceph-mon[81689]: 4.e scrub ok
Dec 06 06:32:56 compute-1 ceph-mon[81689]: Reconfiguring osd.0 (monmap changed)...
Dec 06 06:32:56 compute-1 ceph-mon[81689]: Reconfiguring daemon osd.0 on compute-0
Dec 06 06:32:56 compute-1 ceph-mon[81689]: osdmap e90: 3 total, 3 up, 3 in
Dec 06 06:32:56 compute-1 sudo[87180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:56 compute-1 sudo[87180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:56 compute-1 sudo[87180]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:56 compute-1 sudo[87205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:56 compute-1 sudo[87205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:56 compute-1 sudo[87205]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:56 compute-1 sudo[87230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:56 compute-1 sudo[87230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:56 compute-1 sudo[87230]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:56 compute-1 sudo[87255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:56 compute-1 sudo[87255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:56 compute-1 podman[87304]: 2025-12-06 06:32:56.634591155 +0000 UTC m=+0.022679283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:56 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 06 06:32:56 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 06 06:32:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:32:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:57.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:32:58 compute-1 podman[87304]: 2025-12-06 06:32:58.184611623 +0000 UTC m=+1.572699741 container create 373051fcdfa50b9e160639e0635fe6e73a4a595d3e2b306afc6a39d7ceb5f810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mayer, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:32:58 compute-1 systemd[1]: Started libpod-conmon-373051fcdfa50b9e160639e0635fe6e73a4a595d3e2b306afc6a39d7ceb5f810.scope.
Dec 06 06:32:58 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:32:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Dec 06 06:32:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 91 pg[9.a( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=91) [1] r=0 lpr=91 pi=[62,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 91 pg[9.1a( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=91) [1] r=0 lpr=91 pi=[62,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:58 compute-1 podman[87304]: 2025-12-06 06:32:58.371377984 +0000 UTC m=+1.759466122 container init 373051fcdfa50b9e160639e0635fe6e73a4a595d3e2b306afc6a39d7ceb5f810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mayer, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:32:58 compute-1 podman[87304]: 2025-12-06 06:32:58.378804205 +0000 UTC m=+1.766892323 container start 373051fcdfa50b9e160639e0635fe6e73a4a595d3e2b306afc6a39d7ceb5f810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mayer, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:32:58 compute-1 podman[87304]: 2025-12-06 06:32:58.381911449 +0000 UTC m=+1.769999567 container attach 373051fcdfa50b9e160639e0635fe6e73a4a595d3e2b306afc6a39d7ceb5f810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 06:32:58 compute-1 stoic_mayer[87321]: 167 167
Dec 06 06:32:58 compute-1 systemd[1]: libpod-373051fcdfa50b9e160639e0635fe6e73a4a595d3e2b306afc6a39d7ceb5f810.scope: Deactivated successfully.
Dec 06 06:32:58 compute-1 conmon[87321]: conmon 373051fcdfa50b9e1606 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-373051fcdfa50b9e160639e0635fe6e73a4a595d3e2b306afc6a39d7ceb5f810.scope/container/memory.events
Dec 06 06:32:58 compute-1 podman[87326]: 2025-12-06 06:32:58.427006195 +0000 UTC m=+0.026166107 container died 373051fcdfa50b9e160639e0635fe6e73a4a595d3e2b306afc6a39d7ceb5f810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:32:58 compute-1 systemd[1]: var-lib-containers-storage-overlay-516baf2eab022fcb3e61dc77a73d526b18aaacb10e92ca81807f4d2ddb5a48e5-merged.mount: Deactivated successfully.
Dec 06 06:32:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:58 compute-1 ceph-mon[81689]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 357 B/s wr, 31 op/s; 99 B/s, 4 objects/s recovering
Dec 06 06:32:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 06 06:32:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:58 compute-1 ceph-mon[81689]: Reconfiguring crash.compute-1 (monmap changed)...
Dec 06 06:32:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:32:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:58 compute-1 ceph-mon[81689]: Reconfiguring daemon crash.compute-1 on compute-1
Dec 06 06:32:58 compute-1 ceph-mon[81689]: 4.d scrub starts
Dec 06 06:32:58 compute-1 podman[87326]: 2025-12-06 06:32:58.473022976 +0000 UTC m=+0.072182658 container remove 373051fcdfa50b9e160639e0635fe6e73a4a595d3e2b306afc6a39d7ceb5f810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Dec 06 06:32:58 compute-1 systemd[1]: libpod-conmon-373051fcdfa50b9e160639e0635fe6e73a4a595d3e2b306afc6a39d7ceb5f810.scope: Deactivated successfully.
Dec 06 06:32:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Dec 06 06:32:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 92 pg[9.a( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[62,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 92 pg[9.a( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[62,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 92 pg[9.1a( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[62,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 92 pg[9.1a( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[62,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:58 compute-1 sudo[87255]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:58 compute-1 sudo[87343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:58 compute-1 sudo[87343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:58 compute-1 sudo[87343]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:58 compute-1 sudo[87368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:58 compute-1 sudo[87368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:58 compute-1 sudo[87368]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:58 compute-1 sudo[87393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:58 compute-1 sudo[87393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:58 compute-1 sudo[87393]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:58 compute-1 sudo[87418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:58 compute-1 sudo[87418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:58.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:59 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 06 06:32:59 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 06 06:32:59 compute-1 podman[87461]: 2025-12-06 06:32:59.035296412 +0000 UTC m=+0.052939730 container create 8ec671b9b67f7eb42395b3180d5141e7d05e0568ac8d2226f48da70a46733c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 06:32:59 compute-1 systemd[1]: Started libpod-conmon-8ec671b9b67f7eb42395b3180d5141e7d05e0568ac8d2226f48da70a46733c35.scope.
Dec 06 06:32:59 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:32:59 compute-1 podman[87461]: 2025-12-06 06:32:59.098358262 +0000 UTC m=+0.116001580 container init 8ec671b9b67f7eb42395b3180d5141e7d05e0568ac8d2226f48da70a46733c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:32:59 compute-1 podman[87461]: 2025-12-06 06:32:59.103707906 +0000 UTC m=+0.121351224 container start 8ec671b9b67f7eb42395b3180d5141e7d05e0568ac8d2226f48da70a46733c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:32:59 compute-1 jovial_lamport[87477]: 167 167
Dec 06 06:32:59 compute-1 systemd[1]: libpod-8ec671b9b67f7eb42395b3180d5141e7d05e0568ac8d2226f48da70a46733c35.scope: Deactivated successfully.
Dec 06 06:32:59 compute-1 podman[87461]: 2025-12-06 06:32:59.107965542 +0000 UTC m=+0.125608880 container attach 8ec671b9b67f7eb42395b3180d5141e7d05e0568ac8d2226f48da70a46733c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:32:59 compute-1 podman[87461]: 2025-12-06 06:32:59.108394833 +0000 UTC m=+0.126038141 container died 8ec671b9b67f7eb42395b3180d5141e7d05e0568ac8d2226f48da70a46733c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamport, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 06:32:59 compute-1 podman[87461]: 2025-12-06 06:32:59.017272645 +0000 UTC m=+0.034915973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-ba8abe23673bafba1f0538db141fc2b7d8848f6b68421173dd5526ddfa34f75d-merged.mount: Deactivated successfully.
Dec 06 06:32:59 compute-1 podman[87461]: 2025-12-06 06:32:59.142906743 +0000 UTC m=+0.160550051 container remove 8ec671b9b67f7eb42395b3180d5141e7d05e0568ac8d2226f48da70a46733c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lamport, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 06:32:59 compute-1 systemd[1]: libpod-conmon-8ec671b9b67f7eb42395b3180d5141e7d05e0568ac8d2226f48da70a46733c35.scope: Deactivated successfully.
Dec 06 06:32:59 compute-1 sudo[87418]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:59 compute-1 sudo[87502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:59 compute-1 sudo[87502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:59 compute-1 sudo[87502]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:59 compute-1 sudo[87527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:59 compute-1 sudo[87527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:59 compute-1 sudo[87527]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:59 compute-1 sudo[87552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:59 compute-1 sudo[87552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:59 compute-1 sudo[87552]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:59 compute-1 ceph-mon[81689]: 4.d scrub ok
Dec 06 06:32:59 compute-1 ceph-mon[81689]: 8.a scrub starts
Dec 06 06:32:59 compute-1 ceph-mon[81689]: 8.a scrub ok
Dec 06 06:32:59 compute-1 ceph-mon[81689]: pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 317 B/s wr, 29 op/s; 53 B/s, 2 objects/s recovering
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 06 06:32:59 compute-1 ceph-mon[81689]: osdmap e91: 3 total, 3 up, 3 in
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 06 06:32:59 compute-1 ceph-mon[81689]: osdmap e92: 3 total, 3 up, 3 in
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-1 ceph-mon[81689]: Reconfiguring osd.1 (monmap changed)...
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:59 compute-1 ceph-mon[81689]: Reconfiguring daemon osd.1 on compute-1
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-1 ceph-mon[81689]: Reconfiguring mon.compute-1 (monmap changed)...
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:32:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:59 compute-1 ceph-mon[81689]: Reconfiguring daemon mon.compute-1 on compute-1
Dec 06 06:32:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Dec 06 06:32:59 compute-1 sudo[87577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:59 compute-1 sudo[87577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:59 compute-1 podman[87618]: 2025-12-06 06:32:59.736117083 +0000 UTC m=+0.033050262 container create e4026f7234e648bbb8497aa78bef986a00de7c7bdbd42d3857f794e399fb3919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:32:59 compute-1 systemd[1]: Started libpod-conmon-e4026f7234e648bbb8497aa78bef986a00de7c7bdbd42d3857f794e399fb3919.scope.
Dec 06 06:32:59 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:32:59 compute-1 podman[87618]: 2025-12-06 06:32:59.818766652 +0000 UTC m=+0.115699841 container init e4026f7234e648bbb8497aa78bef986a00de7c7bdbd42d3857f794e399fb3919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:32:59 compute-1 podman[87618]: 2025-12-06 06:32:59.721427077 +0000 UTC m=+0.018360276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:59 compute-1 podman[87618]: 2025-12-06 06:32:59.825939396 +0000 UTC m=+0.122872575 container start e4026f7234e648bbb8497aa78bef986a00de7c7bdbd42d3857f794e399fb3919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:32:59 compute-1 podman[87618]: 2025-12-06 06:32:59.829454631 +0000 UTC m=+0.126387840 container attach e4026f7234e648bbb8497aa78bef986a00de7c7bdbd42d3857f794e399fb3919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 06:32:59 compute-1 vigilant_engelbart[87635]: 167 167
Dec 06 06:32:59 compute-1 systemd[1]: libpod-e4026f7234e648bbb8497aa78bef986a00de7c7bdbd42d3857f794e399fb3919.scope: Deactivated successfully.
Dec 06 06:32:59 compute-1 podman[87618]: 2025-12-06 06:32:59.832301188 +0000 UTC m=+0.129234367 container died e4026f7234e648bbb8497aa78bef986a00de7c7bdbd42d3857f794e399fb3919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:32:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-b7a54d0839a3192eba487e092ab55cee5d372f01264fea88f9b2e01e53fed154-merged.mount: Deactivated successfully.
Dec 06 06:32:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:32:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:59.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:59 compute-1 podman[87618]: 2025-12-06 06:32:59.885056441 +0000 UTC m=+0.181989620 container remove e4026f7234e648bbb8497aa78bef986a00de7c7bdbd42d3857f794e399fb3919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:32:59 compute-1 systemd[1]: libpod-conmon-e4026f7234e648bbb8497aa78bef986a00de7c7bdbd42d3857f794e399fb3919.scope: Deactivated successfully.
Dec 06 06:32:59 compute-1 sudo[87577]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:00 compute-1 sudo[87655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:00 compute-1 sudo[87655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:00 compute-1 sudo[87655]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:00 compute-1 sudo[87680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:33:00 compute-1 sudo[87680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:00 compute-1 sudo[87680]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:00 compute-1 sudo[87705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:00 compute-1 sudo[87705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:00 compute-1 sudo[87705]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:00 compute-1 sudo[87730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:33:00 compute-1 sudo[87730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:00 compute-1 podman[87770]: 2025-12-06 06:33:00.412450025 +0000 UTC m=+0.048403306 container create 39962b6612372333b5622eb385ff4ce1bc20da37b019aff02eaadb43f04a1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:33:00 compute-1 systemd[1]: Started libpod-conmon-39962b6612372333b5622eb385ff4ce1bc20da37b019aff02eaadb43f04a1390.scope.
Dec 06 06:33:00 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:33:00 compute-1 podman[87770]: 2025-12-06 06:33:00.47753026 +0000 UTC m=+0.113483561 container init 39962b6612372333b5622eb385ff4ce1bc20da37b019aff02eaadb43f04a1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Dec 06 06:33:00 compute-1 podman[87770]: 2025-12-06 06:33:00.385724984 +0000 UTC m=+0.021678285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:33:00 compute-1 podman[87770]: 2025-12-06 06:33:00.482807172 +0000 UTC m=+0.118760453 container start 39962b6612372333b5622eb385ff4ce1bc20da37b019aff02eaadb43f04a1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 06:33:00 compute-1 wizardly_wozniak[87786]: 167 167
Dec 06 06:33:00 compute-1 podman[87770]: 2025-12-06 06:33:00.485789773 +0000 UTC m=+0.121743084 container attach 39962b6612372333b5622eb385ff4ce1bc20da37b019aff02eaadb43f04a1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 06:33:00 compute-1 systemd[1]: libpod-39962b6612372333b5622eb385ff4ce1bc20da37b019aff02eaadb43f04a1390.scope: Deactivated successfully.
Dec 06 06:33:00 compute-1 conmon[87786]: conmon 39962b6612372333b562 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39962b6612372333b5622eb385ff4ce1bc20da37b019aff02eaadb43f04a1390.scope/container/memory.events
Dec 06 06:33:00 compute-1 podman[87770]: 2025-12-06 06:33:00.487178961 +0000 UTC m=+0.123132252 container died 39962b6612372333b5622eb385ff4ce1bc20da37b019aff02eaadb43f04a1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:33:00 compute-1 ceph-mon[81689]: 5.f scrub starts
Dec 06 06:33:00 compute-1 ceph-mon[81689]: 5.f scrub ok
Dec 06 06:33:00 compute-1 ceph-mon[81689]: osdmap e93: 3 total, 3 up, 3 in
Dec 06 06:33:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:00 compute-1 ceph-mon[81689]: Reconfiguring mgr.compute-1.nmklwp (monmap changed)...
Dec 06 06:33:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:33:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:33:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:00 compute-1 ceph-mon[81689]: Reconfiguring daemon mgr.compute-1.nmklwp on compute-1
Dec 06 06:33:00 compute-1 ceph-mon[81689]: pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 369 B/s wr, 34 op/s; 62 B/s, 3 objects/s recovering
Dec 06 06:33:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 06 06:33:00 compute-1 systemd[1]: var-lib-containers-storage-overlay-c6a6a965b8c754c3395c9babb6fea1a1755859f33243990ae12e1f1061a68322-merged.mount: Deactivated successfully.
Dec 06 06:33:00 compute-1 podman[87770]: 2025-12-06 06:33:00.521832175 +0000 UTC m=+0.157785456 container remove 39962b6612372333b5622eb385ff4ce1bc20da37b019aff02eaadb43f04a1390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:33:00 compute-1 systemd[1]: libpod-conmon-39962b6612372333b5622eb385ff4ce1bc20da37b019aff02eaadb43f04a1390.scope: Deactivated successfully.
Dec 06 06:33:00 compute-1 sudo[87730]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:00.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Dec 06 06:33:00 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 94 pg[9.a( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=92/62 les/c/f=93/63/0 sis=94) [1] r=0 lpr=94 pi=[62,94)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:00 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 94 pg[9.a( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=92/62 les/c/f=93/63/0 sis=94) [1] r=0 lpr=94 pi=[62,94)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:00 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 94 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=92/62 les/c/f=93/63/0 sis=94) [1] r=0 lpr=94 pi=[62,94)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:00 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 94 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=92/62 les/c/f=93/63/0 sis=94) [1] r=0 lpr=94 pi=[62,94)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:01.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:02 compute-1 ceph-mon[81689]: Reconfiguring mon.compute-2 (monmap changed)...
Dec 06 06:33:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:33:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:33:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:02 compute-1 ceph-mon[81689]: Reconfiguring daemon mon.compute-2 on compute-2
Dec 06 06:33:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 06 06:33:02 compute-1 ceph-mon[81689]: osdmap e94: 3 total, 3 up, 3 in
Dec 06 06:33:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:33:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:33:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:02.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Dec 06 06:33:02 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 95 pg[9.a( v 58'1159 (0'0,58'1159] local-lis/les=94/95 n=6 ec=62/51 lis/c=92/62 les/c/f=93/63/0 sis=94) [1] r=0 lpr=94 pi=[62,94)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:02 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 95 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=94/95 n=5 ec=62/51 lis/c=92/62 les/c/f=93/63/0 sis=94) [1] r=0 lpr=94 pi=[62,94)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:03 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 06 06:33:03 compute-1 ceph-mon[81689]: Reconfiguring mgr.compute-2.ytlehq (monmap changed)...
Dec 06 06:33:03 compute-1 ceph-mon[81689]: Reconfiguring daemon mgr.compute-2.ytlehq on compute-2
Dec 06 06:33:03 compute-1 ceph-mon[81689]: pgmap v292: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 3 objects/s recovering
Dec 06 06:33:03 compute-1 ceph-mon[81689]: 2.4 scrub starts
Dec 06 06:33:03 compute-1 ceph-mon[81689]: 2.4 scrub ok
Dec 06 06:33:03 compute-1 ceph-mon[81689]: osdmap e95: 3 total, 3 up, 3 in
Dec 06 06:33:03 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 06 06:33:03 compute-1 sudo[87806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:03 compute-1 sudo[87806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:03 compute-1 sudo[87806]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:03 compute-1 sudo[87831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:33:03 compute-1 sudo[87831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:03 compute-1 sudo[87831]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:03 compute-1 sudo[87856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:03 compute-1 sudo[87856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:03 compute-1 sudo[87856]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:03 compute-1 sudo[87881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:33:03 compute-1 sudo[87881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:03 compute-1 podman[87978]: 2025-12-06 06:33:03.795640924 +0000 UTC m=+0.053006521 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:33:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:03.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:03 compute-1 podman[87978]: 2025-12-06 06:33:03.908792445 +0000 UTC m=+0.166158002 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 06:33:04 compute-1 ceph-mon[81689]: 10.6 scrub starts
Dec 06 06:33:04 compute-1 ceph-mon[81689]: 10.6 scrub ok
Dec 06 06:33:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:04 compute-1 ceph-mon[81689]: 8.1 scrub starts
Dec 06 06:33:04 compute-1 ceph-mon[81689]: 8.1 scrub ok
Dec 06 06:33:04 compute-1 sudo[87881]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:04.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:05 compute-1 ceph-mon[81689]: pgmap v294: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 38 B/s, 2 objects/s recovering
Dec 06 06:33:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-1 ceph-mon[81689]: 8.7 deep-scrub starts
Dec 06 06:33:05 compute-1 ceph-mon[81689]: 8.7 deep-scrub ok
Dec 06 06:33:05 compute-1 ceph-mon[81689]: 11.8 deep-scrub starts
Dec 06 06:33:05 compute-1 ceph-mon[81689]: 11.8 deep-scrub ok
Dec 06 06:33:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:05.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:06 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 06 06:33:06 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 06 06:33:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:33:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:33:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:33:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:06 compute-1 ceph-mon[81689]: pgmap v295: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 32 B/s, 1 objects/s recovering
Dec 06 06:33:06 compute-1 ceph-mon[81689]: 8.e scrub starts
Dec 06 06:33:06 compute-1 ceph-mon[81689]: 8.e scrub ok
Dec 06 06:33:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:06.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:07 compute-1 ceph-mon[81689]: 10.7 scrub starts
Dec 06 06:33:07 compute-1 ceph-mon[81689]: 10.7 scrub ok
Dec 06 06:33:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:07.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:08 compute-1 ceph-mon[81689]: pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Dec 06 06:33:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 06 06:33:08 compute-1 ceph-mon[81689]: 8.13 scrub starts
Dec 06 06:33:08 compute-1 ceph-mon[81689]: 8.13 scrub ok
Dec 06 06:33:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Dec 06 06:33:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:33:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:08.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:33:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 06 06:33:09 compute-1 ceph-mon[81689]: osdmap e96: 3 total, 3 up, 3 in
Dec 06 06:33:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:33:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:09.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:33:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Dec 06 06:33:10 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 97 pg[9.d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=97) [1] r=0 lpr=97 pi=[80,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:10 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 97 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=79/79 les/c/f=80/80/0 sis=97) [1] r=0 lpr=97 pi=[79,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:10 compute-1 ceph-mon[81689]: 8.15 scrub starts
Dec 06 06:33:10 compute-1 ceph-mon[81689]: 8.15 scrub ok
Dec 06 06:33:10 compute-1 ceph-mon[81689]: pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 06 06:33:10 compute-1 ceph-mon[81689]: 8.1a scrub starts
Dec 06 06:33:10 compute-1 ceph-mon[81689]: 8.1a scrub ok
Dec 06 06:33:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:10.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:11 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 06 06:33:11 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 06 06:33:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Dec 06 06:33:11 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 98 pg[9.d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=98) [1]/[2] r=-1 lpr=98 pi=[80,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:11 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 98 pg[9.d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=98) [1]/[2] r=-1 lpr=98 pi=[80,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:11 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 98 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=79/79 les/c/f=80/80/0 sis=98) [1]/[2] r=-1 lpr=98 pi=[79,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:11 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 98 pg[9.1d( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=79/79 les/c/f=80/80/0 sis=98) [1]/[2] r=-1 lpr=98 pi=[79,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 06 06:33:11 compute-1 ceph-mon[81689]: osdmap e97: 3 total, 3 up, 3 in
Dec 06 06:33:11 compute-1 ceph-mon[81689]: 8.1d scrub starts
Dec 06 06:33:11 compute-1 ceph-mon[81689]: 8.1d scrub ok
Dec 06 06:33:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:11 compute-1 ceph-mon[81689]: osdmap e98: 3 total, 3 up, 3 in
Dec 06 06:33:11 compute-1 sudo[88131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:11 compute-1 sudo[88131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:11 compute-1 sudo[88131]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:11.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:11 compute-1 sudo[88156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:33:11 compute-1 sudo[88156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:11 compute-1 sudo[88156]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Dec 06 06:33:12 compute-1 ceph-mon[81689]: 10.9 scrub starts
Dec 06 06:33:12 compute-1 ceph-mon[81689]: 10.9 scrub ok
Dec 06 06:33:12 compute-1 ceph-mon[81689]: pgmap v301: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:12.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Dec 06 06:33:13 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 100 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=98/79 les/c/f=99/80/0 sis=100) [1] r=0 lpr=100 pi=[79,100)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:13 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 100 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=98/79 les/c/f=99/80/0 sis=100) [1] r=0 lpr=100 pi=[79,100)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:13 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 100 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=98/80 les/c/f=99/81/0 sis=100) [1] r=0 lpr=100 pi=[80,100)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:13 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 100 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=98/80 les/c/f=99/81/0 sis=100) [1] r=0 lpr=100 pi=[80,100)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:13 compute-1 ceph-mon[81689]: osdmap e99: 3 total, 3 up, 3 in
Dec 06 06:33:13 compute-1 ceph-mon[81689]: osdmap e100: 3 total, 3 up, 3 in
Dec 06 06:33:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:13.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Dec 06 06:33:14 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 101 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=100/101 n=6 ec=62/51 lis/c=98/80 les/c/f=99/81/0 sis=100) [1] r=0 lpr=100 pi=[80,100)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:14 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 101 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=100/101 n=5 ec=62/51 lis/c=98/79 les/c/f=99/80/0 sis=100) [1] r=0 lpr=100 pi=[79,100)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:14 compute-1 ceph-mon[81689]: 11.16 scrub starts
Dec 06 06:33:14 compute-1 ceph-mon[81689]: 11.16 scrub ok
Dec 06 06:33:14 compute-1 ceph-mon[81689]: pgmap v304: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:14 compute-1 ceph-mon[81689]: 8.1e scrub starts
Dec 06 06:33:14 compute-1 ceph-mon[81689]: 8.1e scrub ok
Dec 06 06:33:14 compute-1 ceph-mon[81689]: osdmap e101: 3 total, 3 up, 3 in
Dec 06 06:33:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:33:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:14.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:33:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:15.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:16 compute-1 ceph-mon[81689]: 8.11 scrub starts
Dec 06 06:33:16 compute-1 ceph-mon[81689]: 8.11 scrub ok
Dec 06 06:33:16 compute-1 ceph-mon[81689]: 9.1 scrub starts
Dec 06 06:33:16 compute-1 ceph-mon[81689]: 9.1 scrub ok
Dec 06 06:33:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:33:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:16.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:33:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:33:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:17.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:33:18 compute-1 ceph-mon[81689]: pgmap v306: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:18 compute-1 ceph-mon[81689]: 9.4 scrub starts
Dec 06 06:33:18 compute-1 ceph-mon[81689]: 9.4 scrub ok
Dec 06 06:33:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:18.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:19 compute-1 ceph-mon[81689]: pgmap v307: 305 pgs: 305 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 15 op/s; 146 B/s, 5 objects/s recovering
Dec 06 06:33:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 06 06:33:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Dec 06 06:33:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:19.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Dec 06 06:33:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:20.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:21 compute-1 ceph-mon[81689]: 8.3 deep-scrub starts
Dec 06 06:33:21 compute-1 ceph-mon[81689]: 8.3 deep-scrub ok
Dec 06 06:33:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 06 06:33:21 compute-1 ceph-mon[81689]: osdmap e102: 3 total, 3 up, 3 in
Dec 06 06:33:21 compute-1 ceph-mon[81689]: pgmap v309: 305 pgs: 305 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 14 op/s; 132 B/s, 4 objects/s recovering
Dec 06 06:33:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:33:21 compute-1 ceph-mon[81689]: 9.c scrub starts
Dec 06 06:33:21 compute-1 ceph-mon[81689]: 9.c scrub ok
Dec 06 06:33:21 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 103 pg[9.f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=103) [1] r=0 lpr=103 pi=[72,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:21 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 103 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=103) [1] r=0 lpr=103 pi=[72,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:21 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 06 06:33:21 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 06 06:33:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:21.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:33:22 compute-1 ceph-mon[81689]: osdmap e103: 3 total, 3 up, 3 in
Dec 06 06:33:22 compute-1 ceph-mon[81689]: 10.a scrub starts
Dec 06 06:33:22 compute-1 ceph-mon[81689]: 10.a scrub ok
Dec 06 06:33:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Dec 06 06:33:22 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 104 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=104) [1]/[2] r=-1 lpr=104 pi=[72,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:22 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 104 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=104) [1]/[2] r=-1 lpr=104 pi=[72,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:22 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 104 pg[9.f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=104) [1]/[2] r=-1 lpr=104 pi=[72,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:22 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 104 pg[9.f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=104) [1]/[2] r=-1 lpr=104 pi=[72,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:22.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:23 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 06 06:33:23 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 06 06:33:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:23.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:24 compute-1 ceph-mon[81689]: 8.b scrub starts
Dec 06 06:33:24 compute-1 ceph-mon[81689]: 8.b scrub ok
Dec 06 06:33:24 compute-1 ceph-mon[81689]: osdmap e104: 3 total, 3 up, 3 in
Dec 06 06:33:24 compute-1 ceph-mon[81689]: 8.c scrub starts
Dec 06 06:33:24 compute-1 ceph-mon[81689]: pgmap v312: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 109 B/s, 3 objects/s recovering
Dec 06 06:33:24 compute-1 ceph-mon[81689]: 8.c scrub ok
Dec 06 06:33:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Dec 06 06:33:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:24.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:25 compute-1 sudo[86652]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:25 compute-1 ceph-mon[81689]: 10.b scrub starts
Dec 06 06:33:25 compute-1 ceph-mon[81689]: 10.b scrub ok
Dec 06 06:33:25 compute-1 ceph-mon[81689]: osdmap e105: 3 total, 3 up, 3 in
Dec 06 06:33:25 compute-1 ceph-mon[81689]: pgmap v314: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:25.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:26.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:27 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 106 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=104/72 les/c/f=105/73/0 sis=106) [1] r=0 lpr=106 pi=[72,106)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:27 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 106 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=6 ec=62/51 lis/c=104/72 les/c/f=105/73/0 sis=106) [1] r=0 lpr=106 pi=[72,106)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Dec 06 06:33:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 06 06:33:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:27.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:28 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 06 06:33:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Dec 06 06:33:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 107 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=104/72 les/c/f=105/73/0 sis=107) [1] r=0 lpr=107 pi=[72,107)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 107 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=104/72 les/c/f=105/73/0 sis=107) [1] r=0 lpr=107 pi=[72,107)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:28 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 06 06:33:28 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 107 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=106/107 n=6 ec=62/51 lis/c=104/72 les/c/f=105/73/0 sis=106) [1] r=0 lpr=106 pi=[72,106)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:28.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:28 compute-1 ceph-mon[81689]: 11.13 scrub starts
Dec 06 06:33:28 compute-1 ceph-mon[81689]: 11.13 scrub ok
Dec 06 06:33:28 compute-1 ceph-mon[81689]: pgmap v315: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 1/218 objects misplaced (0.459%); 36 B/s, 1 objects/s recovering
Dec 06 06:33:28 compute-1 ceph-mon[81689]: 9.10 scrub starts
Dec 06 06:33:28 compute-1 ceph-mon[81689]: 9.10 scrub ok
Dec 06 06:33:28 compute-1 ceph-mon[81689]: osdmap e106: 3 total, 3 up, 3 in
Dec 06 06:33:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 06 06:33:28 compute-1 ceph-mon[81689]: osdmap e107: 3 total, 3 up, 3 in
Dec 06 06:33:28 compute-1 ceph-mon[81689]: pgmap v318: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 341 B/s wr, 31 op/s; 1/218 objects misplaced (0.459%); 73 B/s, 3 objects/s recovering
Dec 06 06:33:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 06 06:33:28 compute-1 ceph-mon[81689]: 9.11 scrub starts
Dec 06 06:33:28 compute-1 ceph-mon[81689]: 9.11 scrub ok
Dec 06 06:33:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Dec 06 06:33:29 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 108 pg[9.11( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=108) [1] r=0 lpr=108 pi=[62,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:29 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 108 pg[9.10( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=107) [1] r=0 lpr=108 pi=[62,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:29 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 108 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=107/108 n=5 ec=62/51 lis/c=104/72 les/c/f=105/73/0 sis=107) [1] r=0 lpr=107 pi=[72,107)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:29.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:30 compute-1 ceph-mon[81689]: 10.c scrub starts
Dec 06 06:33:30 compute-1 ceph-mon[81689]: 10.c scrub ok
Dec 06 06:33:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 06 06:33:30 compute-1 ceph-mon[81689]: osdmap e108: 3 total, 3 up, 3 in
Dec 06 06:33:30 compute-1 ceph-mon[81689]: 9.12 deep-scrub starts
Dec 06 06:33:30 compute-1 ceph-mon[81689]: 9.12 deep-scrub ok
Dec 06 06:33:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Dec 06 06:33:30 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 109 pg[9.12( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109) [1] r=0 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:30 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 109 pg[9.10( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109) [1]/[0] r=-1 lpr=109 pi=[62,109)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:30 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 109 pg[9.11( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109) [1]/[0] r=-1 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:30 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 109 pg[9.10( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109) [1]/[0] r=-1 lpr=109 pi=[62,109)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:30 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 109 pg[9.11( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109) [1]/[0] r=-1 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:30 compute-1 sudo[88330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kegfsbauwkmwupkhglwreevdspahdqdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002810.380123-375-52399766818085/AnsiballZ_command.py'
Dec 06 06:33:30 compute-1 sudo[88330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:30 compute-1 python3.9[88332]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:33:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:31 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 06 06:33:31 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 06 06:33:31 compute-1 ceph-mon[81689]: 11.3 scrub starts
Dec 06 06:33:31 compute-1 ceph-mon[81689]: 11.3 scrub ok
Dec 06 06:33:31 compute-1 ceph-mon[81689]: pgmap v320: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 341 B/s wr, 31 op/s; 1/218 objects misplaced (0.459%); 73 B/s, 3 objects/s recovering
Dec 06 06:33:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 06 06:33:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 06 06:33:31 compute-1 ceph-mon[81689]: osdmap e109: 3 total, 3 up, 3 in
Dec 06 06:33:31 compute-1 sudo[88330]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Dec 06 06:33:31 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 110 pg[9.12( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=110) [1]/[0] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:31 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 110 pg[9.12( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=110) [1]/[0] r=-1 lpr=110 pi=[62,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:31.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:32 compute-1 sudo[88617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rctsdajopylfrwdrluesnfsdctxcqobl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002811.8811185-399-222222567783420/AnsiballZ_selinux.py'
Dec 06 06:33:32 compute-1 sudo[88617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:32 compute-1 python3.9[88619]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 06 06:33:32 compute-1 sudo[88617]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:32.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:33 compute-1 ceph-mon[81689]: 10.d scrub starts
Dec 06 06:33:33 compute-1 ceph-mon[81689]: 10.d scrub ok
Dec 06 06:33:33 compute-1 ceph-mon[81689]: osdmap e110: 3 total, 3 up, 3 in
Dec 06 06:33:33 compute-1 ceph-mon[81689]: pgmap v323: 305 pgs: 1 unknown, 2 remapped+peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:33:33 compute-1 sudo[88769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-behkoctkaqjjidmakodeojbfugwolmgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002813.232315-432-174121109615747/AnsiballZ_command.py'
Dec 06 06:33:33 compute-1 sudo[88769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:33.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:34.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:35 compute-1 python3.9[88771]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 06 06:33:35 compute-1 sudo[88769]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:35 compute-1 sudo[88921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oziutjextfdykduckvddklvaymfzgvew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002815.2447145-456-58544177486618/AnsiballZ_file.py'
Dec 06 06:33:35 compute-1 sudo[88921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:35 compute-1 python3.9[88923]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:33:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Dec 06 06:33:35 compute-1 sudo[88921]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:35 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 111 pg[9.10( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=2 ec=62/51 lis/c=109/62 les/c/f=110/63/0 sis=111) [1] r=0 lpr=111 pi=[62,111)/2 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:35 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 111 pg[9.10( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=2 ec=62/51 lis/c=109/62 les/c/f=110/63/0 sis=111) [1] r=0 lpr=111 pi=[62,111)/2 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:36 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 06 06:33:36 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 06 06:33:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:36.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:37 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 06 06:33:37 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 06 06:33:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:37.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:38.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:39 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Dec 06 06:33:39 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Dec 06 06:33:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:39.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:40.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:41 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:33:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:41.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:42 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 06 06:33:42 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 06 06:33:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:43.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:43.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:45.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:45 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:33:45 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:33:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:45.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:47.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:47 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 06 06:33:47 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 06 06:33:47 compute-1 sudo[89073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foztvwcxhbexjeslkqigeojcwbqvxacs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002827.3255424-480-53349711540615/AnsiballZ_mount.py'
Dec 06 06:33:47 compute-1 sudo[89073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:33:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:47.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:33:48 compute-1 python3.9[89075]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 06 06:33:48 compute-1 sudo[89073]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:48 compute-1 ceph-mon[81689]: pgmap v324: 305 pgs: 1 unknown, 2 remapped+peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:33:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:33:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:49.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:49 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 06 06:33:49 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 06 06:33:49 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:33:49 compute-1 sudo[89225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zljxmygpuvaouvriwcydjhunxbuffcdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002829.2517736-564-139780462492241/AnsiballZ_file.py'
Dec 06 06:33:49 compute-1 sudo[89225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:49 compute-1 python3.9[89227]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:33:49 compute-1 sudo[89225]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:49.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:50 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Dec 06 06:33:50 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 06 06:33:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:33:50.189441) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:33:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 06 06:33:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830189580, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6939, "num_deletes": 259, "total_data_size": 13483634, "memory_usage": 13694768, "flush_reason": "Manual Compaction"}
Dec 06 06:33:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 06 06:33:50 compute-1 sudo[89378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iopllflrehekrwkqbfgocmvwhglurltc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002830.053202-589-22006028517764/AnsiballZ_stat.py'
Dec 06 06:33:50 compute-1 sudo[89378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:50 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Dec 06 06:33:50 compute-1 python3.9[89380]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:33:50 compute-1 sudo[89378]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830713037, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 8546369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 257, "largest_seqno": 6944, "table_properties": {"data_size": 8516363, "index_size": 19651, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88358, "raw_average_key_size": 23, "raw_value_size": 8444413, "raw_average_value_size": 2274, "num_data_blocks": 867, "num_entries": 3712, "num_filter_entries": 3712, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002570, "oldest_key_time": 1765002570, "file_creation_time": 1765002830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:33:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 523642 microseconds, and 20990 cpu microseconds.
Dec 06 06:33:50 compute-1 sudo[89456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxaydnbokiunemvorclarpherkirdmcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002830.053202-589-22006028517764/AnsiballZ_file.py'
Dec 06 06:33:50 compute-1 sudo[89456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:51.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:51 compute-1 python3.9[89458]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:33:51 compute-1 sudo[89456]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:51 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:33:50.713091) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 8546369 bytes OK
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:33:50.713286) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:33:51.103335) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:33:51.103378) EVENT_LOG_v1 {"time_micros": 1765002831103370, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:33:51.103397) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13443706, prev total WAL file size 13488101, number of live WAL files 2.
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:33:51.106094) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(8346KB) 8(1648B)]
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002831106228, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 8548017, "oldest_snapshot_seqno": -1}
Dec 06 06:33:51 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3460 keys, 8542887 bytes, temperature: kUnknown
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002831906226, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 8542887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8513499, "index_size": 19614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8709, "raw_key_size": 84156, "raw_average_key_size": 24, "raw_value_size": 8444693, "raw_average_value_size": 2440, "num_data_blocks": 866, "num_entries": 3460, "num_filter_entries": 3460, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765002831, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:33:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Dec 06 06:33:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:51.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:33:51.906484) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 8542887 bytes
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:33:51.971572) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 10.7 rd, 10.7 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(8.2, 0.0 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3717, records dropped: 257 output_compression: NoCompression
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:33:51.971620) EVENT_LOG_v1 {"time_micros": 1765002831971603, "job": 4, "event": "compaction_finished", "compaction_time_micros": 800056, "compaction_time_cpu_micros": 19262, "output_level": 6, "num_output_files": 1, "total_output_size": 8542887, "num_input_records": 3717, "num_output_records": 3460, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 8.6 scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 8.6 scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 8.5 scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 8.5 scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.19 scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.19 scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 8.1f scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: osdmap e111: 3 total, 3 up, 3 in
Dec 06 06:33:51 compute-1 ceph-mon[81689]: pgmap v326: 305 pgs: 1 active+remapped, 1 activating+remapped, 1 remapped+peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4/212 objects misplaced (1.887%); 18 B/s, 0 objects/s recovering
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 9.1c scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.e scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 9.1c scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.e scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.16 scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.16 scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 8.1c deep-scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: pgmap v327: 305 pgs: 1 active+clean+scrubbing, 1 peering, 1 active+remapped, 1 activating+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4/212 objects misplaced (1.887%); 14 B/s, 1 objects/s recovering
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.12 scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.17 deep-scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.17 deep-scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:33:51 compute-1 ceph-mon[81689]: pgmap v328: 305 pgs: 1 active+clean+scrubbing, 1 peering, 1 active+remapped, 1 activating+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4/212 objects misplaced (1.887%); 12 B/s, 0 objects/s recovering
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002831973795, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.2 scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.2 scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.6 deep-scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.6 deep-scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:33:51 compute-1 ceph-mon[81689]: pgmap v329: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 1 objects/s recovering
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002831973922, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.1a scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.1a scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.9 scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.9 scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: pgmap v330: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 1 objects/s recovering
Dec 06 06:33:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:33:51.105981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.b scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.b scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:33:51 compute-1 ceph-mon[81689]: pgmap v331: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 20 B/s, 1 objects/s recovering
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.c deep-scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.c deep-scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.1c scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.1c scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 8.1f scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 10.12 scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 8.1c deep-scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:33:51 compute-1 ceph-mon[81689]: pgmap v332: 305 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 1 peering, 2 active+remapped, 299 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.d scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.d scrub ok
Dec 06 06:33:51 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:33:51 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:33:51 compute-1 ceph-mon[81689]: osdmap e111: 3 total, 3 up, 3 in
Dec 06 06:33:51 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 7m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:33:51 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.10 scrub starts
Dec 06 06:33:51 compute-1 ceph-mon[81689]: 11.10 scrub ok
Dec 06 06:33:52 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 112 pg[9.11( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=109/62 les/c/f=110/63/0 sis=112) [1] r=0 lpr=112 pi=[62,112)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:52 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 112 pg[9.11( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=109/62 les/c/f=110/63/0 sis=112) [1] r=0 lpr=112 pi=[62,112)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:52 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 112 pg[9.12( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=4 ec=62/51 lis/c=110/62 les/c/f=111/63/0 sis=112) [1] r=0 lpr=112 pi=[62,112)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:52 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 112 pg[9.12( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=4 ec=62/51 lis/c=110/62 les/c/f=111/63/0 sis=112) [1] r=0 lpr=112 pi=[62,112)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:52 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 06 06:33:52 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 06 06:33:52 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 112 pg[9.10( v 58'1159 (0'0,58'1159] local-lis/les=111/112 n=2 ec=62/51 lis/c=109/62 les/c/f=110/63/0 sis=111) [1] r=0 lpr=111 pi=[62,111)/2 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:52 compute-1 sudo[89609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzoqcmpeykebycqffvklwvlrhbtndxkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002832.2434163-652-222595928349497/AnsiballZ_stat.py'
Dec 06 06:33:52 compute-1 sudo[89609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Dec 06 06:33:52 compute-1 ceph-mon[81689]: 10.1d scrub starts
Dec 06 06:33:52 compute-1 ceph-mon[81689]: 10.1d scrub ok
Dec 06 06:33:52 compute-1 ceph-mon[81689]: 10.1f deep-scrub starts
Dec 06 06:33:52 compute-1 ceph-mon[81689]: pgmap v333: 305 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 1 peering, 2 active+remapped, 299 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Dec 06 06:33:52 compute-1 ceph-mon[81689]: 11.11 deep-scrub starts
Dec 06 06:33:52 compute-1 ceph-mon[81689]: 11.11 deep-scrub ok
Dec 06 06:33:52 compute-1 ceph-mon[81689]: 10.1f deep-scrub ok
Dec 06 06:33:52 compute-1 ceph-mon[81689]: 11.14 scrub starts
Dec 06 06:33:52 compute-1 ceph-mon[81689]: osdmap e112: 3 total, 3 up, 3 in
Dec 06 06:33:52 compute-1 ceph-mon[81689]: 11.14 scrub ok
Dec 06 06:33:52 compute-1 ceph-mon[81689]: 10.4 scrub starts
Dec 06 06:33:52 compute-1 ceph-mon[81689]: 10.4 scrub ok
Dec 06 06:33:52 compute-1 python3.9[89611]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:33:52 compute-1 sudo[89609]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:53 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Dec 06 06:33:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:33:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:53.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:33:53 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 113 pg[9.11( v 58'1159 (0'0,58'1159] local-lis/les=112/113 n=5 ec=62/51 lis/c=109/62 les/c/f=110/63/0 sis=112) [1] r=0 lpr=112 pi=[62,112)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:53 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Dec 06 06:33:53 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 113 pg[9.12( v 58'1159 (0'0,58'1159] local-lis/les=112/113 n=4 ec=62/51 lis/c=110/62 les/c/f=111/63/0 sis=112) [1] r=0 lpr=112 pi=[62,112)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:53.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:54 compute-1 sudo[89763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihutibjeflivastthnyoxtnauxbgslow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002833.6138322-690-208999812116618/AnsiballZ_getent.py'
Dec 06 06:33:54 compute-1 sudo[89763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:54 compute-1 python3.9[89765]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 06 06:33:54 compute-1 sudo[89763]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:54 compute-1 ceph-mon[81689]: 8.12 scrub starts
Dec 06 06:33:54 compute-1 ceph-mon[81689]: 8.12 scrub ok
Dec 06 06:33:54 compute-1 ceph-mon[81689]: pgmap v335: 305 pgs: 3 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 1 peering, 2 active+remapped, 298 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:54 compute-1 ceph-mon[81689]: osdmap e113: 3 total, 3 up, 3 in
Dec 06 06:33:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:55.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:55 compute-1 sudo[89916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvuwchcrsyhozcravlfwemqgzrjdcsor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002834.714515-720-261740829438006/AnsiballZ_getent.py'
Dec 06 06:33:55 compute-1 sudo[89916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:55 compute-1 python3.9[89918]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 06 06:33:55 compute-1 sudo[89916]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:55 compute-1 ceph-mon[81689]: 8.17 deep-scrub starts
Dec 06 06:33:55 compute-1 ceph-mon[81689]: 8.17 deep-scrub ok
Dec 06 06:33:55 compute-1 ceph-mon[81689]: pgmap v337: 305 pgs: 1 active+clean+scrubbing, 1 peering, 2 active+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:55 compute-1 ceph-mon[81689]: 11.15 scrub starts
Dec 06 06:33:55 compute-1 ceph-mon[81689]: 11.15 scrub ok
Dec 06 06:33:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:55.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:56 compute-1 sudo[90069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuyavympmfwykseztavhfxwyidxjkfvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002835.5221524-744-159351222997989/AnsiballZ_group.py'
Dec 06 06:33:56 compute-1 sudo[90069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:56 compute-1 python3.9[90071]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 06:33:56 compute-1 sudo[90069]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:56 compute-1 sudo[90221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocjvrclwmghjcerhezpskovrgfincqrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002836.6644592-771-241205760880504/AnsiballZ_file.py'
Dec 06 06:33:56 compute-1 sudo[90221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:57.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:57 compute-1 python3.9[90223]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 06 06:33:57 compute-1 sudo[90221]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:57 compute-1 ceph-mon[81689]: 10.1e scrub starts
Dec 06 06:33:57 compute-1 ceph-mon[81689]: 10.1e scrub ok
Dec 06 06:33:57 compute-1 ceph-mon[81689]: pgmap v338: 305 pgs: 1 active+clean+scrubbing, 1 peering, 2 active+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:57 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 06 06:33:57 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 06 06:33:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:57.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:58 compute-1 sudo[90373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fptpyjpdjyzcdyydkzyjfhsyyxljjtbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002838.233067-804-122366513101659/AnsiballZ_dnf.py'
Dec 06 06:33:58 compute-1 sudo[90373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:58 compute-1 python3.9[90375]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:33:58 compute-1 ceph-mon[81689]: 11.18 scrub starts
Dec 06 06:33:58 compute-1 ceph-mon[81689]: 11.18 scrub ok
Dec 06 06:33:58 compute-1 ceph-mon[81689]: 8.14 scrub starts
Dec 06 06:33:58 compute-1 ceph-mon[81689]: 8.14 scrub ok
Dec 06 06:33:58 compute-1 ceph-mon[81689]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 06 06:33:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:33:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:59.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:33:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Dec 06 06:33:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:33:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:59.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:00 compute-1 sudo[90373]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 06 06:34:00 compute-1 ceph-mon[81689]: osdmap e114: 3 total, 3 up, 3 in
Dec 06 06:34:00 compute-1 ceph-mon[81689]: 11.1f deep-scrub starts
Dec 06 06:34:00 compute-1 ceph-mon[81689]: 11.1f deep-scrub ok
Dec 06 06:34:00 compute-1 sudo[90527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqxbexqklskfayfyegeyurhirajwbdoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002840.383576-828-66558906458902/AnsiballZ_file.py'
Dec 06 06:34:00 compute-1 sudo[90527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:00 compute-1 python3.9[90529]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:34:00 compute-1 sudo[90527]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:00 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 06 06:34:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Dec 06 06:34:00 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 06 06:34:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:01.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:01 compute-1 sudo[90679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qolvpbmivtnygzwjststubyeqqsgnfop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002841.021125-852-91753959631201/AnsiballZ_stat.py'
Dec 06 06:34:01 compute-1 sudo[90679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:01 compute-1 python3.9[90681]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:34:01 compute-1 sudo[90679]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:01 compute-1 ceph-mon[81689]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 06 06:34:01 compute-1 ceph-mon[81689]: 10.5 scrub starts
Dec 06 06:34:01 compute-1 ceph-mon[81689]: 10.5 scrub ok
Dec 06 06:34:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 06 06:34:01 compute-1 ceph-mon[81689]: osdmap e115: 3 total, 3 up, 3 in
Dec 06 06:34:01 compute-1 ceph-mon[81689]: 11.12 scrub starts
Dec 06 06:34:01 compute-1 ceph-mon[81689]: 11.12 scrub ok
Dec 06 06:34:01 compute-1 sudo[90757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmqnqzxuavogrdrcrhxntjuxiidpbhrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002841.021125-852-91753959631201/AnsiballZ_file.py'
Dec 06 06:34:01 compute-1 sudo[90757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:01.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:02 compute-1 python3.9[90759]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:34:02 compute-1 sudo[90757]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:02 compute-1 sudo[90909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muzckwonjpnezvnlkbkuagzjlblkzxbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002842.2891304-891-112228598361910/AnsiballZ_stat.py'
Dec 06 06:34:02 compute-1 sudo[90909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:02 compute-1 python3.9[90911]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:34:02 compute-1 sudo[90909]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:03 compute-1 sudo[90987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkaskvgwetwmrftdegzysnorkmqvvikx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002842.2891304-891-112228598361910/AnsiballZ_file.py'
Dec 06 06:34:03 compute-1 sudo[90987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:03.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:03 compute-1 python3.9[90989]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:34:03 compute-1 sudo[90987]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:03.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:04 compute-1 sudo[91139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mesohxuqvcrehpvldnenbinzeysmzvyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002843.8010914-936-48509006166180/AnsiballZ_dnf.py'
Dec 06 06:34:04 compute-1 sudo[91139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:04 compute-1 python3.9[91141]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:34:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:05.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:05.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:07 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Dec 06 06:34:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:34:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:07.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:34:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:34:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:07.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:34:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:09.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:09 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:34:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:09.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:11.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:11 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 06 06:34:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:11.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:12 compute-1 sudo[91144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:12 compute-1 sudo[91144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:12 compute-1 sudo[91144]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:12 compute-1 sudo[91169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:34:12 compute-1 sudo[91169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:12 compute-1 sudo[91169]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:12 compute-1 sudo[91194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:12 compute-1 sudo[91194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:12 compute-1 sudo[91194]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:12 compute-1 sudo[91219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:34:12 compute-1 sudo[91219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:34:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:13.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:34:13 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:34:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:15.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:34:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:34:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 14.875265121s
Dec 06 06:34:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 14.875266075s
Dec 06 06:34:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.875607491s, txc = 0x55b554ed4300
Dec 06 06:34:16 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 06 06:34:16 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Dec 06 06:34:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:17.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:17 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:34:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:17.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:18 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:34:18 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.550511360s, txc = 0x55b5547dd200
Dec 06 06:34:18 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.487549782s, txc = 0x55b554814300
Dec 06 06:34:18 compute-1 sudo[91139]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:18 compute-1 podman[91316]: 2025-12-06 06:34:18.764948397 +0000 UTC m=+6.122210411 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:34:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 116 pg[9.15( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=116) [1] r=0 lpr=116 pi=[80,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 117 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=73/74 n=5 ec=62/51 lis/c=73/73 les/c/f=74/74/0 sis=117 pruub=15.051497459s) [0] r=-1 lpr=117 pi=[73,117)/1 crt=58'1159 mlcod 0'0 active pruub 371.661468506s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 117 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=73/74 n=5 ec=62/51 lis/c=73/73 les/c/f=74/74/0 sis=117 pruub=15.051445961s) [0] r=-1 lpr=117 pi=[73,117)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 371.661468506s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:19.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:19 compute-1 podman[91316]: 2025-12-06 06:34:19.287724689 +0000 UTC m=+6.644986673 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 06:34:19 compute-1 python3.9[91496]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:34:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 118 pg[9.15( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[80,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 118 pg[9.15( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=80/80 les/c/f=81/81/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[80,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 118 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=73/74 n=5 ec=62/51 lis/c=73/73 les/c/f=74/74/0 sis=118) [0]/[1] r=0 lpr=118 pi=[73,118)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 118 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=73/74 n=5 ec=62/51 lis/c=73/73 les/c/f=74/74/0 sis=118) [0]/[1] r=0 lpr=118 pi=[73,118)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 118 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=82/83 n=5 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=118 pruub=10.082120895s) [2] r=-1 lpr=118 pi=[82,118)/1 crt=58'1159 mlcod 0'0 active pruub 367.562652588s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:19 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 118 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=82/83 n=5 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=118 pruub=10.082078934s) [2] r=-1 lpr=118 pi=[82,118)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 367.562652588s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:19.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:20 compute-1 python3.9[91648]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 06 06:34:20 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 06 06:34:20 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 119 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=82/83 n=5 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=119) [2]/[1] r=0 lpr=119 pi=[82,119)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:20 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 119 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=82/83 n=5 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=119) [2]/[1] r=0 lpr=119 pi=[82,119)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:20 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 06 06:34:20 compute-1 sudo[91219]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:20 compute-1 sudo[91860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:20 compute-1 sudo[91860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:20 compute-1 sudo[91860]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:20 compute-1 sudo[91912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:34:20 compute-1 sudo[91912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:20 compute-1 sudo[91912]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:21 compute-1 sudo[91937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:21 compute-1 sudo[91937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:21 compute-1 sudo[91937]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:21.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:21 compute-1 sudo[91962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:34:21 compute-1 sudo[91962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:21 compute-1 python3.9[91911]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:34:21 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 119 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=118/119 n=5 ec=62/51 lis/c=73/73 les/c/f=74/74/0 sis=118) [0]/[1] async=[0] r=0 lpr=118 pi=[73,118)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:34:21 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:34:21 compute-1 sudo[91962]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:21 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 120 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=118/119 n=5 ec=62/51 lis/c=118/73 les/c/f=119/74/0 sis=120 pruub=15.615807533s) [0] async=[0] r=-1 lpr=120 pi=[73,120)/1 crt=58'1159 mlcod 58'1159 active pruub 374.784637451s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:21 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 120 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=118/119 n=5 ec=62/51 lis/c=118/73 les/c/f=119/74/0 sis=120 pruub=15.615746498s) [0] r=-1 lpr=120 pi=[73,120)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 374.784637451s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:21.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:22 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 120 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=119/120 n=5 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=119) [2]/[1] async=[2] r=0 lpr=119 pi=[82,119)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:34:22 compute-1 sudo[92167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyvvcjlybxqnrdnugbcxeejcfsxswbkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002861.7546847-1059-93080409351786/AnsiballZ_systemd.py'
Dec 06 06:34:22 compute-1 sudo[92167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:22 compute-1 python3.9[92169]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:34:22 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 06 06:34:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:34:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:23.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:34:23 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Dec 06 06:34:23 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 06 06:34:23 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 06 06:34:23 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 06 06:34:23 compute-1 sudo[92167]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:23 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 122 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=119/120 n=5 ec=62/51 lis/c=119/82 les/c/f=120/83/0 sis=122 pruub=14.525171280s) [2] async=[2] r=-1 lpr=122 pi=[82,122)/1 crt=58'1159 mlcod 58'1159 active pruub 375.818572998s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:23 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 122 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=118/80 les/c/f=119/81/0 sis=122) [1] r=0 lpr=122 pi=[80,122)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:23 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 122 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=118/80 les/c/f=119/81/0 sis=122) [1] r=0 lpr=122 pi=[80,122)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:23 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 122 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=119/120 n=5 ec=62/51 lis/c=119/82 les/c/f=120/83/0 sis=122 pruub=14.525073051s) [2] r=-1 lpr=122 pi=[82,122)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 375.818572998s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:23.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:24 compute-1 python3.9[92332]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 06 06:34:24 compute-1 ceph-mon[81689]: paxos.2).electionLogic(19) init, last seen epoch 19, mid-election, bumping
Dec 06 06:34:24 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:34:24 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:25.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.13 scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.13 scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.f scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.f scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.3 scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.3 scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 8.8 deep-scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.11 scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.11 scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.18 deep-scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.18 deep-scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.19 scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.19 scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 8.1b scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:34:25 compute-1 ceph-mon[81689]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.14"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.1b scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.1b scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.15 scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.15 scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.14 scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.14 scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 8.1b scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 8.8 deep-scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.1 scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.1 scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:34:25 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:34:25 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:34:25 compute-1 ceph-mon[81689]: osdmap e116: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 8m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:34:25 compute-1 ceph-mon[81689]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:34:25 compute-1 ceph-mon[81689]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:34:25 compute-1 ceph-mon[81689]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:34:25 compute-1 ceph-mon[81689]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.10 scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 10.10 scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.14"}]': finished
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:25 compute-1 ceph-mon[81689]: osdmap e117: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: pgmap v353: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 06 06:34:25 compute-1 ceph-mon[81689]: osdmap e118: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: pgmap v355: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 06 06:34:25 compute-1 ceph-mon[81689]: osdmap e119: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 11.f scrub starts
Dec 06 06:34:25 compute-1 ceph-mon[81689]: 11.f scrub ok
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:25 compute-1 ceph-mon[81689]: osdmap e120: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:34:25 compute-1 ceph-mon[81689]: pgmap v358: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:34:25 compute-1 ceph-mon[81689]: osdmap e121: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 06 06:34:25 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 06 06:34:25 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:34:25 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:34:25 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:34:25 compute-1 ceph-mon[81689]: osdmap e122: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 8m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:34:25 compute-1 ceph-mon[81689]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:34:25 compute-1 ceph-mon[81689]: Cluster is now healthy
Dec 06 06:34:25 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 123 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=122/123 n=5 ec=62/51 lis/c=118/80 les/c/f=119/81/0 sis=122) [1] r=0 lpr=122 pi=[80,122)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:34:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:34:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:26.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:34:26 compute-1 ceph-mon[81689]: 11.1d scrub starts
Dec 06 06:34:26 compute-1 ceph-mon[81689]: 11.1d scrub ok
Dec 06 06:34:26 compute-1 ceph-mon[81689]: 9.14 scrub starts
Dec 06 06:34:26 compute-1 ceph-mon[81689]: 9.14 scrub ok
Dec 06 06:34:26 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 06:34:26 compute-1 ceph-mon[81689]: osdmap e123: 3 total, 3 up, 3 in
Dec 06 06:34:26 compute-1 ceph-mon[81689]: 9.1b scrub starts
Dec 06 06:34:26 compute-1 ceph-mon[81689]: 9.1b scrub ok
Dec 06 06:34:26 compute-1 ceph-mon[81689]: pgmap v363: 305 pgs: 1 active+recovering+remapped, 1 remapped+peering, 303 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 1/216 objects misplaced (0.463%); 22 B/s, 0 objects/s recovering
Dec 06 06:34:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:34:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:27.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:34:27 compute-1 sudo[92482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjnprkrdatlhotzamdkgaxroyjsgartn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002867.2475016-1230-222558239512067/AnsiballZ_systemd.py'
Dec 06 06:34:27 compute-1 sudo[92482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:27 compute-1 sudo[92485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:27 compute-1 sudo[92485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:27 compute-1 sudo[92485]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:27 compute-1 sudo[92510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:34:27 compute-1 sudo[92510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:27 compute-1 sudo[92510]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:27 compute-1 python3.9[92484]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:34:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:28.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:28 compute-1 sudo[92482]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:28 compute-1 sudo[92686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsqhchwxkkhzkltyorbfqpsljytbzaqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002868.1296234-1230-199740552368264/AnsiballZ_systemd.py'
Dec 06 06:34:28 compute-1 sudo[92686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:28 compute-1 python3.9[92688]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:34:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:34:28 compute-1 sudo[92686]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 06 06:34:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:29.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Dec 06 06:34:29 compute-1 sshd-session[83785]: Connection closed by 192.168.122.30 port 55614
Dec 06 06:34:29 compute-1 sshd-session[83782]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:34:29 compute-1 systemd[1]: session-34.scope: Deactivated successfully.
Dec 06 06:34:29 compute-1 systemd[1]: session-34.scope: Consumed 1min 5.763s CPU time.
Dec 06 06:34:29 compute-1 systemd-logind[788]: Session 34 logged out. Waiting for processes to exit.
Dec 06 06:34:29 compute-1 systemd-logind[788]: Removed session 34.
Dec 06 06:34:29 compute-1 ceph-mon[81689]: pgmap v364: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Dec 06 06:34:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 06 06:34:29 compute-1 ceph-mon[81689]: osdmap e124: 3 total, 3 up, 3 in
Dec 06 06:34:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:30.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Dec 06 06:34:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:31 compute-1 ceph-mon[81689]: pgmap v366: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 06 06:34:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:31.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:31 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Dec 06 06:34:31 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Dec 06 06:34:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:32.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Dec 06 06:34:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 06 06:34:32 compute-1 ceph-mon[81689]: osdmap e125: 3 total, 3 up, 3 in
Dec 06 06:34:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:33.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:33 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 06 06:34:33 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 06 06:34:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:34:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:34 compute-1 ceph-mon[81689]: 8.18 deep-scrub starts
Dec 06 06:34:34 compute-1 ceph-mon[81689]: 8.18 deep-scrub ok
Dec 06 06:34:34 compute-1 ceph-mon[81689]: pgmap v368: 305 pgs: 1 unknown, 304 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:34 compute-1 ceph-mon[81689]: osdmap e126: 3 total, 3 up, 3 in
Dec 06 06:34:34 compute-1 ceph-mon[81689]: 9.3 scrub starts
Dec 06 06:34:34 compute-1 ceph-mon[81689]: 9.3 scrub ok
Dec 06 06:34:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Dec 06 06:34:34 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 06 06:34:34 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 06 06:34:34 compute-1 sshd-session[92715]: Accepted publickey for zuul from 192.168.122.30 port 59962 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:34:34 compute-1 systemd-logind[788]: New session 35 of user zuul.
Dec 06 06:34:34 compute-1 systemd[1]: Started Session 35 of User zuul.
Dec 06 06:34:34 compute-1 sshd-session[92715]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:34:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:35.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:35 compute-1 ceph-mon[81689]: 11.1e scrub starts
Dec 06 06:34:35 compute-1 ceph-mon[81689]: 11.1e scrub ok
Dec 06 06:34:35 compute-1 ceph-mon[81689]: osdmap e127: 3 total, 3 up, 3 in
Dec 06 06:34:35 compute-1 ceph-mon[81689]: pgmap v371: 305 pgs: 1 unknown, 304 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:35 compute-1 ceph-mon[81689]: 11.1c scrub starts
Dec 06 06:34:35 compute-1 ceph-mon[81689]: 11.1c scrub ok
Dec 06 06:34:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Dec 06 06:34:35 compute-1 python3.9[92868]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:34:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:36 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 06 06:34:36 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 06 06:34:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Dec 06 06:34:36 compute-1 ceph-mon[81689]: osdmap e128: 3 total, 3 up, 3 in
Dec 06 06:34:36 compute-1 ceph-mon[81689]: 9.13 scrub starts
Dec 06 06:34:36 compute-1 ceph-mon[81689]: 9.13 scrub ok
Dec 06 06:34:36 compute-1 ceph-mon[81689]: pgmap v373: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:36 compute-1 sudo[93022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aldszijnofxaahyscyzkyrhkyjaaxuwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002876.3181098-73-50158389964327/AnsiballZ_getent.py'
Dec 06 06:34:36 compute-1 sudo[93022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:36 compute-1 python3.9[93024]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 06 06:34:36 compute-1 sudo[93022]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:37.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:37 compute-1 sudo[93175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzrheazxanfygitxvxylustjrilrqkzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002877.3293295-109-56790915013868/AnsiballZ_setup.py'
Dec 06 06:34:37 compute-1 sudo[93175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:37 compute-1 ceph-mon[81689]: 11.7 scrub starts
Dec 06 06:34:37 compute-1 ceph-mon[81689]: 11.7 scrub ok
Dec 06 06:34:37 compute-1 ceph-mon[81689]: osdmap e129: 3 total, 3 up, 3 in
Dec 06 06:34:37 compute-1 ceph-mon[81689]: 9.19 scrub starts
Dec 06 06:34:37 compute-1 python3.9[93177]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:34:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:38.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:38 compute-1 sudo[93175]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:38 compute-1 sudo[93259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpmfldjnldhakwllylyzewldylvskhjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002877.3293295-109-56790915013868/AnsiballZ_dnf.py'
Dec 06 06:34:38 compute-1 sudo[93259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:38 compute-1 ceph-mon[81689]: 9.19 scrub ok
Dec 06 06:34:38 compute-1 ceph-mon[81689]: 9.b deep-scrub starts
Dec 06 06:34:38 compute-1 ceph-mon[81689]: 9.b deep-scrub ok
Dec 06 06:34:38 compute-1 ceph-mon[81689]: pgmap v375: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:34:38 compute-1 python3.9[93261]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 06:34:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:34:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:39.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:34:39 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 06 06:34:39 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 06 06:34:39 compute-1 ceph-mon[81689]: 9.7 scrub starts
Dec 06 06:34:39 compute-1 ceph-mon[81689]: 9.7 scrub ok
Dec 06 06:34:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:40.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:40 compute-1 sudo[93259]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:40 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Dec 06 06:34:40 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Dec 06 06:34:40 compute-1 sudo[93412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzxchhbvjjugeokosliyubrcfvzwrsif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002880.645677-151-209804775450379/AnsiballZ_dnf.py'
Dec 06 06:34:40 compute-1 sudo[93412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:40 compute-1 ceph-mon[81689]: 11.1b scrub starts
Dec 06 06:34:40 compute-1 ceph-mon[81689]: 11.1b scrub ok
Dec 06 06:34:40 compute-1 ceph-mon[81689]: pgmap v376: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:34:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:34:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:41.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:34:41 compute-1 python3.9[93414]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:34:41 compute-1 ceph-mon[81689]: 8.4 deep-scrub starts
Dec 06 06:34:41 compute-1 ceph-mon[81689]: 8.4 deep-scrub ok
Dec 06 06:34:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:34:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:42.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:34:42 compute-1 sudo[93412]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:42 compute-1 ceph-mon[81689]: pgmap v377: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:42 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 06 06:34:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Dec 06 06:34:42 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 130 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=94/95 n=5 ec=62/51 lis/c=94/94 les/c/f=95/95/0 sis=130 pruub=11.979516029s) [0] r=-1 lpr=130 pi=[94,130)/1 crt=58'1159 mlcod 0'0 active pruub 392.521667480s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:42 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 130 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=94/95 n=5 ec=62/51 lis/c=94/94 les/c/f=95/95/0 sis=130 pruub=11.978986740s) [0] r=-1 lpr=130 pi=[94,130)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 392.521667480s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:34:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:43.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:34:43 compute-1 sudo[93565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmsmwmauamlhchjxrhkmdqdaqcifkhiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002882.7821286-175-175137483032769/AnsiballZ_systemd.py'
Dec 06 06:34:43 compute-1 sudo[93565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Dec 06 06:34:43 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 131 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=94/95 n=5 ec=62/51 lis/c=94/94 les/c/f=95/95/0 sis=131) [0]/[1] r=0 lpr=131 pi=[94,131)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:43 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 131 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=94/95 n=5 ec=62/51 lis/c=94/94 les/c/f=95/95/0 sis=131) [0]/[1] r=0 lpr=131 pi=[94,131)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:43 compute-1 python3.9[93567]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:34:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:34:43 compute-1 sudo[93565]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 06 06:34:43 compute-1 ceph-mon[81689]: osdmap e130: 3 total, 3 up, 3 in
Dec 06 06:34:43 compute-1 ceph-mon[81689]: osdmap e131: 3 total, 3 up, 3 in
Dec 06 06:34:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:44.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Dec 06 06:34:44 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 132 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=131/132 n=5 ec=62/51 lis/c=94/94 les/c/f=95/95/0 sis=131) [0]/[1] async=[0] r=0 lpr=131 pi=[94,131)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:34:45 compute-1 ceph-mon[81689]: 9.5 scrub starts
Dec 06 06:34:45 compute-1 ceph-mon[81689]: 9.5 scrub ok
Dec 06 06:34:45 compute-1 ceph-mon[81689]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 06 06:34:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 06 06:34:45 compute-1 ceph-mon[81689]: osdmap e132: 3 total, 3 up, 3 in
Dec 06 06:34:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:34:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:45.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:34:45 compute-1 python3.9[93720]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:34:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Dec 06 06:34:45 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 133 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=131/132 n=5 ec=62/51 lis/c=131/94 les/c/f=132/95/0 sis=133 pruub=14.996418953s) [0] async=[0] r=-1 lpr=133 pi=[94,133)/1 crt=58'1159 mlcod 58'1159 active pruub 398.267364502s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:45 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 133 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=131/132 n=5 ec=62/51 lis/c=131/94 les/c/f=132/95/0 sis=133 pruub=14.996079445s) [0] r=-1 lpr=133 pi=[94,133)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 398.267364502s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:45 compute-1 sudo[93870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqlkwbeledbrjshvsgzwlldjpraveywu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002885.4208937-229-241971803497674/AnsiballZ_sefcontext.py'
Dec 06 06:34:45 compute-1 sudo[93870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:46.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:46 compute-1 python3.9[93872]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 06 06:34:46 compute-1 sudo[93870]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:47.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:47 compute-1 python3.9[94022]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:34:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Dec 06 06:34:47 compute-1 ceph-mon[81689]: osdmap e133: 3 total, 3 up, 3 in
Dec 06 06:34:47 compute-1 ceph-mon[81689]: pgmap v383: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:48.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:34:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Dec 06 06:34:48 compute-1 ceph-mon[81689]: osdmap e134: 3 total, 3 up, 3 in
Dec 06 06:34:48 compute-1 ceph-mon[81689]: pgmap v385: 305 pgs: 1 activating+remapped, 1 peering, 303 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2/212 objects misplaced (0.943%)
Dec 06 06:34:48 compute-1 ceph-mon[81689]: osdmap e135: 3 total, 3 up, 3 in
Dec 06 06:34:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:49.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:49 compute-1 sudo[94178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wczjeyfprqjrpxrieqynlqwsrhjbrdlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002888.8625505-283-77431685571262/AnsiballZ_dnf.py'
Dec 06 06:34:49 compute-1 sudo[94178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:49 compute-1 python3.9[94180]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:34:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Dec 06 06:34:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:50.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:50 compute-1 ceph-mon[81689]: 9.1a scrub starts
Dec 06 06:34:50 compute-1 ceph-mon[81689]: 9.1a scrub ok
Dec 06 06:34:50 compute-1 ceph-mon[81689]: 9.8 scrub starts
Dec 06 06:34:50 compute-1 ceph-mon[81689]: 9.8 scrub ok
Dec 06 06:34:50 compute-1 ceph-mon[81689]: osdmap e136: 3 total, 3 up, 3 in
Dec 06 06:34:50 compute-1 sudo[94178]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:51.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:51 compute-1 ceph-mon[81689]: pgmap v388: 305 pgs: 1 activating+remapped, 1 peering, 303 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2/212 objects misplaced (0.943%)
Dec 06 06:34:51 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 06 06:34:51 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 06 06:34:51 compute-1 sudo[94331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jckaozvppjknrttwjxlpyvilrkvuzrcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002891.2263014-307-130645155685084/AnsiballZ_command.py'
Dec 06 06:34:51 compute-1 sudo[94331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:34:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:52.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:34:52 compute-1 python3.9[94333]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:34:52 compute-1 sudo[94331]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:53.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:53 compute-1 ceph-mon[81689]: 9.18 scrub starts
Dec 06 06:34:53 compute-1 ceph-mon[81689]: 9.18 scrub ok
Dec 06 06:34:53 compute-1 ceph-mon[81689]: 11.5 scrub starts
Dec 06 06:34:53 compute-1 ceph-mon[81689]: 11.5 scrub ok
Dec 06 06:34:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Dec 06 06:34:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:34:53 compute-1 sudo[94618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gofqdovilwzvhogutzpxbzvmpnxfrykw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002893.1866932-331-44003748533159/AnsiballZ_file.py'
Dec 06 06:34:53 compute-1 sudo[94618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:54 compute-1 python3.9[94620]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 06:34:54 compute-1 sudo[94618]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:54.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:54 compute-1 ceph-mon[81689]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 06 06:34:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 06 06:34:54 compute-1 ceph-mon[81689]: osdmap e137: 3 total, 3 up, 3 in
Dec 06 06:34:55 compute-1 python3.9[94770]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:34:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:34:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:55.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:34:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Dec 06 06:34:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 138 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=100/101 n=5 ec=62/51 lis/c=100/100 les/c/f=101/101/0 sis=138 pruub=10.665399551s) [2] r=-1 lpr=138 pi=[100,138)/1 crt=58'1159 mlcod 0'0 active pruub 404.130950928s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:55 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 138 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=100/101 n=5 ec=62/51 lis/c=100/100 les/c/f=101/101/0 sis=138 pruub=10.665344238s) [2] r=-1 lpr=138 pi=[100,138)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 404.130950928s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:56.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:56 compute-1 sudo[94922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idlniartdqvfvhwnyckehswvzeowfmjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002895.5936108-379-117004772553197/AnsiballZ_dnf.py'
Dec 06 06:34:56 compute-1 sudo[94922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:56 compute-1 ceph-mon[81689]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:56 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 06 06:34:56 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 06 06:34:56 compute-1 ceph-mon[81689]: osdmap e138: 3 total, 3 up, 3 in
Dec 06 06:34:56 compute-1 python3.9[94924]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:34:56 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 06 06:34:56 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 06 06:34:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Dec 06 06:34:56 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 139 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=100/101 n=5 ec=62/51 lis/c=100/100 les/c/f=101/101/0 sis=139) [2]/[1] r=0 lpr=139 pi=[100,139)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:56 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 139 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=100/101 n=5 ec=62/51 lis/c=100/100 les/c/f=101/101/0 sis=139) [2]/[1] r=0 lpr=139 pi=[100,139)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:34:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:57.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:34:57 compute-1 ceph-mon[81689]: 9.9 scrub starts
Dec 06 06:34:57 compute-1 ceph-mon[81689]: 9.9 scrub ok
Dec 06 06:34:57 compute-1 ceph-mon[81689]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 06 06:34:57 compute-1 ceph-mon[81689]: 11.4 scrub starts
Dec 06 06:34:57 compute-1 ceph-mon[81689]: 11.4 scrub ok
Dec 06 06:34:57 compute-1 ceph-mon[81689]: 9.16 scrub starts
Dec 06 06:34:57 compute-1 ceph-mon[81689]: 9.16 scrub ok
Dec 06 06:34:57 compute-1 ceph-mon[81689]: osdmap e139: 3 total, 3 up, 3 in
Dec 06 06:34:57 compute-1 sudo[94922]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Dec 06 06:34:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 140 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=82/83 n=5 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=140 pruub=11.974717140s) [0] r=-1 lpr=140 pi=[82,140)/1 crt=58'1159 mlcod 0'0 active pruub 407.563446045s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 140 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=82/83 n=5 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=140 pruub=11.974670410s) [0] r=-1 lpr=140 pi=[82,140)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 407.563446045s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 140 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=139/140 n=5 ec=62/51 lis/c=100/100 les/c/f=101/101/0 sis=139) [2]/[1] async=[2] r=0 lpr=139 pi=[100,139)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:34:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:58.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:58 compute-1 sudo[95075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcxzxkdryortamariasujlghvtxqkyww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002897.8611677-406-205266563309539/AnsiballZ_dnf.py'
Dec 06 06:34:58 compute-1 sudo[95075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:58 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 06 06:34:58 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 06 06:34:58 compute-1 python3.9[95077]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:34:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:34:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Dec 06 06:34:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 141 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=82/83 n=5 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=141) [0]/[1] r=0 lpr=141 pi=[82,141)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 141 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=139/140 n=5 ec=62/51 lis/c=139/100 les/c/f=140/101/0 sis=141 pruub=15.270381927s) [2] async=[2] r=-1 lpr=141 pi=[100,141)/1 crt=58'1159 mlcod 58'1159 active pruub 411.594268799s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 141 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=139/140 n=5 ec=62/51 lis/c=139/100 les/c/f=140/101/0 sis=141 pruub=15.270284653s) [2] r=-1 lpr=141 pi=[100,141)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 411.594268799s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 141 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=82/83 n=5 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=141) [0]/[1] r=0 lpr=141 pi=[82,141)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 141 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=107/108 n=5 ec=62/51 lis/c=107/107 les/c/f=108/108/0 sis=141 pruub=14.815155983s) [0] r=-1 lpr=141 pi=[107,141)/1 crt=58'1159 mlcod 0'0 active pruub 411.139739990s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:58 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 141 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=107/108 n=5 ec=62/51 lis/c=107/107 les/c/f=108/108/0 sis=141 pruub=14.815105438s) [0] r=-1 lpr=141 pi=[107,141)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 411.139739990s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 06 06:34:58 compute-1 ceph-mon[81689]: osdmap e140: 3 total, 3 up, 3 in
Dec 06 06:34:58 compute-1 ceph-mon[81689]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:34:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:34:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:59.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Dec 06 06:34:59 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 142 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=107/108 n=5 ec=62/51 lis/c=107/107 les/c/f=108/108/0 sis=142) [0]/[1] r=0 lpr=142 pi=[107,142)/1 crt=58'1159 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:59 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 142 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=107/108 n=5 ec=62/51 lis/c=107/107 les/c/f=108/108/0 sis=142) [0]/[1] r=0 lpr=142 pi=[107,142)/1 crt=58'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:59 compute-1 ceph-mon[81689]: 11.1 scrub starts
Dec 06 06:34:59 compute-1 ceph-mon[81689]: 11.1 scrub ok
Dec 06 06:34:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:34:59 compute-1 ceph-mon[81689]: osdmap e141: 3 total, 3 up, 3 in
Dec 06 06:34:59 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 142 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=141/142 n=5 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=141) [0]/[1] async=[0] r=0 lpr=141 pi=[82,141)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:34:59 compute-1 sudo[95075]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:00.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:00 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Dec 06 06:35:00 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Dec 06 06:35:01 compute-1 sudo[95228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgevtnfbznmhgvpyezohuazfqvxezngy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002900.7355523-442-99959239889949/AnsiballZ_stat.py'
Dec 06 06:35:01 compute-1 sudo[95228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:01.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:01 compute-1 python3.9[95230]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:35:01 compute-1 sudo[95228]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:01 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1a deep-scrub starts
Dec 06 06:35:01 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 11.1a deep-scrub ok
Dec 06 06:35:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Dec 06 06:35:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 143 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=141/142 n=5 ec=62/51 lis/c=141/82 les/c/f=142/83/0 sis=143 pruub=14.158587456s) [0] async=[0] r=-1 lpr=143 pi=[82,143)/1 crt=58'1159 mlcod 58'1159 active pruub 413.434509277s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:35:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 143 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=141/142 n=5 ec=62/51 lis/c=141/82 les/c/f=142/83/0 sis=143 pruub=14.158426285s) [0] r=-1 lpr=143 pi=[82,143)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 413.434509277s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:35:01 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 143 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=142/143 n=5 ec=62/51 lis/c=107/107 les/c/f=108/108/0 sis=142) [0]/[1] async=[0] r=0 lpr=142 pi=[107,142)/1 crt=58'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:35:02 compute-1 ceph-mon[81689]: osdmap e142: 3 total, 3 up, 3 in
Dec 06 06:35:02 compute-1 ceph-mon[81689]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:35:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:02.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:35:02 compute-1 sudo[95382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgrgdzqrcbsplepejcbjlxynwiaztbwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002901.509651-466-8075408307117/AnsiballZ_slurp.py'
Dec 06 06:35:02 compute-1 sudo[95382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:02 compute-1 python3.9[95384]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 06 06:35:02 compute-1 sudo[95382]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:02 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 06 06:35:02 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 06 06:35:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Dec 06 06:35:02 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 144 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=142/143 n=5 ec=62/51 lis/c=142/107 les/c/f=143/108/0 sis=144 pruub=15.184892654s) [0] async=[0] r=-1 lpr=144 pi=[107,144)/1 crt=58'1159 mlcod 58'1159 active pruub 415.449615479s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:35:02 compute-1 ceph-osd[79002]: osd.1 pg_epoch: 144 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=142/143 n=5 ec=62/51 lis/c=142/107 les/c/f=143/108/0 sis=144 pruub=15.184741974s) [0] r=-1 lpr=144 pi=[107,144)/1 crt=58'1159 mlcod 0'0 unknown NOTIFY pruub 415.449615479s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:35:03 compute-1 ceph-mon[81689]: 8.10 deep-scrub starts
Dec 06 06:35:03 compute-1 ceph-mon[81689]: 8.10 deep-scrub ok
Dec 06 06:35:03 compute-1 ceph-mon[81689]: 9.1d scrub starts
Dec 06 06:35:03 compute-1 ceph-mon[81689]: 9.1d scrub ok
Dec 06 06:35:03 compute-1 ceph-mon[81689]: 11.1a deep-scrub starts
Dec 06 06:35:03 compute-1 ceph-mon[81689]: 11.1a deep-scrub ok
Dec 06 06:35:03 compute-1 ceph-mon[81689]: osdmap e143: 3 total, 3 up, 3 in
Dec 06 06:35:03 compute-1 ceph-mon[81689]: pgmap v401: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:03 compute-1 ceph-mon[81689]: 8.19 scrub starts
Dec 06 06:35:03 compute-1 ceph-mon[81689]: 8.19 scrub ok
Dec 06 06:35:03 compute-1 ceph-mon[81689]: osdmap e144: 3 total, 3 up, 3 in
Dec 06 06:35:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:03.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:03 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 06 06:35:03 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 06 06:35:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:04.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Dec 06 06:35:04 compute-1 ceph-mon[81689]: 9.2 scrub starts
Dec 06 06:35:04 compute-1 ceph-mon[81689]: 9.2 scrub ok
Dec 06 06:35:04 compute-1 ceph-mon[81689]: 9.1e scrub starts
Dec 06 06:35:04 compute-1 ceph-mon[81689]: 9.1e scrub ok
Dec 06 06:35:04 compute-1 ceph-mon[81689]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:35:04 compute-1 sshd-session[92718]: Connection closed by 192.168.122.30 port 59962
Dec 06 06:35:04 compute-1 sshd-session[92715]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:35:04 compute-1 systemd[1]: session-35.scope: Deactivated successfully.
Dec 06 06:35:04 compute-1 systemd[1]: session-35.scope: Consumed 17.274s CPU time.
Dec 06 06:35:04 compute-1 systemd-logind[788]: Session 35 logged out. Waiting for processes to exit.
Dec 06 06:35:04 compute-1 systemd-logind[788]: Removed session 35.
Dec 06 06:35:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:05.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:05 compute-1 ceph-mon[81689]: pgmap v404: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:05 compute-1 ceph-mon[81689]: 9.1f scrub starts
Dec 06 06:35:05 compute-1 ceph-mon[81689]: 9.1f scrub ok
Dec 06 06:35:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:06.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:06 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 06 06:35:06 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 06 06:35:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:07.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:07 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 06 06:35:07 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 06 06:35:07 compute-1 ceph-mon[81689]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:07 compute-1 ceph-mon[81689]: 9.e scrub starts
Dec 06 06:35:07 compute-1 ceph-mon[81689]: 9.e scrub ok
Dec 06 06:35:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:08.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:08 compute-1 ceph-mon[81689]: 9.6 scrub starts
Dec 06 06:35:08 compute-1 ceph-mon[81689]: 9.6 scrub ok
Dec 06 06:35:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:35:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:09.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:35:09 compute-1 ceph-mon[81689]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:10.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:10 compute-1 ceph-mon[81689]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:11.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:12.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:12 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 06 06:35:12 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 06 06:35:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:13.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:13 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 06 06:35:13 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 06 06:35:13 compute-1 ceph-mon[81689]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:13 compute-1 ceph-mon[81689]: 9.a scrub starts
Dec 06 06:35:13 compute-1 ceph-mon[81689]: 9.a scrub ok
Dec 06 06:35:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:14.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:15.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:15 compute-1 ceph-mon[81689]: 9.d scrub starts
Dec 06 06:35:15 compute-1 ceph-mon[81689]: 9.d scrub ok
Dec 06 06:35:15 compute-1 ceph-mon[81689]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:16.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:16 compute-1 sshd-session[95409]: Accepted publickey for zuul from 192.168.122.30 port 49864 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:35:16 compute-1 systemd-logind[788]: New session 36 of user zuul.
Dec 06 06:35:16 compute-1 systemd[1]: Started Session 36 of User zuul.
Dec 06 06:35:16 compute-1 sshd-session[95409]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:35:16 compute-1 ceph-mon[81689]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:17.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:17 compute-1 python3.9[95562]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:35:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:18.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:18 compute-1 python3.9[95716]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:35:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:19.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:20.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:20 compute-1 ceph-mon[81689]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:20 compute-1 python3.9[95909]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:35:20 compute-1 sshd-session[95412]: Connection closed by 192.168.122.30 port 49864
Dec 06 06:35:20 compute-1 sshd-session[95409]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:35:20 compute-1 systemd[1]: session-36.scope: Deactivated successfully.
Dec 06 06:35:20 compute-1 systemd[1]: session-36.scope: Consumed 2.152s CPU time.
Dec 06 06:35:20 compute-1 systemd-logind[788]: Session 36 logged out. Waiting for processes to exit.
Dec 06 06:35:20 compute-1 systemd-logind[788]: Removed session 36.
Dec 06 06:35:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:21.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:21 compute-1 ceph-mon[81689]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:22.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:22 compute-1 ceph-mon[81689]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:23.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:23 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 06 06:35:23 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 06 06:35:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:24.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:24 compute-1 ceph-mon[81689]: 9.f scrub starts
Dec 06 06:35:24 compute-1 ceph-mon[81689]: 9.f scrub ok
Dec 06 06:35:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:25.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:25 compute-1 ceph-mon[81689]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:26.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:26 compute-1 sshd-session[95936]: Accepted publickey for zuul from 192.168.122.30 port 47078 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:35:26 compute-1 systemd-logind[788]: New session 37 of user zuul.
Dec 06 06:35:26 compute-1 systemd[1]: Started Session 37 of User zuul.
Dec 06 06:35:26 compute-1 sshd-session[95936]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:35:26 compute-1 ceph-mon[81689]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:27.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:27 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 06 06:35:27 compute-1 ceph-osd[79002]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 06 06:35:27 compute-1 python3.9[96089]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:35:27 compute-1 sudo[96118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:27 compute-1 sudo[96118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:27 compute-1 sudo[96118]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:28 compute-1 sudo[96143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:35:28 compute-1 sudo[96143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:28 compute-1 sudo[96143]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:28 compute-1 ceph-mon[81689]: 9.15 scrub starts
Dec 06 06:35:28 compute-1 ceph-mon[81689]: 9.15 scrub ok
Dec 06 06:35:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:28 compute-1 sudo[96168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:28 compute-1 sudo[96168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:28 compute-1 sudo[96168]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:28 compute-1 sudo[96216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:35:28 compute-1 sudo[96216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:28 compute-1 podman[96416]: 2025-12-06 06:35:28.656689941 +0000 UTC m=+0.083948374 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:35:28 compute-1 python3.9[96387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:35:28 compute-1 podman[96416]: 2025-12-06 06:35:28.737960761 +0000 UTC m=+0.165219164 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:35:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:29 compute-1 sudo[96216]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:29 compute-1 ceph-mon[81689]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:29 compute-1 sudo[96689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzqboascimnvkbsctzgxaaowesiakbua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002929.209101-86-22978341739407/AnsiballZ_setup.py'
Dec 06 06:35:29 compute-1 sudo[96689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:29 compute-1 python3.9[96691]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:35:30 compute-1 sudo[96689]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:30.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:30 compute-1 sudo[96700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:30 compute-1 sudo[96700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:30 compute-1 sudo[96700]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:30 compute-1 sudo[96725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:35:30 compute-1 sudo[96725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:30 compute-1 sudo[96725]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:30 compute-1 sudo[96750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:30 compute-1 sudo[96750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:30 compute-1 sudo[96750]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:30 compute-1 sudo[96798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:35:30 compute-1 sudo[96798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:30 compute-1 sudo[96873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjkpmqoiwjcwgzdrxhooaibnekfprlri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002929.209101-86-22978341739407/AnsiballZ_dnf.py'
Dec 06 06:35:30 compute-1 sudo[96873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:30 compute-1 python3.9[96876]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:35:30 compute-1 sudo[96798]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-1 ceph-mon[81689]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:32.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:32 compute-1 sudo[96873]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:35:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:35:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:35:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:35:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:35:33 compute-1 sudo[97056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyqlopcymfwwdqexndcomfvnxludycbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002932.703197-122-196885246407163/AnsiballZ_setup.py'
Dec 06 06:35:33 compute-1 sudo[97056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:33.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:33 compute-1 python3.9[97058]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:35:33 compute-1 sudo[97056]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:34.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:34 compute-1 sudo[97251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slrjvyrkybsykxtdhbahpktvpaocvyiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002934.072773-155-113492904155807/AnsiballZ_file.py'
Dec 06 06:35:34 compute-1 sudo[97251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:34 compute-1 ceph-mon[81689]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:34 compute-1 python3.9[97253]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:35:34 compute-1 sudo[97251]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:35.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:35 compute-1 sudo[97403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sagswumjkffjdvaoptfuthukswepergt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002934.951817-179-153973824399593/AnsiballZ_command.py'
Dec 06 06:35:35 compute-1 sudo[97403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:35 compute-1 python3.9[97405]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:35:35 compute-1 sudo[97403]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:35 compute-1 ceph-mon[81689]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:36.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:36 compute-1 sudo[97568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmkbepdjehghcxhgboptrczgprgtybxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002935.8752723-203-202334464835462/AnsiballZ_stat.py'
Dec 06 06:35:36 compute-1 sudo[97568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:36 compute-1 python3.9[97570]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:35:36 compute-1 sudo[97568]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:36 compute-1 sudo[97646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqodmiqgxcfexypfzosvjzsagmsprlcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002935.8752723-203-202334464835462/AnsiballZ_file.py'
Dec 06 06:35:36 compute-1 sudo[97646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:36 compute-1 ceph-mon[81689]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:36 compute-1 python3.9[97648]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:35:37 compute-1 sudo[97646]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:37.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:37 compute-1 sudo[97798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsurnofwfebtkkenbaqcujnazhcccnfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002937.2821276-239-166137753438085/AnsiballZ_stat.py'
Dec 06 06:35:37 compute-1 sudo[97798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:37 compute-1 python3.9[97800]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:35:37 compute-1 sudo[97798]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:38.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:38 compute-1 sudo[97876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhueeyvghgvjdvcvaugkkzeprkcadkcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002937.2821276-239-166137753438085/AnsiballZ_file.py'
Dec 06 06:35:38 compute-1 sudo[97876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:38 compute-1 python3.9[97878]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:35:38 compute-1 sudo[97876]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:39 compute-1 sudo[98028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pztnilqnrdjbdzxoswvvjcgajzegwsiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002938.651975-278-118231745216772/AnsiballZ_ini_file.py'
Dec 06 06:35:39 compute-1 sudo[98028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:39.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:39 compute-1 python3.9[98030]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:35:39 compute-1 sudo[98028]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:39 compute-1 sudo[98180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxfwmzgkfdbpeuyzqdjdlmgfzfvixzwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002939.5042124-278-4137913643081/AnsiballZ_ini_file.py'
Dec 06 06:35:39 compute-1 sudo[98180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:39 compute-1 ceph-mon[81689]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:40 compute-1 python3.9[98182]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:35:40 compute-1 sudo[98180]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:40.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:40 compute-1 sudo[98231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:40 compute-1 sudo[98231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:40 compute-1 sudo[98231]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:40 compute-1 sudo[98284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:35:40 compute-1 sudo[98284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:40 compute-1 sudo[98284]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:40 compute-1 sudo[98382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxkztxwvffexbsmtxxyrcyximjfwervo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002940.2337308-278-116113284090208/AnsiballZ_ini_file.py'
Dec 06 06:35:40 compute-1 sudo[98382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:40 compute-1 python3.9[98384]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:35:40 compute-1 sudo[98382]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:40 compute-1 ceph-mon[81689]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:41 compute-1 sudo[98534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkdvxaatlognkyraadppdtlngewhdekp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002940.8715909-278-263278371703162/AnsiballZ_ini_file.py'
Dec 06 06:35:41 compute-1 sudo[98534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:41.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:41 compute-1 python3.9[98536]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:35:41 compute-1 sudo[98534]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:41 compute-1 sudo[98686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwtcxcirrrtmtiaineufdamzalflavat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002941.6571429-371-127812967301764/AnsiballZ_dnf.py'
Dec 06 06:35:41 compute-1 sudo[98686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:42.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:42 compute-1 python3.9[98688]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:35:43 compute-1 ceph-mon[81689]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:43.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:43 compute-1 sudo[98686]: pam_unix(sudo:session): session closed for user root
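The dnf task above is declarative: state=present installs openssh-server only when it is missing, which is why the sudo session closes after a couple of seconds when the package is already in place. A rough equivalent of that check-then-install flow, shelling out to rpm and dnf (both assumed on PATH):

    import subprocess

    def ensure_installed(pkg):
        # rpm -q exits 0 when the package is present, so state=present is a no-op
        if subprocess.run(["rpm", "-q", pkg], capture_output=True).returncode == 0:
            return "ok"
        subprocess.run(["dnf", "-y", "install", pkg], check=True)
        return "changed"

    print(ensure_installed("openssh-server"))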
Dec 06 06:35:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:44.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:44 compute-1 sudo[98839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cregtypqgxqvswvguybawnvioyvaeull ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002944.1788824-404-80945960997246/AnsiballZ_setup.py'
Dec 06 06:35:44 compute-1 sudo[98839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:44 compute-1 python3.9[98841]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:35:44 compute-1 sudo[98839]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:45 compute-1 ceph-mon[81689]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:45 compute-1 sudo[98993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzhupdlilwtrojzrubkellbwabmcbogo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002945.2131445-428-234340520084066/AnsiballZ_stat.py'
Dec 06 06:35:45 compute-1 sudo[98993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:45 compute-1 python3.9[98995]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:35:45 compute-1 sudo[98993]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:46.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:46 compute-1 sudo[99145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slsuughuqukoalrvcdnknfeqsapvlrjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002946.0193944-455-209285331139955/AnsiballZ_stat.py'
Dec 06 06:35:46 compute-1 sudo[99145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:46 compute-1 python3.9[99147]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:35:46 compute-1 sudo[99145]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:46 compute-1 ceph-mon[81689]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:35:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:47.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:35:47 compute-1 sudo[99297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldizowxreptjurivhvzyzkjfypokdfob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002946.8225458-485-269275925045038/AnsiballZ_command.py'
Dec 06 06:35:47 compute-1 sudo[99297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:47 compute-1 python3.9[99299]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:35:47 compute-1 sudo[99297]: pam_unix(sudo:session): session closed for user root
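systemctl is-system-running prints a single manager state (running, degraded, maintenance, ...) and exits non-zero for anything other than running, so callers normally want the stdout even when the command "fails". A minimal wrapper reflecting that:

    import subprocess

    def system_state():
        # A non-zero exit only means "not fully running"; the state is on stdout.
        p = subprocess.run(["systemctl", "is-system-running"],
                           capture_output=True, text=True)
        return p.stdout.strip() or "unknown"

    print(system_state())   # e.g. "running" or "degraded"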
Dec 06 06:35:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:48.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:48 compute-1 ceph-mon[81689]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:48 compute-1 sudo[99450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwjkwqevjdobwekpnyxnaukksqsszrqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002948.3319979-515-77578812806669/AnsiballZ_service_facts.py'
Dec 06 06:35:48 compute-1 sudo[99450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:49 compute-1 python3.9[99452]: ansible-service_facts Invoked
Dec 06 06:35:49 compute-1 network[99469]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Dec 06 06:35:49 compute-1 network[99470]: 'network-scripts' will be removed from the distribution in the near future.
Dec 06 06:35:49 compute-1 network[99471]: It is advisable to switch to 'NetworkManager' for network management instead.
Dec 06 06:35:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:49.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:50.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:51.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:51 compute-1 ceph-mon[81689]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:52 compute-1 sudo[99450]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:52.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:53.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:53 compute-1 ceph-mon[81689]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:53 compute-1 sudo[99754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajnmyczlnnarbskymhsoruutcaogcdpm ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765002953.0860472-560-18680025192912/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765002953.0860472-560-18680025192912/args'
Dec 06 06:35:53 compute-1 sudo[99754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:53 compute-1 sudo[99754]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:54.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:54 compute-1 sudo[99921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrmavxvrpfmaifbdyaylrzvdvpjutnwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002953.8972523-593-136833095670145/AnsiballZ_dnf.py'
Dec 06 06:35:54 compute-1 sudo[99921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:54 compute-1 python3.9[99923]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:35:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:55.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:55 compute-1 ceph-mon[81689]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:55 compute-1 sudo[99921]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:56.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:56 compute-1 ceph-mon[81689]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:57.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:57 compute-1 sudo[100074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esjbrcodenqqmiopyjsmlbrkdqtmseqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002956.906954-632-213739025592119/AnsiballZ_package_facts.py'
Dec 06 06:35:57 compute-1 sudo[100074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:57 compute-1 python3.9[100076]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 06 06:35:58 compute-1 sudo[100074]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:58.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:59 compute-1 sudo[100226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebbvmugvwjjbdxftikrqmcgusjiibhau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002958.902594-662-143054529323943/AnsiballZ_stat.py'
Dec 06 06:35:59 compute-1 sudo[100226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:35:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:59.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:59 compute-1 python3.9[100228]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:35:59 compute-1 sudo[100226]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:59 compute-1 sudo[100304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwpxunnoawnkpfymiijbmoqdfgmkwhce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002958.902594-662-143054529323943/AnsiballZ_file.py'
Dec 06 06:35:59 compute-1 sudo[100304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:59 compute-1 python3.9[100306]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:35:59 compute-1 sudo[100304]: pam_unix(sudo:session): session closed for user root
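This stat/file pair is Ansible's usual template follow-up: checksum the destination to decide whether the rendered file needs copying, then enforce mode on the final path. The same decision can be sketched with hashlib and os; chrony.conf.rendered below is a hypothetical local source, not a path from this run:

    import hashlib, os, shutil

    def sha1(path):
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()

    src, dest = "chrony.conf.rendered", "/etc/chrony.conf"   # src is illustrative
    if not os.path.exists(dest) or sha1(src) != sha1(dest):
        shutil.copyfile(src, dest)       # copy only on checksum mismatch
    os.chmod(dest, 0o644)                # mode=0644 from the task above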
Dec 06 06:36:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:00.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:00 compute-1 ceph-mon[81689]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:01 compute-1 sudo[100456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlejwafifftsqcaerfpbjivmcxaknyhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002960.8341727-698-24021687476622/AnsiballZ_stat.py'
Dec 06 06:36:01 compute-1 sudo[100456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:01.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:01 compute-1 python3.9[100458]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:01 compute-1 sudo[100456]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:01 compute-1 sudo[100535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrzynevikuqvaesmrpyebdwkdfqyjccc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002960.8341727-698-24021687476622/AnsiballZ_file.py'
Dec 06 06:36:01 compute-1 sudo[100535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:01 compute-1 python3.9[100537]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:01 compute-1 ceph-mon[81689]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:01 compute-1 sudo[100535]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:02.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:03.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:03 compute-1 sudo[100688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivfgcxqednvcawzrfxzpwmyokblbbanr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002962.9254901-753-39799111323998/AnsiballZ_lineinfile.py'
Dec 06 06:36:03 compute-1 sudo[100688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:03 compute-1 python3.9[100690]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:03 compute-1 sudo[100688]: pam_unix(sudo:session): session closed for user root
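lineinfile with regexp=^PEERNTP= and line=PEERNTP=no replaces a matching line when one exists and appends the line otherwise (create=True also tolerates a missing file; with firstmatch=False the last match is the one replaced). That replace-or-append core, without the module's backup and atomic-write handling:

    import os, re

    def line_in_file(path, regexp, line):
        rx = re.compile(regexp)
        lines = []
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                lines = f.read().splitlines()
        idx = None
        for i, old in enumerate(lines):
            if rx.search(old):
                idx = i                  # firstmatch=False: the last match wins
        if idx is None:
            lines.append(line)           # nothing matched: append
        else:
            lines[idx] = line
        with open(path, "w", encoding="utf-8") as f:
            f.write("\n".join(lines) + "\n")

    line_in_file("/etc/sysconfig/network", r"^PEERNTP=", "PEERNTP=no")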
Dec 06 06:36:03 compute-1 ceph-mon[81689]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:04.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:04 compute-1 sudo[100840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlvhrtnfvzjqoxetnmcertbnwgvnjeff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002964.7038996-797-75352403530432/AnsiballZ_setup.py'
Dec 06 06:36:04 compute-1 sudo[100840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:05.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:05 compute-1 python3.9[100842]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:36:05 compute-1 ceph-mon[81689]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:05 compute-1 sudo[100840]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:06 compute-1 sudo[100925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuuogzrkhbwxhlmjhkjsmvahexrnfkkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002964.7038996-797-75352403530432/AnsiballZ_systemd.py'
Dec 06 06:36:06 compute-1 sudo[100925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:06.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:06 compute-1 python3.9[100927]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:36:06 compute-1 sudo[100925]: pam_unix(sudo:session): session closed for user root
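enabled=True plus state=started is the module-level spelling of systemctl enable --now: persist the unit for boot and start it in the same step (the edpm-container-shutdown task further down adds daemon_reload=True, which is what produces the "Reloading." and generator lines near the end of this section). A sketch:

    import subprocess

    def enable_now(unit, daemon_reload=False):
        if daemon_reload:
            subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", "--now", unit], check=True)

    enable_now("chronyd")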
Dec 06 06:36:06 compute-1 ceph-mon[81689]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:07.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:07 compute-1 sshd-session[95939]: Connection closed by 192.168.122.30 port 47078
Dec 06 06:36:07 compute-1 sshd-session[95936]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:36:07 compute-1 systemd[1]: session-37.scope: Deactivated successfully.
Dec 06 06:36:07 compute-1 systemd[1]: session-37.scope: Consumed 24.063s CPU time.
Dec 06 06:36:07 compute-1 systemd-logind[788]: Session 37 logged out. Waiting for processes to exit.
Dec 06 06:36:07 compute-1 systemd-logind[788]: Removed session 37.
Dec 06 06:36:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:08.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:08 compute-1 ceph-mon[81689]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:09.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:36:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:10.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:36:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:11.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:11 compute-1 ceph-mon[81689]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:12.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:13.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:13 compute-1 ceph-mon[81689]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:13 compute-1 sshd-session[100954]: Accepted publickey for zuul from 192.168.122.30 port 49852 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:36:13 compute-1 systemd-logind[788]: New session 38 of user zuul.
Dec 06 06:36:13 compute-1 systemd[1]: Started Session 38 of User zuul.
Dec 06 06:36:13 compute-1 sshd-session[100954]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:36:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:14.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:14 compute-1 sudo[101107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpygopksbedocevbqsqmxapvddmaujsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002973.8393047-32-30816657489585/AnsiballZ_file.py'
Dec 06 06:36:14 compute-1 sudo[101107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:14 compute-1 python3.9[101109]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:14 compute-1 sudo[101107]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:15 compute-1 sudo[101259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoctoixpfntztsfdghbxljtdgbeenvaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002974.7890263-68-119718723233606/AnsiballZ_stat.py'
Dec 06 06:36:15 compute-1 sudo[101259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:36:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:15.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:36:15 compute-1 python3.9[101261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:15 compute-1 sudo[101259]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:15 compute-1 ceph-mon[81689]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:15 compute-1 sudo[101337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohevdvsnpthgfhkvsnwefdetbisxkmvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002974.7890263-68-119718723233606/AnsiballZ_file.py'
Dec 06 06:36:15 compute-1 sudo[101337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:15 compute-1 python3.9[101339]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:15 compute-1 sudo[101337]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:16.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:16 compute-1 sshd-session[100957]: Connection closed by 192.168.122.30 port 49852
Dec 06 06:36:16 compute-1 sshd-session[100954]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:36:16 compute-1 systemd[1]: session-38.scope: Deactivated successfully.
Dec 06 06:36:16 compute-1 systemd[1]: session-38.scope: Consumed 1.528s CPU time.
Dec 06 06:36:16 compute-1 systemd-logind[788]: Session 38 logged out. Waiting for processes to exit.
Dec 06 06:36:16 compute-1 systemd-logind[788]: Removed session 38.
Dec 06 06:36:16 compute-1 ceph-mon[81689]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:17.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:18.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:18 compute-1 sshd-session[100562]: Received disconnect from 14.63.196.175 port 54486:11: Bye Bye [preauth]
Dec 06 06:36:18 compute-1 sshd-session[100562]: Disconnected from authenticating user root 14.63.196.175 port 54486 [preauth]
Dec 06 06:36:18 compute-1 sshd-session[101364]: Invalid user factory from 45.135.232.92 port 56898
Dec 06 06:36:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:19.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:19 compute-1 sshd-session[101364]: Connection reset by invalid user factory 45.135.232.92 port 56898 [preauth]
Dec 06 06:36:19 compute-1 ceph-mon[81689]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:20.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:20 compute-1 ceph-mon[81689]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:21.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:21 compute-1 sshd-session[101366]: Invalid user netlink from 45.135.232.92 port 56902
Dec 06 06:36:21 compute-1 sshd-session[101366]: Connection reset by invalid user netlink 45.135.232.92 port 56902 [preauth]
Dec 06 06:36:22 compute-1 sshd-session[101369]: Accepted publickey for zuul from 192.168.122.30 port 55292 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:36:22 compute-1 systemd-logind[788]: New session 39 of user zuul.
Dec 06 06:36:22 compute-1 systemd[1]: Started Session 39 of User zuul.
Dec 06 06:36:22 compute-1 sshd-session[101369]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:36:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:22.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:22 compute-1 ceph-mon[81689]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:23 compute-1 python3.9[101523]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:36:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:23.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:23 compute-1 sshd-session[101368]: Invalid user client from 45.135.232.92 port 56934
Dec 06 06:36:24 compute-1 sudo[101677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeotkszgekdbzmxxgqymjvksherdffnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002983.7415416-65-115076083389233/AnsiballZ_file.py'
Dec 06 06:36:24 compute-1 sudo[101677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:24 compute-1 sshd-session[101368]: Connection reset by invalid user client 45.135.232.92 port 56934 [preauth]
Dec 06 06:36:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:24.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:24 compute-1 python3.9[101679]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:24 compute-1 sudo[101677]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:24 compute-1 ceph-mon[81689]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:25 compute-1 sudo[101854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwgddrrtksnvfhxypzxwtmsgwlqqrakb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002984.5725954-89-149346892027684/AnsiballZ_stat.py'
Dec 06 06:36:25 compute-1 sudo[101854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:25.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:25 compute-1 python3.9[101856]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:25 compute-1 sudo[101854]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:25 compute-1 sudo[101932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnbfuumhxzxncoeffahbehsbjymccerr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002984.5725954-89-149346892027684/AnsiballZ_file.py'
Dec 06 06:36:25 compute-1 sudo[101932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:25 compute-1 python3.9[101934]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.qoa9wc4j recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:25 compute-1 sudo[101932]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:26.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:26 compute-1 sudo[102084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooqdupddxjimxdeibqugyclmlcbptqsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002986.4687939-149-34030505154925/AnsiballZ_stat.py'
Dec 06 06:36:26 compute-1 sudo[102084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:26 compute-1 ceph-mon[81689]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:26 compute-1 python3.9[102086]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:26 compute-1 sudo[102084]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:27 compute-1 sudo[102162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhdwljbivhfxaanjiwcsixxvpajcspqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002986.4687939-149-34030505154925/AnsiballZ_file.py'
Dec 06 06:36:27 compute-1 sudo[102162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:27 compute-1 sshd-session[101681]: Connection reset by authenticating user operator 45.135.232.92 port 54580 [preauth]
Dec 06 06:36:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:36:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:27.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:36:27 compute-1 python3.9[102164]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.w7c0wq1l recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:27 compute-1 sudo[102162]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:27 compute-1 sudo[102316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsliwpixohtaegadjfvkzwpbywyhouzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002987.691425-188-244578310023506/AnsiballZ_file.py'
Dec 06 06:36:27 compute-1 sudo[102316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:28 compute-1 python3.9[102318]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:36:28 compute-1 sudo[102316]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:36:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:28.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:36:28 compute-1 sudo[102468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ungmgrugizpibtibavvbrrkqazctgtdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002988.3840265-212-93727463056458/AnsiballZ_stat.py'
Dec 06 06:36:28 compute-1 sudo[102468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:29 compute-1 python3.9[102470]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:29 compute-1 sudo[102468]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:29 compute-1 sudo[102546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwymheysuoeeegcahjjfcvnqznvqdvzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002988.3840265-212-93727463056458/AnsiballZ_file.py'
Dec 06 06:36:29 compute-1 sudo[102546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:29.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:29 compute-1 sshd-session[102165]: Invalid user status from 45.135.232.92 port 54584
Dec 06 06:36:29 compute-1 python3.9[102548]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:36:29 compute-1 sudo[102546]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:29 compute-1 sshd-session[102165]: Connection reset by invalid user status 45.135.232.92 port 54584 [preauth]
Dec 06 06:36:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:29 compute-1 sudo[102698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dymvwyjpnlrszmgdvpukdnscsewuvgwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002989.6534665-212-149356793040729/AnsiballZ_stat.py'
Dec 06 06:36:29 compute-1 sudo[102698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:30 compute-1 python3.9[102700]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:30.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:30 compute-1 sudo[102698]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:30 compute-1 sudo[102776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvztamorijcvxrdzklrnemxdtedrayri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002989.6534665-212-149356793040729/AnsiballZ_file.py'
Dec 06 06:36:30 compute-1 sudo[102776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:30 compute-1 ceph-mon[81689]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:30 compute-1 python3.9[102778]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:36:30 compute-1 sudo[102776]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:31 compute-1 sudo[102928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkvkvfqbpqnwiyhshvlvpovbwtqfpxmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002990.9159746-282-162330545329442/AnsiballZ_file.py'
Dec 06 06:36:31 compute-1 sudo[102928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:31.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:31 compute-1 python3.9[102930]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:31 compute-1 sudo[102928]: pam_unix(sudo:session): session closed for user root
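mode=420 in the file task above is a decimal rendering of an octal mode: 0o644 == 420, consistent with an unquoted 0644 in the source playbook that YAML parsed as an octal integer (on a directory this leaves the execute bits off, which may or may not be intended). A one-line check:

    # YAML parses unquoted 0644 as octal; Ansible then logs the integer in decimal.
    assert 0o644 == 420
    print(oct(420))   # -> 0o644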
Dec 06 06:36:31 compute-1 ceph-mon[81689]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:31 compute-1 sudo[103080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqymqukknlqgklvvqwvrmiptjvqjhtmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002991.6935463-305-27216676018059/AnsiballZ_stat.py'
Dec 06 06:36:31 compute-1 sudo[103080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:32 compute-1 python3.9[103082]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:32.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:32 compute-1 sudo[103080]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:32 compute-1 sudo[103158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmegmioohhhvwjaawhpdkisjkyfmhjcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002991.6935463-305-27216676018059/AnsiballZ_file.py'
Dec 06 06:36:32 compute-1 sudo[103158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:32 compute-1 python3.9[103160]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:32 compute-1 sudo[103158]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:33.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:33 compute-1 sudo[103310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jchawrccppqfhcmafczgjjvqdjcusaka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002993.1993783-341-191291869897297/AnsiballZ_stat.py'
Dec 06 06:36:33 compute-1 sudo[103310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:33 compute-1 python3.9[103312]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:33 compute-1 sudo[103310]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:33 compute-1 sudo[103388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsmahozvovxrkpshznpxhfquhmpdulrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002993.1993783-341-191291869897297/AnsiballZ_file.py'
Dec 06 06:36:33 compute-1 sudo[103388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:36:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:34.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:36:34 compute-1 python3.9[103390]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:34 compute-1 sudo[103388]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:34 compute-1 sshd-session[103419]: Connection closed by 220.250.59.155 port 41998
Dec 06 06:36:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:35 compute-1 sudo[103541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hohqnuxcfzqmtxbfvercfyintpazbqms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002994.488219-377-30919734900848/AnsiballZ_systemd.py'
Dec 06 06:36:35 compute-1 sudo[103541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:35.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:35 compute-1 ceph-mon[81689]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:35 compute-1 python3.9[103543]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:36:35 compute-1 systemd[1]: Reloading.
Dec 06 06:36:35 compute-1 systemd-rc-local-generator[103572]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:36:35 compute-1 systemd-sysv-generator[103576]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:36:35 compute-1 sudo[103541]: pam_unix(sudo:session): session closed for user root
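The lines above show the deploy pattern this play repeats for each unit: stat the target, copy the service file into /etc/systemd/system, drop a preset under /etc/systemd/system-preset, then daemon-reload and enable/start via ansible.builtin.systemd. A minimal manual equivalent, assuming the preset simply enables the unit (the preset file's content is not shown in the log):

    echo 'enable edpm-container-shutdown.service' > /etc/systemd/system-preset/91-edpm-container-shutdown.preset
    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service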
Dec 06 06:36:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:36.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:36 compute-1 ceph-mon[81689]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:36 compute-1 sudo[103730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bntkzvlacahlzudbdjwcbfrzzyqyplmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002996.1251128-401-21729399829925/AnsiballZ_stat.py'
Dec 06 06:36:36 compute-1 sudo[103730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:36 compute-1 python3.9[103732]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:36 compute-1 sudo[103730]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:37 compute-1 sudo[103808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqyjizqatxfxbhvokocdbabdnnzmhaon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002996.1251128-401-21729399829925/AnsiballZ_file.py'
Dec 06 06:36:37 compute-1 sudo[103808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:37 compute-1 python3.9[103810]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:37 compute-1 sudo[103808]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:36:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:37.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:36:37 compute-1 ceph-mon[81689]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:37 compute-1 sudo[103960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyidpfehelmqkfzgtrhqiojbxmwgmcvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002997.5013754-437-87070089996521/AnsiballZ_stat.py'
Dec 06 06:36:37 compute-1 sudo[103960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:37 compute-1 python3.9[103962]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:37 compute-1 sudo[103960]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:36:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:38.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:36:38 compute-1 sudo[104038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytqwcpduxzseuylofknvbrtsglihneoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002997.5013754-437-87070089996521/AnsiballZ_file.py'
Dec 06 06:36:38 compute-1 sudo[104038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:38 compute-1 python3.9[104040]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:38 compute-1 sudo[104038]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:38 compute-1 ceph-mon[81689]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:38 compute-1 sudo[104190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hodibnqgjemvqujgnmhxguucgdsbndbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002998.7149794-473-252459302895103/AnsiballZ_systemd.py'
Dec 06 06:36:38 compute-1 sudo[104190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:39 compute-1 python3.9[104192]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:36:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:39.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:39 compute-1 systemd[1]: Reloading.
Dec 06 06:36:39 compute-1 systemd-rc-local-generator[104218]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:36:39 compute-1 systemd-sysv-generator[104222]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:36:39 compute-1 systemd[1]: Starting Create netns directory...
Dec 06 06:36:39 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 06:36:39 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 06:36:39 compute-1 systemd[1]: Finished Create netns directory.
Dec 06 06:36:39 compute-1 sudo[104190]: pam_unix(sudo:session): session closed for user root
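netns-placeholder runs as a oneshot: it starts, a run-netns-placeholder.mount unit appears and is torn down, and the service finishes immediately. That sequence is consistent with the unit creating and deleting a throwaway network namespace purely so that /run/netns ends up mounted. One plausible payload, as an assumption since the unit body is not in the log:

    ip netns add placeholder      # first netns creation bind-mounts /run/netns and /run/netns/placeholder
    ip netns delete placeholder   # the namespace itself is not needed afterwards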
Dec 06 06:36:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:40.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:40 compute-1 python3.9[104382]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:36:40 compute-1 network[104422]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:36:40 compute-1 network[104423]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:36:40 compute-1 network[104424]: It is advised to switch to 'NetworkManager' instead for network management.
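The three deprecation warnings come from the legacy 'network' SysV script that the service_facts scan touches; on RHEL 9 the supported path is NetworkManager, as the message says. Before retiring network-scripts, it is worth checking what NetworkManager already manages, for example:

    nmcli device status
    nmcli connection show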
Dec 06 06:36:40 compute-1 sudo[104386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:40 compute-1 sudo[104386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:40 compute-1 sudo[104386]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:40 compute-1 ceph-mon[81689]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:41.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:41 compute-1 sudo[104432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:36:41 compute-1 sudo[104432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:41 compute-1 sudo[104432]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:41 compute-1 sudo[104458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:41 compute-1 sudo[104458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:41 compute-1 sudo[104458]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:41 compute-1 sudo[104487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:36:41 compute-1 sudo[104487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
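The cephadm "ls" invoked above (through the copied cephadm shim under /var/lib/ceph, pinned to a specific container image digest) emits a JSON array describing every Ceph daemon deployed on this host. To pull out just the daemon names from the same output, assuming jq is available:

    sudo cephadm ls | jq -r '.[].name'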
Dec 06 06:36:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:36:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:42.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:36:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:43.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:44.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:44 compute-1 sudo[104842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwesfroaxwcstmahnadxpkevcarxpjnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003004.2685258-551-39662315366975/AnsiballZ_stat.py'
Dec 06 06:36:44 compute-1 sudo[104842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:44 compute-1 python3.9[104844]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:44 compute-1 sudo[104842]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:45 compute-1 sudo[104920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbevunkakeimonuvaocxwkosrkhaoovb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003004.2685258-551-39662315366975/AnsiballZ_file.py'
Dec 06 06:36:45 compute-1 sudo[104920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:45 compute-1 python3.9[104922]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:45 compute-1 sudo[104920]: pam_unix(sudo:session): session closed for user root
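After templating /etc/ssh/sshd_config (mode 0600 above), a syntax check before any sshd restart is cheap insurance against locking yourself out; note the play does not restart sshd in this excerpt:

    sshd -t -f /etc/ssh/sshd_config && echo 'sshd config OK'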
Dec 06 06:36:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:45.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:45 compute-1 ceph-mon[81689]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:45 compute-1 podman[104607]: 2025-12-06 06:36:45.681466184 +0000 UTC m=+3.801540570 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:36:45 compute-1 sudo[105090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glwvwtdfeezitqpvhtpofsornebmrsgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003005.551688-590-47493408228074/AnsiballZ_file.py'
Dec 06 06:36:45 compute-1 sudo[105090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:45 compute-1 podman[105045]: 2025-12-06 06:36:45.899574018 +0000 UTC m=+0.108395646 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:36:46 compute-1 python3.9[105092]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:46 compute-1 sudo[105090]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:46 compute-1 podman[104607]: 2025-12-06 06:36:46.093582569 +0000 UTC m=+4.213656925 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
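The podman exec / exec_died pairs against container 23be1041... are periodic execs into the ceph crash-collector container; the long label dumps in each event identify the image build. The same metadata can be read on demand, using the ID prefix from the log:

    podman inspect --format '{{.Name}} {{.ImageName}}' 23be104115800eec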
Dec 06 06:36:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:46.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:46 compute-1 sudo[105281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncxpxbtkfwithjhayoazypcgxdiideel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003006.2840219-614-33032463408357/AnsiballZ_stat.py'
Dec 06 06:36:46 compute-1 sudo[105281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:46 compute-1 python3.9[105285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:46 compute-1 sudo[105281]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:46 compute-1 sudo[104487]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:46 compute-1 sudo[105407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktrcldyxieefkjrpiadmdaezqbndrhmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003006.2840219-614-33032463408357/AnsiballZ_file.py'
Dec 06 06:36:46 compute-1 sudo[105407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:47 compute-1 python3.9[105409]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:47 compute-1 sudo[105407]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:47.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:48 compute-1 sudo[105559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtzgcsdrffbdaktgtqzdxonwyafwgsqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003007.72072-659-273840759388049/AnsiballZ_timezone.py'
Dec 06 06:36:48 compute-1 sudo[105559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:48.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:48 compute-1 python3.9[105561]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 06 06:36:48 compute-1 systemd[1]: Starting Time & Date Service...
Dec 06 06:36:48 compute-1 systemd[1]: Started Time & Date Service.
Dec 06 06:36:48 compute-1 ceph-mon[81689]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:48 compute-1 ceph-mon[81689]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:48 compute-1 sudo[105559]: pam_unix(sudo:session): session closed for user root
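systemd-timedated starting right after the community.general.timezone task is expected: on systemd hosts the module works through timedatectl. The manual equivalent of the task above:

    timedatectl set-timezone UTC
    timedatectl show --property=Timezone   # expect Timezone=UTC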
Dec 06 06:36:48 compute-1 sudo[105613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:48 compute-1 sudo[105613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:48 compute-1 sudo[105613]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:48 compute-1 sudo[105667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:36:48 compute-1 sudo[105667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:48 compute-1 sudo[105667]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:49 compute-1 sudo[105733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:49 compute-1 sudo[105733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:49 compute-1 sudo[105733]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:49 compute-1 sudo[105802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzvhfbfbvkfjkmtycxwttuvzzfjfurjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003008.8302999-686-167787364053527/AnsiballZ_file.py'
Dec 06 06:36:49 compute-1 sudo[105802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:49 compute-1 sudo[105779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:36:49 compute-1 sudo[105779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:49 compute-1 python3.9[105815]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:36:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:49.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:36:49 compute-1 sudo[105802]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:49 compute-1 sudo[105779]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:49 compute-1 sudo[105997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptyjjkgssasnfwavlgsmisujnekksrpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003009.5122068-710-200927657284382/AnsiballZ_stat.py'
Dec 06 06:36:49 compute-1 sudo[105997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:49 compute-1 python3.9[105999]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:50 compute-1 sudo[105997]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:36:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:50.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:36:50 compute-1 sudo[106075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrnibjazukhetygxzrquanpufbecxqjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003009.5122068-710-200927657284382/AnsiballZ_file.py'
Dec 06 06:36:50 compute-1 sudo[106075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:50 compute-1 python3.9[106077]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:50 compute-1 sudo[106075]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:50.792834) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003010792906, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2416, "num_deletes": 258, "total_data_size": 5504280, "memory_usage": 5574400, "flush_reason": "Manual Compaction"}
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003010930484, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 2570754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6949, "largest_seqno": 9360, "table_properties": {"data_size": 2561929, "index_size": 4999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 24143, "raw_average_key_size": 22, "raw_value_size": 2541753, "raw_average_value_size": 2321, "num_data_blocks": 222, "num_entries": 1095, "num_filter_entries": 1095, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002830, "oldest_key_time": 1765002830, "file_creation_time": 1765003010, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 137707 microseconds, and 5623 cpu microseconds.
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:50.930547) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 2570754 bytes OK
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:50.930572) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:50.960593) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:50.960655) EVENT_LOG_v1 {"time_micros": 1765003010960639, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:50.960684) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5492558, prev total WAL file size 5510592, number of live WAL files 2.
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:50.963102) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323539' seq:0, type:0; will stop at (end)
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(2510KB)], [15(8342KB)]
Dec 06 06:36:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003010963182, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11113641, "oldest_snapshot_seqno": -1}
Dec 06 06:36:51 compute-1 sudo[106227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcwtskpocolrbnvowvqingsofrmpksnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003011.027242-746-183869290381641/AnsiballZ_stat.py'
Dec 06 06:36:51 compute-1 sudo[106227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:51.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 4083 keys, 9539136 bytes, temperature: kUnknown
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003011342370, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9539136, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9506464, "index_size": 21362, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 99144, "raw_average_key_size": 24, "raw_value_size": 9427353, "raw_average_value_size": 2308, "num_data_blocks": 941, "num_entries": 4083, "num_filter_entries": 4083, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765003010, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:51.342595) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9539136 bytes
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:51.469786) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 29.3 rd, 25.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 4555, records dropped: 472 output_compression: NoCompression
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:51.469827) EVENT_LOG_v1 {"time_micros": 1765003011469812, "job": 6, "event": "compaction_finished", "compaction_time_micros": 379257, "compaction_time_cpu_micros": 29516, "output_level": 6, "num_output_files": 1, "total_output_size": 9539136, "num_input_records": 4555, "num_output_records": 4083, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003011470374, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003011472177, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:50.962930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:51.472282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:51.472286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:51.472287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:51.472288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:36:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:36:51.472290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
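The rocksdb burst above is a routine manual compaction of the monitor's store.db: JOB 5 flushes the 5.5 MB memtable to a level-0 table (#17), JOB 6 compacts it together with #15 into a single level-6 table (#18), and the obsolete .log/.sst files are deleted. The same compaction can be requested explicitly from a node with an admin keyring:

    ceph tell mon.compute-1 compact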
Dec 06 06:36:51 compute-1 python3.9[106229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:51 compute-1 sudo[106227]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:51 compute-1 ceph-mon[81689]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:51 compute-1 sudo[106305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdterglswxcpdcylgynfpmbiyspswdxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003011.027242-746-183869290381641/AnsiballZ_file.py'
Dec 06 06:36:51 compute-1 sudo[106305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:51 compute-1 python3.9[106307]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2zjsqqnd recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:51 compute-1 sudo[106305]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:36:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:52.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:36:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:36:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:36:52 compute-1 ceph-mon[81689]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:36:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:36:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:36:52 compute-1 ceph-mon[81689]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:52 compute-1 sudo[106457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpxzobrxxhxkoyivxunknmmntmmfdcjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003012.4941392-782-53592287550556/AnsiballZ_stat.py'
Dec 06 06:36:52 compute-1 sudo[106457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:52 compute-1 python3.9[106459]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:52 compute-1 sudo[106457]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:53 compute-1 sudo[106535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaoycieqwudoumdmppruuvlsrzdzvvle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003012.4941392-782-53592287550556/AnsiballZ_file.py'
Dec 06 06:36:53 compute-1 sudo[106535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:53.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:53 compute-1 python3.9[106537]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:53 compute-1 sudo[106535]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:54 compute-1 sudo[106687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdsknslqhtlyagsxehwzsijmcqgdcqai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003013.7894921-821-40828639440421/AnsiballZ_command.py'
Dec 06 06:36:54 compute-1 sudo[106687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:54.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:54 compute-1 python3.9[106689]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:36:54 compute-1 sudo[106687]: pam_unix(sudo:session): session closed for user root
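"nft -j list ruleset" dumps the live ruleset as JSON, presumably so the play can compare it against the EDPM rule files rendered under /var/lib/edpm-config/firewall before the edpm_nftables_from_files step that follows. A quick sanity check on the same output, assuming jq:

    nft -j list ruleset | jq '.nftables | length'   # count of objects in the live ruleset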
Dec 06 06:36:55 compute-1 sudo[106840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eebnqxajxqqdxwpkogynrqodcpzwaqfs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003014.6471424-846-110957426138727/AnsiballZ_edpm_nftables_from_files.py'
Dec 06 06:36:55 compute-1 sudo[106840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:55 compute-1 python3[106842]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 06:36:55 compute-1 sudo[106840]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:55.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:55 compute-1 ceph-mon[81689]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:55 compute-1 sudo[106992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oohtufjqreylzprobjpyfhyqgphcfymo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003015.4916222-869-182624859096744/AnsiballZ_stat.py'
Dec 06 06:36:55 compute-1 sudo[106992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:55 compute-1 python3.9[106994]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:55 compute-1 sudo[106992]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:56 compute-1 sudo[107070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiflvdtmfmmsqxmeshtvkcsdjywnfmjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003015.4916222-869-182624859096744/AnsiballZ_file.py'
Dec 06 06:36:56 compute-1 sudo[107070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:56.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:56 compute-1 python3.9[107072]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:56 compute-1 sudo[107070]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:57 compute-1 sudo[107222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apsjsazobfgpvboicyaunnyypkfoqkmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003016.774236-905-19877645532409/AnsiballZ_stat.py'
Dec 06 06:36:57 compute-1 sudo[107222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:57 compute-1 ceph-mon[81689]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:36:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:57.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:36:57 compute-1 python3.9[107224]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:57 compute-1 sudo[107222]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:57 compute-1 sudo[107300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svexoufnnvkifppekaggunsubwcniypa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003016.774236-905-19877645532409/AnsiballZ_file.py'
Dec 06 06:36:57 compute-1 sudo[107300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:57 compute-1 python3.9[107302]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:57 compute-1 sudo[107300]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:36:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:58.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:36:58 compute-1 sudo[107452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unmyiwxjpkaqooddwoiuqmnoxjgxxwuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003018.10534-941-174098271707327/AnsiballZ_stat.py'
Dec 06 06:36:58 compute-1 sudo[107452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:58 compute-1 sudo[107453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:58 compute-1 sudo[107453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:58 compute-1 sudo[107453]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:58 compute-1 sudo[107480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:36:58 compute-1 sudo[107480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:58 compute-1 sudo[107480]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:58 compute-1 python3.9[107462]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:58 compute-1 sudo[107452]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:58 compute-1 sudo[107580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zudjwwsovpcizikqcmxerwmuhbtxlupw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003018.10534-941-174098271707327/AnsiballZ_file.py'
Dec 06 06:36:58 compute-1 sudo[107580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:59 compute-1 python3.9[107582]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:59 compute-1 sudo[107580]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:36:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:59.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:59 compute-1 ceph-mon[81689]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:59 compute-1 sudo[107732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvrbdlkekvpkixqtnmlvjakupaxtjhdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003019.3976188-977-25786860812995/AnsiballZ_stat.py'
Dec 06 06:36:59 compute-1 sudo[107732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:59 compute-1 python3.9[107734]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:59 compute-1 sudo[107732]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:00 compute-1 sudo[107810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pprmqffvxaowpikrobkiqdzdjnmhahfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003019.3976188-977-25786860812995/AnsiballZ_file.py'
Dec 06 06:37:00 compute-1 sudo[107810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:00.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:00 compute-1 python3.9[107812]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:00 compute-1 sudo[107810]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:01 compute-1 sudo[107962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awmcbzfcnnjbiseccpqzuqoenxwcmfmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003020.6611454-1013-77384271785902/AnsiballZ_stat.py'
Dec 06 06:37:01 compute-1 sudo[107962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:01.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:02.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:03.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:04.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:37:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:05.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:37:05 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:37:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:06.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:07.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:08.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:09.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:09 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:37:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:10.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:37:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:11.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:37:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:37:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:12.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:37:13 compute-1 python3.9[107964]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:37:13 compute-1 sudo[107962]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:13.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:13 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:37:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 252..980) lease_timeout -- calling new election
Dec 06 06:37:13 compute-1 sudo[108041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsqtxlxhjyfjyhqbujvvvwgebvrcmsry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003020.6611454-1013-77384271785902/AnsiballZ_file.py'
Dec 06 06:37:13 compute-1 sudo[108041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:13 compute-1 ceph-mon[81689]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Dec 06 06:37:13 compute-1 ceph-mon[81689]: paxos.2).electionLogic(24) init, last seen epoch 24
Dec 06 06:37:13 compute-1 python3.9[108043]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:13 compute-1 sudo[108041]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:13 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:14 compute-1 sudo[108193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgdsvkgilbczzgylxelzodnunbvvrgam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003034.0009959-1053-118341876004673/AnsiballZ_command.py'
Dec 06 06:37:14 compute-1 sudo[108193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:14.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:14 compute-1 python3.9[108195]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:37:14 compute-1 sudo[108193]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:15.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:15 compute-1 sudo[108348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjrzilceflazbyodkhruljifrwqlyrjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003034.810777-1076-73638515631647/AnsiballZ_blockinfile.py'
Dec 06 06:37:15 compute-1 sudo[108348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:15 compute-1 python3.9[108350]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:15 compute-1 sudo[108348]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:16 compute-1 sudo[108500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixcknnrxrqaxjtdwgrujcnodpfzrvqla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003035.8311665-1103-257442205945846/AnsiballZ_file.py'
Dec 06 06:37:16 compute-1 sudo[108500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:16 compute-1 python3.9[108502]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:16.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:16 compute-1 sudo[108500]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:16 compute-1 sudo[108652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwvrtbferpofgrsxtqkijwqrzmkqboiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003036.4505627-1103-82123278377808/AnsiballZ_file.py'
Dec 06 06:37:16 compute-1 sudo[108652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:16 compute-1 python3.9[108654]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:16 compute-1 sudo[108652]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:17.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:17 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:37:17 compute-1 sudo[108804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gojhyzosjcixjcyzwaaiqydqlfejepza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003037.2388287-1148-58113861620374/AnsiballZ_mount.py'
Dec 06 06:37:17 compute-1 sudo[108804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:17 compute-1 python3.9[108806]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 06:37:17 compute-1 sudo[108804]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:18.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:18 compute-1 sudo[108956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhkrkcslwkbxsodwbypxqhevipatybvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003038.034807-1148-86557561436503/AnsiballZ_mount.py'
Dec 06 06:37:18 compute-1 sudo[108956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:18 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 06:37:18 compute-1 python3.9[108958]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 06:37:18 compute-1 sudo[108956]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:18 compute-1 ceph-mon[81689]: paxos.2).electionLogic(25) init, last seen epoch 25, mid-election, bumping
Dec 06 06:37:19 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:19 compute-1 sshd-session[101372]: Connection closed by 192.168.122.30 port 55292
Dec 06 06:37:19 compute-1 sshd-session[101369]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:37:19 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Dec 06 06:37:19 compute-1 systemd[1]: session-39.scope: Consumed 27.546s CPU time.
Dec 06 06:37:19 compute-1 systemd-logind[788]: Session 39 logged out. Waiting for processes to exit.
Dec 06 06:37:19 compute-1 systemd-logind[788]: Removed session 39.
Dec 06 06:37:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:37:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:19.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:37:19 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:19 compute-1 ceph-mon[81689]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:19 compute-1 ceph-mon[81689]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:19 compute-1 ceph-mon[81689]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:19 compute-1 ceph-mon[81689]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:19 compute-1 ceph-mon[81689]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:19 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 06:37:19 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:37:19 compute-1 ceph-mon[81689]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:19 compute-1 ceph-mon[81689]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:19 compute-1 ceph-mon[81689]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:19 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:37:19 compute-1 ceph-mon[81689]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:19 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:19 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:19 compute-1 ceph-mon[81689]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:19 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:19 compute-1 ceph-mon[81689]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:37:19 compute-1 ceph-mon[81689]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:19 compute-1 ceph-mon[81689]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:19 compute-1 ceph-mon[81689]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:37:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:20.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:20 compute-1 ceph-mon[81689]: mon.compute-1 calling monitor election
Dec 06 06:37:20 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:37:20 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 06:37:20 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:37:20 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:20 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:20 compute-1 ceph-mon[81689]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:20 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:20 compute-1 ceph-mon[81689]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:37:20 compute-1 ceph-mon[81689]: Cluster is now healthy
Dec 06 06:37:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:21.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:22.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:22 compute-1 ceph-mon[81689]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:22 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 06:37:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:23.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:23 compute-1 ceph-mon[81689]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:24.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:24 compute-1 sshd-session[108986]: Accepted publickey for zuul from 192.168.122.30 port 42328 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:37:24 compute-1 systemd-logind[788]: New session 40 of user zuul.
Dec 06 06:37:24 compute-1 systemd[1]: Started Session 40 of User zuul.
Dec 06 06:37:24 compute-1 sshd-session[108986]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:37:25 compute-1 ceph-mon[81689]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:25 compute-1 sudo[109139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkelixyoghbhoputtxgkqgqrqypaktzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003044.7739217-24-138348937672177/AnsiballZ_tempfile.py'
Dec 06 06:37:25 compute-1 sudo[109139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:25.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:25 compute-1 python3.9[109141]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 06 06:37:25 compute-1 sudo[109139]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:26 compute-1 sudo[109291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehggrtsdgbqbmlxlljteivwkgwzhuzvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003045.6561482-60-213335040439612/AnsiballZ_stat.py'
Dec 06 06:37:26 compute-1 sudo[109291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:26 compute-1 python3.9[109293]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:37:26 compute-1 sudo[109291]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:37:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:26.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:37:26 compute-1 ceph-mon[81689]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:27 compute-1 sudo[109445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaghvlsqliyzvbjidqtsflzmrgvwoyks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003046.5091734-84-47994670608669/AnsiballZ_slurp.py'
Dec 06 06:37:27 compute-1 sudo[109445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:27 compute-1 python3.9[109447]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 06 06:37:27 compute-1 sudo[109445]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:37:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:37:27 compute-1 sudo[109597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npnszmgmnzvcnmhqzbdlwnxpaxawscdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003047.4851673-108-95501387863044/AnsiballZ_stat.py'
Dec 06 06:37:27 compute-1 sudo[109597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:27 compute-1 python3.9[109599]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.y8y93s3g follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:37:27 compute-1 sudo[109597]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:28.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:28 compute-1 sudo[109722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtnicwsaqnqcibdayhvmqerkdihklbbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003047.4851673-108-95501387863044/AnsiballZ_copy.py'
Dec 06 06:37:28 compute-1 sudo[109722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:28 compute-1 python3.9[109724]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.y8y93s3g mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003047.4851673-108-95501387863044/.source.y8y93s3g _original_basename=.ax1le0ve follow=False checksum=660676d376f77098a981422bf7716e6cca0e00ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:28 compute-1 sudo[109722]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:29.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:29 compute-1 sudo[109874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tujzljvnruhdsdpwfbqltydqjaghurhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003049.077414-153-111761621680530/AnsiballZ_setup.py'
Dec 06 06:37:29 compute-1 sudo[109874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:29 compute-1 python3.9[109876]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:37:29 compute-1 sudo[109874]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:30.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:31 compute-1 sudo[110026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjknwrccktklzzntxiawqvcyscarlohy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003050.2381978-178-34328975054029/AnsiballZ_blockinfile.py'
Dec 06 06:37:31 compute-1 sudo[110026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:31 compute-1 python3.9[110028]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXhoI1gGk2X98AQB4B5ZyJDup3CVjjMbiB6L30cKGocfIwEEwBz5d1xDpwA7euANP32L9+ddZ3Cn0VVuebREE5y184Yi1sdS+2O6H8M7BUT+RANGW4sY7jPXbTTJt6Bp2WWZu+AKxIRGMoo0UfvvFdscomysN+yxWB/KZ/niGARJyw61l1eO1/8shGJiP1LBuA4mdwHMTBYwXiYjk6LgI/i5m6zQk5ggmw2nKJqCwwPGyf2Xf7/LbRDgnryAatph9gA4JZ+QXULUJ8U+ILis30MPOGNA7vJ07ovYFAVwsoKYRCsxrEpg8AxMeRikU+CERKL2QQPABlbuJKnDZFrW2kY/L+B2g+i8FDWpaug4GQ6ZO7REu47ARhAUnuaIuJrhJgLrDq43vTqCgagXFz7UHhLI6KXLayNe3B/4It4UaZIVv3X8K+bZiI0zMWNhyjIBAU5VFZd0QjZDjt+Wv5WMYEFiWDyil+NVEHCxdSl46yd68mUvgMxWiv2Z57ICY9i+k=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF08740jXaMkiBlZr9+3kjjW/VDtcxAKNNm3eT7v4C7q
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJUJnc2vyZ7uGIoKmOtL2ok+zmjLSq/3vZNdtT52cNcj41FV66OIff0lT2r5neBPmMGSlOfqKMRY8iTu1fJs+/c=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTKFyfmCvsG9hwBkDHhSMH95Mc80Ub24C1l2ydnJGHzY4+vYnN+ZopDHd6HVKXgsP9msqkZAEdNXjbQn0sbWYw0v02CY6OcbMq306Rwo9N1fcSO6QVC6w79bGbRJasTA8jgAoGm1VSg3XpzU9C2Delv2ginn7LqCUou48j9w9jyaklDA2EV0anjvZ6hGLjcFaMQSlFPO8rr2pGS5nfNk2Re6GtYYWF4SPkd5xfecWi9szdT+tnG8VrwRX440/Pe3eV5UyVyHQzIEvxJK6DbTgtieOn0PVz3yHI3Uo8VatpsXahO8FsABY1GaI5QAj3qUudWz4YWsiV/qy0G5Wm27CB69LVPGWRr7y4+pVz0HxWiYGyYbRZdxHVZ3jqfGNBdMXJb2shp9BlIo+lpjEydkorHn66AKIpFCZFaGpHzkFPocaTP9yMPAxo/0YPllct7AqO/4CCBNhz4E6/0aMx2lsilFN6Oo/Mj0azpEQOHvuTEwqKJ5BK3MJCWgR11ccN7PM=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEDAV7OqwkHgR6GxlfPQDRQoPSdwQxyp1ILKzyPaTiD9
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeZVQBWQAXkYJ2DA26L5Jq1a5s2ScFbJ/Q/8jiRPf43wzW24IvQwAq99mI5t4QhVhmTRCbptw5L79elvFEyDY8=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXnm8ofk0+O7jBFjA8fasReq5BkfwaahdsMJZLiqB71W18faenPK2mMCw6/Lyxaif1wFQKBGWK21bUZXsQguqG7iZV8rSLfQmLlKel/CnEgi2ekXmEhhYL/5GAB8Hq0UxChaI6YQUu0gWku4cPruBw6/+Lz36/PvLLwKqQupEi8npPR3O7a4jF6Px433cpBkZ/hgwG2m5+61NMAcNSCjjNj1cdXLugpDN9+05k6A3QV1sDXS2Zx6zdxPhgmLDKZLBGesQaz+glwYPo/2KfwAwlU4tAuY5eSV2BPX04PqKqexy3iziex/q3pFmtD6f1cRmqFZiyNs+kOfsxwABOVKQ6GG1iKKgzHMsK/paqNWMoHBj0lrRIJoX88Fd2A5DdPs2UPHwy3iUxLYekNcgiigT3O/4x92cFRritKJ8i8j83J6wJOQ0DnpyWxu4WFCjI4mBSKeA0NQzqMPICgkmtmtYKfSlzSdaL9W56FqnfE5JHkSrspcV9xnX3D/ijnD/8PxU=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE4NkvA88mf0HvkHx7766e1aduefm45OK4uK2xW0LF1S
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIy6uH9XhIotH/UH4KICHfHUvzEiJMGjuOaC3xgcK45R/4kFK8w4At6C/G8bcf1l2+wNZCsHSuKrF09EzQCKCOU=
                                              create=True mode=0644 path=/tmp/ansible.y8y93s3g state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:31 compute-1 sudo[110026]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:31.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:32 compute-1 sudo[110178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scotzitmzkwwlalxnlbjflxnhwmdnneh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003051.5447047-202-169859116174038/AnsiballZ_command.py'
Dec 06 06:37:32 compute-1 sudo[110178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:32.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:32 compute-1 python3.9[110180]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.y8y93s3g' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:37:32 compute-1 sudo[110178]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:33 compute-1 sudo[110332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhqvmwtysgcbmvqnxchqkewhzagjrnds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003052.6244628-226-43934255812880/AnsiballZ_file.py'
Dec 06 06:37:33 compute-1 sudo[110332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:33 compute-1 python3.9[110334]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.y8y93s3g state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:33 compute-1 sudo[110332]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:33 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:37:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:33.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:33 compute-1 sshd-session[108989]: Connection closed by 192.168.122.30 port 42328
Dec 06 06:37:33 compute-1 sshd-session[108986]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:37:33 compute-1 systemd-logind[788]: Session 40 logged out. Waiting for processes to exit.
Dec 06 06:37:33 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Dec 06 06:37:33 compute-1 systemd[1]: session-40.scope: Consumed 4.673s CPU time.
Dec 06 06:37:33 compute-1 systemd-logind[788]: Removed session 40.
Dec 06 06:37:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:34.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:35.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:36.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:37 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:37:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:37:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:37.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:37:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 252..994) lease_timeout -- calling new election
Dec 06 06:37:38 compute-1 ceph-mon[81689]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Dec 06 06:37:38 compute-1 ceph-mon[81689]: paxos.2).electionLogic(30) init, last seen epoch 30
Dec 06 06:37:38 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:38 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:38.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:38 compute-1 sshd-session[110359]: Accepted publickey for zuul from 192.168.122.30 port 37602 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:37:38 compute-1 systemd-logind[788]: New session 41 of user zuul.
Dec 06 06:37:38 compute-1 systemd[1]: Started Session 41 of User zuul.
Dec 06 06:37:38 compute-1 sshd-session[110359]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:37:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:39.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:39 compute-1 python3.9[110512]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:37:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:40.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:40 compute-1 sudo[110666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvaziqmzmzbkcqhvzbpmoxocwdugbizd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003060.2785096-62-46614163458450/AnsiballZ_systemd.py'
Dec 06 06:37:40 compute-1 sudo[110666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:41 compute-1 python3.9[110668]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 06:37:41 compute-1 sudo[110666]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:41 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:37:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:37:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:41.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:37:41 compute-1 sudo[110820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkefukttkdjmsofjjjaczhycdpodtkwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003061.5111418-86-196674334525537/AnsiballZ_systemd.py'
Dec 06 06:37:41 compute-1 sudo[110820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:42 compute-1 python3.9[110822]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:37:42 compute-1 sudo[110820]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:37:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:42.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:37:42 compute-1 sudo[110973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbhrvugylzyrtxdhwrimzrxxcgehmvpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003062.4767394-113-171419241319255/AnsiballZ_command.py'
Dec 06 06:37:42 compute-1 sudo[110973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:43 compute-1 python3.9[110975]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:37:43 compute-1 sudo[110973]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:43.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:43 compute-1 sudo[111126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tszoxvvcyidrbikgydnadgmpsnorpnzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003063.3344133-137-218087277989912/AnsiballZ_stat.py'
Dec 06 06:37:43 compute-1 sudo[111126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:43 compute-1 python3.9[111128]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:37:43 compute-1 sudo[111126]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:44 compute-1 ceph-mon[81689]: paxos.2).electionLogic(31) init, last seen epoch 31, mid-election, bumping
Dec 06 06:37:44 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:44 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:44.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:44 compute-1 ceph-mon[81689]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:44 compute-1 ceph-mon[81689]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:44 compute-1 ceph-mon[81689]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:44 compute-1 ceph-mon[81689]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:44 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 06:37:44 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:37:44 compute-1 ceph-mon[81689]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:44 compute-1 ceph-mon[81689]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:44 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:37:44 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:44 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:44 compute-1 ceph-mon[81689]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:44 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:44 compute-1 ceph-mon[81689]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:37:44 compute-1 ceph-mon[81689]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:44 compute-1 ceph-mon[81689]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:44 compute-1 ceph-mon[81689]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:37:44 compute-1 ceph-mon[81689]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:44 compute-1 sudo[111278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkdvblilrmcqpuvwxeufvkgaeunbjfaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003064.1983716-164-145724212121404/AnsiballZ_file.py'
Dec 06 06:37:44 compute-1 sudo[111278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:44 compute-1 python3.9[111280]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:44 compute-1 sudo[111278]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:45 compute-1 sshd-session[110362]: Connection closed by 192.168.122.30 port 37602
Dec 06 06:37:45 compute-1 sshd-session[110359]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:37:45 compute-1 systemd[1]: session-41.scope: Deactivated successfully.
Dec 06 06:37:45 compute-1 systemd[1]: session-41.scope: Consumed 3.600s CPU time.
Dec 06 06:37:45 compute-1 systemd-logind[788]: Session 41 logged out. Waiting for processes to exit.
Dec 06 06:37:45 compute-1 systemd-logind[788]: Removed session 41.
Dec 06 06:37:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:45.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:46 compute-1 ceph-mon[81689]: mon.compute-1 calling monitor election
Dec 06 06:37:46 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:37:46 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 06:37:46 compute-1 ceph-mon[81689]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:46 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:37:46 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:46 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:46 compute-1 ceph-mon[81689]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:46 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:46 compute-1 ceph-mon[81689]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:37:46 compute-1 ceph-mon[81689]: Cluster is now healthy
Dec 06 06:37:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:46.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:47.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:47 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 06:37:47 compute-1 ceph-mon[81689]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:37:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:48.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:37:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:49.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:49 compute-1 ceph-mon[81689]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.544374) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069544449, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 628, "num_deletes": 251, "total_data_size": 896051, "memory_usage": 909032, "flush_reason": "Manual Compaction"}
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069554939, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 615781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9365, "largest_seqno": 9988, "table_properties": {"data_size": 612508, "index_size": 1117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8495, "raw_average_key_size": 19, "raw_value_size": 605511, "raw_average_value_size": 1424, "num_data_blocks": 50, "num_entries": 425, "num_filter_entries": 425, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003010, "oldest_key_time": 1765003010, "file_creation_time": 1765003069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 10644 microseconds, and 2851 cpu microseconds.
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.555017) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 615781 bytes OK
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.555049) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.557342) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.557372) EVENT_LOG_v1 {"time_micros": 1765003069557363, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.557394) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 892340, prev total WAL file size 892340, number of live WAL files 2.
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.558167) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(601KB)], [18(9315KB)]
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069558214, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10154917, "oldest_snapshot_seqno": -1}
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3986 keys, 8538819 bytes, temperature: kUnknown
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069644502, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 8538819, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8508459, "index_size": 19306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 98501, "raw_average_key_size": 24, "raw_value_size": 8432581, "raw_average_value_size": 2115, "num_data_blocks": 839, "num_entries": 3986, "num_filter_entries": 3986, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765003069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.644853) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 8538819 bytes
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.648179) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.5 rd, 98.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(30.4) write-amplify(13.9) OK, records in: 4508, records dropped: 522 output_compression: NoCompression
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.648216) EVENT_LOG_v1 {"time_micros": 1765003069648194, "job": 8, "event": "compaction_finished", "compaction_time_micros": 86389, "compaction_time_cpu_micros": 24154, "output_level": 6, "num_output_files": 1, "total_output_size": 8538819, "num_input_records": 4508, "num_output_records": 3986, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069648522, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069650639, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.558046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.650730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.650742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.650744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.650746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:37:49.650748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:37:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:50.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:37:50 compute-1 sshd-session[111305]: Accepted publickey for zuul from 192.168.122.30 port 56300 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:37:51 compute-1 systemd-logind[788]: New session 42 of user zuul.
Dec 06 06:37:51 compute-1 systemd[1]: Started Session 42 of User zuul.
Dec 06 06:37:51 compute-1 sshd-session[111305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:37:51 compute-1 ceph-mon[81689]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:37:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:51.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:37:52 compute-1 python3.9[111458]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:37:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:37:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:52.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:37:53 compute-1 ceph-mon[81689]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:53 compute-1 sudo[111612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpbihgpzqfggfmecyjvkpsdbcgzsiwuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003072.8415701-68-102442471039162/AnsiballZ_setup.py'
Dec 06 06:37:53 compute-1 sudo[111612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:53.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:53 compute-1 python3.9[111614]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:37:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:53 compute-1 sudo[111612]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:53 compute-1 sudo[111696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaclqjjwepxntegkrxmkraptvrxtiubo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003072.8415701-68-102442471039162/AnsiballZ_dnf.py'
Dec 06 06:37:53 compute-1 sudo[111696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:54 compute-1 python3.9[111698]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 06:37:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:54.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:55.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:55 compute-1 ceph-mon[81689]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:55 compute-1 sudo[111696]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:56.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:56 compute-1 python3.9[111849]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:37:56 compute-1 ceph-mon[81689]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:57.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:58 compute-1 python3.9[112000]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 06:37:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:58.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:58 compute-1 ceph-mon[81689]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:58 compute-1 sudo[112151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:58 compute-1 sudo[112151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:58 compute-1 sudo[112151]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:58 compute-1 python3.9[112150]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:37:58 compute-1 sudo[112176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:37:58 compute-1 sudo[112176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:58 compute-1 sudo[112176]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:58 compute-1 sudo[112201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:58 compute-1 sudo[112201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:58 compute-1 sudo[112201]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:59 compute-1 sudo[112250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 06:37:59 compute-1 sudo[112250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:59 compute-1 sudo[112250]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:37:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:59.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:59 compute-1 python3.9[112419]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:37:59 compute-1 sudo[112420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:59 compute-1 sudo[112420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:59 compute-1 sudo[112420]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:59 compute-1 sudo[112445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:37:59 compute-1 sudo[112445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:59 compute-1 sudo[112445]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:59 compute-1 sudo[112494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:59 compute-1 sudo[112494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:59 compute-1 sudo[112494]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:59 compute-1 sudo[112519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:37:59 compute-1 sudo[112519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:00 compute-1 sudo[112519]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:00 compute-1 sshd-session[111308]: Connection closed by 192.168.122.30 port 56300
Dec 06 06:38:00 compute-1 sshd-session[111305]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:38:00 compute-1 systemd-logind[788]: Session 42 logged out. Waiting for processes to exit.
Dec 06 06:38:00 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Dec 06 06:38:00 compute-1 systemd[1]: session-42.scope: Consumed 5.785s CPU time.
Dec 06 06:38:00 compute-1 systemd-logind[788]: Removed session 42.
Dec 06 06:38:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:00.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:38:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:01.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:38:01 compute-1 ceph-mon[81689]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:02.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:03.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:03 compute-1 ceph-mon[81689]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:38:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:38:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:04.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:38:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:38:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:38:04 compute-1 ceph-mon[81689]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:05 compute-1 sshd-session[112575]: Accepted publickey for zuul from 192.168.122.30 port 43152 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:38:05 compute-1 systemd-logind[788]: New session 43 of user zuul.
Dec 06 06:38:05 compute-1 systemd[1]: Started Session 43 of User zuul.
Dec 06 06:38:05 compute-1 sshd-session[112575]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:38:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:05.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:06 compute-1 python3.9[112728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:38:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:06.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:06 compute-1 ceph-mon[81689]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:38:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:07.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:38:07 compute-1 sudo[112882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srckxssmqemlyesxkarztysiwwlmzuuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003087.3079817-115-36195597565831/AnsiballZ_file.py'
Dec 06 06:38:07 compute-1 sudo[112882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:07 compute-1 python3.9[112884]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:07 compute-1 sudo[112882]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:08.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:08 compute-1 sudo[113034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azamwfhcxpddfzjlxyssflvvieehtfjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003088.1070943-115-227113385110934/AnsiballZ_file.py'
Dec 06 06:38:08 compute-1 sudo[113034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:08 compute-1 python3.9[113036]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:08 compute-1 sudo[113034]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:09 compute-1 sudo[113186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggvvizosplaxtjqncwrliqabfzihmxgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003088.7227356-158-119056864167570/AnsiballZ_stat.py'
Dec 06 06:38:09 compute-1 sudo[113186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:09 compute-1 python3.9[113188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:09 compute-1 sudo[113189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:09 compute-1 sudo[113189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:09 compute-1 sudo[113186]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:09 compute-1 sudo[113189]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:09 compute-1 sudo[113214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:38:09 compute-1 sudo[113214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:09 compute-1 sudo[113214]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:09 compute-1 ceph-mon[81689]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:09.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:09 compute-1 sudo[113359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onxnvatrjyturqqvveomamtesytrhzxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003088.7227356-158-119056864167570/AnsiballZ_copy.py'
Dec 06 06:38:09 compute-1 sudo[113359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:10 compute-1 python3.9[113361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003088.7227356-158-119056864167570/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=1723c4c6781b1fe3a1f6eb8007e4ee5b598adf70 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:10 compute-1 sudo[113359]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:10.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:10 compute-1 sudo[113511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hizivctfikkjlxidxrhxswvcihgrcdpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003090.1755707-158-90530220043812/AnsiballZ_stat.py'
Dec 06 06:38:10 compute-1 sudo[113511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:10 compute-1 python3.9[113513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:10 compute-1 sudo[113511]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:10 compute-1 sudo[113634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tljabajaqymyuglnnxqwzzvuwooebtth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003090.1755707-158-90530220043812/AnsiballZ_copy.py'
Dec 06 06:38:10 compute-1 sudo[113634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:11 compute-1 python3.9[113636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003090.1755707-158-90530220043812/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=ffa5eebd660d3b98ce1601104e9075977b0c55a4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:11 compute-1 sudo[113634]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:11 compute-1 ceph-mon[81689]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:11.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:11 compute-1 sudo[113786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsjliikhnrvphbpedyjbyiabnrcqbzcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003091.2693925-158-56969782097834/AnsiballZ_stat.py'
Dec 06 06:38:11 compute-1 sudo[113786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:11 compute-1 python3.9[113788]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:11 compute-1 sudo[113786]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:12 compute-1 sudo[113909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfaeepithhtfudmxlzcrzxwljohvsgnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003091.2693925-158-56969782097834/AnsiballZ_copy.py'
Dec 06 06:38:12 compute-1 sudo[113909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:12 compute-1 python3.9[113911]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003091.2693925-158-56969782097834/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=44c84a6a268a61e4b70eca67a7dc95a57dfe1434 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:12 compute-1 sudo[113909]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 06:38:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:12.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 06:38:12 compute-1 sudo[114061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jobkfmxwhwrhssxcejjtqjnbydahgsjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003092.4034317-286-250386105064464/AnsiballZ_file.py'
Dec 06 06:38:12 compute-1 sudo[114061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:12 compute-1 python3.9[114063]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:12 compute-1 sudo[114061]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:38:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:14.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:38:14 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:14.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:38:14 compute-1 ceph-mon[81689]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:14 compute-1 sudo[114213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnarslglymwbuiozyuxnguddnfbkichm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003094.4156845-286-16603728094898/AnsiballZ_file.py'
Dec 06 06:38:14 compute-1 sudo[114213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:14 compute-1 python3.9[114215]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:14 compute-1 sudo[114213]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:15 compute-1 sudo[114365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxytrnvnfmcibnqyeliqhiusdzjunarg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003095.0022368-362-19187851878662/AnsiballZ_stat.py'
Dec 06 06:38:15 compute-1 sudo[114365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:15 compute-1 ceph-mon[81689]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:15 compute-1 python3.9[114367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:15 compute-1 sudo[114365]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:15 compute-1 sudo[114488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgshvvmvhonzfpwvqxmqzuucfwnbzmom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003095.0022368-362-19187851878662/AnsiballZ_copy.py'
Dec 06 06:38:15 compute-1 sudo[114488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:15 compute-1 python3.9[114490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003095.0022368-362-19187851878662/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=4ffc6d0364b4561225756fe2606081421553a608 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:15 compute-1 sudo[114488]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:16 compute-1 sudo[114640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsukhmksegpddmfjjcbzktkucmaiujym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003096.0916271-362-271668862153637/AnsiballZ_stat.py'
Dec 06 06:38:16 compute-1 sudo[114640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:16.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:16.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:16 compute-1 python3.9[114642]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:16 compute-1 sudo[114640]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:16 compute-1 ceph-mon[81689]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:16 compute-1 sudo[114763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyainvmysekopvjwcwnyfsnnkhturohj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003096.0916271-362-271668862153637/AnsiballZ_copy.py'
Dec 06 06:38:16 compute-1 sudo[114763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:17 compute-1 python3.9[114765]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003096.0916271-362-271668862153637/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=7302dfe8a711171d00de900e7fa863eaee0e2101 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:17 compute-1 sudo[114763]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:17 compute-1 sudo[114915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncpelfzizlrjgquvwepodyzjwpkigczg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003097.1392853-362-100878766167878/AnsiballZ_stat.py'
Dec 06 06:38:17 compute-1 sudo[114915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:17 compute-1 python3.9[114917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:17 compute-1 sudo[114915]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:17 compute-1 sudo[115038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpcxjquwtnevsxkyetvxkumjbuqyymqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003097.1392853-362-100878766167878/AnsiballZ_copy.py'
Dec 06 06:38:17 compute-1 sudo[115038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:18 compute-1 python3.9[115040]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003097.1392853-362-100878766167878/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=34f07c8821eed1c98bc0cf0c8c18b071aa0e492a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:18 compute-1 sudo[115038]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:38:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:18 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:18.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:18.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:18 compute-1 sudo[115190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zajmcpldhbxzglhysrqkraoorhxhxtcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003098.294158-485-216505069695864/AnsiballZ_file.py'
Dec 06 06:38:18 compute-1 sudo[115190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:18 compute-1 python3.9[115192]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:18 compute-1 sudo[115190]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:19 compute-1 sudo[115342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phiwcgsnymjmtfmpvwuwmbfvsvoaeqnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003098.8590088-485-71960397418203/AnsiballZ_file.py'
Dec 06 06:38:19 compute-1 sudo[115342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:19 compute-1 python3.9[115344]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:19 compute-1 sudo[115342]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:19 compute-1 sudo[115494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smrrtpqbumiqshacqsxryhxcxjwppulu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003099.6171522-527-111639240807157/AnsiballZ_stat.py'
Dec 06 06:38:19 compute-1 sudo[115494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:20 compute-1 python3.9[115496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:20 compute-1 sudo[115494]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:20 compute-1 ceph-mon[81689]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:20 compute-1 sudo[115617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtpoipcapmilmknakujjmiinroomohla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003099.6171522-527-111639240807157/AnsiballZ_copy.py'
Dec 06 06:38:20 compute-1 sudo[115617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:38:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:38:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:20.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:38:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:38:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:20.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:38:20 compute-1 python3.9[115619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003099.6171522-527-111639240807157/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=4336862804b5bfd3c73699935a8d20bbfc3dd0dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:20 compute-1 sudo[115617]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:20 compute-1 sudo[115769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvugmbhoetlseivzndqsswbkizdkncl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003100.6619673-527-80019986584440/AnsiballZ_stat.py'
Dec 06 06:38:20 compute-1 sudo[115769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:21 compute-1 python3.9[115771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:21 compute-1 sudo[115769]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:21 compute-1 sudo[115892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izkzbsnnzxjoxcklqbfbvnfuiigxxkxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003100.6619673-527-80019986584440/AnsiballZ_copy.py'
Dec 06 06:38:21 compute-1 sudo[115892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:21 compute-1 python3.9[115894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003100.6619673-527-80019986584440/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=7302dfe8a711171d00de900e7fa863eaee0e2101 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:21 compute-1 sudo[115892]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:22 compute-1 sudo[116044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esuwptyfksjflpnhakvofapjiwsokruq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003101.9241602-527-256599583093255/AnsiballZ_stat.py'
Dec 06 06:38:22 compute-1 sudo[116044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:22.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:22.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:38:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Cumulative writes: 5947 writes, 26K keys, 5947 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5947 writes, 823 syncs, 7.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5947 writes, 26K keys, 5947 commit groups, 1.0 writes per commit group, ingest: 19.38 MB, 0.03 MB/s
                                           Interval WAL: 5947 writes, 823 syncs, 7.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 605.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 06:38:22 compute-1 python3.9[116046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:22 compute-1 sudo[116044]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:22 compute-1 ceph-mon[81689]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:22 compute-1 sudo[116167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysbvnqpvryhyaxpyfzfyqjhdaupxwujs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003101.9241602-527-256599583093255/AnsiballZ_copy.py'
Dec 06 06:38:22 compute-1 sudo[116167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:23 compute-1 python3.9[116169]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003101.9241602-527-256599583093255/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=e1164d0d37390b2d0209d3789affc5f7c016a554 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:23 compute-1 sudo[116167]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:23 compute-1 ceph-mon[81689]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:24 compute-1 sudo[116319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nytqhgevyrdzmszxqxnorvmlhqgxbgmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003103.876911-705-273232555776479/AnsiballZ_file.py'
Dec 06 06:38:24 compute-1 sudo[116319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:24 compute-1 python3.9[116321]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:24 compute-1 sudo[116319]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:24.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:24.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:24 compute-1 sudo[116471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcabkveqxvxbffzrxyahnrmurbhvhteq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003104.4467986-731-182327645254909/AnsiballZ_stat.py'
Dec 06 06:38:24 compute-1 sudo[116471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:24 compute-1 python3.9[116473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:24 compute-1 sudo[116471]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:25 compute-1 ceph-mon[81689]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:25 compute-1 sudo[116594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plozcvmyrduybycnbpmazjfrqdktllxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003104.4467986-731-182327645254909/AnsiballZ_copy.py'
Dec 06 06:38:25 compute-1 sudo[116594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:25 compute-1 python3.9[116596]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003104.4467986-731-182327645254909/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:25 compute-1 sudo[116594]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:25 compute-1 sudo[116746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlpoxsdrhkrhzxtbbyusvtofkysnvoud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003105.5579379-773-38745489418663/AnsiballZ_file.py'
Dec 06 06:38:25 compute-1 sudo[116746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:25 compute-1 python3.9[116748]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:25 compute-1 sudo[116746]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:26 compute-1 sudo[116898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvzvzmnpolvbegtsejccrjbfvipuuama ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003106.1168032-794-35678338451667/AnsiballZ_stat.py'
Dec 06 06:38:26 compute-1 sudo[116898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:38:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:26.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:38:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:26.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:26 compute-1 python3.9[116900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:26 compute-1 sudo[116898]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:26 compute-1 ceph-mon[81689]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:27 compute-1 sudo[117021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xosyxqqbcvsnfcoickbpnvcpnodnttcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003106.1168032-794-35678338451667/AnsiballZ_copy.py'
Dec 06 06:38:27 compute-1 sudo[117021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:27 compute-1 python3.9[117023]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003106.1168032-794-35678338451667/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:27 compute-1 sudo[117021]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:27 compute-1 sudo[117173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snqcuftypkkdvggmbnboukdalamwvkhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003107.4867396-844-69440758214944/AnsiballZ_file.py'
Dec 06 06:38:27 compute-1 sudo[117173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:27 compute-1 python3.9[117175]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:27 compute-1 sudo[117173]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:28 compute-1 sudo[117325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzchamaqiocmaypwxneoqdpcyfkolcyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003108.0652213-866-205939308060843/AnsiballZ_stat.py'
Dec 06 06:38:28 compute-1 sudo[117325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:28.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:38:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:28.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:38:28 compute-1 python3.9[117327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:28 compute-1 sudo[117325]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:28 compute-1 sudo[117448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhttekcidmzssaxjtnnbbgqktzcsenog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003108.0652213-866-205939308060843/AnsiballZ_copy.py'
Dec 06 06:38:28 compute-1 sudo[117448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:29 compute-1 ceph-mon[81689]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:29 compute-1 python3.9[117450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003108.0652213-866-205939308060843/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:29 compute-1 sudo[117448]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:29 compute-1 sudo[117600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iunqxuhvgenusgptidlukzhrprayfxbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003109.2433183-915-128482193005890/AnsiballZ_file.py'
Dec 06 06:38:29 compute-1 sudo[117600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:29 compute-1 python3.9[117602]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:29 compute-1 sudo[117600]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:30 compute-1 sudo[117752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctskhtxxxzxanavjgvouhjerbxszazbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003109.927621-938-149984377751282/AnsiballZ_stat.py'
Dec 06 06:38:30 compute-1 sudo[117752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:30 compute-1 python3.9[117754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:30 compute-1 sudo[117752]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:38:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:30.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:38:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:38:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:30.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:38:30 compute-1 ceph-mon[81689]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:30 compute-1 sudo[117875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbkgjbzypnhdxsqlorytzwqfqnlgjapc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003109.927621-938-149984377751282/AnsiballZ_copy.py'
Dec 06 06:38:30 compute-1 sudo[117875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:30 compute-1 python3.9[117877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003109.927621-938-149984377751282/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:30 compute-1 sudo[117875]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:31 compute-1 sudo[118027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lftieifxljqpwcrqesglmjespiaunnxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003111.1363027-985-254002992316851/AnsiballZ_file.py'
Dec 06 06:38:31 compute-1 sudo[118027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:31 compute-1 python3.9[118029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:31 compute-1 sudo[118027]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:32 compute-1 sudo[118179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgnpaahjdhefzhkttorlxmrziayhygif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003111.9134939-1008-264487742547088/AnsiballZ_stat.py'
Dec 06 06:38:32 compute-1 sudo[118179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:32 compute-1 python3.9[118181]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:32 compute-1 sudo[118179]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:32.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:32.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:32 compute-1 sudo[118302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grqetgeyvvqujourtudvvrriygdjqgpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003111.9134939-1008-264487742547088/AnsiballZ_copy.py'
Dec 06 06:38:32 compute-1 sudo[118302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:32 compute-1 python3.9[118304]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003111.9134939-1008-264487742547088/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:32 compute-1 sudo[118302]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:33 compute-1 ceph-mon[81689]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:33 compute-1 sudo[118454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzhmvzhrtqrbkgtnlpxppzydytekxsfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003113.2546225-1055-158882024980981/AnsiballZ_file.py'
Dec 06 06:38:33 compute-1 sudo[118454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:33 compute-1 python3.9[118456]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:33 compute-1 sudo[118454]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:34 compute-1 sudo[118606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kytppjwbxzdbhwtoqghthhwhyohwpstg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003113.9415476-1078-229149313557614/AnsiballZ_stat.py'
Dec 06 06:38:34 compute-1 sudo[118606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:34 compute-1 python3.9[118608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:34.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:34.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:34 compute-1 sudo[118606]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:34 compute-1 sudo[118729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwuptzdairxndygwbmrnfsufnxegocaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003113.9415476-1078-229149313557614/AnsiballZ_copy.py'
Dec 06 06:38:34 compute-1 sudo[118729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:34 compute-1 ceph-mon[81689]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:34 compute-1 python3.9[118731]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003113.9415476-1078-229149313557614/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:34 compute-1 sudo[118729]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:36 compute-1 sshd-session[112578]: Connection closed by 192.168.122.30 port 43152
Dec 06 06:38:36 compute-1 sshd-session[112575]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:38:36 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Dec 06 06:38:36 compute-1 systemd[1]: session-43.scope: Consumed 20.941s CPU time.
Dec 06 06:38:36 compute-1 systemd-logind[788]: Session 43 logged out. Waiting for processes to exit.
Dec 06 06:38:36 compute-1 systemd-logind[788]: Removed session 43.
Dec 06 06:38:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:36.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:38:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:36.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:38:37 compute-1 ceph-mon[81689]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:38:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:38.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:38:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:38.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:38 compute-1 ceph-mon[81689]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:40.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:38:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:40.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:38:41 compute-1 ceph-mon[81689]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:41 compute-1 sshd-session[118756]: Accepted publickey for zuul from 192.168.122.30 port 52312 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:38:41 compute-1 systemd-logind[788]: New session 44 of user zuul.
Dec 06 06:38:41 compute-1 systemd[1]: Started Session 44 of User zuul.
Dec 06 06:38:41 compute-1 sshd-session[118756]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:38:42 compute-1 sudo[118909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqlvfptpvtvwnkwwjnuyvrtnocckxznu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003121.7632344-32-281439383041308/AnsiballZ_file.py'
Dec 06 06:38:42 compute-1 sudo[118909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:42.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:42.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:42 compute-1 python3.9[118911]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:42 compute-1 sudo[118909]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:42 compute-1 ceph-mon[81689]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:43 compute-1 sudo[119061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqraqwmqlpajzurgxffywkfgicweqyjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003122.6811898-68-94430177771493/AnsiballZ_stat.py'
Dec 06 06:38:43 compute-1 sudo[119061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:43 compute-1 python3.9[119063]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:43 compute-1 sudo[119061]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:43 compute-1 sudo[119184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhzsfbyocwbhhtickconzgxnqvbdsxxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003122.6811898-68-94430177771493/AnsiballZ_copy.py'
Dec 06 06:38:43 compute-1 sudo[119184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:44 compute-1 python3.9[119186]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003122.6811898-68-94430177771493/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=eb306234ce11ca94053ba9deb99a6e4ceca2e349 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:44 compute-1 sudo[119184]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:44.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:44.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:44 compute-1 sudo[119336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsdqpqavyqeldgzlkolpbrfnhgvivsrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003124.2796445-68-266614563642880/AnsiballZ_stat.py'
Dec 06 06:38:44 compute-1 sudo[119336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:44 compute-1 python3.9[119338]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:44 compute-1 sudo[119336]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:45 compute-1 sudo[119459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzourezxvyofedryxhksnnibqeukqobg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003124.2796445-68-266614563642880/AnsiballZ_copy.py'
Dec 06 06:38:45 compute-1 sudo[119459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:45 compute-1 python3.9[119461]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003124.2796445-68-266614563642880/.source.conf _original_basename=ceph.conf follow=False checksum=72f9497223d5391694ed548fdd27afc9585eca3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:45 compute-1 sudo[119459]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:45 compute-1 ceph-mon[81689]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:45 compute-1 sshd-session[118759]: Connection closed by 192.168.122.30 port 52312
Dec 06 06:38:45 compute-1 sshd-session[118756]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:38:45 compute-1 systemd[1]: session-44.scope: Deactivated successfully.
Dec 06 06:38:45 compute-1 systemd[1]: session-44.scope: Consumed 2.380s CPU time.
Dec 06 06:38:45 compute-1 systemd-logind[788]: Session 44 logged out. Waiting for processes to exit.
Dec 06 06:38:45 compute-1 systemd-logind[788]: Removed session 44.
Dec 06 06:38:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:38:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:46.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:46 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:46.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:46 compute-1 ceph-mon[81689]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:38:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:38:48 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:48.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:48.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:38:48 compute-1 ceph-mon[81689]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:38:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:50 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:50.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:50.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:51 compute-1 sshd-session[119486]: Accepted publickey for zuul from 192.168.122.30 port 56248 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:38:51 compute-1 systemd-logind[788]: New session 45 of user zuul.
Dec 06 06:38:51 compute-1 systemd[1]: Started Session 45 of User zuul.
Dec 06 06:38:51 compute-1 sshd-session[119486]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:38:52 compute-1 ceph-mon[81689]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:38:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:38:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:52.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:38:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:38:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:52.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:38:52 compute-1 python3.9[119639]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:38:53 compute-1 ceph-mon[81689]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:53 compute-1 sudo[119793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mecrzswjcsjpoydywaijnjpiomhddjyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003133.240806-69-212997556448167/AnsiballZ_file.py'
Dec 06 06:38:53 compute-1 sudo[119793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:53 compute-1 python3.9[119795]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:53 compute-1 sudo[119793]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:54 compute-1 sudo[119945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udfzcybjdzoiwkgevbnquabdudlgemlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003134.023615-69-50324578188768/AnsiballZ_file.py'
Dec 06 06:38:54 compute-1 sudo[119945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:38:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:38:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:54.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:38:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:54 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:54.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:54 compute-1 python3.9[119947]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:54 compute-1 sudo[119945]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:55 compute-1 ceph-mon[81689]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:55 compute-1 python3.9[120097]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:38:56 compute-1 sudo[120247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhkobdvtourdukohrtxvonlukhutlxvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003135.6377308-138-162842965546828/AnsiballZ_seboolean.py'
Dec 06 06:38:56 compute-1 sudo[120247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:56 compute-1 python3.9[120249]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 06 06:38:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:38:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:56.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:38:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:56.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:57 compute-1 ceph-mon[81689]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:57 compute-1 sudo[120247]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:38:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:38:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:58 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:58.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:58.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:59 compute-1 ceph-mon[81689]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:59 compute-1 sudo[120403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsrfvceewofommlqnyvdpwujoghlxjxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003138.8418174-168-88147886632192/AnsiballZ_setup.py'
Dec 06 06:38:59 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 06 06:38:59 compute-1 sudo[120403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:59 compute-1 python3.9[120405]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:38:59 compute-1 sudo[120403]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:00 compute-1 sudo[120487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arzqaqmqehaluuoubraijhumjoowmoio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003138.8418174-168-88147886632192/AnsiballZ_dnf.py'
Dec 06 06:39:00 compute-1 sudo[120487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:00 compute-1 python3.9[120489]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:39:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:00.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:00 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:00.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:01 compute-1 ceph-mon[81689]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:01 compute-1 sudo[120487]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:39:02 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:02.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:39:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:39:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:02.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:39:02 compute-1 sudo[120640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlbzpjxbnwxevfwmdjshjmqxynisralg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003142.0421028-204-225193023998638/AnsiballZ_systemd.py'
Dec 06 06:39:02 compute-1 sudo[120640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:03 compute-1 python3.9[120642]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:39:03 compute-1 ceph-mon[81689]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:03 compute-1 sudo[120640]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:04 compute-1 sudo[120795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdyunnbkgtalrahszaakntcaasgxwkuc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003143.9370782-228-249252646998326/AnsiballZ_edpm_nftables_snippet.py'
Dec 06 06:39:04 compute-1 sudo[120795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:04 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:04.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:04.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:04 compute-1 python3[120797]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 06 06:39:04 compute-1 sudo[120795]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:05 compute-1 ceph-mon[81689]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:05 compute-1 sudo[120947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubreoonvqvwhbryrittgemrjmkoyqxjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003144.9172027-255-92131645567894/AnsiballZ_file.py'
Dec 06 06:39:05 compute-1 sudo[120947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:05 compute-1 python3.9[120949]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:05 compute-1 sudo[120947]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:06 compute-1 sudo[121099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrgjuqtohrcvgdpfmcdsjkkfkdddjdcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003145.592542-279-31948150382344/AnsiballZ_stat.py'
Dec 06 06:39:06 compute-1 sudo[121099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:06 compute-1 python3.9[121101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:06 compute-1 sudo[121099]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:06 compute-1 sudo[121177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzcfoblbxmxmqviccftzqbirlonqjrbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003145.592542-279-31948150382344/AnsiballZ_file.py'
Dec 06 06:39:06 compute-1 sudo[121177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:06.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:06 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:06.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:06 compute-1 python3.9[121179]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:06 compute-1 sudo[121177]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:07 compute-1 sudo[121329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkykohuradfgslkxuggaisyarnsgnwmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003146.97712-315-231829526366172/AnsiballZ_stat.py'
Dec 06 06:39:07 compute-1 sudo[121329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:07 compute-1 ceph-mon[81689]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:07 compute-1 python3.9[121331]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:07 compute-1 sudo[121329]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:08 compute-1 sudo[121407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjttpufdmnepliqdfbddqucaknpuuifc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003146.97712-315-231829526366172/AnsiballZ_file.py'
Dec 06 06:39:08 compute-1 sudo[121407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:08 compute-1 python3.9[121409]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.i_ti5zrb recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:08 compute-1 sudo[121407]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:08.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:08 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:08.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:08 compute-1 ceph-mon[81689]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:08 compute-1 sudo[121559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzypnzhdgifumhgylfysiuxugjtmwfnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003148.3948927-351-58429508357884/AnsiballZ_stat.py'
Dec 06 06:39:08 compute-1 sudo[121559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:09 compute-1 python3.9[121561]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:09 compute-1 sudo[121559]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:09 compute-1 sudo[121637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vctghynvxtkcqrlszqghfmrezskaypuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003148.3948927-351-58429508357884/AnsiballZ_file.py'
Dec 06 06:39:09 compute-1 sudo[121637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:09 compute-1 sudo[121640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:09 compute-1 sudo[121640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:09 compute-1 sudo[121640]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:09 compute-1 python3.9[121639]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:09 compute-1 sudo[121665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:39:09 compute-1 sudo[121665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:09 compute-1 sudo[121665]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:09 compute-1 sudo[121637]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:09 compute-1 sudo[121690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:09 compute-1 sudo[121690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:09 compute-1 sudo[121690]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:09 compute-1 sudo[121739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:39:09 compute-1 sudo[121739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:10 compute-1 sudo[121739]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:10 compute-1 sudo[121919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfwmilwqozorrwiupsmlzdflldvjpsle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003149.8793974-390-74358468623386/AnsiballZ_command.py'
Dec 06 06:39:10 compute-1 sudo[121919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:39:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:10.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:39:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:39:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:10.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
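
The anonymous "HEAD / HTTP/1.0" requests recurring every two seconds from 192.168.122.100 and 192.168.122.102 throughout this section have the shape of load-balancer health probes against the RGW beast frontend. An equivalent manual probe would be the sketch below; the listening port is not shown in the log and 8080 is an assumption:

    # Hypothetical health probe; replace 8080 with the real beast port
    curl -sI --http1.0 http://192.168.122.102:8080/ | head -n 1
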
Dec 06 06:39:10 compute-1 python3.9[121921]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:10 compute-1 sudo[121919]: pam_unix(sudo:session): session closed for user root
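
The nft -j list ruleset task at 06:39:10 dumps the live ruleset in libnftables JSON form so the playbook can inspect it. The same query from a shell, with an illustrative jq filter (jq itself does not appear in the log):

    # Dump the active ruleset as JSON, as the task above does
    nft -j list ruleset > /tmp/ruleset.json
    # Illustrative: list table names from the {"nftables": [...]} document
    jq -r '.nftables[] | select(.table) | .table.name' /tmp/ruleset.json
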
Dec 06 06:39:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 06:39:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 06:39:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 06 06:39:11 compute-1 ceph-mon[81689]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:11 compute-1 sudo[122072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvbuubdlommuviyhrqhmohyynidxvcsl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003150.979975-414-117906489379083/AnsiballZ_edpm_nftables_from_files.py'
Dec 06 06:39:11 compute-1 sudo[122072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:11 compute-1 python3[122074]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 06:39:11 compute-1 sudo[122072]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:12 compute-1 sudo[122224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjwmakqwmhjhuwtnguuziftzwzwhonax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003151.8196907-438-177251453296748/AnsiballZ_stat.py'
Dec 06 06:39:12 compute-1 sudo[122224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:12 compute-1 python3.9[122226]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:12 compute-1 sudo[122224]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:12.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:12.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:12 compute-1 sudo[122349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjrtxqovfcqelofwvdcvxyybkpcayqwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003151.8196907-438-177251453296748/AnsiballZ_copy.py'
Dec 06 06:39:12 compute-1 sudo[122349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:13 compute-1 python3.9[122351]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003151.8196907-438-177251453296748/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:13 compute-1 sudo[122349]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:13 compute-1 ceph-mon[81689]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:39:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:39:13 compute-1 sudo[122501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fizkktjhrsdawiqyfvzunnqojwhzfnlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003153.24436-483-128312606252053/AnsiballZ_stat.py'
Dec 06 06:39:13 compute-1 sudo[122501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:13 compute-1 python3.9[122503]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:13 compute-1 sudo[122501]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:14 compute-1 sudo[122626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtfupkblziugbykvdrlnpxsbwqckbogs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003153.24436-483-128312606252053/AnsiballZ_copy.py'
Dec 06 06:39:14 compute-1 sudo[122626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:14 compute-1 python3.9[122628]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003153.24436-483-128312606252053/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:14 compute-1 sudo[122626]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:39:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:39:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:39:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:14.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:39:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:14.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:39:14 compute-1 sudo[122778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebunvsidpqkdzhzkaskedcdgwugytnsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003154.6274137-528-160640936229070/AnsiballZ_stat.py'
Dec 06 06:39:14 compute-1 sudo[122778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:15 compute-1 python3.9[122780]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:15 compute-1 sudo[122778]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:15 compute-1 ceph-mon[81689]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:15 compute-1 sudo[122903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llwhgmnatiqwvjkjvgmjozqfanexorcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003154.6274137-528-160640936229070/AnsiballZ_copy.py'
Dec 06 06:39:15 compute-1 sudo[122903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:15 compute-1 python3.9[122905]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003154.6274137-528-160640936229070/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:15 compute-1 sudo[122903]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:16 compute-1 sudo[123055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtpvluafpkcqdfxeotpklhbojxcaafml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003155.940522-573-41223778305696/AnsiballZ_stat.py'
Dec 06 06:39:16 compute-1 sudo[123055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:16 compute-1 python3.9[123057]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:16 compute-1 sudo[123055]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:16.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:39:16 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:16.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:39:16 compute-1 sudo[123180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aultfjizzvcfgabpxrasdxlicsyjoovv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003155.940522-573-41223778305696/AnsiballZ_copy.py'
Dec 06 06:39:16 compute-1 sudo[123180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:16 compute-1 python3.9[123182]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003155.940522-573-41223778305696/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:17 compute-1 sudo[123180]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:17 compute-1 ceph-mon[81689]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:17 compute-1 sudo[123332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eylateyjajzlqtydgctziajlrypyhujk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003157.3630762-618-100872633685227/AnsiballZ_stat.py'
Dec 06 06:39:17 compute-1 sudo[123332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:17 compute-1 python3.9[123334]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:17 compute-1 sudo[123332]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:18 compute-1 sudo[123457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imitzmcmkjhmqdsbspljdefsbzgpffja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003157.3630762-618-100872633685227/AnsiballZ_copy.py'
Dec 06 06:39:18 compute-1 sudo[123457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:18.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:18 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:18.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:18 compute-1 python3.9[123459]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003157.3630762-618-100872633685227/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:18 compute-1 sudo[123457]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:19 compute-1 sudo[123609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvsfieuewxatxxhfekznipmlxwhfxosd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003158.852928-663-281458364914097/AnsiballZ_file.py'
Dec 06 06:39:19 compute-1 sudo[123609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:19 compute-1 python3.9[123611]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:19 compute-1 sudo[123609]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:19 compute-1 ceph-mon[81689]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:19 compute-1 sudo[123761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exqaetloulxlstwwgsaihjmusvguqjjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003159.5762794-687-30302115717656/AnsiballZ_command.py'
Dec 06 06:39:19 compute-1 sudo[123761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:19 compute-1 sudo[123763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:19 compute-1 sudo[123763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:19 compute-1 sudo[123763]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:19 compute-1 sudo[123789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:39:19 compute-1 sudo[123789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:19 compute-1 sudo[123789]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:20 compute-1 python3.9[123764]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:20 compute-1 sudo[123761]: pam_unix(sudo:session): session closed for user root
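
The command at 06:39:20 concatenates the five staged include files in dependency order (chains, flushes, rules, update-jumps, jumps) and feeds them to nft -c, which parses and validates the combined ruleset without committing anything to the kernel:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: check only, no commit
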
Dec 06 06:39:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:39:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:39:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:20.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:39:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:20.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:39:20 compute-1 sudo[123966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mekhqkhotrvmvytbfdivypmkwdjghkyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003160.2772841-711-144314266527982/AnsiballZ_blockinfile.py'
Dec 06 06:39:20 compute-1 sudo[123966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:20 compute-1 ceph-mon[81689]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:20 compute-1 python3.9[123968]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:20 compute-1 sudo[123966]: pam_unix(sudo:session): session closed for user root
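
Given the marker and block parameters logged above (marker_begin=BEGIN, marker_end=END, marker=# {mark} ANSIBLE MANAGED BLOCK), the managed block that blockinfile maintains in /etc/sysconfig/nftables.conf should look like the fragment below; validate=nft -c -f %s makes Ansible syntax-check a temporary copy before the file is replaced:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
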
Dec 06 06:39:21 compute-1 sudo[124118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eksrdldpuhanztsmtegtcealnftcloon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003161.2162716-738-98866275912486/AnsiballZ_command.py'
Dec 06 06:39:21 compute-1 sudo[124118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:21 compute-1 python3.9[124120]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:21 compute-1 sudo[124118]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:22 compute-1 sudo[124271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gleeqcinuvugfznxzztswxfglbamfmor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003162.0072718-762-186379700122233/AnsiballZ_stat.py'
Dec 06 06:39:22 compute-1 sudo[124271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:39:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:22.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:39:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:39:22 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:22.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:39:22 compute-1 ceph-mon[81689]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:22 compute-1 python3.9[124273]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:39:22 compute-1 sudo[124271]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:23 compute-1 sudo[124425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksfsgsclpcxzxgmgcsjisvujbtwbiiws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003163.0369923-786-118160957151018/AnsiballZ_command.py'
Dec 06 06:39:23 compute-1 sudo[124425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:23 compute-1 python3.9[124427]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:23 compute-1 sudo[124425]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:24 compute-1 sudo[124580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsedccwsyqidtpizgqxyremarnmgrpmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003163.7690463-810-211557974616271/AnsiballZ_file.py'
Dec 06 06:39:24 compute-1 sudo[124580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:24 compute-1 python3.9[124582]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:24 compute-1 sudo[124580]: pam_unix(sudo:session): session closed for user root
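
Steps 06:39:19 through 06:39:24 implement a change-marker pattern: writing edpm-rules.nft also touched edpm-rules.nft.changed; a later stat on that marker gates the live reload; and the marker is removed once the reload succeeds. Roughly, in shell form:

    flag=/etc/nftables/edpm-rules.nft.changed
    if [ -e "$flag" ]; then
        set -o pipefail
        # Apply for real this time: flush the edpm chains, load the new
        # rules, then refresh the jumps (no -c, unlike the earlier check)
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f "$flag"   # unchanged rules won't trigger a reload next run
    fi
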
Dec 06 06:39:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:24 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:24.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 06:39:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:24.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 06:39:25 compute-1 python3.9[124732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:39:26 compute-1 ceph-mon[81689]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:26.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:26.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:26 compute-1 sudo[124883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjmikovtodfxzsmwwkyhefyitvlqmape ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003166.3455436-930-171330469973477/AnsiballZ_command.py'
Dec 06 06:39:26 compute-1 sudo[124883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:26 compute-1 python3.9[124885]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:26 compute-1 ovs-vsctl[124886]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 06 06:39:26 compute-1 sudo[124883]: pam_unix(sudo:session): session closed for user root
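
The ovs-vsctl set at 06:39:26 stores the OVN chassis configuration (encap IP and type, bridge mappings, southbound DB address, and so on) in the external_ids column of the Open_vSwitch table, where ovn-controller reads it. To read the settings back:

    # Whole column:
    ovs-vsctl get Open_vSwitch . external_ids
    # A single key, e.g. the southbound DB this chassis connects to:
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
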
Dec 06 06:39:27 compute-1 sudo[125036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djmdkqcxpaypzwwzatuyrshaamwhdarn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003167.172007-957-197136772876987/AnsiballZ_command.py'
Dec 06 06:39:27 compute-1 sudo[125036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:27 compute-1 python3.9[125038]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:27 compute-1 sudo[125036]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:28 compute-1 ceph-mon[81689]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:28 compute-1 sudo[125192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvthzkcqgtxhchwdemgqllqdatlndsch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003167.97595-982-251709310415175/AnsiballZ_command.py'
Dec 06 06:39:28 compute-1 sudo[125192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:28 compute-1 python3.9[125194]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:28 compute-1 ovs-vsctl[125195]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 06 06:39:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:39:28 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:28.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:39:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:39:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:28.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:39:28 compute-1 sudo[125192]: pam_unix(sudo:session): session closed for user root
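
Taken together, the grep probe at 06:39:27 and the create at 06:39:28 amount to a guarded one-shot: register a local passive-TCP manager on 127.0.0.1:6640 only when ovs-vsctl show reports none. A combined sketch of the two tasks:

    if ! ovs-vsctl show | grep -q "Manager"; then
        ovs-vsctl --timeout=5 --id=@manager \
            -- create Manager 'target="ptcp:6640:127.0.0.1"' \
            -- add Open_vSwitch . manager_options @manager
    fi
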
Dec 06 06:39:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:29 compute-1 python3.9[125345]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:39:29 compute-1 ceph-mon[81689]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:29 compute-1 sudo[125497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gthyklwgroajfuqyhqqwqgzsreabhivk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003169.586072-1032-260692215243977/AnsiballZ_file.py'
Dec 06 06:39:29 compute-1 sudo[125497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:30 compute-1 python3.9[125499]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:30 compute-1 sudo[125497]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:39:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Cumulative writes: 1521 writes, 10K keys, 1521 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 1521 writes, 1521 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1521 writes, 10K keys, 1521 commit groups, 1.0 writes per commit group, ingest: 21.61 MB, 0.04 MB/s
                                           Interval WAL: 1521 writes, 1521 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.5      0.72              0.03         4    0.180       0      0       0.0       0.0
                                             L6      1/0    8.14 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.3     22.5     20.1      1.27              0.07         3    0.422     12K   1251       0.0       0.0
                                            Sum      1/0    8.14 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3     14.3     18.4      1.99              0.10         7    0.284     12K   1251       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3     14.7     18.9      1.94              0.10         6    0.323     12K   1251       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     22.5     20.1      1.27              0.07         3    0.422     12K   1251       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.7      0.67              0.03         3    0.224       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.011, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 2.0 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 1.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 1.14 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(54,1.00 MB,0.330433%) FilterBlock(7,45.17 KB,0.0145109%) IndexBlock(7,90.70 KB,0.0291373%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 06:39:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:30.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:39:30 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:30.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:39:30 compute-1 sudo[125649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naierlntcbwiamjfmvbmhgczqpuouykv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003170.2889442-1056-111393859516297/AnsiballZ_stat.py'
Dec 06 06:39:30 compute-1 sudo[125649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:30 compute-1 python3.9[125651]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:30 compute-1 sudo[125649]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:30 compute-1 sudo[125727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbgizlmpillwuyfthaxvutyznpfqusja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003170.2889442-1056-111393859516297/AnsiballZ_file.py'
Dec 06 06:39:30 compute-1 sudo[125727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:31 compute-1 python3.9[125729]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:31 compute-1 sudo[125727]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:31 compute-1 sudo[125879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqbigxujrepfqsdewaqgkpigpdjhvyqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003171.3382316-1056-149652715985209/AnsiballZ_stat.py'
Dec 06 06:39:31 compute-1 sudo[125879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:31 compute-1 python3.9[125881]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:31 compute-1 sudo[125879]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:32 compute-1 sudo[125957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtlvvvybksdbovooeaoqzjjegswvyqix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003171.3382316-1056-149652715985209/AnsiballZ_file.py'
Dec 06 06:39:32 compute-1 sudo[125957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:32 compute-1 python3.9[125959]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:32 compute-1 sudo[125957]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:39:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:32.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:39:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:39:32 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:32.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:39:32 compute-1 sudo[126109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koxdyzsjvrtmpmiqzuplgmhrnolrxktn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003172.6454818-1125-66164403142483/AnsiballZ_file.py'
Dec 06 06:39:32 compute-1 sudo[126109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:33 compute-1 python3.9[126111]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:33 compute-1 sudo[126109]: pam_unix(sudo:session): session closed for user root
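
One quirk worth noting in the file task above: mode=420 is how an unquoted YAML 0644 shows up in the log. YAML 1.1 reads the leading-zero literal as octal, handing the module the integer 420, and integer modes are applied numerically, so the result is the intended 0644 (rw-r--r--). A quick check:

    printf '%o\n' 420   # -> 644
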
Dec 06 06:39:33 compute-1 sudo[126263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amngyixqdfxuxngemqtcczhunmthyhzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003173.324915-1149-197582749624939/AnsiballZ_stat.py'
Dec 06 06:39:33 compute-1 sudo[126263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:33 compute-1 python3.9[126265]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:33 compute-1 sudo[126263]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:34 compute-1 sudo[126341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbyjgajkkbopijeinvpxdpfjmcyhcqia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003173.324915-1149-197582749624939/AnsiballZ_file.py'
Dec 06 06:39:34 compute-1 sudo[126341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:34 compute-1 python3.9[126343]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:34 compute-1 sudo[126341]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:34 compute-1 ceph-mon[81689]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:34 compute-1 ceph-mon[81689]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:34.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:39:34 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:34.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:39:34 compute-1 sudo[126493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvdtrxtknyvlazsblzbynfpwbakshjwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003174.5974405-1185-148715033736185/AnsiballZ_stat.py'
Dec 06 06:39:34 compute-1 sudo[126493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:35 compute-1 python3.9[126495]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:35 compute-1 sudo[126493]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:35 compute-1 sudo[126571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lepyjeaszqskbkoukwlxgovcgargycbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003174.5974405-1185-148715033736185/AnsiballZ_file.py'
Dec 06 06:39:35 compute-1 sudo[126571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:35 compute-1 python3.9[126573]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:35 compute-1 sudo[126571]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:35 compute-1 ceph-mon[81689]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:36 compute-1 sudo[126723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nstnxklpgrsckrwwkwirqyzenbiypmyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003175.9024565-1221-106114225697698/AnsiballZ_systemd.py'
Dec 06 06:39:36 compute-1 sudo[126723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:36 compute-1 python3.9[126725]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:39:36 compute-1 systemd[1]: Reloading.
Dec 06 06:39:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:36.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:36.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:36 compute-1 systemd-rc-local-generator[126753]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:39:36 compute-1 systemd-sysv-generator[126757]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:39:36 compute-1 ceph-mon[81689]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:36 compute-1 sudo[126723]: pam_unix(sudo:session): session closed for user root
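
The ansible.builtin.systemd call at 06:39:36 (daemon_reload=True, enabled=True, state=started) accounts for the "Reloading." line and the generator chatter that follow it. The CLI equivalent is:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service
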
Dec 06 06:39:37 compute-1 sudo[126913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opupaeucyytxlelbvxufvostmdvzvpdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003177.1431994-1245-22122659482835/AnsiballZ_stat.py'
Dec 06 06:39:37 compute-1 sudo[126913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:37 compute-1 python3.9[126915]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:37 compute-1 sudo[126913]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:37 compute-1 sudo[126991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlvmunseguurcermndjfzigomlpcxcwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003177.1431994-1245-22122659482835/AnsiballZ_file.py'
Dec 06 06:39:37 compute-1 sudo[126991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:38 compute-1 python3.9[126993]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:38 compute-1 sudo[126991]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:38.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:38.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:38 compute-1 sudo[127143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gofadoqkbomhhvckcallidnuvqyawtzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003178.6231613-1281-148430846021603/AnsiballZ_stat.py'
Dec 06 06:39:38 compute-1 sudo[127143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:39 compute-1 python3.9[127145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:39 compute-1 sudo[127143]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:39 compute-1 sudo[127221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iompxfmzsnvgczhgoqdqhbhyucsgrenv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003178.6231613-1281-148430846021603/AnsiballZ_file.py'
Dec 06 06:39:39 compute-1 sudo[127221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:39 compute-1 ceph-mon[81689]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:39 compute-1 python3.9[127223]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:39 compute-1 sudo[127221]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:40 compute-1 sudo[127373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyfbokmpvzekmhrozfyvugedwchzvmqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003179.8390422-1317-123481044798178/AnsiballZ_systemd.py'
Dec 06 06:39:40 compute-1 sudo[127373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:40 compute-1 python3.9[127375]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:39:40 compute-1 systemd[1]: Reloading.
Dec 06 06:39:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:40.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:40.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:40 compute-1 systemd-rc-local-generator[127400]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:39:40 compute-1 systemd-sysv-generator[127403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:39:40 compute-1 ceph-mon[81689]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:40 compute-1 systemd[1]: Starting Create netns directory...
Dec 06 06:39:40 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 06:39:40 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 06:39:40 compute-1 systemd[1]: Finished Create netns directory.
Dec 06 06:39:40 compute-1 sudo[127373]: pam_unix(sudo:session): session closed for user root
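
The netns-placeholder unit deployed and started above is a oneshot ("Create netns directory") that runs once and deactivates, taking a transient run-netns-placeholder.mount with it; its purpose is to make sure /run/netns exists as a mount point that containers can later share. The unit file's contents are not included in this log, so the following is only a hypothetical equivalent of what such a oneshot plausibly executes:

    import subprocess

    # Hypothetical body of the "Create netns directory" oneshot: adding
    # and removing a throwaway namespace forces the kernel to set up
    # /run/netns, which would explain the run-netns-placeholder.mount entry.
    subprocess.run(["ip", "netns", "add", "placeholder"], check=True)
    subprocess.run(["ip", "netns", "delete", "placeholder"], check=True)
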
Dec 06 06:39:41 compute-1 sudo[127566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhqfxaxvjubsssowqyzdrvvsqochvjoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003181.1777778-1347-123899546152153/AnsiballZ_file.py'
Dec 06 06:39:41 compute-1 sudo[127566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:41 compute-1 python3.9[127568]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:41 compute-1 sudo[127566]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:42 compute-1 sudo[127718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqzrlianlfmpgnlikikorksrfwzfzzke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003181.8760357-1371-143334324563549/AnsiballZ_stat.py'
Dec 06 06:39:42 compute-1 sudo[127718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:42.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:42.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:42 compute-1 python3.9[127720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:42 compute-1 sudo[127718]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:42 compute-1 sudo[127841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbfzumsnlheeixuepjkxjjrqjxecrqhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003181.8760357-1371-143334324563549/AnsiballZ_copy.py'
Dec 06 06:39:42 compute-1 sudo[127841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:43 compute-1 python3.9[127843]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003181.8760357-1371-143334324563549/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:43 compute-1 sudo[127841]: pam_unix(sudo:session): session closed for user root
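
The stat-then-copy pairs in this section (AnsiballZ_stat.py followed by AnsiballZ_copy.py) are Ansible's idempotent file deployment: the destination's SHA-1 is compared against the source checksum logged above (4098dd01...), and the copy only happens on a mismatch. A rough sketch of that decision, with hypothetical paths:

    import hashlib, os, shutil

    def deploy(src: str, dest: str, mode: int = 0o700) -> bool:
        """Copy src over dest only when the SHA-1 differs (Ansible-style)."""
        def sha1(path: str) -> str:
            h = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        if os.path.exists(dest) and sha1(dest) == sha1(src):
            return False  # checksums match: report "ok", change nothing
        shutil.copy2(src, dest)
        os.chmod(dest, mode)
        return True  # report "changed", as the copy task above did
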
Dec 06 06:39:43 compute-1 ceph-mon[81689]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:43 compute-1 sudo[127993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euisjxtvxsiehvukcmnzytwwacjdesbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003183.589685-1422-200900705550988/AnsiballZ_file.py'
Dec 06 06:39:43 compute-1 sudo[127993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:44 compute-1 python3.9[127995]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:44 compute-1 sudo[127993]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:44.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:44.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:44 compute-1 sudo[128145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtooeodtpyoprgctmieijjvrrvlpojgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003184.3292007-1446-84170450308161/AnsiballZ_stat.py'
Dec 06 06:39:44 compute-1 sudo[128145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:44 compute-1 python3.9[128147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:44 compute-1 sudo[128145]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:45 compute-1 ceph-mon[81689]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:45 compute-1 sudo[128268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlkgaeeeiuyspeoxzpwgdtlcxoobzmvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003184.3292007-1446-84170450308161/AnsiballZ_copy.py'
Dec 06 06:39:45 compute-1 sudo[128268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:45 compute-1 python3.9[128270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003184.3292007-1446-84170450308161/.source.json _original_basename=.5jiu9k9a follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:45 compute-1 sudo[128268]: pam_unix(sudo:session): session closed for user root
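
ovn_controller.json is a kolla-style container config consumed inside the container by kolla_set_configs (see the startup trace at 06:40:18 below, which loads /var/lib/kolla/config_files/config.json and then cats /run_command). Its contents are not reproduced in this log; the sketch below assumes only the "command" key, which is grounded in that later trace, and omits whatever other keys the file may carry:

    import json

    # Minimal kolla config.json sketch. Only "command" is grounded in the
    # /run_command output later in this log; everything else is omitted.
    config = {
        "command": (
            "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock "
            "-p /etc/pki/tls/private/ovndb.key "
            "-c /etc/pki/tls/certs/ovndb.crt "
            "-C /etc/pki/tls/certs/ovndbca.crt"
        ),
    }
    with open("/var/lib/kolla/config_files/ovn_controller.json", "w") as f:
        json.dump(config, f, indent=4)
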
Dec 06 06:39:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:46.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:46 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:46.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:46 compute-1 sudo[128420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezmeafmyerwhulgaqvvhaizctbuqusqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003186.485937-1491-218820846611292/AnsiballZ_file.py'
Dec 06 06:39:46 compute-1 sudo[128420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:46 compute-1 python3.9[128422]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:46 compute-1 sudo[128420]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:47 compute-1 ceph-mon[81689]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:47 compute-1 sudo[128572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wviejwsgutyguyazgqthtaosatygpyqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003187.1721747-1515-274532895091199/AnsiballZ_stat.py'
Dec 06 06:39:47 compute-1 sudo[128572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:47 compute-1 sudo[128572]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:47 compute-1 sudo[128695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcawvabqachitpsfvepcrncvxwagvqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003187.1721747-1515-274532895091199/AnsiballZ_copy.py'
Dec 06 06:39:47 compute-1 sudo[128695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:48 compute-1 sudo[128695]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:48.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:39:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:48.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:39:48 compute-1 sudo[128847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtpehzhnnsappigrhhfpgqrriklssrum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003188.5262082-1566-262290457703883/AnsiballZ_container_config_data.py'
Dec 06 06:39:48 compute-1 sudo[128847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:49 compute-1 python3.9[128849]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 06 06:39:49 compute-1 sudo[128847]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:49 compute-1 ceph-mon[81689]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:50 compute-1 sudo[128999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbgpkselftbrzwkqmwgtvphtsbiveppv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003189.6212025-1593-275722251217233/AnsiballZ_container_config_hash.py'
Dec 06 06:39:50 compute-1 sudo[128999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:50 compute-1 python3.9[129001]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 06:39:50 compute-1 sudo[128999]: pam_unix(sudo:session): session closed for user root
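
ansible-container_config_data gathers the *.json files under the startup-config directory, and ansible-container_config_hash derives a digest under /var/lib/config-data so later runs can detect configuration drift and restart containers only when something changed. The modules' exact algorithm is not visible here; a sketch of the general idea:

    import glob, hashlib

    def config_hash(config_dir: str) -> str:
        """Stable digest over every *.json startup config in a directory."""
        h = hashlib.sha256()
        for path in sorted(glob.glob(f"{config_dir}/*.json")):
            with open(path, "rb") as f:
                h.update(path.encode())  # include the name, not just the bytes
                h.update(f.read())
        return h.hexdigest()

    print(config_hash("/var/lib/edpm-config/container-startup-config/ovn_controller"))
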
Dec 06 06:39:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:50.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:39:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:39:50 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:50.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:39:50 compute-1 sudo[129151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnyrswplzfoickfwuskcodslnjjdsfcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003190.5205648-1620-205500250464579/AnsiballZ_podman_container_info.py'
Dec 06 06:39:50 compute-1 sudo[129151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:51 compute-1 python3.9[129153]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 06:39:51 compute-1 sudo[129151]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:51 compute-1 ceph-mon[81689]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:39:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:52.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:39:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:52.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:52 compute-1 sudo[129329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awykdlxgsigcjmilvmdkiqbuohcpojxy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003192.1717722-1659-62850119474664/AnsiballZ_edpm_container_manage.py'
Dec 06 06:39:52 compute-1 sudo[129329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:52 compute-1 ceph-mon[81689]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:53 compute-1 python3[129331]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 06:39:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:54.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:54.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:56 compute-1 ceph-mon[81689]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:56.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:56.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:58 compute-1 ceph-mon[81689]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:58.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:39:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:58.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:00.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:40:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:00.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:40:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:01 compute-1 ceph-mon[81689]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:02.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:02.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:02 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 06:40:02 compute-1 ceph-mon[81689]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:04.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:04.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:06.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:06.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:08.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:08.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:09 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:40:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:10.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:10.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:11 compute-1 ceph-mon[81689]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:11 compute-1 ceph-mon[81689]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:11 compute-1 ceph-mon[81689]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:11 compute-1 ceph-mon[81689]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:11 compute-1 ceph-mon[81689]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
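
The burst of pgmap lines above (v553 through v557) all report the same steady state: 305 PGs active+clean, 153 MiB used of 21 GiB. In a log this long it is easier to track those numbers with a small parser than by eyeballing the repeats; for example:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, "
            "153 MiB used, 21 GiB / 21 GiB avail")
    m = PGMAP.search(line)
    print(m.group("ver"), m.group("pgs"), m.group("used"))  # -> 557 305 153 MiB
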
Dec 06 06:40:11 compute-1 podman[129343]: 2025-12-06 06:40:11.842222761 +0000 UTC m=+18.758122785 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 06 06:40:12 compute-1 podman[129465]: 2025-12-06 06:40:11.944209633 +0000 UTC m=+0.022398988 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 06 06:40:12 compute-1 podman[129465]: 2025-12-06 06:40:12.361525058 +0000 UTC m=+0.439714393 container create b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:40:12 compute-1 python3[129331]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 06 06:40:12 compute-1 sudo[129329]: pam_unix(sudo:session): session closed for user root
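
The PODMAN-CONTAINER-DEBUG line above shows edpm_container_manage translating the JSON config_data into a podman create invocation: net becomes --network, privileged becomes --privileged, each volumes entry becomes a --volume, the healthcheck test becomes --healthcheck-command, and the whole config_data dict is echoed back as a label. A simplified reconstruction of that mapping (not the module's actual code):

    def podman_create_args(name: str, cfg: dict) -> list[str]:
        """Build a podman create argv from edpm-style config_data (simplified)."""
        args = ["podman", "create", "--name", name,
                "--log-driver", "journald", "--log-level", "info"]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return args
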
Dec 06 06:40:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:12.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:12.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:12 compute-1 sudo[129653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwhkdlovkxtlypalsmywrcwmprmaehqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003212.6686676-1683-236620125753912/AnsiballZ_stat.py'
Dec 06 06:40:12 compute-1 sudo[129653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:13 compute-1 python3.9[129655]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:40:13 compute-1 sudo[129653]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:13 compute-1 ceph-mon[81689]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:13 compute-1 sudo[129807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wggtpfbgwsrrsitvfxtfodkswrnoxdbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003213.5563622-1710-21820828264807/AnsiballZ_file.py'
Dec 06 06:40:13 compute-1 sudo[129807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:14 compute-1 python3.9[129809]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:40:14 compute-1 sudo[129807]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:14 compute-1 sudo[129883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfxuqyvcozzsoccznfijcpziwlvhtkyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003213.5563622-1710-21820828264807/AnsiballZ_stat.py'
Dec 06 06:40:14 compute-1 sudo[129883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:14 compute-1 python3.9[129885]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:40:14 compute-1 sudo[129883]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:40:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:14.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:14 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:14.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:15 compute-1 sudo[130034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pohbjzkvqktkvgjqnjmxnhkxneapbzrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003214.5919127-1710-257171352673346/AnsiballZ_copy.py'
Dec 06 06:40:15 compute-1 sudo[130034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:15 compute-1 python3.9[130036]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003214.5919127-1710-257171352673346/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:40:15 compute-1 sudo[130034]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:15 compute-1 sudo[130110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rppkyuejflpekrolxdhisivthqcynoed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003214.5919127-1710-257171352673346/AnsiballZ_systemd.py'
Dec 06 06:40:15 compute-1 sudo[130110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:15 compute-1 ceph-mon[81689]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:15 compute-1 python3.9[130112]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:40:15 compute-1 systemd[1]: Reloading.
Dec 06 06:40:15 compute-1 systemd-sysv-generator[130142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:40:15 compute-1 systemd-rc-local-generator[130139]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:40:16 compute-1 sudo[130110]: pam_unix(sudo:session): session closed for user root
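
The sudo blocks here and immediately below are the standard systemd handshake for a freshly written unit: copy edpm_ovn_controller.service into /etc/systemd/system, daemon-reload so systemd notices it (the Reloading and generator lines above), then enable and restart the service, which is what the next ansible-systemd call performs. Scripted outside Ansible, the same sequence is roughly:

    import subprocess

    # The same sequence the ansible-systemd calls drive, as plain systemctl.
    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm_ovn_controller.service"],
                ["systemctl", "restart", "edpm_ovn_controller.service"]):
        subprocess.run(cmd, check=True)
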
Dec 06 06:40:16 compute-1 sudo[130221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wejrlpxytocozarmirgspizjjdiminjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003214.5919127-1710-257171352673346/AnsiballZ_systemd.py'
Dec 06 06:40:16 compute-1 sudo[130221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:40:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:16 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:16.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:16.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:16 compute-1 python3.9[130223]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:40:16 compute-1 systemd[1]: Reloading.
Dec 06 06:40:16 compute-1 systemd-rc-local-generator[130249]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:40:16 compute-1 systemd-sysv-generator[130254]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:40:17 compute-1 systemd[1]: Starting ovn_controller container...
Dec 06 06:40:17 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:40:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edb8a6fae0e8670b0abf48dc41c21e7a526a10ff6eaf6fd337feb9506db7d01f/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:17 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1.
Dec 06 06:40:17 compute-1 ceph-mon[81689]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:18 compute-1 podman[130264]: 2025-12-06 06:40:18.494901812 +0000 UTC m=+1.458450636 container init b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 06:40:18 compute-1 ovn_controller[130279]: + sudo -E kolla_set_configs
Dec 06 06:40:18 compute-1 podman[130264]: 2025-12-06 06:40:18.518761709 +0000 UTC m=+1.482310543 container start b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:40:18 compute-1 systemd[1]: Created slice User Slice of UID 0.
Dec 06 06:40:18 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 06 06:40:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:18.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:18 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 06 06:40:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:18.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:18 compute-1 systemd[1]: Starting User Manager for UID 0...
Dec 06 06:40:18 compute-1 systemd[130298]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 06 06:40:18 compute-1 systemd[130298]: Queued start job for default target Main User Target.
Dec 06 06:40:18 compute-1 systemd[130298]: Created slice User Application Slice.
Dec 06 06:40:18 compute-1 systemd[130298]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 06 06:40:18 compute-1 systemd[130298]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 06:40:18 compute-1 systemd[130298]: Reached target Paths.
Dec 06 06:40:18 compute-1 systemd[130298]: Reached target Timers.
Dec 06 06:40:18 compute-1 systemd[130298]: Starting D-Bus User Message Bus Socket...
Dec 06 06:40:18 compute-1 systemd[130298]: Starting Create User's Volatile Files and Directories...
Dec 06 06:40:18 compute-1 systemd[130298]: Finished Create User's Volatile Files and Directories.
Dec 06 06:40:18 compute-1 systemd[130298]: Listening on D-Bus User Message Bus Socket.
Dec 06 06:40:18 compute-1 systemd[130298]: Reached target Sockets.
Dec 06 06:40:18 compute-1 systemd[130298]: Reached target Basic System.
Dec 06 06:40:18 compute-1 systemd[130298]: Reached target Main User Target.
Dec 06 06:40:18 compute-1 systemd[130298]: Startup finished in 121ms.
Dec 06 06:40:18 compute-1 systemd[1]: Started User Manager for UID 0.
Dec 06 06:40:18 compute-1 systemd[1]: Started Session c1 of User root.
Dec 06 06:40:18 compute-1 ovn_controller[130279]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:40:18 compute-1 ovn_controller[130279]: INFO:__main__:Validating config file
Dec 06 06:40:18 compute-1 ovn_controller[130279]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:40:18 compute-1 ovn_controller[130279]: INFO:__main__:Writing out command to execute
Dec 06 06:40:18 compute-1 ovn_controller[130279]: ++ cat /run_command
Dec 06 06:40:18 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 06 06:40:18 compute-1 ovn_controller[130279]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 06 06:40:18 compute-1 ovn_controller[130279]: + ARGS=
Dec 06 06:40:18 compute-1 ovn_controller[130279]: + sudo kolla_copy_cacerts
Dec 06 06:40:18 compute-1 systemd[1]: Started Session c2 of User root.
Dec 06 06:40:18 compute-1 ovn_controller[130279]: + [[ ! -n '' ]]
Dec 06 06:40:18 compute-1 ovn_controller[130279]: + . kolla_extend_start
Dec 06 06:40:18 compute-1 ovn_controller[130279]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 06 06:40:18 compute-1 ovn_controller[130279]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 06 06:40:18 compute-1 ovn_controller[130279]: + umask 0022
Dec 06 06:40:18 compute-1 ovn_controller[130279]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
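
The exec line above is the kolla wrapper handing off to ovn-controller proper: -p, -c, and -C name the client private key, client certificate, and CA bundle for the SSL session to the southbound database (ssl:ovsdbserver-sb.openstack.svc:6642, which connects a few lines below). A quick sanity check of that endpoint with the same material, assuming the files are readable where you run it (on the host they live under /var/lib/openstack/certs/ovn/default/, per the volume mounts above) and that the server certificate matches the service hostname:

    import socket, ssl

    # Confirm the OVN SB endpoint presents a certificate our CA bundle
    # trusts, using the same key/cert/CA ovn-controller was started with.
    ctx = ssl.create_default_context(cafile="/etc/pki/tls/certs/ovndbca.crt")
    ctx.load_cert_chain("/etc/pki/tls/certs/ovndb.crt",
                        "/etc/pki/tls/private/ovndb.key")
    host = "ovsdbserver-sb.openstack.svc"
    with socket.create_connection((host, 6642), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version(), tls.getpeercert()["subject"])
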
Dec 06 06:40:18 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00003|main|INFO|OVN internal version is: [24.03.8-20.33.0-76.8]
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 06 06:40:18 compute-1 NetworkManager[49031]: <info>  [1765003218.8850] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 06 06:40:18 compute-1 NetworkManager[49031]: <info>  [1765003218.8857] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:40:18 compute-1 NetworkManager[49031]: <info>  [1765003218.8865] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 06 06:40:18 compute-1 NetworkManager[49031]: <info>  [1765003218.8869] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 06 06:40:18 compute-1 NetworkManager[49031]: <info>  [1765003218.8872] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 06 06:40:18 compute-1 kernel: br-int: entered promiscuous mode
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00019|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 06:40:18 compute-1 ovn_controller[130279]: 2025-12-06T06:40:18Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
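The lines above show ovn-controller bringing up its OpenFlow side channels (ofctrl, plus the pinctrl and statctrl threads) against br-int's management socket, alongside its SSL session to the southbound DB. A minimal way to spot-check those connections from the host, assuming ovn-appctl is installed and the controller uses the default unixctl socket path:

    ovn-appctl -t ovn-controller connection-status        # state of the southbound DB connection
    ovs-vsctl --if-exists get open . external_ids:ovn-remote   # which SB endpoint the controller targets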
Dec 06 06:40:18 compute-1 NetworkManager[49031]: <info>  [1765003218.9055] manager: (ovn-feab6d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 06 06:40:18 compute-1 NetworkManager[49031]: <info>  [1765003218.9062] manager: (ovn-150d59-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Dec 06 06:40:18 compute-1 systemd-udevd[130325]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:40:18 compute-1 systemd-udevd[130326]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:40:18 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Dec 06 06:40:18 compute-1 NetworkManager[49031]: <info>  [1765003218.9228] device (genev_sys_6081): carrier: link connected
Dec 06 06:40:18 compute-1 NetworkManager[49031]: <info>  [1765003218.9231] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Dec 06 06:40:19 compute-1 edpm-start-podman-container[130264]: ovn_controller
Dec 06 06:40:19 compute-1 NetworkManager[49031]: <info>  [1765003219.1460] manager: (ovn-9f96b9-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec 06 06:40:19 compute-1 edpm-start-podman-container[130263]: Creating additional drop-in dependency for "ovn_controller" (b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1)
Dec 06 06:40:19 compute-1 systemd[1]: Reloading.
Dec 06 06:40:19 compute-1 systemd-rc-local-generator[130394]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:40:19 compute-1 systemd-sysv-generator[130398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:40:19 compute-1 ceph-mon[81689]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:19 compute-1 podman[130285]: 2025-12-06 06:40:19.326699693 +0000 UTC m=+0.794405119 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 06 06:40:19 compute-1 systemd[1]: Started ovn_controller container.
Dec 06 06:40:19 compute-1 sudo[130221]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:19 compute-1 sudo[130550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pejpcjoiozyttxazevuogvqtjngmjdlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003219.736157-1794-7802091684055/AnsiballZ_command.py'
Dec 06 06:40:20 compute-1 sudo[130550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:20 compute-1 sudo[130553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:20 compute-1 sudo[130553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:20 compute-1 sudo[130553]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:20 compute-1 python3.9[130552]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
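This task strips the hw-offload key from the Open_vSwitch table's other_config map. ovs-vsctl treats other_config as a key/value column, so the usual idiom is set, get and remove on individual keys; a short sketch reusing the same key name seen in the log:

    ovs-vsctl set open . other_config:hw-offload=true    # add or overwrite one key
    ovs-vsctl get open . other_config                    # dump the whole map
    ovs-vsctl remove open . other_config hw-offload      # drop just that key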
Dec 06 06:40:20 compute-1 sudo[130578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:40:20 compute-1 sudo[130578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:20 compute-1 sudo[130578]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:20 compute-1 ovs-vsctl[130602]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 06 06:40:20 compute-1 sudo[130550]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:20 compute-1 sudo[130604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:20 compute-1 sudo[130604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:20 compute-1 sudo[130604]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:20 compute-1 sudo[130647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:40:20 compute-1 sudo[130647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:20.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:20.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:20 compute-1 sudo[130647]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:20 compute-1 sudo[130834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avqspyjsdkssbyyksdrqqizlzhhxkcjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003220.495196-1818-152917308822739/AnsiballZ_command.py'
Dec 06 06:40:20 compute-1 sudo[130834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:21 compute-1 python3.9[130836]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:40:21 compute-1 ovs-vsctl[130838]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 06 06:40:21 compute-1 sudo[130834]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:21 compute-1 sudo[130989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omnxkytavdmxpyhduvgegpzpahqwtidg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003221.7134607-1860-66497483378414/AnsiballZ_command.py'
Dec 06 06:40:21 compute-1 sudo[130989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:22 compute-1 python3.9[130991]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:40:22 compute-1 ovs-vsctl[130992]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 06 06:40:22 compute-1 sudo[130989]: pam_unix(sudo:session): session closed for user root
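The db_ctl_base ERR at 06:40:21 is expected rather than a failure of the play: ovs-vsctl's get on a map key errors when the key is absent, and the lookup is only used to decide whether the subsequent removal is needed. Passing --if-exists would make the same probe print nothing instead of erroring, roughly:

    # Probe an optional key; empty output means the key is unset.
    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options | sed 's/"//g'
    ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options   # cleanup, as run above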
Dec 06 06:40:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:22.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:22.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:22 compute-1 sshd-session[119489]: Connection closed by 192.168.122.30 port 56248
Dec 06 06:40:22 compute-1 sshd-session[119486]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:40:22 compute-1 systemd[1]: session-45.scope: Deactivated successfully.
Dec 06 06:40:22 compute-1 systemd[1]: session-45.scope: Consumed 55.160s CPU time.
Dec 06 06:40:22 compute-1 systemd-logind[788]: Session 45 logged out. Waiting for processes to exit.
Dec 06 06:40:22 compute-1 systemd-logind[788]: Removed session 45.
Dec 06 06:40:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:24.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:24.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:25 compute-1 ceph-mon[81689]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:26 compute-1 ceph-mon[81689]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:26 compute-1 ceph-mon[81689]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:26.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:26.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:27 compute-1 ceph-mon[81689]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:40:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:40:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:40:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:40:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:40:28 compute-1 ceph-mon[81689]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:40:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:28.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:40:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:28.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:28 compute-1 sshd-session[131017]: Accepted publickey for zuul from 192.168.122.30 port 50336 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:40:28 compute-1 systemd-logind[788]: New session 47 of user zuul.
Dec 06 06:40:28 compute-1 systemd[1]: Started Session 47 of User zuul.
Dec 06 06:40:28 compute-1 sshd-session[131017]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:40:28 compute-1 systemd[1]: Stopping User Manager for UID 0...
Dec 06 06:40:28 compute-1 systemd[130298]: Activating special unit Exit the Session...
Dec 06 06:40:28 compute-1 systemd[130298]: Stopped target Main User Target.
Dec 06 06:40:28 compute-1 systemd[130298]: Stopped target Basic System.
Dec 06 06:40:28 compute-1 systemd[130298]: Stopped target Paths.
Dec 06 06:40:28 compute-1 systemd[130298]: Stopped target Sockets.
Dec 06 06:40:28 compute-1 systemd[130298]: Stopped target Timers.
Dec 06 06:40:28 compute-1 systemd[130298]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 06:40:28 compute-1 systemd[130298]: Closed D-Bus User Message Bus Socket.
Dec 06 06:40:28 compute-1 systemd[130298]: Stopped Create User's Volatile Files and Directories.
Dec 06 06:40:28 compute-1 systemd[130298]: Removed slice User Application Slice.
Dec 06 06:40:28 compute-1 systemd[130298]: Reached target Shutdown.
Dec 06 06:40:28 compute-1 systemd[130298]: Finished Exit the Session.
Dec 06 06:40:28 compute-1 systemd[130298]: Reached target Exit the Session.
Dec 06 06:40:29 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Dec 06 06:40:29 compute-1 systemd[1]: Stopped User Manager for UID 0.
Dec 06 06:40:29 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 06 06:40:29 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 06 06:40:29 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 06 06:40:29 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 06 06:40:29 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Dec 06 06:40:29 compute-1 python3.9[131173]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:40:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:30.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:30.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:30 compute-1 sudo[131327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvcnjetjmxbopwguhbrrnleuvrgdfzxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003230.2181096-68-124928559631942/AnsiballZ_file.py'
Dec 06 06:40:30 compute-1 sudo[131327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:30 compute-1 python3.9[131329]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:30 compute-1 ceph-mon[81689]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:30 compute-1 sudo[131327]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:31 compute-1 sudo[131479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awgvzavdabvofxlwgoiabzpoyelzulwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003231.0696244-68-253452154005079/AnsiballZ_file.py'
Dec 06 06:40:31 compute-1 sudo[131479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:31 compute-1 python3.9[131481]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:31 compute-1 sudo[131479]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:32 compute-1 sudo[131631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-outbncdxmfjjpseyhhlvwzaktewpxwem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003231.7481546-68-67403884185002/AnsiballZ_file.py'
Dec 06 06:40:32 compute-1 sudo[131631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:32 compute-1 python3.9[131633]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:32 compute-1 sudo[131631]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:32.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:32 compute-1 ceph-mon[81689]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:32.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:32 compute-1 sudo[131783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrhsgcejffoldzjithgldgnjbclnewgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003232.4717977-68-269781195757296/AnsiballZ_file.py'
Dec 06 06:40:32 compute-1 sudo[131783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:33 compute-1 sudo[131786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:33 compute-1 sudo[131786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:33 compute-1 sudo[131786]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:33 compute-1 sudo[131811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:40:33 compute-1 sudo[131811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:33 compute-1 python3.9[131785]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:33 compute-1 sudo[131811]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:33 compute-1 sudo[131783]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:33 compute-1 sudo[131985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yixbokzmkynuclfaoruipnyivgqoltcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003233.2110217-68-61875233522417/AnsiballZ_file.py'
Dec 06 06:40:33 compute-1 sudo[131985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:33 compute-1 python3.9[131987]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:33 compute-1 sudo[131985]: pam_unix(sudo:session): session closed for user root
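The run of ansible.builtin.file tasks above creates the neutron state directories (/var/lib/neutron, kill_scripts, ovn-metadata-proxy, external/pids) owned by zuul, mode 0755, with the container_file_t SELinux type so containers may access them. A rough shell equivalent for one of them, assuming the same ownership and type:

    install -d -o zuul -g zuul -m 0755 /var/lib/neutron/external/pids
    chcon -t container_file_t /var/lib/neutron/external/pids   # matches setype=container_file_t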
Dec 06 06:40:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:34.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:34.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:34 compute-1 python3.9[132137]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:40:35 compute-1 ceph-mon[81689]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:35 compute-1 sudo[132287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iclsbbltnoxgwnvjxycjkaoeddwialmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003234.8812995-200-236689176118880/AnsiballZ_seboolean.py'
Dec 06 06:40:35 compute-1 sudo[132287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:35 compute-1 python3.9[132289]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
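persistent=True in the seboolean task maps to setsebool's -P flag; the CLI equivalent would be roughly:

    setsebool -P virt_sandbox_use_netlink on   # -P writes the change to the policy store
    getsebool virt_sandbox_use_netlink         # verify: "virt_sandbox_use_netlink --> on"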
Dec 06 06:40:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:36 compute-1 sudo[132287]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:36.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:36.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:37 compute-1 python3.9[132440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:37 compute-1 python3.9[132561]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003236.512115-224-183562252473788/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:38 compute-1 python3.9[132711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:38.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:38.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:39 compute-1 ceph-mon[81689]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:39 compute-1 python3.9[132832]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003238.1192944-269-93947245725746/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:39 compute-1 ceph-mon[81689]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:40 compute-1 sudo[132982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovlyfzguswlrmjnwfnohdijxldmzszcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003239.7017877-320-213205822710869/AnsiballZ_setup.py'
Dec 06 06:40:40 compute-1 sudo[132982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:40 compute-1 python3.9[132984]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:40:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:40.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:40 compute-1 sudo[132982]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:40.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:40 compute-1 sudo[133066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phixietzmqojfzlbxkfvgejgqwwsqkyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003239.7017877-320-213205822710869/AnsiballZ_dnf.py'
Dec 06 06:40:40 compute-1 sudo[133066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:41 compute-1 ceph-mon[81689]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:41 compute-1 python3.9[133068]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
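state=present in the dnf task installs the package only when it is missing; the close shell analogue:

    dnf -y install openvswitch   # no-op if already installed (state=present)
    rpm -q openvswitch           # confirm the installed version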
Dec 06 06:40:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:42.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:42.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:42 compute-1 sudo[133066]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:43 compute-1 ceph-mon[81689]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:43 compute-1 sudo[133219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jafqukqnodxbpsprsnhumhqcjmvgwdtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003242.9702725-356-8666433787362/AnsiballZ_systemd.py'
Dec 06 06:40:43 compute-1 sudo[133219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:43 compute-1 python3.9[133221]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
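enabled=True plus state=started on a systemd task is the moral equivalent of enable --now:

    systemctl enable --now openvswitch.service   # enable at boot and start immediately
    systemctl is-active openvswitch.service      # expect "active"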
Dec 06 06:40:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:44.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:44.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:44 compute-1 sudo[133219]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:45 compute-1 ceph-mon[81689]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:45 compute-1 python3.9[133374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:46 compute-1 python3.9[133495]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003245.1103497-380-195252323753077/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:46.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:46 compute-1 python3.9[133645]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:46.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:47 compute-1 python3.9[133766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003246.175305-380-76044243560603/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:47 compute-1 ceph-mon[81689]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:48 compute-1 python3.9[133916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:48.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:48.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:49 compute-1 python3.9[134037]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003248.130788-512-63254337400500/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:49 compute-1 ceph-mon[81689]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:49 compute-1 ovn_controller[130279]: 2025-12-06T06:40:49Z|00025|memory|INFO|16640 kB peak resident set size after 30.8 seconds
Dec 06 06:40:49 compute-1 ovn_controller[130279]: 2025-12-06T06:40:49Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Dec 06 06:40:49 compute-1 podman[134161]: 2025-12-06 06:40:49.712891597 +0000 UTC m=+0.084389776 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
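podman logs one health_status event like the above per healthcheck interval for ovn_controller, using the check mounted at /openstack/healthcheck. The same check can be driven or read back by hand; note the inspect field name has varied across podman releases, so treat the format string as an assumption:

    podman healthcheck run ovn_controller   # exit status 0 == healthy
    podman inspect --format '{{.State.Healthcheck.Status}}' ovn_controller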
Dec 06 06:40:49 compute-1 python3.9[134198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:50 compute-1 python3.9[134332]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003249.353487-512-101293469271607/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:50.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:50.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:51 compute-1 python3.9[134482]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:40:51 compute-1 ceph-mon[81689]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:51 compute-1 sudo[134634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjriopxkhgikgywzpbxyjmamkdakhcsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003251.496389-626-259993704482886/AnsiballZ_file.py'
Dec 06 06:40:51 compute-1 sudo[134634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:51 compute-1 python3.9[134636]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:52 compute-1 sudo[134634]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:52 compute-1 sudo[134786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozwemesouymgkzeumiwzxuknzdrintkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003252.3184125-650-79671494116676/AnsiballZ_stat.py'
Dec 06 06:40:52 compute-1 sudo[134786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:52 compute-1 ceph-mon[81689]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:52.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:52.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:52 compute-1 python3.9[134788]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:52 compute-1 sudo[134786]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:52 compute-1 sudo[134864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wttquzbsnnmrgcaixndlodwethsqrzxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003252.3184125-650-79671494116676/AnsiballZ_file.py'
Dec 06 06:40:52 compute-1 sudo[134864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:53 compute-1 python3.9[134866]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:53 compute-1 sudo[134864]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:53 compute-1 sudo[135016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvxlstmowrbmapohkynyhuzsmfrkiesi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003253.360636-650-76991506188942/AnsiballZ_stat.py'
Dec 06 06:40:53 compute-1 sudo[135016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:54 compute-1 python3.9[135018]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:54 compute-1 sudo[135016]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:54 compute-1 sudo[135094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxqpvuqrbcinrrtztjkascdtbsbcuhzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003253.360636-650-76991506188942/AnsiballZ_file.py'
Dec 06 06:40:54 compute-1 sudo[135094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:54 compute-1 python3.9[135096]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:54.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:54 compute-1 sudo[135094]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:54.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:54 compute-1 ceph-mon[81689]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:55 compute-1 sudo[135246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzrthwqjdfcypkbwweuvaxgvtvtsysvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003254.957117-719-44771556850856/AnsiballZ_file.py'
Dec 06 06:40:55 compute-1 sudo[135246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:55 compute-1 python3.9[135248]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
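mode=420 in this task is decimal, not octal: an unquoted YAML integer reaches Ansible as base 10, and 420 decimal equals 0644 octal, so the directory still gets the intended 0644 bits. Quick check:

    python3 -c 'print(oct(420))'   # prints 0o644: unquoted "mode: 420" means 0644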
Dec 06 06:40:55 compute-1 sudo[135246]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:55 compute-1 sudo[135398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndrvgrxmxbmjnjvlwhpqefygslggchum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003255.6958256-743-169323801786137/AnsiballZ_stat.py'
Dec 06 06:40:55 compute-1 sudo[135398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:56 compute-1 python3.9[135400]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:56 compute-1 sudo[135398]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:56 compute-1 sudo[135476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyqttzclqgbshaidzwwugawozctjppom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003255.6958256-743-169323801786137/AnsiballZ_file.py'
Dec 06 06:40:56 compute-1 sudo[135476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:56.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:56 compute-1 python3.9[135478]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:40:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:56.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:56 compute-1 sudo[135476]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:57 compute-1 sudo[135628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtbzfugfodjfcqbqajuunixoetbwsckc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003256.886212-779-146718960132962/AnsiballZ_stat.py'
Dec 06 06:40:57 compute-1 sudo[135628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:57 compute-1 python3.9[135630]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:57 compute-1 sudo[135628]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:57 compute-1 sudo[135706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpclvveghvsnifdfevlbskdgcjjbkacq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003256.886212-779-146718960132962/AnsiballZ_file.py'
Dec 06 06:40:57 compute-1 sudo[135706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:57 compute-1 python3.9[135708]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:40:57 compute-1 sudo[135706]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:57 compute-1 ceph-mon[81689]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:58 compute-1 sudo[135858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtbrtmwezqbkayztegigsgpwynwzjmht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003258.1184578-815-249091242437505/AnsiballZ_systemd.py'
Dec 06 06:40:58 compute-1 sudo[135858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:40:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:58.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:40:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:40:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:58.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:58 compute-1 python3.9[135860]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:40:58 compute-1 systemd[1]: Reloading.
Dec 06 06:40:58 compute-1 systemd-rc-local-generator[135885]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:40:58 compute-1 systemd-sysv-generator[135889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:40:58 compute-1 ceph-mon[81689]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:58 compute-1 sudo[135858]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:59 compute-1 sudo[136047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myswtdikftakquontmbxdcziovleiuxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003259.2870932-839-81594704776521/AnsiballZ_stat.py'
Dec 06 06:40:59 compute-1 sudo[136047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:59 compute-1 python3.9[136049]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:59 compute-1 sudo[136047]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:00 compute-1 sudo[136125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-novqfalkbtpccyftfdotyreevxzrubvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003259.2870932-839-81594704776521/AnsiballZ_file.py'
Dec 06 06:41:00 compute-1 sudo[136125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:00 compute-1 python3.9[136127]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:00 compute-1 sudo[136125]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:00.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:00.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:00 compute-1 sudo[136277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfzomphzuxmbhqndlygpcvwcofqbdfiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003260.6395404-875-48862024047240/AnsiballZ_stat.py'
Dec 06 06:41:00 compute-1 sudo[136277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:02.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:02.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:04 compute-1 python3.9[136279]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:41:04 compute-1 sudo[136277]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:04.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:04 compute-1 sudo[136355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxhrtmbgslhyekxedjswbppfbvwietfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003260.6395404-875-48862024047240/AnsiballZ_file.py'
Dec 06 06:41:04 compute-1 sudo[136355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:04.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:04 compute-1 python3.9[136357]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:04 compute-1 sudo[136355]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:04 compute-1 ceph-mon[81689]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:41:05 compute-1 sudo[136507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vysirwvchxajygahasaoslndgqrcfjal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003265.0699446-911-133877356772954/AnsiballZ_systemd.py'
Dec 06 06:41:05 compute-1 sudo[136507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:05 compute-1 python3.9[136509]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:41:05 compute-1 systemd[1]: Reloading.
Dec 06 06:41:05 compute-1 systemd-rc-local-generator[136537]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:41:05 compute-1 systemd-sysv-generator[136540]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:41:06 compute-1 ceph-mon[81689]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:41:06 compute-1 ceph-mon[81689]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:41:06 compute-1 systemd[1]: Starting Create netns directory...
Dec 06 06:41:06 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 06:41:06 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 06:41:06 compute-1 systemd[1]: Finished Create netns directory.
Dec 06 06:41:06 compute-1 sudo[136507]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:06.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:06.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:06 compute-1 sudo[136703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmfhagpgyzreewuogmenyysbkcestkpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003266.5418074-941-113312094167536/AnsiballZ_file.py'
Dec 06 06:41:06 compute-1 sudo[136703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:06 compute-1 python3.9[136705]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:41:06 compute-1 sudo[136703]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:07 compute-1 sudo[136855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swhyokadipvtyeyhkmrnnuydzxuxvfme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003267.232207-965-85566557093248/AnsiballZ_stat.py'
Dec 06 06:41:07 compute-1 sudo[136855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:07 compute-1 python3.9[136857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:41:07 compute-1 sudo[136855]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:08 compute-1 sudo[136978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-henygmqyhhhzcydtvujjfdujjbqnzdkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003267.232207-965-85566557093248/AnsiballZ_copy.py'
Dec 06 06:41:08 compute-1 sudo[136978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:08 compute-1 python3.9[136980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003267.232207-965-85566557093248/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:41:08 compute-1 sudo[136978]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:08 compute-1 ceph-mon[81689]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:08.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:08.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:09 compute-1 ceph-mon[81689]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 06:41:09 compute-1 sudo[137130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzzevhyszomeyngczubvqrvljexztka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003268.8206108-1016-237317147065525/AnsiballZ_file.py'
Dec 06 06:41:09 compute-1 sudo[137130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:09 compute-1 python3.9[137132]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:41:09 compute-1 sudo[137130]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:09 compute-1 sudo[137282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsiwdczcpjeragyvizegczdpnzjnxrjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003269.6314604-1040-228663682875252/AnsiballZ_stat.py'
Dec 06 06:41:09 compute-1 sudo[137282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:10 compute-1 python3.9[137284]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:41:10 compute-1 sudo[137282]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:41:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:10 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:10.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:10.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:10 compute-1 sudo[137405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmaggfgntepvrpqpeiiiaysitrcodmhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003269.6314604-1040-228663682875252/AnsiballZ_copy.py'
Dec 06 06:41:10 compute-1 sudo[137405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:11 compute-1 python3.9[137407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003269.6314604-1040-228663682875252/.source.json _original_basename=.avz12o4a follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:11 compute-1 sudo[137405]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:11 compute-1 sudo[137557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xofosbuoeedqabowqoxaydknvmtqnohl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003271.3177056-1085-271401123050682/AnsiballZ_file.py'
Dec 06 06:41:11 compute-1 sudo[137557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:11 compute-1 python3.9[137559]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:11 compute-1 sudo[137557]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:12 compute-1 ceph-mon[81689]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 06:41:12 compute-1 sudo[137709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoixiwwooucrvmhxckvxclauznqojflv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003272.0822992-1109-14172649257927/AnsiballZ_stat.py'
Dec 06 06:41:12 compute-1 sudo[137709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:12 compute-1 sudo[137709]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:12.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:12.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:12 compute-1 sudo[137832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqktlrfvojmatonasowlyrlhmanjdeok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003272.0822992-1109-14172649257927/AnsiballZ_copy.py'
Dec 06 06:41:12 compute-1 sudo[137832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:13 compute-1 sudo[137832]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:13 compute-1 ceph-mon[81689]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 16 op/s
Dec 06 06:41:13 compute-1 sudo[137984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-answksypblntzegpipjflkvcypozapaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003273.5163445-1160-226679919046155/AnsiballZ_container_config_data.py'
Dec 06 06:41:13 compute-1 sudo[137984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:14 compute-1 python3.9[137986]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 06 06:41:14 compute-1 sudo[137984]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:14.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:14.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:14 compute-1 sudo[138136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psbxjxcxlnffeikiviommvaovgaehjfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003274.4277494-1187-215698446217061/AnsiballZ_container_config_hash.py'
Dec 06 06:41:14 compute-1 sudo[138136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:15 compute-1 python3.9[138138]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 06:41:15 compute-1 sudo[138136]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:15 compute-1 ceph-mon[81689]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 06 06:41:15 compute-1 sudo[138289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trhhksevlpqqxnkluklegbqxgwcalmok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003275.3626435-1214-203339556925980/AnsiballZ_podman_container_info.py'
Dec 06 06:41:15 compute-1 sudo[138289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:15 compute-1 python3.9[138291]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 06:41:16 compute-1 sudo[138289]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:16.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:16.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:17 compute-1 ceph-mon[81689]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 06 06:41:17 compute-1 sudo[138466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhbldibqeyuuuzfyhgncxjxwdrvmoctg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003276.9664145-1253-87860862828382/AnsiballZ_edpm_container_manage.py'
Dec 06 06:41:17 compute-1 sudo[138466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:17 compute-1 python3[138468]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 06:41:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:18.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:18.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:19 compute-1 ceph-mon[81689]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Dec 06 06:41:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:20.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:20.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:22.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:22.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:24.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:24.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:25 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:41:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:26.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:26.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:41:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:28.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:28 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:28.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:28 compute-1 ceph-mon[81689]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Dec 06 06:41:29 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:41:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:30.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 06:41:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:30 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:30.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:31 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.356117249s, txc = 0x55b5547d7800
Dec 06 06:41:31 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 10.624466896s
Dec 06 06:41:31 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 10.624466896s
Dec 06 06:41:31 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.356084824s, txc = 0x55b5546a8600
Dec 06 06:41:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:32.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:32.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:33 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:41:33 compute-1 sudo[138539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:33 compute-1 sudo[138539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:33 compute-1 sudo[138539]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:33 compute-1 sudo[138564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:41:33 compute-1 sudo[138564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:33 compute-1 sudo[138564]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:33 compute-1 sudo[138589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:33 compute-1 sudo[138589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:33 compute-1 sudo[138589]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:33 compute-1 sudo[138614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:41:33 compute-1 sudo[138614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:34.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:34.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:36.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:36.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:37 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:41:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:38.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:38.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:40.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:40.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:41 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:41:41 compute-1 podman[138525]: 2025-12-06 06:41:41.306909823 +0000 UTC m=+21.309509088 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 06:41:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 503..1222) lease_timeout -- calling new election
Dec 06 06:41:41 compute-1 ceph-mon[81689]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Dec 06 06:41:41 compute-1 ceph-mon[81689]: paxos.2).electionLogic(36) init, last seen epoch 36
Dec 06 06:41:42 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 10.958197594s
Dec 06 06:41:42 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 10.958197594s
Dec 06 06:41:42 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 21.581192017s, txc = 0x55b553cbf800
Dec 06 06:41:42 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:41:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:42.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:42.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:42 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:41:42 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:41:42 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.379572868s, txc = 0x55b5549e8f00
Dec 06 06:41:42 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.379138947s, txc = 0x55b554b23b00
Dec 06 06:41:43 compute-1 ceph-mon[81689]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s
Dec 06 06:41:43 compute-1 ceph-mon[81689]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 0 B/s wr, 85 op/s
Dec 06 06:41:43 compute-1 ceph-mon[81689]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 81 op/s
Dec 06 06:41:43 compute-1 ceph-mon[81689]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec 06 06:41:43 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:41:43 compute-1 ceph-mon[81689]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Dec 06 06:41:43 compute-1 ceph-mon[81689]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Dec 06 06:41:43 compute-1 ceph-mon[81689]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:43 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:41:43 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:41:43 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:41:43 compute-1 ceph-mon[81689]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:41:43 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 15m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:41:43 compute-1 ceph-mon[81689]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:41:43 compute-1 ceph-mon[81689]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:41:43 compute-1 ceph-mon[81689]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:41:43 compute-1 ceph-mon[81689]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:41:43 compute-1 ceph-mon[81689]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:43 compute-1 ceph-mon[81689]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:43 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:41:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:44.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:44.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:45 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:41:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:46.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:41:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:46.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:48 compute-1 ceph-mon[81689]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:41:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:48 compute-1 sudo[138614]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:48.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:48 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 06:41:48 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:41:48 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:41:48 compute-1 ceph-mon[81689]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:48 compute-1 ceph-mon[81689]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 852 B/s rd, 0 B/s wr, 1 op/s
Dec 06 06:41:48 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:41:48 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:41:48 compute-1 ceph-mon[81689]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:41:48 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 15m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:41:48 compute-1 ceph-mon[81689]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:41:48 compute-1 ceph-mon[81689]: Cluster is now healthy
Dec 06 06:41:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:50.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:50.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:50 compute-1 podman[138481]: 2025-12-06 06:41:50.872819651 +0000 UTC m=+32.973689543 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 06:41:51 compute-1 podman[138760]: 2025-12-06 06:41:50.978725476 +0000 UTC m=+0.020833657 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 06:41:51 compute-1 ceph-mon[81689]: mon.compute-1 calling monitor election
Dec 06 06:41:51 compute-1 ceph-mon[81689]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 06:41:51 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 06:41:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:41:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:41:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:41:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:41:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:41:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:41:51 compute-1 podman[138760]: 2025-12-06 06:41:51.172694161 +0000 UTC m=+0.214802342 container create 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 06:41:51 compute-1 python3[138468]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
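The PODMAN-CONTAINER-DEBUG line is edpm_container_manage echoing the exact podman create invocation it derived from the config_data dict attached as a container label. The mapping is mechanical: environment entries become --env, each volume becomes a --volume flag, and net/pid/privileged become the matching runtime flags. A simplified sketch of that flattening for a subset of keys (podman_create_args is a hypothetical helper, not the real module code):

    def podman_create_args(name, cfg):
        # Flatten a few edpm config_data keys into 'podman create' arguments
        # (illustrative only; the real module also handles labels, logging,
        # healthchecks, cgroupns, and the conmon pidfile).
        args = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if cfg.get("net") == "host":
            args += ["--network", "host"]
        if cfg.get("pid") == "host":
            args += ["--pid", "host"]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return args

Fed the config_data shown above, this yields the same --env/--network/--pid/--privileged/--volume sequence visible in the logged command line, restricted to the keys handled here.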
Dec 06 06:41:51 compute-1 sudo[138466]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:51 compute-1 sudo[138948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwvrftydgfrgcyghoeydapcdhmojhokr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003311.490314-1277-91501195672091/AnsiballZ_stat.py'
Dec 06 06:41:51 compute-1 sudo[138948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:51 compute-1 python3.9[138950]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:41:51 compute-1 sudo[138948]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:52.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:41:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:52.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
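The anonymous "HEAD / HTTP/1.0" requests arriving every two seconds from 192.168.122.100 and 192.168.122.102 are consistent with external liveness probes against the RGW beast frontend (a load balancer or monitoring check; the log does not identify the caller). A probe of that shape, sketched in Python; the port is an assumption, since the log omits the bound address:

    import http.client

    def rgw_alive(host, port=8080, timeout=2.0):
        # HEAD / against the RGW frontend; HTTP 200 means the gateway answers.
        # (Port 8080 is an assumption; the log does not show the listen port.)
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        finally:
            conn.close()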
Dec 06 06:41:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:54 compute-1 ceph-mon[81689]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 06:41:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:54.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:54.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:54 compute-1 sudo[139102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdxzyjmehucwoyyeafjjuaktalhtdhzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003314.55472-1304-31483848524827/AnsiballZ_file.py'
Dec 06 06:41:54 compute-1 sudo[139102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:55 compute-1 python3.9[139104]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:55 compute-1 sudo[139102]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:55 compute-1 sudo[139178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzljfzwoipghnqnstvfdrpcjlxrqugfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003314.55472-1304-31483848524827/AnsiballZ_stat.py'
Dec 06 06:41:55 compute-1 sudo[139178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:55 compute-1 python3.9[139180]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:41:55 compute-1 sudo[139178]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:55 compute-1 sudo[139329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpnoownvjbbioriafvaxonxvyovhdtjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003315.6063237-1304-175449325414208/AnsiballZ_copy.py'
Dec 06 06:41:55 compute-1 sudo[139329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:56 compute-1 python3.9[139331]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003315.6063237-1304-175449325414208/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:56 compute-1 sudo[139329]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:56 compute-1 ceph-mon[81689]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Dec 06 06:41:56 compute-1 ceph-mon[81689]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Dec 06 06:41:56 compute-1 sudo[139405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqolvwbpnfbpyafaxpzazwcnxjxrmjjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003315.6063237-1304-175449325414208/AnsiballZ_systemd.py'
Dec 06 06:41:56 compute-1 sudo[139405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:56 compute-1 python3.9[139407]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:41:56 compute-1 systemd[1]: Reloading.
Dec 06 06:41:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:56.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:56.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:56 compute-1 systemd-rc-local-generator[139430]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:41:56 compute-1 systemd-sysv-generator[139433]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:41:57 compute-1 sudo[139405]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:57 compute-1 ceph-mon[81689]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 06 06:41:57 compute-1 sudo[139516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ectvvwjmxgktziynuqrliifpiujxlymo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003315.6063237-1304-175449325414208/AnsiballZ_systemd.py'
Dec 06 06:41:57 compute-1 sudo[139516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:57 compute-1 python3.9[139518]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
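Taken together, tasks 139331, 139407, and 139518 are the standard unit-refresh sequence: copy the regenerated edpm_ovn_metadata_agent.service into /etc/systemd/system, daemon-reload, then enable and restart the unit. The shell-level equivalent of the two ansible-systemd calls, sketched in Python:

    import subprocess

    def systemctl(*args):
        # Equivalent of the ansible-systemd module invocations logged above.
        subprocess.run(["systemctl", *args], check=True)

    systemctl("daemon-reload")                                  # daemon_reload=True
    systemctl("enable", "edpm_ovn_metadata_agent.service")      # enabled=True
    systemctl("restart", "edpm_ovn_metadata_agent.service")     # state=restarted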
Dec 06 06:41:57 compute-1 systemd[1]: Reloading.
Dec 06 06:41:57 compute-1 systemd-rc-local-generator[139546]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:41:57 compute-1 systemd-sysv-generator[139549]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:41:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:58.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:41:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:58.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:59 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Dec 06 06:41:59 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:41:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd22ef8c0673f36f1877d42651d1cedd66e39abd62aab745a3af3d3e1fbe67c8/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 06 06:41:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd22ef8c0673f36f1877d42651d1cedd66e39abd62aab745a3af3d3e1fbe67c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 06:41:59 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28.
Dec 06 06:41:59 compute-1 podman[139559]: 2025-12-06 06:41:59.609545634 +0000 UTC m=+0.439882049 container init 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: + sudo -E kolla_set_configs
Dec 06 06:41:59 compute-1 podman[139559]: 2025-12-06 06:41:59.637552364 +0000 UTC m=+0.467888779 container start 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Validating config file
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Copying service configuration files
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Writing out command to execute
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: ++ cat /run_command
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: + CMD=neutron-ovn-metadata-agent
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: + ARGS=
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: + sudo kolla_copy_cacerts
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: + [[ ! -n '' ]]
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: + . kolla_extend_start
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: Running command: 'neutron-ovn-metadata-agent'
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: + umask 0022
Dec 06 06:41:59 compute-1 ovn_metadata_agent[139575]: + exec neutron-ovn-metadata-agent
Dec 06 06:41:59 compute-1 edpm-start-podman-container[139559]: ovn_metadata_agent
Dec 06 06:41:59 compute-1 edpm-start-podman-container[139558]: Creating additional drop-in dependency for "ovn_metadata_agent" (69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28)
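The INFO:__main__ block above is kolla_set_configs applying the COPY_ALWAYS strategy: it validates /var/lib/kolla/config_files/config.json, copies each declared file into place, fixes ownership and mode, and writes the service command to /run_command, which the wrapper then cats and execs (the '+ exec neutron-ovn-metadata-agent' line). A heavily reduced sketch of that loop, assuming the standard kolla config.json keys (command, plus config_files entries with source/dest/owner/perm):

    import json
    import os
    import shutil

    def set_configs(path="/var/lib/kolla/config_files/config.json"):
        # Reduced sketch of kolla_set_configs' COPY_ALWAYS strategy; the real
        # script also handles globs, directories, and optional files.
        with open(path) as f:
            cfg = json.load(f)
        for item in cfg.get("config_files", []):
            shutil.copy(item["source"], item["dest"])            # "Copying ..."
            owner = item.get("owner", "root").split(":")[0]
            shutil.chown(item["dest"], user=owner)               # "Setting permission ..."
            os.chmod(item["dest"], int(item.get("perm", "0600"), 8))
        with open("/run_command", "w") as f:                     # "Writing out command ..."
            f.write(cfg["command"])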
Dec 06 06:41:59 compute-1 podman[139582]: 2025-12-06 06:41:59.802907802 +0000 UTC m=+0.156442856 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
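The health_status=healthy event comes from the transient healthcheck unit systemd started at 06:41:59: it periodically invokes podman healthcheck run against the container, which executes the configured test (/openstack/healthcheck, bind-mounted read-only) and exits 0 on success. Polling the same check by hand, as a sketch:

    import subprocess

    def container_healthy(cid):
        # 'podman healthcheck run' exits 0 when the container's test passes.
        result = subprocess.run(["podman", "healthcheck", "run", cid],
                                capture_output=True)
        return result.returncode == 0

    print(container_healthy("69167d871c6c"))   # id prefix of the container above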
Dec 06 06:41:59 compute-1 systemd[1]: Reloading.
Dec 06 06:41:59 compute-1 systemd-sysv-generator[139653]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:41:59 compute-1 systemd-rc-local-generator[139650]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:42:00 compute-1 systemd[1]: Started ovn_metadata_agent container.
Dec 06 06:42:00 compute-1 sudo[139516]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:00.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:00.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.559 139580 INFO neutron.common.config [-] Logging enabled!
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.559 139580 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.559 139580 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.560 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.560 139580 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.560 139580 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.560 139580 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.560 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.560 139580 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.560 139580 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.561 139580 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.561 139580 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.561 139580 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.561 139580 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.561 139580 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.561 139580 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.561 139580 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.561 139580 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.561 139580 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.562 139580 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.562 139580 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.562 139580 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.562 139580 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.562 139580 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.562 139580 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.562 139580 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.562 139580 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.562 139580 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.563 139580 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.563 139580 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.563 139580 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.563 139580 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.563 139580 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.563 139580 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.563 139580 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.563 139580 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.563 139580 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.564 139580 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.564 139580 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.564 139580 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.564 139580 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.564 139580 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.564 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.564 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.564 139580 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.564 139580 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.564 139580 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.565 139580 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.565 139580 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.565 139580 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.565 139580 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.565 139580 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.565 139580 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.565 139580 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.565 139580 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.565 139580 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.565 139580 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.566 139580 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.566 139580 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.566 139580 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.566 139580 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.566 139580 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.566 139580 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.566 139580 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.566 139580 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.566 139580 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.567 139580 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.567 139580 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.567 139580 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.567 139580 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.567 139580 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.567 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.567 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.567 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.567 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.568 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.568 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.568 139580 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.568 139580 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.568 139580 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.568 139580 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.568 139580 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.568 139580 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.568 139580 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.568 139580 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.569 139580 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.569 139580 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.569 139580 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.569 139580 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.569 139580 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.569 139580 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.569 139580 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.569 139580 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.569 139580 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.569 139580 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.570 139580 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.570 139580 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.570 139580 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.570 139580 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.570 139580 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.570 139580 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.570 139580 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.570 139580 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.570 139580 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.570 139580 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.571 139580 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.571 139580 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.571 139580 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.571 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.571 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.571 139580 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.571 139580 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.571 139580 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.572 139580 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.572 139580 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.572 139580 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.572 139580 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.572 139580 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.572 139580 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.572 139580 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.572 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.572 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.572 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.573 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.573 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.573 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.573 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.573 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.573 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.573 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.573 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.573 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.574 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.574 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.574 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.574 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.574 139580 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.574 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.574 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.574 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.574 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.575 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.575 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.575 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.575 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.575 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.575 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.575 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.575 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.575 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.575 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.576 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.576 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.576 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.576 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.576 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.576 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.576 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.576 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.576 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.577 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.577 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.577 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.577 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.577 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.577 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.577 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.577 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.577 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.578 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.578 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.578 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.578 139580 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.578 139580 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.578 139580 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.578 139580 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.578 139580 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.578 139580 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.579 139580 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.579 139580 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.579 139580 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.579 139580 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.579 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.579 139580 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.579 139580 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.579 139580 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.579 139580 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.579 139580 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.580 139580 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.580 139580 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.580 139580 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.580 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.580 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.580 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.580 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.580 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.580 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.581 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.581 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.581 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.581 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.581 139580 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.581 139580 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.581 139580 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.581 139580 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.581 139580 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.582 139580 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.582 139580 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.582 139580 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.582 139580 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.582 139580 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.582 139580 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.582 139580 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.582 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.582 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.583 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.583 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.583 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.583 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.583 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.583 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.583 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.583 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.584 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.584 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.584 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.584 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.584 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.584 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.584 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.584 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.585 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.585 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.585 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.585 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.585 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.585 139580 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.585 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.585 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.586 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.586 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.586 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.586 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.586 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.586 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.586 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.586 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.586 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.587 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.587 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.587 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.587 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.587 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.587 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.587 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.587 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.588 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.588 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.588 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.588 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.588 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.588 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.588 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.588 139580 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.588 139580 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.589 139580 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.589 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.589 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.589 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.589 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.589 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.589 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.589 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.589 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.590 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.590 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.590 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.590 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.590 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.590 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.590 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.590 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.590 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.591 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.591 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.591 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.591 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.591 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.591 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.591 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.591 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.591 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.592 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.592 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.592 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.592 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.592 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.592 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.592 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.592 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.592 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.593 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.593 139580 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.593 139580 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
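
The block ending in the asterisk row above is the tail of oslo.config's startup dump: with debug enabled, ConfigOpts.log_opt_values() (the cfg.py:2609 frames) emits one DEBUG line per registered option, group by group, and masks any option registered with secret=True, which is why oslo_messaging_notifications.transport_url prints as ****. A minimal sketch of the same mechanism, using illustrative option names rather than neutron's real definitions:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.IntOpt('api_workers', default=None),
        cfg.StrOpt('transport_url', secret=True),  # printed as '****'
    ])
    CONF([])                               # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)
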
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.601 139580 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.601 139580 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.601 139580 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.601 139580 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.602 139580 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
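
Here the agent opens its local OVS connection (the ovs.ovsdb_connection = tcp:127.0.0.1:6640 option in the dump above): ovsdbapp builds a python-ovs IDL, auto-creates indexes over Bridge.name, Port.name and Interface.name so by-name lookups avoid full-table scans, then connects. A sketch of the same connection made directly with ovsdbapp; the endpoint and timeout are copied from the dump, br-int from the chassis line below:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=180)
    ovs = impl_idl.OvsdbIdl(conn)

    # An indexed lookup of the kind the agent performs (Bridge.name):
    print(ovs.br_exists('br-int').execute(check_error=True))
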
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.614 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 03fe054d-d727-4af3-9c5e-92e57505f242 (UUID: 03fe054d-d727-4af3-9c5e-92e57505f242) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.640 139580 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.640 139580 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.640 139580 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.640 139580 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.647 139580 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.654 139580 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
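
This southbound session is dialed with the ovn.ovn_sb_* options from the dump: an ssl: endpoint plus a client key, certificate and CA bundle. With python-ovs the TLS material is installed class-wide on Stream before any ssl: target is opened; a sketch, reusing the exact paths logged above:

    from ovs.stream import Stream

    Stream.ssl_set_private_key_file('/etc/pki/tls/private/ovndb.key')
    Stream.ssl_set_certificate_file('/etc/pki/tls/certs/ovndb.crt')
    Stream.ssl_set_ca_cert_file('/etc/pki/tls/certs/ovndbca.crt')
    # Every later connection to ssl:ovsdbserver-sb.openstack.svc:6642
    # now presents this client certificate and verifies the server
    # against the CA above.
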
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.660 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '03fe054d-d727-4af3-9c5e-92e57505f242'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], external_ids={}, name=03fe054d-d727-4af3-9c5e-92e57505f242, nb_cfg_timestamp=1765003226901, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.661 139580 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f2f83fb6f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.661 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.661 139580 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.662 139580 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.662 139580 INFO oslo_service.service [-] Starting 1 workers
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.666 139580 DEBUG oslo_service.service [-] Started child 139689 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
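
"Starting 1 workers" / "Started child 139689" is oslo.service's ProcessLauncher at work: the parent (139580) forks one worker for the metadata proxy and stays behind as a supervisor that respawns children if they die. A sketch of that pattern, with a stand-in service class rather than the agent's real worker:

    from oslo_config import cfg
    from oslo_service import service

    class DemoService(service.Service):
        def start(self):
            super().start()
            print('worker running')

    launcher = service.ProcessLauncher(cfg.CONF)
    launcher.launch_service(DemoService(), workers=1)  # 'Starting 1 workers'
    launcher.wait()  # parent blocks here, restarting dead children
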
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.669 139580 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpsw4ufy81/privsep.sock']
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.672 139689 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-8226271'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.710 139689 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.711 139689 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.711 139689 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.715 139689 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.722 139689 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 06 06:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:01.730 139689 INFO eventlet.wsgi.server [-] (139689) wsgi starting up on http:/var/lib/neutron/metadata_proxy
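
The eventlet line above is not a mangled URL: when the WSGI server is bound to a UNIX socket, eventlet prints the socket path after the http: prefix, so the proxy is serving on /var/lib/neutron/metadata_proxy rather than on a TCP port. A stdlib-only sketch for talking HTTP to such a socket (the request path and headers the real handler expects are out of scope here):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.request('GET', '/')
    print(conn.getresponse().status)
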
Dec 06 06:42:02 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 06 06:42:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:02.352 139580 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 06:42:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:02.352 139580 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpsw4ufy81/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 06:42:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:02.200 139694 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 06:42:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:02.204 139694 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 06:42:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:02.206 139694 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 06 06:42:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:02.207 139694 INFO oslo.privsep.daemon [-] privsep daemon running as pid 139694
Dec 06 06:42:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:02.355 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[8afe9626-7039-4c25-bca9-0cd41c093f58]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
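
These lines trace oslo.privsep's rootwrap startup path: the unprivileged agent execs sudo neutron-rootwrap ... privsep-helper, the helper drops to the capability set named in its privsep_* config section, and the two processes then exchange calls over privsep.sock (the reply[...] lines). The numeric capability lists in the dump are plain Linux capability numbers: 21=CAP_SYS_ADMIN, 12=CAP_NET_ADMIN, 1=CAP_DAC_OVERRIDE, 2=CAP_DAC_READ_SEARCH, 19=CAP_SYS_PTRACE, which matches the eff/prm set logged by pid 139694. A sketch of a context like privsep_namespace ([21]); the module path here is hypothetical:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    namespace_cmd = priv_context.PrivContext(
        'demo',                            # prefix for entrypoint names
        cfg_section='privsep_namespace',   # the [privsep_namespace] options above
        pypath=__name__ + '.namespace_cmd',
        capabilities=[caps.CAP_SYS_ADMIN],  # 21
    )

    @namespace_cmd.entrypoint
    def create_netns(name):
        # Body runs inside the privileged daemon (uid 0, CAP_SYS_ADMIN only).
        ...
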
Dec 06 06:42:02 compute-1 sshd-session[131020]: Connection closed by 192.168.122.30 port 50336
Dec 06 06:42:02 compute-1 sshd-session[131017]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:42:02 compute-1 systemd[1]: session-47.scope: Deactivated successfully.
Dec 06 06:42:02 compute-1 systemd[1]: session-47.scope: Consumed 55.399s CPU time.
Dec 06 06:42:02 compute-1 systemd-logind[788]: Session 47 logged out. Waiting for processes to exit.
Dec 06 06:42:02 compute-1 systemd-logind[788]: Removed session 47.
Dec 06 06:42:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:42:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:02.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:42:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:02.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:02.873 139694 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:42:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:02.873 139694 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:42:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:02.873 139694 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.417 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[fac36b86-2038-4906-8b70-73e4edfa30fc]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.419 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, column=external_ids, values=({'neutron:ovn-metadata-id': '17d5e055-98b2-52fd-b756-fa87b72a9ccf'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.427 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
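
These two single-command transactions register the agent in the southbound DB: DbAddCommand merges a neutron:ovn-metadata-id entry into Chassis_Private.external_ids, and DbSetCommand records the ovn bridge alongside it. Expressed with ovsdbapp's generic commands (sb_api stands for the OvsdbSbOvnIdl connection built earlier, and the two steps are folded into one transaction for brevity):

    chassis = '03fe054d-d727-4af3-9c5e-92e57505f242'
    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_add(
            'Chassis_Private', chassis, 'external_ids',
            {'neutron:ovn-metadata-id': '17d5e055-98b2-52fd-b756-fa87b72a9ccf'}))
        txn.add(sb_api.db_set(
            'Chassis_Private', chassis,
            ('external_ids', {'neutron:ovn-bridge': 'br-int'})))
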
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.432 139580 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.433 139580 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.433 139580 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.433 139580 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.433 139580 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.433 139580 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.433 139580 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.433 139580 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.433 139580 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.433 139580 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.434 139580 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.434 139580 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.434 139580 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.434 139580 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.434 139580 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.434 139580 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.434 139580 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.434 139580 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.434 139580 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.435 139580 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.435 139580 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.435 139580 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.435 139580 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.435 139580 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.435 139580 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.435 139580 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.435 139580 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.435 139580 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.436 139580 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.436 139580 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.436 139580 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.436 139580 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.436 139580 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.436 139580 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.436 139580 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.436 139580 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.436 139580 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.437 139580 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.437 139580 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.437 139580 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.437 139580 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.437 139580 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.437 139580 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.437 139580 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.438 139580 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.438 139580 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.438 139580 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.438 139580 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.438 139580 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.438 139580 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.438 139580 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.438 139580 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.438 139580 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.439 139580 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.439 139580 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.439 139580 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.439 139580 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.439 139580 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.439 139580 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.439 139580 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.439 139580 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.439 139580 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.439 139580 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.440 139580 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.440 139580 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.440 139580 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.440 139580 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.440 139580 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.440 139580 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.440 139580 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.440 139580 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.440 139580 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.441 139580 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.441 139580 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.441 139580 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.441 139580 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.441 139580 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.441 139580 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.441 139580 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.441 139580 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.441 139580 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.441 139580 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.442 139580 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.442 139580 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.442 139580 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.442 139580 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.442 139580 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.442 139580 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.442 139580 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.442 139580 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.442 139580 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.442 139580 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.443 139580 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.443 139580 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.443 139580 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.443 139580 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.443 139580 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.443 139580 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.443 139580 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.443 139580 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.443 139580 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.443 139580 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.443 139580 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.444 139580 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.444 139580 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.444 139580 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.444 139580 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.444 139580 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.444 139580 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.444 139580 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.444 139580 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.444 139580 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.445 139580 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.445 139580 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.445 139580 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.445 139580 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.445 139580 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.445 139580 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.445 139580 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.445 139580 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.445 139580 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.446 139580 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.446 139580 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.446 139580 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.446 139580 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.446 139580 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.446 139580 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.446 139580 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.446 139580 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.446 139580 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.447 139580 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.447 139580 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.447 139580 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.447 139580 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.447 139580 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.447 139580 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.447 139580 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.447 139580 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.447 139580 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.448 139580 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.448 139580 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.448 139580 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.448 139580 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.448 139580 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.448 139580 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.448 139580 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.448 139580 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.448 139580 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.448 139580 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.448 139580 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.449 139580 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.449 139580 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.449 139580 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.449 139580 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.449 139580 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.449 139580 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.449 139580 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.449 139580 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.449 139580 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.449 139580 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.450 139580 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.450 139580 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.450 139580 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.450 139580 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.450 139580 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.450 139580 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.450 139580 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.450 139580 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.450 139580 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.450 139580 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.450 139580 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.451 139580 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.451 139580 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.451 139580 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.451 139580 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.451 139580 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.451 139580 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.451 139580 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.451 139580 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.451 139580 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.451 139580 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.452 139580 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.452 139580 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.452 139580 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.452 139580 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.452 139580 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.452 139580 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.452 139580 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.452 139580 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.453 139580 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.453 139580 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.453 139580 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.453 139580 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.453 139580 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.453 139580 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.453 139580 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.453 139580 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.453 139580 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.454 139580 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.454 139580 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.454 139580 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.454 139580 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.454 139580 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.454 139580 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.454 139580 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.454 139580 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.454 139580 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.454 139580 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.455 139580 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.455 139580 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.455 139580 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.455 139580 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.455 139580 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.455 139580 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.455 139580 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.455 139580 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.455 139580 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.455 139580 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.456 139580 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.456 139580 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.456 139580 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.456 139580 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.456 139580 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.456 139580 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.456 139580 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.456 139580 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.456 139580 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.456 139580 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.457 139580 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.457 139580 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.457 139580 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.457 139580 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.457 139580 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.457 139580 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.457 139580 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.457 139580 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.457 139580 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.457 139580 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.458 139580 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.458 139580 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.458 139580 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.458 139580 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.458 139580 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.458 139580 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.458 139580 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.458 139580 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.458 139580 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.458 139580 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.459 139580 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.459 139580 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.459 139580 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.459 139580 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.459 139580 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.459 139580 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.459 139580 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.459 139580 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.459 139580 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.459 139580 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.460 139580 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.460 139580 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.460 139580 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.460 139580 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.460 139580 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.460 139580 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.460 139580 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.460 139580 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.460 139580 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.461 139580 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.461 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.461 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.461 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.461 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.461 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.461 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.461 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.461 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.462 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.462 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.462 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.462 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.462 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.462 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.462 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.462 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.462 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.463 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.463 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.463 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.463 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.463 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.463 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.463 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.463 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.463 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.463 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.464 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.464 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.464 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.464 139580 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.464 139580 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.464 139580 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.464 139580 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.464 139580 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:42:03.464 139580 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
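The block ending above is oslo.config's standard startup dump: at DEBUG level, log_opt_values() walks every registered option group ([ovn], [OVS], [oslo_messaging_rabbit], ...) and emits one line per option, masking secrets such as transport_url as ****, then closes with the row of asterisks. A minimal sketch of how a service produces this dump; the two option registrations here are illustrative stand-ins mirroring values from the log, not the agent's real registration code:

```python
# Minimal oslo.config value-dump sketch (option names are examples
# mirroring two values from the log, not the agent's actual code).
import logging
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts(
    [
        cfg.StrOpt('ovn_sb_connection',
                   default='ssl:ovsdbserver-sb.openstack.svc:6642'),
        cfg.IntOpt('ovsdb_probe_interval', default=60000),
    ],
    group='ovn',
)

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF([])                                   # parse an (empty) command line
CONF.log_opt_values(LOG, logging.DEBUG)    # prints "ovn.ovn_sb_connection = ..." lines
```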
Dec 06 06:42:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:42:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:04.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:42:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:04.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:04 compute-1 ceph-mon[81689]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Dec 06 06:42:04 compute-1 ceph-mon[81689]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Dec 06 06:42:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:06.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:42:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:06.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:42:08 compute-1 sshd-session[139719]: Accepted publickey for zuul from 192.168.122.30 port 41254 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:42:08 compute-1 systemd-logind[788]: New session 48 of user zuul.
Dec 06 06:42:08 compute-1 systemd[1]: Started Session 48 of User zuul.
Dec 06 06:42:08 compute-1 sshd-session[139719]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:42:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:08.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:08.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
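The paired "starting new request" / "req done" radosgw lines, repeating every two seconds from 192.168.122.100 and 192.168.122.102, are anonymous "HEAD / HTTP/1.0" load-balancer health probes against the beast frontend, each answered 200 with an empty body. A hedged sketch of an equivalent probe; the listener address and port are assumptions, since these lines show only the client side:

```python
# Reproduce the anonymous HEAD / health probe seen in the beast log.
# Host and port below are assumptions; the log records only the
# probing clients (192.168.122.100/.102), not the listener socket.
import http.client

conn = http.client.HTTPConnection('192.168.122.101', 8080, timeout=5)
conn.request('HEAD', '/')
resp = conn.getresponse()
print(resp.status)    # a healthy radosgw answers 200 with no body
conn.close()
```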
Dec 06 06:42:09 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:42:09 compute-1 python3.9[139872]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:42:10 compute-1 sudo[140026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufoigmjtxhmpxyhmglsvzhyabtywcwiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003329.9362888-68-251852627632866/AnsiballZ_command.py'
Dec 06 06:42:10 compute-1 sudo[140026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:10 compute-1 python3.9[140028]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
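The AnsiballZ command above reduces to a single podman query: print the container name if and only if a container named exactly nova_virtlogd exists. The same check outside Ansible, with the arguments taken verbatim from the logged _raw_params:

```python
# Container-existence check equivalent to the Ansible task above;
# the podman arguments are copied from the logged _raw_params.
import subprocess

result = subprocess.run(
    ['podman', 'ps', '-a',
     '--filter', 'name=^nova_virtlogd$',
     '--format', '{{.Names}}'],
    capture_output=True, text=True, check=True,
)
print('nova_virtlogd present:', bool(result.stdout.strip()))
```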
Dec 06 06:42:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:10.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:10 compute-1 sudo[140026]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:42:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:10.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:42:11 compute-1 sudo[140191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qosoxnezurmgcfihihrmljfwjosbgmbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003331.2130027-101-121161826565420/AnsiballZ_systemd_service.py'
Dec 06 06:42:11 compute-1 sudo[140191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:12 compute-1 python3.9[140193]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:42:12 compute-1 systemd[1]: Reloading.
Dec 06 06:42:12 compute-1 systemd-rc-local-generator[140216]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:42:12 compute-1 systemd-sysv-generator[140222]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:42:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 503..1243) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 5.363191605s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec 06 06:42:12 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-1[81685]: 2025-12-06T06:42:12.689+0000 7fc98ce5b640 -1 mon.compute-1@2(peon).paxos(paxos updating c 503..1243) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 5.363191605s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec 06 06:42:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:12 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:42:12 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:42:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:12.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:12 compute-1 sudo[140191]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:12.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:13 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:42:13 compute-1 python3.9[140378]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:42:13 compute-1 network[140395]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:42:13 compute-1 network[140396]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:42:13 compute-1 network[140397]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:42:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:14.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:42:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:14.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:42:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:16.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:16.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:17 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:42:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:42:17 compute-1 ceph-mon[81689]: paxos.2).electionLogic(42) init, last seen epoch 42
Dec 06 06:42:17 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:42:17 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:42:17 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_timecheck drop unexpected msg
Dec 06 06:42:17 compute-1 sudo[140532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:42:17 compute-1 sudo[140532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:42:17 compute-1 sudo[140532]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:17 compute-1 sudo[140557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:42:17 compute-1 sudo[140557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:42:17 compute-1 sudo[140557]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:42:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:18.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:18.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:19 compute-1 podman[140605]: 2025-12-06 06:42:19.097269686 +0000 UTC m=+0.078979505 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
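That single journal line is podman's periodic health_status event for ovn_controller: the embedded config_data shows the check is the bind-mounted /openstack/healthcheck script, currently healthy with a failing streak of 0. A hedged way to read the same state on demand, assuming the container is present and healthcheck-enabled:

```python
# Query the health state podman logs as health_status=healthy above.
# Container name comes from the log; the Go template is the standard
# inspect path for healthcheck-enabled containers.
import subprocess

status = subprocess.run(
    ['podman', 'inspect', 'ovn_controller',
     '--format', '{{.State.Health.Status}}'],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(status)    # expected: "healthy" while the failing streak is 0
```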
Dec 06 06:42:19 compute-1 sudo[140733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxuipsffrfbpdcqwxgjhitvsccayyrcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003338.9952056-158-45860324515538/AnsiballZ_systemd_service.py'
Dec 06 06:42:19 compute-1 sudo[140733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:19 compute-1 python3.9[140735]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:19 compute-1 sudo[140733]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:19 compute-1 sudo[140886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnhnojtflijdqcbhpzsdjmmlvthejikw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003339.7027025-158-54638469148723/AnsiballZ_systemd_service.py'
Dec 06 06:42:19 compute-1 sudo[140886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:19 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 06:42:19 compute-1 ceph-mon[81689]: mon.compute-2 is new leader, mons compute-2,compute-1 in quorum (ranks 1,2)
Dec 06 06:42:19 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 06:42:19 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 06:42:19 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 06:42:19 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:42:19 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:42:19 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:42:19 compute-1 ceph-mon[81689]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:42:19 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 16m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:42:19 compute-1 ceph-mon[81689]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Dec 06 06:42:19 compute-1 ceph-mon[81689]: Cluster is now healthy
Dec 06 06:42:19 compute-1 ceph-mon[81689]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
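The ceph-mon burst above records a monitor election: compute-2 briefly leads a two-member quorum, compute-0 then rejoins and wins, the MON_DOWN health check clears, and the cluster returns to HEALTH_OK with all three mons in ranks 0-2. One way to confirm the restored quorum from this host (assumes the ceph CLI and a usable admin keyring, which the log does not show):

```python
# Confirm the three-monitor quorum reported after the election.
# Assumes `ceph` CLI access with an admin keyring on this host.
import json
import subprocess

raw = subprocess.run(
    ['ceph', 'quorum_status', '--format', 'json'],
    capture_output=True, text=True, check=True,
).stdout
print(json.loads(raw)['quorum_names'])   # e.g. ['compute-0', 'compute-2', 'compute-1']
```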
Dec 06 06:42:20 compute-1 python3.9[140888]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:20 compute-1 sudo[140886]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:20 compute-1 sudo[141039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evhlznqiswiirraigsubxuhsxinepslg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003340.4175873-158-52221629454031/AnsiballZ_systemd_service.py'
Dec 06 06:42:20 compute-1 sudo[141039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:42:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:20.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:42:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:20.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:20 compute-1 python3.9[141041]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:21 compute-1 sudo[141039]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:21 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 06:42:21 compute-1 ceph-mon[81689]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:21 compute-1 sudo[141192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sivnidtyroyqyomoftyavzbqxoirtlrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003341.1375656-158-189666511088367/AnsiballZ_systemd_service.py'
Dec 06 06:42:21 compute-1 sudo[141192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:21 compute-1 python3.9[141194]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:21 compute-1 sudo[141192]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:22 compute-1 sudo[141345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koxlsxfvffwdlvfjrfhreghsiytbasof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003341.8333342-158-281178609338182/AnsiballZ_systemd_service.py'
Dec 06 06:42:22 compute-1 sudo[141345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:22 compute-1 python3.9[141347]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:22 compute-1 sudo[141345]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:22 compute-1 sudo[141498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sphnitkblkcjipdxnykmsglnkwdlntuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003342.583164-158-209275251655437/AnsiballZ_systemd_service.py'
Dec 06 06:42:22 compute-1 sudo[141498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:22.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:42:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:22.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:42:23 compute-1 ceph-mon[81689]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:23 compute-1 python3.9[141500]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:23 compute-1 sudo[141498]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:23 compute-1 sudo[141651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kplbrdneollcidziksueedgzfgtkyjnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003343.2655406-158-62452186748112/AnsiballZ_systemd_service.py'
Dec 06 06:42:23 compute-1 sudo[141651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:24 compute-1 python3.9[141653]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:24 compute-1 sudo[141651]: pam_unix(sudo:session): session closed for user root
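Between 06:42:19 and 06:42:24 the zuul playbook walks the old TripleO libvirt units, first tripleo_nova_libvirt.target and then each tripleo_nova_virt*d service, stopping and disabling every one via ansible.builtin.systemd_service. A compact sketch of the same loop, with the unit names copied from the logged module invocations:

```python
# Stop and disable the TripleO libvirt units as the Ansible tasks do,
# one unit at a time (names taken from the logged module invocations).
import subprocess

UNITS = [
    'tripleo_nova_libvirt.target',
    'tripleo_nova_virtlogd_wrapper.service',
    'tripleo_nova_virtnodedevd.service',
    'tripleo_nova_virtproxyd.service',
    'tripleo_nova_virtqemud.service',
    'tripleo_nova_virtsecretd.service',
    'tripleo_nova_virtstoraged.service',
]
for unit in UNITS:
    subprocess.run(['systemctl', 'stop', unit], check=False)     # ignore already-stopped
    subprocess.run(['systemctl', 'disable', unit], check=False)  # ignore not-enabled
```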
Dec 06 06:42:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:24.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:24.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:25 compute-1 ceph-mon[81689]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.095693) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345095765, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2430, "num_deletes": 252, "total_data_size": 6179315, "memory_usage": 6239104, "flush_reason": "Manual Compaction"}
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345147648, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 4042481, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9993, "largest_seqno": 12418, "table_properties": {"data_size": 4032610, "index_size": 6301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20164, "raw_average_key_size": 20, "raw_value_size": 4012601, "raw_average_value_size": 4036, "num_data_blocks": 281, "num_entries": 994, "num_filter_entries": 994, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003070, "oldest_key_time": 1765003070, "file_creation_time": 1765003345, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 51992 microseconds, and 10468 cpu microseconds.
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.147687) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 4042481 bytes OK
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.147703) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.149727) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.149742) EVENT_LOG_v1 {"time_micros": 1765003345149737, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.149759) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6168605, prev total WAL file size 6168605, number of live WAL files 2.
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.151204) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3947KB)], [21(8338KB)]
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345151274, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 12581300, "oldest_snapshot_seqno": -1}
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4450 keys, 9968425 bytes, temperature: kUnknown
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345285705, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 9968425, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9934439, "index_size": 21780, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 109126, "raw_average_key_size": 24, "raw_value_size": 9849816, "raw_average_value_size": 2213, "num_data_blocks": 938, "num_entries": 4450, "num_filter_entries": 4450, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765003345, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.286036) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 9968425 bytes
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.348965) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.5 rd, 74.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 8.1 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(5.6) write-amplify(2.5) OK, records in: 4980, records dropped: 530 output_compression: NoCompression
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.349003) EVENT_LOG_v1 {"time_micros": 1765003345348988, "job": 10, "event": "compaction_finished", "compaction_time_micros": 134569, "compaction_time_cpu_micros": 32047, "output_level": 6, "num_output_files": 1, "total_output_size": 9968425, "num_input_records": 4980, "num_output_records": 4450, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345350504, "job": 10, "event": "table_file_deletion", "file_number": 23}
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345352445, "job": 10, "event": "table_file_deletion", "file_number": 21}
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.151056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.352532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.352537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.352539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.352541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:42:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:42:25.352542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
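The rocksdb burst above is the monitor compacting its store: job 9 flushes a ~6 MB memtable to L0 table #23, job 10 merges #23 with L6 table #21 into table #24 (4450 keys, ~9.9 MB, 530 records dropped), and the inputs plus the old WAL segment are deleted. The same "Manual Compaction" sequence can be requested explicitly; a hedged sketch, assuming admin access to the mon:

```python
# Ask the monitor to compact its RocksDB store, producing the same
# manual flush/compaction sequence logged above. Assumes ceph CLI
# admin access on this host.
import subprocess

subprocess.run(['ceph', 'tell', 'mon.compute-1', 'compact'], check=True)
```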
Dec 06 06:42:26 compute-1 sudo[141804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfnfyzkmbbrsibivmgttxitsdhieyoir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003345.9330223-314-98554397940203/AnsiballZ_file.py'
Dec 06 06:42:26 compute-1 sudo[141804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:26 compute-1 python3.9[141806]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:26 compute-1 sudo[141804]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:26.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:26.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:26 compute-1 sudo[141956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lehcdpssxsdyrkpqdoknoppkahmheiux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003346.6753721-314-253959721418056/AnsiballZ_file.py'
Dec 06 06:42:26 compute-1 sudo[141956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:27 compute-1 ceph-mon[81689]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:27 compute-1 python3.9[141958]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:27 compute-1 sudo[141956]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:27 compute-1 sudo[142108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zganzzitxtgmdzmoybumjmgyhoczntzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003347.2719111-314-160142309137795/AnsiballZ_file.py'
Dec 06 06:42:27 compute-1 sudo[142108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:27 compute-1 python3.9[142110]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:27 compute-1 sudo[142108]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:28 compute-1 sudo[142260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otmuctfjnmnczqmjigsygoqjcjbovzeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003347.8623583-314-77129480998474/AnsiballZ_file.py'
Dec 06 06:42:28 compute-1 sudo[142260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:28 compute-1 python3.9[142262]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:28 compute-1 sudo[142260]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:28 compute-1 sudo[142412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxttompvkgogdnrqpgtrjgmkhqmogoqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003348.5101655-314-204506735691272/AnsiballZ_file.py'
Dec 06 06:42:28 compute-1 sudo[142412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:42:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:28.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:42:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:28.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:29 compute-1 ceph-mon[81689]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:29 compute-1 python3.9[142414]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:29 compute-1 sudo[142412]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:29 compute-1 sudo[142564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uubeznhkkclqfpssxllbgwnvykzaktxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003349.3093915-314-201266795125526/AnsiballZ_file.py'
Dec 06 06:42:29 compute-1 sudo[142564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:29 compute-1 python3.9[142566]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:29 compute-1 sudo[142564]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:30 compute-1 podman[142629]: 2025-12-06 06:42:30.068784142 +0000 UTC m=+0.049580618 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 06 06:42:30 compute-1 sudo[142736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnnegbpzatkqalmpuzvlouwxidgwimao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003349.96262-314-78061277583948/AnsiballZ_file.py'
Dec 06 06:42:30 compute-1 sudo[142736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:30 compute-1 python3.9[142738]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:30 compute-1 sudo[142736]: pam_unix(sudo:session): session closed for user root
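Having stopped the units, the playbook now deletes the leftover unit files with ansible.builtin.file state=absent, first under /usr/lib/systemd/system (above) and then under /etc/systemd/system (below). A sketch of the same cleanup plus the daemon-reload it implies; the two file names shown are a subset of the units from the log:

```python
# Remove stale TripleO unit files as the file(state=absent) tasks do,
# then reload systemd so the deletions take effect (paths from the log).
import pathlib
import subprocess

for base in ('/usr/lib/systemd/system', '/etc/systemd/system'):
    for name in ('tripleo_nova_libvirt.target',
                 'tripleo_nova_virtqemud.service'):   # two of the units shown
        pathlib.Path(base, name).unlink(missing_ok=True)

subprocess.run(['systemctl', 'daemon-reload'], check=True)
```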
Dec 06 06:42:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:30.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:30.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
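The paired "starting new request" / "req done" lines are the radosgw beast frontend logging anonymous HEAD / probes arriving from 192.168.122.100 and 192.168.122.102 roughly every two seconds, the signature of load-balancer health checks rather than real S3 traffic. An equivalent manual probe, assuming the RGW listens on the common cephadm default port 8080 (the port is not visible in these lines):

    # Reproduce the anonymous health-check request
    curl -sI http://compute-1:8080/ | head -n1    # expect: HTTP/1.1 200 OK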
Dec 06 06:42:31 compute-1 sudo[142888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tostckxlquwspeodqzkdkbfxqilvyckp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003351.4525087-464-59340504599991/AnsiballZ_file.py'
Dec 06 06:42:31 compute-1 sudo[142888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:31 compute-1 python3.9[142890]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:31 compute-1 sudo[142888]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:32 compute-1 sudo[143040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bppocpjetljdvqvxvfrzyoyzaxtnvfbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003352.0603664-464-225182433570093/AnsiballZ_file.py'
Dec 06 06:42:32 compute-1 sudo[143040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:32 compute-1 python3.9[143042]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:32 compute-1 sudo[143040]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:32 compute-1 ceph-mon[81689]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
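The pgmap lines are the monitor's periodic placement-group summary, and the _set_new_cache_sizes entries are the peon mon's memory autotuner re-deriving its cache allocations from its roughly 1 GiB cache target; both are routine. The same PG summary is available on demand:

    # One-line PG summary matching the pgmap entries above
    ceph pg stat
    # Broader cluster state, including the same usage figures
    ceph -s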
Dec 06 06:42:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:32.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:32 compute-1 sudo[143192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pivrrhowfgfpfdhdowgzegaoualrgega ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003352.6418169-464-132815533051427/AnsiballZ_file.py'
Dec 06 06:42:32 compute-1 sudo[143192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:32.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:33 compute-1 python3.9[143194]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:33 compute-1 sudo[143192]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:33 compute-1 sudo[143344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iftovkgakviwtexeqfisboernwqbbyqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003353.2120714-464-255140375087227/AnsiballZ_file.py'
Dec 06 06:42:33 compute-1 sudo[143344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:33 compute-1 python3.9[143346]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:33 compute-1 sudo[143344]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:34 compute-1 sudo[143496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzsdaktqhlbornhcnilcpsillbjlczfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003353.8861988-464-253147845389281/AnsiballZ_file.py'
Dec 06 06:42:34 compute-1 sudo[143496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:34 compute-1 python3.9[143498]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:34 compute-1 sudo[143496]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:34 compute-1 ceph-mon[81689]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:34 compute-1 sudo[143648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuetmskvclvbjbousnouzdxdolagghyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003354.5505016-464-255784447580501/AnsiballZ_file.py'
Dec 06 06:42:34 compute-1 sudo[143648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:34.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:42:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:34.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:42:35 compute-1 python3.9[143650]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:35 compute-1 sudo[143648]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:35 compute-1 sudo[143800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqvblmobciobulgyhzrxdtdracamhgdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003355.3219678-464-38052970080493/AnsiballZ_file.py'
Dec 06 06:42:35 compute-1 sudo[143800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:35 compute-1 ceph-mon[81689]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:35 compute-1 python3.9[143802]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:35 compute-1 sudo[143800]: pam_unix(sudo:session): session closed for user root
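The ansible.builtin.file invocations above, with state=absent, sweep away the legacy TripleO nova/libvirt units, first from /usr/lib/systemd/system and then from /etc/systemd/system. A shell sketch of the whole sweep:

    # Remove the legacy TripleO libvirt units from both unit directories
    for u in tripleo_nova_virtlogd_wrapper tripleo_nova_virtnodedevd \
             tripleo_nova_virtproxyd tripleo_nova_virtqemud \
             tripleo_nova_virtsecretd tripleo_nova_virtstoraged; do
        rm -f /usr/lib/systemd/system/${u}.service /etc/systemd/system/${u}.service
    done
    rm -f /etc/systemd/system/tripleo_nova_libvirt.target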
Dec 06 06:42:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:36.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:36.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:37 compute-1 sudo[143952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-virohtgasjzslfmrbhyrrfbpwayssrip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003357.0762477-618-65573817012970/AnsiballZ_command.py'
Dec 06 06:42:37 compute-1 sudo[143952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:37 compute-1 python3.9[143954]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:37 compute-1 sudo[143952]: pam_unix(sudo:session): session closed for user root
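The multi-line shell above is the certmonger teardown: if the service is active it is stopped and disabled, and it is then masked unless a local override unit already exists under /etc/systemd/system. The same logic, reindented for readability:

    if systemctl is-active certmonger.service; then
        systemctl disable --now certmonger.service
        # Mask only when no local override unit is present
        test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi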
Dec 06 06:42:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:38.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:42:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:38.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:42:39 compute-1 ceph-mon[81689]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:40 compute-1 python3.9[144106]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
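The find task above enumerates leftover certmonger tracking entries: hidden files included, any file type, no recursion below /var/lib/certmonger/requests. A direct shell equivalent:

    # List certmonger's tracked-request files, one level deep, hidden included
    find /var/lib/certmonger/requests -mindepth 1 -maxdepth 1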
Dec 06 06:42:40 compute-1 ceph-mon[81689]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:40.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:40.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:40 compute-1 sudo[144256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypccsiyhupbyzzpsgnrvbwxrpjxdirwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003360.6326015-671-89542356111804/AnsiballZ_systemd_service.py'
Dec 06 06:42:40 compute-1 sudo[144256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:41 compute-1 python3.9[144258]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:42:41 compute-1 systemd[1]: Reloading.
Dec 06 06:42:41 compute-1 systemd-rc-local-generator[144277]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:42:41 compute-1 systemd-sysv-generator[144281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:42:42 compute-1 sudo[144256]: pam_unix(sudo:session): session closed for user root
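The systemd_service task with daemon_reload=True drives the "Reloading." pass above. The two generator messages are pre-existing host conditions surfaced by the reload (a non-executable /etc/rc.d/rc.local and the SysV network initscript), not failures of the reload itself. The manual equivalents:

    systemctl daemon-reload
    # Inspect the compatibility unit systemd generated for the SysV script
    systemctl cat network.service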
Dec 06 06:42:42 compute-1 ceph-mon[81689]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:42 compute-1 sudo[144443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uonyrlvudtjibjsvadnqujklbjdfpjtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003362.3122575-695-108585166996971/AnsiballZ_command.py'
Dec 06 06:42:42 compute-1 sudo[144443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:42 compute-1 python3.9[144445]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:42 compute-1 sudo[144443]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:42.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:42.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:43 compute-1 sudo[144596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhwyhzppqsqykeiffqmjjqmvgglsftek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003363.1902027-695-79508384008252/AnsiballZ_command.py'
Dec 06 06:42:43 compute-1 sudo[144596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:43 compute-1 python3.9[144598]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:43 compute-1 sudo[144596]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:44 compute-1 sudo[144749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlvtnspwxrklrlwrnjnbgdvahxmspldy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003363.7656224-695-249883862778565/AnsiballZ_command.py'
Dec 06 06:42:44 compute-1 sudo[144749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:42:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:44.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:42:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:44.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:45 compute-1 python3.9[144751]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:45 compute-1 sudo[144749]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:45 compute-1 sudo[144902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixtynmxnwthltkczufygmgbjzvllfgfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003365.488051-695-250974595873185/AnsiballZ_command.py'
Dec 06 06:42:45 compute-1 sudo[144902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:46 compute-1 python3.9[144904]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:46 compute-1 sudo[144902]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:46 compute-1 sudo[145055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvdtiigbnpibhcshqfslptfccygoehzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003366.359977-695-194076640065385/AnsiballZ_command.py'
Dec 06 06:42:46 compute-1 sudo[145055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:46 compute-1 python3.9[145057]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:46.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:46 compute-1 sudo[145055]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:46.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:47 compute-1 ceph-mon[81689]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:47 compute-1 sudo[145208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntdhsptgsyyuejgvaiwupmjtvumxewvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003367.0341434-695-10660700926822/AnsiballZ_command.py'
Dec 06 06:42:47 compute-1 sudo[145208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:47 compute-1 python3.9[145210]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:47 compute-1 sudo[145208]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:48 compute-1 sudo[145361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eadkspacnvsxmdhebvazblfuetrzodji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003367.8614256-695-1885477125549/AnsiballZ_command.py'
Dec 06 06:42:48 compute-1 sudo[145361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:48 compute-1 python3.9[145363]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:48 compute-1 sudo[145361]: pam_unix(sudo:session): session closed for user root
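The ansible.legacy.command tasks above run systemctl reset-failed for each unit that was just deleted, so no stale failed state survives the removal of the unit files. Collapsed into a single loop:

    for u in tripleo_nova_libvirt.target tripleo_nova_virtlogd_wrapper.service \
             tripleo_nova_virtnodedevd.service tripleo_nova_virtproxyd.service \
             tripleo_nova_virtqemud.service tripleo_nova_virtsecretd.service \
             tripleo_nova_virtstoraged.service; do
        /usr/bin/systemctl reset-failed "$u"
    done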
Dec 06 06:42:48 compute-1 ceph-mon[81689]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:48 compute-1 ceph-mon[81689]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:48.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:48.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:49 compute-1 sudo[145529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muktlcfpkxnoiexcporkdedqzwtcthqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003368.8459606-857-87512629118568/AnsiballZ_getent.py'
Dec 06 06:42:49 compute-1 sudo[145529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:49 compute-1 podman[145488]: 2025-12-06 06:42:49.311346714 +0000 UTC m=+0.110748697 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 06:42:49 compute-1 python3.9[145536]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 06 06:42:49 compute-1 sudo[145529]: pam_unix(sudo:session): session closed for user root
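The getent task checks the passwd database for an existing libvirt account; with fail_key=True a missing key makes the task fail, and the group and user are created immediately afterwards. By hand:

    getent passwd libvirt    # exit status 2 when the key is absent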
Dec 06 06:42:50 compute-1 sudo[145693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxovoaebbxpyzgezsmpowsuunaiolucb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003369.660707-881-106203878455050/AnsiballZ_group.py'
Dec 06 06:42:50 compute-1 sudo[145693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:50 compute-1 ceph-mon[81689]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:50 compute-1 python3.9[145695]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 06:42:50 compute-1 groupadd[145696]: group added to /etc/group: name=libvirt, GID=42473
Dec 06 06:42:50 compute-1 groupadd[145696]: group added to /etc/gshadow: name=libvirt
Dec 06 06:42:50 compute-1 groupadd[145696]: new group: name=libvirt, GID=42473
Dec 06 06:42:50 compute-1 sudo[145693]: pam_unix(sudo:session): session closed for user root
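groupadd's three audit lines (group, gshadow, summary) come from the ansible.builtin.group task pinning the libvirt group to the fixed GID 42473. Shell equivalent:

    groupadd --gid 42473 libvirt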
Dec 06 06:42:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:50.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:50.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:52.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:42:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:52.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:42:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:53 compute-1 ceph-mon[81689]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:53 compute-1 sudo[145851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlsjuxerhflpvdxrfyiyvwkrzzkdpazr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003373.3550062-905-224223445509559/AnsiballZ_user.py'
Dec 06 06:42:53 compute-1 sudo[145851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:54 compute-1 python3.9[145853]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 06:42:54 compute-1 useradd[145855]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 06 06:42:54 compute-1 sudo[145851]: pam_unix(sudo:session): session closed for user root
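The matching account is then created with the same fixed UID, libvirt as its primary group, no login shell, and a home directory (create_home=True in the module call). As a sketch:

    useradd --uid 42473 --gid libvirt --comment 'libvirt user' \
            --shell /sbin/nologin --create-home libvirt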
Dec 06 06:42:54 compute-1 ceph-mon[81689]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:54.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:54.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:55 compute-1 sudo[146011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-errtwfmwbmxsnzpdapoxkdpzzkjieabb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003375.0487556-938-95969577981332/AnsiballZ_setup.py'
Dec 06 06:42:55 compute-1 sudo[146011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:55 compute-1 python3.9[146013]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:42:55 compute-1 ceph-mon[81689]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:55 compute-1 sudo[146011]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:56 compute-1 sudo[146095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfesbjuulvsqmyqbazmmrcerdsqiitma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003375.0487556-938-95969577981332/AnsiballZ_dnf.py'
Dec 06 06:42:56 compute-1 sudo[146095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:56 compute-1 python3.9[146097]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
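The dnf task pulls in the node's virtualization stack. Note the trailing spaces inside the first four logged package names ('libvirt ', 'libvirt-admin ', and so on), an untrimmed-whitespace artifact in the playbook's package list. The equivalent one-liner, with the names trimmed:

    dnf -y install libvirt libvirt-admin libvirt-client libvirt-daemon \
        qemu-kvm qemu-img libguestfs libseccomp swtpm swtpm-tools \
        edk2-ovmf ceph-common cyrus-sasl-scram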
Dec 06 06:42:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:56.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:56.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:57 compute-1 ceph-mon[81689]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:58.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:42:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:00.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:00.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:01 compute-1 podman[146107]: 2025-12-06 06:43:01.110706897 +0000 UTC m=+0.083797495 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:43:01 compute-1 ceph-mon[81689]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:43:01.595 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:43:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:43:01.596 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:43:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:43:01.596 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:43:02 compute-1 ceph-mon[81689]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:02.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:02.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:04 compute-1 ceph-mon[81689]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:04.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:43:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:04.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:43:06 compute-1 ceph-mon[81689]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:06.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:06.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:07 compute-1 ceph-mon[81689]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:43:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:08.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:43:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:08.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:43:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:10.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:43:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:10.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:11 compute-1 ceph-mon[81689]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:43:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:12.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:43:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:12.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:13 compute-1 ceph-mon[81689]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:14.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:43:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:14.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:43:15 compute-1 ceph-mon[81689]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:16.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:16.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:17 compute-1 ceph-mon[81689]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:18 compute-1 sudo[146127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:18 compute-1 sudo[146127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:18 compute-1 sudo[146127]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:18 compute-1 sudo[146152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:43:18 compute-1 sudo[146152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:18 compute-1 sudo[146152]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:18 compute-1 sudo[146177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:18 compute-1 sudo[146177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:18 compute-1 sudo[146177]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:18 compute-1 sudo[146202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:43:18 compute-1 sudo[146202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:18 compute-1 ceph-mon[81689]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:18 compute-1 sudo[146202]: pam_unix(sudo:session): session closed for user root
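The ceph-admin sudo burst above is cephadm probing the host: /bin/true as a sudo sanity check, locating python3, then executing the copied cephadm binary's gather-facts subcommand under a timeout. The same data can be pulled directly on a cephadm-managed host:

    # Collect the host facts the orchestrator gathers (CPU, memory, NICs, disks)
    cephadm gather-facts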
Dec 06 06:43:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:18.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:18.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:20 compute-1 podman[146259]: 2025-12-06 06:43:20.167350162 +0000 UTC m=+0.112405433 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 06:43:20 compute-1 ceph-mon[81689]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:43:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:43:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:43:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:43:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:43:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:43:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
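The dispatch lines show the active mgr (compute-0's cephadm module) driving this mon: dropping a per-host osd_memory_target override for compute-2, regenerating a minimal ceph.conf, fetching the client.admin and client.bootstrap-osd keys, and listing any destroyed OSDs. The CLI equivalents of the dispatched commands:

    ceph config rm osd/host:compute-2 osd_memory_target
    ceph config generate-minimal-conf
    ceph auth get client.admin
    ceph auth get client.bootstrap-osd
    ceph osd tree destroyed --format json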
Dec 06 06:43:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:43:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:20.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:43:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:20.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:21 compute-1 ceph-mon[81689]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:22.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:22.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:23 compute-1 ceph-mon[81689]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:24.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:24 compute-1 ceph-mon[81689]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:25.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:43:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:26.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:43:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:27.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:27 compute-1 ceph-mon[81689]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:27 compute-1 sudo[146421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:27 compute-1 sudo[146421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:27 compute-1 sudo[146421]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:27 compute-1 sudo[146446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:43:27 compute-1 sudo[146446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:27 compute-1 sudo[146446]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:43:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:43:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:29.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:29 compute-1 ceph-mon[81689]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:30.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:43:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:43:31 compute-1 ceph-mon[81689]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:32 compute-1 podman[146509]: 2025-12-06 06:43:32.081862919 +0000 UTC m=+0.063370961 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 06 06:43:32 compute-1 ceph-mon[81689]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:43:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:32.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:43:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:43:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:33.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:43:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:34.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:35.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:35 compute-1 ceph-mon[81689]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:43:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:36.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:43:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:37.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:37 compute-1 ceph-mon[81689]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:38 compute-1 ceph-mon[81689]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:38.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:39.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:43:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:40.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:43:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:43:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:41.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:43:41 compute-1 ceph-mon[81689]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:43:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:42.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:43:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:43.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:43 compute-1 ceph-mon[81689]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:44.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:45.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:45 compute-1 ceph-mon[81689]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:46.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:47.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:47 compute-1 ceph-mon[81689]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:48.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:49.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:50 compute-1 ceph-mon[81689]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:50.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:43:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:51.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:43:51 compute-1 podman[146534]: 2025-12-06 06:43:51.114182322 +0000 UTC m=+0.098723080 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 06 06:43:51 compute-1 ceph-mon[81689]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:52.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:53.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:53 compute-1 ceph-mon[81689]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:55.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:55 compute-1 ceph-mon[81689]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:55 compute-1 kernel: SELinux:  Converting 2769 SID table entries...
Dec 06 06:43:55 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 06:43:55 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 06 06:43:55 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 06:43:55 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 06 06:43:55 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 06:43:55 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 06:43:55 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
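[editor's note] The kernel block above is an SELinux policy reload: the SID table is converted and the policy capabilities are re-announced. It lines up with the dbus-broker-launch "avc: op=load_policy" seqno lines that follow, and is plausibly a side effect of the ongoing cephadm/EDPM provisioning on this node. A sketch that counts such reloads in a saved journal by matching the kernel marker shown here:

```python
# Sketch: count SELinux policy reloads in a saved journal by matching the
# kernel "Converting ... SID table entries" marker shown above.
import re
import sys

RELOAD = re.compile(r"SELinux:\s+Converting \d+ SID table entries")

def count_reloads(path: str) -> int:
    with open(path, encoding="utf-8", errors="replace") as fh:
        return sum(1 for line in fh if RELOAD.search(line))

if __name__ == "__main__":
    print(count_reloads(sys.argv[1]), "policy reload(s)")
```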
Dec 06 06:43:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:43:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:56.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:43:56 compute-1 ceph-mon[81689]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:43:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:57.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:43:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:43:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:43:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:43:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:43:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:59.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:43:59 compute-1 ceph-mon[81689]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:01.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:44:01.597 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:44:01.598 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:44:01.598 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:44:01 compute-1 ceph-mon[81689]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:02 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 06 06:44:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:44:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:03.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:44:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:03.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:03 compute-1 podman[146568]: 2025-12-06 06:44:03.081839475 +0000 UTC m=+0.059031514 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 06:44:03 compute-1 ceph-mon[81689]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:05.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:44:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:05.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:44:06 compute-1 ceph-mon[81689]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:07.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:07.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:07 compute-1 ceph-mon[81689]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:08 compute-1 kernel: SELinux:  Converting 2769 SID table entries...
Dec 06 06:44:08 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 06:44:08 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 06 06:44:08 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 06:44:08 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 06 06:44:08 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 06:44:08 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 06:44:08 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 06:44:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:09.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:09 compute-1 ceph-mon[81689]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:44:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:11.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:44:11 compute-1 ceph-mon[81689]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:44:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:44:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:44:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:13.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:44:13 compute-1 ceph-mon[81689]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:44:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:44:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:15.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:15 compute-1 ceph-mon[81689]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:17.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:17.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:17 compute-1 ceph-mon[81689]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:44:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:19.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:44:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:19.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:19 compute-1 ceph-mon[81689]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:21.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:21.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:21 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 06 06:44:22 compute-1 podman[146593]: 2025-12-06 06:44:22.14698224 +0000 UTC m=+0.117040317 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 06 06:44:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:23.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:23.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:23 compute-1 ceph-mon[81689]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:25.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:25.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:25 compute-1 ceph-mon[81689]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:26 compute-1 ceph-mon[81689]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:27.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:27.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:27 compute-1 sudo[150062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:27 compute-1 sudo[150062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:27 compute-1 sudo[150062]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:27 compute-1 sudo[150137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:44:27 compute-1 sudo[150137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:27 compute-1 sudo[150137]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:27 compute-1 sudo[150203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:27 compute-1 sudo[150203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:27 compute-1 sudo[150203]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:28 compute-1 sudo[150268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:44:28 compute-1 sudo[150268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:28 compute-1 ceph-mon[81689]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:28 compute-1 sudo[150268]: pam_unix(sudo:session): session closed for user root
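[editor's note] This sudo burst is the cephadm orchestrator's periodic host inventory: a /bin/true connectivity probe, a `which python3` lookup, then the mgr-shipped cephadm copy under /var/lib/ceph/<fsid>/ run with `--timeout 895 gather-facts`. A sketch of running the same inventory step by hand, assuming a `cephadm` binary on PATH and that gather-facts prints its host facts as JSON (the key names used below are assumptions):

```python
# Sketch: run the same inventory step the orchestrator triggers above.
# Assumes `cephadm` is on PATH; `gather-facts` emits host facts as JSON.
import json
import subprocess

def gather_facts() -> dict:
    out = subprocess.run(
        ["cephadm", "gather-facts"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    facts = gather_facts()
    # "hostname"/"kernel" are assumed fact keys; adjust to the real output.
    print(facts.get("hostname"), facts.get("kernel"))
```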
Dec 06 06:44:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:29.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:44:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:29.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:44:29 compute-1 ceph-mon[81689]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:44:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:44:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:44:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:44:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:44:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:44:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:31.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:31.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:31 compute-1 ceph-mon[81689]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:33.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:33 compute-1 ceph-mon[81689]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:34 compute-1 podman[154072]: 2025-12-06 06:44:34.071259157 +0000 UTC m=+0.057178703 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 06 06:44:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:35.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:35.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:35 compute-1 ceph-mon[81689]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:37.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:37.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:38 compute-1 ceph-mon[81689]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:39.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:39 compute-1 sudo[157489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:39.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:39 compute-1 sudo[157489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:39 compute-1 sudo[157489]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:39 compute-1 sudo[157549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:44:39 compute-1 sudo[157549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:39 compute-1 sudo[157549]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:44:40 compute-1 ceph-mon[81689]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:44:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:41.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:41.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:41 compute-1 ceph-mon[81689]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:43.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:43.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:45.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:45.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:45 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:44:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:47 compute-1 ceph-mon[81689]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:47.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:44:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:47.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:44:48 compute-1 ceph-mon[81689]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:48 compute-1 ceph-mon[81689]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:49.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:49.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:50 compute-1 ceph-mon[81689]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:44:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:51.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:44:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:44:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:51.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:44:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:53.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:44:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:44:53 compute-1 podman[163616]: 2025-12-06 06:44:53.276284205 +0000 UTC m=+0.139728538 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 06:44:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:55.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:55.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:56 compute-1 ceph-mon[81689]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:44:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:57.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:44:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:57.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:59.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:44:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:44:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:59.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:44:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:00 compute-1 ceph-mon[81689]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:00 compute-1 ceph-mon[81689]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:00 compute-1 ceph-mon[81689]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:45:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:01.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:45:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:01.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:45:01.599 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:45:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:45:01.599 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:45:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:45:01.599 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:45:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:03.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:03.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:04 compute-1 ceph-mon[81689]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:04 compute-1 ceph-mon[81689]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:45:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:05.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:45:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:05.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:05 compute-1 podman[163644]: 2025-12-06 06:45:05.266785821 +0000 UTC m=+0.080191738 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 06:45:05 compute-1 ceph-mon[81689]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:05 compute-1 ceph-mon[81689]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:05 compute-1 ceph-mgr[82049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 06:45:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:07.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:07 compute-1 ceph-mon[81689]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:09.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:09.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:11.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:11.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:11 compute-1 ceph-mon[81689]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:13.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:13.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:15.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:15.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:15 compute-1 ceph-mon[81689]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:17.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:45:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:17.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:45:17 compute-1 ceph-mon[81689]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:17 compute-1 ceph-mon[81689]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:19.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:19 compute-1 ceph-mon[81689]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:20 compute-1 ceph-mon[81689]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:21.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:21.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:23.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:23.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:24 compute-1 podman[163686]: 2025-12-06 06:45:24.113600538 +0000 UTC m=+0.086420526 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:45:24 compute-1 ceph-mon[81689]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:25.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:25.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:27.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:27.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:27 compute-1 kernel: SELinux:  Converting 2770 SID table entries...
Dec 06 06:45:27 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 06:45:27 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 06 06:45:27 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 06:45:27 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 06 06:45:27 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 06:45:27 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 06:45:27 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 06:45:27 compute-1 ceph-mon[81689]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:27 compute-1 ceph-mon[81689]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:29.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:29.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:31.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:31.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:32 compute-1 ceph-mon[81689]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:32 compute-1 ceph-mon[81689]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:33.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:33.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:35.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:35.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:35 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 06 06:45:36 compute-1 podman[163714]: 2025-12-06 06:45:36.067125354 +0000 UTC m=+0.050645270 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 06 06:45:36 compute-1 ceph-mon[81689]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:37.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:37.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:37 compute-1 ceph-mon[81689]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:37 compute-1 ceph-mon[81689]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:37 compute-1 ceph-mon[81689]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:39.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:39 compute-1 ceph-mon[81689]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:39.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:39 compute-1 sudo[163733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:39 compute-1 sudo[163733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:39 compute-1 sudo[163733]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:39 compute-1 sudo[163758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:45:39 compute-1 sudo[163758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:39 compute-1 sudo[163758]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:39 compute-1 sudo[163783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:39 compute-1 sudo[163783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:39 compute-1 sudo[163783]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:39 compute-1 sudo[163808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:45:39 compute-1 sudo[163808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:40 compute-1 sudo[163808]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:41.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:41.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:43.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:43.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:45.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:45.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:45 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:45:47 compute-1 ceph-mon[81689]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:47.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:47.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:48 compute-1 ceph-mon[81689]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:48 compute-1 ceph-mon[81689]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:48 compute-1 ceph-mon[81689]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:45:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:49.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:49.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:49 compute-1 ceph-mon[81689]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:45:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:45:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:45:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:45:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:45:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:45:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:45:50 compute-1 groupadd[163871]: group added to /etc/group: name=dnsmasq, GID=992
Dec 06 06:45:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:51.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:51.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:51 compute-1 groupadd[163871]: group added to /etc/gshadow: name=dnsmasq
Dec 06 06:45:51 compute-1 groupadd[163871]: new group: name=dnsmasq, GID=992
Dec 06 06:45:51 compute-1 ceph-mon[81689]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:51 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Dec 06 06:45:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:51.934773) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:45:51 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Dec 06 06:45:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551934821, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1815, "num_deletes": 250, "total_data_size": 4515329, "memory_usage": 4581360, "flush_reason": "Manual Compaction"}
Dec 06 06:45:51 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Dec 06 06:45:51 compute-1 useradd[163878]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 06 06:45:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003552559330, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 2959024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12423, "largest_seqno": 14233, "table_properties": {"data_size": 2951579, "index_size": 4452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14096, "raw_average_key_size": 18, "raw_value_size": 2936758, "raw_average_value_size": 3833, "num_data_blocks": 201, "num_entries": 766, "num_filter_entries": 766, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003346, "oldest_key_time": 1765003346, "file_creation_time": 1765003551, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:45:52 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 624599 microseconds, and 14467 cpu microseconds.
Dec 06 06:45:52 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:45:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:53.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:45:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:45:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:52.559374) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 2959024 bytes OK
Dec 06 06:45:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:52.559394) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Dec 06 06:45:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:53.966861) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Dec 06 06:45:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:53.966914) EVENT_LOG_v1 {"time_micros": 1765003553966900, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:45:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:53.966944) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:45:53 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 4507234, prev total WAL file size 4507234, number of live WAL files 2.
Dec 06 06:45:53 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:45:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:53.969016) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec 06 06:45:53 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:45:53 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(2889KB)], [24(9734KB)]
Dec 06 06:45:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003553969139, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12927449, "oldest_snapshot_seqno": -1}
Dec 06 06:45:55 compute-1 podman[163879]: 2025-12-06 06:45:55.109736577 +0000 UTC m=+0.089331426 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 06:45:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:55.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:55.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:55 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4703 keys, 12379918 bytes, temperature: kUnknown
Dec 06 06:45:55 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003555350740, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 12379918, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12342034, "index_size": 25038, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 116015, "raw_average_key_size": 24, "raw_value_size": 12250722, "raw_average_value_size": 2604, "num_data_blocks": 1062, "num_entries": 4703, "num_filter_entries": 4703, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765003553, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:45:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:57.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:57.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:58 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:58.679026) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12379918 bytes
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:59.114890) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 9.4 rd, 9.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.5 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(8.6) write-amplify(4.2) OK, records in: 5216, records dropped: 513 output_compression: NoCompression
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:59.114929) EVENT_LOG_v1 {"time_micros": 1765003559114912, "job": 12, "event": "compaction_finished", "compaction_time_micros": 1381676, "compaction_time_cpu_micros": 46008, "output_level": 6, "num_output_files": 1, "total_output_size": 12379918, "num_input_records": 5216, "num_output_records": 4703, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003559115560, "job": 12, "event": "table_file_deletion", "file_number": 26}
Dec 06 06:45:59 compute-1 ceph-mon[81689]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003559117231, "job": 12, "event": "table_file_deletion", "file_number": 24}
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:53.968955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:59.117303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:59.117309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:59.117311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:59.117313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:45:59.117314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:59.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:59 compute-1 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Dec 06 06:45:59 compute-1 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Dec 06 06:45:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:45:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:59.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:01.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:01.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:46:01.599 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:46:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:46:01.600 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:46:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:46:01.600 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:46:01 compute-1 ceph-mon[81689]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:03.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:03.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:46:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:05.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:46:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:05.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:05 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:46:07 compute-1 podman[163914]: 2025-12-06 06:46:07.101199409 +0000 UTC m=+0.071365900 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 06:46:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:07.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:46:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:07.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:46:07 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-1[81685]: 2025-12-06T06:46:07.674+0000 7fc98ce5b640 -1 mon.compute-1@2(peon).paxos(paxos updating c 754..1442) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 2.251118422s in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec 06 06:46:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 754..1442) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 2.251118422s in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec 06 06:46:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:08 compute-1 sudo[163933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:46:08 compute-1 sudo[163933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:46:08 compute-1 sudo[163933]: pam_unix(sudo:session): session closed for user root
Dec 06 06:46:08 compute-1 sudo[163958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:46:08 compute-1 sudo[163958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:46:08 compute-1 sudo[163958]: pam_unix(sudo:session): session closed for user root
Dec 06 06:46:08 compute-1 ceph-mon[81689]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:08 compute-1 ceph-mon[81689]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:46:08 compute-1 ceph-mon[81689]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:09.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:46:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:09.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:46:10 compute-1 ceph-mon[81689]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:10 compute-1 ceph-mon[81689]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:10 compute-1 ceph-mon[81689]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:46:10 compute-1 ceph-mon[81689]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:11.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:11.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:11 compute-1 groupadd[163986]: group added to /etc/group: name=clevis, GID=991
Dec 06 06:46:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:46:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:13.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:46:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:13.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:13 compute-1 groupadd[163986]: group added to /etc/gshadow: name=clevis
Dec 06 06:46:13 compute-1 groupadd[163986]: new group: name=clevis, GID=991
Dec 06 06:46:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:14 compute-1 ceph-mon[81689]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:14 compute-1 useradd[163993]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 06 06:46:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:15.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:15.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:15 compute-1 ceph-mon[81689]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:15 compute-1 ceph-mon[81689]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:16 compute-1 usermod[164003]: add 'clevis' to group 'tss'
Dec 06 06:46:16 compute-1 usermod[164003]: add 'clevis' to shadow group 'tss'
Dec 06 06:46:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:17.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:17.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:18 compute-1 ceph-mon[81689]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:46:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:19.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:46:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:19.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:21.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:21.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:22 compute-1 ceph-mon[81689]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:23.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:23.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:25.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:25.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:25 compute-1 ceph-mon[81689]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:25 compute-1 ceph-mon[81689]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:26 compute-1 podman[164013]: 2025-12-06 06:46:26.120679736 +0000 UTC m=+0.093597782 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 06:46:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:27.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:46:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:27.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:46:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:29.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:29.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:29 compute-1 ceph-mon[81689]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:46:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:31.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:46:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:31.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:33.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:46:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:33.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:46:33 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:46:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:35.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:35.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:36 compute-1 ceph-mon[81689]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:36 compute-1 ceph-mon[81689]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:37.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:37.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:38 compute-1 podman[164050]: 2025-12-06 06:46:38.06787027 +0000 UTC m=+0.051812492 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 06:46:39 compute-1 ceph-mon[81689]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:39 compute-1 ceph-mon[81689]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:39 compute-1 ceph-mon[81689]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:39 compute-1 ceph-mon[81689]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:39.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:39.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:41.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:41.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:43.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:43.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:44 compute-1 ceph-mon[81689]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:45.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:45 compute-1 polkitd[43521]: Reloading rules
Dec 06 06:46:45 compute-1 polkitd[43521]: Collecting garbage unconditionally...
Dec 06 06:46:45 compute-1 polkitd[43521]: Loading rules from directory /etc/polkit-1/rules.d
Dec 06 06:46:45 compute-1 polkitd[43521]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 06 06:46:45 compute-1 polkitd[43521]: Finished loading, compiling and executing 3 rules
Dec 06 06:46:45 compute-1 polkitd[43521]: Reloading rules
Dec 06 06:46:45 compute-1 polkitd[43521]: Collecting garbage unconditionally...
Dec 06 06:46:45 compute-1 polkitd[43521]: Loading rules from directory /etc/polkit-1/rules.d
Dec 06 06:46:45 compute-1 polkitd[43521]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 06 06:46:45 compute-1 polkitd[43521]: Finished loading, compiling and executing 3 rules
Dec 06 06:46:45 compute-1 ceph-mon[81689]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:45 compute-1 ceph-mon[81689]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:45 compute-1 ceph-mon[81689]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:47.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:47.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:47 compute-1 ceph-mon[81689]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:49.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:49.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:49 compute-1 ceph-mon[81689]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:51 compute-1 groupadd[164235]: group added to /etc/group: name=ceph, GID=167
Dec 06 06:46:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:51.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:51 compute-1 ceph-mon[81689]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:46:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:51.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:46:51 compute-1 groupadd[164235]: group added to /etc/gshadow: name=ceph
Dec 06 06:46:51 compute-1 groupadd[164235]: new group: name=ceph, GID=167
Dec 06 06:46:52 compute-1 useradd[164241]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 06 06:46:53 compute-1 ceph-mon[81689]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:53.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:53.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:55 compute-1 ceph-mon[81689]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:55.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:55.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:55 compute-1 sshd[1011]: Received signal 15; terminating.
Dec 06 06:46:55 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Dec 06 06:46:55 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Dec 06 06:46:55 compute-1 systemd[1]: sshd.service: Unit process 164242 (sshd-session) remains running after unit stopped.
Dec 06 06:46:55 compute-1 systemd[1]: sshd.service: Unit process 164249 (sshd-session) remains running after unit stopped.
Dec 06 06:46:55 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Dec 06 06:46:55 compute-1 systemd[1]: sshd.service: Consumed 4.307s CPU time, 36.5M memory peak, read 564.0K from disk, written 20.0K to disk.
Dec 06 06:46:55 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Dec 06 06:46:55 compute-1 systemd[1]: Stopping sshd-keygen.target...
Dec 06 06:46:55 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 06:46:55 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 06:46:55 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 06:46:55 compute-1 systemd[1]: Reached target sshd-keygen.target.
Dec 06 06:46:55 compute-1 systemd[1]: Starting OpenSSH server daemon...
Dec 06 06:46:55 compute-1 sshd[164848]: Server listening on 0.0.0.0 port 22.
Dec 06 06:46:55 compute-1 sshd[164848]: Server listening on :: port 22.
Dec 06 06:46:55 compute-1 systemd[1]: Started OpenSSH server daemon.
Dec 06 06:46:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:56 compute-1 podman[164917]: 2025-12-06 06:46:56.244309619 +0000 UTC m=+0.086170840 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 06:46:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:46:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:57.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:46:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:57.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:57 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 06:46:57 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 06 06:46:57 compute-1 systemd[1]: Reloading.
Dec 06 06:46:57 compute-1 systemd-rc-local-generator[165131]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:46:57 compute-1 systemd-sysv-generator[165134]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:46:58 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 06:46:58 compute-1 ceph-mon[81689]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:59.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:46:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:59.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:59 compute-1 ceph-mon[81689]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:00 compute-1 ceph-mon[81689]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:01.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:01.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:01 compute-1 sudo[146095]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:47:01.601 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:47:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:47:01.602 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:47:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:47:01.602 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:47:02 compute-1 sudo[167926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imejspjrstvmidjvflvljvkdtouqwymh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003621.7638917-974-90072126721714/AnsiballZ_systemd.py'
Dec 06 06:47:02 compute-1 sudo[167926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:02 compute-1 python3.9[167953]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:47:02 compute-1 systemd[1]: Reloading.
Dec 06 06:47:02 compute-1 systemd-rc-local-generator[168543]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:02 compute-1 systemd-sysv-generator[168547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:03 compute-1 sudo[167926]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:03.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:03 compute-1 sshd-session[164242]: Received disconnect from 14.63.196.175 port 40376:11: Bye Bye [preauth]
Dec 06 06:47:03 compute-1 sshd-session[164242]: Disconnected from authenticating user root 14.63.196.175 port 40376 [preauth]
Dec 06 06:47:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:03.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:03 compute-1 sudo[169579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmrrzglnarazspeeyhtdogjqbdymraci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003623.24351-974-132118431729895/AnsiballZ_systemd.py'
Dec 06 06:47:03 compute-1 sudo[169579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:03 compute-1 ceph-mon[81689]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:03 compute-1 python3.9[169605]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:47:03 compute-1 systemd[1]: Reloading.
Dec 06 06:47:04 compute-1 systemd-rc-local-generator[170123]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:04 compute-1 systemd-sysv-generator[170127]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:04 compute-1 sudo[169579]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:04 compute-1 sudo[171117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpykdysijpodnppbvmpiywysrxtbynpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003624.440347-974-269663868498904/AnsiballZ_systemd.py'
Dec 06 06:47:04 compute-1 sudo[171117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:04 compute-1 python3.9[171143]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:47:05 compute-1 systemd[1]: Reloading.
Dec 06 06:47:05 compute-1 systemd-rc-local-generator[171717]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:05 compute-1 systemd-sysv-generator[171720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:47:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:05.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:47:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:47:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:05.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:47:05 compute-1 ceph-mon[81689]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:05 compute-1 sudo[171117]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:06 compute-1 sudo[172302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajscocjrawtatilfdazsvuzgkjbpycvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003625.9615414-974-277033634271309/AnsiballZ_systemd.py'
Dec 06 06:47:06 compute-1 sudo[172302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:06 compute-1 python3.9[172329]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:47:06 compute-1 systemd[1]: Reloading.
Dec 06 06:47:06 compute-1 systemd-sysv-generator[172871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:06 compute-1 systemd-rc-local-generator[172866]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:06 compute-1 sudo[172302]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:07.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:07.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:07 compute-1 sudo[173871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgexfomgohwgjwlhdjxihqiqzrzotbei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003627.1933367-1067-1611058689907/AnsiballZ_systemd.py'
Dec 06 06:47:07 compute-1 sudo[173871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:07 compute-1 ceph-mon[81689]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:07 compute-1 python3.9[173877]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:07 compute-1 systemd[1]: Reloading.
Dec 06 06:47:07 compute-1 systemd-sysv-generator[174387]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:07 compute-1 systemd-rc-local-generator[174381]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:08 compute-1 sudo[173871]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:08 compute-1 podman[174468]: 2025-12-06 06:47:08.262900512 +0000 UTC m=+0.052437422 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:47:08 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 06:47:08 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 06 06:47:08 compute-1 systemd[1]: man-db-cache-update.service: Consumed 10.023s CPU time.
Dec 06 06:47:08 compute-1 systemd[1]: run-r517baa5ebf4645d8aa01e3024dbaa8aa.service: Deactivated successfully.
Dec 06 06:47:08 compute-1 sudo[174637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pptrjwrbgvslqgvbykwyrhkcmrgqqqoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003628.3332498-1067-200534666623613/AnsiballZ_systemd.py'
Dec 06 06:47:08 compute-1 sudo[174637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:08 compute-1 sudo[174640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:08 compute-1 sudo[174640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:08 compute-1 sudo[174640]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:08 compute-1 sudo[174665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:47:08 compute-1 sudo[174665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:08 compute-1 sudo[174665]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:08 compute-1 sudo[174690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:08 compute-1 sudo[174690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:08 compute-1 sudo[174690]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:08 compute-1 python3.9[174639]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:08 compute-1 sudo[174715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:47:08 compute-1 sudo[174715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:08 compute-1 ceph-mon[81689]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:08 compute-1 systemd[1]: Reloading.
Dec 06 06:47:09 compute-1 systemd-rc-local-generator[174771]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:09 compute-1 systemd-sysv-generator[174774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:09.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:09 compute-1 sudo[174637]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:09.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:09 compute-1 podman[174877]: 2025-12-06 06:47:09.441791393 +0000 UTC m=+0.053188443 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:47:09 compute-1 podman[174877]: 2025-12-06 06:47:09.561926749 +0000 UTC m=+0.173323809 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 06:47:09 compute-1 sudo[175051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwmqqkdxvcnzldguneheeglniqczxrbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003629.4040914-1067-253534567792139/AnsiballZ_systemd.py'
Dec 06 06:47:09 compute-1 sudo[175051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:10 compute-1 sudo[174715]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:10 compute-1 python3.9[175055]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:10 compute-1 systemd[1]: Reloading.
Dec 06 06:47:10 compute-1 systemd-rc-local-generator[175175]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:10 compute-1 systemd-sysv-generator[175178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:10 compute-1 sudo[175126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:10 compute-1 sudo[175126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:10 compute-1 sudo[175126]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:10 compute-1 sudo[175051]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:10 compute-1 sudo[175188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:47:10 compute-1 sudo[175188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:10 compute-1 sudo[175188]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:10 compute-1 sudo[175236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:10 compute-1 sudo[175236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:10 compute-1 sudo[175236]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:10 compute-1 sudo[175270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:47:10 compute-1 sudo[175270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:10 compute-1 sudo[175427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tudvtijfmbqjxrpnligsuvmtqrprvfjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003630.528364-1067-18660713924977/AnsiballZ_systemd.py'
Dec 06 06:47:10 compute-1 sudo[175427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:10 compute-1 sudo[175270]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:11 compute-1 python3.9[175429]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:11 compute-1 sudo[175427]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:11 compute-1 ceph-mon[81689]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:47:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:47:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:47:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:47:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:47:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:11.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:11.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:11 compute-1 sudo[175597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzdagloaqyfqdzktzpyqxbxfwjeqqiwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003631.3342037-1067-218415825321547/AnsiballZ_systemd.py'
Dec 06 06:47:11 compute-1 sudo[175597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:11 compute-1 python3.9[175599]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:11 compute-1 systemd[1]: Reloading.
Dec 06 06:47:12 compute-1 systemd-rc-local-generator[175629]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:12 compute-1 systemd-sysv-generator[175632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:12 compute-1 sudo[175597]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.003000079s ======
Dec 06 06:47:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:13.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Dec 06 06:47:13 compute-1 sudo[175787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkbjbcvntmauzkqaftebqjpwzknqcdqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003633.0791988-1175-41345123672233/AnsiballZ_systemd.py'
Dec 06 06:47:13 compute-1 sudo[175787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:13.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:13 compute-1 python3.9[175789]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:47:13 compute-1 ceph-mon[81689]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:13 compute-1 systemd[1]: Reloading.
Dec 06 06:47:13 compute-1 systemd-sysv-generator[175825]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:13 compute-1 systemd-rc-local-generator[175821]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:14 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 06 06:47:14 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 06 06:47:14 compute-1 sudo[175787]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:14 compute-1 sudo[175980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqelwtcszttdrywoeuejriegmwqjljzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003634.5405517-1199-190140557398390/AnsiballZ_systemd.py'
Dec 06 06:47:14 compute-1 sudo[175980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:15 compute-1 python3.9[175982]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:15 compute-1 sudo[175980]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:15.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:15.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:15 compute-1 sudo[176135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmbhtfxgctlbbobtrberdwvbzektauxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003635.3825135-1199-208412715285387/AnsiballZ_systemd.py'
Dec 06 06:47:15 compute-1 sudo[176135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:16 compute-1 python3.9[176137]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:16 compute-1 sudo[176135]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:16 compute-1 ceph-mon[81689]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:16 compute-1 sudo[176290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peipmacfrxfiywmypsyobjiicoqflasi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003636.3143082-1199-268079234768293/AnsiballZ_systemd.py'
Dec 06 06:47:16 compute-1 sudo[176290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:16 compute-1 python3.9[176292]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:16 compute-1 sudo[176290]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 06:47:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:17.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 06:47:17 compute-1 sudo[176445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unqtkrqmmsnotkivfrmzopvgpeceqkup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003637.087726-1199-199794011317225/AnsiballZ_systemd.py'
Dec 06 06:47:17 compute-1 sudo[176445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:17.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:17 compute-1 python3.9[176447]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:17 compute-1 sudo[176445]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:18 compute-1 sudo[176600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orpfarfcingytskjsjovahmvnmgqkoqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003637.9910645-1199-61825553097005/AnsiballZ_systemd.py'
Dec 06 06:47:18 compute-1 sudo[176600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:18 compute-1 python3.9[176602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:18 compute-1 sudo[176600]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:19 compute-1 sudo[176755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsdescutwyvploqkvhgbmcvjuttlfdjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003638.815395-1199-30193430833238/AnsiballZ_systemd.py'
Dec 06 06:47:19 compute-1 sudo[176755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:19.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:19.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:19 compute-1 python3.9[176757]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:19 compute-1 sudo[176755]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:20 compute-1 ceph-mon[81689]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:20 compute-1 sudo[176910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kytfhsrkqcrnrfwyetfemgqnkgkijywh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003639.9601436-1199-273947447468037/AnsiballZ_systemd.py'
Dec 06 06:47:20 compute-1 sudo[176910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:20 compute-1 python3.9[176912]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:20 compute-1 sudo[176910]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:21 compute-1 sudo[177065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqzjbfoigzjzqvnshoqqjoojiqxlxlus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003641.0101056-1199-214158631558908/AnsiballZ_systemd.py'
Dec 06 06:47:21 compute-1 sudo[177065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:21.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:21.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:21 compute-1 python3.9[177067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:21 compute-1 sudo[177065]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:22 compute-1 sudo[177220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndaelmjhytgntcwrntnzrrefmwhwnuif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003641.8264177-1199-189746957709858/AnsiballZ_systemd.py'
Dec 06 06:47:22 compute-1 sudo[177220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:22 compute-1 sudo[177221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:22 compute-1 sudo[177221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:22 compute-1 sudo[177221]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:22 compute-1 sudo[177248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:47:22 compute-1 sudo[177248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:22 compute-1 sudo[177248]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:22 compute-1 python3.9[177228]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:22 compute-1 sudo[177220]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:23 compute-1 sudo[177425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvfwmgqmhqzuayhwvdkljvwyldsgoxzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003642.6084228-1199-148752014003760/AnsiballZ_systemd.py'
Dec 06 06:47:23 compute-1 sudo[177425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:23.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:23.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:23 compute-1 python3.9[177427]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:23 compute-1 sudo[177425]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:23 compute-1 sudo[177580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuqehmblgrposwybstrzrfvrakicgkpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003643.6618805-1199-65691185373712/AnsiballZ_systemd.py'
Dec 06 06:47:23 compute-1 sudo[177580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:24 compute-1 python3.9[177582]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:24 compute-1 sudo[177580]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:24 compute-1 ceph-mon[81689]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:24 compute-1 ceph-mon[81689]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:24 compute-1 sudo[177735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tptllvcomityrqexiiddvljmmueocumh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003644.3671148-1199-36780424861885/AnsiballZ_systemd.py'
Dec 06 06:47:24 compute-1 sudo[177735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:25 compute-1 python3.9[177737]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:25 compute-1 sudo[177735]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:25.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:25 compute-1 sudo[177890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxpkfoepcllfiaitvuszxqylyolibyyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003645.2326953-1199-179143409506058/AnsiballZ_systemd.py'
Dec 06 06:47:25 compute-1 sudo[177890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:26 compute-1 python3.9[177892]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:26 compute-1 sudo[177890]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:26 compute-1 sudo[178058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vucxbgoyrecotuorjyhbmwxaqsqxgfcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003646.3682666-1199-14955877620383/AnsiballZ_systemd.py'
Dec 06 06:47:26 compute-1 sudo[178058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:26 compute-1 podman[178019]: 2025-12-06 06:47:26.732883026 +0000 UTC m=+0.128804360 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 06:47:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:26 compute-1 ceph-mon[81689]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:26 compute-1 ceph-mon[81689]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:26 compute-1 python3.9[178065]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:27 compute-1 sudo[178058]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:27.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:27.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:27 compute-1 sudo[178226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vukncmutmildfbnhupklnipiusvinoid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003647.5018435-1505-246198817730648/AnsiballZ_file.py'
Dec 06 06:47:27 compute-1 sudo[178226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:27 compute-1 python3.9[178228]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:27 compute-1 sudo[178226]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:28 compute-1 ceph-mon[81689]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:28 compute-1 sudo[178378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfckyjpaswwuwpnbjthewkleofbsjnbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003648.1403177-1505-276995735836723/AnsiballZ_file.py'
Dec 06 06:47:28 compute-1 sudo[178378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:28 compute-1 python3.9[178380]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:28 compute-1 sudo[178378]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:29 compute-1 sudo[178530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oinjzxwdawnjwawyqpxxcdubfhygtlpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003648.7308455-1505-54071248114054/AnsiballZ_file.py'
Dec 06 06:47:29 compute-1 sudo[178530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:29 compute-1 python3.9[178532]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:29 compute-1 sudo[178530]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:29.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:29 compute-1 ceph-mon[81689]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:29.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:29 compute-1 sudo[178682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouwbhwnqojncpdknlvsrfvmozhkpsghn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003649.3929365-1505-257692949082664/AnsiballZ_file.py'
Dec 06 06:47:29 compute-1 sudo[178682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:29 compute-1 python3.9[178684]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:29 compute-1 sudo[178682]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:30 compute-1 sudo[178834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qntcxnxtvzgphtqneuroyzvpbssesmwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003649.9776332-1505-254787074287010/AnsiballZ_file.py'
Dec 06 06:47:30 compute-1 sudo[178834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:30 compute-1 python3.9[178836]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:30 compute-1 sudo[178834]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:30 compute-1 sudo[178986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtexbkrrdkulcwhjvwiuogldjavcvtcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003650.5916138-1505-191469895675855/AnsiballZ_file.py'
Dec 06 06:47:30 compute-1 sudo[178986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:31 compute-1 python3.9[178988]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:31 compute-1 sudo[178986]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.234311) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651234342, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1158, "num_deletes": 501, "total_data_size": 1976412, "memory_usage": 2001608, "flush_reason": "Manual Compaction"}
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651244854, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 853154, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14239, "largest_seqno": 15391, "table_properties": {"data_size": 848909, "index_size": 1385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13557, "raw_average_key_size": 19, "raw_value_size": 838028, "raw_average_value_size": 1187, "num_data_blocks": 61, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003555, "oldest_key_time": 1765003555, "file_creation_time": 1765003651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 10593 microseconds, and 3774 cpu microseconds.
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.244900) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 853154 bytes OK
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.244921) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.247194) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.247222) EVENT_LOG_v1 {"time_micros": 1765003651247216, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.247241) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1969797, prev total WAL file size 1969797, number of live WAL files 2.
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.248140) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323538' seq:72057594037927935, type:22 .. '6D67727374617400353039' seq:0, type:0; will stop at (end)
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(833KB)], [27(11MB)]
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651248207, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 13233072, "oldest_snapshot_seqno": -1}
Dec 06 06:47:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:47:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:31.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:47:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:31.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4419 keys, 7585279 bytes, temperature: kUnknown
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651575611, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 7585279, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7555041, "index_size": 18096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 111495, "raw_average_key_size": 25, "raw_value_size": 7474301, "raw_average_value_size": 1691, "num_data_blocks": 749, "num_entries": 4419, "num_filter_entries": 4419, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765003651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.575835) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7585279 bytes
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.587384) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.4 rd, 23.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.8 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(24.4) write-amplify(8.9) OK, records in: 5409, records dropped: 990 output_compression: NoCompression
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.587443) EVENT_LOG_v1 {"time_micros": 1765003651587405, "job": 14, "event": "compaction_finished", "compaction_time_micros": 327461, "compaction_time_cpu_micros": 29289, "output_level": 6, "num_output_files": 1, "total_output_size": 7585279, "num_input_records": 5409, "num_output_records": 4419, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651587764, "job": 14, "event": "table_file_deletion", "file_number": 29}
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651590924, "job": 14, "event": "table_file_deletion", "file_number": 27}
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.248039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.591022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.591028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.591030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.591032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:47:31.591034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-1 sudo[179138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfcsglrnighthfxvownvvjbglfnzrgmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003651.2793314-1634-224060552782570/AnsiballZ_stat.py'
Dec 06 06:47:31 compute-1 sudo[179138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:31 compute-1 ceph-mon[81689]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:31 compute-1 python3.9[179140]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:31 compute-1 sudo[179138]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:32 compute-1 sudo[179263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbveckyccpxpaznqdamflgjaxgleenux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003651.2793314-1634-224060552782570/AnsiballZ_copy.py'
Dec 06 06:47:32 compute-1 sudo[179263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:32 compute-1 python3.9[179265]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003651.2793314-1634-224060552782570/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:32 compute-1 sudo[179263]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:33 compute-1 sudo[179415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilswfxnabmgrdxwhgpcdeineshyuqdej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003652.7239733-1634-91491569086833/AnsiballZ_stat.py'
Dec 06 06:47:33 compute-1 sudo[179415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:33 compute-1 python3.9[179417]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 06:47:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:33.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 06:47:33 compute-1 sudo[179415]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:47:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:33.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:47:33 compute-1 ceph-mon[81689]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:33 compute-1 sudo[179540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpcmhelazgvmufmkmnetjmmotkloftsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003652.7239733-1634-91491569086833/AnsiballZ_copy.py'
Dec 06 06:47:33 compute-1 sudo[179540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:34 compute-1 python3.9[179542]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003652.7239733-1634-91491569086833/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:34 compute-1 sudo[179540]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:34 compute-1 sudo[179692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzaliqrdticbyhzaqzrgmptsxmgqomuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003654.1966252-1634-218176579686490/AnsiballZ_stat.py'
Dec 06 06:47:34 compute-1 sudo[179692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:34 compute-1 python3.9[179694]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:34 compute-1 sudo[179692]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:34 compute-1 ceph-mon[81689]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:34 compute-1 sudo[179817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpaeohlficqqsjwxvyuetehribaduxzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003654.1966252-1634-218176579686490/AnsiballZ_copy.py'
Dec 06 06:47:34 compute-1 sudo[179817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:35 compute-1 python3.9[179819]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003654.1966252-1634-218176579686490/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:35 compute-1 sudo[179817]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:35.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:35.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:35 compute-1 sudo[179969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgcqoyuqaxdhpgghitsgewmnroakguim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003655.3331983-1634-141189498236957/AnsiballZ_stat.py'
Dec 06 06:47:35 compute-1 sudo[179969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:35 compute-1 python3.9[179971]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:35 compute-1 sudo[179969]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:36 compute-1 sudo[180094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfsrzjfhgdsixktszihqdifymfavggxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003655.3331983-1634-141189498236957/AnsiballZ_copy.py'
Dec 06 06:47:36 compute-1 sudo[180094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:36 compute-1 python3.9[180096]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003655.3331983-1634-141189498236957/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:36 compute-1 sudo[180094]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:36 compute-1 sudo[180246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukdxsxbsdookpksmqfqvlvbcbofbspms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003656.4362543-1634-183979773837699/AnsiballZ_stat.py'
Dec 06 06:47:36 compute-1 sudo[180246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:36 compute-1 python3.9[180248]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:36 compute-1 sudo[180246]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:37 compute-1 sudo[180371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yweajuqumelhqixkuhgrpufsaoxbcjgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003656.4362543-1634-183979773837699/AnsiballZ_copy.py'
Dec 06 06:47:37 compute-1 sudo[180371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:37.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:47:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:37.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:47:37 compute-1 ceph-mon[81689]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:37 compute-1 python3.9[180373]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003656.4362543-1634-183979773837699/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:37 compute-1 sudo[180371]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:37 compute-1 sudo[180523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aimedubnvkvwlmjhmwfoodmemvurqcdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003657.56773-1634-146938362271632/AnsiballZ_stat.py'
Dec 06 06:47:37 compute-1 sudo[180523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:38 compute-1 python3.9[180525]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:38 compute-1 sudo[180523]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:38 compute-1 sudo[180663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koznqwwymfvcfxbxxbctwjqbhomawrkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003657.56773-1634-146938362271632/AnsiballZ_copy.py'
Dec 06 06:47:38 compute-1 sudo[180663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:38 compute-1 podman[180622]: 2025-12-06 06:47:38.444090898 +0000 UTC m=+0.080219412 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 06:47:38 compute-1 python3.9[180669]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003657.56773-1634-146938362271632/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:38 compute-1 sudo[180663]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:39 compute-1 sudo[180820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvuvzepvvpzymawogcrzgexzbmottqqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003658.7531476-1634-83906551731800/AnsiballZ_stat.py'
Dec 06 06:47:39 compute-1 sudo[180820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:39 compute-1 python3.9[180822]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:39 compute-1 sudo[180820]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 06:47:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:39.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 06:47:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:39.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:39 compute-1 sudo[180943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djhsxzegfllxyaufvhmidzcxcxxvdajs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003658.7531476-1634-83906551731800/AnsiballZ_copy.py'
Dec 06 06:47:39 compute-1 sudo[180943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:39 compute-1 ceph-mon[81689]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:39 compute-1 python3.9[180945]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003658.7531476-1634-83906551731800/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:39 compute-1 sudo[180943]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:40 compute-1 sudo[181095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wovwpxatqvakguooixmdocwqsqhacpwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003659.8805382-1634-145934715873873/AnsiballZ_stat.py'
Dec 06 06:47:40 compute-1 sudo[181095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:40 compute-1 python3.9[181097]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:40 compute-1 sudo[181095]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:40 compute-1 sudo[181220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmebpssusesibhwpmhiglrqozkbxoext ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003659.8805382-1634-145934715873873/AnsiballZ_copy.py'
Dec 06 06:47:40 compute-1 sudo[181220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:40 compute-1 python3.9[181222]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003659.8805382-1634-145934715873873/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:40 compute-1 sudo[181220]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:41.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:41 compute-1 sudo[181372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzjtektbdxeeqvaeqkmugwxcicdfvtxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003661.2801633-1974-129893054048206/AnsiballZ_command.py'
Dec 06 06:47:41 compute-1 sudo[181372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:41 compute-1 python3.9[181374]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 06 06:47:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:42 compute-1 sudo[181372]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:42 compute-1 ceph-mon[81689]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:43 compute-1 sudo[181525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlxkhstukgdvpfxgtbazodsoacsavomt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003662.7667577-2000-66570765831943/AnsiballZ_file.py'
Dec 06 06:47:43 compute-1 sudo[181525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:43 compute-1 python3.9[181527]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:43 compute-1 sudo[181525]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:47:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:43.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:47:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:43 compute-1 sudo[181678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaswedotoixlrthqgdaypmreobbymped ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003663.3674865-2000-2713638716584/AnsiballZ_file.py'
Dec 06 06:47:43 compute-1 sudo[181678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:44 compute-1 python3.9[181680]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:44 compute-1 sudo[181678]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:45 compute-1 ceph-mon[81689]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:45 compute-1 sudo[181831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtqdsppxzntjlfwjqqjsfncsxyooswzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003665.08689-2000-216083589481260/AnsiballZ_file.py'
Dec 06 06:47:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 06:47:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:45.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 06:47:45 compute-1 sudo[181831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:47:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:45.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:47:45 compute-1 python3.9[181833]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:45 compute-1 sudo[181831]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:45 compute-1 sudo[181983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhhvbptituhuetaaonrldolttsqacroa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003665.6557586-2000-150743547256976/AnsiballZ_file.py'
Dec 06 06:47:45 compute-1 sudo[181983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:46 compute-1 python3.9[181985]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:46 compute-1 sudo[181983]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:46 compute-1 sudo[182135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kalbljsemvwnxgscfsidkmcsdbayxjtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003666.2486677-2000-41148757414960/AnsiballZ_file.py'
Dec 06 06:47:46 compute-1 sudo[182135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:46 compute-1 python3.9[182137]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:46 compute-1 sudo[182135]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:47 compute-1 sudo[182287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rieaotnuzegubkhlsxehradlzarwfftu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003666.8136837-2000-28961940428773/AnsiballZ_file.py'
Dec 06 06:47:47 compute-1 sudo[182287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:47 compute-1 python3.9[182289]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:47 compute-1 sudo[182287]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:47.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:47.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:47 compute-1 sudo[182439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewwgqlvycbfcwvuoildksuxkryphbyob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003667.3453257-2000-217202988147668/AnsiballZ_file.py'
Dec 06 06:47:47 compute-1 sudo[182439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:47 compute-1 python3.9[182441]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:47 compute-1 sudo[182439]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:48 compute-1 sudo[182591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itayikobbpbrrkkcuitmgmrzkwkuqmeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003667.8668168-2000-54641334431252/AnsiballZ_file.py'
Dec 06 06:47:48 compute-1 sudo[182591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:48 compute-1 ceph-mon[81689]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:48 compute-1 python3.9[182593]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:48 compute-1 sudo[182591]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:48 compute-1 sudo[182743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifzerjggdizzlnonevnsmhicgkvrzdhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003668.5902216-2000-270364022240575/AnsiballZ_file.py'
Dec 06 06:47:48 compute-1 sudo[182743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:49 compute-1 python3.9[182745]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:49 compute-1 sudo[182743]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:49.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:49 compute-1 sudo[182895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahqeotzybbtafadhuqegyrujunwodesr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003669.144696-2000-4385403767442/AnsiballZ_file.py'
Dec 06 06:47:49 compute-1 sudo[182895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:47:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:49.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:47:49 compute-1 python3.9[182897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:49 compute-1 sudo[182895]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:49 compute-1 sudo[183047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eonneulaeiuwdmblecdprutagkhxtnkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003669.712582-2000-150012458497472/AnsiballZ_file.py'
Dec 06 06:47:49 compute-1 sudo[183047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:50 compute-1 ceph-mon[81689]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:50 compute-1 python3.9[183049]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:50 compute-1 sudo[183047]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:50 compute-1 sudo[183199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciltgbzzmgtkjsepcoqvhgalvgyhzobd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003670.2902553-2000-184997059345932/AnsiballZ_file.py'
Dec 06 06:47:50 compute-1 sudo[183199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:50 compute-1 python3.9[183201]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:50 compute-1 sudo[183199]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:51 compute-1 sudo[183351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnqyqlfmrpeqlwkwdixwlvxmezinnnpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003670.8955822-2000-83889554436153/AnsiballZ_file.py'
Dec 06 06:47:51 compute-1 sudo[183351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:51 compute-1 python3.9[183353]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:51.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:51 compute-1 sudo[183351]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:51.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:51 compute-1 sudo[183503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbcxhqygzjazyxemyhdncuwqknkasdlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003671.4571211-2000-68752829078494/AnsiballZ_file.py'
Dec 06 06:47:51 compute-1 sudo[183503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:53.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:53.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:53 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 06:47:54 compute-1 python3.9[183505]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:54 compute-1 sudo[183503]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:54 compute-1 ceph-mon[81689]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:54 compute-1 sudo[183655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfdgxhlsybxamkmabcomskgrzsnmlizw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003674.3909473-2297-236010793556551/AnsiballZ_stat.py'
Dec 06 06:47:54 compute-1 sudo[183655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:54 compute-1 python3.9[183657]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:54 compute-1 sudo[183655]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:55 compute-1 sudo[183778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeusmxkjbjauyonnmasigbvgvidhkjot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003674.3909473-2297-236010793556551/AnsiballZ_copy.py'
Dec 06 06:47:55 compute-1 sudo[183778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:55.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:55 compute-1 ceph-mon[81689]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:55 compute-1 ceph-mon[81689]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:55 compute-1 ceph-mon[81689]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:55 compute-1 python3.9[183780]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003674.3909473-2297-236010793556551/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:55.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:55 compute-1 sudo[183778]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:55 compute-1 sudo[183930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qizibtqdlphchyvdamejotlilzbjfrvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003675.5998354-2297-6647538055881/AnsiballZ_stat.py'
Dec 06 06:47:55 compute-1 sudo[183930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:56 compute-1 python3.9[183932]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:56 compute-1 sudo[183930]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:56 compute-1 sudo[184053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qktxyqxtdwmyopnrycbbbgvkephsiysi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003675.5998354-2297-6647538055881/AnsiballZ_copy.py'
Dec 06 06:47:56 compute-1 sudo[184053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:56 compute-1 python3.9[184055]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003675.5998354-2297-6647538055881/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:56 compute-1 sudo[184053]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:56 compute-1 sudo[184216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgsoptckvlrlgumyvcevrqspxwmglqaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003676.6993034-2297-260766242070580/AnsiballZ_stat.py'
Dec 06 06:47:56 compute-1 sudo[184216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:57 compute-1 podman[184179]: 2025-12-06 06:47:57.070248236 +0000 UTC m=+0.127314850 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 06 06:47:57 compute-1 python3.9[184224]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:57 compute-1 sudo[184216]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:57 compute-1 ceph-mon[81689]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:57.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:57.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:57 compute-1 sudo[184353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqrdkacupvewztdkzfwcmikyrpqagoth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003676.6993034-2297-260766242070580/AnsiballZ_copy.py'
Dec 06 06:47:57 compute-1 sudo[184353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:57 compute-1 python3.9[184355]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003676.6993034-2297-260766242070580/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:57 compute-1 sudo[184353]: pam_unix(sudo:session): session closed for user root
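The stat/copy pair above (ansible.legacy.stat of the destination at 06:47:57, then ansible.legacy.copy of a staged .source.conf rendered from libvirt-socket.unit.j2) is the on-target footprint of a single template task: the action plugin stats the destination, compares checksums, and only ships the rendered file when they differ. A minimal sketch of such a task, reconstructed from the logged arguments (the task name and role wiring are assumptions; only the paths and modes appear in the log):

    # Sketch of the task behind the stat/copy pair above. The rendered body is
    # masked in the journal (content=NOT_LOGGING_PARAMETER), so only the
    # destination, ownership and mode are taken from the log.
    - name: Install override.conf for the virtnodedevd socket
      become: true
      ansible.builtin.template:
        src: libvirt-socket.unit.j2        # _original_basename in the copy entry
        dest: /etc/systemd/system/virtnodedevd.socket.d/override.conf
        owner: root
        group: root
        mode: "0644"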
Dec 06 06:47:58 compute-1 sudo[184505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llzjoawghbdvuqkcqtonszkcfqvxrdsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003677.8384912-2297-62044643179252/AnsiballZ_stat.py'
Dec 06 06:47:58 compute-1 sudo[184505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:58 compute-1 python3.9[184507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:58 compute-1 sudo[184505]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:58 compute-1 sudo[184628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbgzgvoxqatysqspdlqynwthxtkshygc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003677.8384912-2297-62044643179252/AnsiballZ_copy.py'
Dec 06 06:47:58 compute-1 sudo[184628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:58 compute-1 python3.9[184630]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003677.8384912-2297-62044643179252/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:58 compute-1 sudo[184628]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:58 compute-1 ceph-mon[81689]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:59 compute-1 sudo[184780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zihjqezjgaycyedrstdhbhcbuhuukyke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003678.922465-2297-189917290994402/AnsiballZ_stat.py'
Dec 06 06:47:59 compute-1 sudo[184780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:59 compute-1 python3.9[184782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:47:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:59.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:47:59 compute-1 sudo[184780]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:47:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:59.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:59 compute-1 sudo[184903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arnhdtkvugzhgmbjgozuaqccerklwcjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003678.922465-2297-189917290994402/AnsiballZ_copy.py'
Dec 06 06:47:59 compute-1 sudo[184903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:59 compute-1 python3.9[184905]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003678.922465-2297-189917290994402/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:59 compute-1 sudo[184903]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:00 compute-1 sudo[185055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boohmhitxzebkpduwlcqjelphllqmzxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003680.0162413-2297-152214432461005/AnsiballZ_stat.py'
Dec 06 06:48:00 compute-1 sudo[185055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:00 compute-1 python3.9[185057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:00 compute-1 sudo[185055]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:00 compute-1 sudo[185178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sonknvrbgoqitfcbhhnzimjizqqzlhrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003680.0162413-2297-152214432461005/AnsiballZ_copy.py'
Dec 06 06:48:00 compute-1 sudo[185178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:00 compute-1 python3.9[185180]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003680.0162413-2297-152214432461005/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:00 compute-1 sudo[185178]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:01.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:01 compute-1 sudo[185330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-calrfbrznzwxxtpzlvauubvxeholrmut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003681.1408165-2297-68462055030988/AnsiballZ_stat.py'
Dec 06 06:48:01 compute-1 sudo[185330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:01.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:01 compute-1 python3.9[185332]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:48:01.603 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:48:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:48:01.605 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:48:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:48:01.605 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:48:01 compute-1 sudo[185330]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:01 compute-1 sudo[185453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myimfymznaxwsfelfperknuwbzmmqyny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003681.1408165-2297-68462055030988/AnsiballZ_copy.py'
Dec 06 06:48:01 compute-1 sudo[185453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:02 compute-1 python3.9[185455]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003681.1408165-2297-68462055030988/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:02 compute-1 sudo[185453]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:02 compute-1 sudo[185605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-webpraysavjsgzvfsrmrursxoccemzsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003682.2711604-2297-253241126194173/AnsiballZ_stat.py'
Dec 06 06:48:02 compute-1 sudo[185605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:02 compute-1 python3.9[185607]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:02 compute-1 sudo[185605]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:03 compute-1 sudo[185728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puccyrxsnvxicjttqyrpsrfogyepjysv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003682.2711604-2297-253241126194173/AnsiballZ_copy.py'
Dec 06 06:48:03 compute-1 sudo[185728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:03.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:03 compute-1 python3.9[185730]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003682.2711604-2297-253241126194173/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:03 compute-1 sudo[185728]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:03.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:03 compute-1 ceph-mon[81689]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:03 compute-1 sudo[185880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-midhcpamxrwllfztciwadeobyxxwhuup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003683.5456526-2297-67410594737046/AnsiballZ_stat.py'
Dec 06 06:48:03 compute-1 sudo[185880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:03 compute-1 python3.9[185882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:03 compute-1 sudo[185880]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:04 compute-1 sudo[186003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kesjwquewkfmeqdgpaiihznhikahrvyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003683.5456526-2297-67410594737046/AnsiballZ_copy.py'
Dec 06 06:48:04 compute-1 sudo[186003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:04 compute-1 ceph-mon[81689]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:04 compute-1 python3.9[186005]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003683.5456526-2297-67410594737046/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:04 compute-1 sudo[186003]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:05 compute-1 sudo[186155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sszlvkdhgelhedrnhkhyrcqjnbxxgaon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003684.835584-2297-116824438808526/AnsiballZ_stat.py'
Dec 06 06:48:05 compute-1 sudo[186155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:05 compute-1 python3.9[186157]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:05 compute-1 sudo[186155]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:05.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:48:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:05.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:48:05 compute-1 sudo[186278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chhwreagddsvqzdzhkxgwpoxpsfbkegj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003684.835584-2297-116824438808526/AnsiballZ_copy.py'
Dec 06 06:48:05 compute-1 sudo[186278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:05 compute-1 python3.9[186280]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003684.835584-2297-116824438808526/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:05 compute-1 sudo[186278]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:06 compute-1 sudo[186430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghfrchwgkmzdbbcbymcsmzeyfkaoeuxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003686.0232973-2297-5549232109320/AnsiballZ_stat.py'
Dec 06 06:48:06 compute-1 sudo[186430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:06 compute-1 ceph-mon[81689]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:06 compute-1 python3.9[186432]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:06 compute-1 sudo[186430]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:06 compute-1 sudo[186553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orqwqwpxgihilrlmrunzenabbgxctlil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003686.0232973-2297-5549232109320/AnsiballZ_copy.py'
Dec 06 06:48:06 compute-1 sudo[186553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:07 compute-1 python3.9[186555]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003686.0232973-2297-5549232109320/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:07 compute-1 sudo[186553]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:07.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:07 compute-1 ceph-mon[81689]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:07.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:07 compute-1 sudo[186705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-parbjxqqpqqxnigkqydyrtyfgoswaoep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003687.2745779-2297-16799952646780/AnsiballZ_stat.py'
Dec 06 06:48:07 compute-1 sudo[186705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:07 compute-1 python3.9[186707]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:07 compute-1 sudo[186705]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:08 compute-1 sudo[186828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xicxavxgaettzmibzaxemzvmrkoqjnvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003687.2745779-2297-16799952646780/AnsiballZ_copy.py'
Dec 06 06:48:08 compute-1 sudo[186828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:08 compute-1 python3.9[186830]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003687.2745779-2297-16799952646780/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:08 compute-1 sudo[186828]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:08 compute-1 sudo[186993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxbpoaodxgrpiixbelockldgezojqqry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003688.4374635-2297-244546851534980/AnsiballZ_stat.py'
Dec 06 06:48:08 compute-1 sudo[186993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:08 compute-1 podman[186954]: 2025-12-06 06:48:08.808937539 +0000 UTC m=+0.085614537 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:48:08 compute-1 python3.9[187001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:09 compute-1 sudo[186993]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:48:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:09.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:48:09 compute-1 sudo[187122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kchrhsxykrcrzehaybcltzewafggpzyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003688.4374635-2297-244546851534980/AnsiballZ_copy.py'
Dec 06 06:48:09 compute-1 sudo[187122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:48:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:09.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:48:09 compute-1 python3.9[187124]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003688.4374635-2297-244546851534980/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:09 compute-1 sudo[187122]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:09 compute-1 sudo[187274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iakhvynukajdehgzjfzkltciwaretmvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003689.720765-2297-241384919519893/AnsiballZ_stat.py'
Dec 06 06:48:09 compute-1 sudo[187274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:10 compute-1 ceph-mon[81689]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:10 compute-1 python3.9[187276]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:10 compute-1 sudo[187274]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:10 compute-1 sudo[187397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaftqywbserwwnrduysjzztsgkobjfsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003689.720765-2297-241384919519893/AnsiballZ_copy.py'
Dec 06 06:48:10 compute-1 sudo[187397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:10 compute-1 python3.9[187399]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003689.720765-2297-241384919519893/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:10 compute-1 sudo[187397]: pam_unix(sudo:session): session closed for user root
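That entry closes the series: twelve drop-ins in all (virtnodedevd, virtproxyd, virtqemud and virtsecretd, each with their -ro and -admin sockets), every one carrying the same checksum 0bad41f409b4ee7e780a2a59dc18f5c84ed99826, i.e. the template renders identically for each unit. Note the drop-ins only take effect after the daemon-reload journaled at 06:48:21 below. The whole series could be written as one looped task; a sketch, with the unit list taken from the log and the loop form an assumption:

    # Equivalent looped form of the twelve stat/copy pairs logged between
    # 06:47:57 and 06:48:10.
    - name: Install socket overrides for the modular libvirt daemons
      become: true
      ansible.builtin.template:
        src: libvirt-socket.unit.j2
        dest: "/etc/systemd/system/{{ item }}.socket.d/override.conf"
        owner: root
        group: root
        mode: "0644"
      loop:
        - virtnodedevd
        - virtnodedevd-ro
        - virtnodedevd-admin
        - virtproxyd
        - virtproxyd-ro
        - virtproxyd-admin
        - virtqemud
        - virtqemud-ro
        - virtqemud-admin
        - virtsecretd
        - virtsecretd-ro
        - virtsecretd-admin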
Dec 06 06:48:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:11.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:48:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:11.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:48:11 compute-1 ceph-mon[81689]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:11 compute-1 python3.9[187549]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
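The _raw_params above amount to an SELinux label audit: recursively list /run/libvirt with contexts and grep for anything still typed container_*_t. Under pipefail, grep's exit status drives the result (0 when a stray container label is found, 1 when the tree is clean). A sketch of how such a check might sit in a role; only the command itself is in the log, the register/failed_when handling is an assumption:

    - name: Check /run/libvirt for leftover container SELinux labels
      ansible.builtin.shell: |
        set -o pipefail
        ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
      register: run_libvirt_labels
      changed_when: false
      failed_when: false   # assumption: rc inspected by later tasks, not fatal here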
Dec 06 06:48:12 compute-1 sudo[187702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdrgivzgwhuonatmawioiesvudbdhisj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003692.1453884-2915-245482165792642/AnsiballZ_seboolean.py'
Dec 06 06:48:12 compute-1 sudo[187702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:12 compute-1 python3.9[187704]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
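The seboolean invocation maps one-to-one onto a task; persistent=True writes the boolean into the policy store and rebuilds policy, which is consistent with this sudo session staying open until 06:48:14 and with the avc: op=load_policy seqno=15 event at 06:48:15 below. As a task (arguments mirror the log; the task name is assumed):

    - name: Enable vTPM support in SELinux policy
      become: true
      ansible.posix.seboolean:
        name: os_enable_vtpm
        state: true
        persistent: true   # persists to the policy store; triggers a policy reload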
Dec 06 06:48:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:13.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:13.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:13 compute-1 ceph-mon[81689]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:14 compute-1 ceph-mon[81689]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:14 compute-1 sudo[187702]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:15.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:15 compute-1 sudo[187858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oibvjurgihvrqjjzglbqsdtowiassnfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003695.1170816-2939-93707908237041/AnsiballZ_copy.py'
Dec 06 06:48:15 compute-1 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 06 06:48:15 compute-1 sudo[187858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:15.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:15 compute-1 python3.9[187860]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:15 compute-1 sudo[187858]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:15 compute-1 sudo[188010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwehqsguuvuohvgftvtzklwkvwmgttkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003695.7322128-2939-157695845733696/AnsiballZ_copy.py'
Dec 06 06:48:15 compute-1 sudo[188010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:16 compute-1 python3.9[188012]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:16 compute-1 sudo[188010]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:16 compute-1 sudo[188162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohpoxlflsvpylljtqrbzrpwffzklaeez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003696.3038383-2939-12272915619059/AnsiballZ_copy.py'
Dec 06 06:48:16 compute-1 sudo[188162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:16 compute-1 python3.9[188164]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:16 compute-1 sudo[188162]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:17 compute-1 sudo[188314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkfibairjwivqrrfynwuswdgusqdmdqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003696.8464549-2939-75953399470836/AnsiballZ_copy.py'
Dec 06 06:48:17 compute-1 sudo[188314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:17 compute-1 python3.9[188316]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:17 compute-1 sudo[188314]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:17.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:17.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:17 compute-1 ceph-mon[81689]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:17 compute-1 sudo[188466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzeutxumyycnxrjvqobejyrsprtnzalx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003697.443201-2939-168599214943377/AnsiballZ_copy.py'
Dec 06 06:48:17 compute-1 sudo[188466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:17 compute-1 python3.9[188468]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:17 compute-1 sudo[188466]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:18 compute-1 sudo[188618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvanljmodqmyydmbuwwyhxaotywwqoqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003698.2306795-3047-249917319011203/AnsiballZ_copy.py'
Dec 06 06:48:18 compute-1 sudo[188618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:18 compute-1 python3.9[188620]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:18 compute-1 sudo[188618]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:18 compute-1 ceph-mon[81689]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:19 compute-1 sudo[188770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqttbwzeabjykscyxgddxcgqdoscpjuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003698.856353-3047-30941919225779/AnsiballZ_copy.py'
Dec 06 06:48:19 compute-1 sudo[188770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:19 compute-1 python3.9[188772]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:19 compute-1 sudo[188770]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:48:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:19.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:48:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:19.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:19 compute-1 sudo[188922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnqfggweacxrefdhmzjxyvqpqewqeaja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003699.4786-3047-123623679977397/AnsiballZ_copy.py'
Dec 06 06:48:19 compute-1 sudo[188922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:19 compute-1 python3.9[188924]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:19 compute-1 sudo[188922]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:20 compute-1 sudo[189074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukrhcjzjxfjednjcmuwvagkkpcofizep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003700.0666516-3047-128342234944654/AnsiballZ_copy.py'
Dec 06 06:48:20 compute-1 sudo[189074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:20 compute-1 python3.9[189076]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:20 compute-1 sudo[189074]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:20 compute-1 ceph-mon[81689]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:20 compute-1 sudo[189226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uohennbfqjhakpkomufouilfuknaydiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003700.6521454-3047-219209185107138/AnsiballZ_copy.py'
Dec 06 06:48:20 compute-1 sudo[189226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:21 compute-1 python3.9[189228]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:21 compute-1 sudo[189226]: pam_unix(sudo:session): session closed for user root
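The ten copies between 06:48:15 and 06:48:21 fan one issued key pair plus its CA out to every default path the libvirt and QEMU TLS stacks read: server and client cert/key under /etc/pki/libvirt (server key 0600, client key 0644, per the log), the CA at /etc/pki/CA/cacert.pem, and a qemu-group-readable 0640 set under /etc/pki/qemu. Condensed into one looped sketch; the journal shows ten separate remote_src copy tasks, so the loop form and task name are assumptions while every path, group and mode is taken from the entries above:

    - name: Distribute libvirt/QEMU TLS material from the cert-manager mount
      become: true
      ansible.builtin.copy:
        remote_src: true
        src: "/var/lib/openstack/certs/libvirt/default/{{ item.src }}"
        dest: "{{ item.dest }}"
        owner: root
        group: "{{ item.group | default('root') }}"
        mode: "{{ item.mode }}"
      loop:
        - { src: tls.crt, dest: /etc/pki/libvirt/servercert.pem,        mode: "0644" }
        - { src: tls.key, dest: /etc/pki/libvirt/private/serverkey.pem, mode: "0600" }
        - { src: tls.crt, dest: /etc/pki/libvirt/clientcert.pem,        mode: "0644" }
        - { src: tls.key, dest: /etc/pki/libvirt/private/clientkey.pem, mode: "0644" }
        - { src: ca.crt,  dest: /etc/pki/CA/cacert.pem,                 mode: "0644" }
        - { src: tls.crt, dest: /etc/pki/qemu/server-cert.pem, group: qemu, mode: "0640" }
        - { src: tls.key, dest: /etc/pki/qemu/server-key.pem,  group: qemu, mode: "0640" }
        - { src: tls.crt, dest: /etc/pki/qemu/client-cert.pem, group: qemu, mode: "0640" }
        - { src: tls.key, dest: /etc/pki/qemu/client-key.pem,  group: qemu, mode: "0640" }
        - { src: ca.crt,  dest: /etc/pki/qemu/ca-cert.pem,     group: qemu, mode: "0640" }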
Dec 06 06:48:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:21.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:48:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:21.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:48:21 compute-1 sudo[189378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvyrarkaelgymdemmsebinbyubiimira ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003701.4359896-3155-255539640443748/AnsiballZ_systemd.py'
Dec 06 06:48:21 compute-1 sudo[189378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:21 compute-1 python3.9[189380]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:48:21 compute-1 systemd[1]: Reloading.
Dec 06 06:48:22 compute-1 systemd-rc-local-generator[189407]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:48:22 compute-1 systemd-sysv-generator[189410]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:48:22 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Dec 06 06:48:22 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Dec 06 06:48:22 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 06 06:48:22 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 06 06:48:22 compute-1 systemd[1]: Starting libvirt logging daemon...
Dec 06 06:48:22 compute-1 systemd[1]: Started libvirt logging daemon.
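The sequence above is the ansible.builtin.systemd invocation from 06:48:21 doing its work in order: daemon-reload first (the Reloading. entry, which also re-runs the rc-local and sysv generators), then socket activation for the logging daemon's main and admin sockets, then the service itself. As a task (mirrors the logged arguments; the task name is assumed):

    - name: Restart virtlogd so it picks up the new drop-ins and certificates
      become: true
      ansible.builtin.systemd:
        daemon_reload: true     # runs before the state change, re-reading override.conf
        name: virtlogd.service
        state: restarted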
Dec 06 06:48:22 compute-1 sudo[189422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:22 compute-1 sudo[189422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:22 compute-1 sudo[189422]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:48:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Cumulative writes: 6453 writes, 27K keys, 6453 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6453 writes, 1055 syncs, 6.12 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 506 writes, 812 keys, 506 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                           Interval WAL: 506 writes, 232 syncs, 2.18 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1205.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
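[Editor's note] The dump above is RocksDB's periodic statistics from the monitor's backing store, one pair of "Compaction Stats" tables (level-keyed, then priority-keyed) per column family. The value occupancy: 18446744073709551615 is 2^64-1, so it is almost certainly an unpopulated gauge in this BinnedLRUCache build rather than a real entry count. A minimal Python sketch for splitting such a dump into per-column-family sections and pulling out the uptime counters; it assumes the captured text was saved to rocksdb-stats.txt (hypothetical path):

import re

HEADER = re.compile(r"\*\* Compaction Stats \[(?P<cf>[^\]]+)\] \*\*")
UPTIME = re.compile(r"Uptime\(secs\): (?P<total>[\d.]+) total, (?P<interval>[\d.]+) interval")

def split_sections(text):
    """Yield (column_family, section_text) pairs in dump order."""
    marks = list(HEADER.finditer(text))
    for i, m in enumerate(marks):
        end = marks[i + 1].start() if i + 1 < len(marks) else len(text)
        yield m.group("cf"), text[m.start():end]

with open("rocksdb-stats.txt") as fh:
    dump = fh.read()

# Each CF header appears twice (Level table, then Priority table); the
# Uptime/counter block follows the second one, so each CF prints once.
for cf, body in split_sections(dump):
    up = UPTIME.search(body)
    if up:
        print(f"{cf}: uptime {up.group('total')}s (interval {up.group('interval')}s)")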
Dec 06 06:48:22 compute-1 sudo[189378]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:22 compute-1 sudo[189447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:48:22 compute-1 sudo[189447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:22 compute-1 sudo[189447]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:22 compute-1 sudo[189496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:22 compute-1 sudo[189496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:22 compute-1 sudo[189496]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:22 compute-1 sudo[189545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 06:48:22 compute-1 sudo[189545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:22 compute-1 sudo[189684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqkmhguilssicwqmrjhravjmctjkgfji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003702.5504463-3155-200294379317969/AnsiballZ_systemd.py'
Dec 06 06:48:22 compute-1 sudo[189684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:22 compute-1 sudo[189545]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:22 compute-1 sudo[189695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:22 compute-1 sudo[189695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:22 compute-1 sudo[189695]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:23 compute-1 sudo[189720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:48:23 compute-1 sudo[189720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:23 compute-1 sudo[189720]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:23 compute-1 python3.9[189688]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:48:23 compute-1 systemd[1]: Reloading.
Dec 06 06:48:23 compute-1 systemd-sysv-generator[189798]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:48:23 compute-1 systemd-rc-local-generator[189795]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:48:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:48:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:23.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
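[Editor's note] Each radosgw request above produces a beast access line with a fixed shape: frontend pointer, client IP, user, [timestamp], "request", HTTP status, byte count, and a trailing latency. A minimal parsing sketch; the regex is inferred from these lines, not taken from radosgw documentation:

import re

BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous '
        '[06/Dec/2025:06:48:23.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000026s')
m = BEAST.search(line)
if m:
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.001000026
    print(m.group("client"), m.group("request"), m.group("status"), float(m.group("latency")))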
Dec 06 06:48:23 compute-1 sudo[189746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:23 compute-1 sudo[189746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:23 compute-1 sudo[189746]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:23 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 06 06:48:23 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 06 06:48:23 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 06 06:48:23 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 06 06:48:23 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 06 06:48:23 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 06 06:48:23 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Dec 06 06:48:23 compute-1 sudo[189807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:48:23 compute-1 sudo[189807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:23 compute-1 systemd[1]: Started libvirt nodedev daemon.
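[Editor's note] The sequence above (Reloading, the socket units coming up, then "Started libvirt nodedev daemon") is the effect of the ansible.builtin.systemd invocation with daemon_reload=True and state=restarted. A minimal sketch of the equivalent calls, assuming root on a systemd host:

import subprocess

subprocess.run(["systemctl", "daemon-reload"], check=True)  # -> "Reloading." in the journal
subprocess.run(["systemctl", "restart", "virtnodedevd.service"], check=True)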
Dec 06 06:48:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:23.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:23 compute-1 sudo[189684]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:23 compute-1 ceph-mon[81689]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:23 compute-1 sudo[190026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsgehivrgycgloqvvatholemxfnhsamx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003703.6299753-3155-98564271366361/AnsiballZ_systemd.py'
Dec 06 06:48:23 compute-1 sudo[190026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:23 compute-1 sudo[189807]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:24 compute-1 python3.9[190038]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:48:24 compute-1 systemd[1]: Reloading.
Dec 06 06:48:24 compute-1 systemd-sysv-generator[190069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:48:24 compute-1 systemd-rc-local-generator[190063]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:48:24 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 06 06:48:24 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 06 06:48:24 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 06 06:48:24 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 06 06:48:24 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 06 06:48:24 compute-1 systemd[1]: Starting libvirt proxy daemon...
Dec 06 06:48:24 compute-1 systemd[1]: Started libvirt proxy daemon.
Dec 06 06:48:24 compute-1 sudo[190026]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:24 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 06 06:48:25 compute-1 sudo[190251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brvhfirabwenxygqtzytvhgqiknqmzyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003704.7086046-3155-127338508107425/AnsiballZ_systemd.py'
Dec 06 06:48:25 compute-1 sudo[190251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:25 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 06 06:48:25 compute-1 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 06 06:48:25 compute-1 python3.9[190253]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:48:25 compute-1 systemd[1]: Reloading.
Dec 06 06:48:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:25.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:25 compute-1 systemd-sysv-generator[190289]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:48:25 compute-1 systemd-rc-local-generator[190283]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:48:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:25.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:25 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Dec 06 06:48:25 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 06 06:48:25 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 06 06:48:25 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 06 06:48:25 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 06 06:48:25 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 06 06:48:25 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 06 06:48:25 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 06 06:48:25 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 06 06:48:25 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 06 06:48:25 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Dec 06 06:48:25 compute-1 systemd[1]: Started libvirt QEMU daemon.
Dec 06 06:48:25 compute-1 sudo[190251]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:26 compute-1 setroubleshoot[190078]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 89c22c07-a67b-4db2-9312-6e207c8cb0aa
Dec 06 06:48:26 compute-1 sudo[190476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huwufkoxnqkslqwasndenrdkmbnjnfly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003705.974097-3155-140327892975647/AnsiballZ_systemd.py'
Dec 06 06:48:26 compute-1 sudo[190476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:48:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:27.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:48:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:27.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:28 compute-1 podman[190479]: 2025-12-06 06:48:28.120326252 +0000 UTC m=+0.099531572 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller)
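[Editor's note] The health_status=healthy attribute in the podman event above is produced by the container's configured healthcheck (test: /openstack/healthcheck). A minimal sketch that re-runs the check on demand; ovn_controller is the container name taken from that event:

import subprocess

# 'podman healthcheck run' executes the configured check and exits 0 when healthy.
result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_controller"],
    capture_output=True, text=True,
)
print("healthy" if result.returncode == 0 else f"unhealthy: {result.stdout or result.stderr}")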
Dec 06 06:48:28 compute-1 setroubleshoot[190078]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
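[Editor's note] The two commands above generate and install a local policy module wholesale. Before doing that, it can help to confirm what was actually denied; a minimal wrapper around the lookup step, using only ausearch flags that appear in the suggestions above:

import subprocess

avc = subprocess.run(
    ["ausearch", "-m", "avc", "-ts", "recent", "-c", "virtlogd"],
    capture_output=True, text=True,
)
# ausearch exits non-zero when nothing matches; treat that as "no denials".
print(avc.stdout if avc.returncode == 0 else "no recent AVC records for virtlogd")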
                                                  
Dec 06 06:48:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:48:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:48:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:48:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:48:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:48:29 compute-1 python3.9[190478]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:48:29 compute-1 systemd[1]: Reloading.
Dec 06 06:48:29 compute-1 ceph-mon[81689]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:29 compute-1 systemd-rc-local-generator[190531]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:48:29 compute-1 systemd-sysv-generator[190535]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:48:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:29.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:29.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:29 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Dec 06 06:48:29 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Dec 06 06:48:29 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 06 06:48:29 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 06 06:48:29 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 06 06:48:29 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 06 06:48:29 compute-1 systemd[1]: Starting libvirt secret daemon...
Dec 06 06:48:29 compute-1 systemd[1]: Started libvirt secret daemon.
Dec 06 06:48:29 compute-1 sudo[190476]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:30 compute-1 auditd[698]: Audit daemon rotating log files
Dec 06 06:48:30 compute-1 sudo[190715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deyodkvzyhfpxvgfwnnsilhgbtqwusal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003709.925837-3266-131123434406237/AnsiballZ_file.py'
Dec 06 06:48:30 compute-1 sudo[190715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:30 compute-1 ceph-mon[81689]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:30 compute-1 ceph-mon[81689]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:30 compute-1 python3.9[190717]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:30 compute-1 sudo[190715]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:30 compute-1 sudo[190867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aftvjfgmlnhwxlhylxdtjvaprffvwtbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003710.6929083-3290-202262696490070/AnsiballZ_find.py'
Dec 06 06:48:30 compute-1 sudo[190867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:31 compute-1 python3.9[190869]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 06:48:31 compute-1 sudo[190867]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:31 compute-1 ceph-mon[81689]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:31.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:31.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:31 compute-1 sudo[190953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:31 compute-1 sudo[190953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:31 compute-1 sudo[190953]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:31 compute-1 sudo[190994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:48:31 compute-1 sudo[190994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:31 compute-1 sudo[190994]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:31 compute-1 sudo[191069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnuywkqpkqmcixvekvvzxihovxxetzsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003711.3752112-3314-8702831710839/AnsiballZ_command.py'
Dec 06 06:48:31 compute-1 sudo[191069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:31 compute-1 python3.9[191071]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
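The pipeline in the record above pulls the cluster fsid out of ceph.conf: awk prints everything after the '=' on the fsid line, and xargs with no command echoes its input with surrounding whitespace squeezed, which trims the value. Run by hand it should print the bare fsid that the virsh tasks below operate on (40a1bae4-cf76-5610-8dab-c75116dfe0bb):
# awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs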
Dec 06 06:48:31 compute-1 sudo[191069]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:32 compute-1 python3.9[191225]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 06:48:33 compute-1 ceph-mon[81689]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:33.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:33 compute-1 python3.9[191375]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:33.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:33 compute-1 python3.9[191496]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003713.0847194-3371-193897902981569/.source.xml follow=False _original_basename=secret.xml.j2 checksum=cbc4650861b6b585bb80bed115fd7c888a642f49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:34 compute-1 sudo[191646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuqqfypjjfqezsxnnhsluduaumronsvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003714.464938-3417-185855176881232/AnsiballZ_command.py'
Dec 06 06:48:34 compute-1 sudo[191646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:34 compute-1 ceph-mon[81689]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:34 compute-1 python3.9[191648]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 40a1bae4-cf76-5610-8dab-c75116dfe0bb
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
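The secret.xml fed to virsh secret-define was copied with content=NOT_LOGGING_PARAMETER, so its body never reaches the journal. A plausible minimal sketch of a libvirt Ceph secret matching this run, with the fsid reused as the secret UUID (the usage name is hypothetical, not taken from the log):
<secret ephemeral='no' private='no'>
  <uuid>40a1bae4-cf76-5610-8dab-c75116dfe0bb</uuid>
  <usage type='ceph'>
    <name>client.openstack secret</name>
  </usage>
</secret>
After the define, the base64 key exported as KEY in the later task would typically be attached with:
# virsh secret-set-value --secret 40a1bae4-cf76-5610-8dab-c75116dfe0bb --base64 "$KEY"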
Dec 06 06:48:34 compute-1 polkitd[43521]: Registered Authentication Agent for unix-process:191650:399448 (system bus name :1.1843 [pkttyagent --process 191650 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 06 06:48:34 compute-1 polkitd[43521]: Unregistered Authentication Agent for unix-process:191650:399448 (system bus name :1.1843, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 06 06:48:34 compute-1 polkitd[43521]: Registered Authentication Agent for unix-process:191649:399447 (system bus name :1.1844 [pkttyagent --process 191649 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 06 06:48:34 compute-1 polkitd[43521]: Unregistered Authentication Agent for unix-process:191649:399447 (system bus name :1.1844, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 06 06:48:35 compute-1 sudo[191646]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:35.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:35.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:36 compute-1 python3.9[191810]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:36 compute-1 sudo[191960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiojuxumlbytlwklvldibxxiigngbbtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003716.364223-3464-144849644044885/AnsiballZ_command.py'
Dec 06 06:48:36 compute-1 sudo[191960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:36 compute-1 sudo[191960]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:37 compute-1 sudo[192113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjksrzindzfngamxzconkoonevbuojla ; FSID=40a1bae4-cf76-5610-8dab-c75116dfe0bb KEY=AQAgzDNpAAAAABAARUe82jbSNft4GCMkj8z7BQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003717.0637882-3488-119775691054732/AnsiballZ_command.py'
Dec 06 06:48:37 compute-1 sudo[192113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:48:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:37.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:48:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:37 compute-1 polkitd[43521]: Registered Authentication Agent for unix-process:192116:399709 (system bus name :1.1847 [pkttyagent --process 192116 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 06 06:48:37 compute-1 polkitd[43521]: Unregistered Authentication Agent for unix-process:192116:399709 (system bus name :1.1847, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 06 06:48:37 compute-1 sudo[192113]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:37 compute-1 ceph-mon[81689]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:38 compute-1 sudo[192271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzovzosuxryaprpyetdobiifpkjuekff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003717.869321-3512-117902649615092/AnsiballZ_copy.py'
Dec 06 06:48:38 compute-1 sudo[192271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:38 compute-1 python3.9[192273]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:38 compute-1 sudo[192271]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:38 compute-1 sudo[192434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtlatnqzxapuplliufwwhffthwpzauwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003718.6278913-3536-215105254067130/AnsiballZ_stat.py'
Dec 06 06:48:38 compute-1 sudo[192434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:38 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 06 06:48:38 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.065s CPU time.
Dec 06 06:48:38 compute-1 podman[192397]: 2025-12-06 06:48:38.965546751 +0000 UTC m=+0.052663360 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 06 06:48:38 compute-1 ceph-mon[81689]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:39 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 06 06:48:39 compute-1 python3.9[192441]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:39 compute-1 sudo[192434]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:39.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:39 compute-1 sudo[192563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqrzrkkuxohdtnkuimzeiuxsjsuttanx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003718.6278913-3536-215105254067130/AnsiballZ_copy.py'
Dec 06 06:48:39 compute-1 sudo[192563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:39 compute-1 python3.9[192565]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003718.6278913-3536-215105254067130/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:39 compute-1 sudo[192563]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:40 compute-1 sudo[192715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdmrtkmmgkqsdwodwbhobkklktynbjcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003720.174218-3584-8013951721577/AnsiballZ_file.py'
Dec 06 06:48:40 compute-1 sudo[192715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:40 compute-1 python3.9[192717]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:40 compute-1 sudo[192715]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:41 compute-1 sudo[192867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mckdcwtqxitizvdcjsgelzzcjlxiovkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003720.7962604-3608-253543485002479/AnsiballZ_stat.py'
Dec 06 06:48:41 compute-1 sudo[192867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:41 compute-1 python3.9[192869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:41 compute-1 sudo[192867]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:41.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:41 compute-1 sudo[192945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntstrepfghpystoopatcixlwkfrjtixp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003720.7962604-3608-253543485002479/AnsiballZ_file.py'
Dec 06 06:48:41 compute-1 sudo[192945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:41 compute-1 python3.9[192947]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:41 compute-1 sudo[192945]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:42 compute-1 sudo[193097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vncxoraqbvbdmmsmoildmcbeatttccif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003721.9681659-3644-25169223238106/AnsiballZ_stat.py'
Dec 06 06:48:42 compute-1 sudo[193097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:42 compute-1 python3.9[193099]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:42 compute-1 sudo[193097]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:42 compute-1 sudo[193175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heixdubmoxiltiotufmoodnsxfbmbfaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003721.9681659-3644-25169223238106/AnsiballZ_file.py'
Dec 06 06:48:42 compute-1 sudo[193175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:42 compute-1 python3.9[193177]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xpw_a3in recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:42 compute-1 sudo[193175]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:43.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:43 compute-1 sudo[193327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbmlahvvhohbcbgdsppwqfzscdgxjjzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003723.129891-3680-155521220836961/AnsiballZ_stat.py'
Dec 06 06:48:43 compute-1 sudo[193327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:43.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:43 compute-1 python3.9[193329]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:43 compute-1 sudo[193327]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:43 compute-1 sudo[193405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djeapkvylfnxlplvzprpufrxfhvzqgsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003723.129891-3680-155521220836961/AnsiballZ_file.py'
Dec 06 06:48:43 compute-1 sudo[193405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:44 compute-1 python3.9[193407]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:44 compute-1 sudo[193405]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:44 compute-1 sudo[193557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrqezsyglygozeijznhpzmhyxjuffjsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003724.4149234-3719-269536172254660/AnsiballZ_command.py'
Dec 06 06:48:44 compute-1 sudo[193557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:44 compute-1 python3.9[193559]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:48:44 compute-1 sudo[193557]: pam_unix(sudo:session): session closed for user root
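nft -j list ruleset in the record above dumps the live ruleset as a single JSON document, which is easier to compare programmatically than the plain-text listing. For a quick manual look, the same output pipes cleanly into jq (not part of the logged run; assumes jq is installed), e.g. to list just the table names:
# nft -j list ruleset | jq '.nftables[] | select(.table?) | .table.name'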
Dec 06 06:48:45 compute-1 ceph-mon[81689]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:45.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:48:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:45.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:48:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:46 compute-1 ceph-mon[81689]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:47 compute-1 sudo[193710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmpxiqdkvbbubevdhbvotvfsfrdharvt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003726.6606236-3743-246720283395957/AnsiballZ_edpm_nftables_from_files.py'
Dec 06 06:48:47 compute-1 sudo[193710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:47 compute-1 python3[193712]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 06:48:47 compute-1 sudo[193710]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:47.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:47.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:47 compute-1 sudo[193862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyubezsjcwpudasmfnfispdrziadbofc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003727.4989462-3767-164416775405104/AnsiballZ_stat.py'
Dec 06 06:48:47 compute-1 sudo[193862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:48 compute-1 python3.9[193864]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:48 compute-1 sudo[193862]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:48 compute-1 ceph-mon[81689]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:48 compute-1 ceph-mon[81689]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:48 compute-1 sudo[193940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bybzrienmwoaswoorikgroocjltzzmzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003727.4989462-3767-164416775405104/AnsiballZ_file.py'
Dec 06 06:48:48 compute-1 sudo[193940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:48 compute-1 python3.9[193942]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:48 compute-1 sudo[193940]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:49 compute-1 sudo[194092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhaxfnojtxkwoyoinenzxqpatimjnqna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003728.8018317-3803-144294720908327/AnsiballZ_stat.py'
Dec 06 06:48:49 compute-1 sudo[194092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:49 compute-1 python3.9[194095]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:49 compute-1 sudo[194092]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:49.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:49 compute-1 sudo[194171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pffocperhsqbqoxycosddqtxydygtmbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003728.8018317-3803-144294720908327/AnsiballZ_file.py'
Dec 06 06:48:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:49.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:49 compute-1 sudo[194171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:50 compute-1 python3.9[194173]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:50 compute-1 sudo[194171]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:50 compute-1 sudo[194324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njrkeooxfpvvdvcqhmnqffywjfngjbnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003730.2342594-3839-146429036222947/AnsiballZ_stat.py'
Dec 06 06:48:50 compute-1 sudo[194324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:50 compute-1 python3.9[194326]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:50 compute-1 sudo[194324]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:50 compute-1 sudo[194402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suwaoszumkvkpqysajkfqrjicnceohtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003730.2342594-3839-146429036222947/AnsiballZ_file.py'
Dec 06 06:48:50 compute-1 sudo[194402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:51 compute-1 python3.9[194404]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:51 compute-1 sudo[194402]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:51.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:48:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:51.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:48:52 compute-1 sshd-session[194093]: Connection reset by authenticating user root 91.202.233.33 port 26286 [preauth]
Dec 06 06:48:52 compute-1 ceph-mon[81689]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:53 compute-1 sudo[194556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhqwfpleaugvalqqrsyowwlwollekgbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003732.9560215-3875-185416641708242/AnsiballZ_stat.py'
Dec 06 06:48:53 compute-1 sudo[194556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:53.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:53 compute-1 python3.9[194558]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:53 compute-1 sudo[194556]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:53.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:53 compute-1 sudo[194634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahqxagqeokboejedpjztocrpfdmtrdgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003732.9560215-3875-185416641708242/AnsiballZ_file.py'
Dec 06 06:48:53 compute-1 sudo[194634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:53 compute-1 ceph-mon[81689]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:53 compute-1 ceph-mon[81689]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:53 compute-1 python3.9[194636]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:53 compute-1 sudo[194634]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:54 compute-1 sshd-session[194429]: Connection reset by authenticating user root 91.202.233.33 port 30988 [preauth]
Dec 06 06:48:54 compute-1 sudo[194786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvpdokxtamvyzuuatvozmcinbrqgxhno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003734.0949821-3912-76719422587169/AnsiballZ_stat.py'
Dec 06 06:48:54 compute-1 sudo[194786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:54 compute-1 python3.9[194788]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:54 compute-1 sudo[194786]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:54 compute-1 sudo[194913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljvhdnjgpdyatcmhgadlfbmrvrgnabas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003734.0949821-3912-76719422587169/AnsiballZ_copy.py'
Dec 06 06:48:54 compute-1 sudo[194913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:55 compute-1 python3.9[194915]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003734.0949821-3912-76719422587169/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:55 compute-1 sudo[194913]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:48:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:55.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:48:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:55.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:55 compute-1 sudo[195065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guxolmxvmjuntkbenjkmtcdimwcqxeyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003735.5811727-3956-169649228016777/AnsiballZ_file.py'
Dec 06 06:48:55 compute-1 sudo[195065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:56 compute-1 sshd-session[194789]: Invalid user ftpuser from 91.202.233.33 port 31022
Dec 06 06:48:57 compute-1 python3.9[195067]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:57 compute-1 sudo[195065]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:57 compute-1 sshd-session[194789]: Connection reset by invalid user ftpuser 91.202.233.33 port 31022 [preauth]
Dec 06 06:48:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:57 compute-1 ceph-mon[81689]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:57.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:57 compute-1 sudo[195219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcixmpkheaosfuyvvflhidffzgegeaum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003737.256821-3980-41912658813818/AnsiballZ_command.py'
Dec 06 06:48:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:57.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:57 compute-1 sudo[195219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:57 compute-1 python3.9[195221]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:48:57 compute-1 sudo[195219]: pam_unix(sudo:session): session closed for user root
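The -c flag makes nft parse and check the concatenated files without committing anything to the kernel, so the five-file ruleset is validated as one unit before any of it is applied. The blockinfile task just below applies the same idea via its validate parameter; once the managed block is in place, the whole include chain can be re-checked at any time with:
# nft -c -f /etc/sysconfig/nftables.conf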
Dec 06 06:48:58 compute-1 sudo[195387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agdjsnfexzccrycvvplevbfbnoyfrovf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003737.9550471-4004-65392822008306/AnsiballZ_blockinfile.py'
Dec 06 06:48:58 compute-1 sudo[195387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:58 compute-1 podman[195348]: 2025-12-06 06:48:58.517177575 +0000 UTC m=+0.103289573 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:48:58 compute-1 ceph-mon[81689]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:58 compute-1 python3.9[195396]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
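Reconstructed from the parameters logged in this record (block, path, marker, marker_begin and marker_end), the managed block written to /etc/sysconfig/nftables.conf reads:
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK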
Dec 06 06:48:58 compute-1 sudo[195387]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:59 compute-1 sudo[195552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecijwobsvqxseoowkjnrhawtfdzymsbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003738.997657-4031-156611822176284/AnsiballZ_command.py'
Dec 06 06:48:59 compute-1 sudo[195552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:59.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:59 compute-1 python3.9[195554]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:48:59 compute-1 sudo[195552]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:48:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:59.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:59 compute-1 sudo[195705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntvamgwtbwmyzqusjpwjvxganyfklnud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003739.7029834-4055-154959899715514/AnsiballZ_stat.py'
Dec 06 06:48:59 compute-1 sudo[195705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:00 compute-1 sshd-session[195096]: Connection reset by authenticating user root 91.202.233.33 port 31036 [preauth]
Dec 06 06:49:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:01.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:01.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:49:01.603 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:49:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:49:01.603 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:49:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:49:01.603 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:49:02 compute-1 ceph-mon[81689]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:02 compute-1 python3.9[195707]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:49:02 compute-1 sudo[195705]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:02 compute-1 sudo[195861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbxcdykqethjjtxeiwoevlrmvdnvvypz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003742.3327088-4079-210359482727193/AnsiballZ_command.py'
Dec 06 06:49:02 compute-1 sudo[195861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:02 compute-1 python3.9[195863]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:49:02 compute-1 sudo[195861]: pam_unix(sudo:session): session closed for user root
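[The two nft invocations above are the edpm-ansible firewall apply pattern: chain definitions are loaded first from edpm-chains.nft, then the flush, rule, and jump-update files are concatenated and piped to nft on stdin so the whole rule swap lands in a single transaction. A condensed shell equivalent of the same sequence, using exactly the files named in the log:]

    # Load the chain scaffolding first; "nft -f" applies the file as one transaction.
    nft -f /etc/nftables/edpm-chains.nft

    # Then flush and repopulate the EDPM chains atomically by feeding all three
    # files to a single nft transaction on stdin (pipefail so a missing file fails the step).
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -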
Dec 06 06:49:02 compute-1 sshd-session[195708]: Connection reset by authenticating user root 91.202.233.33 port 31088 [preauth]
Dec 06 06:49:03 compute-1 sudo[196016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrlsnhxfxutpwxbgnhkiebgxupiazflc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003743.1002498-4104-195182006523310/AnsiballZ_file.py'
Dec 06 06:49:03 compute-1 sudo[196016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:03.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:49:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:03.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:49:03 compute-1 python3.9[196018]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:03 compute-1 sudo[196016]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:04 compute-1 sudo[196168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auuvrzhiegitqmfawlburvujntevbdyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003743.8000557-4127-169182009984370/AnsiballZ_stat.py'
Dec 06 06:49:04 compute-1 sudo[196168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:04 compute-1 python3.9[196170]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:04 compute-1 sudo[196168]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:04 compute-1 ceph-mon[81689]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:04 compute-1 ceph-mon[81689]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:04 compute-1 sudo[196291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tibfncdpdsbooujbmjuglrgysfhpkqrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003743.8000557-4127-169182009984370/AnsiballZ_copy.py'
Dec 06 06:49:04 compute-1 sudo[196291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:04 compute-1 python3.9[196293]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003743.8000557-4127-169182009984370/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:04 compute-1 sudo[196291]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:05 compute-1 ceph-mon[81689]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:05 compute-1 sudo[196443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckahjewedvovidymgfmjzyxknatsurig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003745.1382887-4172-34270883403295/AnsiballZ_stat.py'
Dec 06 06:49:05 compute-1 sudo[196443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:05.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:05.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:05 compute-1 python3.9[196445]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:05 compute-1 sudo[196443]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:05 compute-1 sudo[196566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhtzdnovulmjlkfjrbvkkjbgqpanjoqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003745.1382887-4172-34270883403295/AnsiballZ_copy.py'
Dec 06 06:49:05 compute-1 sudo[196566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:06 compute-1 python3.9[196568]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003745.1382887-4172-34270883403295/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:06 compute-1 sudo[196566]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:06 compute-1 sudo[196718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qncguofrkwjuywxgmjvglapgkgawppto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003746.4098907-4217-69522811647555/AnsiballZ_stat.py'
Dec 06 06:49:06 compute-1 sudo[196718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:06 compute-1 python3.9[196720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:06 compute-1 sudo[196718]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:07 compute-1 sudo[196841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbryaboimimopmclgiljgfzmiogjmdfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003746.4098907-4217-69522811647555/AnsiballZ_copy.py'
Dec 06 06:49:07 compute-1 sudo[196841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:07 compute-1 python3.9[196843]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003746.4098907-4217-69522811647555/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:07 compute-1 sudo[196841]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:07.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:07.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:07 compute-1 sudo[196993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bddukrirpuyhshmxffszoxierlczqgmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003747.6829093-4262-47016985548686/AnsiballZ_systemd.py'
Dec 06 06:49:07 compute-1 sudo[196993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:08 compute-1 python3.9[196995]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:49:08 compute-1 systemd[1]: Reloading.
Dec 06 06:49:08 compute-1 systemd-rc-local-generator[197020]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:49:08 compute-1 systemd-sysv-generator[197024]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:49:08 compute-1 ceph-mon[81689]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:08 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Dec 06 06:49:08 compute-1 sudo[196993]: pam_unix(sudo:session): session closed for user root
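[The three copy tasks above install edpm_libvirt.target, edpm_libvirt_guests.service, and virt-guest-shutdown.target, after which the systemd module reloads the daemon and brings the target up ("Reached target edpm_libvirt.target"). Outside Ansible the equivalent steps would look roughly like this; the unit body shown is a hypothetical minimal target, since the log records only file checksums, not contents:]

    # Hypothetical minimal target unit (actual contents are not in the log).
    cat > /etc/systemd/system/edpm_libvirt.target <<'EOF'
    [Unit]
    Description=EDPM libvirt services

    [Install]
    WantedBy=multi-user.target
    EOF
    chmod 0644 /etc/systemd/system/edpm_libvirt.target

    systemctl daemon-reload                    # daemon_reload=True
    systemctl enable edpm_libvirt.target       # enabled=True
    systemctl restart edpm_libvirt.target      # state=restarted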
Dec 06 06:49:09 compute-1 podman[197059]: 2025-12-06 06:49:09.080590815 +0000 UTC m=+0.064688364 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
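[Entries like the one above are emitted each time podman's healthcheck timer runs the container's configured test, here the /openstack/healthcheck script mounted into ovn_metadata_agent. The same check can be driven by hand to debug a failing streak:]

    # Run the configured healthcheck once; exit status 0 means healthy.
    podman healthcheck run ovn_metadata_agent && echo healthy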
Dec 06 06:49:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:09.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:09.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:09 compute-1 ceph-mon[81689]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:10 compute-1 sudo[197204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbngvfaxydedqwqpcigxyuqspnzegmab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003749.8069625-4286-27853363665217/AnsiballZ_systemd.py'
Dec 06 06:49:10 compute-1 sudo[197204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:10 compute-1 python3.9[197206]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 06:49:10 compute-1 systemd[1]: Reloading.
Dec 06 06:49:10 compute-1 systemd-rc-local-generator[197231]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:49:10 compute-1 systemd-sysv-generator[197236]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:49:10 compute-1 systemd[1]: Reloading.
Dec 06 06:49:10 compute-1 systemd-sysv-generator[197275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:49:10 compute-1 systemd-rc-local-generator[197270]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:49:11 compute-1 sudo[197204]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:11.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:11.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:11 compute-1 ceph-mon[81689]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:11 compute-1 sshd-session[139722]: Connection closed by 192.168.122.30 port 41254
Dec 06 06:49:11 compute-1 sshd-session[139719]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:49:11 compute-1 systemd[1]: session-48.scope: Deactivated successfully.
Dec 06 06:49:11 compute-1 systemd[1]: session-48.scope: Consumed 3min 33.736s CPU time.
Dec 06 06:49:11 compute-1 systemd-logind[788]: Session 48 logged out. Waiting for processes to exit.
Dec 06 06:49:11 compute-1 systemd-logind[788]: Removed session 48.
Dec 06 06:49:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:13 compute-1 ceph-mon[81689]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:13.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:13.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:15.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:49:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:15.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:49:15 compute-1 ceph-mon[81689]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:16 compute-1 ceph-mon[81689]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:17 compute-1 sshd-session[197303]: Accepted publickey for zuul from 192.168.122.30 port 55764 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:49:17 compute-1 systemd-logind[788]: New session 49 of user zuul.
Dec 06 06:49:17 compute-1 systemd[1]: Started Session 49 of User zuul.
Dec 06 06:49:17 compute-1 sshd-session[197303]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:49:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:17.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:17.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:18 compute-1 python3.9[197456]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:49:18 compute-1 ceph-mon[81689]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:19.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:19 compute-1 python3.9[197610]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:49:19 compute-1 network[197627]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:49:19 compute-1 network[197628]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:49:19 compute-1 network[197629]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:49:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:19.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:20 compute-1 ceph-mon[81689]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:21.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:21.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:23.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:23.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:23 compute-1 ceph-mon[81689]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:23 compute-1 sudo[197899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrtikauwiqlsxdswfjimgglxeibfcohr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003763.7209203-108-144833785269033/AnsiballZ_setup.py'
Dec 06 06:49:24 compute-1 sudo[197899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:24 compute-1 python3.9[197901]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:49:24 compute-1 sudo[197899]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:24 compute-1 ceph-mon[81689]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:24 compute-1 sudo[197983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqmbzndcsgadgzojrutrjxwqdscebgdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003763.7209203-108-144833785269033/AnsiballZ_dnf.py'
Dec 06 06:49:24 compute-1 sudo[197983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:25 compute-1 python3.9[197985]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:49:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:25.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:27.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:27 compute-1 ceph-mon[81689]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:28 compute-1 ceph-mon[81689]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:29 compute-1 podman[197987]: 2025-12-06 06:49:29.122343667 +0000 UTC m=+0.100557177 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 06:49:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:29.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:29.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:49:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Cumulative writes: 2638 writes, 16K keys, 2638 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 2638 writes, 2638 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1117 writes, 5369 keys, 1117 commit groups, 1.0 writes per commit group, ingest: 11.89 MB, 0.02 MB/s
                                           Interval WAL: 1117 writes, 1117 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.3      1.41              0.06         7    0.201       0      0       0.0       0.0
                                             L6      1/0    7.23 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9     21.0     17.3      3.11              0.18         6    0.518     28K   3284       0.0       0.0
                                            Sum      1/0    7.23 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9     14.5     16.1      4.52              0.24        13    0.347     28K   3284       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     14.6     14.2      2.53              0.14         6    0.422     15K   2033       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0     21.0     17.3      3.11              0.18         6    0.518     28K   3284       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.7      1.36              0.06         6    0.227       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.018, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 4.5 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 2.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 2.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(110,2.07 MB,0.682043%) FilterBlock(13,87.86 KB,0.0282237%) IndexBlock(13,173.55 KB,0.0557498%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 06:49:30 compute-1 sudo[197983]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:31.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:31 compute-1 sudo[198162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyrbuixacodysuohumerxmspylprddju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003771.1825588-144-272977750108455/AnsiballZ_stat.py'
Dec 06 06:49:31 compute-1 sudo[198162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:31.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:31 compute-1 ceph-mon[81689]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:31 compute-1 python3.9[198164]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:49:31 compute-1 sudo[198165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:31 compute-1 sudo[198165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:31 compute-1 sudo[198165]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:31 compute-1 sudo[198162]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:31 compute-1 sudo[198190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:49:31 compute-1 sudo[198190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:31 compute-1 sudo[198190]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:31 compute-1 sudo[198239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:31 compute-1 sudo[198239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:31 compute-1 sudo[198239]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:31 compute-1 sudo[198264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:49:31 compute-1 sudo[198264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:32 compute-1 sudo[198264]: pam_unix(sudo:session): session closed for user root
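[The ceph-admin sudo burst above is a cephadm host check-in: /bin/true probes passwordless sudo, "which python3" locates an interpreter, and the hash-named copy of cephadm under /var/lib/ceph/<fsid>/ is run with gather-facts to report host inventory back to the orchestrator. Run directly on a host, the same subcommand prints a JSON fact dump:]

    # Print the host facts (JSON) that the orchestrator collects;
    # the packaged binary behaves the same as the per-cluster copy.
    cephadm gather-facts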
Dec 06 06:49:32 compute-1 sudo[198446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yosttrqjmtxtnjmpradjtkehrgedcphg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003772.1364086-174-83805627016959/AnsiballZ_command.py'
Dec 06 06:49:32 compute-1 sudo[198446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:49:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:33.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:49:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:33.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:35 compute-1 python3.9[198448]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:49:35 compute-1 sudo[198446]: pam_unix(sudo:session): session closed for user root
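[restorecon is invoked here with -n, so it only reports which SELinux labels under /etc/iscsi and /var/lib/iscsi would change, without modifying anything; dropping -n applies the relabel:]

    restorecon -nvr /etc/iscsi /var/lib/iscsi   # -n dry run, -v verbose, -r recurse
    restorecon -vr  /etc/iscsi /var/lib/iscsi   # actually fix the labels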
Dec 06 06:49:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 06:49:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
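[The two dispatches above show the mgr (cephadm's OSD memory autotuner) clearing per-host osd_memory_target overrides in the mon config store; the same operation from the CLI, using the host mask syntax visible in the log, would be:]

    # Remove a host-masked OSD option from the cluster config store.
    ceph config rm osd/host:compute-1 osd_memory_target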
Dec 06 06:49:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:35.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:35.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:35 compute-1 sudo[198599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvebotavkzvqzvezafoeaikqpseiwpnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003775.5020168-204-247678321582965/AnsiballZ_stat.py'
Dec 06 06:49:35 compute-1 sudo[198599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:35 compute-1 python3.9[198601]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:49:35 compute-1 sudo[198599]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:36 compute-1 ceph-mon[81689]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:36 compute-1 ceph-mon[81689]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:49:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:49:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:49:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:49:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:49:36 compute-1 sudo[198751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckufbjhgvgebndzbdlvxufpahwuuojmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003776.1481943-228-237005853084135/AnsiballZ_command.py'
Dec 06 06:49:36 compute-1 sudo[198751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:36 compute-1 python3.9[198753]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:49:36 compute-1 sudo[198751]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:37 compute-1 sudo[198904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrvzycgqhpasxqgmxmpnloliqkgmgcjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003776.8236902-252-226543943332746/AnsiballZ_stat.py'
Dec 06 06:49:37 compute-1 sudo[198904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:37 compute-1 python3.9[198906]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:37 compute-1 sudo[198904]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:37 compute-1 ceph-mon[81689]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:37.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:37.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:37 compute-1 sudo[199027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sauqejugunwfnmhjeteryktouztivyde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003776.8236902-252-226543943332746/AnsiballZ_copy.py'
Dec 06 06:49:37 compute-1 sudo[199027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:37 compute-1 python3.9[199029]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003776.8236902-252-226543943332746/.source.iscsi _original_basename=.12c21oel follow=False checksum=85d10ba14ab3b60d58472fd148cb0f10da05d36d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:38 compute-1 sudo[199027]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:38 compute-1 sudo[199179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoxuvankvsvsjogxuezwowrolviuiiiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003778.215713-297-211854469242241/AnsiballZ_file.py'
Dec 06 06:49:38 compute-1 sudo[199179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:38 compute-1 python3.9[199181]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:38 compute-1 sudo[199179]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:38 compute-1 ceph-mon[81689]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:39 compute-1 sudo[199341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgeaqpoogpbdettujxnmpdvgsphikwnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003779.0223522-321-201049891855603/AnsiballZ_lineinfile.py'
Dec 06 06:49:39 compute-1 sudo[199341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:39 compute-1 podman[199305]: 2025-12-06 06:49:39.4537536 +0000 UTC m=+0.062159269 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 06:49:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:49:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:39.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:49:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:49:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:39.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:49:39 compute-1 python3.9[199348]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:39 compute-1 sudo[199341]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:40 compute-1 sudo[199500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixxbllkrqtvbgfnjotiabwkdhaiomgat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003779.901949-348-263412831190654/AnsiballZ_systemd_service.py'
Dec 06 06:49:40 compute-1 sudo[199500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:40 compute-1 python3.9[199502]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:49:40 compute-1 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 06 06:49:40 compute-1 sudo[199500]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:41 compute-1 ceph-mon[81689]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:41 compute-1 sudo[199656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srttyiyrdiwuquastshmnxuelsvcroau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003781.054179-372-165226516664113/AnsiballZ_systemd_service.py'
Dec 06 06:49:41 compute-1 sudo[199656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:41.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:49:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:41.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:49:41 compute-1 python3.9[199658]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:49:41 compute-1 systemd[1]: Reloading.
Dec 06 06:49:41 compute-1 systemd-rc-local-generator[199685]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:49:41 compute-1 systemd-sysv-generator[199688]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:49:42 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 06 06:49:42 compute-1 systemd[1]: Starting Open-iSCSI...
Dec 06 06:49:42 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Dec 06 06:49:42 compute-1 systemd[1]: Started Open-iSCSI.
Dec 06 06:49:42 compute-1 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Dec 06 06:49:42 compute-1 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Dec 06 06:49:42 compute-1 sudo[199656]: pam_unix(sudo:session): session closed for user root
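[Note] The two systemd_service invocations above enable and start iscsid.socket first and then iscsid itself; socket activation is why "Listening on Open-iSCSI iscsid Socket" appears before the service's own start messages. A hedged equivalent of those module calls using plain systemctl verbs:

```python
import subprocess

def systemd_unit(verb, unit):
    # Stand-in for ansible.builtin.systemd_service with enabled=True / state=started.
    subprocess.run(["systemctl", verb, unit], check=True)

# Socket first (the 06:49:40 task), then the service (the 06:49:41 task);
# systemd can also start iscsid on demand once the socket is listening.
for unit in ("iscsid.socket", "iscsid.service"):
    systemd_unit("enable", unit)
    systemd_unit("start", unit)
```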
Dec 06 06:49:42 compute-1 sudo[199856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymkzybjrznaskwpcceoqbzxxhyhdjwtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003782.6340437-405-10236013556611/AnsiballZ_service_facts.py'
Dec 06 06:49:42 compute-1 sudo[199856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:43 compute-1 python3.9[199858]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:49:43 compute-1 network[199875]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:49:43 compute-1 network[199876]: 'network-scripts' will be removed from the distribution in the near future.
Dec 06 06:49:43 compute-1 network[199877]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:49:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:43.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 06:49:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:43.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 06:49:43 compute-1 ceph-mon[81689]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:45 compute-1 sudo[199904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:45 compute-1 sudo[199904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:45 compute-1 sudo[199904]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:45 compute-1 sudo[199929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:49:45 compute-1 sudo[199929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:45 compute-1 sudo[199929]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:45 compute-1 ceph-mon[81689]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:45.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:45.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:47 compute-1 ceph-mon[81689]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:47 compute-1 sudo[199856]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 06:49:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:47.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 06:49:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:47.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:47 compute-1 sudo[200197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaxwdxzyznsutygyljqowhhydxixbhzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003787.6630096-435-147017064531148/AnsiballZ_file.py'
Dec 06 06:49:47 compute-1 sudo[200197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:48 compute-1 python3.9[200199]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 06:49:48 compute-1 sudo[200197]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:48 compute-1 sudo[200349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qccfwfcjnxodmbbirnhyzhalzxtisdmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003788.460528-459-107157351839676/AnsiballZ_modprobe.py'
Dec 06 06:49:48 compute-1 sudo[200349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:49 compute-1 python3.9[200351]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 06 06:49:49 compute-1 sudo[200349]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:49.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:49 compute-1 sudo[200505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxslbcyrhnwapkesaraddjxyzmtvijaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003789.2660952-483-140119426547573/AnsiballZ_stat.py'
Dec 06 06:49:49 compute-1 sudo[200505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:49.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:49 compute-1 ceph-mon[81689]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:49 compute-1 python3.9[200507]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:49 compute-1 sudo[200505]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:50 compute-1 sudo[200628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cracohyxutcfoagksruqqeglnfyrdnxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003789.2660952-483-140119426547573/AnsiballZ_copy.py'
Dec 06 06:49:50 compute-1 sudo[200628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:50 compute-1 python3.9[200630]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003789.2660952-483-140119426547573/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:50 compute-1 sudo[200628]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:50 compute-1 ceph-mon[81689]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:50 compute-1 sudo[200780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xogggmeauqdgynbtgjsnmtjgsrfmhxhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003790.7197387-531-97065304681038/AnsiballZ_lineinfile.py'
Dec 06 06:49:50 compute-1 sudo[200780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:51 compute-1 python3.9[200782]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:51 compute-1 sudo[200780]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:51.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:51.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:52 compute-1 sudo[200932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgjqiybgnfefnrjmewdwgkjfwegxjque ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003791.453149-555-222366841326755/AnsiballZ_systemd.py'
Dec 06 06:49:52 compute-1 sudo[200932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:52 compute-1 python3.9[200934]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:49:52 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 06 06:49:52 compute-1 systemd[1]: Stopped Load Kernel Modules.
Dec 06 06:49:52 compute-1 systemd[1]: Stopping Load Kernel Modules...
Dec 06 06:49:52 compute-1 systemd[1]: Starting Load Kernel Modules...
Dec 06 06:49:52 compute-1 systemd[1]: Finished Load Kernel Modules.
Dec 06 06:49:52 compute-1 sudo[200932]: pam_unix(sudo:session): session closed for user root
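[Note] The block above loads dm-multipath immediately and makes it persistent: modprobe now, a drop-in under /etc/modules-load.d plus a line in /etc/modules for future boots, then a restart of systemd-modules-load.service so the drop-in is read. A compact sketch of the same flow; the drop-in's body is an assumption, since the play logs only the copied file's checksum:

```python
import pathlib, subprocess

MODULE = "dm-multipath"

# Load now (community.general.modprobe state=present).
subprocess.run(["modprobe", MODULE], check=True)

# Persist for future boots. The drop-in body is assumed to be just the
# module name; only its sha1 appears in the log, not its content.
dropin = pathlib.Path("/etc/modules-load.d/dm-multipath.conf")
dropin.write_text(MODULE + "\n")
dropin.chmod(0o644)

# Legacy location kept in sync (the lineinfile on /etc/modules, create=True).
modules = pathlib.Path("/etc/modules")
lines = modules.read_text().splitlines() if modules.exists() else []
if MODULE not in lines:
    modules.write_text("\n".join(lines + [MODULE]) + "\n")

# Re-run the loader so the new drop-in takes effect immediately.
subprocess.run(["systemctl", "restart", "systemd-modules-load.service"], check=True)
```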
Dec 06 06:49:53 compute-1 sudo[201088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmawscmwdtxrbfvxbbzcqwxgghlrtvhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003792.7324796-579-5419149005494/AnsiballZ_file.py'
Dec 06 06:49:53 compute-1 sudo[201088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:53 compute-1 python3.9[201090]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:49:53 compute-1 sudo[201088]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:53 compute-1 ceph-mon[81689]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:53.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:53.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:53 compute-1 sudo[201240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idlndyagiotrwjwmlavcxriacelozxdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003793.6616428-606-54699812642586/AnsiballZ_stat.py'
Dec 06 06:49:53 compute-1 sudo[201240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:54 compute-1 python3.9[201242]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:49:54 compute-1 sudo[201240]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:54 compute-1 sudo[201392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neelxfhgliexhcfusirhglwuehiejcdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003794.445022-633-202596338226284/AnsiballZ_stat.py'
Dec 06 06:49:54 compute-1 sudo[201392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:54 compute-1 python3.9[201394]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:49:54 compute-1 sudo[201392]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:55 compute-1 sudo[201544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejpzdbtklghenvxllelvpiubipdmdmzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003795.1827753-657-100468602956193/AnsiballZ_stat.py'
Dec 06 06:49:55 compute-1 sudo[201544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:55.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:55 compute-1 python3.9[201546]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:55.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:55 compute-1 sudo[201544]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:55 compute-1 ceph-mon[81689]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:56 compute-1 sudo[201667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxfltdjpnmsmjmbmzdfccsipcfqwqkee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003795.1827753-657-100468602956193/AnsiballZ_copy.py'
Dec 06 06:49:56 compute-1 sudo[201667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:56 compute-1 python3.9[201669]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003795.1827753-657-100468602956193/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:56 compute-1 sudo[201667]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:56 compute-1 ceph-mon[81689]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:56 compute-1 sudo[201819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkxviiamoriqfxtzkwladxtpagsaauzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003796.4355206-702-138789458727036/AnsiballZ_command.py'
Dec 06 06:49:56 compute-1 sudo[201819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:57 compute-1 python3.9[201821]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:49:57 compute-1 sudo[201819]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:49:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:57.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:49:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:57.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:57 compute-1 sudo[201972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svbutsiclwyntbjacmdivtvebemrldla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003797.3973937-726-90497648701384/AnsiballZ_lineinfile.py'
Dec 06 06:49:57 compute-1 sudo[201972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:57 compute-1 python3.9[201974]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:57 compute-1 sudo[201972]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:58 compute-1 sudo[202124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcvgmvmiammsosxzdcbhpkdxqeixeotk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003798.1589012-750-93376777364937/AnsiballZ_replace.py'
Dec 06 06:49:58 compute-1 sudo[202124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:58 compute-1 python3.9[202126]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:58 compute-1 sudo[202124]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:58 compute-1 ceph-mon[81689]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:59 compute-1 sudo[202289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijsjpzlbgsopqtjiywfnztowvkpkwleq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003798.9837146-775-149813052056709/AnsiballZ_replace.py'
Dec 06 06:49:59 compute-1 sudo[202289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:59 compute-1 podman[202250]: 2025-12-06 06:49:59.334617514 +0000 UTC m=+0.089676393 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:49:59 compute-1 python3.9[202297]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:59.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:59 compute-1 sudo[202289]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:49:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:59.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:00 compute-1 sudo[202454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nipqtkahdgzsewkypinehorvqxnyvvol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003799.7970476-801-155185993248173/AnsiballZ_lineinfile.py'
Dec 06 06:50:00 compute-1 sudo[202454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:00 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 06:50:00 compute-1 python3.9[202456]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:00 compute-1 sudo[202454]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:00 compute-1 sudo[202606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqlnsqldzhzldxsmzlovzdfusmqmsinw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003800.4324963-801-25543220445343/AnsiballZ_lineinfile.py'
Dec 06 06:50:00 compute-1 sudo[202606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:00 compute-1 python3.9[202608]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:00 compute-1 sudo[202606]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:01 compute-1 ceph-mon[81689]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:01 compute-1 sudo[202758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viajgfngtjwdamoeyyuudjqqlqdbyqiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003801.071168-801-140880408023053/AnsiballZ_lineinfile.py'
Dec 06 06:50:01 compute-1 sudo[202758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:01.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:01 compute-1 python3.9[202760]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:01 compute-1 sudo[202758]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:50:01.604 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:50:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:50:01.605 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:50:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:50:01.605 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:50:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:01.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:01 compute-1 sudo[202910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxyvaltinfaqoivjaarlrwbglzearnpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003801.6593897-801-241299014902859/AnsiballZ_lineinfile.py'
Dec 06 06:50:01 compute-1 sudo[202910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:02 compute-1 python3.9[202912]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:02 compute-1 sudo[202910]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:02 compute-1 sudo[203062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxmkmrzhaaaqnkpmfuyhqpctzkcbhxfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003802.5352516-888-149998729310129/AnsiballZ_stat.py'
Dec 06 06:50:02 compute-1 sudo[203062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:02 compute-1 python3.9[203064]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:50:03 compute-1 sudo[203062]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:03 compute-1 sudo[203216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esdbjunsjtccuywevviznofdqxfhforo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003803.248747-912-244677618589760/AnsiballZ_file.py'
Dec 06 06:50:03 compute-1 sudo[203216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:03.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:03.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:03 compute-1 python3.9[203218]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:03 compute-1 sudo[203216]: pam_unix(sudo:session): session closed for user root
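[Note] The sequence from the grep at 06:49:57 through the touch above shapes /etc/multipath.conf: ensure a blacklist section exists, strip a packaged catch-all devnode ".*" rule, pin four defaults options (find_multipaths, recheck_wwid, skip_kpartx, user_friendly_names), and leave a .multipath_restart_required marker for a later handler. A sketch approximating those regex edits; the play gates some steps on the grep's return code, which the guard below stands in for:

```python
import re, pathlib

conf = pathlib.Path("/etc/multipath.conf")
text = conf.read_text()

# Ensure an (empty) blacklist section exists, mirroring the
# lineinfile + replace pair in the log.
if not re.search(r"^blacklist\s*{", text, re.M):
    text += "\nblacklist {\n"
    text = re.sub(r"^(blacklist {)", r"\1\n}", text, flags=re.M)

# Drop a catch-all devnode rule if the packaged default shipped one.
text = re.sub(r'^blacklist\s*{\n\s+devnode "\.\*"', "blacklist {", text, flags=re.M)

# Pin the defaults the log sets via four lineinfile calls (8-space indent
# and insertafter=^defaults, firstmatch=True, as logged).
for key, value in [("find_multipaths", "yes"), ("recheck_wwid", "yes"),
                   ("skip_kpartx", "yes"), ("user_friendly_names", "no")]:
    pattern = re.compile(r"^\s+%s\b.*$" % key, re.M)
    line = "        %s %s" % (key, value)
    if pattern.search(text):
        text = pattern.sub(line, text, count=1)
    else:
        text = re.sub(r"^defaults\b.*$", lambda m: m.group(0) + "\n" + line,
                      text, count=1, flags=re.M)

conf.write_text(text)
# Flag that multipathd needs a restart, as the play does by touching
# /etc/multipath/.multipath_restart_required.
pathlib.Path("/etc/multipath/.multipath_restart_required").touch()
```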
Dec 06 06:50:03 compute-1 ceph-mon[81689]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:04 compute-1 sudo[203368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxvdmxxkifetybanqslwfrbjmdsehfie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003804.1305356-939-233593216607653/AnsiballZ_file.py'
Dec 06 06:50:04 compute-1 sudo[203368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:04 compute-1 python3.9[203370]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:04 compute-1 sudo[203368]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:04 compute-1 ceph-mon[81689]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:05 compute-1 sudo[203520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhipduabcqfkbzwnlkpgeqxdzfqbtnoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003804.957269-963-252456044691537/AnsiballZ_stat.py'
Dec 06 06:50:05 compute-1 sudo[203520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:05 compute-1 python3.9[203522]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:05 compute-1 sudo[203520]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:05.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:05 compute-1 sudo[203598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvexibbpyxdwyfoygksiqywytdolptjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003804.957269-963-252456044691537/AnsiballZ_file.py'
Dec 06 06:50:05 compute-1 sudo[203598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:05.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:05 compute-1 python3.9[203600]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:05 compute-1 sudo[203598]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:06 compute-1 sudo[203750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jewiuelzmfznyiybtvhoposfsjptsyxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003805.973133-963-82557713535043/AnsiballZ_stat.py'
Dec 06 06:50:06 compute-1 sudo[203750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:06 compute-1 python3.9[203752]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:06 compute-1 sudo[203750]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:06 compute-1 sudo[203828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxutroacvqqijugiryofimsmumcnwtbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003805.973133-963-82557713535043/AnsiballZ_file.py'
Dec 06 06:50:06 compute-1 sudo[203828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:06 compute-1 python3.9[203830]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:06 compute-1 sudo[203828]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:07.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:07 compute-1 sudo[203980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prfiilgnxcaprfofcvjhzmgsythangbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003807.3221757-1033-263813189560247/AnsiballZ_file.py'
Dec 06 06:50:07 compute-1 sudo[203980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:07.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:07 compute-1 python3.9[203982]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:07 compute-1 sudo[203980]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:08 compute-1 ceph-mon[81689]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:08 compute-1 sudo[204132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooemzotndkzgtbxhqqyvfhdaqistqdam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003808.0738995-1056-249246826045325/AnsiballZ_stat.py'
Dec 06 06:50:08 compute-1 sudo[204132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:08 compute-1 python3.9[204134]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:08 compute-1 sudo[204132]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:08 compute-1 sudo[204210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuawqaiusbplgexdycyywnoqourudfqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003808.0738995-1056-249246826045325/AnsiballZ_file.py'
Dec 06 06:50:08 compute-1 sudo[204210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:08 compute-1 python3.9[204212]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:08 compute-1 sudo[204210]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:09 compute-1 ceph-mon[81689]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:09.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:09 compute-1 sudo[204378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aycsrqopwbuhwgldbmkzvvdojzwmmfns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003809.3234413-1092-29560070958868/AnsiballZ_stat.py'
Dec 06 06:50:09 compute-1 sudo[204378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:09 compute-1 podman[204336]: 2025-12-06 06:50:09.589261004 +0000 UTC m=+0.049372635 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 06:50:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:09.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:09 compute-1 python3.9[204382]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:09 compute-1 sudo[204378]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:10 compute-1 sudo[204459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhjnxjcatjvsdscfgfcqqnvgyfpqiuvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003809.3234413-1092-29560070958868/AnsiballZ_file.py'
Dec 06 06:50:10 compute-1 sudo[204459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:10 compute-1 python3.9[204461]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:10 compute-1 sudo[204459]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:10 compute-1 sudo[204611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcpberwaevqslujeegupahpfcxflctpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003810.5453763-1128-52527200261237/AnsiballZ_systemd.py'
Dec 06 06:50:10 compute-1 sudo[204611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:11 compute-1 python3.9[204613]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:11 compute-1 systemd[1]: Reloading.
Dec 06 06:50:11 compute-1 systemd-rc-local-generator[204640]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:11 compute-1 systemd-sysv-generator[204643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:11.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:11 compute-1 sudo[204611]: pam_unix(sudo:session): session closed for user root
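[Note] Between 06:50:05 and here the play installs /var/local/libexec/edpm-container-shutdown and edpm-start-podman-container (root-owned, mode 0700), a unit file, and a systemd preset, then does daemon_reload plus enable/start. The preset body is not logged; a single "enable" directive is the conventional content, so everything beyond the logged paths and systemctl equivalents in the sketch below is an assumption:

```python
import pathlib, subprocess

# Path and mode exactly as logged; the preset body is assumed (the play
# records only a checksum for the copied file).
preset = pathlib.Path("/etc/systemd/system-preset/91-edpm-container-shutdown.preset")
preset.write_text("enable edpm-container-shutdown.service\n")
preset.chmod(0o644)

# daemon_reload=True, enabled=True, state=started, as in the logged
# ansible.builtin.systemd call.
subprocess.run(["systemctl", "daemon-reload"], check=True)
subprocess.run(["systemctl", "enable", "--now",
                "edpm-container-shutdown.service"], check=True)
```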
Dec 06 06:50:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:11.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:11 compute-1 ceph-mon[81689]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:12 compute-1 sudo[204799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqpwvkstqrrwtzhlkvvanjhuswmkysad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003811.8417902-1152-239751006684483/AnsiballZ_stat.py'
Dec 06 06:50:12 compute-1 sudo[204799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:12 compute-1 python3.9[204801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:12 compute-1 sudo[204799]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:12 compute-1 sudo[204877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hldbqadwvjnnefxqgacapkcnoaaxzoan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003811.8417902-1152-239751006684483/AnsiballZ_file.py'
Dec 06 06:50:12 compute-1 sudo[204877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:12 compute-1 ceph-mon[81689]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:12 compute-1 python3.9[204879]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:12 compute-1 sudo[204877]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:13.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:13.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:13 compute-1 sudo[205029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prkpfufaqdetyaqouiwtqqzsegfkevlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003813.4281716-1188-228749617509019/AnsiballZ_stat.py'
Dec 06 06:50:13 compute-1 sudo[205029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:13 compute-1 python3.9[205031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:13 compute-1 sudo[205029]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:14 compute-1 sudo[205107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehlmhxbqrrudallwcssnigtogkwwaklk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003813.4281716-1188-228749617509019/AnsiballZ_file.py'
Dec 06 06:50:14 compute-1 sudo[205107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:14 compute-1 python3.9[205109]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:14 compute-1 sudo[205107]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:14 compute-1 sudo[205259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfvktbvjnsgzpukikplocqwynkagbuwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003814.574049-1224-124990351097224/AnsiballZ_systemd.py'
Dec 06 06:50:14 compute-1 sudo[205259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:15 compute-1 python3.9[205261]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:15 compute-1 systemd[1]: Reloading.
Dec 06 06:50:15 compute-1 systemd-sysv-generator[205289]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:15 compute-1 systemd-rc-local-generator[205284]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:15.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:15 compute-1 systemd[1]: Starting Create netns directory...
Dec 06 06:50:15 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 06:50:15 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 06:50:15 compute-1 systemd[1]: Finished Create netns directory.
Dec 06 06:50:15 compute-1 sudo[205259]: pam_unix(sudo:session): session closed for user root
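Note: ansible.builtin.systemd with daemon_reload=True, enabled=True, state=started accounts for the sequence just logged: a full "Reloading." (which re-runs the sysv and rc-local generators, hence their warnings), then the oneshot netns-placeholder unit starting and finishing. A rough equivalent in terms of systemctl calls, leaving out the state comparison the module performs first:

    import subprocess

    def enable_and_start(unit: str) -> None:
        # Sketch only; Ansible talks to systemd itself and skips steps
        # whose desired state already holds.
        for args in (["daemon-reload"], ["enable", unit], ["start", unit]):
            subprocess.run(["systemctl", *args], check=True)

    # enable_and_start("netns-placeholder.service")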
Dec 06 06:50:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:15.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:15 compute-1 ceph-mon[81689]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:16 compute-1 sudo[205453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypxavbragflryzzdzqtntfarpxhwvcrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003815.9331274-1254-150736237169427/AnsiballZ_file.py'
Dec 06 06:50:16 compute-1 sudo[205453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:16 compute-1 python3.9[205455]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:16 compute-1 sudo[205453]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:16 compute-1 sudo[205605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzjnefumnwfrbggzuhgriraiufiexmax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003816.5871837-1278-167639520995302/AnsiballZ_stat.py'
Dec 06 06:50:16 compute-1 sudo[205605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:17 compute-1 python3.9[205607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:17 compute-1 sudo[205605]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:17 compute-1 ceph-mon[81689]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:17 compute-1 sudo[205728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsegrihwjhwdruexjqdbzyrkwegptgjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003816.5871837-1278-167639520995302/AnsiballZ_copy.py'
Dec 06 06:50:17 compute-1 sudo[205728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:17.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:17 compute-1 python3.9[205730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003816.5871837-1278-167639520995302/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:17.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:17 compute-1 sudo[205728]: pam_unix(sudo:session): session closed for user root
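Note: taken together, the file and copy tasks above build the healthcheck layout the container config later mounts read-only at /openstack: a 0755 directory per service under /var/lib/openstack/healthchecks, with a 0700 healthcheck script inside. A sketch of the same layout, ignoring the setype=container_file_t SELinux relabel the real tasks also apply:

    import os

    def install_healthcheck(base: str, service: str, script: bytes) -> str:
        # e.g. base="/var/lib/openstack/healthchecks", service="multipathd"
        d = os.path.join(base, service)
        os.makedirs(d, mode=0o755, exist_ok=True)
        path = os.path.join(d, "healthcheck")
        with open(path, "wb") as f:
            f.write(script)
        os.chmod(path, 0o700)
        return path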
Dec 06 06:50:18 compute-1 sudo[205880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oswhqeqacbjauujnuqhbkusaveyjvnry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003818.1967957-1329-81560092249192/AnsiballZ_file.py'
Dec 06 06:50:18 compute-1 sudo[205880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:18 compute-1 python3.9[205882]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:18 compute-1 sudo[205880]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:19 compute-1 sudo[206032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oncrfjvdjjoysfndurbfclcirejileqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003819.0596583-1353-172242356416387/AnsiballZ_stat.py'
Dec 06 06:50:19 compute-1 sudo[206032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:19 compute-1 python3.9[206034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:19 compute-1 sudo[206032]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:19.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:19.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:19 compute-1 ceph-mon[81689]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:19 compute-1 sudo[206155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cudkurznpjuycqsafsycfeplirwuhfkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003819.0596583-1353-172242356416387/AnsiballZ_copy.py'
Dec 06 06:50:19 compute-1 sudo[206155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:20 compute-1 python3.9[206157]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003819.0596583-1353-172242356416387/.source.json _original_basename=.rhmazezb follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:20 compute-1 sudo[206155]: pam_unix(sudo:session): session closed for user root
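Note: multipathd.json is a kolla config file. Inside the container it is mounted as /var/lib/kolla/config_files/config.json, and kolla_set_configs later reads it and writes its command to /run_command, which the log further down shows resolving to '/usr/sbin/multipathd -d'. The file's actual contents are not in the log; a plausible shape, by kolla convention:

    multipathd_json = {
        "command": "/usr/sbin/multipathd -d",  # becomes /run_command
        "config_files": [],                    # assumption: nothing extra to copy
    }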
Dec 06 06:50:20 compute-1 sudo[206307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzqopvsimzqpatszzjxekyxswwnjcfno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003820.2333672-1398-194912283582921/AnsiballZ_file.py'
Dec 06 06:50:20 compute-1 sudo[206307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:20 compute-1 python3.9[206309]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:20 compute-1 sudo[206307]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:20 compute-1 ceph-mon[81689]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:21 compute-1 sudo[206459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmwljpinoujecambzqavedilmfbalqqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003820.996193-1422-78186754143995/AnsiballZ_stat.py'
Dec 06 06:50:21 compute-1 sudo[206459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:21 compute-1 sudo[206459]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:21.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:21.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:21 compute-1 sudo[206582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbsespvqdihplaldkkrzuyyxonnreqxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003820.996193-1422-78186754143995/AnsiballZ_copy.py'
Dec 06 06:50:21 compute-1 sudo[206582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:21 compute-1 sudo[206582]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:22 compute-1 sudo[206734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iakmacfhnwehjnodwfmjakgtrhqfridv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003822.5245478-1473-56699072685105/AnsiballZ_container_config_data.py'
Dec 06 06:50:22 compute-1 sudo[206734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:23 compute-1 python3.9[206736]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 06 06:50:23 compute-1 sudo[206734]: pam_unix(sudo:session): session closed for user root
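Note: container_config_data appears to gather every *.json under the startup-config directory into one structure for the container-manage step that follows (the module is EDPM-specific; this is a reading of its parameters, not its source). A sketch under that assumption:

    import glob
    import json
    import os

    def load_container_configs(config_path: str, pattern: str = "*.json") -> dict:
        # Collect each matching JSON file, keyed by its basename.
        configs = {}
        for path in sorted(glob.glob(os.path.join(config_path, pattern))):
            with open(path) as f:
                configs[os.path.basename(path)] = json.load(f)
        return configs

    # load_container_configs("/var/lib/edpm-config/container-startup-config/multipathd")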
Dec 06 06:50:23 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 06 06:50:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:23.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:23.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:23 compute-1 ceph-mon[81689]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:23 compute-1 sudo[206887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yobpmhflpdmytgeggquaqlhjbtvhscvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003823.4804263-1500-159975957302595/AnsiballZ_container_config_hash.py'
Dec 06 06:50:23 compute-1 sudo[206887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:24 compute-1 python3.9[206889]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 06:50:24 compute-1 sudo[206887]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:24 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 06 06:50:24 compute-1 sudo[207040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwprmirsmihthwfcfcspcmkcahxwdpng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003824.33583-1527-84069919049353/AnsiballZ_podman_container_info.py'
Dec 06 06:50:24 compute-1 sudo[207040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:24 compute-1 ceph-mon[81689]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:24 compute-1 python3.9[207042]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 06:50:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:25 compute-1 sudo[207040]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:25.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:25.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:26 compute-1 sudo[207218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkxixppvsfxeutggyqibwvlrctyagtck ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003826.2338035-1566-280789424394349/AnsiballZ_edpm_container_manage.py'
Dec 06 06:50:26 compute-1 sudo[207218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:27 compute-1 python3[207220]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 06:50:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:27.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:27.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:27 compute-1 ceph-mon[81689]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:28 compute-1 podman[207233]: 2025-12-06 06:50:28.138435376 +0000 UTC m=+0.968581747 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 06 06:50:28 compute-1 podman[207292]: 2025-12-06 06:50:28.267057128 +0000 UTC m=+0.046759153 container create 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 06:50:28 compute-1 podman[207292]: 2025-12-06 06:50:28.24080298 +0000 UTC m=+0.020505035 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 06 06:50:28 compute-1 python3[207220]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 06 06:50:28 compute-1 sudo[207218]: pam_unix(sudo:session): session closed for user root
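Note: the PODMAN-CONTAINER-DEBUG line spells out how edpm_container_manage turns config_data into a podman create invocation: environment entries become --env, healthcheck.test becomes --healthcheck-command, net becomes --network, privileged becomes --privileged=True, and each volumes entry becomes a --volume flag. The mapping, reduced to the options visible above (labels and log options elided):

    def podman_create_args(name: str, cfg: dict) -> list:
        # Translate a config_data dict into podman-create arguments.
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for k, v in cfg.get("environment", {}).items():
            args += ["--env", f"{k}={v}"]
        if "test" in cfg.get("healthcheck", {}):
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args += ["--privileged=True"]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        return args + [cfg["image"]]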
Dec 06 06:50:28 compute-1 ceph-mon[81689]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:29 compute-1 sudo[207480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmsoukljzhgvmmuiutpcsogftzvxmjdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003828.7624955-1590-98677532329438/AnsiballZ_stat.py'
Dec 06 06:50:29 compute-1 sudo[207480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:29 compute-1 python3.9[207482]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:50:29 compute-1 sudo[207480]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:29.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:29.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:29 compute-1 sudo[207644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eighdvkgsobbeozckyfvmgeydzbpbtpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003829.5660586-1617-126886147124896/AnsiballZ_file.py'
Dec 06 06:50:29 compute-1 sudo[207644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:29 compute-1 podman[207608]: 2025-12-06 06:50:29.924767372 +0000 UTC m=+0.095235093 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 06:50:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:30 compute-1 python3.9[207649]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:30 compute-1 sudo[207644]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:30 compute-1 sudo[207736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkphivewshhdvnzoiddtdfbkwsgjbtwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003829.5660586-1617-126886147124896/AnsiballZ_stat.py'
Dec 06 06:50:30 compute-1 sudo[207736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:30 compute-1 python3.9[207738]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:50:30 compute-1 sudo[207736]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:31 compute-1 sudo[207887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqxhcyzsexusvmxmafvnqlntzwbzeijq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003830.5835886-1617-60199154370720/AnsiballZ_copy.py'
Dec 06 06:50:31 compute-1 sudo[207887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:31 compute-1 python3.9[207889]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003830.5835886-1617-60199154370720/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:31 compute-1 sudo[207887]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:31.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:31 compute-1 sudo[207963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efrmzdejoothzblhcopsxhesjwafrath ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003830.5835886-1617-60199154370720/AnsiballZ_systemd.py'
Dec 06 06:50:31 compute-1 sudo[207963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:31.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:31 compute-1 ceph-mon[81689]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:31 compute-1 python3.9[207965]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:50:31 compute-1 systemd[1]: Reloading.
Dec 06 06:50:32 compute-1 systemd-rc-local-generator[207988]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:32 compute-1 systemd-sysv-generator[207994]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:32 compute-1 sudo[207963]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:32 compute-1 sudo[208073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hihynhpvqtuyurmwvzxdeypwgbffgguw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003830.5835886-1617-60199154370720/AnsiballZ_systemd.py'
Dec 06 06:50:32 compute-1 sudo[208073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:32 compute-1 ceph-mon[81689]: pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:32 compute-1 python3.9[208075]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:32 compute-1 systemd[1]: Reloading.
Dec 06 06:50:32 compute-1 systemd-rc-local-generator[208103]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:32 compute-1 systemd-sysv-generator[208108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:33 compute-1 systemd[1]: Starting multipathd container...
Dec 06 06:50:33 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:50:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0cd4e76dc3f3a4a8d576a2e5f3cdeda5314e4f8edce7174226d41766780b7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0cd4e76dc3f3a4a8d576a2e5f3cdeda5314e4f8edce7174226d41766780b7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:33 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2.
Dec 06 06:50:33 compute-1 podman[208114]: 2025-12-06 06:50:33.310397336 +0000 UTC m=+0.109190639 container init 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 06:50:33 compute-1 multipathd[208128]: + sudo -E kolla_set_configs
Dec 06 06:50:33 compute-1 podman[208114]: 2025-12-06 06:50:33.334521428 +0000 UTC m=+0.133314701 container start 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:50:33 compute-1 sudo[208134]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 06:50:33 compute-1 podman[208114]: multipathd
Dec 06 06:50:33 compute-1 sudo[208134]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 06:50:33 compute-1 sudo[208134]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 06:50:33 compute-1 systemd[1]: Started multipathd container.
Dec 06 06:50:33 compute-1 multipathd[208128]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:50:33 compute-1 multipathd[208128]: INFO:__main__:Validating config file
Dec 06 06:50:33 compute-1 multipathd[208128]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:50:33 compute-1 multipathd[208128]: INFO:__main__:Writing out command to execute
Dec 06 06:50:33 compute-1 sudo[208073]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:33 compute-1 sudo[208134]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:33 compute-1 multipathd[208128]: ++ cat /run_command
Dec 06 06:50:33 compute-1 multipathd[208128]: + CMD='/usr/sbin/multipathd -d'
Dec 06 06:50:33 compute-1 multipathd[208128]: + ARGS=
Dec 06 06:50:33 compute-1 multipathd[208128]: + sudo kolla_copy_cacerts
Dec 06 06:50:33 compute-1 podman[208135]: 2025-12-06 06:50:33.39091067 +0000 UTC m=+0.047000661 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Dec 06 06:50:33 compute-1 systemd[1]: 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2-3a92e90bf382033e.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 06:50:33 compute-1 systemd[1]: 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2-3a92e90bf382033e.service: Failed with result 'exit-code'.
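Note: the transient unit that just failed is the per-container healthcheck runner: a systemd timer fires /usr/bin/podman healthcheck run <id>, and this first run exits 1 because the container is still starting (the health_status event above reports "starting" with health_failing_streak=1, not "unhealthy"). The check reduces to an exit-code test:

    import subprocess

    def healthy(container_id: str) -> bool:
        # `podman healthcheck run` exits 0 only when the configured
        # healthcheck command succeeds inside the container.
        return subprocess.run(
            ["podman", "healthcheck", "run", container_id]
        ).returncode == 0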
Dec 06 06:50:33 compute-1 sudo[208159]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 06 06:50:33 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:50:33 compute-1 sudo[208159]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 06:50:33 compute-1 sudo[208159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 06:50:33 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:50:33 compute-1 sudo[208159]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:33 compute-1 multipathd[208128]: + [[ ! -n '' ]]
Dec 06 06:50:33 compute-1 multipathd[208128]: + . kolla_extend_start
Dec 06 06:50:33 compute-1 multipathd[208128]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 06 06:50:33 compute-1 multipathd[208128]: Running command: '/usr/sbin/multipathd -d'
Dec 06 06:50:33 compute-1 multipathd[208128]: + umask 0022
Dec 06 06:50:33 compute-1 multipathd[208128]: + exec /usr/sbin/multipathd -d
Dec 06 06:50:33 compute-1 multipathd[208128]: 4112.993210 | --------start up--------
Dec 06 06:50:33 compute-1 multipathd[208128]: 4112.993226 | read /etc/multipath.conf
Dec 06 06:50:33 compute-1 multipathd[208128]: 4112.999130 | path checkers start up
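Note: the traced tail of the entrypoint reads plainly: cat /run_command recovers the command kolla_set_configs wrote, umask 0022 is set, and the shell exec-replaces itself with multipathd so the container's main process is the daemon itself. The same steps in Python, for illustration:

    import os

    def kolla_exec() -> None:
        # Read the command written earlier, then replace this process with it.
        with open("/run_command") as f:
            cmd = f.read().strip().split()  # ['/usr/sbin/multipathd', '-d']
        os.umask(0o022)
        os.execv(cmd[0], cmd)               # never returns on success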
Dec 06 06:50:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:33.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:33.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:34 compute-1 python3.9[208319]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:50:34 compute-1 sudo[208471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euonqqzwumjcotfzhelzudftjjxsnebg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003834.297246-1725-280908025712107/AnsiballZ_command.py'
Dec 06 06:50:34 compute-1 sudo[208471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:34 compute-1 python3.9[208473]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:50:34 compute-1 sudo[208471]: pam_unix(sudo:session): session closed for user root
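Note: the ad-hoc "podman ps --filter volume=/etc/multipath.conf" asks which running containers mount that file, presumably to decide whether the .multipath_restart_required marker checked just above means this container must be bounced. The same query from Python:

    import subprocess

    def containers_using(volume: str) -> list:
        # List names of running containers that mount the given volume.
        out = subprocess.run(
            ["podman", "ps", "--filter", f"volume={volume}",
             "--format", "{{.Names}}"],
            capture_output=True, text=True, check=True,
        ).stdout
        return out.split()

    # containers_using("/etc/multipath.conf")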
Dec 06 06:50:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:35 compute-1 sudo[208636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bufbagppleozunmcxqwwcsszwcsvngfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003835.1418693-1749-184913774836119/AnsiballZ_systemd.py'
Dec 06 06:50:35 compute-1 sudo[208636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:35.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:35 compute-1 python3.9[208638]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:50:35 compute-1 systemd[1]: Stopping multipathd container...
Dec 06 06:50:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:35.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:35 compute-1 multipathd[208128]: 4115.337462 | exit (signal)
Dec 06 06:50:35 compute-1 multipathd[208128]: 4115.338050 | --------shut down-------
Dec 06 06:50:35 compute-1 systemd[1]: libpod-46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2.scope: Deactivated successfully.
Dec 06 06:50:35 compute-1 podman[208642]: 2025-12-06 06:50:35.796636753 +0000 UTC m=+0.070440053 container died 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 06:50:35 compute-1 systemd[1]: 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2-3a92e90bf382033e.timer: Deactivated successfully.
Dec 06 06:50:35 compute-1 systemd[1]: Stopped /usr/bin/podman healthcheck run 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2.
Dec 06 06:50:35 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2-userdata-shm.mount: Deactivated successfully.
Dec 06 06:50:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-e8a0cd4e76dc3f3a4a8d576a2e5f3cdeda5314e4f8edce7174226d41766780b7-merged.mount: Deactivated successfully.
Dec 06 06:50:36 compute-1 podman[208642]: 2025-12-06 06:50:36.095889284 +0000 UTC m=+0.369692584 container cleanup 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 06:50:36 compute-1 podman[208642]: multipathd
Dec 06 06:50:36 compute-1 podman[208669]: multipathd
Dec 06 06:50:36 compute-1 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 06 06:50:36 compute-1 systemd[1]: Stopped multipathd container.
Dec 06 06:50:36 compute-1 systemd[1]: Starting multipathd container...
Dec 06 06:50:36 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:50:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0cd4e76dc3f3a4a8d576a2e5f3cdeda5314e4f8edce7174226d41766780b7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0cd4e76dc3f3a4a8d576a2e5f3cdeda5314e4f8edce7174226d41766780b7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:36 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2.
Dec 06 06:50:36 compute-1 podman[208682]: 2025-12-06 06:50:36.280367936 +0000 UTC m=+0.100003252 container init 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:50:36 compute-1 multipathd[208697]: + sudo -E kolla_set_configs
Dec 06 06:50:36 compute-1 ceph-mon[81689]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:36 compute-1 sudo[208703]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 06:50:36 compute-1 sudo[208703]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 06:50:36 compute-1 sudo[208703]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 06:50:36 compute-1 podman[208682]: 2025-12-06 06:50:36.309676957 +0000 UTC m=+0.129312253 container start 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:50:36 compute-1 podman[208682]: multipathd
Dec 06 06:50:36 compute-1 systemd[1]: Started multipathd container.
Dec 06 06:50:36 compute-1 sudo[208636]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:36 compute-1 multipathd[208697]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:50:36 compute-1 multipathd[208697]: INFO:__main__:Validating config file
Dec 06 06:50:36 compute-1 multipathd[208697]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:50:36 compute-1 multipathd[208697]: INFO:__main__:Writing out command to execute
Dec 06 06:50:36 compute-1 sudo[208703]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:36 compute-1 multipathd[208697]: ++ cat /run_command
Dec 06 06:50:36 compute-1 multipathd[208697]: + CMD='/usr/sbin/multipathd -d'
Dec 06 06:50:36 compute-1 multipathd[208697]: + ARGS=
Dec 06 06:50:36 compute-1 multipathd[208697]: + sudo kolla_copy_cacerts
Dec 06 06:50:36 compute-1 sudo[208723]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 06 06:50:36 compute-1 sudo[208723]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 06:50:36 compute-1 sudo[208723]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 06:50:36 compute-1 sudo[208723]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:36 compute-1 multipathd[208697]: + [[ ! -n '' ]]
Dec 06 06:50:36 compute-1 multipathd[208697]: + . kolla_extend_start
Dec 06 06:50:36 compute-1 multipathd[208697]: Running command: '/usr/sbin/multipathd -d'
Dec 06 06:50:36 compute-1 multipathd[208697]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 06 06:50:36 compute-1 multipathd[208697]: + umask 0022
Dec 06 06:50:36 compute-1 multipathd[208697]: + exec /usr/sbin/multipathd -d
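[editor's note] The shell trace above is the standard Kolla entrypoint: kolla_set_configs loads /var/lib/kolla/config_files/config.json, validates it, copies the managed files into place (strategy COPY_ALWAYS), writes the daemon command to /run_command, and the wrapper finally execs it. A condensed Python sketch of that flow, assuming a simplified config.json schema with just a "command" string and "config_files" source/dest pairs:

    import json, os, shutil

    def kolla_start(config_path="/var/lib/kolla/config_files/config.json"):
        # Load the Kolla config file (COPY_ALWAYS: overwrite on every start).
        with open(config_path) as f:
            cfg = json.load(f)
        # Copy each managed file into place inside the container.
        for item in cfg.get("config_files", []):
            shutil.copy(item["source"], item["dest"])
        # Record the command, then exec it so it replaces this process,
        # matching the '+ exec /usr/sbin/multipathd -d' line in the trace.
        with open("/run_command", "w") as f:
            f.write(cfg["command"])
        os.execvp("/bin/sh", ["/bin/sh", "-c", cfg["command"]])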
Dec 06 06:50:36 compute-1 podman[208704]: 2025-12-06 06:50:36.379260036 +0000 UTC m=+0.058718037 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:50:36 compute-1 systemd[1]: 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2-3612229608a037a6.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 06:50:36 compute-1 systemd[1]: 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2-3612229608a037a6.service: Failed with result 'exit-code'.
Dec 06 06:50:36 compute-1 multipathd[208697]: 4115.964422 | --------start up--------
Dec 06 06:50:36 compute-1 multipathd[208697]: 4115.964441 | read /etc/multipath.conf
Dec 06 06:50:36 compute-1 multipathd[208697]: 4115.969671 | path checkers start up
Dec 06 06:50:37 compute-1 sudo[208885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkbukanggcsxblchjgqzxbvkmyfcbvpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003836.9701235-1773-161206145104793/AnsiballZ_file.py'
Dec 06 06:50:37 compute-1 sudo[208885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:37 compute-1 ceph-mon[81689]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:37 compute-1 python3.9[208887]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:37 compute-1 sudo[208885]: pam_unix(sudo:session): session closed for user root
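[editor's note] The ansible.builtin.file task above clears /etc/multipath/.multipath_restart_required; judging by its name, this marker signals that multipathd needs a restart, and with the container freshly started the flag is dropped. A one-line Python equivalent (path from the log; the marker semantics are inferred):

    import pathlib

    # Drop the restart-required marker now that multipathd is running again.
    pathlib.Path("/etc/multipath/.multipath_restart_required").unlink(missing_ok=True)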
Dec 06 06:50:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:37.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:37 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 06 06:50:37 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 06 06:50:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:37.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:38 compute-1 sudo[209039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgxyfrkydxdqstuhrsnomaarcwsmboia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003838.030839-1809-139303005746262/AnsiballZ_file.py'
Dec 06 06:50:38 compute-1 sudo[209039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:38 compute-1 python3.9[209041]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 06:50:38 compute-1 sudo[209039]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:38 compute-1 ceph-mon[81689]: pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:39 compute-1 sudo[209191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywwaeluqmukguzucghcxhkyjveetghso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003838.7892764-1833-147116363690201/AnsiballZ_modprobe.py'
Dec 06 06:50:39 compute-1 sudo[209191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:39 compute-1 python3.9[209193]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 06 06:50:39 compute-1 kernel: Key type psk registered
Dec 06 06:50:39 compute-1 sudo[209191]: pam_unix(sudo:session): session closed for user root
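[editor's note] community.general.modprobe with persistent=disabled only loads nvme-fabrics into the running kernel, which is why the play immediately follows up by writing /etc/modules-load.d/nvme-fabrics.conf (the stat/copy tasks below). A sketch of both steps together; the helper name is illustrative:

    import pathlib, subprocess

    def ensure_module(name="nvme-fabrics"):
        # Load the module now; modprobe is a no-op if it is already resident.
        subprocess.run(["modprobe", name], check=True)
        # Persist it so systemd-modules-load loads it on every boot.
        conf = pathlib.Path("/etc/modules-load.d") / f"{name}.conf"
        conf.write_text(name + "\n")
        conf.chmod(0o644)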
Dec 06 06:50:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:39.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:39.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:40 compute-1 sudo[209367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzttbuoepibwvxzscjepspbvkurrzyfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003839.7614846-1857-248061601971606/AnsiballZ_stat.py'
Dec 06 06:50:40 compute-1 sudo[209367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:40 compute-1 podman[209330]: 2025-12-06 06:50:40.071333654 +0000 UTC m=+0.077642967 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 06:50:40 compute-1 python3.9[209375]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:40 compute-1 sudo[209367]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:40 compute-1 sudo[209498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfjhhustksaakysrehblvpflbrpxjzik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003839.7614846-1857-248061601971606/AnsiballZ_copy.py'
Dec 06 06:50:40 compute-1 sudo[209498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:40 compute-1 python3.9[209500]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003839.7614846-1857-248061601971606/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:40 compute-1 sudo[209498]: pam_unix(sudo:session): session closed for user root
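[editor's note] The stat/copy pair above is Ansible's idempotent file deployment: the managed node first reports the destination's SHA-1, and the copy only rewrites /etc/modules-load.d/nvme-fabrics.conf when the checksum differs. A minimal sketch of the same compare-then-copy logic (function name is illustrative):

    import hashlib, pathlib, shutil

    def copy_if_changed(src, dest, mode=0o644):
        # Compare SHA-1 checksums and copy only when the destination
        # is missing or differs, mirroring the stat + copy task pair.
        def sha1(path):
            return hashlib.sha1(pathlib.Path(path).read_bytes()).hexdigest()
        d = pathlib.Path(dest)
        if not d.exists() or sha1(src) != sha1(dest):
            shutil.copy(src, dest)
            d.chmod(mode)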
Dec 06 06:50:40 compute-1 ceph-mon[81689]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:41 compute-1 sudo[209650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghzldakamepkhoborpwfvwewarcnkywr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003841.1389751-1905-263830283075237/AnsiballZ_lineinfile.py'
Dec 06 06:50:41 compute-1 sudo[209650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:41.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:41 compute-1 python3.9[209652]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:41 compute-1 sudo[209650]: pam_unix(sudo:session): session closed for user root
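[editor's note] lineinfile with create=True ensures /etc/modules contains the single line nvme-fabrics, creating the file if needed and leaving it untouched when the line is already present. A sketch of that idempotent check-then-append:

    import pathlib

    def ensure_line(path="/etc/modules", line="nvme-fabrics"):
        # Append only when the exact line is absent; create the file
        # if it does not exist yet (lineinfile create=True semantics).
        p = pathlib.Path(path)
        text = p.read_text() if p.exists() else ""
        if line not in text.splitlines():
            with p.open("a") as f:
                f.write(line + "\n")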
Dec 06 06:50:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:41.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:42 compute-1 sudo[209802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnthgvehtcggpwrszejitdpyuuwngzhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003841.8367555-1929-140777230252666/AnsiballZ_systemd.py'
Dec 06 06:50:42 compute-1 sudo[209802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:42 compute-1 python3.9[209804]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:50:42 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 06 06:50:42 compute-1 systemd[1]: Stopped Load Kernel Modules.
Dec 06 06:50:42 compute-1 systemd[1]: Stopping Load Kernel Modules...
Dec 06 06:50:42 compute-1 systemd[1]: Starting Load Kernel Modules...
Dec 06 06:50:42 compute-1 systemd[1]: Finished Load Kernel Modules.
Dec 06 06:50:42 compute-1 sudo[209802]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:42 compute-1 ceph-mon[81689]: pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:43 compute-1 sudo[209958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctyhujxjnrmxfggampxdyramczbuezkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003842.8001125-1953-232283599404524/AnsiballZ_dnf.py'
Dec 06 06:50:43 compute-1 sudo[209958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:43 compute-1 python3.9[209960]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
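[editor's note] The ansible.legacy.dnf task installs nvme-cli with state=present, i.e. it is a no-op when the package is already on the system; the man-db-cache-update run a few lines below is a side effect of an actual install transaction. A rough equivalent using rpm for the presence check:

    import subprocess

    def ensure_package(name="nvme-cli"):
        # rpm -q exits non-zero when the package is absent; install it then.
        if subprocess.run(["rpm", "-q", name],
                          capture_output=True).returncode != 0:
            subprocess.run(["dnf", "-y", "install", name], check=True)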
Dec 06 06:50:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:43.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:43.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:44 compute-1 ceph-mon[81689]: pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:45 compute-1 sudo[209965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:45 compute-1 sudo[209965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:45 compute-1 sudo[209965]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:45 compute-1 sudo[209990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:50:45 compute-1 sudo[209990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:45 compute-1 sudo[209990]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:45 compute-1 sudo[210015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:45 compute-1 sudo[210015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:45 compute-1 sudo[210015]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:45.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:45 compute-1 sudo[210040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:50:45 compute-1 sudo[210040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:45 compute-1 systemd[1]: Reloading.
Dec 06 06:50:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:45.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:45 compute-1 systemd-rc-local-generator[210107]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:45 compute-1 systemd-sysv-generator[210111]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:46 compute-1 systemd[1]: Reloading.
Dec 06 06:50:46 compute-1 sudo[210040]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:46 compute-1 systemd-rc-local-generator[210154]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:46 compute-1 systemd-sysv-generator[210161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:46 compute-1 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 06 06:50:46 compute-1 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 06 06:50:46 compute-1 lvm[210205]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 06:50:46 compute-1 lvm[210205]: VG ceph_vg0 finished
Dec 06 06:50:47 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 06:50:47 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 06 06:50:47 compute-1 systemd[1]: Reloading.
Dec 06 06:50:47 compute-1 systemd-rc-local-generator[210257]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:47 compute-1 systemd-sysv-generator[210260]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:47 compute-1 ceph-mon[81689]: pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:47.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:47 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 06:50:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:47.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:48 compute-1 sudo[209958]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:48 compute-1 sudo[211543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uymuxflyebwxztedkadozndhwbrrzwbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003848.47064-1978-227563941342790/AnsiballZ_systemd_service.py'
Dec 06 06:50:48 compute-1 sudo[211543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:48 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 06:50:48 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 06 06:50:48 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.695s CPU time.
Dec 06 06:50:48 compute-1 systemd[1]: run-r472b66387c224cc9b42ec9375170015c.service: Deactivated successfully.
Dec 06 06:50:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:50:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:50:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:50:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:50:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:50:48 compute-1 ceph-mon[81689]: pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:49 compute-1 python3.9[211545]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:50:49 compute-1 systemd[1]: Stopping Open-iSCSI...
Dec 06 06:50:49 compute-1 iscsid[199698]: iscsid shutting down.
Dec 06 06:50:49 compute-1 systemd[1]: iscsid.service: Deactivated successfully.
Dec 06 06:50:49 compute-1 systemd[1]: Stopped Open-iSCSI.
Dec 06 06:50:49 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 06 06:50:49 compute-1 systemd[1]: Starting Open-iSCSI...
Dec 06 06:50:49 compute-1 systemd[1]: Started Open-iSCSI.
Dec 06 06:50:49 compute-1 sudo[211543]: pam_unix(sudo:session): session closed for user root
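[editor's note] The systemd_service task above restarts iscsid: systemd stops the unit, skips the one-time iscsi.service setup because its ConditionPathExists=!/etc/iscsi/initiatorname.iscsi check fails (the initiator name already exists), and starts Open-iSCSI again. A sketch of the same restart-and-verify step:

    import subprocess

    def restart_unit(name="iscsid.service"):
        # Equivalent of ansible.builtin.systemd_service state=restarted;
        # systemd honors the unit's Condition* checks on the way back up.
        subprocess.run(["systemctl", "restart", name], check=True)
        state = subprocess.run(["systemctl", "is-active", name],
                               capture_output=True, text=True)
        return state.stdout.strip() == "active"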
Dec 06 06:50:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:49.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:49 compute-1 python3.9[211701]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:50:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:50 compute-1 sudo[211855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vemkjmhzyqonlltumlhzdrloibhkoklg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003850.5442855-2029-105776631776958/AnsiballZ_file.py'
Dec 06 06:50:50 compute-1 sudo[211855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:50 compute-1 ceph-mon[81689]: pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:50 compute-1 python3.9[211857]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:51 compute-1 sudo[211855]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:51.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:51 compute-1 sudo[212007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inlgrytwjrkqkqdzafygqgpwbwfthvms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003851.610327-2062-3247383657313/AnsiballZ_systemd_service.py'
Dec 06 06:50:51 compute-1 sudo[212007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:52 compute-1 python3.9[212009]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:50:52 compute-1 systemd[1]: Reloading.
Dec 06 06:50:52 compute-1 systemd-rc-local-generator[212035]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:52 compute-1 systemd-sysv-generator[212039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:52 compute-1 sudo[212007]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:52 compute-1 ceph-mon[81689]: pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:53 compute-1 python3.9[212193]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:50:53 compute-1 network[212210]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:50:53 compute-1 network[212211]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:50:53 compute-1 network[212212]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:50:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:53.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:53.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:55 compute-1 ceph-mon[81689]: pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:55.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:55.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:57 compute-1 sudo[212485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhafkgbkyyisxgnsjcauhkiqzalqakqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003856.8872707-2119-92523789465086/AnsiballZ_systemd_service.py'
Dec 06 06:50:57 compute-1 sudo[212485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:57 compute-1 python3.9[212487]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:57 compute-1 sudo[212485]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:57.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:50:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:57.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:50:57 compute-1 sudo[212638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgikosvrkqgdjulptdbfoznandnokpjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003857.6596718-2119-108838320820996/AnsiballZ_systemd_service.py'
Dec 06 06:50:57 compute-1 sudo[212638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:58 compute-1 ceph-mon[81689]: pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:58 compute-1 python3.9[212640]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:58 compute-1 sudo[212641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:58 compute-1 sudo[212641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:58 compute-1 sudo[212638]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:58 compute-1 sudo[212641]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:58 compute-1 sudo[212667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:50:58 compute-1 sudo[212667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:58 compute-1 sudo[212667]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:58 compute-1 sudo[212841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aapohachngqalptbtaxrqhppvgnopvue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003858.4477038-2119-139910031004761/AnsiballZ_systemd_service.py'
Dec 06 06:50:58 compute-1 sudo[212841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:58 compute-1 python3.9[212843]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:59 compute-1 sudo[212841]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:59 compute-1 ceph-mon[81689]: pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:59 compute-1 sudo[212994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdyfoapqbdwymtkvummjfkpoqnkycahx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003859.1575327-2119-247801313361112/AnsiballZ_systemd_service.py'
Dec 06 06:50:59 compute-1 sudo[212994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:59.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:59 compute-1 python3.9[212996]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:50:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:59.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:59 compute-1 sudo[212994]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:00 compute-1 podman[213097]: 2025-12-06 06:51:00.136290098 +0000 UTC m=+0.115090159 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 06 06:51:00 compute-1 sudo[213173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hifafvyqijwjkffmdtjndaqajilauqlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003859.8880067-2119-126040798262228/AnsiballZ_systemd_service.py'
Dec 06 06:51:00 compute-1 sudo[213173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:00 compute-1 python3.9[213175]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:51:00 compute-1 sudo[213173]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:00 compute-1 ceph-mon[81689]: pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:01 compute-1 sudo[213326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyjqvrewvskoqsmpjfacinhhufirwrmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003860.8028014-2119-102709481062891/AnsiballZ_systemd_service.py'
Dec 06 06:51:01 compute-1 sudo[213326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:01 compute-1 python3.9[213328]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:51:01 compute-1 sudo[213326]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:01.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:51:01.605 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:51:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:51:01.606 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:51:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:51:01.606 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:51:01 compute-1 anacron[30936]: Job `cron.monthly' started
Dec 06 06:51:01 compute-1 anacron[30936]: Job `cron.monthly' terminated
Dec 06 06:51:01 compute-1 anacron[30936]: Normal exit (3 jobs run)
Dec 06 06:51:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:51:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:51:01 compute-1 sudo[213481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agenffxdlsvlbzylqsiheyduagrmcgxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003861.5990462-2119-157655780142350/AnsiballZ_systemd_service.py'
Dec 06 06:51:01 compute-1 sudo[213481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:02 compute-1 python3.9[213483]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:51:02 compute-1 sudo[213481]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:02 compute-1 sudo[213634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxzmhcpcivjiszomiakgexsvjpnolwto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003862.3066955-2119-161988073991966/AnsiballZ_systemd_service.py'
Dec 06 06:51:02 compute-1 sudo[213634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:02 compute-1 python3.9[213636]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:51:02 compute-1 sudo[213634]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:03 compute-1 ceph-mon[81689]: pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:03.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:04 compute-1 sudo[213787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbssmumizqjcbpqlkqfwdzgqnnhvhwxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003863.819321-2296-185439632973181/AnsiballZ_file.py'
Dec 06 06:51:04 compute-1 sudo[213787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:04 compute-1 python3.9[213789]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:04 compute-1 sudo[213787]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:04 compute-1 sudo[213939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umlwwnfrgsnehpqkvhkzkonqqswoxfnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003864.3996534-2296-73384988850169/AnsiballZ_file.py'
Dec 06 06:51:04 compute-1 sudo[213939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:04 compute-1 ceph-mon[81689]: pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:04 compute-1 python3.9[213941]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:05 compute-1 sudo[213939]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:05 compute-1 sudo[214091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeqpfqpxqxvbnemgnnyajfocbuwfctnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003865.137566-2296-198472328917457/AnsiballZ_file.py'
Dec 06 06:51:05 compute-1 sudo[214091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:05.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:05 compute-1 python3.9[214093]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:05 compute-1 sudo[214091]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:06 compute-1 sudo[214243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfgrhnemgouidqrgugkcdrqhiykwefdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003865.7530563-2296-177308804939842/AnsiballZ_file.py'
Dec 06 06:51:06 compute-1 sudo[214243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:06 compute-1 python3.9[214245]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:06 compute-1 sudo[214243]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:06 compute-1 sudo[214406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vswthsccfuhcbbkdiuiafakbjdviesmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003866.414997-2296-280824302227237/AnsiballZ_file.py'
Dec 06 06:51:06 compute-1 sudo[214406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:06 compute-1 ceph-mon[81689]: pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 06:51:06 compute-1 podman[214369]: 2025-12-06 06:51:06.831565803 +0000 UTC m=+0.129037455 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec 06 06:51:06 compute-1 python3.9[214412]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:06 compute-1 sudo[214406]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:07 compute-1 sudo[214567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvvebnhhstjzuybzerrroxlerhibpynh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003867.1135683-2296-242773514316991/AnsiballZ_file.py'
Dec 06 06:51:07 compute-1 sudo[214567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:07.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:07 compute-1 python3.9[214569]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:07 compute-1 sudo[214567]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:07.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:08 compute-1 sudo[214719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crvmxlqhkmaudlynemmvypwdjcnowncr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003867.834333-2296-14818156020904/AnsiballZ_file.py'
Dec 06 06:51:08 compute-1 sudo[214719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:08 compute-1 python3.9[214721]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:08 compute-1 sudo[214719]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:08 compute-1 sudo[214871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uidxilcxdvhyikwgteauygkctvcxcplx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003868.511683-2296-160076241839706/AnsiballZ_file.py'
Dec 06 06:51:08 compute-1 sudo[214871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:08 compute-1 ceph-mon[81689]: pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Dec 06 06:51:08 compute-1 python3.9[214873]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:09 compute-1 sudo[214871]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:09 compute-1 sudo[215023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sujyswjhbflzwrppjkzpurlqmtaietft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003869.19975-2467-180432794865822/AnsiballZ_file.py'
Dec 06 06:51:09 compute-1 sudo[215023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:09.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:09 compute-1 python3.9[215025]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:09 compute-1 sudo[215023]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:09.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:10 compute-1 sudo[215175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrsgskmvzpyjsshsecspldnuvxwamoax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003869.8227375-2467-52353989017667/AnsiballZ_file.py'
Dec 06 06:51:10 compute-1 sudo[215175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:10 compute-1 podman[215177]: 2025-12-06 06:51:10.19101877 +0000 UTC m=+0.057932035 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec 06 06:51:10 compute-1 python3.9[215178]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:10 compute-1 sudo[215175]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:11 compute-1 ceph-mon[81689]: pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Dec 06 06:51:11 compute-1 sudo[215347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwxzcfloptrnqszcapaezsrihsymmxny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003870.5330951-2467-262652964204226/AnsiballZ_file.py'
Dec 06 06:51:11 compute-1 sudo[215347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:51:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:11.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:51:11 compute-1 python3.9[215349]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:11 compute-1 sudo[215347]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:51:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:11.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:51:12 compute-1 sudo[215499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hexkmuktpvvvfkwutumhqgnpvephvxke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003871.8474364-2467-4252385919528/AnsiballZ_file.py'
Dec 06 06:51:12 compute-1 sudo[215499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:12 compute-1 python3.9[215501]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:12 compute-1 sudo[215499]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:12 compute-1 sudo[215651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lthxuqujillgjryvgtiqxtbvlbsdklvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003872.4685981-2467-270513952275692/AnsiballZ_file.py'
Dec 06 06:51:12 compute-1 sudo[215651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:12 compute-1 ceph-mon[81689]: pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Dec 06 06:51:12 compute-1 python3.9[215653]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:12 compute-1 sudo[215651]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:13 compute-1 sudo[215803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jznzhtyakaoffoltzggdozjwegftvylv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003873.2257106-2467-35120925396724/AnsiballZ_file.py'
Dec 06 06:51:13 compute-1 sudo[215803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:13.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:13 compute-1 python3.9[215805]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:13 compute-1 sudo[215803]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:13.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:14 compute-1 sudo[215955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgyhhenwdiuycflnkiejpenpdbhhtkhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003873.8714335-2467-162739591370117/AnsiballZ_file.py'
Dec 06 06:51:14 compute-1 sudo[215955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:14 compute-1 python3.9[215957]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:14 compute-1 sudo[215955]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:14 compute-1 sudo[216107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emcoemoftwvxeknapfoggyujtzpfsrhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003874.4910214-2467-261732480597173/AnsiballZ_file.py'
Dec 06 06:51:14 compute-1 sudo[216107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:14 compute-1 python3.9[216109]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:14 compute-1 sudo[216107]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:15 compute-1 ceph-mon[81689]: pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 0 B/s wr, 83 op/s
Dec 06 06:51:15 compute-1 sudo[216259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weewtglvntljewxchbiivvqykcbsbktj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003875.307591-2641-13319119490922/AnsiballZ_command.py'
Dec 06 06:51:15 compute-1 sudo[216259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:15.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:15 compute-1 python3.9[216261]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:15 compute-1 sudo[216259]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:15.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.439503) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876439547, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2170, "num_deletes": 257, "total_data_size": 5534093, "memory_usage": 5609312, "flush_reason": "Manual Compaction"}
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876472691, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 3620560, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15397, "largest_seqno": 17561, "table_properties": {"data_size": 3611698, "index_size": 5548, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16929, "raw_average_key_size": 19, "raw_value_size": 3594216, "raw_average_value_size": 4088, "num_data_blocks": 248, "num_entries": 879, "num_filter_entries": 879, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003651, "oldest_key_time": 1765003651, "file_creation_time": 1765003876, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 33223 microseconds, and 6578 cpu microseconds.
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.472731) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 3620560 bytes OK
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.472747) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.475026) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.475041) EVENT_LOG_v1 {"time_micros": 1765003876475036, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.475058) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 5524646, prev total WAL file size 5524646, number of live WAL files 2.
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.476300) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323534' seq:0, type:0; will stop at (end)
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(3535KB)], [30(7407KB)]
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876476371, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 11205839, "oldest_snapshot_seqno": -1}
Dec 06 06:51:16 compute-1 python3.9[216413]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4767 keys, 10810269 bytes, temperature: kUnknown
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876563649, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 10810269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10775013, "index_size": 22218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 119737, "raw_average_key_size": 25, "raw_value_size": 10685410, "raw_average_value_size": 2241, "num_data_blocks": 921, "num_entries": 4767, "num_filter_entries": 4767, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765003876, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.563880) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 10810269 bytes
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.565277) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.3 rd, 123.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 7.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.1) write-amplify(3.0) OK, records in: 5298, records dropped: 531 output_compression: NoCompression
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.565298) EVENT_LOG_v1 {"time_micros": 1765003876565288, "job": 16, "event": "compaction_finished", "compaction_time_micros": 87345, "compaction_time_cpu_micros": 22690, "output_level": 6, "num_output_files": 1, "total_output_size": 10810269, "num_input_records": 5298, "num_output_records": 4767, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876566180, "job": 16, "event": "table_file_deletion", "file_number": 32}
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876568045, "job": 16, "event": "table_file_deletion", "file_number": 30}
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.476134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.568138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.568142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.568144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.568146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:16.568148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:17 compute-1 sudo[216563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkthntsjnvizxemegnmqxbdnpmwmojws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003876.9973888-2695-188624801089207/AnsiballZ_systemd_service.py'
Dec 06 06:51:17 compute-1 sudo[216563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:17.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:17 compute-1 python3.9[216565]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:51:17 compute-1 systemd[1]: Reloading.
Dec 06 06:51:17 compute-1 systemd-sysv-generator[216596]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:51:17 compute-1 systemd-rc-local-generator[216591]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:51:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:17.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:17 compute-1 ceph-mon[81689]: pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Dec 06 06:51:17 compute-1 sudo[216563]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:18 compute-1 sudo[216750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpcdjwkcccjsbovwqezlmxeoizqptygb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003878.274323-2719-98656424651683/AnsiballZ_command.py'
Dec 06 06:51:18 compute-1 sudo[216750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:18 compute-1 python3.9[216752]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:18 compute-1 sudo[216750]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:19 compute-1 sudo[216903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgwvygoqkmpqoyefzbmesojhcjfrdzlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003878.8975902-2719-252067972902014/AnsiballZ_command.py'
Dec 06 06:51:19 compute-1 sudo[216903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:19 compute-1 ceph-mon[81689]: pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 84 KiB/s rd, 0 B/s wr, 140 op/s
Dec 06 06:51:19 compute-1 python3.9[216905]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:19 compute-1 sudo[216903]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:19.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:19 compute-1 sudo[217056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylghkkvaxgmnjhqqimfjnjlzymhdmhsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003879.4911854-2719-255334290172400/AnsiballZ_command.py'
Dec 06 06:51:19 compute-1 sudo[217056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:19.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:19 compute-1 python3.9[217058]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:19 compute-1 sudo[217056]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:20 compute-1 sudo[217209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwimfgdaiwnlhfvkkugnyeboawwzrufr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003880.0495818-2719-177264284950401/AnsiballZ_command.py'
Dec 06 06:51:20 compute-1 sudo[217209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:20 compute-1 python3.9[217211]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:20 compute-1 sudo[217209]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:20 compute-1 sudo[217362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayuxdgmdasfsxzpfjyjzcljvqyoveelx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003880.6280713-2719-262391436952925/AnsiballZ_command.py'
Dec 06 06:51:20 compute-1 ceph-mon[81689]: pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 0 B/s wr, 117 op/s
Dec 06 06:51:20 compute-1 sudo[217362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:21 compute-1 python3.9[217364]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:21 compute-1 sudo[217362]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:21.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:21 compute-1 sudo[217515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doiokmdnabanevzxoyvhaydqszyxjhbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003881.2790425-2719-228164542338597/AnsiballZ_command.py'
Dec 06 06:51:21 compute-1 sudo[217515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:21.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:21 compute-1 python3.9[217517]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:21 compute-1 sudo[217515]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:22 compute-1 sudo[217668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbkbgxezumqsdrwfuaxfhrgqghcdryke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003881.969922-2719-62131178981369/AnsiballZ_command.py'
Dec 06 06:51:22 compute-1 sudo[217668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:22 compute-1 python3.9[217670]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:22 compute-1 sudo[217668]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:22 compute-1 sudo[217821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdjmrshlkjuxdymhsyikdjphihtozfue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003882.5559802-2719-182606069647309/AnsiballZ_command.py'
Dec 06 06:51:22 compute-1 sudo[217821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:22 compute-1 ceph-mon[81689]: pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 148 op/s
Dec 06 06:51:22 compute-1 python3.9[217823]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:23 compute-1 sudo[217821]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:23.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:23.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:24 compute-1 sudo[217974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfwbzgfblwulbbffdcbaotjhnbrbgsme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003884.3168461-2926-33469575247963/AnsiballZ_file.py'
Dec 06 06:51:24 compute-1 sudo[217974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:24 compute-1 python3.9[217976]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:24 compute-1 sudo[217974]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:25 compute-1 sudo[218126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwdsdjidarubsmulqshhfmcvsrteklog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003884.936029-2926-279966054916655/AnsiballZ_file.py'
Dec 06 06:51:25 compute-1 sudo[218126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:25 compute-1 python3.9[218128]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:25 compute-1 sudo[218126]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:25.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:25 compute-1 sudo[218278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceubhjuaqpittlsxfpysujumqtxkmeau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003885.6110632-2926-62853862889197/AnsiballZ_file.py'
Dec 06 06:51:25 compute-1 sudo[218278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:26 compute-1 python3.9[218280]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:26 compute-1 sudo[218278]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:26 compute-1 ceph-mon[81689]: pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 112 op/s
Dec 06 06:51:26 compute-1 sudo[218430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpfehoklxzjvmjslgzunbrshpeorosey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003886.411459-2992-3507811462069/AnsiballZ_file.py'
Dec 06 06:51:26 compute-1 sudo[218430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:26 compute-1 python3.9[218432]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:26 compute-1 sudo[218430]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:27 compute-1 sudo[218582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozeuropnxydycqysrtqovdtjllmrvfrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003887.017086-2992-147999837197424/AnsiballZ_file.py'
Dec 06 06:51:27 compute-1 sudo[218582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:27 compute-1 python3.9[218584]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:27 compute-1 sudo[218582]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:51:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:27.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:51:27 compute-1 ceph-mon[81689]: pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 0 B/s wr, 95 op/s
Dec 06 06:51:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:27.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:27 compute-1 sudo[218734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzdierwrqghtdwmnomohoylkypcbvulm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003887.6152735-2992-146602740136175/AnsiballZ_file.py'
Dec 06 06:51:27 compute-1 sudo[218734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:28 compute-1 python3.9[218736]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:28 compute-1 sudo[218734]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:28 compute-1 sudo[218886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nruwkejulnrwnmhdnuwwejradqmwodsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003888.2279603-2992-254497898618790/AnsiballZ_file.py'
Dec 06 06:51:28 compute-1 sudo[218886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:28 compute-1 python3.9[218888]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:28 compute-1 sudo[218886]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:29 compute-1 sudo[219038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pssmkxvvmdtogcriisutcuhbrevmtcqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003888.8419561-2992-33752705847980/AnsiballZ_file.py'
Dec 06 06:51:29 compute-1 sudo[219038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:29 compute-1 python3.9[219040]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:29 compute-1 sudo[219038]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:29 compute-1 ceph-mon[81689]: pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.602064) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889602139, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 371, "num_deletes": 251, "total_data_size": 359601, "memory_usage": 366968, "flush_reason": "Manual Compaction"}
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889605976, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 237057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17566, "largest_seqno": 17932, "table_properties": {"data_size": 234849, "index_size": 372, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5454, "raw_average_key_size": 18, "raw_value_size": 230534, "raw_average_value_size": 781, "num_data_blocks": 17, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003877, "oldest_key_time": 1765003877, "file_creation_time": 1765003889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 3935 microseconds, and 1264 cpu microseconds.
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.606011) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 237057 bytes OK
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.606026) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.607695) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.607711) EVENT_LOG_v1 {"time_micros": 1765003889607706, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.607725) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 357137, prev total WAL file size 357137, number of live WAL files 2.
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.608017) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(231KB)], [33(10MB)]
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889608049, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 11047326, "oldest_snapshot_seqno": -1}
Dec 06 06:51:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:51:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:29.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4552 keys, 8962818 bytes, temperature: kUnknown
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889678516, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 8962818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8930706, "index_size": 19650, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 115937, "raw_average_key_size": 25, "raw_value_size": 8846412, "raw_average_value_size": 1943, "num_data_blocks": 806, "num_entries": 4552, "num_filter_entries": 4552, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765003889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.678723) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8962818 bytes
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.682800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.6 rd, 127.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(84.4) write-amplify(37.8) OK, records in: 5062, records dropped: 510 output_compression: NoCompression
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.682816) EVENT_LOG_v1 {"time_micros": 1765003889682809, "job": 18, "event": "compaction_finished", "compaction_time_micros": 70532, "compaction_time_cpu_micros": 19139, "output_level": 6, "num_output_files": 1, "total_output_size": 8962818, "num_input_records": 5062, "num_output_records": 4552, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
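[annotation] The amplification figures in the "compacted to:" summary are internally consistent with the logged byte counts, assuming (an assumption that matches these numbers, not a quoted RocksDB definition) that both factors divide by the L0 input size: write-amplify = bytes written / L0 bytes, read-write-amplify = (bytes read + bytes written) / L0 bytes. The L0 size is only logged rounded ("231KB"), so this sketch recovers it from one factor and checks it against the other:

# Values copied from the compaction_started / compaction_finished events above.
bytes_read = 11_047_326      # "input_data_size" (L0 file 35 + L6 file 33)
bytes_written = 8_962_818    # "total_output_size" (new L6 file 36)
write_amplify = 37.8         # from the "compacted to:" summary line
# Assumed: both factors use the L0 input bytes as denominator.
l0_input = bytes_written / write_amplify
print(f'recovered L0 input: {l0_input:,.0f} bytes ({l0_input/1024:.1f} KiB)')
# recovered L0 input: 237,112 bytes (231.6 KiB)  -- matches the rounded "231KB"
print(f'read-write-amplify check: {(bytes_read + bytes_written) / l0_input:.1f}')
# read-write-amplify check: 84.4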
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889682957, "job": 18, "event": "table_file_deletion", "file_number": 35}
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889684850, "job": 18, "event": "table_file_deletion", "file_number": 33}
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.607963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.684935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.684940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.684942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.684943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:51:29.684945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:29.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:29 compute-1 sudo[219190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujdyhzzqittdlzyaokfglbymhnvaznzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003889.422229-2992-116679640110101/AnsiballZ_file.py'
Dec 06 06:51:29 compute-1 sudo[219190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:30 compute-1 python3.9[219192]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:30 compute-1 sudo[219190]: pam_unix(sudo:session): session closed for user root
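[annotation] From here to the end of the section, every Ansible task leaves the same three-line signature: a sudo BECOME wrapper naming the AnsiballZ payload, the module's own "Invoked with ..." line, and the pam_unix session close. A small sketch (line layout inferred from this journal, not from Ansible documentation) that reduces that to a task timeline:

import re

# Matches the module self-report lines, e.g.
# "python3.9[219192]: ansible-ansible.builtin.file Invoked with ... path=/etc/nvme ..."
MODULE = re.compile(r'python3(?:\.\d+)?\[\d+\]: ansible-(\S+) Invoked with (.*)')

def task_summary(journal_line):
    m = MODULE.search(journal_line)
    if not m:
        return None
    module, args = m.groups()
    # Pick a representative argument when one is present.
    key = re.search(r'\b(path|name|dest)=(\S+)', args)
    return module, (key.group(0) if key else args[:60])

line = ('Dec 06 06:51:30 compute-1 python3.9[219192]: '
        'ansible-ansible.builtin.file Invoked with group=zuul owner=zuul '
        'path=/etc/nvme setype=container_file_t state=directory')
print(task_summary(line))   # ('ansible.builtin.file', 'path=/etc/nvme')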
Dec 06 06:51:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:30 compute-1 sudo[219352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdgmdgfyikhtvrqhcbtwgikdpuogtvar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003890.1614366-2992-118702102950091/AnsiballZ_file.py'
Dec 06 06:51:30 compute-1 sudo[219352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:30 compute-1 podman[219316]: 2025-12-06 06:51:30.520306491 +0000 UTC m=+0.133321991 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
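[annotation] The podman health_status events (this one, and the multipathd and ovn_metadata_agent ones further down) bury the interesting fields inside one long parenthesized blob. A sketch that pulls out the container name and verdict; it assumes the image/name/health_status field order seen in these lines, which podman does not guarantee:

import re

# Order of image=, name=, health_status= assumed from the events in this log.
EVENT = re.compile(r'container health_status \S+ \(image=(?P<image>[^,]+), '
                   r'name=(?P<name>[^,]+), health_status=(?P<status>[^,]+)')

line = ('container health_status b55bf730497e... '
        '(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, '
        'name=ovn_controller, health_status=healthy, ...)')
m = EVENT.search(line)
print(m.group('name'), m.group('status'))   # ovn_controller healthy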
Dec 06 06:51:30 compute-1 python3.9[219361]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:30 compute-1 sudo[219352]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:30 compute-1 ceph-mon[81689]: pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
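[annotation] The mon's pgmap heartbeats carry a fixed-format capacity summary; here 153 MiB used of 21 GiB is well under one percent. A tiny conversion sketch (binary units assumed, as Ceph's KiB/MiB/GiB suffixes indicate):

import re

UNITS = {'KiB': 2**10, 'MiB': 2**20, 'GiB': 2**30}

line = ('pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, '
        '153 MiB used, 21 GiB / 21 GiB avail')
m = re.search(r'([\d.]+) (KiB|MiB|GiB) used, [\d.]+ (?:KiB|MiB|GiB) / '
              r'([\d.]+) (KiB|MiB|GiB) avail', line)
used = float(m.group(1)) * UNITS[m.group(2)]
total = float(m.group(3)) * UNITS[m.group(4)]
print(f'{used / total:.2%} used')   # 0.71% used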
Dec 06 06:51:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:31.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:31.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:32 compute-1 ceph-mon[81689]: pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec 06 06:51:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:51:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:33.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:51:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:33.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:34 compute-1 ceph-mon[81689]: pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 06:51:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:35.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:35.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:36 compute-1 sudo[219520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxpypyhuumfgullmjclfzyyrvfbigaiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003895.9651308-3317-70010581416023/AnsiballZ_getent.py'
Dec 06 06:51:36 compute-1 sudo[219520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:36 compute-1 python3.9[219522]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 06 06:51:36 compute-1 sudo[219520]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:36 compute-1 ceph-mon[81689]: pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:37 compute-1 podman[219600]: 2025-12-06 06:51:37.09121385 +0000 UTC m=+0.079596293 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 06 06:51:37 compute-1 sudo[219693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvpusflfajjrbnxgmvbkjvxreuiabroy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003896.8034387-3341-48473870738515/AnsiballZ_group.py'
Dec 06 06:51:37 compute-1 sudo[219693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:37 compute-1 python3.9[219695]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 06:51:37 compute-1 groupadd[219696]: group added to /etc/group: name=nova, GID=42436
Dec 06 06:51:37 compute-1 groupadd[219696]: group added to /etc/gshadow: name=nova
Dec 06 06:51:37 compute-1 groupadd[219696]: new group: name=nova, GID=42436
Dec 06 06:51:37 compute-1 sudo[219693]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:37.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:37.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:38 compute-1 ceph-mon[81689]: pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:39 compute-1 sudo[219851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baannzyqgsychiawzzihmwjefzczjooi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003898.6847644-3365-234962098629841/AnsiballZ_user.py'
Dec 06 06:51:39 compute-1 sudo[219851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:39 compute-1 python3.9[219853]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 06:51:39 compute-1 useradd[219855]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 06 06:51:39 compute-1 useradd[219855]: add 'nova' to group 'libvirt'
Dec 06 06:51:39 compute-1 useradd[219855]: add 'nova' to shadow group 'libvirt'
Dec 06 06:51:39 compute-1 sudo[219851]: pam_unix(sudo:session): session closed for user root
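[annotation] The group and user tasks above (and the shadow-utils lines they trigger) pin nova to GID/UID 42436 with libvirt as a supplementary group, so file ownership inside the nova containers lines up with the host. A sketch of the equivalent shadow-utils calls, run as root, with names and IDs taken from the log:

import subprocess

# Equivalent of the ansible.builtin.group / ansible.builtin.user tasks above.
subprocess.run(['groupadd', '-g', '42436', 'nova'], check=True)
subprocess.run(['useradd', '-u', '42436', '-g', 'nova', '-G', 'libvirt',
                '-s', '/bin/sh', '-m', '-c', 'nova user', 'nova'], check=True)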
Dec 06 06:51:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:39.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:39.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:40 compute-1 sshd-session[219887]: Accepted publickey for zuul from 192.168.122.30 port 38654 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:51:40 compute-1 systemd-logind[788]: New session 50 of user zuul.
Dec 06 06:51:40 compute-1 systemd[1]: Started Session 50 of User zuul.
Dec 06 06:51:40 compute-1 sshd-session[219887]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:51:40 compute-1 podman[219889]: 2025-12-06 06:51:40.506334296 +0000 UTC m=+0.053130653 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 06:51:40 compute-1 sshd-session[219896]: Received disconnect from 192.168.122.30 port 38654:11: disconnected by user
Dec 06 06:51:40 compute-1 sshd-session[219896]: Disconnected from user zuul 192.168.122.30 port 38654
Dec 06 06:51:40 compute-1 sshd-session[219887]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:51:40 compute-1 systemd[1]: session-50.scope: Deactivated successfully.
Dec 06 06:51:40 compute-1 systemd-logind[788]: Session 50 logged out. Waiting for processes to exit.
Dec 06 06:51:40 compute-1 systemd-logind[788]: Removed session 50.
Dec 06 06:51:40 compute-1 ceph-mon[81689]: pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:41 compute-1 python3.9[220061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:41.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:41.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:41 compute-1 python3.9[220183]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003900.8145902-3440-2670698734912/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:42 compute-1 python3.9[220333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:42 compute-1 python3.9[220409]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:43 compute-1 sshd-session[219965]: Connection reset by authenticating user root 45.140.17.124 port 49186 [preauth]
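[annotation] Unrelated to the deployment, an external host (45.140.17.124) keeps probing root over SSH; every attempt dies preauth, and the same line recurs at 06:51:45, 06:51:48, 06:51:50 and 06:51:52 below. A quick tally sketch over an exported copy of this journal (journal.txt is a hypothetical path):

from collections import Counter
import re

pat = re.compile(r'Connection reset by authenticating user '
                 r'(\S+) (\S+) port \d+ \[preauth\]')
hits = Counter()
with open('journal.txt') as fh:      # hypothetical export of this journal
    for entry in fh:
        m = pat.search(entry)
        if m:
            hits[(m.group(1), m.group(2))] += 1
print(hits.most_common(3))   # e.g. [(('root', '45.140.17.124'), 5)]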
Dec 06 06:51:43 compute-1 python3.9[220559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:51:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:43.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:51:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:43.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:43 compute-1 python3.9[220682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003903.0337224-3440-260627775912514/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:43 compute-1 ceph-mon[81689]: pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:44 compute-1 python3.9[220832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:45 compute-1 python3.9[220953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003904.0943902-3440-78682502835099/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:45 compute-1 ceph-mon[81689]: pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:45 compute-1 python3.9[221103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:45.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:45 compute-1 sshd-session[220560]: Connection reset by authenticating user root 45.140.17.124 port 45960 [preauth]
Dec 06 06:51:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:45.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:46 compute-1 python3.9[221225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003905.2041004-3440-124603583222584/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:46 compute-1 python3.9[221376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:47 compute-1 ceph-mon[81689]: pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:47 compute-1 python3.9[221497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003906.247501-3440-156264592861516/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:47.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:47.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:48 compute-1 sshd-session[221205]: Connection reset by authenticating user root 45.140.17.124 port 45970 [preauth]
Dec 06 06:51:48 compute-1 ceph-mon[81689]: pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:48 compute-1 sudo[221648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifeictqqfiiuwmyewhhqbjukjrylscbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003908.529146-3689-148649652195756/AnsiballZ_file.py'
Dec 06 06:51:48 compute-1 sudo[221648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:49 compute-1 python3.9[221651]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:49 compute-1 sudo[221648]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:49 compute-1 sudo[221801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srzozeidaoyskcmlrelzridztyshlmhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003909.292239-3713-67368781007984/AnsiballZ_copy.py'
Dec 06 06:51:49 compute-1 sudo[221801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:49.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:49 compute-1 python3.9[221803]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:49 compute-1 sudo[221801]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:49.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:50 compute-1 sudo[221953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mysakudqkjhfwbxovxxyjkrvvhvvswlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003910.0669816-3737-89401796767588/AnsiballZ_stat.py'
Dec 06 06:51:50 compute-1 sudo[221953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:50 compute-1 python3.9[221955]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:51:50 compute-1 sudo[221953]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:50 compute-1 sshd-session[221522]: Connection reset by authenticating user root 45.140.17.124 port 45982 [preauth]
Dec 06 06:51:51 compute-1 sudo[222107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltglrcwpefafxrgmtgknvgpgduwfnaoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003911.3273392-3761-100644243489519/AnsiballZ_stat.py'
Dec 06 06:51:51 compute-1 sudo[222107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:51.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:51 compute-1 python3.9[222109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:51 compute-1 sudo[222107]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:51.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:52 compute-1 sudo[222230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsqnnmjvrnqljhdhgtnqixohvkrlvfyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003911.3273392-3761-100644243489519/AnsiballZ_copy.py'
Dec 06 06:51:52 compute-1 sudo[222230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:52 compute-1 ceph-mon[81689]: pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:52 compute-1 python3.9[222232]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765003911.3273392-3761-100644243489519/.source _original_basename=.xzz98jmu follow=False checksum=2ca67e98fd992256c35f9373c0e5db109e03ff4e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 06 06:51:52 compute-1 sshd-session[221980]: Connection reset by authenticating user root 45.140.17.124 port 45998 [preauth]
Dec 06 06:51:53 compute-1 sudo[222230]: pam_unix(sudo:session): session closed for user root
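[annotation] Note the attributes=+i on the copy above: /var/lib/nova/compute_id ends up 0400 nova:nova and immutable, so even root must clear the flag before the node's compute UUID can change. Inspection and removal sketch (lsattr/chattr come from e2fsprogs):

import subprocess

path = '/var/lib/nova/compute_id'
# lsattr shows the 'i' (immutable) flag set by the attributes=+i copy above.
print(subprocess.run(['lsattr', path], capture_output=True, text=True).stdout)
# To regenerate the UUID, root must first drop the flag:
# subprocess.run(['chattr', '-i', path], check=True)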
Dec 06 06:51:53 compute-1 ceph-mon[81689]: pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:53.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:53.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:54 compute-1 python3.9[222384]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:51:54 compute-1 ceph-mon[81689]: pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:54 compute-1 python3.9[222536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:55 compute-1 python3.9[222657]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003914.370704-3840-270540792646741/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:55.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:55.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:56 compute-1 python3.9[222807]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:56 compute-1 python3.9[222928]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003915.7813268-3884-241738109158646/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:56 compute-1 ceph-mon[81689]: pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:57 compute-1 sudo[223078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uupqzdxaogmdtamscnfhcacxicdogtlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003917.2969077-3935-82328596210326/AnsiballZ_container_config_data.py'
Dec 06 06:51:57 compute-1 sudo[223078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:57.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:57 compute-1 python3.9[223080]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 06 06:51:57 compute-1 sudo[223078]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:57.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:58 compute-1 sudo[223230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlbtxspkvodrsgeexngmehbblhmoxuvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003918.2071218-3962-194897406432695/AnsiballZ_container_config_hash.py'
Dec 06 06:51:58 compute-1 sudo[223230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:58 compute-1 sudo[223233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:58 compute-1 sudo[223233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:58 compute-1 sudo[223233]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:58 compute-1 python3.9[223232]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 06:51:58 compute-1 sudo[223230]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:58 compute-1 sudo[223258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:51:58 compute-1 sudo[223258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:58 compute-1 sudo[223258]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:58 compute-1 sudo[223283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:58 compute-1 sudo[223283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:58 compute-1 sudo[223283]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:58 compute-1 sudo[223308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:51:58 compute-1 sudo[223308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:59 compute-1 ceph-mon[81689]: pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:59 compute-1 sudo[223308]: pam_unix(sudo:session): session closed for user root
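[annotation] The ceph-admin sudo burst above is the orchestrator's periodic host check-in: /bin/true probes sudo access, "which python3" locates an interpreter, and a content-addressed copy of cephadm runs gather-facts. Run by hand it looks roughly like this (a sketch assuming cephadm is on PATH and that gather-facts emits JSON host facts, as upstream cephadm does; key names may vary by release):

import json
import subprocess

facts = json.loads(subprocess.run(
    ['cephadm', 'gather-facts'],
    capture_output=True, text=True, check=True).stdout)
# Key names assumed from upstream cephadm output; adjust per release.
print(facts.get('hostname'), facts.get('operating_system'))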
Dec 06 06:51:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:59.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:51:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:59.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:00 compute-1 sudo[223512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szizkkcakmtrolkrcixkaxfdkkpumghx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003919.9155476-3992-118600770648786/AnsiballZ_edpm_container_manage.py'
Dec 06 06:52:00 compute-1 sudo[223512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:52:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:52:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:52:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:52:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:52:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
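[annotation] The mgr.compute-0.sfzyix dispatches above are the cephadm mgr module reconciling cluster state; each cmd=[...] maps onto an ordinary CLI call. For example (a sketch assuming a reachable cluster and an admin keyring):

import json
import subprocess

# Same minimal-conf request the mgr just dispatched to the mon:
subprocess.run(['ceph', 'config', 'generate-minimal-conf'], check=True)
# Same destroyed-OSD query as the "osd tree" dispatch above:
out = subprocess.run(['ceph', 'osd', 'tree', 'destroyed', '--format', 'json'],
                     capture_output=True, text=True, check=True).stdout
print(json.loads(out).get('nodes', []))   # 'nodes' key assumed from ceph's JSON output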
Dec 06 06:52:00 compute-1 python3[223514]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 06:52:01 compute-1 podman[223542]: 2025-12-06 06:52:01.113154289 +0000 UTC m=+0.091361687 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:52:01 compute-1 ceph-mon[81689]: pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:52:01.606 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:52:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:52:01.607 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:52:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:52:01.607 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:52:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:01.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:01.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:03 compute-1 ceph-mon[81689]: pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:03.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:52:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:03.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:52:05 compute-1 ceph-mon[81689]: pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:05.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:05.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:07.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:07.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:09.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:09.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:11.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:11.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:52:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:12 compute-1 podman[223623]: 2025-12-06 06:52:12.810499052 +0000 UTC m=+1.791449426 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:52:12 compute-1 podman[223611]: 2025-12-06 06:52:12.838250445 +0000 UTC m=+4.819474556 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 06:52:12 compute-1 podman[223529]: 2025-12-06 06:52:12.858213269 +0000 UTC m=+12.357479601 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 06:52:12 compute-1 podman[223670]: 2025-12-06 06:52:12.992341751 +0000 UTC m=+0.045711835 container create cd6b7295ffa931db71fe4b1c1f41586c18d7b1e52764ad4c4190ed3fe45795a5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init)
Dec 06 06:52:12 compute-1 podman[223670]: 2025-12-06 06:52:12.970102036 +0000 UTC m=+0.023472140 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 06:52:12 compute-1 python3[223514]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 06 06:52:13 compute-1 sudo[223512]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:13 compute-1 ceph-mon[81689]: pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:13 compute-1 sudo[223729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:13 compute-1 sudo[223729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:13 compute-1 sudo[223729]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:13 compute-1 sudo[223754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:52:13 compute-1 sudo[223754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:13 compute-1 sudo[223754]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:13.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:13.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:14 compute-1 sudo[223904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scpgowmfdpyknenlbkonpiasdbziegxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003933.7639337-4016-277922863609695/AnsiballZ_stat.py'
Dec 06 06:52:14 compute-1 sudo[223904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:14 compute-1 ceph-mon[81689]: pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:14 compute-1 ceph-mon[81689]: pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:14 compute-1 ceph-mon[81689]: pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:52:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:52:14 compute-1 python3.9[223906]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:52:14 compute-1 sudo[223904]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:15 compute-1 sudo[224058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdpwntxlcqkvpjcqcxjzucdrnjbqgmmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003935.0222554-4052-190490926052246/AnsiballZ_container_config_data.py'
Dec 06 06:52:15 compute-1 sudo[224058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:15.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:15.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:17 compute-1 python3.9[224060]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 06 06:52:17 compute-1 sudo[224058]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:17.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:17 compute-1 sudo[224210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlqjycoxsntuzvevgjnerikpuvcnhsqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003937.538699-4079-232795895463866/AnsiballZ_container_config_hash.py'
Dec 06 06:52:17 compute-1 sudo[224210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:17.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:18 compute-1 python3.9[224212]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 06:52:18 compute-1 sudo[224210]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:18 compute-1 ceph-mon[81689]: pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:19 compute-1 sudo[224362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzghybvuybuutbsctzseakyynqsblfmm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003939.0213237-4109-226017264999887/AnsiballZ_edpm_container_manage.py'
Dec 06 06:52:19 compute-1 sudo[224362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:19 compute-1 python3[224364]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 06:52:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:19.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:19 compute-1 podman[224401]: 2025-12-06 06:52:19.737729729 +0000 UTC m=+0.045250482 container create 2b860662de544627ad15ca1343594d6573a186563bf8b0a420768b246d1eacf1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:52:19 compute-1 podman[224401]: 2025-12-06 06:52:19.714344163 +0000 UTC m=+0.021864956 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 06:52:19 compute-1 python3[224364]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 06 06:52:19 compute-1 sudo[224362]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:19.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:52:20 compute-1 ceph-mon[81689]: pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:20 compute-1 ceph-mon[81689]: pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:21 compute-1 sudo[224589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnpmkfumhkyewxsphhesjrbbsahvolxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003940.7877522-4133-264402905347637/AnsiballZ_stat.py'
Dec 06 06:52:21 compute-1 sudo[224589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:21 compute-1 python3.9[224591]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:52:21 compute-1 sudo[224589]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:21 compute-1 ceph-mon[81689]: pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:21.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:52:21 compute-1 sudo[224743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbxpmcazumnwhsflaxkwgokwxddocqse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003941.6049602-4160-59146587871337/AnsiballZ_file.py'
Dec 06 06:52:21 compute-1 sudo[224743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:22 compute-1 python3.9[224745]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:52:22 compute-1 sudo[224743]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:22 compute-1 sudo[224894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywnfbcvpvkczcbxfdichtruwcmylacuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003942.1520498-4160-264329754944383/AnsiballZ_copy.py'
Dec 06 06:52:22 compute-1 sudo[224894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:22 compute-1 python3.9[224896]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003942.1520498-4160-264329754944383/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:52:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:22 compute-1 sudo[224894]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:23 compute-1 sudo[224970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmiirjagsixqvurxaoebytgkfejlhawd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003942.1520498-4160-264329754944383/AnsiballZ_systemd.py'
Dec 06 06:52:23 compute-1 sudo[224970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:23 compute-1 ceph-mon[81689]: pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:23 compute-1 python3.9[224972]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:52:23 compute-1 systemd[1]: Reloading.
Dec 06 06:52:23 compute-1 systemd-rc-local-generator[225000]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:52:23 compute-1 systemd-sysv-generator[225003]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:52:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:23.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:23 compute-1 sudo[224970]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:23.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:23 compute-1 sudo[225081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgjekygjpwtccdbtbrqvmqxudlkslmub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003942.1520498-4160-264329754944383/AnsiballZ_systemd.py'
Dec 06 06:52:23 compute-1 sudo[225081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:24 compute-1 python3.9[225083]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:52:24 compute-1 systemd[1]: Reloading.
Dec 06 06:52:24 compute-1 systemd-rc-local-generator[225114]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:52:24 compute-1 systemd-sysv-generator[225119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:52:24 compute-1 systemd[1]: Starting nova_compute container...
Dec 06 06:52:24 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:52:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3aac8d77f104f13a79060c34e76000984532e2cd5b9ae17b8776995dcdc5f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3aac8d77f104f13a79060c34e76000984532e2cd5b9ae17b8776995dcdc5f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3aac8d77f104f13a79060c34e76000984532e2cd5b9ae17b8776995dcdc5f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3aac8d77f104f13a79060c34e76000984532e2cd5b9ae17b8776995dcdc5f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3aac8d77f104f13a79060c34e76000984532e2cd5b9ae17b8776995dcdc5f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:24 compute-1 podman[225123]: 2025-12-06 06:52:24.817814542 +0000 UTC m=+0.108929918 container init 2b860662de544627ad15ca1343594d6573a186563bf8b0a420768b246d1eacf1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.vendor=CentOS)
Dec 06 06:52:24 compute-1 podman[225123]: 2025-12-06 06:52:24.826007931 +0000 UTC m=+0.117123257 container start 2b860662de544627ad15ca1343594d6573a186563bf8b0a420768b246d1eacf1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:52:24 compute-1 podman[225123]: nova_compute
Dec 06 06:52:24 compute-1 nova_compute[225138]: + sudo -E kolla_set_configs
Dec 06 06:52:24 compute-1 ceph-mon[81689]: pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:24 compute-1 systemd[1]: Started nova_compute container.
Dec 06 06:52:24 compute-1 sudo[225081]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Validating config file
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying service configuration files
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Deleting /etc/ceph
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Creating directory /etc/ceph
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /etc/ceph
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Writing out command to execute
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:24 compute-1 nova_compute[225138]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 06:52:24 compute-1 nova_compute[225138]: ++ cat /run_command
Dec 06 06:52:24 compute-1 nova_compute[225138]: + CMD=nova-compute
Dec 06 06:52:24 compute-1 nova_compute[225138]: + ARGS=
Dec 06 06:52:24 compute-1 nova_compute[225138]: + sudo kolla_copy_cacerts
Dec 06 06:52:24 compute-1 nova_compute[225138]: Running command: 'nova-compute'
Dec 06 06:52:24 compute-1 nova_compute[225138]: + [[ ! -n '' ]]
Dec 06 06:52:24 compute-1 nova_compute[225138]: + . kolla_extend_start
Dec 06 06:52:24 compute-1 nova_compute[225138]: + echo 'Running command: '\''nova-compute'\'''
Dec 06 06:52:24 compute-1 nova_compute[225138]: + umask 0022
Dec 06 06:52:24 compute-1 nova_compute[225138]: + exec nova-compute
Dec 06 06:52:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:25.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:25.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:25 compute-1 python3.9[225300]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:52:26 compute-1 ceph-mon[81689]: pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:26 compute-1 python3.9[225450]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:52:27 compute-1 nova_compute[225138]: 2025-12-06 06:52:27.077 225142 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:27 compute-1 nova_compute[225138]: 2025-12-06 06:52:27.077 225142 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:27 compute-1 nova_compute[225138]: 2025-12-06 06:52:27.078 225142 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:27 compute-1 nova_compute[225138]: 2025-12-06 06:52:27.078 225142 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 06 06:52:27 compute-1 nova_compute[225138]: 2025-12-06 06:52:27.249 225142 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:52:27 compute-1 nova_compute[225138]: 2025-12-06 06:52:27.262 225142 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:52:27 compute-1 nova_compute[225138]: 2025-12-06 06:52:27.263 225142 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 06:52:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:27.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:27 compute-1 python3.9[225604]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:52:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:27.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.019 225142 INFO nova.virt.driver [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.135 225142 INFO nova.compute.provider_config [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.156 225142 DEBUG oslo_concurrency.lockutils [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.157 225142 DEBUG oslo_concurrency.lockutils [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.157 225142 DEBUG oslo_concurrency.lockutils [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.157 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.158 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.158 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.158 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.158 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.158 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.159 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.159 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.159 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.159 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.159 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.159 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.159 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.160 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.160 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.160 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.160 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.160 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.160 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.160 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.161 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.161 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.161 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.161 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.161 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.161 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.162 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.162 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.162 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.162 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.162 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.162 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.162 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.163 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.163 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.163 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.163 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.163 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.163 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.164 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.164 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.164 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.164 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.164 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.164 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.165 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.165 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.165 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.165 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.165 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.165 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.166 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.166 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.166 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.166 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.166 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.166 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.167 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.167 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.167 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.167 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.167 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.167 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.167 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.168 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.168 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.168 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.168 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.168 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.168 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.168 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.168 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.169 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.169 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.169 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.169 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.169 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.169 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.170 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.170 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.170 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.170 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.170 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.170 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.171 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.171 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.171 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.171 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.171 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.171 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.171 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.172 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.172 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.172 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.172 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.172 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.172 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.172 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.173 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.173 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.173 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.173 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.173 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.173 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.173 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.174 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.174 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.174 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.174 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.174 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.174 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.174 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.175 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.175 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.175 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.175 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.175 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.175 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.175 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.176 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.176 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.176 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.176 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.176 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.176 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.176 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.177 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.177 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.177 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.177 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.177 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.177 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.177 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.178 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.178 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.178 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.178 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.178 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.179 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.179 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.179 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.179 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.179 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.179 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.179 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.180 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.180 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.180 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.180 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.180 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.180 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.181 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.181 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.181 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.181 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.181 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.181 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.181 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.182 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.182 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.182 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.182 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.182 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.182 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.182 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.183 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.183 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.183 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.183 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.183 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.183 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.184 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.184 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.184 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.184 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.184 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.184 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.185 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.185 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.185 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.185 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.185 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.185 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.186 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.186 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.186 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.186 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.186 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.186 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.186 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.187 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.187 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.187 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.187 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.188 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.188 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.188 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.188 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.188 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.188 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.189 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.189 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.189 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.189 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.189 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.189 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.190 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.190 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.190 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.190 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.190 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.190 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.190 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.191 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.191 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.191 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.191 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.191 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.191 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.192 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.192 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.192 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.192 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.192 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.192 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.193 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.193 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.193 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.193 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.193 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.193 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.194 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.194 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.194 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.194 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.194 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.194 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.195 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.195 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.195 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.195 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.195 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.195 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.196 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.196 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.196 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.196 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.196 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.196 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.196 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.197 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.197 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.197 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.197 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.197 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.197 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.197 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.198 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.198 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.198 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.198 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
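
The [cyborg] block that closes here is a template nova repeats for every service it calls: the session options (cafile, certfile, keyfile, insecure, timeout, collect_timing, split_loggers) and the adapter options (service_type, service_name, valid_interfaces, region_name, endpoint_override, the retry knobs, min/max_version) recur under [glance], [ironic] and [keystone] below because they are registered through keystoneauth1's loading helpers rather than spelled out per service. A minimal sketch, assuming keystoneauth1 and oslo.config are installed; the 'cyborg' group name is the only Nova-specific piece, and the defaults noted in comments are Nova's, not keystoneauth1's:

    from oslo_config import cfg
    from keystoneauth1 import loading

    conf = cfg.ConfigOpts()
    # These two calls register the same option names dumped above.
    loading.register_session_conf_options(conf, 'cyborg')
    loading.register_adapter_conf_options(conf, 'cyborg')
    conf(args=[])  # parse no CLI args or config files: defaults only

    print(conf.cyborg.service_type)      # None here; nova defaults it to 'accelerator'
    print(conf.cyborg.valid_interfaces)  # None here; nova defaults it to internal+public
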
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.198 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.198 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.199 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.199 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.199 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.199 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.200 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.200 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.200 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.200 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.201 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.201 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.201 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.201 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.202 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.202 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.202 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.202 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.202 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.203 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
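
These [database] values feed oslo.db, which hands them to SQLAlchemy's connection pool: max_pool_size plus max_overflow caps each worker at 55 concurrent connections, and connection_recycle_time retires idle connections hourly, typically to stay under MySQL's wait_timeout. A rough sketch of the resulting engine, assuming only SQLAlchemy is installed; the sqlite URL and file name are stand-ins because database.connection is masked above:

    from sqlalchemy import create_engine
    from sqlalchemy.pool import QueuePool

    engine = create_engine(
        "sqlite:///nova-demo.db",  # stand-in; the real URL is redacted as ****
        poolclass=QueuePool,       # the queue pool oslo.db uses for real databases
        pool_size=5,               # database.max_pool_size
        max_overflow=50,           # database.max_overflow -> hard cap of 55 connections
        pool_recycle=3600,         # database.connection_recycle_time (seconds)
    )
    print(engine.pool.status())    # shows pool size and checked-out counts
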
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.203 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.203 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.203 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.203 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.204 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.204 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.204 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.204 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.204 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.204 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.205 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.205 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.205 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.205 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.205 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.206 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.206 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.206 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.206 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.206 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
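
[api_database] mirrors [database] option for option but points at Nova's separate API schema (cell and instance mappings, flavors). Both blocks are parsed at startup, as seen here, but nova-compute is expected to reach either schema only indirectly through nova-conductor RPC, so on a compute node these pool settings typically never open a connection.
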
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.206 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.206 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.207 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.207 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
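
ephemeral_storage_encryption is disabled here, but the cipher/key_size pair is worth decoding: aes-xts-plain64 with key_size = 512 amounts to AES-256, because XTS consumes a double-length key and splits it into a data key and a tweak key. A small plain-Python illustration, with os.urandom standing in for the key manager:

    import os

    key = os.urandom(512 // 8)                 # ephemeral_storage_encryption.key_size is in bits
    data_key, tweak_key = key[:32], key[32:]   # XTS halves: two independent 256-bit AES keys
    assert len(data_key) == len(tweak_key) == 32
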
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.207 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.207 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.207 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.207 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.208 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.208 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.208 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.208 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.208 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.208 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.209 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.209 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.209 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.209 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.209 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.210 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.210 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.210 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.210 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.210 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.210 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.211 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.211 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.211 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.211 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.211 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.211 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.211 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.212 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.212 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
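
The [glance] block pins image traffic to the internal endpoint of regionOne and allows three retries; the rest rides on the same keystoneauth plumbing as the other service groups. Building the equivalent adapter by hand looks roughly like this (sketch only: the session carries no auth plugin, so a real request would also need nova's [glance] credentials or an endpoint_override):

    from keystoneauth1 import adapter, session

    sess = session.Session()  # anonymous; nova attaches its service credentials here
    image_api = adapter.Adapter(
        session=sess,
        service_type='image',      # glance.service_type
        interface='internal',      # glance.valid_interfaces
        region_name='regionOne',   # glance.region_name
    )
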
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.212 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.212 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.212 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.212 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.212 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.213 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.213 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.213 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.213 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.213 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.214 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.214 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.214 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.214 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.214 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.214 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.215 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.215 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.215 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
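
Every hyperv.* value above is a stock default. The group is registered unconditionally by nova's config machinery and so appears in this dump even though the host is a KVM guest evidently running the libvirt driver (see the libvirt.* block below); none of it should be consulted at runtime here.
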
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.215 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.216 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.216 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.216 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.216 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.216 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.216 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.217 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
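
The image-cache periods read more naturally in wall-clock units: the cache manager sweeps every 40 minutes, unused base images survive a day, and resized images an hour:

    from datetime import timedelta

    print(timedelta(seconds=2400))    # image_cache.manager_interval              -> 0:40:00
    print(timedelta(seconds=86400))   # remove_unused_original_minimum_age_seconds -> 1 day, 0:00:00
    print(timedelta(seconds=3600))    # remove_unused_resized_minimum_age_seconds  -> 1:00:00
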
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.217 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.217 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.217 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.217 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.218 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.218 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.218 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.218 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.218 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.219 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.219 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.219 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.219 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.219 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.220 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.220 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.220 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.220 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.220 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.221 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.221 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.221 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.221 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.221 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.221 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.221 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.222 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.222 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
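
key_manager.fixed_key prints as **** for the same reason database.connection did: oslo.config masks any option registered with secret=True when log_opt_values() writes this dump. A self-contained reproduction; the option name matches nova's, and 'deadbeef' is a dummy value:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.StrOpt('fixed_key', secret=True)], group='key_manager')
    conf(args=[])
    conf.set_override('fixed_key', 'deadbeef', group='key_manager')

    # Emits "key_manager.fixed_key = ****" among the usual banner lines.
    conf.log_opt_values(logging.getLogger('demo'), logging.DEBUG)
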
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.222 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.222 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.222 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.222 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.223 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.223 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.223 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.223 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.223 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.223 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.224 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.224 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.224 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.224 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.224 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.224 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.224 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.225 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.225 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.225 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.225 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.225 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.225 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.225 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.226 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.226 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.226 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.226 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.227 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.227 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.227 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.227 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.227 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.228 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.228 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.228 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.228 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.228 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.229 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.229 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.229 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.229 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
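
Castellan registers the [barbican] and [vault] groups side by side, but only the backend named by key_manager.backend is instantiated; with barbican selected above, the vault.* defaults here (localhost vault_url, kv_version 2) are inert. The barbican values also look mostly stock: with barbican_endpoint unset, the client should fall back to catalog lookup using the internal endpoint type.
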
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.229 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.230 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.230 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.230 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.230 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.230 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.230 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.231 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.231 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.231 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.231 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.231 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.232 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.232 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.232 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.232 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.232 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.233 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.233 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.233 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.233 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.234 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.234 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.234 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.234 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.234 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.235 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.235 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.235 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.235 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.235 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.235 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.236 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.236 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.236 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.236 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.236 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.237 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.237 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.237 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.237 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.237 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.238 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.238 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.238 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.238 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.238 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.239 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.239 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.239 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.239 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.240 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.240 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.240 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.240 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.240 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.241 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.241 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.241 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.241 225142 WARNING oslo_config.cfg [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 06 06:52:28 compute-1 nova_compute[225138]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 06 06:52:28 compute-1 nova_compute[225138]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Dec 06 06:52:28 compute-1 nova_compute[225138]: and ``live_migration_inbound_addr`` respectively.
Dec 06 06:52:28 compute-1 nova_compute[225138]: ).  Its value may be silently ignored in the future.
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.242 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
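[editor annotation] The warning above flags that this deployment still sets the deprecated libvirt.live_migration_uri (= qemu+tls://%s/system, where %s is the placeholder Nova fills with the migration target host). Per the deprecation text, the same effect is obtained with live_migration_scheme plus live_migration_inbound_addr. A minimal sketch of the replacement stanza in nova.conf, assuming the operator wants to keep the qemu+tls transport shown above; the scheme value "tls" and the inbound address are illustrative assumptions, not values taken from this host's config:

    [libvirt]
    # Assumed equivalent of the deprecated:
    #   live_migration_uri = qemu+tls://%s/system
    # "tls" is assumed to map to the qemu+tls:// URI scheme.
    live_migration_scheme = tls
    # Hypothetical example address; Nova substitutes this for the
    # old %s placeholder when building the target URI.
    live_migration_inbound_addr = compute-1.internal.example

With both options set, live_migration_uri can be dropped, which silences the oslo_config deprecation warning logged above.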
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.242 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.242 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.242 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.243 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.243 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.243 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.243 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.243 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.244 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.244 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.244 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.244 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.245 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.245 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.245 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.245 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.245 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.246 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.rbd_secret_uuid        = 40a1bae4-cf76-5610-8dab-c75116dfe0bb log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.246 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.246 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.246 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.246 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.246 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.247 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.247 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.247 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.247 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.247 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.248 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.248 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.248 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.248 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.248 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.249 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.249 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.249 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.249 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.249 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.249 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.250 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.250 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.250 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.250 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.250 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.250 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.250 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.251 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.251 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.251 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.251 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.251 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.251 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.251 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.252 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.252 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.252 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.252 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.252 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.252 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.252 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.253 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.253 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.253 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.253 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.253 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.253 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.254 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.254 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.254 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.254 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.254 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.254 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.254 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.255 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.255 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.255 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.255 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.255 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.255 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.255 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.256 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.256 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.256 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.256 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.256 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.257 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.257 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.257 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.257 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.257 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.257 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.258 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.258 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.258 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.258 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.258 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.259 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.259 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.259 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.259 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.259 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.260 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.260 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.260 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.260 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.260 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.260 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.260 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.261 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.261 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.261 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.261 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.261 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.262 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.262 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.262 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.262 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.262 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.262 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.263 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.263 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.263 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.263 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.263 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.264 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.264 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.264 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.264 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.264 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.265 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.265 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.265 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.265 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.265 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.266 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.266 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.266 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.266 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.266 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.267 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.267 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.267 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.267 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.267 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.268 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.268 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.268 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.268 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.268 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.269 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.269 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.269 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.269 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.269 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.270 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.270 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.270 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.270 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.270 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.271 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.271 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.271 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.271 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.271 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.272 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.272 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.272 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.272 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.272 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.272 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.273 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.273 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.273 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.273 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.273 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.274 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.274 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.274 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.274 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.275 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.275 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.275 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.275 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.275 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.275 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.276 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.276 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.276 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.276 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.276 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.277 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.277 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.277 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.277 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.277 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.277 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.278 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.278 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.278 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.278 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.278 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.278 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.279 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.279 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.279 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.279 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.279 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.279 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.280 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.280 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.280 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.280 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.280 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.280 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.281 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.281 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.281 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.281 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.281 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.281 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.281 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.282 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.282 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.282 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.282 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.282 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.282 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.282 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.283 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.283 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.283 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.283 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.283 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.283 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.283 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.284 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.284 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.284 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.284 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.284 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.284 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.284 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.285 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.285 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.285 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.285 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.285 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.286 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.286 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.286 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.286 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.287 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.287 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.287 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.287 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.287 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.288 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.288 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.288 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.288 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.288 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.289 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.289 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.289 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.289 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.289 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.290 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.290 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.290 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.290 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.290 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.290 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.291 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.291 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.291 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.291 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.291 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.291 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.291 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.292 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.292 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.292 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.292 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.292 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.293 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.293 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.293 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.293 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.293 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.294 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.294 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.294 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.294 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.294 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.294 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.294 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.295 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.295 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.295 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.295 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.295 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.295 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.295 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.296 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.296 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.296 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.296 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.296 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.296 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.297 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.297 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.297 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.297 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.297 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.297 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.297 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.298 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.298 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.298 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.298 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.298 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.298 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.299 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.299 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.299 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.299 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.299 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.299 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.299 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.300 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.300 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.300 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.300 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.300 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.300 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.301 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.301 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.301 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.301 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.301 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.302 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.302 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.302 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.302 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.302 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.303 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.303 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.303 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.303 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.303 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.303 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.303 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.304 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.304 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.304 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.304 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.304 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.304 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.304 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.305 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.305 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.305 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.305 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.305 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.305 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.305 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.306 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.306 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.306 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.306 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.306 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.307 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.307 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.307 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.307 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.307 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.307 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.308 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.308 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.308 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.308 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.308 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.308 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.308 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.309 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.309 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.309 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.309 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.309 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.309 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.309 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.310 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.310 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.310 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.310 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.310 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.310 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.311 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.311 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.311 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.311 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.311 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.311 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.311 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.312 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.312 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.312 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.312 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.312 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.312 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.312 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.313 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.313 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.313 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.313 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.313 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.313 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.313 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.314 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.314 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.314 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.314 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.314 225142 DEBUG oslo_service.service [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.315 225142 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.498 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.498 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.499 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.499 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 06 06:52:28 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Dec 06 06:52:28 compute-1 systemd[1]: Started libvirt QEMU daemon.
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.568 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4d1ac00e80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.573 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4d1ac00e80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.574 225142 INFO nova.virt.libvirt.driver [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Connection event '1' reason 'None'
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.602 225142 WARNING nova.virt.libvirt.driver [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Dec 06 06:52:28 compute-1 nova_compute[225138]: 2025-12-06 06:52:28.603 225142 DEBUG nova.virt.libvirt.volume.mount [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 06 06:52:28 compute-1 sudo[225805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smssaaoehbvkseasdasltawmfivddlkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003948.2385964-4340-93500099565488/AnsiballZ_podman_container.py'
Dec 06 06:52:28 compute-1 sudo[225805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:28 compute-1 ceph-mon[81689]: pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:28 compute-1 python3.9[225808]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 06 06:52:29 compute-1 sudo[225805]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:29 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.421 225142 INFO nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Libvirt host capabilities <capabilities>
Dec 06 06:52:29 compute-1 nova_compute[225138]: 
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <host>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <uuid>effe0b74-d2bb-436f-b621-5e7c5f665fb5</uuid>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <cpu>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <arch>x86_64</arch>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model>EPYC-Rome-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <vendor>AMD</vendor>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <microcode version='16777317'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <signature family='23' model='49' stepping='0'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='x2apic'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='tsc-deadline'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='osxsave'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='hypervisor'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='tsc_adjust'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='spec-ctrl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='stibp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='arch-capabilities'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='cmp_legacy'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='topoext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='virt-ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='lbrv'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='tsc-scale'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='vmcb-clean'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='pause-filter'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='pfthreshold'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='svme-addr-chk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='rdctl-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='skip-l1dfl-vmentry'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='mds-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature name='pschange-mc-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <pages unit='KiB' size='4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <pages unit='KiB' size='2048'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <pages unit='KiB' size='1048576'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </cpu>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <power_management>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <suspend_mem/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </power_management>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <iommu support='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <migration_features>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <live/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <uri_transports>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <uri_transport>tcp</uri_transport>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <uri_transport>rdma</uri_transport>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </uri_transports>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </migration_features>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <topology>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <cells num='1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <cell id='0'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:           <memory unit='KiB'>7864320</memory>
Dec 06 06:52:29 compute-1 nova_compute[225138]:           <pages unit='KiB' size='4'>1966080</pages>
Dec 06 06:52:29 compute-1 nova_compute[225138]:           <pages unit='KiB' size='2048'>0</pages>
Dec 06 06:52:29 compute-1 nova_compute[225138]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 06 06:52:29 compute-1 nova_compute[225138]:           <distances>
Dec 06 06:52:29 compute-1 nova_compute[225138]:             <sibling id='0' value='10'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:           </distances>
Dec 06 06:52:29 compute-1 nova_compute[225138]:           <cpus num='8'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:           </cpus>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         </cell>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </cells>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </topology>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <cache>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </cache>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <secmodel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model>selinux</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <doi>0</doi>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </secmodel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <secmodel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model>dac</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <doi>0</doi>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </secmodel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </host>
Dec 06 06:52:29 compute-1 nova_compute[225138]: 
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <guest>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <os_type>hvm</os_type>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <arch name='i686'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <wordsize>32</wordsize>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <domain type='qemu'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <domain type='kvm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </arch>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <features>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <pae/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <nonpae/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <acpi default='on' toggle='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <apic default='on' toggle='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <cpuselection/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <deviceboot/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <disksnapshot default='on' toggle='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <externalSnapshot/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </features>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </guest>
Dec 06 06:52:29 compute-1 nova_compute[225138]: 
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <guest>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <os_type>hvm</os_type>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <arch name='x86_64'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <wordsize>64</wordsize>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <domain type='qemu'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <domain type='kvm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </arch>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <features>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <acpi default='on' toggle='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <apic default='on' toggle='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <cpuselection/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <deviceboot/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <disksnapshot default='on' toggle='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <externalSnapshot/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </features>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </guest>
Dec 06 06:52:29 compute-1 nova_compute[225138]: 
Dec 06 06:52:29 compute-1 nova_compute[225138]: </capabilities>
Dec 06 06:52:29 compute-1 nova_compute[225138]: 
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.429 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.450 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 06 06:52:29 compute-1 nova_compute[225138]: <domainCapabilities>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <domain>kvm</domain>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <arch>i686</arch>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <vcpu max='4096'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <iothreads supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <os supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <enum name='firmware'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <loader supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>rom</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pflash</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='readonly'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>yes</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>no</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='secure'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>no</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </loader>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </os>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <cpu>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>on</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>off</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='maximumMigratable'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>on</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>off</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <vendor>AMD</vendor>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='succor'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='custom' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-128'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-256'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-512'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='KnightsMill'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SierraForest'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='athlon'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='athlon-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='core2duo'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='core2duo-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='coreduo'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='coreduo-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='n270'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='n270-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='phenom'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='phenom-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </cpu>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <memoryBacking supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <enum name='sourceType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>file</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>anonymous</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>memfd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </memoryBacking>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <devices>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <disk supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='diskDevice'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>disk</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>cdrom</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>floppy</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>lun</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='bus'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>fdc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>scsi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>sata</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </disk>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <graphics supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vnc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>egl-headless</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dbus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </graphics>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <video supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='modelType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vga</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>cirrus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>none</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>bochs</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>ramfb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </video>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <hostdev supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='mode'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>subsystem</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='startupPolicy'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>default</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>mandatory</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>requisite</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>optional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='subsysType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pci</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>scsi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='capsType'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='pciBackend'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </hostdev>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <rng supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>random</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>egd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>builtin</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </rng>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <filesystem supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='driverType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>path</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>handle</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtiofs</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </filesystem>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <tpm supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tpm-tis</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tpm-crb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>emulator</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>external</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendVersion'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>2.0</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </tpm>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <redirdev supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='bus'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </redirdev>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <channel supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pty</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>unix</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </channel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <crypto supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>qemu</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>builtin</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </crypto>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <interface supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>default</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>passt</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </interface>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <panic supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>isa</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>hyperv</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </panic>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <console supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>null</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pty</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dev</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>file</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pipe</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>stdio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>udp</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tcp</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>unix</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>qemu-vdagent</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dbus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </console>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </devices>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <features>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <gic supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <genid supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <backup supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <async-teardown supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <ps2 supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <sev supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <sgx supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <hyperv supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='features'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>relaxed</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vapic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>spinlocks</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vpindex</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>runtime</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>synic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>stimer</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>reset</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vendor_id</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>frequencies</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>reenlightenment</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tlbflush</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>ipi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>avic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>emsr_bitmap</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>xmm_input</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <defaults>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </defaults>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </hyperv>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <launchSecurity supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='sectype'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tdx</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </launchSecurity>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </features>
Dec 06 06:52:29 compute-1 nova_compute[225138]: </domainCapabilities>
Dec 06 06:52:29 compute-1 nova_compute[225138]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
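[Editor's note, not part of the captured log: the `<domainCapabilities>` document ending above is what nova's `_get_domain_capabilities` (host.py:1037) fetches from libvirt for each emulator/arch/machine-type combination; the next message repeats the query for arch=i686, machine_type=pc. Below is a minimal sketch of retrieving and inspecting the same document with the libvirt-python bindings that nova wraps. The connection URI is an assumption for a local qemu-kvm host; the emulator path, arch, and machine type are copied from the dump itself. In the custom CPU mode that follows, each model carries usable='yes'/'no', and unusable models are paired with a <blockers> element naming the host CPU features they require, which the loop below extracts.]

    # Sketch only: assumes libvirt-python is installed and a local libvirtd.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")        # assumed URI for this qemu-kvm host
    xml = conn.getDomainCapabilities(
        emulatorbin="/usr/libexec/qemu-kvm",     # <path> from the dump above
        arch="i686",                             # matches the query logged below
        machine="pc",
        virttype="kvm",
    )
    root = ET.fromstring(xml)

    # List custom-mode CPU models marked usable='no' together with the
    # <blockers> features that make them unusable on this host.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        if model.get("usable") == "no":
            blockers = root.findall(
                "./cpu/mode[@name='custom']/blockers[@model='%s']/feature"
                % model.text
            )
            print(model.text, [f.get("name") for f in blockers])
    conn.close()

[The same XML can also be obtained interactively with `virsh domcapabilities`; the log resumes verbatim below.]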
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.458 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 06 06:52:29 compute-1 nova_compute[225138]: <domainCapabilities>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <domain>kvm</domain>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <arch>i686</arch>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <vcpu max='240'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <iothreads supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <os supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <enum name='firmware'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <loader supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>rom</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pflash</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='readonly'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>yes</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>no</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='secure'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>no</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </loader>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </os>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <cpu>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>on</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>off</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='maximumMigratable'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>on</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>off</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <vendor>AMD</vendor>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='succor'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='custom' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-128'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-256'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-512'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='KnightsMill'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SierraForest'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='athlon'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='athlon-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='core2duo'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='core2duo-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='coreduo'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='coreduo-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='n270'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='n270-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='phenom'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='phenom-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </cpu>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <memoryBacking supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <enum name='sourceType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>file</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>anonymous</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>memfd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </memoryBacking>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <devices>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <disk supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='diskDevice'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>disk</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>cdrom</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>floppy</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>lun</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='bus'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>ide</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>fdc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>scsi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>sata</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </disk>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <graphics supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vnc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>egl-headless</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dbus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </graphics>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <video supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='modelType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vga</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>cirrus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>none</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>bochs</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>ramfb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </video>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <hostdev supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='mode'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>subsystem</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='startupPolicy'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>default</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>mandatory</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>requisite</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>optional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='subsysType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pci</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>scsi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='capsType'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='pciBackend'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </hostdev>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <rng supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>random</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>egd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>builtin</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </rng>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <filesystem supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='driverType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>path</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>handle</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtiofs</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </filesystem>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <tpm supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tpm-tis</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tpm-crb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>emulator</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>external</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendVersion'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>2.0</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </tpm>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <redirdev supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='bus'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </redirdev>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <channel supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pty</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>unix</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </channel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <crypto supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>qemu</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>builtin</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </crypto>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <interface supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>default</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>passt</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </interface>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <panic supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>isa</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>hyperv</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </panic>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <console supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>null</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pty</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dev</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>file</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pipe</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>stdio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>udp</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tcp</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>unix</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>qemu-vdagent</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dbus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </console>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </devices>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <features>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <gic supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <genid supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <backup supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <async-teardown supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <ps2 supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <sev supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <sgx supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <hyperv supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='features'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>relaxed</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vapic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>spinlocks</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vpindex</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>runtime</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>synic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>stimer</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>reset</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vendor_id</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>frequencies</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>reenlightenment</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tlbflush</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>ipi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>avic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>emsr_bitmap</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>xmm_input</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <defaults>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </defaults>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </hyperv>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <launchSecurity supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='sectype'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tdx</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </launchSecurity>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </features>
Dec 06 06:52:29 compute-1 nova_compute[225138]: </domainCapabilities>
Dec 06 06:52:29 compute-1 nova_compute[225138]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.490 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.494 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 06 06:52:29 compute-1 nova_compute[225138]: <domainCapabilities>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <domain>kvm</domain>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <arch>x86_64</arch>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <vcpu max='240'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <iothreads supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <os supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <enum name='firmware'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <loader supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>rom</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pflash</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='readonly'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>yes</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>no</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='secure'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>no</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </loader>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </os>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <cpu>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>on</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>off</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='maximumMigratable'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>on</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>off</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <vendor>AMD</vendor>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='succor'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='custom' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-128'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-256'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-512'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='KnightsMill'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SierraForest'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='athlon'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='athlon-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='core2duo'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='core2duo-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='coreduo'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='coreduo-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='n270'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='n270-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='phenom'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='phenom-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </cpu>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <memoryBacking supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <enum name='sourceType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>file</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>anonymous</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>memfd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </memoryBacking>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <devices>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <disk supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='diskDevice'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>disk</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>cdrom</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>floppy</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>lun</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='bus'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>ide</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>fdc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>scsi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>sata</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </disk>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <graphics supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vnc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>egl-headless</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dbus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </graphics>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <video supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='modelType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vga</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>cirrus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>none</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>bochs</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>ramfb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </video>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <hostdev supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='mode'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>subsystem</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='startupPolicy'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>default</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>mandatory</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>requisite</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>optional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='subsysType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pci</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>scsi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='capsType'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='pciBackend'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </hostdev>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <rng supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>random</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>egd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>builtin</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </rng>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <filesystem supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='driverType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>path</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>handle</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtiofs</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </filesystem>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <tpm supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tpm-tis</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tpm-crb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>emulator</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>external</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendVersion'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>2.0</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </tpm>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <redirdev supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='bus'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </redirdev>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <channel supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pty</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>unix</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </channel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <crypto supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>qemu</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>builtin</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </crypto>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <interface supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>default</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>passt</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </interface>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <panic supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>isa</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>hyperv</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </panic>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <console supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>null</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pty</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dev</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>file</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pipe</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>stdio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>udp</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tcp</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>unix</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>qemu-vdagent</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dbus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </console>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </devices>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <features>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <gic supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <genid supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <backup supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <async-teardown supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <ps2 supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <sev supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <sgx supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <hyperv supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='features'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>relaxed</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vapic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>spinlocks</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vpindex</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>runtime</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>synic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>stimer</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>reset</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vendor_id</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>frequencies</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>reenlightenment</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tlbflush</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>ipi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>avic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>emsr_bitmap</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>xmm_input</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <defaults>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </defaults>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </hyperv>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <launchSecurity supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='sectype'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tdx</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </launchSecurity>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </features>
Dec 06 06:52:29 compute-1 nova_compute[225138]: </domainCapabilities>
Dec 06 06:52:29 compute-1 nova_compute[225138]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.568 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 06 06:52:29 compute-1 nova_compute[225138]: <domainCapabilities>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <domain>kvm</domain>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <arch>x86_64</arch>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <vcpu max='4096'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <iothreads supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <os supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <enum name='firmware'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>efi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <loader supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>rom</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pflash</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='readonly'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>yes</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>no</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='secure'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>yes</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>no</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </loader>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </os>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <cpu>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>on</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>off</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='maximumMigratable'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>on</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>off</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <vendor>AMD</vendor>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='succor'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <mode name='custom' supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Denverton-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 sudo[225993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuuedhtenvpxackwyeknatcaxybltbxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003949.3546522-4364-25494225922893/AnsiballZ_systemd.py'
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='EPYC-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 sudo[225993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-128'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-256'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx10-512'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Haswell-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='KnightsMill'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xop'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='la57'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SierraForest'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='hle'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='pku'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='erms'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='athlon'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='athlon-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='core2duo'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='core2duo-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='coreduo'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='coreduo-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:29.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='n270'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='n270-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='ss'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='phenom'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <blockers model='phenom-v1'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </blockers>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </mode>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </cpu>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <memoryBacking supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <enum name='sourceType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>file</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>anonymous</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <value>memfd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </memoryBacking>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <devices>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <disk supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='diskDevice'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>disk</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>cdrom</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>floppy</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>lun</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='bus'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>fdc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>scsi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>sata</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </disk>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <graphics supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vnc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>egl-headless</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dbus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </graphics>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <video supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='modelType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vga</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>cirrus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>none</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>bochs</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>ramfb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </video>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <hostdev supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='mode'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>subsystem</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='startupPolicy'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>default</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>mandatory</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>requisite</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>optional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='subsysType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pci</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>scsi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='capsType'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='pciBackend'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </hostdev>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <rng supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>random</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>egd</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>builtin</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </rng>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <filesystem supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='driverType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>path</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>handle</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>virtiofs</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </filesystem>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <tpm supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tpm-tis</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tpm-crb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>emulator</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>external</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendVersion'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>2.0</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </tpm>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <redirdev supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='bus'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>usb</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </redirdev>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <channel supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pty</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>unix</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </channel>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <crypto supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>qemu</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>builtin</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </crypto>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <interface supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='backendType'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>default</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>passt</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </interface>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <panic supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='model'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>isa</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>hyperv</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </panic>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <console supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='type'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>null</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vc</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pty</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dev</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>file</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>pipe</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>stdio</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>udp</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tcp</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>unix</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>qemu-vdagent</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>dbus</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </console>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </devices>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <features>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <gic supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <genid supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <backup supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <async-teardown supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <ps2 supported='yes'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <sev supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <sgx supported='no'/>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <hyperv supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='features'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>relaxed</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vapic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>spinlocks</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vpindex</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>runtime</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>synic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>stimer</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>reset</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>vendor_id</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>frequencies</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>reenlightenment</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tlbflush</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>ipi</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>avic</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>emsr_bitmap</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>xmm_input</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <defaults>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </defaults>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </hyperv>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     <launchSecurity supported='yes'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       <enum name='sectype'>
Dec 06 06:52:29 compute-1 nova_compute[225138]:         <value>tdx</value>
Dec 06 06:52:29 compute-1 nova_compute[225138]:       </enum>
Dec 06 06:52:29 compute-1 nova_compute[225138]:     </launchSecurity>
Dec 06 06:52:29 compute-1 nova_compute[225138]:   </features>
Dec 06 06:52:29 compute-1 nova_compute[225138]: </domainCapabilities>
Dec 06 06:52:29 compute-1 nova_compute[225138]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.641 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.641 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.641 225142 DEBUG nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.641 225142 INFO nova.virt.libvirt.host [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Secure Boot support detected
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.643 225142 INFO nova.virt.libvirt.driver [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.644 225142 INFO nova.virt.libvirt.driver [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.653 225142 DEBUG nova.virt.libvirt.driver [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] cpu compare xml: <cpu match="exact">
Dec 06 06:52:29 compute-1 nova_compute[225138]:   <model>Nehalem</model>
Dec 06 06:52:29 compute-1 nova_compute[225138]: </cpu>
Dec 06 06:52:29 compute-1 nova_compute[225138]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.656 225142 DEBUG nova.virt.libvirt.driver [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.710 225142 INFO nova.virt.node [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Determined node identity 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from /var/lib/nova/compute_id
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.729 225142 WARNING nova.compute.manager [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Compute nodes ['466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.788 225142 INFO nova.compute.manager [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.885 225142 WARNING nova.compute.manager [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.885 225142 DEBUG oslo_concurrency.lockutils [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.886 225142 DEBUG oslo_concurrency.lockutils [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.886 225142 DEBUG oslo_concurrency.lockutils [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.886 225142 DEBUG nova.compute.resource_tracker [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:52:29 compute-1 nova_compute[225138]: 2025-12-06 06:52:29.886 225142 DEBUG oslo_concurrency.processutils [None req-24810bba-085e-4c4d-b3e5-62d15e08769c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:52:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:52:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:29.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:52:30 compute-1 python3.9[225995]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:52:30 compute-1 systemd[1]: Stopping nova_compute container...
Dec 06 06:52:30 compute-1 nova_compute[225138]: 2025-12-06 06:52:30.218 225142 DEBUG oslo_concurrency.lockutils [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:52:30 compute-1 nova_compute[225138]: 2025-12-06 06:52:30.219 225142 DEBUG oslo_concurrency.lockutils [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:52:30 compute-1 nova_compute[225138]: 2025-12-06 06:52:30.219 225142 DEBUG oslo_concurrency.lockutils [None req-ec965160-f6f9-45d4-9ccd-47a3cdf87973 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:52:30 compute-1 virtqemud[225710]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 06 06:52:30 compute-1 virtqemud[225710]: hostname: compute-1
Dec 06 06:52:30 compute-1 virtqemud[225710]: End of file while reading data: Input/output error
Dec 06 06:52:30 compute-1 systemd[1]: libpod-2b860662de544627ad15ca1343594d6573a186563bf8b0a420768b246d1eacf1.scope: Deactivated successfully.
Dec 06 06:52:30 compute-1 systemd[1]: libpod-2b860662de544627ad15ca1343594d6573a186563bf8b0a420768b246d1eacf1.scope: Consumed 3.676s CPU time.
Dec 06 06:52:30 compute-1 podman[226019]: 2025-12-06 06:52:30.743783688 +0000 UTC m=+0.656770109 container died 2b860662de544627ad15ca1343594d6573a186563bf8b0a420768b246d1eacf1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 06:52:30 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b860662de544627ad15ca1343594d6573a186563bf8b0a420768b246d1eacf1-userdata-shm.mount: Deactivated successfully.
Dec 06 06:52:30 compute-1 systemd[1]: var-lib-containers-storage-overlay-10e3aac8d77f104f13a79060c34e76000984532e2cd5b9ae17b8776995dcdc5f-merged.mount: Deactivated successfully.
Dec 06 06:52:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:31.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:31 compute-1 podman[226019]: 2025-12-06 06:52:31.895145421 +0000 UTC m=+1.808131862 container cleanup 2b860662de544627ad15ca1343594d6573a186563bf8b0a420768b246d1eacf1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 06:52:31 compute-1 podman[226019]: nova_compute
Dec 06 06:52:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:31.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:31 compute-1 podman[226053]: nova_compute
Dec 06 06:52:31 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 06 06:52:31 compute-1 systemd[1]: Stopped nova_compute container.
Dec 06 06:52:31 compute-1 systemd[1]: Starting nova_compute container...
Dec 06 06:52:32 compute-1 podman[226052]: 2025-12-06 06:52:32.101336492 +0000 UTC m=+0.169071057 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:52:32 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:52:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3aac8d77f104f13a79060c34e76000984532e2cd5b9ae17b8776995dcdc5f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3aac8d77f104f13a79060c34e76000984532e2cd5b9ae17b8776995dcdc5f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3aac8d77f104f13a79060c34e76000984532e2cd5b9ae17b8776995dcdc5f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3aac8d77f104f13a79060c34e76000984532e2cd5b9ae17b8776995dcdc5f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e3aac8d77f104f13a79060c34e76000984532e2cd5b9ae17b8776995dcdc5f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:32 compute-1 podman[226079]: 2025-12-06 06:52:32.147713025 +0000 UTC m=+0.153041930 container init 2b860662de544627ad15ca1343594d6573a186563bf8b0a420768b246d1eacf1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:52:32 compute-1 podman[226079]: 2025-12-06 06:52:32.153278434 +0000 UTC m=+0.158607309 container start 2b860662de544627ad15ca1343594d6573a186563bf8b0a420768b246d1eacf1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:52:32 compute-1 podman[226079]: nova_compute
Dec 06 06:52:32 compute-1 nova_compute[226101]: + sudo -E kolla_set_configs
Dec 06 06:52:32 compute-1 systemd[1]: Started nova_compute container.
Dec 06 06:52:32 compute-1 sudo[225993]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Validating config file
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying service configuration files
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Deleting /etc/ceph
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Creating directory /etc/ceph
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /etc/ceph
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Writing out command to execute
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:32 compute-1 nova_compute[226101]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 06:52:32 compute-1 nova_compute[226101]: ++ cat /run_command
Dec 06 06:52:32 compute-1 nova_compute[226101]: + CMD=nova-compute
Dec 06 06:52:32 compute-1 nova_compute[226101]: + ARGS=
Dec 06 06:52:32 compute-1 nova_compute[226101]: + sudo kolla_copy_cacerts
Dec 06 06:52:32 compute-1 nova_compute[226101]: + [[ ! -n '' ]]
Dec 06 06:52:32 compute-1 nova_compute[226101]: + . kolla_extend_start
Dec 06 06:52:32 compute-1 nova_compute[226101]: Running command: 'nova-compute'
Dec 06 06:52:32 compute-1 nova_compute[226101]: + echo 'Running command: '\''nova-compute'\'''
Dec 06 06:52:32 compute-1 nova_compute[226101]: + umask 0022
Dec 06 06:52:32 compute-1 nova_compute[226101]: + exec nova-compute
Dec 06 06:52:32 compute-1 ceph-mon[81689]: pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:33 compute-1 sudo[226267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwcbwryhfiqvzuzaajdmcnvbakqgndeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003953.0521104-4391-2791858216210/AnsiballZ_podman_container.py'
Dec 06 06:52:33 compute-1 sudo[226267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:33 compute-1 ceph-mon[81689]: pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:33 compute-1 python3.9[226269]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 06 06:52:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:33.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:33 compute-1 systemd[1]: Started libpod-conmon-cd6b7295ffa931db71fe4b1c1f41586c18d7b1e52764ad4c4190ed3fe45795a5.scope.
Dec 06 06:52:33 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:52:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e157d6649175c9f7a8142ca0688ab8f5e541a807322f0e4762edabac264896a/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e157d6649175c9f7a8142ca0688ab8f5e541a807322f0e4762edabac264896a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e157d6649175c9f7a8142ca0688ab8f5e541a807322f0e4762edabac264896a/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:33 compute-1 podman[226295]: 2025-12-06 06:52:33.829998266 +0000 UTC m=+0.137296828 container init cd6b7295ffa931db71fe4b1c1f41586c18d7b1e52764ad4c4190ed3fe45795a5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 06 06:52:33 compute-1 podman[226295]: 2025-12-06 06:52:33.838061992 +0000 UTC m=+0.145360524 container start cd6b7295ffa931db71fe4b1c1f41586c18d7b1e52764ad4c4190ed3fe45795a5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:52:33 compute-1 python3.9[226269]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Applying nova statedir ownership
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 06 06:52:33 compute-1 nova_compute_init[226317]: INFO:nova_statedir:Nova statedir ownership complete
Dec 06 06:52:33 compute-1 systemd[1]: libpod-cd6b7295ffa931db71fe4b1c1f41586c18d7b1e52764ad4c4190ed3fe45795a5.scope: Deactivated successfully.
Dec 06 06:52:33 compute-1 podman[226318]: 2025-12-06 06:52:33.895076879 +0000 UTC m=+0.027908718 container died cd6b7295ffa931db71fe4b1c1f41586c18d7b1e52764ad4c4190ed3fe45795a5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 06 06:52:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:33.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:33 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd6b7295ffa931db71fe4b1c1f41586c18d7b1e52764ad4c4190ed3fe45795a5-userdata-shm.mount: Deactivated successfully.
Dec 06 06:52:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-2e157d6649175c9f7a8142ca0688ab8f5e541a807322f0e4762edabac264896a-merged.mount: Deactivated successfully.
Dec 06 06:52:33 compute-1 podman[226331]: 2025-12-06 06:52:33.960202363 +0000 UTC m=+0.055395525 container cleanup cd6b7295ffa931db71fe4b1c1f41586c18d7b1e52764ad4c4190ed3fe45795a5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm)
Dec 06 06:52:33 compute-1 systemd[1]: libpod-conmon-cd6b7295ffa931db71fe4b1c1f41586c18d7b1e52764ad4c4190ed3fe45795a5.scope: Deactivated successfully.
Dec 06 06:52:33 compute-1 sudo[226267]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:34 compute-1 nova_compute[226101]: 2025-12-06 06:52:34.419 226109 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:34 compute-1 nova_compute[226101]: 2025-12-06 06:52:34.419 226109 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:34 compute-1 nova_compute[226101]: 2025-12-06 06:52:34.420 226109 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:34 compute-1 nova_compute[226101]: 2025-12-06 06:52:34.420 226109 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 06 06:52:34 compute-1 nova_compute[226101]: 2025-12-06 06:52:34.564 226109 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:52:34 compute-1 nova_compute[226101]: 2025-12-06 06:52:34.577 226109 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:52:34 compute-1 nova_compute[226101]: 2025-12-06 06:52:34.577 226109 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 06:52:34 compute-1 sshd-session[197306]: Connection closed by 192.168.122.30 port 55764
Dec 06 06:52:34 compute-1 sshd-session[197303]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:52:34 compute-1 systemd[1]: session-49.scope: Deactivated successfully.
Dec 06 06:52:34 compute-1 systemd[1]: session-49.scope: Consumed 2min 15.705s CPU time.
Dec 06 06:52:34 compute-1 systemd-logind[788]: Session 49 logged out. Waiting for processes to exit.
Dec 06 06:52:34 compute-1 systemd-logind[788]: Removed session 49.
Dec 06 06:52:34 compute-1 ceph-mon[81689]: pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.039 226109 INFO nova.virt.driver [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.147 226109 INFO nova.compute.provider_config [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.157 226109 DEBUG oslo_concurrency.lockutils [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.158 226109 DEBUG oslo_concurrency.lockutils [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.158 226109 DEBUG oslo_concurrency.lockutils [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
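The Acquiring/Acquired/Releasing triple above is the standard oslo.concurrency trace for an in-process lock; "singleton_lock" is taken by library code during service startup, not by nova itself. Any lockutils-guarded critical section produces the same three DEBUG lines:

```python
# Produces the same Acquiring/Acquired/Releasing DEBUG triple as the
# lines above (lockutils.py:312/315/333) for an in-process lock.
from oslo_concurrency import lockutils

with lockutils.lock('singleton_lock'):
    pass  # critical section guarded by the named lock
```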
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.159 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.159 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.159 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.159 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.159 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.160 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
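Everything from the asterisk banner above through the rest of this startup block is a single oslo.config dump: once the service is up, oslo.service calls log_opt_values(), which prints the banner, the configuration sources (command line args, config files), then one DEBUG line per registered option, with secret options masked. Invoked standalone it looks like:

```python
# Sketch: the banner, the "Configuration options gathered from:" lines
# and the per-option lines that follow all come from this one call.
import logging

from oslo_config import cfg

LOG = logging.getLogger(__name__)
cfg.CONF.log_opt_values(LOG, logging.DEBUG)
```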
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.160 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.160 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.160 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.160 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.160 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.161 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.161 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.161 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.161 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.161 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.162 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.162 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.162 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.162 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.162 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.163 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.163 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.163 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.163 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.163 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.164 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.164 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.164 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.164 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.164 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.165 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.165 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.165 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.165 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.165 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.166 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.166 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.166 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.166 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.167 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.167 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.167 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.167 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.167 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.168 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.168 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.168 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.168 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.168 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.169 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.169 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.169 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.169 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.169 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.170 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.170 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.170 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.170 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.170 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.171 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.171 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.171 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.171 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.171 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.171 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.172 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.172 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.172 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.172 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.172 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.173 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.173 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.173 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.173 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.173 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.174 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.174 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.174 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.174 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.174 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.175 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.175 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.175 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.175 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.175 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.176 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.176 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.176 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.176 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.176 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.177 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.177 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.177 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.177 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.177 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.177 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.178 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.178 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.178 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.178 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.178 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.179 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.179 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.179 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.179 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.179 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.180 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.180 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.180 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.180 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.180 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.181 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.181 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.181 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.181 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.181 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.181 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.182 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.182 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.182 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.182 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.182 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.183 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.183 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.183 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.183 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.183 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.184 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.184 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.184 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.184 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.184 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.185 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.185 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.185 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.185 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.185 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.186 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.186 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.186 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.186 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.186 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.187 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.187 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.187 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.187 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.187 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.188 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.188 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.188 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.188 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.188 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.189 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.189 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.189 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.189 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.189 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.190 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.190 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.190 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.190 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.190 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.191 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.191 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.191 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.191 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.191 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.192 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.192 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.192 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.192 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.192 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.193 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.193 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.193 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.193 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.193 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
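The cache.* group that follows is consumed through oslo.cache (nova wraps it in its own cache helpers). A minimal sketch of wiring these options into a dogpile cache region, using the real oslo_cache API names:

```python
# Minimal sketch, assuming the [cache] section dumped below;
# configure()/create_region()/configure_cache_region() are the
# actual oslo_cache.core entry points.
from oslo_cache import core as cache
from oslo_config import cfg

conf = cfg.CONF
cache.configure(conf)                        # registers cache.* options
region = cache.create_region()
cache.configure_cache_region(conf, region)   # applies backend, TTL, etc.
```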
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.194 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.194 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.194 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.194 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.194 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.195 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.195 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.195 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.195 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.195 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.196 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.196 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.196 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.196 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.196 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.196 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.197 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.197 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.197 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.197 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.197 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.198 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.198 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.198 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.198 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.198 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.199 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.199 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.199 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.199 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.199 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.200 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.200 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.200 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.200 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.200 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.200 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.201 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.201 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.201 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.201 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.201 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.202 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.202 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.202 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.202 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.202 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.203 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.203 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.203 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.203 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.204 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.204 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.204 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.204 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.204 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.204 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.205 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.205 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.205 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.205 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.205 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.206 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.206 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.206 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.206 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.206 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.207 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.207 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.207 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.207 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.207 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.208 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.208 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.208 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.208 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.208 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.209 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.209 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.209 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.209 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.209 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.210 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.210 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.210 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.210 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.210 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.211 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.211 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.211 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.211 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.211 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.212 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.212 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.212 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.213 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.213 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.213 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.213 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.213 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.214 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.214 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.214 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.214 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.214 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.215 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.215 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.215 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.215 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.215 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.216 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.216 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.216 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.216 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.217 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.217 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.217 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.217 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.217 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.217 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.218 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.218 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.218 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.218 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.219 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.219 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.219 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.219 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.219 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.219 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.220 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.220 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.220 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.220 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.220 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.221 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.221 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.221 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.221 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.221 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.222 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.222 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.222 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.222 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.222 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.223 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.223 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.223 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.223 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.223 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.223 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.224 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.224 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.224 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.224 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.224 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.225 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.225 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.225 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.225 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.225 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.226 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.226 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.226 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.226 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.226 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.227 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.227 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.227 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.227 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.227 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.228 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.228 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.228 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.228 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.228 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.228 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.229 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.229 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.229 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.230 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.230 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.230 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.230 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.230 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.231 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.231 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.231 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.231 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.231 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.231 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.232 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.232 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.232 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.232 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.232 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.233 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.233 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.233 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.233 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.233 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.234 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.234 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.234 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.234 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.235 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.235 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.235 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.235 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.235 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.236 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.236 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.236 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.236 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.236 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.237 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.237 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.237 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.237 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.237 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.238 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.238 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.238 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.238 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.238 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.239 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.239 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.239 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.239 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.239 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.240 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.240 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.240 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.240 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.240 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.241 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.241 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.241 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.241 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.241 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.242 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.242 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.242 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.242 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.242 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.242 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.243 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.243 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.243 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.243 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.243 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.244 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.244 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.244 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.244 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.244 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.245 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.245 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.245 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.245 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.245 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.246 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.246 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.246 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.246 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.246 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.246 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.247 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.247 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.247 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.247 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.247 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.248 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.248 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.248 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.248 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.249 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.249 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.249 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.249 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.250 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.250 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.250 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.250 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.250 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.250 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.251 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.251 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.251 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.251 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.251 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.252 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.252 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.252 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.252 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.252 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.253 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.253 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.253 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.253 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.253 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.254 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.254 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.254 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.254 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.254 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.254 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.254 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.255 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.255 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.255 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.255 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.255 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.255 226109 WARNING oslo_config.cfg [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 06 06:52:35 compute-1 nova_compute[226101]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 06 06:52:35 compute-1 nova_compute[226101]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Dec 06 06:52:35 compute-1 nova_compute[226101]: and ``live_migration_inbound_addr`` respectively.
Dec 06 06:52:35 compute-1 nova_compute[226101]: ).  Its value may be silently ignored in the future.
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.256 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
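Per the deprecation notice above, the same effect is obtained by splitting the logged URI into its two replacement options, which this host already exposes (libvirt.live_migration_scheme and libvirt.live_migration_inbound_addr, both currently None). A minimal nova.conf sketch under that assumption; the inbound address is a hypothetical placeholder, not a value taken from this log:

    [libvirt]
    # "tls" reproduces the qemu+tls:// scheme of the deprecated URI
    live_migration_scheme = tls
    # per-host address for inbound migration traffic; hypothetical placeholder
    live_migration_inbound_addr = <migration-address>

Nova then assembles qemu+tls://<migration-address>/system from these two values itself, matching the qemu+tls://%s/system template logged above, so the deprecated live_migration_uri entry can be dropped.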
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.256 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.256 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.256 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.256 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.256 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.257 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.257 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.257 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.257 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.257 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.257 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.258 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.258 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.258 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.258 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.258 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.258 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.258 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.rbd_secret_uuid        = 40a1bae4-cf76-5610-8dab-c75116dfe0bb log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.259 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.259 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.259 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.259 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.259 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.259 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.259 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.260 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.260 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.260 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.260 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.260 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.260 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.261 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.261 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.261 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.261 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.261 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.261 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.262 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.262 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.262 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.262 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.262 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.262 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.262 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.263 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.263 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.263 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.263 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.263 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.263 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.263 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.264 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.264 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.264 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.264 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.264 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.264 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.264 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.265 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.265 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.265 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.265 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.265 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.265 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.265 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.266 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.266 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.266 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.266 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.266 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.266 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.266 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.267 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.267 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.267 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.267 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.267 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.267 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.267 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.267 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.268 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.268 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.268 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.268 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.268 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.268 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.268 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.269 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.269 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.269 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.269 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.269 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.269 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.269 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.270 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.270 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.270 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.270 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.270 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.270 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.270 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.271 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.271 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.271 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.271 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.271 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.271 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.271 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.272 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.272 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.272 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.272 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.272 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.272 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.272 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.273 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.273 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.273 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.273 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.273 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.273 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.273 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.274 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.274 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.274 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.274 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.274 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.274 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.274 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.275 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.275 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.275 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.275 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.275 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.275 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.276 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.276 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.276 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.276 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.276 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.276 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.277 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.277 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.277 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.277 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.277 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.278 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.278 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.278 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.278 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.278 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.278 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.278 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.279 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.279 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.279 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.279 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.279 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.279 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.279 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.280 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.280 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.280 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.280 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.280 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.280 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.281 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.281 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.281 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.281 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.281 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.281 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.281 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.282 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.282 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.282 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.282 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.282 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.282 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.283 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.283 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.283 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.283 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.283 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.284 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.284 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.284 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.284 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.284 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.284 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.284 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.285 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.285 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.285 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.285 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.285 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.285 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.286 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.286 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.286 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.286 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.286 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.286 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.286 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.287 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.287 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.287 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.287 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.287 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.287 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.287 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.288 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.288 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.288 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.288 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.288 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.289 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.289 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.289 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.289 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.289 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.289 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.290 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.290 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.290 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.290 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.290 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.290 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.291 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.291 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.291 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.291 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.291 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.291 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.291 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.292 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.292 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.292 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.292 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.292 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.292 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.293 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.293 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.293 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.293 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.293 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.293 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.294 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.294 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.294 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.294 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.294 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.294 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.295 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.295 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.295 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.295 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.295 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.295 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.295 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.296 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.296 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.296 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.296 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.296 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.296 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.297 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.297 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.297 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.297 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.297 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.297 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.297 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.298 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.298 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.298 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.298 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.298 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.298 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.298 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.299 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.299 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.299 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.299 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.299 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.299 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.300 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.300 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.300 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.300 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.300 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.301 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.301 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.301 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.301 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.301 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.301 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.302 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.302 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.302 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.302 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.302 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.302 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.303 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.303 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.303 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.303 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.303 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.303 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.304 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.304 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.304 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.304 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.305 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.305 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.305 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.305 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.305 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.305 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.306 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.306 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.306 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.306 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.306 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.307 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.307 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.307 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.307 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.307 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.307 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.308 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.308 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.308 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.308 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.308 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.308 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.309 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.309 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.309 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.309 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.309 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.309 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.310 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.310 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.310 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.310 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.310 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.310 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.310 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.311 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.311 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.311 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.311 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.311 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.311 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.311 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.311 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.312 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.312 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.312 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.312 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.312 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.312 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.313 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.313 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.313 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.313 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.313 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.313 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.314 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.314 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.314 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.314 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.314 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.314 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.314 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.315 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.315 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.315 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.315 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.315 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.315 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.315 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.316 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.316 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.316 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.316 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.316 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.316 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.316 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.317 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.317 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.317 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.317 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.317 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.317 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.317 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.318 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.318 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.318 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.318 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.318 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.318 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.318 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.319 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.319 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.319 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.319 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.319 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.319 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.319 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.320 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.320 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.320 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.320 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.320 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.320 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.320 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.321 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.321 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.321 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.321 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.321 226109 DEBUG oslo_service.service [None req-33b432d9-8890-4e99-8ffe-d4e626aad1b5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.322 226109 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.347 226109 INFO nova.virt.node [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Determined node identity 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from /var/lib/nova/compute_id
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.348 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.349 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.349 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.349 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.360 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f68a25f2700> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.362 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f68a25f2700> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.363 226109 INFO nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Connection event '1' reason 'None'
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.369 226109 INFO nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Libvirt host capabilities <capabilities>
Dec 06 06:52:35 compute-1 nova_compute[226101]: 
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <host>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <uuid>effe0b74-d2bb-436f-b621-5e7c5f665fb5</uuid>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <cpu>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <arch>x86_64</arch>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model>EPYC-Rome-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <vendor>AMD</vendor>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <microcode version='16777317'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <signature family='23' model='49' stepping='0'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='x2apic'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='tsc-deadline'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='osxsave'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='hypervisor'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='tsc_adjust'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='spec-ctrl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='stibp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='arch-capabilities'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='cmp_legacy'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='topoext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='virt-ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='lbrv'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='tsc-scale'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='vmcb-clean'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='pause-filter'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='pfthreshold'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='svme-addr-chk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='rdctl-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='skip-l1dfl-vmentry'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='mds-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature name='pschange-mc-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <pages unit='KiB' size='4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <pages unit='KiB' size='2048'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <pages unit='KiB' size='1048576'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </cpu>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <power_management>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <suspend_mem/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </power_management>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <iommu support='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <migration_features>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <live/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <uri_transports>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <uri_transport>tcp</uri_transport>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <uri_transport>rdma</uri_transport>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </uri_transports>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </migration_features>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <topology>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <cells num='1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <cell id='0'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:           <memory unit='KiB'>7864320</memory>
Dec 06 06:52:35 compute-1 nova_compute[226101]:           <pages unit='KiB' size='4'>1966080</pages>
Dec 06 06:52:35 compute-1 nova_compute[226101]:           <pages unit='KiB' size='2048'>0</pages>
Dec 06 06:52:35 compute-1 nova_compute[226101]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 06 06:52:35 compute-1 nova_compute[226101]:           <distances>
Dec 06 06:52:35 compute-1 nova_compute[226101]:             <sibling id='0' value='10'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:           </distances>
Dec 06 06:52:35 compute-1 nova_compute[226101]:           <cpus num='8'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:           </cpus>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         </cell>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </cells>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </topology>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <cache>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </cache>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <secmodel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model>selinux</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <doi>0</doi>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </secmodel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <secmodel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model>dac</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <doi>0</doi>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </secmodel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </host>
Dec 06 06:52:35 compute-1 nova_compute[226101]: 
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <guest>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <os_type>hvm</os_type>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <arch name='i686'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <wordsize>32</wordsize>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <domain type='qemu'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <domain type='kvm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </arch>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <features>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <pae/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <nonpae/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <acpi default='on' toggle='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <apic default='on' toggle='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <cpuselection/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <deviceboot/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <disksnapshot default='on' toggle='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <externalSnapshot/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </features>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </guest>
Dec 06 06:52:35 compute-1 nova_compute[226101]: 
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <guest>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <os_type>hvm</os_type>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <arch name='x86_64'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <wordsize>64</wordsize>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <domain type='qemu'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <domain type='kvm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </arch>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <features>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <acpi default='on' toggle='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <apic default='on' toggle='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <cpuselection/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <deviceboot/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <disksnapshot default='on' toggle='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <externalSnapshot/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </features>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </guest>
Dec 06 06:52:35 compute-1 nova_compute[226101]: 
Dec 06 06:52:35 compute-1 nova_compute[226101]: </capabilities>
Dec 06 06:52:35 compute-1 nova_compute[226101]: 
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.375 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.379 226109 DEBUG nova.virt.libvirt.volume.mount [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.380 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 06 06:52:35 compute-1 nova_compute[226101]: <domainCapabilities>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <domain>kvm</domain>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <arch>i686</arch>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <vcpu max='240'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <iothreads supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <os supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <enum name='firmware'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <loader supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>rom</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pflash</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='readonly'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>yes</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>no</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='secure'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>no</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </loader>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </os>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <cpu>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>on</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>off</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='maximumMigratable'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>on</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>off</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <vendor>AMD</vendor>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='succor'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='custom' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='auto-ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='auto-ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-128'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-256'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-512'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='KnightsMill'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512er'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512pf'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512er'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512pf'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tbm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tbm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SierraForest'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cmpccxadd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cmpccxadd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='athlon'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='athlon-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='core2duo'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='core2duo-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='coreduo'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='coreduo-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='n270'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='n270-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='phenom'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='phenom-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </cpu>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <memoryBacking supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <enum name='sourceType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>file</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>anonymous</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>memfd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </memoryBacking>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <devices>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <disk supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='diskDevice'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>disk</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>cdrom</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>floppy</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>lun</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='bus'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>ide</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>fdc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>scsi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>sata</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-non-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <graphics supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vnc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>egl-headless</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dbus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </graphics>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <video supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='modelType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vga</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>cirrus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>none</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>bochs</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>ramfb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </video>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <hostdev supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='mode'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>subsystem</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='startupPolicy'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>default</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>mandatory</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>requisite</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>optional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='subsysType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pci</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>scsi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='capsType'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='pciBackend'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </hostdev>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <rng supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-non-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>random</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>egd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>builtin</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </rng>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <filesystem supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='driverType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>path</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>handle</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtiofs</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </filesystem>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <tpm supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tpm-tis</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tpm-crb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>emulator</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>external</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendVersion'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>2.0</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </tpm>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <redirdev supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='bus'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </redirdev>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <channel supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pty</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>unix</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </channel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <crypto supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>qemu</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>builtin</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </crypto>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <interface supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>default</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>passt</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </interface>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <panic supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>isa</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>hyperv</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </panic>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <console supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>null</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pty</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dev</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>file</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pipe</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>stdio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>udp</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tcp</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>unix</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>qemu-vdagent</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dbus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </console>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </devices>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <features>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <gic supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <genid supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <backup supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <async-teardown supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <ps2 supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <sev supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <sgx supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <hyperv supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='features'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>relaxed</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vapic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>spinlocks</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vpindex</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>runtime</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>synic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>stimer</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>reset</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vendor_id</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>frequencies</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>reenlightenment</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tlbflush</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>ipi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>avic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>emsr_bitmap</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>xmm_input</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <defaults>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </defaults>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </hyperv>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <launchSecurity supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='sectype'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tdx</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </launchSecurity>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </features>
Dec 06 06:52:35 compute-1 nova_compute[226101]: </domainCapabilities>
Dec 06 06:52:35 compute-1 nova_compute[226101]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
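The dump above is the raw domain-capabilities XML that nova's _get_domain_capabilities helper logs at DEBUG level via libvirt. A minimal sketch, assuming a local qemu:///system connection and the /usr/libexec/qemu-kvm emulator path seen in the dump, of how the same XML can be fetched and the usable custom-mode CPU models extracted with the libvirt-python bindings — the helper name and the arch/machine defaults are illustrative assumptions, not nova's actual code:

    import libvirt
    import xml.etree.ElementTree as ET

    def usable_custom_models(arch='x86_64', machine='q35'):
        """Return the custom-mode CPU models reported usable='yes'."""
        conn = libvirt.open('qemu:///system')  # local libvirtd, as on compute-1
        try:
            caps_xml = conn.getDomainCapabilities(
                '/usr/libexec/qemu-kvm',   # emulator binary from the dump above
                arch, machine, 'kvm', 0)
        finally:
            conn.close()
        root = ET.fromstring(caps_xml)
        mode = root.find("cpu/mode[@name='custom']")
        if mode is None:
            return []
        # Each <model> carries usable=/deprecated=/vendor= attributes,
        # exactly as in the logged XML.
        return [m.text for m in mode.findall('model') if m.get('usable') == 'yes']

    if __name__ == '__main__':
        for model in usable_custom_models():
            print(model)

Run on the host, this should print the models the log marks usable='yes' (e.g. kvm32, kvm64, pentium, qemu32, qemu64 in the record above); models marked usable='no' are the ones whose <blockers> list the host-missing features.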
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.386 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 06 06:52:35 compute-1 nova_compute[226101]: <domainCapabilities>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <domain>kvm</domain>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <arch>i686</arch>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <vcpu max='4096'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <iothreads supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <os supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <enum name='firmware'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <loader supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>rom</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pflash</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='readonly'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>yes</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>no</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='secure'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>no</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </loader>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </os>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <cpu>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>on</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>off</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='maximumMigratable'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>on</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>off</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <vendor>AMD</vendor>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='succor'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='custom' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='auto-ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='auto-ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-128'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-256'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-512'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='KnightsMill'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512er'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512pf'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512er'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512pf'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tbm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tbm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SierraForest'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cmpccxadd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cmpccxadd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='athlon'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='athlon-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='core2duo'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='core2duo-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='coreduo'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='coreduo-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='n270'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='n270-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='phenom'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='phenom-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </cpu>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <memoryBacking supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <enum name='sourceType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>file</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>anonymous</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>memfd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </memoryBacking>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <devices>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <disk supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='diskDevice'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>disk</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>cdrom</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>floppy</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>lun</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='bus'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>fdc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>scsi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>sata</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-non-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <graphics supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vnc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>egl-headless</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dbus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </graphics>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <video supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='modelType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vga</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>cirrus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>none</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>bochs</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>ramfb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </video>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <hostdev supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='mode'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>subsystem</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='startupPolicy'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>default</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>mandatory</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>requisite</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>optional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='subsysType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pci</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>scsi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='capsType'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='pciBackend'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </hostdev>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <rng supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-non-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>random</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>egd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>builtin</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </rng>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <filesystem supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='driverType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>path</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>handle</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtiofs</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </filesystem>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <tpm supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tpm-tis</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tpm-crb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>emulator</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>external</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendVersion'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>2.0</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </tpm>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <redirdev supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='bus'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </redirdev>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <channel supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pty</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>unix</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </channel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <crypto supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>qemu</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>builtin</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </crypto>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <interface supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>default</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>passt</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </interface>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <panic supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>isa</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>hyperv</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </panic>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <console supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>null</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pty</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dev</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>file</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pipe</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>stdio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>udp</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tcp</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>unix</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>qemu-vdagent</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dbus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </console>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </devices>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <features>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <gic supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <genid supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <backup supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <async-teardown supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <ps2 supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <sev supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <sgx supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <hyperv supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='features'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>relaxed</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vapic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>spinlocks</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vpindex</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>runtime</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>synic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>stimer</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>reset</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vendor_id</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>frequencies</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>reenlightenment</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tlbflush</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>ipi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>avic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>emsr_bitmap</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>xmm_input</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <defaults>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </defaults>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </hyperv>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <launchSecurity supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='sectype'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tdx</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </launchSecurity>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </features>
Dec 06 06:52:35 compute-1 nova_compute[226101]: </domainCapabilities>
Dec 06 06:52:35 compute-1 nova_compute[226101]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.419 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.423 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 06 06:52:35 compute-1 nova_compute[226101]: <domainCapabilities>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <domain>kvm</domain>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <arch>x86_64</arch>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <vcpu max='240'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <iothreads supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <os supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <enum name='firmware'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <loader supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>rom</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pflash</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='readonly'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>yes</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>no</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='secure'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>no</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </loader>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </os>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <cpu>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>on</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>off</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='maximumMigratable'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>on</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>off</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <vendor>AMD</vendor>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='succor'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='custom' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='auto-ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='auto-ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-128'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-256'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-512'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='KnightsMill'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512er'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512pf'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512er'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512pf'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tbm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tbm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SierraForest'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cmpccxadd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cmpccxadd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='athlon'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='athlon-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='core2duo'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='core2duo-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='coreduo'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='coreduo-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='n270'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='n270-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='phenom'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='phenom-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </cpu>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <memoryBacking supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <enum name='sourceType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>file</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>anonymous</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>memfd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </memoryBacking>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <devices>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <disk supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='diskDevice'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>disk</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>cdrom</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>floppy</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>lun</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='bus'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>ide</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>fdc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>scsi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>sata</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-non-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <graphics supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vnc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>egl-headless</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dbus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </graphics>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <video supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='modelType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vga</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>cirrus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>none</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>bochs</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>ramfb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </video>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <hostdev supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='mode'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>subsystem</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='startupPolicy'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>default</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>mandatory</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>requisite</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>optional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='subsysType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pci</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>scsi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='capsType'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='pciBackend'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </hostdev>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <rng supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-non-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>random</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>egd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>builtin</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </rng>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <filesystem supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='driverType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>path</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>handle</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtiofs</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </filesystem>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <tpm supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tpm-tis</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tpm-crb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>emulator</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>external</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendVersion'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>2.0</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </tpm>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <redirdev supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='bus'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </redirdev>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <channel supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pty</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>unix</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </channel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <crypto supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>qemu</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>builtin</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </crypto>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <interface supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>default</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>passt</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </interface>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <panic supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>isa</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>hyperv</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </panic>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <console supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>null</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pty</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dev</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>file</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pipe</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>stdio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>udp</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tcp</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>unix</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>qemu-vdagent</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dbus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </console>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </devices>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <features>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <gic supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <genid supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <backup supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <async-teardown supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <ps2 supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <sev supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <sgx supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <hyperv supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='features'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>relaxed</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vapic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>spinlocks</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vpindex</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>runtime</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>synic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>stimer</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>reset</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vendor_id</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>frequencies</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>reenlightenment</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tlbflush</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>ipi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>avic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>emsr_bitmap</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>xmm_input</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <defaults>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </defaults>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </hyperv>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <launchSecurity supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='sectype'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tdx</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </launchSecurity>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </features>
Dec 06 06:52:35 compute-1 nova_compute[226101]: </domainCapabilities>
Dec 06 06:52:35 compute-1 nova_compute[226101]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.496 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 06 06:52:35 compute-1 nova_compute[226101]: <domainCapabilities>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <domain>kvm</domain>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <arch>x86_64</arch>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <vcpu max='4096'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <iothreads supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <os supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <enum name='firmware'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>efi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <loader supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>rom</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pflash</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='readonly'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>yes</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>no</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='secure'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>yes</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>no</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </loader>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </os>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <cpu>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>on</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>off</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='maximumMigratable'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>on</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>off</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <vendor>AMD</vendor>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='succor'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <mode name='custom' supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Denverton-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='auto-ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='auto-ibrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amd-psfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='stibp-always-on'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='EPYC-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-128'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-256'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx10-512'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='prefetchiti'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Haswell-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='KnightsMill'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512er'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512pf'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512er'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512pf'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tbm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fma4'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tbm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xop'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='amx-tile'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-bf16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-fp16'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bitalg'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrc'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fzrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='la57'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='taa-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xfd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SierraForest'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cmpccxadd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ifma'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cmpccxadd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fbsdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='fsrs'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ibrs-all'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mcdt-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pbrsb-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='psdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='serialize'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vaes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='hle'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='rtm'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512bw'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512cd'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512dq'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512f'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='avx512vl'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='invpcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pcid'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='pku'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='mpx'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='core-capability'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='split-lock-detect'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='cldemote'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='erms'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='gfni'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdir64b'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='movdiri'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='xsaves'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='athlon'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='athlon-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='core2duo'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='core2duo-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='coreduo'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='coreduo-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='n270'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='n270-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='ss'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='phenom'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <blockers model='phenom-v1'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnow'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <feature name='3dnowext'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </blockers>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </mode>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </cpu>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <memoryBacking supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <enum name='sourceType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>file</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>anonymous</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <value>memfd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </memoryBacking>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <devices>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <disk supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='diskDevice'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>disk</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>cdrom</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>floppy</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>lun</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='bus'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>fdc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>scsi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>sata</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-non-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <graphics supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vnc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>egl-headless</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dbus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </graphics>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <video supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='modelType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vga</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>cirrus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>none</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>bochs</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>ramfb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </video>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <hostdev supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='mode'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>subsystem</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='startupPolicy'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>default</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>mandatory</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>requisite</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>optional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='subsysType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pci</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>scsi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='capsType'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='pciBackend'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </hostdev>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <rng supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtio-non-transitional</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>random</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>egd</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>builtin</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </rng>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <filesystem supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='driverType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>path</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>handle</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>virtiofs</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </filesystem>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <tpm supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tpm-tis</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tpm-crb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>emulator</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>external</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendVersion'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>2.0</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </tpm>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <redirdev supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='bus'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>usb</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </redirdev>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <channel supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pty</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>unix</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </channel>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <crypto supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>qemu</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendModel'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>builtin</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </crypto>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <interface supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='backendType'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>default</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>passt</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </interface>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <panic supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='model'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>isa</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>hyperv</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </panic>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <console supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='type'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>null</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vc</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pty</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dev</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>file</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>pipe</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>stdio</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>udp</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tcp</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>unix</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>qemu-vdagent</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>dbus</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </console>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </devices>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <features>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <gic supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <genid supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <backup supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <async-teardown supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <ps2 supported='yes'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <sev supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <sgx supported='no'/>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <hyperv supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='features'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>relaxed</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vapic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>spinlocks</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vpindex</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>runtime</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>synic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>stimer</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>reset</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>vendor_id</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>frequencies</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>reenlightenment</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tlbflush</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>ipi</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>avic</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>emsr_bitmap</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>xmm_input</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <defaults>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </defaults>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </hyperv>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     <launchSecurity supported='yes'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       <enum name='sectype'>
Dec 06 06:52:35 compute-1 nova_compute[226101]:         <value>tdx</value>
Dec 06 06:52:35 compute-1 nova_compute[226101]:       </enum>
Dec 06 06:52:35 compute-1 nova_compute[226101]:     </launchSecurity>
Dec 06 06:52:35 compute-1 nova_compute[226101]:   </features>
Dec 06 06:52:35 compute-1 nova_compute[226101]: </domainCapabilities>
Dec 06 06:52:35 compute-1 nova_compute[226101]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
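[Annotation] The XML dump that ends above is libvirt's domainCapabilities document: each <model usable='no'> entry is a named CPU model this host cannot provide, and the <blockers> element after it lists the host-missing features (mostly the AVX-512 family, TSX (hle/rtm) and pku here). Per the listing, only the Westmere variants plus the generic kvm/qemu/pentium models come back usable='yes'. A minimal sketch of the same query, assuming libvirt-python and a reachable qemu:///system socket:

    # Sketch: list the CPU models this host can actually run, from the
    # same domainCapabilities XML nova dumps above (libvirt-python assumed).
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    caps = ET.fromstring(conn.getDomainCapabilities(None, 'x86_64', None, 'kvm'))
    for model in caps.findall(".//cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'yes':
            print(model.text)          # e.g. Westmere, Westmere-IBRS, ...
    conn.close()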
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.562 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.563 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.563 226109 INFO nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Secure Boot support detected
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.566 226109 INFO nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
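[Annotation] The INFO line above records a config interaction rather than a problem: with [libvirt] live_migration_permit_post_copy enabled and the host able to do post-copy, nova prefers post-copy over auto-converge for stalled live migrations. A rough sketch of that decision, illustrative only, with names taken from the message:

    # Sketch of the choice the INFO line describes: post-copy, when
    # permitted and available, takes precedence over auto-converge.
    def pick_stall_strategy(permit_post_copy, permit_auto_converge,
                            post_copy_available):
        if permit_post_copy and post_copy_available:
            return 'post-copy'        # what this host will use
        if permit_auto_converge:
            return 'auto-converge'    # CPU-throttling fallback
        return 'none'

    print(pick_stall_strategy(True, True, True))   # -> post-copy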
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.577 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] cpu compare xml: <cpu match="exact">
Dec 06 06:52:35 compute-1 nova_compute[226101]:   <model>Nehalem</model>
Dec 06 06:52:35 compute-1 nova_compute[226101]: </cpu>
Dec 06 06:52:35 compute-1 nova_compute[226101]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
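[Annotation] In the _compare_cpu step above, nova asks libvirt whether the configured guest model (Nehalem) is satisfiable by the host CPU. The equivalent standalone check, again assuming libvirt-python:

    # Sketch: compare the guest CPU definition logged above against the
    # host CPU, as nova's _compare_cpu does (libvirt-python assumed).
    import libvirt

    GUEST_CPU = '<cpu match="exact"><model>Nehalem</model></cpu>'
    conn = libvirt.open('qemu:///system')
    verdict = conn.compareCPU(GUEST_CPU, 0)
    print({libvirt.VIR_CPU_COMPARE_INCOMPATIBLE: 'incompatible',
           libvirt.VIR_CPU_COMPARE_IDENTICAL: 'identical',
           libvirt.VIR_CPU_COMPARE_SUPERSET: 'host is a superset'}[verdict])
    conn.close()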
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.579 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.625 226109 INFO nova.virt.node [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Determined node identity 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from /var/lib/nova/compute_id
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.646 226109 WARNING nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Compute nodes ['466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.705 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 06 06:52:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:35.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
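[Annotation] These beast lines recur roughly every two seconds per source IP (192.168.122.100/.102): anonymous "HEAD /" requests, i.e. load-balancer-style liveness probes against radosgw. Reproducing one by hand; note the port is an assumption, since the log does not show which port beast listens on:

    # Sketch: the kind of liveness probe the beast access lines record.
    # The port (8080) is a placeholder, not taken from the log.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)   # 200 = RGW is serving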
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.743 226109 WARNING nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.744 226109 DEBUG oslo_concurrency.lockutils [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.744 226109 DEBUG oslo_concurrency.lockutils [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.744 226109 DEBUG oslo_concurrency.lockutils [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.744 226109 DEBUG nova.compute.resource_tracker [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:52:35 compute-1 nova_compute[226101]: 2025-12-06 06:52:35.745 226109 DEBUG oslo_concurrency.processutils [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:52:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:35.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:52:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2866568348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:52:36 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3199913541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:36 compute-1 nova_compute[226101]: 2025-12-06 06:52:36.162 226109 DEBUG oslo_concurrency.processutils [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
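[Annotation] The ceph-mon dispatch lines just above are the monitor side of this call: nova's RBD image backend shells out to ceph df to size the pool backing ephemeral disks. The same probe, reusing the exact command from the log; the key names follow the current `ceph df --format=json` schema and may differ on older releases:

    # Sketch: run and parse the `ceph df` call logged above.
    import json, subprocess

    raw = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(raw)['stats']
    print('avail bytes:', stats['total_avail_bytes'],
          'of', stats['total_bytes'])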
Dec 06 06:52:36 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Dec 06 06:52:36 compute-1 systemd[1]: Started libvirt nodedev daemon.
Dec 06 06:52:36 compute-1 nova_compute[226101]: 2025-12-06 06:52:36.450 226109 WARNING nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:52:36 compute-1 nova_compute[226101]: 2025-12-06 06:52:36.451 226109 DEBUG nova.compute.resource_tracker [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5316MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
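[Annotation] The pci_devices payload in the resource view above is plain JSON, so it can be tallied directly. A toy version over two entries copied (trimmed) from that line; 1af4 is the virtio vendor ID, 8086 Intel:

    # Sketch: tally the logged pci_devices list by vendor. The two
    # entries are copied from the resource view above, fields trimmed.
    import json
    from collections import Counter

    devices = json.loads('''[
      {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0",
       "product_id": "1000", "vendor_id": "1af4", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1",
       "product_id": "7010", "vendor_id": "8086", "dev_type": "type-PCI"}
    ]''')
    print(Counter(d['vendor_id'] for d in devices))   # Counter({'1af4': 1, '8086': 1})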
Dec 06 06:52:36 compute-1 nova_compute[226101]: 2025-12-06 06:52:36.452 226109 DEBUG oslo_concurrency.lockutils [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:52:36 compute-1 nova_compute[226101]: 2025-12-06 06:52:36.452 226109 DEBUG oslo_concurrency.lockutils [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:52:36 compute-1 nova_compute[226101]: 2025-12-06 06:52:36.477 226109 WARNING nova.compute.resource_tracker [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] No compute node record for compute-1.ctlplane.example.com:466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 could not be found.
Dec 06 06:52:36 compute-1 nova_compute[226101]: 2025-12-06 06:52:36.492 226109 INFO nova.compute.resource_tracker [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83
Dec 06 06:52:36 compute-1 nova_compute[226101]: 2025-12-06 06:52:36.543 226109 DEBUG nova.compute.resource_tracker [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:52:36 compute-1 nova_compute[226101]: 2025-12-06 06:52:36.544 226109 DEBUG nova.compute.resource_tracker [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.098 226109 INFO nova.scheduler.client.report [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [req-875b28df-78b7-448e-8b10-b40c53d5fd3b] Created resource provider record via placement API for resource provider with UUID 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 and name compute-1.ctlplane.example.com.
Dec 06 06:52:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3199913541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3277231340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:37 compute-1 ceph-mon[81689]: pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.189 226109 DEBUG oslo_concurrency.processutils [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:52:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:52:37 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2782321474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.602 226109 DEBUG oslo_concurrency.processutils [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.607 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 06 06:52:37 compute-1 nova_compute[226101]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.607 226109 INFO nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] kernel doesn't support AMD SEV
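[Annotation] The two lines above show the actual probe: nova reads /sys/module/kvm_amd/parameters/sev, finds "N", and reports SEV unsupported. A stand-alone version of the same check:

    # Sketch of nova's _kernel_supports_amd_sev probe logged above:
    # kvm_amd exposes a one-character flag ('Y'/'N' here; some kernels
    # report '1'/'0' instead).
    def kernel_supports_amd_sev(path='/sys/module/kvm_amd/parameters/sev'):
        try:
            with open(path) as f:
                return f.read().strip() in ('Y', '1')
        except FileNotFoundError:
            return False               # kvm_amd module not loaded at all

    print(kernel_supports_amd_sev())   # False on this host, per the log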
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.608 226109 DEBUG nova.compute.provider_tree [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.608 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.610 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Libvirt baseline CPU <cpu>
Dec 06 06:52:37 compute-1 nova_compute[226101]:   <arch>x86_64</arch>
Dec 06 06:52:37 compute-1 nova_compute[226101]:   <model>Nehalem</model>
Dec 06 06:52:37 compute-1 nova_compute[226101]:   <vendor>AMD</vendor>
Dec 06 06:52:37 compute-1 nova_compute[226101]:   <topology sockets="8" cores="1" threads="1"/>
Dec 06 06:52:37 compute-1 nova_compute[226101]: </cpu>
Dec 06 06:52:37 compute-1 nova_compute[226101]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.694 226109 DEBUG nova.scheduler.client.report [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Updated inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.695 226109 DEBUG nova.compute.provider_tree [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Updating resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.695 226109 DEBUG nova.compute.provider_tree [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
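[Annotation] Placement turns each inventory record logged above into a capacity of (total - reserved) * allocation_ratio. Worked out with the logged numbers:

    # Sketch: effective Placement capacity from the inventory above,
    # using the (total - reserved) * allocation_ratio rule.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 20,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 18.0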
Dec 06 06:52:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:37.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.820 226109 DEBUG nova.compute.provider_tree [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Updating resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.857 226109 DEBUG nova.compute.resource_tracker [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.857 226109 DEBUG oslo_concurrency.lockutils [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.857 226109 DEBUG nova.service [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 06 06:52:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:37.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.921 226109 DEBUG nova.service [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 06 06:52:37 compute-1 nova_compute[226101]: 2025-12-06 06:52:37.921 226109 DEBUG nova.servicegroup.drivers.db [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 06 06:52:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2477464245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2782321474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/822919840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:39 compute-1 ceph-mon[81689]: pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:39.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:39.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:40 compute-1 ceph-mon[81689]: pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:41.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:41.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:42 compute-1 ceph-mon[81689]: pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:43 compute-1 podman[226472]: 2025-12-06 06:52:43.066455826 +0000 UTC m=+0.054075350 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:52:43 compute-1 podman[226471]: 2025-12-06 06:52:43.080246145 +0000 UTC m=+0.067621892 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
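[Annotation] The two podman lines above are periodic health_status events for the ovn_metadata_agent and multipathd containers, both healthy with a zero failing streak. The same check can be run on demand; `podman healthcheck run` exits 0 when the container's healthcheck passes (container name taken from the log):

    # Sketch: trigger the same healthcheck these podman events record.
    import subprocess

    rc = subprocess.run(['podman', 'healthcheck', 'run',
                         'ovn_metadata_agent']).returncode
    print('healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)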
Dec 06 06:52:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:43.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:43.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:44 compute-1 ceph-mon[81689]: pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:45.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:52:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:45.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:46 compute-1 ceph-mon[81689]: pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:52:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:47.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:52:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:47.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:48 compute-1 ceph-mon[81689]: pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:49.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:49.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:50 compute-1 ceph-mon[81689]: pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:51.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:51 compute-1 nova_compute[226101]: 2025-12-06 06:52:51.923 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:52:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:51.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:51 compute-1 nova_compute[226101]: 2025-12-06 06:52:51.942 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:52:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:53 compute-1 ceph-mon[81689]: pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:53.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:53.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:54 compute-1 ceph-mon[81689]: pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:55.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:55.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:56 compute-1 ceph-mon[81689]: pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:57.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:57.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:58 compute-1 ceph-mon[81689]: pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:59.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:52:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:59.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:00 compute-1 ceph-mon[81689]: pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:53:01.608 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:53:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:53:01.608 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:53:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:53:01.609 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:53:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:01.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:02 compute-1 ceph-mon[81689]: pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:03 compute-1 podman[226508]: 2025-12-06 06:53:03.116360495 +0000 UTC m=+0.101733485 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:53:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:03.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:04 compute-1 ceph-mon[81689]: pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:05.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:05.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:06 compute-1 ceph-mon[81689]: pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:07.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:07.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:08 compute-1 ceph-mon[81689]: pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:09.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:09.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:10 compute-1 ceph-mon[81689]: pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:11.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:11.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2725809804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:53:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2725809804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:53:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:13 compute-1 sudo[226535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:13 compute-1 sudo[226535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:13 compute-1 sudo[226535]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:13 compute-1 ceph-mon[81689]: pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3390278078' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:53:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3390278078' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:53:13 compute-1 podman[226560]: 2025-12-06 06:53:13.744956076 +0000 UTC m=+0.057279695 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:53:13 compute-1 podman[226559]: 2025-12-06 06:53:13.745433848 +0000 UTC m=+0.054951732 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 06:53:13 compute-1 sudo[226574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:53:13 compute-1 sudo[226574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:13 compute-1 sudo[226574]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:13.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:13 compute-1 sudo[226622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:13 compute-1 sudo[226622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:13 compute-1 sudo[226622]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:13 compute-1 sudo[226647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:53:13 compute-1 sudo[226647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:13.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:14 compute-1 sudo[226647]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:53:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:53:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2717889535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:53:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:53:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:53:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:53:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2717889535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:53:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:53:15 compute-1 ceph-mon[81689]: pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:15.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:15.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:16 compute-1 ceph-mon[81689]: pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:17.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:17.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:18 compute-1 ceph-mon[81689]: pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:19.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:19.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:20 compute-1 sudo[226703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:20 compute-1 sudo[226703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:20 compute-1 sudo[226703]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:20 compute-1 sudo[226728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:53:20 compute-1 sudo[226728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:20 compute-1 sudo[226728]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:53:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:53:20 compute-1 ceph-mon[81689]: pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:21.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:21.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:22 compute-1 ceph-mon[81689]: pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:23.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:23.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:24 compute-1 ceph-mon[81689]: pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:25.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:25.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:26 compute-1 ceph-mon[81689]: pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:27.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:27.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:28 compute-1 ceph-mon[81689]: pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:29.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:29.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:30 compute-1 ceph-mon[81689]: pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:31.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:32.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:32 compute-1 ceph-mon[81689]: pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:33.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:34.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:34 compute-1 podman[226753]: 2025-12-06 06:53:34.09330524 +0000 UTC m=+0.077387410 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:53:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/472132947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.592 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.592 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.592 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.613 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.614 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.614 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.614 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.615 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.615 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.615 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.615 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.615 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.646 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.647 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.647 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.647 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:53:34 compute-1 nova_compute[226101]: 2025-12-06 06:53:34.647 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:53:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:53:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1311066653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.059 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.216 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.217 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5355MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.217 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.217 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:53:35 compute-1 ceph-mon[81689]: pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1311066653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1127660712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.395 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.395 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.463 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:53:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:35.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:53:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3498460210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.873 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.879 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.904 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.905 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:53:35 compute-1 nova_compute[226101]: 2025-12-06 06:53:35.906 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:53:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:36.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3584048377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3498460210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1477125112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:37 compute-1 ceph-mon[81689]: pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:37.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:38.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:38 compute-1 ceph-mon[81689]: pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:39.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:40.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:40 compute-1 ceph-mon[81689]: pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:41.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:42.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:42 compute-1 ceph-mon[81689]: pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:43.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:44.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:44 compute-1 podman[226823]: 2025-12-06 06:53:44.064507018 +0000 UTC m=+0.054286828 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec 06 06:53:44 compute-1 podman[226824]: 2025-12-06 06:53:44.064607151 +0000 UTC m=+0.050416936 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 06 06:53:44 compute-1 ceph-mon[81689]: pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:45.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:46.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:46 compute-1 ceph-mon[81689]: pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:47.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:48.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:48 compute-1 ceph-mon[81689]: pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:49.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:50.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:51 compute-1 ceph-mon[81689]: pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:51.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:52.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:52 compute-1 ceph-mon[81689]: pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:53:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:53.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:53:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:54.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:54 compute-1 ceph-mon[81689]: pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:55.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:56.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:56 compute-1 ceph-mon[81689]: pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:53:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:57.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:53:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:58.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:58 compute-1 ceph-mon[81689]: pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:53:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:59.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:00.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:00 compute-1 ceph-mon[81689]: pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:54:01.609 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:54:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:54:01.610 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:54:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:54:01.610 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:54:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:01.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:02.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:02 compute-1 ceph-mon[81689]: pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:03.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:04.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:04 compute-1 ceph-mon[81689]: pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:05 compute-1 podman[226859]: 2025-12-06 06:54:05.094637497 +0000 UTC m=+0.080263447 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:54:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:05.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:06.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:06 compute-1 ceph-mon[81689]: pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:07.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:08.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:08 compute-1 ceph-mon[81689]: pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:09.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:10.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:10 compute-1 ceph-mon[81689]: pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:11.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:12.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:12 compute-1 ceph-mon[81689]: pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:13.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:14.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:14 compute-1 ceph-mon[81689]: pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:15 compute-1 podman[226886]: 2025-12-06 06:54:15.06307753 +0000 UTC m=+0.049997114 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 06:54:15 compute-1 podman[226887]: 2025-12-06 06:54:15.069204945 +0000 UTC m=+0.050750575 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 06:54:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:15.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:16.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:16 compute-1 ceph-mon[81689]: pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:17.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:18.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:18 compute-1 ceph-mon[81689]: pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:19.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:20.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:20 compute-1 sudo[226922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:20 compute-1 sudo[226922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:20 compute-1 sudo[226922]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:20 compute-1 sudo[226947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:54:20 compute-1 sudo[226947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:20 compute-1 sudo[226947]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:20 compute-1 sudo[226972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:20 compute-1 sudo[226972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:20 compute-1 sudo[226972]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:20 compute-1 sudo[226997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:54:20 compute-1 sudo[226997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:20 compute-1 ceph-mon[81689]: pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:21 compute-1 sudo[226997]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:21.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:54:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:54:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:54:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:54:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:54:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:54:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:54:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:22.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:22 compute-1 ceph-mon[81689]: pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:54:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:23.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:54:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:24.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:24 compute-1 ceph-mon[81689]: pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:25.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:26.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:26 compute-1 sudo[227054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:26 compute-1 sudo[227054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:26 compute-1 sudo[227054]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:27 compute-1 sudo[227079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:54:27 compute-1 sudo[227079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:27 compute-1 sudo[227079]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:27 compute-1 ceph-mon[81689]: pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:54:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:54:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:27.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:28.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:28 compute-1 ceph-mon[81689]: pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:29.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:30.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:30 compute-1 ceph-mon[81689]: pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:31.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:32.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:32 compute-1 ceph-mon[81689]: pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:33.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:34.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:34 compute-1 ceph-mon[81689]: pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:35.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:35 compute-1 nova_compute[226101]: 2025-12-06 06:54:35.896 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-1 nova_compute[226101]: 2025-12-06 06:54:35.897 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-1 nova_compute[226101]: 2025-12-06 06:54:35.915 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-1 nova_compute[226101]: 2025-12-06 06:54:35.915 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-1 nova_compute[226101]: 2025-12-06 06:54:35.915 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:54:35 compute-1 nova_compute[226101]: 2025-12-06 06:54:35.915 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-1 nova_compute[226101]: 2025-12-06 06:54:35.938 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:54:35 compute-1 nova_compute[226101]: 2025-12-06 06:54:35.938 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:54:35 compute-1 nova_compute[226101]: 2025-12-06 06:54:35.939 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:54:35 compute-1 nova_compute[226101]: 2025-12-06 06:54:35.939 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:54:35 compute-1 nova_compute[226101]: 2025-12-06 06:54:35.939 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:54:36 compute-1 podman[227105]: 2025-12-06 06:54:36.110454513 +0000 UTC m=+0.095303282 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 06:54:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:36.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3026439830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:54:36 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/353004254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:36 compute-1 nova_compute[226101]: 2025-12-06 06:54:36.365 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:54:36 compute-1 nova_compute[226101]: 2025-12-06 06:54:36.502 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:54:36 compute-1 nova_compute[226101]: 2025-12-06 06:54:36.504 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5333MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:54:36 compute-1 nova_compute[226101]: 2025-12-06 06:54:36.504 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:54:36 compute-1 nova_compute[226101]: 2025-12-06 06:54:36.504 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:54:36 compute-1 nova_compute[226101]: 2025-12-06 06:54:36.773 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:54:36 compute-1 nova_compute[226101]: 2025-12-06 06:54:36.773 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:54:36 compute-1 nova_compute[226101]: 2025-12-06 06:54:36.786 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:54:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:54:37 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1567979913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:37 compute-1 nova_compute[226101]: 2025-12-06 06:54:37.190 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:54:37 compute-1 nova_compute[226101]: 2025-12-06 06:54:37.196 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:54:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/353004254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:37 compute-1 ceph-mon[81689]: pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2556591730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1567979913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:37 compute-1 nova_compute[226101]: 2025-12-06 06:54:37.359 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:54:37 compute-1 nova_compute[226101]: 2025-12-06 06:54:37.361 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:54:37 compute-1 nova_compute[226101]: 2025-12-06 06:54:37.361 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:54:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:37.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:38 compute-1 nova_compute[226101]: 2025-12-06 06:54:38.036 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:38 compute-1 nova_compute[226101]: 2025-12-06 06:54:38.036 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:54:38 compute-1 nova_compute[226101]: 2025-12-06 06:54:38.036 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:54:38 compute-1 nova_compute[226101]: 2025-12-06 06:54:38.065 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 06:54:38 compute-1 nova_compute[226101]: 2025-12-06 06:54:38.066 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:38 compute-1 nova_compute[226101]: 2025-12-06 06:54:38.066 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:38 compute-1 nova_compute[226101]: 2025-12-06 06:54:38.066 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:38 compute-1 nova_compute[226101]: 2025-12-06 06:54:38.067 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:38.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2042275790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1746649706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:39 compute-1 ceph-mon[81689]: pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:39.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:40.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:40 compute-1 ceph-mon[81689]: pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:41.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:54:42.098 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:54:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:54:42.099 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 06:54:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:54:42.100 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:54:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:42.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:42 compute-1 ceph-mon[81689]: pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:43.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:44.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:45 compute-1 ceph-mon[81689]: pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:45.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:46 compute-1 podman[227174]: 2025-12-06 06:54:46.067386268 +0000 UTC m=+0.048806572 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 06:54:46 compute-1 podman[227173]: 2025-12-06 06:54:46.072149166 +0000 UTC m=+0.056116929 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:54:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:46.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:46 compute-1 ceph-mon[81689]: pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:47.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:48.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:49 compute-1 ceph-mon[81689]: pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:49.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:50.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:50 compute-1 ceph-mon[81689]: pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:51.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:52.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:52 compute-1 ceph-mon[81689]: pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:53.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:54.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:54 compute-1 ceph-mon[81689]: pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:55.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:56.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:57 compute-1 ceph-mon[81689]: pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:54:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:57.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:54:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:58.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:58 compute-1 ceph-mon[81689]: pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:54:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:59.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:00.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:01 compute-1 ceph-mon[81689]: pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:55:01.610 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:55:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:55:01.611 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:55:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:55:01.611 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:55:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:01.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:02.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:03 compute-1 ceph-mon[81689]: pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:03.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:04.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:05 compute-1 ceph-mon[81689]: pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:05.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:06.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:07 compute-1 podman[227213]: 2025-12-06 06:55:07.117535701 +0000 UTC m=+0.102811284 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 06:55:07 compute-1 ceph-mon[81689]: pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:07.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:08.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1344939197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:55:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1344939197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:55:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:09.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:10 compute-1 ceph-mon[81689]: pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:10.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:11 compute-1 ceph-mon[81689]: pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:11.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:12.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:13 compute-1 ceph-mon[81689]: pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:13.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:14.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:15 compute-1 ceph-mon[81689]: pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:15.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:16.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:17 compute-1 podman[227240]: 2025-12-06 06:55:17.079883582 +0000 UTC m=+0.067087203 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 06:55:17 compute-1 podman[227241]: 2025-12-06 06:55:17.083575542 +0000 UTC m=+0.067329020 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 06:55:17 compute-1 ceph-mon[81689]: pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:17.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:18.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:19 compute-1 ceph-mon[81689]: pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:19.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:20.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:21 compute-1 ceph-mon[81689]: pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:21.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:22.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:23.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:23 compute-1 ceph-mon[81689]: pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:55:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:55:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:25.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:25 compute-1 ceph-mon[81689]: pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:25 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Dec 06 06:55:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:25.997823) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:55:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Dec 06 06:55:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004125997953, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2356, "num_deletes": 251, "total_data_size": 5899102, "memory_usage": 5951384, "flush_reason": "Manual Compaction"}
Dec 06 06:55:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126033708, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3870542, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17938, "largest_seqno": 20288, "table_properties": {"data_size": 3861180, "index_size": 5920, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18670, "raw_average_key_size": 19, "raw_value_size": 3842507, "raw_average_value_size": 4096, "num_data_blocks": 266, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003890, "oldest_key_time": 1765003890, "file_creation_time": 1765004125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 35892 microseconds, and 9743 cpu microseconds.
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.033742) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3870542 bytes OK
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.033760) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.036326) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.036339) EVENT_LOG_v1 {"time_micros": 1765004126036335, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.036379) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5888947, prev total WAL file size 5888947, number of live WAL files 2.
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.038252) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3779KB)], [36(8752KB)]
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126038331, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 12833360, "oldest_snapshot_seqno": -1}
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4975 keys, 10725851 bytes, temperature: kUnknown
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126107307, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 10725851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10689532, "index_size": 22769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125145, "raw_average_key_size": 25, "raw_value_size": 10596346, "raw_average_value_size": 2129, "num_data_blocks": 941, "num_entries": 4975, "num_filter_entries": 4975, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765004126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.107631) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10725851 bytes
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.110027) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.9 rd, 155.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.5 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5490, records dropped: 515 output_compression: NoCompression
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.110047) EVENT_LOG_v1 {"time_micros": 1765004126110037, "job": 20, "event": "compaction_finished", "compaction_time_micros": 69048, "compaction_time_cpu_micros": 20925, "output_level": 6, "num_output_files": 1, "total_output_size": 10725851, "num_input_records": 5490, "num_output_records": 4975, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126110850, "job": 20, "event": "table_file_deletion", "file_number": 38}
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126112430, "job": 20, "event": "table_file_deletion", "file_number": 36}
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.038140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.112504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.112510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.112512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.112513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:55:26.112515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:27 compute-1 sudo[227280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:27 compute-1 sudo[227280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:27 compute-1 sudo[227280]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:27 compute-1 sudo[227305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:55:27 compute-1 sudo[227305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:27 compute-1 sudo[227305]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:27 compute-1 sudo[227330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:27 compute-1 sudo[227330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:27 compute-1 sudo[227330]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:27 compute-1 sudo[227355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:55:27 compute-1 sudo[227355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:27 compute-1 sudo[227355]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:27.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:28.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:28 compute-1 ceph-mon[81689]: pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:55:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:55:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:55:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:55:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:55:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:55:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:29.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:30.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:30 compute-1 ceph-mon[81689]: pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:31.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:32.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:32 compute-1 ceph-mon[81689]: pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:33 compute-1 sudo[227410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:33 compute-1 sudo[227410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:33 compute-1 sudo[227410]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:33 compute-1 sudo[227435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:55:33 compute-1 sudo[227435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:33 compute-1 sudo[227435]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:33.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:34.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:34 compute-1 ceph-mon[81689]: pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:55:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:55:35 compute-1 nova_compute[226101]: 2025-12-06 06:55:35.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:35 compute-1 nova_compute[226101]: 2025-12-06 06:55:35.620 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:55:35 compute-1 nova_compute[226101]: 2025-12-06 06:55:35.620 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:55:35 compute-1 nova_compute[226101]: 2025-12-06 06:55:35.620 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:55:35 compute-1 nova_compute[226101]: 2025-12-06 06:55:35.621 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:55:35 compute-1 nova_compute[226101]: 2025-12-06 06:55:35.621 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:55:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:35.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:55:36 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/922551956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.065 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:55:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:36.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.218 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.219 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5358MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.219 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.220 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.292 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.293 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.308 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:55:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:55:36 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4111632330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.719 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.724 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.739 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.740 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:55:36 compute-1 nova_compute[226101]: 2025-12-06 06:55:36.740 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:55:37 compute-1 ceph-mon[81689]: pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/922551956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4111632330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:37 compute-1 nova_compute[226101]: 2025-12-06 06:55:37.734 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:37 compute-1 nova_compute[226101]: 2025-12-06 06:55:37.734 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:37 compute-1 nova_compute[226101]: 2025-12-06 06:55:37.735 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:37 compute-1 nova_compute[226101]: 2025-12-06 06:55:37.735 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:37 compute-1 nova_compute[226101]: 2025-12-06 06:55:37.735 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:37 compute-1 nova_compute[226101]: 2025-12-06 06:55:37.735 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:37 compute-1 nova_compute[226101]: 2025-12-06 06:55:37.735 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:55:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:37.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:38 compute-1 podman[227504]: 2025-12-06 06:55:38.113993787 +0000 UTC m=+0.102017082 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 06:55:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:38.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:38 compute-1 ceph-mon[81689]: pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2524605542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:38 compute-1 nova_compute[226101]: 2025-12-06 06:55:38.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:38 compute-1 nova_compute[226101]: 2025-12-06 06:55:38.591 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:55:38 compute-1 nova_compute[226101]: 2025-12-06 06:55:38.591 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:55:38 compute-1 nova_compute[226101]: 2025-12-06 06:55:38.607 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 06:55:38 compute-1 nova_compute[226101]: 2025-12-06 06:55:38.609 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4068762730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2983734056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:39.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:40.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:40 compute-1 ceph-mon[81689]: pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/713881256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:41.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:42.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:42 compute-1 ceph-mon[81689]: pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:43.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:44.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:44 compute-1 ceph-mon[81689]: pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:45.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:46.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:46 compute-1 ceph-mon[81689]: pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:47.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:48 compute-1 ceph-mon[81689]: pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:48 compute-1 podman[227531]: 2025-12-06 06:55:48.073879483 +0000 UTC m=+0.060989109 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 06 06:55:48 compute-1 podman[227530]: 2025-12-06 06:55:48.07526152 +0000 UTC m=+0.064238546 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 06:55:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:48.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:49.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:50 compute-1 ceph-mon[81689]: pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:50.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:51.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:52 compute-1 ceph-mon[81689]: pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:52.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:53.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:54.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:54 compute-1 ceph-mon[81689]: pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:55:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:55.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:55:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:56.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:56 compute-1 ceph-mon[81689]: pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:57.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:58.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:58 compute-1 ceph-mon[81689]: pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:55:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:59.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:00 compute-1 ceph-mon[81689]: pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:00.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:56:01.612 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:56:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:56:01.612 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:56:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:56:01.612 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:56:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:01.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:02.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:02 compute-1 ceph-mon[81689]: pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:03.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:04.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:04 compute-1 ceph-mon[81689]: pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:05.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:06 compute-1 ceph-mon[81689]: pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:06.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:07.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:08.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:09 compute-1 ceph-mon[81689]: pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:09 compute-1 podman[227569]: 2025-12-06 06:56:09.093485781 +0000 UTC m=+0.085225051 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 06:56:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:09.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:10.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/351251706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:56:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/351251706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:11.913328) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004171913389, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 664, "num_deletes": 250, "total_data_size": 1171588, "memory_usage": 1193152, "flush_reason": "Manual Compaction"}
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004171955863, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 528173, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20294, "largest_seqno": 20952, "table_properties": {"data_size": 525272, "index_size": 873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7601, "raw_average_key_size": 20, "raw_value_size": 519199, "raw_average_value_size": 1366, "num_data_blocks": 39, "num_entries": 380, "num_filter_entries": 380, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004126, "oldest_key_time": 1765004126, "file_creation_time": 1765004171, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 42577 microseconds, and 2330 cpu microseconds.
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:11.955909) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 528173 bytes OK
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:11.955932) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:11.958403) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:11.958467) EVENT_LOG_v1 {"time_micros": 1765004171958455, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:11.958490) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1167986, prev total WAL file size 1183961, number of live WAL files 2.
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:11.959315) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353038' seq:72057594037927935, type:22 .. '6D67727374617400373539' seq:0, type:0; will stop at (end)
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(515KB)], [39(10MB)]
Dec 06 06:56:11 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004171959373, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 11254024, "oldest_snapshot_seqno": -1}
Dec 06 06:56:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:12.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:12.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 4859 keys, 7660311 bytes, temperature: kUnknown
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004172296762, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 7660311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7628850, "index_size": 18231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 123146, "raw_average_key_size": 25, "raw_value_size": 7541724, "raw_average_value_size": 1552, "num_data_blocks": 743, "num_entries": 4859, "num_filter_entries": 4859, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765004171, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:12.297033) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7660311 bytes
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:12.300886) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.3 rd, 22.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.2 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(35.8) write-amplify(14.5) OK, records in: 5355, records dropped: 496 output_compression: NoCompression
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:12.300901) EVENT_LOG_v1 {"time_micros": 1765004172300894, "job": 22, "event": "compaction_finished", "compaction_time_micros": 337462, "compaction_time_cpu_micros": 18768, "output_level": 6, "num_output_files": 1, "total_output_size": 7660311, "num_input_records": 5355, "num_output_records": 4859, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004172301111, "job": 22, "event": "table_file_deletion", "file_number": 41}
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004172303027, "job": 22, "event": "table_file_deletion", "file_number": 39}
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:11.959223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:12.303056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:12.303060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:12.303062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:12.303064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:12 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:56:12.303066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:12 compute-1 ceph-mon[81689]: pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:13 compute-1 ceph-mon[81689]: pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:14.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:14.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:14 compute-1 ceph-mon[81689]: pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:16.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:16 compute-1 ceph-mon[81689]: pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:16.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:18.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:18.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:18 compute-1 ceph-mon[81689]: pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:19 compute-1 podman[227597]: 2025-12-06 06:56:19.061803213 +0000 UTC m=+0.050406445 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 06:56:19 compute-1 podman[227598]: 2025-12-06 06:56:19.066166771 +0000 UTC m=+0.051789293 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 06:56:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:20.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - - [06/Dec/2025:06:56:20.173 +0000] "GET /swift/info HTTP/1.1" 200 509 - "python-urllib3/1.26.5" - latency=0.001000026s
Dec 06 06:56:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:20.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:20 compute-1 ceph-mon[81689]: pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:22.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:22 compute-1 ceph-mon[81689]: pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:22.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:24.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:24.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:24 compute-1 ceph-mon[81689]: pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:26.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:26.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:26 compute-1 ceph-mon[81689]: pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:28.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:28.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:29 compute-1 ceph-mon[81689]: pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:30.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:30.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:30 compute-1 ceph-mon[81689]: pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:32.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:32.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Dec 06 06:56:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:33 compute-1 ceph-mon[81689]: pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:33 compute-1 ceph-mon[81689]: osdmap e146: 3 total, 3 up, 3 in
Dec 06 06:56:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:34.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:34 compute-1 sudo[227633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:34 compute-1 sudo[227633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:34 compute-1 sudo[227633]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:34 compute-1 sudo[227658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:56:34 compute-1 sudo[227658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:34 compute-1 sudo[227658]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:34 compute-1 sudo[227683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:34 compute-1 sudo[227683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:34 compute-1 sudo[227683]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:34 compute-1 sudo[227708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:56:34 compute-1 sudo[227708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:34.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Dec 06 06:56:34 compute-1 ceph-mon[81689]: pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:34 compute-1 ceph-mon[81689]: osdmap e147: 3 total, 3 up, 3 in
Dec 06 06:56:34 compute-1 sudo[227708]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:35 compute-1 nova_compute[226101]: 2025-12-06 06:56:35.600 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:36.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:36.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:36 compute-1 ceph-mon[81689]: pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Dec 06 06:56:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:37 compute-1 nova_compute[226101]: 2025-12-06 06:56:37.801 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:37 compute-1 nova_compute[226101]: 2025-12-06 06:56:37.801 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:37 compute-1 nova_compute[226101]: 2025-12-06 06:56:37.801 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:56:37 compute-1 nova_compute[226101]: 2025-12-06 06:56:37.801 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:37 compute-1 nova_compute[226101]: 2025-12-06 06:56:37.915 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:56:37 compute-1 nova_compute[226101]: 2025-12-06 06:56:37.915 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:56:37 compute-1 nova_compute[226101]: 2025-12-06 06:56:37.915 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:56:37 compute-1 nova_compute[226101]: 2025-12-06 06:56:37.915 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:56:37 compute-1 nova_compute[226101]: 2025-12-06 06:56:37.915 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:56:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:38.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:38.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:56:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/839735319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:38 compute-1 nova_compute[226101]: 2025-12-06 06:56:38.345 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:56:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Dec 06 06:56:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:38 compute-1 nova_compute[226101]: 2025-12-06 06:56:38.504 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:56:38 compute-1 nova_compute[226101]: 2025-12-06 06:56:38.506 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5349MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:56:38 compute-1 nova_compute[226101]: 2025-12-06 06:56:38.506 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:56:38 compute-1 nova_compute[226101]: 2025-12-06 06:56:38.506 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:56:38 compute-1 nova_compute[226101]: 2025-12-06 06:56:38.573 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:56:38 compute-1 nova_compute[226101]: 2025-12-06 06:56:38.573 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:56:38 compute-1 nova_compute[226101]: 2025-12-06 06:56:38.587 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:56:38 compute-1 ceph-mon[81689]: pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 161 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 1.1 KiB/s wr, 9 op/s
Dec 06 06:56:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:56:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:56:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/839735319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:38 compute-1 ceph-mon[81689]: osdmap e148: 3 total, 3 up, 3 in
Dec 06 06:56:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:56:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:56:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:56:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:56:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1159887546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:38 compute-1 nova_compute[226101]: 2025-12-06 06:56:38.996 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:56:39 compute-1 nova_compute[226101]: 2025-12-06 06:56:39.002 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:56:39 compute-1 nova_compute[226101]: 2025-12-06 06:56:39.025 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:56:39 compute-1 nova_compute[226101]: 2025-12-06 06:56:39.027 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:56:39 compute-1 nova_compute[226101]: 2025-12-06 06:56:39.027 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:56:39 compute-1 nova_compute[226101]: 2025-12-06 06:56:39.816 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:39 compute-1 nova_compute[226101]: 2025-12-06 06:56:39.817 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:39 compute-1 nova_compute[226101]: 2025-12-06 06:56:39.817 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:56:39 compute-1 nova_compute[226101]: 2025-12-06 06:56:39.817 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:56:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1159887546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1841860323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:40.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:40 compute-1 podman[227808]: 2025-12-06 06:56:40.086222243 +0000 UTC m=+0.077103833 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller)
Dec 06 06:56:40 compute-1 nova_compute[226101]: 2025-12-06 06:56:40.285 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 06:56:40 compute-1 nova_compute[226101]: 2025-12-06 06:56:40.285 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:40 compute-1 nova_compute[226101]: 2025-12-06 06:56:40.286 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:40 compute-1 nova_compute[226101]: 2025-12-06 06:56:40.286 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:40 compute-1 nova_compute[226101]: 2025-12-06 06:56:40.286 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:40.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:40 compute-1 ceph-mon[81689]: pgmap v1054: 305 pgs: 305 active+clean; 456 KiB data, 161 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 1.6 KiB/s wr, 12 op/s
Dec 06 06:56:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1622520585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1136916909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Dec 06 06:56:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Dec 06 06:56:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:42 compute-1 ceph-mon[81689]: pgmap v1055: 305 pgs: 305 active+clean; 456 KiB data, 161 MiB used, 21 GiB / 21 GiB avail; 6.6 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Dec 06 06:56:42 compute-1 ceph-mon[81689]: osdmap e149: 3 total, 3 up, 3 in
Dec 06 06:56:42 compute-1 ceph-mon[81689]: osdmap e150: 3 total, 3 up, 3 in
Dec 06 06:56:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/938473162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:42.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:44.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:44 compute-1 ceph-mon[81689]: pgmap v1058: 305 pgs: 305 active+clean; 29 MiB data, 182 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 4.7 MiB/s wr, 49 op/s
Dec 06 06:56:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:44.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:44 compute-1 sudo[227836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:44 compute-1 sudo[227836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:44 compute-1 sudo[227836]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:44 compute-1 sudo[227861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:56:44 compute-1 sudo[227861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:44 compute-1 sudo[227861]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:46.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:46 compute-1 ceph-mon[81689]: pgmap v1059: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 6.3 MiB/s wr, 47 op/s
Dec 06 06:56:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:46.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:48.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:48.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:48 compute-1 ceph-mon[81689]: pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 37 op/s
Dec 06 06:56:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:50.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:50 compute-1 podman[227886]: 2025-12-06 06:56:50.067605325 +0000 UTC m=+0.053220171 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:56:50 compute-1 podman[227887]: 2025-12-06 06:56:50.078363575 +0000 UTC m=+0.058492423 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_managed=true)
Dec 06 06:56:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:50.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:50 compute-1 ceph-mon[81689]: pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 37 op/s
Dec 06 06:56:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:52.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:52.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:52 compute-1 ceph-mon[81689]: pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 4.2 MiB/s wr, 30 op/s
Dec 06 06:56:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:54.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:54.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:54 compute-1 ceph-mon[81689]: pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 3.6 MiB/s wr, 26 op/s
Dec 06 06:56:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:56:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:56.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:56:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:56.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:56 compute-1 ceph-mon[81689]: pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 596 B/s rd, 1.0 MiB/s wr, 0 op/s
Dec 06 06:56:57 compute-1 ceph-mon[81689]: pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:56:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:58.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:56:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:56:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:58.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:59 compute-1 ceph-mon[81689]: pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:00.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:00.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:57:01.613 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:57:01.613 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:57:01.613 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:01 compute-1 ceph-mon[81689]: pgmap v1067: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:02.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:02.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:03 compute-1 ceph-mon[81689]: pgmap v1068: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:04.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:04.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:06.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:06 compute-1 ceph-mon[81689]: pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:06.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:08.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:57:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:08.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:57:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:08 compute-1 ceph-mon[81689]: pgmap v1070: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:57:09.090 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:57:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:57:09.092 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 06:57:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1112622656' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:57:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1112622656' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:57:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:10.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:10.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:10 compute-1 ceph-mon[81689]: pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:11 compute-1 podman[227925]: 2025-12-06 06:57:11.854245967 +0000 UTC m=+0.089108455 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
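This podman event records a passing healthcheck for the ovn_controller container; per the embedded config_data, the check runs /openstack/healthcheck from the bind-mounted healthchecks directory. The same check can be triggered out of cycle:

    podman healthcheck run ovn_controller && echo healthy   # exit status 0 means the configured test passed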
Dec 06 06:57:11 compute-1 ceph-mon[81689]: pgmap v1072: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:12.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:12.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:14.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:57:14.095 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
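Here the metadata agent acknowledges the SB_Global nb_cfg bump from 2 to 3 (matched at 06:57:09) by writing neutron:ovn-metadata-sb-cfg=3 into its Chassis_Private row, after the deliberate 5-second delay logged above. The written value can be inspected in the southbound database; a sketch assuming ovn-sbctl is usable inside the ovn_controller container seen elsewhere in this log:

    podman exec ovn_controller ovn-sbctl list Chassis_Private 03fe054d-d727-4af3-9c5e-92e57505f242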
Dec 06 06:57:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:14.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:14 compute-1 ceph-mon[81689]: pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:14 compute-1 nova_compute[226101]: 2025-12-06 06:57:14.766 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "b70ad842-3481-4e25-a66f-16f7ed5a6ab3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:14 compute-1 nova_compute[226101]: 2025-12-06 06:57:14.767 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "b70ad842-3481-4e25-a66f-16f7ed5a6ab3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:14 compute-1 nova_compute[226101]: 2025-12-06 06:57:14.791 226109 DEBUG nova.compute.manager [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 06:57:14 compute-1 nova_compute[226101]: 2025-12-06 06:57:14.934 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:14 compute-1 nova_compute[226101]: 2025-12-06 06:57:14.935 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:14 compute-1 nova_compute[226101]: 2025-12-06 06:57:14.954 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 06:57:14 compute-1 nova_compute[226101]: 2025-12-06 06:57:14.954 226109 INFO nova.compute.claims [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Claim successful on node compute-1.ctlplane.example.com
Dec 06 06:57:15 compute-1 nova_compute[226101]: 2025-12-06 06:57:15.143 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2208619595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:15 compute-1 nova_compute[226101]: 2025-12-06 06:57:15.622 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:15 compute-1 nova_compute[226101]: 2025-12-06 06:57:15.629 226109 DEBUG nova.compute.provider_tree [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:57:15 compute-1 nova_compute[226101]: 2025-12-06 06:57:15.842 226109 DEBUG nova.scheduler.client.report [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
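Working through the reported inventory: the scheduler sees 8 VCPU x allocation_ratio 4.0 = 32 schedulable vCPUs, 7680 - 512 reserved = 7168 MB of schedulable RAM (ratio 1.0), and 20 GB x 0.9 = 18 GB of schedulable disk, so the m1.nano claim below (1 vCPU, 128 MB, 1 GB root) fits with ample headroom.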
Dec 06 06:57:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2208619595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:16.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:16 compute-1 nova_compute[226101]: 2025-12-06 06:57:16.145 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:16 compute-1 nova_compute[226101]: 2025-12-06 06:57:16.146 226109 DEBUG nova.compute.manager [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 06:57:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:16.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:16 compute-1 nova_compute[226101]: 2025-12-06 06:57:16.624 226109 DEBUG nova.compute.manager [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 06 06:57:16 compute-1 nova_compute[226101]: 2025-12-06 06:57:16.714 226109 INFO nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 06:57:16 compute-1 nova_compute[226101]: 2025-12-06 06:57:16.758 226109 DEBUG nova.compute.manager [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 06:57:16 compute-1 nova_compute[226101]: 2025-12-06 06:57:16.916 226109 DEBUG nova.compute.manager [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 06:57:16 compute-1 nova_compute[226101]: 2025-12-06 06:57:16.918 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 06:57:16 compute-1 nova_compute[226101]: 2025-12-06 06:57:16.918 226109 INFO nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Creating image(s)
Dec 06 06:57:16 compute-1 nova_compute[226101]: 2025-12-06 06:57:16.949 226109 DEBUG nova.storage.rbd_utils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:16 compute-1 nova_compute[226101]: 2025-12-06 06:57:16.975 226109 DEBUG nova.storage.rbd_utils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:17 compute-1 nova_compute[226101]: 2025-12-06 06:57:17.000 226109 DEBUG nova.storage.rbd_utils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:17 compute-1 nova_compute[226101]: 2025-12-06 06:57:17.004 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:17 compute-1 nova_compute[226101]: 2025-12-06 06:57:17.004 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:17 compute-1 ceph-mon[81689]: pgmap v1074: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:18.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:18.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:18 compute-1 nova_compute[226101]: 2025-12-06 06:57:18.711 226109 DEBUG nova.virt.libvirt.imagebackend [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6efab05d-c7cf-4770-a5c3-c806a2739063/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6efab05d-c7cf-4770-a5c3-c806a2739063/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
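Both advertised locations point at the same RBD snapshot in the images pool, which nova's Rbd image backend would normally clone copy-on-write. Direct RBD clone requires a raw source, and the lines that follow show this image is qcow2, so nova falls back to downloading and converting it instead. The advertised source can be examined directly, reusing the client identity from the other ceph calls in this log:

    rbd info images/6efab05d-c7cf-4770-a5c3-c806a2739063@snap --id openstack --conf /etc/ceph/ceph.conf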
Dec 06 06:57:18 compute-1 ceph-mon[81689]: pgmap v1075: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:57:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:20.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:57:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:20.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:20 compute-1 ceph-mon[81689]: pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:21 compute-1 podman[228028]: 2025-12-06 06:57:21.06372753 +0000 UTC m=+0.050391855 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 06:57:21 compute-1 podman[228027]: 2025-12-06 06:57:21.100481847 +0000 UTC m=+0.089094485 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 06:57:21 compute-1 nova_compute[226101]: 2025-12-06 06:57:21.855 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:21 compute-1 nova_compute[226101]: 2025-12-06 06:57:21.919 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:21 compute-1 nova_compute[226101]: 2025-12-06 06:57:21.921 226109 DEBUG nova.virt.images [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] 6efab05d-c7cf-4770-a5c3-c806a2739063 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 06 06:57:21 compute-1 nova_compute[226101]: 2025-12-06 06:57:21.922 226109 DEBUG nova.privsep.utils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 06 06:57:21 compute-1 nova_compute[226101]: 2025-12-06 06:57:21.923 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:22.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:22 compute-1 nova_compute[226101]: 2025-12-06 06:57:22.188 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:22 compute-1 nova_compute[226101]: 2025-12-06 06:57:22.195 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:22 compute-1 nova_compute[226101]: 2025-12-06 06:57:22.254 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:22 compute-1 nova_compute[226101]: 2025-12-06 06:57:22.256 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
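The sequence above is nova's image-cache fill: qemu-img info runs under a prlimit sandbox (1073741824-byte address space, 30 s CPU), identifies the download as qcow2, qemu-img convert rewrites it to raw with the host page cache bypassed (-t none), and a second info call validates the result before the cache lock is released after 5.252 s. The conversion step in isolation, with hypothetical filenames:

    qemu-img convert -t none -O raw -f qcow2 image.qcow2 image.raw   # -t none: cache mode "none", i.e. direct I/O
    qemu-img info --output=json image.raw                            # verify the converted image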
Dec 06 06:57:22 compute-1 nova_compute[226101]: 2025-12-06 06:57:22.292 226109 DEBUG nova.storage.rbd_utils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:22 compute-1 nova_compute[226101]: 2025-12-06 06:57:22.297 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:22.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:22 compute-1 ceph-mon[81689]: pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Dec 06 06:57:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Dec 06 06:57:23 compute-1 ceph-mon[81689]: pgmap v1078: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Dec 06 06:57:23 compute-1 ceph-mon[81689]: osdmap e151: 3 total, 3 up, 3 in
Dec 06 06:57:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:24.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:57:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:24.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.457 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.551 226109 DEBUG nova.storage.rbd_utils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] resizing rbd image b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
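The import lands the raw base image in the vms pool, and the follow-up resize grows it to 1073741824 bytes, i.e. exactly 1 GiB (1024^3), matching the flavor's root_gb=1. An equivalent manual resize, with pool and image names taken from the log:

    rbd resize --size 1G vms/b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk --id openstack --conf /etc/ceph/ceph.conf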
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.703 226109 DEBUG nova.objects.instance [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'migration_context' on Instance uuid b70ad842-3481-4e25-a66f-16f7ed5a6ab3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.855 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.855 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Ensure instance console log exists: /var/lib/nova/instances/b70ad842-3481-4e25-a66f-16f7ed5a6ab3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.856 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.856 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.857 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.859 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.866 226109 WARNING nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.874 226109 DEBUG nova.virt.libvirt.host [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.875 226109 DEBUG nova.virt.libvirt.host [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.878 226109 DEBUG nova.virt.libvirt.host [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.878 226109 DEBUG nova.virt.libvirt.host [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
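Nova probes cgroup v1 first, finds no CPU controller, then succeeds on cgroup v2; this determines whether CPU shares/quota tuning is available for guests on this host. The same check by hand on a unified-hierarchy host:

    grep -w cpu /sys/fs/cgroup/cgroup.controllers   # prints the controller list if the cpu controller is enabled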
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.880 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.881 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.881 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.881 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.882 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.882 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.882 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.882 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.883 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.883 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.883 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.884 226109 DEBUG nova.virt.hardware [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.887 226109 DEBUG nova.privsep.utils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 06 06:57:24 compute-1 nova_compute[226101]: 2025-12-06 06:57:24.887 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:24 compute-1 ceph-mon[81689]: osdmap e152: 3 total, 3 up, 3 in
Dec 06 06:57:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:57:25 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/77946902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:25 compute-1 nova_compute[226101]: 2025-12-06 06:57:25.280 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:25 compute-1 nova_compute[226101]: 2025-12-06 06:57:25.305 226109 DEBUG nova.storage.rbd_utils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:25 compute-1 nova_compute[226101]: 2025-12-06 06:57:25.309 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:57:25 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1331597734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:25 compute-1 nova_compute[226101]: 2025-12-06 06:57:25.717 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:25 compute-1 nova_compute[226101]: 2025-12-06 06:57:25.720 226109 DEBUG nova.objects.instance [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'pci_devices' on Instance uuid b70ad842-3481-4e25-a66f-16f7ed5a6ab3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:57:26 compute-1 nova_compute[226101]: 2025-12-06 06:57:26.105 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] End _get_guest_xml xml=<domain type="kvm">
Dec 06 06:57:26 compute-1 nova_compute[226101]:   <uuid>b70ad842-3481-4e25-a66f-16f7ed5a6ab3</uuid>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   <name>instance-00000001</name>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   <metadata>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <nova:name>tempest-AutoAllocateNetworkTest-server-1797869870</nova:name>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 06:57:24</nova:creationTime>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <nova:user uuid="e5122185c6194067bdb22d6ba8205dca">tempest-AutoAllocateNetworkTest-1572395875-project-member</nova:user>
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <nova:project uuid="066c314d67e347f6a49e8e3e27998441">tempest-AutoAllocateNetworkTest-1572395875</nova:project>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   </metadata>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <system>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <entry name="serial">b70ad842-3481-4e25-a66f-16f7ed5a6ab3</entry>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <entry name="uuid">b70ad842-3481-4e25-a66f-16f7ed5a6ab3</entry>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     </system>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   <os>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   </os>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   <features>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <apic/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   </features>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   </clock>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   </cpu>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   <devices>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk">
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       </source>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       </auth>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk.config">
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       </source>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 06:57:26 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       </auth>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/b70ad842-3481-4e25-a66f-16f7ed5a6ab3/console.log" append="off"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     </serial>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <video>
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     </video>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     </rng>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 06:57:26 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 06:57:26 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 06:57:26 compute-1 nova_compute[226101]:   </devices>
Dec 06 06:57:26 compute-1 nova_compute[226101]: </domain>
Dec 06 06:57:26 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 06:57:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:26.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:26 compute-1 ceph-mon[81689]: pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 10 op/s
Dec 06 06:57:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/77946902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1331597734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:26.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:26 compute-1 nova_compute[226101]: 2025-12-06 06:57:26.862 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:57:26 compute-1 nova_compute[226101]: 2025-12-06 06:57:26.863 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:57:26 compute-1 nova_compute[226101]: 2025-12-06 06:57:26.864 226109 INFO nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Using config drive
Dec 06 06:57:26 compute-1 nova_compute[226101]: 2025-12-06 06:57:26.901 226109 DEBUG nova.storage.rbd_utils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:27 compute-1 nova_compute[226101]: 2025-12-06 06:57:27.963 226109 INFO nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Creating config drive at /var/lib/nova/instances/b70ad842-3481-4e25-a66f-16f7ed5a6ab3/disk.config
Dec 06 06:57:27 compute-1 nova_compute[226101]: 2025-12-06 06:57:27.967 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b70ad842-3481-4e25-a66f-16f7ed5a6ab3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1dib0hk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:28 compute-1 nova_compute[226101]: 2025-12-06 06:57:28.092 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b70ad842-3481-4e25-a66f-16f7ed5a6ab3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1dib0hk" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
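
The two processutils lines above show nova building the config drive by shelling out to mkisofs with the config-2 volume label that cloud-init looks for. A rough standard-library equivalent, with placeholder paths and the publisher string omitted for brevity:

    # Sketch: build a config-2 ISO the way the logged mkisofs call does.
    # Both paths are illustrative placeholders, not nova's real locations.
    import subprocess

    staging_dir = "/tmp/configdrive-staging"   # tree of metadata files to pack
    iso_path = "/tmp/disk.config"
    subprocess.run(
        ["mkisofs", "-o", iso_path,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-quiet", "-J", "-r", "-V", "config-2", staging_dir],
        check=True,
    )
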
Dec 06 06:57:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:28.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:28 compute-1 nova_compute[226101]: 2025-12-06 06:57:28.121 226109 DEBUG nova.storage.rbd_utils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:28 compute-1 nova_compute[226101]: 2025-12-06 06:57:28.124 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b70ad842-3481-4e25-a66f-16f7ed5a6ab3/disk.config b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:28.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:28 compute-1 ceph-mon[81689]: pgmap v1082: 305 pgs: 305 active+clean; 64 MiB data, 205 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.3 MiB/s wr, 12 op/s
Dec 06 06:57:28 compute-1 nova_compute[226101]: 2025-12-06 06:57:28.577 226109 DEBUG oslo_concurrency.processutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b70ad842-3481-4e25-a66f-16f7ed5a6ab3/disk.config b70ad842-3481-4e25-a66f-16f7ed5a6ab3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:28 compute-1 nova_compute[226101]: 2025-12-06 06:57:28.579 226109 INFO nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Deleting local config drive /var/lib/nova/instances/b70ad842-3481-4e25-a66f-16f7ed5a6ab3/disk.config because it was imported into RBD.
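
Because the image backend is RBD, the freshly built ISO is immediately imported into the vms pool and the local copy dropped, as the two lines above record. A hedged sketch of those two steps, with the instance UUID replaced by a placeholder:

    # Sketch: push a local config drive into RBD, then remove the local file,
    # mirroring the logged "rbd import ... --image-format=2" command.
    import os
    import subprocess

    local = "/var/lib/nova/instances/<uuid>/disk.config"   # placeholder path
    subprocess.run(
        ["rbd", "import", "--pool", "vms",
         local, "<uuid>_disk.config",                      # placeholder image name
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.remove(local)   # nova deletes the source once the import succeeds
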
Dec 06 06:57:28 compute-1 systemd[1]: Starting libvirt secret daemon...
Dec 06 06:57:28 compute-1 systemd[1]: Started libvirt secret daemon.
Dec 06 06:57:28 compute-1 systemd-machined[190302]: New machine qemu-1-instance-00000001.
Dec 06 06:57:28 compute-1 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec 06 06:57:29 compute-1 nova_compute[226101]: 2025-12-06 06:57:29.190 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004249.1900222, b70ad842-3481-4e25-a66f-16f7ed5a6ab3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:57:29 compute-1 nova_compute[226101]: 2025-12-06 06:57:29.191 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] VM Resumed (Lifecycle Event)
Dec 06 06:57:29 compute-1 nova_compute[226101]: 2025-12-06 06:57:29.193 226109 DEBUG nova.compute.manager [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 06:57:29 compute-1 nova_compute[226101]: 2025-12-06 06:57:29.194 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 06:57:29 compute-1 nova_compute[226101]: 2025-12-06 06:57:29.196 226109 INFO nova.virt.libvirt.driver [-] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Instance spawned successfully.
Dec 06 06:57:29 compute-1 nova_compute[226101]: 2025-12-06 06:57:29.197 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 06:57:29 compute-1 nova_compute[226101]: 2025-12-06 06:57:29.677 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:57:29 compute-1 nova_compute[226101]: 2025-12-06 06:57:29.683 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:57:29 compute-1 nova_compute[226101]: 2025-12-06 06:57:29.813 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] During sync_power_state the instance has a pending task (spawning). Skip.
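
In the sync line above, "DB power_state: 0, VM power_state: 1" compares nova's stored state against what libvirt reports: 0 is NOSTATE, 1 is RUNNING. For reference, the integer codes as defined in nova.compute.power_state (treat the exact values as an assumption for other releases):

    # nova.compute.power_state codes referenced by the sync log line above.
    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
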
Dec 06 06:57:29 compute-1 nova_compute[226101]: 2025-12-06 06:57:29.814 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004249.1911194, b70ad842-3481-4e25-a66f-16f7ed5a6ab3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:57:29 compute-1 nova_compute[226101]: 2025-12-06 06:57:29.814 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] VM Started (Lifecycle Event)
Dec 06 06:57:30 compute-1 nova_compute[226101]: 2025-12-06 06:57:30.042 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:57:30 compute-1 nova_compute[226101]: 2025-12-06 06:57:30.042 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:57:30 compute-1 nova_compute[226101]: 2025-12-06 06:57:30.044 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:57:30 compute-1 nova_compute[226101]: 2025-12-06 06:57:30.045 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:57:30 compute-1 nova_compute[226101]: 2025-12-06 06:57:30.046 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:57:30 compute-1 nova_compute[226101]: 2025-12-06 06:57:30.047 226109 DEBUG nova.virt.libvirt.driver [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
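
The six "Found default for ..." lines record the virtual-hardware defaults nova back-fills onto the instance so later moves and rebuilds keep the same buses and models. Collected in one place, values verbatim from the log:

    # Image-property defaults registered for instance b70ad842-... above.
    registered_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }
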
Dec 06 06:57:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:30.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:30 compute-1 ceph-mon[81689]: pgmap v1083: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 50 op/s
Dec 06 06:57:30 compute-1 nova_compute[226101]: 2025-12-06 06:57:30.306 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:57:30 compute-1 nova_compute[226101]: 2025-12-06 06:57:30.310 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:57:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:30.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Dec 06 06:57:31 compute-1 nova_compute[226101]: 2025-12-06 06:57:31.829 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:57:31 compute-1 nova_compute[226101]: 2025-12-06 06:57:31.918 226109 INFO nova.compute.manager [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Took 15.00 seconds to spawn the instance on the hypervisor.
Dec 06 06:57:31 compute-1 nova_compute[226101]: 2025-12-06 06:57:31.920 226109 DEBUG nova.compute.manager [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:57:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:32.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:32.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:32 compute-1 ceph-mon[81689]: pgmap v1084: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Dec 06 06:57:32 compute-1 ceph-mon[81689]: osdmap e153: 3 total, 3 up, 3 in
Dec 06 06:57:32 compute-1 nova_compute[226101]: 2025-12-06 06:57:32.835 226109 INFO nova.compute.manager [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Took 17.94 seconds to build instance.
Dec 06 06:57:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:33 compute-1 nova_compute[226101]: 2025-12-06 06:57:33.650 226109 DEBUG oslo_concurrency.lockutils [None req-51fec688-5f55-464b-abb4-85af8ea7fc88 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "b70ad842-3481-4e25-a66f-16f7ed5a6ab3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
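
The acquired/released pair around the build (held 18.883s here) comes from oslo.concurrency's lockutils, which serializes all build work per instance UUID. A minimal sketch of the same pattern; the function body is illustrative, not nova's code:

    # Sketch: per-instance serialization with oslo.concurrency, the mechanism
    # behind the Lock "b70ad842-..." acquired/released lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("b70ad842-3481-4e25-a66f-16f7ed5a6ab3")
    def _locked_build():
        ...   # build-and-run body; only one holder per instance at a time

    _locked_build()
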
Dec 06 06:57:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:34.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:34.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:34 compute-1 nova_compute[226101]: 2025-12-06 06:57:34.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:34 compute-1 nova_compute[226101]: 2025-12-06 06:57:34.592 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 06:57:34 compute-1 nova_compute[226101]: 2025-12-06 06:57:34.670 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 06:57:34 compute-1 nova_compute[226101]: 2025-12-06 06:57:34.672 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:34 compute-1 nova_compute[226101]: 2025-12-06 06:57:34.672 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 06:57:34 compute-1 nova_compute[226101]: 2025-12-06 06:57:34.692 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:34 compute-1 ceph-mon[81689]: pgmap v1086: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 75 op/s
Dec 06 06:57:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4043231983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:36.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:36.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:36 compute-1 nova_compute[226101]: 2025-12-06 06:57:36.708 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:36 compute-1 nova_compute[226101]: 2025-12-06 06:57:36.709 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:57:36 compute-1 ceph-mon[81689]: pgmap v1087: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 06 06:57:37 compute-1 nova_compute[226101]: 2025-12-06 06:57:37.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:37 compute-1 nova_compute[226101]: 2025-12-06 06:57:37.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:37 compute-1 nova_compute[226101]: 2025-12-06 06:57:37.676 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:37 compute-1 nova_compute[226101]: 2025-12-06 06:57:37.676 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:37 compute-1 nova_compute[226101]: 2025-12-06 06:57:37.677 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:37 compute-1 nova_compute[226101]: 2025-12-06 06:57:37.677 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:57:37 compute-1 nova_compute[226101]: 2025-12-06 06:57:37.677 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/13983779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:38.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:38 compute-1 nova_compute[226101]: 2025-12-06 06:57:38.128 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
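
The resource audit sizes the RBD-backed disk pool by shelling out to ceph df (0.450s in the run above). A small sketch of issuing the same command and reading the cluster totals; the stats key names follow the ceph df JSON schema and should be checked against your ceph release:

    # Sketch: fetch cluster capacity the way the audit above does.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
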
Dec 06 06:57:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:57:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:38.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:57:38 compute-1 ceph-mon[81689]: pgmap v1088: 305 pgs: 305 active+clean; 88 MiB data, 220 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 118 op/s
Dec 06 06:57:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/13983779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:39 compute-1 nova_compute[226101]: 2025-12-06 06:57:39.111 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:57:39 compute-1 nova_compute[226101]: 2025-12-06 06:57:39.111 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:57:39 compute-1 nova_compute[226101]: 2025-12-06 06:57:39.281 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:57:39 compute-1 nova_compute[226101]: 2025-12-06 06:57:39.283 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5206MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:57:39 compute-1 nova_compute[226101]: 2025-12-06 06:57:39.283 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:39 compute-1 nova_compute[226101]: 2025-12-06 06:57:39.284 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:40 compute-1 ceph-mon[81689]: pgmap v1089: 305 pgs: 305 active+clean; 88 MiB data, 220 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 06:57:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:40.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:40.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3990925264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:42 compute-1 podman[228409]: 2025-12-06 06:57:42.116125778 +0000 UTC m=+0.098272741 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:57:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:42.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:42.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:42 compute-1 ceph-mon[81689]: pgmap v1090: 305 pgs: 305 active+clean; 88 MiB data, 220 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 06:57:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1985528722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2467075325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1122354135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:44.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:44.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:44 compute-1 sudo[228437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:44 compute-1 sudo[228437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:44 compute-1 sudo[228437]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:44 compute-1 sudo[228462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:57:44 compute-1 sudo[228462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:44 compute-1 sudo[228462]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:44 compute-1 sudo[228487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:44 compute-1 sudo[228487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:44 compute-1 sudo[228487]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:44 compute-1 sudo[228512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:57:44 compute-1 sudo[228512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:45 compute-1 ceph-mon[81689]: pgmap v1091: 305 pgs: 305 active+clean; 110 MiB data, 231 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 962 KiB/s wr, 98 op/s
Dec 06 06:57:45 compute-1 nova_compute[226101]: 2025-12-06 06:57:45.480 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance b70ad842-3481-4e25-a66f-16f7ed5a6ab3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 06:57:45 compute-1 nova_compute[226101]: 2025-12-06 06:57:45.481 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:57:45 compute-1 nova_compute[226101]: 2025-12-06 06:57:45.481 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:57:45 compute-1 nova_compute[226101]: 2025-12-06 06:57:45.505 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 06:57:45 compute-1 nova_compute[226101]: 2025-12-06 06:57:45.525 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 06:57:45 compute-1 nova_compute[226101]: 2025-12-06 06:57:45.526 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
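
The inventory just written into the ProviderTree checks out against placement's capacity rule, schedulable = (total - reserved) * allocation_ratio. A worked check with the numbers from the line above:

    # Worked check of the logged inventory against placement semantics.
    vcpu = (8 - 0) * 4.0         # 32 schedulable vCPUs
    ram = (7680 - 512) * 1.0     # 7168 MB schedulable
    disk = (20 - 0) * 0.9        # 18 GB; the later ceph df pass reserves 1 GB
    assert (vcpu, ram, disk) == (32.0, 7168.0, 18.0)
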
Dec 06 06:57:45 compute-1 nova_compute[226101]: 2025-12-06 06:57:45.541 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 06:57:45 compute-1 nova_compute[226101]: 2025-12-06 06:57:45.564 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 06:57:45 compute-1 podman[228607]: 2025-12-06 06:57:45.595247859 +0000 UTC m=+0.319197376 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:57:45 compute-1 nova_compute[226101]: 2025-12-06 06:57:45.606 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:45 compute-1 podman[228607]: 2025-12-06 06:57:45.72184796 +0000 UTC m=+0.445797457 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.042 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.048 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:57:46 compute-1 sudo[228512]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:46.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.244 226109 ERROR nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [req-357aa105-b6fc-46cc-b717-c8d4af0d7019] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-357aa105-b6fc-46cc-b717-c8d4af0d7019"}]}
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.260 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.297 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.298 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.315 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.371 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
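
The 409 a few lines up ("resource provider generation conflict") is placement's optimistic-concurrency answer: another writer bumped the provider generation between nova's read and its PUT, so the client refetches inventories, aggregates, and traits (the three Refreshing lines above) and will retry with the new generation. A hedged sketch of that compare-and-swap loop against the placement REST API; the endpoint is a placeholder and auth is omitted (nova really goes through keystoneauth, not bare requests):

    # Sketch: generation-guarded inventory PUT with retry on 409, the pattern
    # behind the conflict-and-refresh sequence logged above.
    import requests   # assumes python-requests is available

    BASE = "http://placement.example.com"   # placeholder, no auth shown
    RP = "466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83"

    def put_inventory(inventories, retries=3):
        for _ in range(retries):
            url = f"{BASE}/resource_providers/{RP}/inventories"
            gen = requests.get(url).json()["resource_provider_generation"]
            resp = requests.put(url, json={
                "resource_provider_generation": gen,   # compare-and-swap token
                "inventories": inventories,
            })
            if resp.status_code != 409:   # anything but a generation conflict
                return resp
        raise RuntimeError("placement kept conflicting; giving up")
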
Dec 06 06:57:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:46.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:46 compute-1 ceph-mon[81689]: pgmap v1092: 305 pgs: 305 active+clean; 134 MiB data, 241 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec 06 06:57:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3000776520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/413436200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.558 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.685 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.686 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.708 226109 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.787 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3418748782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.993 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:46 compute-1 nova_compute[226101]: 2025-12-06 06:57:46.998 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:57:47 compute-1 nova_compute[226101]: 2025-12-06 06:57:47.036 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updated inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 06 06:57:47 compute-1 nova_compute[226101]: 2025-12-06 06:57:47.037 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 06:57:47 compute-1 nova_compute[226101]: 2025-12-06 06:57:47.037 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:57:47 compute-1 nova_compute[226101]: 2025-12-06 06:57:47.076 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:57:47 compute-1 nova_compute[226101]: 2025-12-06 06:57:47.076 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 7.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:47 compute-1 nova_compute[226101]: 2025-12-06 06:57:47.077 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:47 compute-1 nova_compute[226101]: 2025-12-06 06:57:47.082 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 06:57:47 compute-1 nova_compute[226101]: 2025-12-06 06:57:47.083 226109 INFO nova.compute.claims [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Claim successful on node compute-1.ctlplane.example.com
Dec 06 06:57:47 compute-1 sudo[228771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:47 compute-1 sudo[228771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:47 compute-1 sudo[228771]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:47 compute-1 sudo[228796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:57:47 compute-1 sudo[228796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:47 compute-1 sudo[228796]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:47 compute-1 sudo[228821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:47 compute-1 sudo[228821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:47 compute-1 sudo[228821]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:47 compute-1 sudo[228846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:57:47 compute-1 sudo[228846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:47 compute-1 nova_compute[226101]: 2025-12-06 06:57:47.390 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/777465336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3418748782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2115016062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:47 compute-1 sudo[228846]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:47 compute-1 sudo[228922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:47 compute-1 sudo[228922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:47 compute-1 sudo[228922]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:47 compute-1 sudo[228947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:57:47 compute-1 sudo[228947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:47 compute-1 sudo[228947]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:47 compute-1 sudo[228973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:47 compute-1 sudo[228973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:47 compute-1 sudo[228973]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:47 compute-1 sudo[228998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 06 06:57:47 compute-1 sudo[228998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/512715773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.000 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.006 226109 DEBUG nova.compute.provider_tree [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.065 226109 DEBUG nova.scheduler.client.report [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.076 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.076 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.077 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.077 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:57:48 compute-1 sudo[228998]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:48.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.160 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.161 226109 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.194 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.274 226109 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.275 226109 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.309 226109 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.333 226109 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 06:57:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:48.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.455 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-b70ad842-3481-4e25-a66f-16f7ed5a6ab3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.455 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-b70ad842-3481-4e25-a66f-16f7ed5a6ab3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.455 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.456 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid b70ad842-3481-4e25-a66f-16f7ed5a6ab3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.509 226109 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.510 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.511 226109 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Creating image(s)
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.541 226109 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.570 226109 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.597 226109 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.603 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:48 compute-1 ceph-mon[81689]: pgmap v1093: 305 pgs: 305 active+clean; 146 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 48 op/s
Dec 06 06:57:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3930106838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2428242993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/512715773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:57:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:57:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:57:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:57:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.665 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.666 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.667 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.667 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.691 226109 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:48 compute-1 nova_compute[226101]: 2025-12-06 06:57:48.695 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:49 compute-1 nova_compute[226101]: 2025-12-06 06:57:49.031 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:57:49 compute-1 nova_compute[226101]: 2025-12-06 06:57:49.038 226109 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Automatically allocating a network for project 066c314d67e347f6a49e8e3e27998441. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Dec 06 06:57:49 compute-1 nova_compute[226101]: 2025-12-06 06:57:49.317 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:57:49 compute-1 nova_compute[226101]: 2025-12-06 06:57:49.336 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-b70ad842-3481-4e25-a66f-16f7ed5a6ab3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:57:49 compute-1 nova_compute[226101]: 2025-12-06 06:57:49.336 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 06:57:49 compute-1 nova_compute[226101]: 2025-12-06 06:57:49.337 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:49 compute-1 nova_compute[226101]: 2025-12-06 06:57:49.337 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:49 compute-1 nova_compute[226101]: 2025-12-06 06:57:49.337 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:49 compute-1 nova_compute[226101]: 2025-12-06 06:57:49.338 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:49 compute-1 nova_compute[226101]: 2025-12-06 06:57:49.767 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:49 compute-1 nova_compute[226101]: 2025-12-06 06:57:49.865 226109 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] resizing rbd image cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 06:57:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:50.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:50.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:51 compute-1 ceph-mon[81689]: pgmap v1094: 305 pgs: 305 active+clean; 155 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 81 op/s
Dec 06 06:57:51 compute-1 nova_compute[226101]: 2025-12-06 06:57:51.282 226109 DEBUG nova.objects.instance [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'migration_context' on Instance uuid cb2d7ff9-9629-4807-b46f-1172e2f1f4f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:57:51 compute-1 nova_compute[226101]: 2025-12-06 06:57:51.302 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 06:57:51 compute-1 nova_compute[226101]: 2025-12-06 06:57:51.302 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Ensure instance console log exists: /var/lib/nova/instances/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 06:57:51 compute-1 nova_compute[226101]: 2025-12-06 06:57:51.303 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:51 compute-1 nova_compute[226101]: 2025-12-06 06:57:51.303 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:51 compute-1 nova_compute[226101]: 2025-12-06 06:57:51.303 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:52 compute-1 podman[229209]: 2025-12-06 06:57:52.077197311 +0000 UTC m=+0.059887828 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:57:52 compute-1 podman[229210]: 2025-12-06 06:57:52.097375955 +0000 UTC m=+0.081983433 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 06:57:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:57:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:52.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:57:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:52.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:53 compute-1 ceph-mon[81689]: pgmap v1095: 305 pgs: 305 active+clean; 155 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 81 op/s
Dec 06 06:57:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:54.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4214025489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/594518666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:54.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:54 compute-1 sudo[229248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:54 compute-1 sudo[229248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:54 compute-1 sudo[229248]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:54 compute-1 sudo[229273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:57:54 compute-1 sudo[229273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:54 compute-1 sudo[229273]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:55 compute-1 ceph-mon[81689]: pgmap v1096: 305 pgs: 305 active+clean; 211 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.6 MiB/s wr, 179 op/s
Dec 06 06:57:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:56.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:56 compute-1 ceph-mon[81689]: pgmap v1097: 305 pgs: 305 active+clean; 260 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.6 MiB/s wr, 213 op/s
Dec 06 06:57:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3377158659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2996594464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2594476977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:56.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Dec 06 06:57:57 compute-1 ceph-mon[81689]: osdmap e154: 3 total, 3 up, 3 in
Dec 06 06:57:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Dec 06 06:57:57 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 06 06:57:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:57:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:58.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:57:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:57:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:58.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:58 compute-1 ceph-mon[81689]: pgmap v1099: 305 pgs: 305 active+clean; 292 MiB data, 359 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.1 MiB/s wr, 272 op/s
Dec 06 06:57:58 compute-1 ceph-mon[81689]: osdmap e155: 3 total, 3 up, 3 in
Dec 06 06:57:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1243932209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:00.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:00.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:00 compute-1 ceph-mon[81689]: pgmap v1101: 305 pgs: 305 active+clean; 306 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 11 MiB/s wr, 350 op/s
Dec 06 06:58:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/708831470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:01.614 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:01.615 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:01.615 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3242452057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.113 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "86a61a20-48cb-49ff-9dc5-9ad7a87bfd47" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.114 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "86a61a20-48cb-49ff-9dc5-9ad7a87bfd47" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.130 226109 DEBUG nova.compute.manager [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 06:58:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:58:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:02.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.211 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.212 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.219 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.220 226109 INFO nova.compute.claims [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Claim successful on node compute-1.ctlplane.example.com
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.410 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:02.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:02 compute-1 ceph-mon[81689]: pgmap v1102: 305 pgs: 305 active+clean; 306 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 8.1 MiB/s wr, 204 op/s
Dec 06 06:58:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:58:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3709163343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.827 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.833 226109 DEBUG nova.compute.provider_tree [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.857 226109 DEBUG nova.scheduler.client.report [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.881 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.882 226109 DEBUG nova.compute.manager [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.921 226109 DEBUG nova.compute.manager [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.921 226109 DEBUG nova.network.neutron [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.956 226109 INFO nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 06:58:02 compute-1 nova_compute[226101]: 2025-12-06 06:58:02.972 226109 DEBUG nova.compute.manager [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.083 226109 DEBUG nova.compute.manager [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.084 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.084 226109 INFO nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Creating image(s)
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.108 226109 DEBUG nova.storage.rbd_utils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.133 226109 DEBUG nova.storage.rbd_utils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.157 226109 DEBUG nova.storage.rbd_utils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.160 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.223 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.224 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.224 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.224 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.249 226109 DEBUG nova.storage.rbd_utils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.253 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.516 226109 DEBUG nova.network.neutron [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.516 226109 DEBUG nova.compute.manager [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.528 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.602 226109 DEBUG nova.storage.rbd_utils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] resizing rbd image 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.716 226109 DEBUG nova.objects.instance [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lazy-loading 'migration_context' on Instance uuid 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3709163343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.731 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.732 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Ensure instance console log exists: /var/lib/nova/instances/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.732 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.732 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.733 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.734 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.738 226109 WARNING nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.742 226109 DEBUG nova.virt.libvirt.host [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.743 226109 DEBUG nova.virt.libvirt.host [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.746 226109 DEBUG nova.virt.libvirt.host [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.746 226109 DEBUG nova.virt.libvirt.host [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.747 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.747 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.748 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.748 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.748 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.749 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.749 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.749 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.749 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.749 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.750 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.750 226109 DEBUG nova.virt.hardware [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 06:58:03 compute-1 nova_compute[226101]: 2025-12-06 06:58:03.752 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:58:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:04.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:58:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:58:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2344159975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.239 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.266 226109 DEBUG nova.storage.rbd_utils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.270 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:04.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:04 compute-1 sshd-session[229187]: Connection closed by 14.63.196.175 port 36924 [preauth]
Dec 06 06:58:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:58:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3348113514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.686 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.688 226109 DEBUG nova.objects.instance [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.703 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] End _get_guest_xml xml=<domain type="kvm">
Dec 06 06:58:04 compute-1 nova_compute[226101]:   <uuid>86a61a20-48cb-49ff-9dc5-9ad7a87bfd47</uuid>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   <name>instance-00000008</name>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   <metadata>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <nova:name>tempest-LiveMigrationNegativeTest-server-2143738407</nova:name>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 06:58:03</nova:creationTime>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <nova:user uuid="16a339a76ea148bc97bcb8e720ea8511">tempest-LiveMigrationNegativeTest-2087231600-project-member</nova:user>
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <nova:project uuid="cc0441dee99e4b51ac652297c4221de3">tempest-LiveMigrationNegativeTest-2087231600</nova:project>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   </metadata>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <system>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <entry name="serial">86a61a20-48cb-49ff-9dc5-9ad7a87bfd47</entry>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <entry name="uuid">86a61a20-48cb-49ff-9dc5-9ad7a87bfd47</entry>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     </system>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   <os>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   </os>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   <features>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <apic/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   </features>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   </clock>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   </cpu>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   <devices>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk">
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       </source>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       </auth>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk.config">
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       </source>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 06:58:04 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       </auth>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47/console.log" append="off"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     </serial>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <video>
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     </video>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     </rng>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 06:58:04 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 06:58:04 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 06:58:04 compute-1 nova_compute[226101]:   </devices>
Dec 06 06:58:04 compute-1 nova_compute[226101]: </domain>
Dec 06 06:58:04 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 06:58:04 compute-1 ceph-mon[81689]: pgmap v1103: 305 pgs: 305 active+clean; 333 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.7 MiB/s wr, 240 op/s
Dec 06 06:58:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2344159975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3348113514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.761 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.762 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.762 226109 INFO nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Using config drive
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.792 226109 DEBUG nova.storage.rbd_utils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.918 226109 INFO nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Creating config drive at /var/lib/nova/instances/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47/disk.config
Dec 06 06:58:04 compute-1 nova_compute[226101]: 2025-12-06 06:58:04.922 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxr6fiwia execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.047 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxr6fiwia" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.073 226109 DEBUG nova.storage.rbd_utils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] rbd image 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.076 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47/disk.config 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.229 226109 DEBUG oslo_concurrency.processutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47/disk.config 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.230 226109 INFO nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Deleting local config drive /var/lib/nova/instances/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47/disk.config because it was imported into RBD.
Dec 06 06:58:05 compute-1 systemd-machined[190302]: New machine qemu-2-instance-00000008.
Dec 06 06:58:05 compute-1 systemd[1]: Started Virtual Machine qemu-2-instance-00000008.
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.834 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004285.833881, 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.835 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] VM Resumed (Lifecycle Event)
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.837 226109 DEBUG nova.compute.manager [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.838 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.841 226109 INFO nova.virt.libvirt.driver [-] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Instance spawned successfully.
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.841 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.859 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.866 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.869 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.869 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.870 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.870 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.871 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.871 226109 DEBUG nova.virt.libvirt.driver [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.904 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.904 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004285.8374965, 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.904 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] VM Started (Lifecycle Event)
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.958 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.961 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.997 226109 INFO nova.compute.manager [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Took 2.91 seconds to spawn the instance on the hypervisor.
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.998 226109 DEBUG nova.compute.manager [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:05 compute-1 nova_compute[226101]: 2025-12-06 06:58:05.999 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:58:06 compute-1 nova_compute[226101]: 2025-12-06 06:58:06.066 226109 INFO nova.compute.manager [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Took 3.89 seconds to build instance.
Dec 06 06:58:06 compute-1 nova_compute[226101]: 2025-12-06 06:58:06.091 226109 DEBUG oslo_concurrency.lockutils [None req-36abd007-6c82-4391-9a2b-aee8acbfdcce 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "86a61a20-48cb-49ff-9dc5-9ad7a87bfd47" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 3.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:58:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:06.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:58:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Dec 06 06:58:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:06.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:07 compute-1 ceph-mon[81689]: pgmap v1104: 305 pgs: 305 active+clean; 364 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 7.9 MiB/s wr, 286 op/s
Dec 06 06:58:07 compute-1 ceph-mon[81689]: osdmap e156: 3 total, 3 up, 3 in
Dec 06 06:58:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3106078905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/364414299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:07 compute-1 nova_compute[226101]: 2025-12-06 06:58:07.787 226109 DEBUG nova.objects.instance [None req-8d57809a-cd55-4ac2-8c87-a738110fb0e7 79e744f8c10146349a0dec453b81c3b5 79f1b914fc1648b485eeb2c0976a8d07 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:07 compute-1 nova_compute[226101]: 2025-12-06 06:58:07.805 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004287.8054385, 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:07 compute-1 nova_compute[226101]: 2025-12-06 06:58:07.806 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] VM Paused (Lifecycle Event)
Dec 06 06:58:07 compute-1 nova_compute[226101]: 2025-12-06 06:58:07.824 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:07 compute-1 nova_compute[226101]: 2025-12-06 06:58:07.827 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:58:07 compute-1 nova_compute[226101]: 2025-12-06 06:58:07.850 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 06 06:58:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:08.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:08.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:08 compute-1 ceph-mon[81689]: pgmap v1106: 305 pgs: 305 active+clean; 359 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.8 MiB/s wr, 317 op/s
Dec 06 06:58:08 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 06 06:58:08 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Consumed 2.716s CPU time.
Dec 06 06:58:08 compute-1 systemd-machined[190302]: Machine qemu-2-instance-00000008 terminated.
Dec 06 06:58:08 compute-1 nova_compute[226101]: 2025-12-06 06:58:08.721 226109 DEBUG nova.compute.manager [None req-8d57809a-cd55-4ac2-8c87-a738110fb0e7 79e744f8c10146349a0dec453b81c3b5 79f1b914fc1648b485eeb2c0976a8d07 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2833273441' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:58:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2833273441' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:58:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:10.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:10.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:10 compute-1 ceph-mon[81689]: pgmap v1107: 305 pgs: 305 active+clean; 363 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.6 MiB/s wr, 338 op/s
Dec 06 06:58:12 compute-1 ceph-mon[81689]: pgmap v1108: 305 pgs: 305 active+clean; 363 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.6 MiB/s wr, 338 op/s
Dec 06 06:58:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:12.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:12.216 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:58:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:12.218 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 06:58:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:12.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:12 compute-1 nova_compute[226101]: 2025-12-06 06:58:12.934 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "86a61a20-48cb-49ff-9dc5-9ad7a87bfd47" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:12 compute-1 nova_compute[226101]: 2025-12-06 06:58:12.934 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "86a61a20-48cb-49ff-9dc5-9ad7a87bfd47" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:12 compute-1 nova_compute[226101]: 2025-12-06 06:58:12.934 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "86a61a20-48cb-49ff-9dc5-9ad7a87bfd47-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:12 compute-1 nova_compute[226101]: 2025-12-06 06:58:12.935 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "86a61a20-48cb-49ff-9dc5-9ad7a87bfd47-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:12 compute-1 nova_compute[226101]: 2025-12-06 06:58:12.935 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "86a61a20-48cb-49ff-9dc5-9ad7a87bfd47-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
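Editor's note: the Acquiring/acquired/released DEBUG triplets above are emitted by oslo.concurrency's lock helpers (the lockutils.py:404/409/423 call sites in each line). A minimal sketch of the pattern, not nova's actual code:

    from oslo_concurrency import lockutils

    # Decorator form: serializes callers on a named in-process lock and logs
    # the "Acquiring lock ... / Lock ... acquired / ... released" lines.
    @lockutils.synchronized('86a61a20-48cb-49ff-9dc5-9ad7a87bfd47-events')
    def _clear_events():
        pass  # body runs with the per-instance events lock held

    # Context-manager form, equivalent for ad-hoc critical sections;
    # wait/held durations are logged on entry and exit.
    with lockutils.lock('86a61a20-48cb-49ff-9dc5-9ad7a87bfd47'):
        pass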
Dec 06 06:58:12 compute-1 nova_compute[226101]: 2025-12-06 06:58:12.936 226109 INFO nova.compute.manager [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Terminating instance
Dec 06 06:58:12 compute-1 nova_compute[226101]: 2025-12-06 06:58:12.937 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "refresh_cache-86a61a20-48cb-49ff-9dc5-9ad7a87bfd47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:58:12 compute-1 nova_compute[226101]: 2025-12-06 06:58:12.937 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquired lock "refresh_cache-86a61a20-48cb-49ff-9dc5-9ad7a87bfd47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:58:12 compute-1 nova_compute[226101]: 2025-12-06 06:58:12.937 226109 DEBUG nova.network.neutron [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 06:58:13 compute-1 podman[229668]: 2025-12-06 06:58:13.140298954 +0000 UTC m=+0.084546093 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 06:58:13 compute-1 nova_compute[226101]: 2025-12-06 06:58:13.314 226109 DEBUG nova.network.neutron [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:58:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:13 compute-1 nova_compute[226101]: 2025-12-06 06:58:13.829 226109 DEBUG nova.network.neutron [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:13 compute-1 nova_compute[226101]: 2025-12-06 06:58:13.867 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Releasing lock "refresh_cache-86a61a20-48cb-49ff-9dc5-9ad7a87bfd47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:58:13 compute-1 nova_compute[226101]: 2025-12-06 06:58:13.867 226109 DEBUG nova.compute.manager [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 06:58:13 compute-1 nova_compute[226101]: 2025-12-06 06:58:13.873 226109 INFO nova.virt.libvirt.driver [-] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Instance destroyed successfully.
Dec 06 06:58:13 compute-1 nova_compute[226101]: 2025-12-06 06:58:13.873 226109 DEBUG nova.objects.instance [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lazy-loading 'resources' on Instance uuid 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:14.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:14.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:14 compute-1 ceph-mon[81689]: pgmap v1109: 305 pgs: 305 active+clean; 414 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 7.0 MiB/s wr, 324 op/s
Dec 06 06:58:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2324399139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.588 226109 INFO nova.virt.libvirt.driver [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Deleting instance files /var/lib/nova/instances/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_del
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.589 226109 INFO nova.virt.libvirt.driver [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Deletion of /var/lib/nova/instances/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_del complete
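Editor's note: the delete pair above (instance directory renamed to <uuid>_del, then removed) is roughly a recursive directory delete. A sketch assuming plain shutil semantics, not nova's exact cleanup flags:

    import shutil

    # Removes the renamed instance directory logged above; ignore_errors
    # models a best-effort cleanup (an assumption for illustration).
    shutil.rmtree(
        '/var/lib/nova/instances/86a61a20-48cb-49ff-9dc5-9ad7a87bfd47_del',
        ignore_errors=True)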
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.648 226109 DEBUG nova.virt.libvirt.host [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.649 226109 INFO nova.virt.libvirt.host [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] UEFI support detected
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.660 226109 INFO nova.compute.manager [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Took 0.79 seconds to destroy the instance on the hypervisor.
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.661 226109 DEBUG oslo.service.loopingcall [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.662 226109 DEBUG nova.compute.manager [-] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.662 226109 DEBUG nova.network.neutron [-] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.844 226109 DEBUG nova.network.neutron [-] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.858 226109 DEBUG nova.network.neutron [-] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.877 226109 INFO nova.compute.manager [-] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Took 0.22 seconds to deallocate network for instance.
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.929 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:14 compute-1 nova_compute[226101]: 2025-12-06 06:58:14.930 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:15 compute-1 nova_compute[226101]: 2025-12-06 06:58:15.029 226109 DEBUG oslo_concurrency.processutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:58:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1471238150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:15 compute-1 nova_compute[226101]: 2025-12-06 06:58:15.515 226109 DEBUG oslo_concurrency.processutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
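Editor's note: the "Running cmd (subprocess)" / "CMD ... returned: 0" pair above is oslo.concurrency's processutils wrapper around a child process. A sketch of the same invocation, arguments copied verbatim from the log line:

    from oslo_concurrency import processutils

    # Executes the ceph CLI as logged above; returns (stdout, stderr) and
    # raises ProcessExecutionError on a non-zero exit code.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')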
Dec 06 06:58:15 compute-1 nova_compute[226101]: 2025-12-06 06:58:15.521 226109 DEBUG nova.compute.provider_tree [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:58:15 compute-1 nova_compute[226101]: 2025-12-06 06:58:15.543 226109 DEBUG nova.scheduler.client.report [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:58:15 compute-1 nova_compute[226101]: 2025-12-06 06:58:15.571 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/440116575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1471238150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:15 compute-1 nova_compute[226101]: 2025-12-06 06:58:15.602 226109 INFO nova.scheduler.client.report [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Deleted allocations for instance 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47
Dec 06 06:58:15 compute-1 nova_compute[226101]: 2025-12-06 06:58:15.693 226109 DEBUG oslo_concurrency.lockutils [None req-a443b2bc-754a-4809-95fb-106cfce6e937 16a339a76ea148bc97bcb8e720ea8511 cc0441dee99e4b51ac652297c4221de3 - - default default] Lock "86a61a20-48cb-49ff-9dc5-9ad7a87bfd47" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:16.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:58:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:16.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:58:16 compute-1 ceph-mon[81689]: pgmap v1110: 305 pgs: 305 active+clean; 423 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 6.6 MiB/s wr, 304 op/s
Dec 06 06:58:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:18.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:18.220 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
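Editor's note: the DbSetCommand transaction above writes the processed nb_cfg back into Chassis_Private.external_ids. A sketch of what such a call could look like through ovsdbapp's generic db_set() command, assuming 'api' is an already-connected Idl-backed southbound API object (not shown in the log); the logged command also carries if_exists=True:

    # 'api' is a hypothetical connected ovsdbapp API instance.
    api.db_set(
        'Chassis_Private', '03fe054d-d727-4af3-9c5e-92e57505f242',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),
    ).execute(check_error=True)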
Dec 06 06:58:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:18.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:18 compute-1 ceph-mon[81689]: pgmap v1111: 305 pgs: 305 active+clean; 410 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.4 MiB/s wr, 315 op/s
Dec 06 06:58:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:20.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:20 compute-1 nova_compute[226101]: 2025-12-06 06:58:20.414 226109 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Automatically allocated network: {'id': 'fa805a2c-a79c-458b-b658-8e0534714a02', 'name': 'auto_allocated_network', 'tenant_id': '066c314d67e347f6a49e8e3e27998441', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['04b3c0e5-fc90-42d4-bc2f-73e704cb01a0', 'd4a716c0-b329-403f-9680-e6cc8e535b63'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-12-06T06:57:49Z', 'updated_at': '2025-12-06T06:58:07Z', 'revision_number': 4, 'project_id': '066c314d67e347f6a49e8e3e27998441'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Dec 06 06:58:20 compute-1 nova_compute[226101]: 2025-12-06 06:58:20.424 226109 WARNING oslo_policy.policy [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 06 06:58:20 compute-1 nova_compute[226101]: 2025-12-06 06:58:20.424 226109 WARNING oslo_policy.policy [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 06 06:58:20 compute-1 nova_compute[226101]: 2025-12-06 06:58:20.426 226109 DEBUG nova.policy [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5122185c6194067bdb22d6ba8205dca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '066c314d67e347f6a49e8e3e27998441', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
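Editor's note: the failed network:attach_external_network check above is an oslo.policy authorization evaluated against a context carrying only the 'reader' and 'member' roles. An illustrative re-creation with a stand-in rule string (nova's real default differs and lives in nova.policies):

    from oslo_config import cfg
    from oslo_policy import policy

    cfg.CONF([])  # parse empty args so option access is initialized
    enforcer = policy.Enforcer(cfg.CONF)
    # 'role:admin' is an assumed rule text, for illustration only.
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'role:admin'))
    creds = {'roles': ['reader', 'member'],
             'project_id': '066c314d67e347f6a49e8e3e27998441'}
    print(enforcer.authorize('network:attach_external_network', {}, creds))
    # -> False, matching the "Policy check ... failed" DEBUG above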
Dec 06 06:58:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:20.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:20 compute-1 ceph-mon[81689]: pgmap v1112: 305 pgs: 305 active+clean; 360 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.4 MiB/s wr, 225 op/s
Dec 06 06:58:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/723393079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1967313379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/539297012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:22.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:58:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1805.4 total, 600.0 interval
                                           Cumulative writes: 7860 writes, 31K keys, 7860 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7860 writes, 1658 syncs, 4.74 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1407 writes, 4222 keys, 1407 commit groups, 1.0 writes per commit group, ingest: 3.35 MB, 0.01 MB/s
                                           Interval WAL: 1407 writes, 603 syncs, 2.33 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
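Editor's note: the WAL ratios in the RocksDB stats dump above are internally consistent; a quick arithmetic check:

    # Cumulative: 7860 WAL writes over 1658 syncs.
    print(round(7860 / 1658, 2))  # 4.74 -> "4.74 writes per sync"
    # Interval: 1407 writes over 603 syncs.
    print(round(1407 / 603, 2))   # 2.33 -> "2.33 writes per sync"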
Dec 06 06:58:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:22.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:22 compute-1 ceph-mon[81689]: pgmap v1113: 305 pgs: 305 active+clean; 360 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 148 op/s
Dec 06 06:58:23 compute-1 podman[229737]: 2025-12-06 06:58:23.0922631 +0000 UTC m=+0.078993882 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 06:58:23 compute-1 podman[229736]: 2025-12-06 06:58:23.09226556 +0000 UTC m=+0.081572542 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
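Editor's note: the health_status=healthy events above come from podman's healthcheck timers running the configured test ('mount': '/var/lib/openstack/healthchecks/...', 'test': '/openstack/healthcheck'). The same check can be triggered by hand; a sketch via subprocess:

    import subprocess

    # Runs each container's configured healthcheck once; a zero exit code
    # means healthy, mirroring the periodic events logged above.
    for name in ('ovn_metadata_agent', 'multipathd'):
        subprocess.run(['podman', 'healthcheck', 'run', name], check=True)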
Dec 06 06:58:23 compute-1 nova_compute[226101]: 2025-12-06 06:58:23.194 226109 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Successfully created port: 040e5552-4bb0-419d-a471-f27509f99f8e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 06:58:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:23 compute-1 nova_compute[226101]: 2025-12-06 06:58:23.723 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004288.7222133, 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:23 compute-1 nova_compute[226101]: 2025-12-06 06:58:23.724 226109 INFO nova.compute.manager [-] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] VM Stopped (Lifecycle Event)
Dec 06 06:58:23 compute-1 ceph-mon[81689]: pgmap v1114: 305 pgs: 305 active+clean; 306 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 213 op/s
Dec 06 06:58:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:24.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:24 compute-1 nova_compute[226101]: 2025-12-06 06:58:24.354 226109 DEBUG nova.compute.manager [None req-e1b3b6ba-29da-4c47-a498-f23b904a6c7a - - - - - -] [instance: 86a61a20-48cb-49ff-9dc5-9ad7a87bfd47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:24.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:25 compute-1 nova_compute[226101]: 2025-12-06 06:58:25.852 226109 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Successfully updated port: 040e5552-4bb0-419d-a471-f27509f99f8e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 06:58:25 compute-1 nova_compute[226101]: 2025-12-06 06:58:25.875 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "refresh_cache-cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:58:25 compute-1 nova_compute[226101]: 2025-12-06 06:58:25.875 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquired lock "refresh_cache-cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:58:25 compute-1 nova_compute[226101]: 2025-12-06 06:58:25.876 226109 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 06:58:26 compute-1 nova_compute[226101]: 2025-12-06 06:58:26.007 226109 DEBUG nova.compute.manager [req-24632235-f9d7-4fd0-bcad-6deeb43ecf5f req-99790558-5857-4edf-90e6-ff6978eea4b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Received event network-changed-040e5552-4bb0-419d-a471-f27509f99f8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:26 compute-1 nova_compute[226101]: 2025-12-06 06:58:26.007 226109 DEBUG nova.compute.manager [req-24632235-f9d7-4fd0-bcad-6deeb43ecf5f req-99790558-5857-4edf-90e6-ff6978eea4b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Refreshing instance network info cache due to event network-changed-040e5552-4bb0-419d-a471-f27509f99f8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 06:58:26 compute-1 nova_compute[226101]: 2025-12-06 06:58:26.008 226109 DEBUG oslo_concurrency.lockutils [req-24632235-f9d7-4fd0-bcad-6deeb43ecf5f req-99790558-5857-4edf-90e6-ff6978eea4b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:58:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:26.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:26 compute-1 nova_compute[226101]: 2025-12-06 06:58:26.202 226109 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:58:26 compute-1 ceph-mon[81689]: pgmap v1115: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 778 KiB/s wr, 163 op/s
Dec 06 06:58:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:26.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:28.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.219 226109 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Updating instance_info_cache with network_info: [{"id": "040e5552-4bb0-419d-a471-f27509f99f8e", "address": "fa:16:3e:3f:08:b2", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::36d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap040e5552-4b", "ovs_interfaceid": "040e5552-4bb0-419d-a471-f27509f99f8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.238 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Releasing lock "refresh_cache-cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.239 226109 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Instance network_info: |[{"id": "040e5552-4bb0-419d-a471-f27509f99f8e", "address": "fa:16:3e:3f:08:b2", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::36d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap040e5552-4b", "ovs_interfaceid": "040e5552-4bb0-419d-a471-f27509f99f8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.239 226109 DEBUG oslo_concurrency.lockutils [req-24632235-f9d7-4fd0-bcad-6deeb43ecf5f req-99790558-5857-4edf-90e6-ff6978eea4b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.240 226109 DEBUG nova.network.neutron [req-24632235-f9d7-4fd0-bcad-6deeb43ecf5f req-99790558-5857-4edf-90e6-ff6978eea4b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Refreshing network info cache for port 040e5552-4bb0-419d-a471-f27509f99f8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.245 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Start _get_guest_xml network_info=[{"id": "040e5552-4bb0-419d-a471-f27509f99f8e", "address": "fa:16:3e:3f:08:b2", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::36d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap040e5552-4b", "ovs_interfaceid": "040e5552-4bb0-419d-a471-f27509f99f8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.250 226109 WARNING nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.259 226109 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.260 226109 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.267 226109 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.267 226109 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.268 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.268 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.269 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.269 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.269 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.269 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.269 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.270 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.270 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.270 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.270 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.271 226109 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
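Editor's note: the topology walk above (limits 65536:65536:65536, one vCPU, result 1:1:1) amounts to enumerating factorizations of the vCPU count under per-dimension caps. An illustrative sketch, not nova's implementation:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """All (sockets, cores, threads) triples whose product is vcpus."""
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the one topology logged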
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.273 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:28.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:28 compute-1 ceph-mon[81689]: pgmap v1116: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 88 KiB/s wr, 153 op/s
Dec 06 06:58:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:58:28 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/643756439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.692 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.715 226109 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
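Editor's note: the "rbd image ... does not exist" DEBUG above corresponds to an ImageNotFound from librbd when opening the config-drive image. A sketch with the Python rados/rbd bindings; the pool name 'vms' is an assumption (the log does not name the pool):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')  # pool name is an assumption
    try:
        rbd.Image(ioctx,
                  'cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk.config').close()
    except rbd.ImageNotFound:
        print('image absent, matching the DEBUG line above')
    finally:
        ioctx.close()
        cluster.shutdown()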
Dec 06 06:58:28 compute-1 nova_compute[226101]: 2025-12-06 06:58:28.719 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:58:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1229977498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.149 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.151 226109 DEBUG nova.virt.libvirt.vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T06:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2024863607-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2024863607-2',id=4,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='066c314d67e347f6a49e8e3e27998441',ramdisk_id='',reservation_id='r-bb9k3jjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1572395875',owner_user_name='tempest-AutoAllocateNetworkTest-1572395875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T06:57:48Z,user_data=None,user_id='e5122185c6194067bdb22d6ba8205dca',uuid=cb2d7ff9-9629-4807-b46f-1172e2f1f4f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "040e5552-4bb0-419d-a471-f27509f99f8e", "address": "fa:16:3e:3f:08:b2", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::36d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap040e5552-4b", "ovs_interfaceid": "040e5552-4bb0-419d-a471-f27509f99f8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.151 226109 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converting VIF {"id": "040e5552-4bb0-419d-a471-f27509f99f8e", "address": "fa:16:3e:3f:08:b2", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::36d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap040e5552-4b", "ovs_interfaceid": "040e5552-4bb0-419d-a471-f27509f99f8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.152 226109 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:08:b2,bridge_name='br-int',has_traffic_filtering=True,id=040e5552-4bb0-419d-a471-f27509f99f8e,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap040e5552-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.154 226109 DEBUG nova.objects.instance [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb2d7ff9-9629-4807-b46f-1172e2f1f4f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.166 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] End _get_guest_xml xml=<domain type="kvm">
Dec 06 06:58:29 compute-1 nova_compute[226101]:   <uuid>cb2d7ff9-9629-4807-b46f-1172e2f1f4f6</uuid>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   <name>instance-00000004</name>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   <metadata>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <nova:name>tempest-tempest.common.compute-instance-2024863607-2</nova:name>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 06:58:28</nova:creationTime>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <nova:user uuid="e5122185c6194067bdb22d6ba8205dca">tempest-AutoAllocateNetworkTest-1572395875-project-member</nova:user>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <nova:project uuid="066c314d67e347f6a49e8e3e27998441">tempest-AutoAllocateNetworkTest-1572395875</nova:project>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <nova:port uuid="040e5552-4bb0-419d-a471-f27509f99f8e">
Dec 06 06:58:29 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="fdfe:381f:8400::36d" ipVersion="6"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.1.0.57" ipVersion="4"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   </metadata>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <system>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <entry name="serial">cb2d7ff9-9629-4807-b46f-1172e2f1f4f6</entry>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <entry name="uuid">cb2d7ff9-9629-4807-b46f-1172e2f1f4f6</entry>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     </system>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   <os>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   </os>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   <features>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <apic/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   </features>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   </clock>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   </cpu>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   <devices>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk">
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       </source>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       </auth>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk.config">
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       </source>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 06:58:29 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       </auth>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:3f:08:b2"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <target dev="tap040e5552-4b"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     </interface>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6/console.log" append="off"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     </serial>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <video>
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     </video>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     </rng>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 06:58:29 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 06:58:29 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 06:58:29 compute-1 nova_compute[226101]:   </devices>
Dec 06 06:58:29 compute-1 nova_compute[226101]: </domain>
Dec 06 06:58:29 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
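The block ending above is the complete libvirt guest definition nova generated for instance-00000004, dumped by `_get_guest_xml`. For anyone mining these dumps, here is a minimal stdlib sketch that pulls the RBD-backed disks and their monitor hosts out of such a `<domain>` document; `DOMAIN_XML` and `rbd_disks` are hypothetical names, and the XML itself is meant to be pasted in from the log.

```python
# Hedged sketch: extract RBD disk sources and mon hosts from a libvirt
# domain XML like the one logged above. Stdlib only.
import xml.etree.ElementTree as ET

# Stand-in; paste the full <domain>...</domain> block from the log here.
DOMAIN_XML = """<domain type="kvm">...</domain>"""

def rbd_disks(domain_xml):
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            # e.g. ("vms/cb2d7ff9-..._disk", [("192.168.122.100", "6789"), ...])
            hosts = [(h.get("name"), h.get("port"))
                     for h in src.findall("host")]
            yield src.get("name"), hosts

if __name__ == "__main__":
    for name, hosts in rbd_disks(DOMAIN_XML):
        print(name, hosts)
```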
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.167 226109 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Preparing to wait for external event network-vif-plugged-040e5552-4bb0-419d-a471-f27509f99f8e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.167 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.167 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.168 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.168 226109 DEBUG nova.virt.libvirt.vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T06:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2024863607-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2024863607-2',id=4,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='066c314d67e347f6a49e8e3e27998441',ramdisk_id='',reservation_id='r-bb9k3jjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1572395875',owner_user_name='tempest-AutoAllocateNetworkTest-1572395875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T06:57:48Z,user_data=None,user_id='e5122185c6194067bdb22d6ba8205dca',uuid=cb2d7ff9-9629-4807-b46f-1172e2f1f4f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "040e5552-4bb0-419d-a471-f27509f99f8e", "address": "fa:16:3e:3f:08:b2", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::36d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap040e5552-4b", "ovs_interfaceid": "040e5552-4bb0-419d-a471-f27509f99f8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.169 226109 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converting VIF {"id": "040e5552-4bb0-419d-a471-f27509f99f8e", "address": "fa:16:3e:3f:08:b2", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::36d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap040e5552-4b", "ovs_interfaceid": "040e5552-4bb0-419d-a471-f27509f99f8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.169 226109 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:08:b2,bridge_name='br-int',has_traffic_filtering=True,id=040e5552-4bb0-419d-a471-f27509f99f8e,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap040e5552-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.170 226109 DEBUG os_vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:08:b2,bridge_name='br-int',has_traffic_filtering=True,id=040e5552-4bb0-419d-a471-f27509f99f8e,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap040e5552-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.208 226109 DEBUG ovsdbapp.backend.ovs_idl [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.208 226109 DEBUG ovsdbapp.backend.ovs_idl [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.209 226109 DEBUG ovsdbapp.backend.ovs_idl [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.209 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.210 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.210 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.210 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.211 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.213 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.222 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.222 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.223 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.224 226109 INFO oslo.privsep.daemon [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpnjooo47g/privsep.sock']
Dec 06 06:58:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/643756439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1018108383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1229977498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/788469425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.891 226109 INFO oslo.privsep.daemon [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Spawned new privsep daemon via rootwrap
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.773 229837 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.778 229837 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.781 229837 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 06 06:58:29 compute-1 nova_compute[226101]: 2025-12-06 06:58:29.781 229837 INFO oslo.privsep.daemon [-] privsep daemon running as pid 229837
Dec 06 06:58:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:30.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.219 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.220 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap040e5552-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.220 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap040e5552-4b, col_values=(('external_ids', {'iface-id': '040e5552-4bb0-419d-a471-f27509f99f8e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:08:b2', 'vm-uuid': 'cb2d7ff9-9629-4807-b46f-1172e2f1f4f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.222 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:30 compute-1 NetworkManager[49031]: <info>  [1765004310.2236] manager: (tap040e5552-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.224 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.229 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.230 226109 INFO os_vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:08:b2,bridge_name='br-int',has_traffic_filtering=True,id=040e5552-4bb0-419d-a471-f27509f99f8e,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap040e5552-4b')
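The transaction logged at 06:58:30.220 (AddPortCommand followed by DbSetCommand) is what actually wires the tap into br-int: add the port, then stamp the Interface row with the iface-id, MAC and VM UUID that OVN matches on. A hedged sketch of issuing the same two commands through ovsdbapp against the local switch at tcp:127.0.0.1:6640 follows; treat the exact class paths and signatures as assumptions about the ovsdbapp API, while the port name, iface-id, MAC and vm-uuid are copied from the log.

```python
# Hedged sketch, not os-vif's code: replay the logged txn n=1
# (idx=0 AddPortCommand, idx=1 DbSetCommand) via ovsdbapp.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

external_ids = {
    "iface-id": "040e5552-4bb0-419d-a471-f27509f99f8e",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:3f:08:b2",
    "vm-uuid": "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6",
}

# One transaction, two commands, mirroring the log above.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap040e5552-4b", may_exist=True))
    txn.add(api.db_set("Interface", "tap040e5552-4b",
                       ("external_ids", external_ids)))
```

Once the Interface row carries these external_ids, ovn-controller on the chassis claims the logical port, which is exactly the "Claiming lport" sequence that appears further down at 06:58:31.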
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.307 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.309 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.310 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] No VIF found with MAC fa:16:3e:3f:08:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.311 226109 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Using config drive
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.353 226109 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:58:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:30.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.707 226109 DEBUG nova.network.neutron [req-24632235-f9d7-4fd0-bcad-6deeb43ecf5f req-99790558-5857-4edf-90e6-ff6978eea4b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Updated VIF entry in instance network info cache for port 040e5552-4bb0-419d-a471-f27509f99f8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.709 226109 DEBUG nova.network.neutron [req-24632235-f9d7-4fd0-bcad-6deeb43ecf5f req-99790558-5857-4edf-90e6-ff6978eea4b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Updating instance_info_cache with network_info: [{"id": "040e5552-4bb0-419d-a471-f27509f99f8e", "address": "fa:16:3e:3f:08:b2", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::36d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap040e5552-4b", "ovs_interfaceid": "040e5552-4bb0-419d-a471-f27509f99f8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.731 226109 DEBUG oslo_concurrency.lockutils [req-24632235-f9d7-4fd0-bcad-6deeb43ecf5f req-99790558-5857-4edf-90e6-ff6978eea4b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.807 226109 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Creating config drive at /var/lib/nova/instances/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6/disk.config
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.811 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr02ef96a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.932 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr02ef96a" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
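One caution when reading the mkisofs line above: processutils logs the command space-joined, so the multi-word `-publisher` value ("OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9") looks like five unquoted arguments when it is really a single argv element. A sketch of the same invocation as an explicit argv list, with every value taken from the log:

```python
# Hedged sketch: the config-drive ISO build from the log, as an argv list
# so the single multi-word -publisher argument is unambiguous.
import subprocess

subprocess.check_call([
    "/usr/bin/mkisofs",
    "-o", "/var/lib/nova/instances/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6/disk.config",
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r",
    "-V", "config-2",          # volume label cloud-init looks for
    "/tmp/tmpr02ef96a",        # staging dir from the log
])
```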
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.957 226109 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:30 compute-1 nova_compute[226101]: 2025-12-06 06:58:30.960 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6/disk.config cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:31 compute-1 ceph-mon[81689]: pgmap v1117: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 38 KiB/s wr, 161 op/s
Dec 06 06:58:31 compute-1 nova_compute[226101]: 2025-12-06 06:58:31.176 226109 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6/disk.config cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:31 compute-1 nova_compute[226101]: 2025-12-06 06:58:31.177 226109 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Deleting local config drive /var/lib/nova/instances/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6/disk.config because it was imported into RBD.
Dec 06 06:58:31 compute-1 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 06 06:58:31 compute-1 kernel: tap040e5552-4b: entered promiscuous mode
Dec 06 06:58:31 compute-1 NetworkManager[49031]: <info>  [1765004311.2382] manager: (tap040e5552-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Dec 06 06:58:31 compute-1 nova_compute[226101]: 2025-12-06 06:58:31.239 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:31 compute-1 ovn_controller[130279]: 2025-12-06T06:58:31Z|00027|binding|INFO|Claiming lport 040e5552-4bb0-419d-a471-f27509f99f8e for this chassis.
Dec 06 06:58:31 compute-1 ovn_controller[130279]: 2025-12-06T06:58:31Z|00028|binding|INFO|040e5552-4bb0-419d-a471-f27509f99f8e: Claiming fa:16:3e:3f:08:b2 10.1.0.57 fdfe:381f:8400::36d
Dec 06 06:58:31 compute-1 nova_compute[226101]: 2025-12-06 06:58:31.243 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:31 compute-1 systemd-udevd[229918]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:58:31 compute-1 NetworkManager[49031]: <info>  [1765004311.2765] device (tap040e5552-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 06:58:31 compute-1 NetworkManager[49031]: <info>  [1765004311.2773] device (tap040e5552-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 06:58:31 compute-1 systemd-machined[190302]: New machine qemu-3-instance-00000004.
Dec 06 06:58:31 compute-1 nova_compute[226101]: 2025-12-06 06:58:31.316 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:31 compute-1 ovn_controller[130279]: 2025-12-06T06:58:31Z|00029|binding|INFO|Setting lport 040e5552-4bb0-419d-a471-f27509f99f8e ovn-installed in OVS
Dec 06 06:58:31 compute-1 nova_compute[226101]: 2025-12-06 06:58:31.323 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:31 compute-1 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Dec 06 06:58:31 compute-1 ovn_controller[130279]: 2025-12-06T06:58:31Z|00030|binding|INFO|Setting lport 040e5552-4bb0-419d-a471-f27509f99f8e up in Southbound
Dec 06 06:58:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:31.430 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:08:b2 10.1.0.57 fdfe:381f:8400::36d'], port_security=['fa:16:3e:3f:08:b2 10.1.0.57 fdfe:381f:8400::36d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.57/26 fdfe:381f:8400::36d/64', 'neutron:device_id': 'cb2d7ff9-9629-4807-b46f-1172e2f1f4f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa805a2c-a79c-458b-b658-8e0534714a02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '066c314d67e347f6a49e8e3e27998441', 'neutron:revision_number': '2', 'neutron:security_group_ids': '460e28b2-b45f-4429-a2b9-8f57e45c4e5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a47e992-7383-43fa-bf34-745cbd8b74f1, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=040e5552-4bb0-419d-a471-f27509f99f8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:58:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:31.432 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 040e5552-4bb0-419d-a471-f27509f99f8e in datapath fa805a2c-a79c-458b-b658-8e0534714a02 bound to our chassis
Dec 06 06:58:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:31.434 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa805a2c-a79c-458b-b658-8e0534714a02
Dec 06 06:58:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:31.435 139580 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpp6wi3hx5/privsep.sock']
Dec 06 06:58:31 compute-1 nova_compute[226101]: 2025-12-06 06:58:31.799 226109 DEBUG nova.compute.manager [req-f90fa7c9-6341-4ed1-bba5-ad07b3a8af2d req-2b087e0d-3edc-4264-a623-9f4c1dda0268 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Received event network-vif-plugged-040e5552-4bb0-419d-a471-f27509f99f8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:31 compute-1 nova_compute[226101]: 2025-12-06 06:58:31.799 226109 DEBUG oslo_concurrency.lockutils [req-f90fa7c9-6341-4ed1-bba5-ad07b3a8af2d req-2b087e0d-3edc-4264-a623-9f4c1dda0268 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:31 compute-1 nova_compute[226101]: 2025-12-06 06:58:31.800 226109 DEBUG oslo_concurrency.lockutils [req-f90fa7c9-6341-4ed1-bba5-ad07b3a8af2d req-2b087e0d-3edc-4264-a623-9f4c1dda0268 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:31 compute-1 nova_compute[226101]: 2025-12-06 06:58:31.800 226109 DEBUG oslo_concurrency.lockutils [req-f90fa7c9-6341-4ed1-bba5-ad07b3a8af2d req-2b087e0d-3edc-4264-a623-9f4c1dda0268 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:31 compute-1 nova_compute[226101]: 2025-12-06 06:58:31.800 226109 DEBUG nova.compute.manager [req-f90fa7c9-6341-4ed1-bba5-ad07b3a8af2d req-2b087e0d-3edc-4264-a623-9f4c1dda0268 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Processing event network-vif-plugged-040e5552-4bb0-419d-a471-f27509f99f8e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 06:58:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:32.091 139580 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 06:58:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:32.091 139580 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpp6wi3hx5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 06:58:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:31.970 229936 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 06:58:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:31.974 229936 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 06:58:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:31.977 229936 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 06 06:58:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:31.977 229936 INFO oslo.privsep.daemon [-] privsep daemon running as pid 229936
Dec 06 06:58:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:32.094 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[85e24071-11f8-4b2a-b7d8-0c12a6fd83ce]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:32.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:32 compute-1 ceph-mon[81689]: pgmap v1118: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 38 KiB/s wr, 123 op/s
Dec 06 06:58:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:32.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:32.639 229936 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:32.640 229936 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:32.640 229936 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.071 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.317 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c303d71e-4ebf-4545-a48f-2119184b4f7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.319 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfa805a2c-a1 in ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 06:58:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.321 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfa805a2c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 06:58:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.321 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4a262b95-4d49-47cb-ab35-0efcab847047]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.324 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8873de0c-ef8f-4490-ae1e-86ad6b97fe55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.345 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ea374778-330a-4d0d-a487-a0369f9c9990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.358 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1b4d51-1cbd-45ab-be14-cff22eba3804]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.360 139580 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp4oieq33_/privsep.sock']
Dec 06 06:58:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.952 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004313.9515936, cb2d7ff9-9629-4807-b46f-1172e2f1f4f6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.952 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] VM Started (Lifecycle Event)
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.955 226109 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.958 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.961 226109 INFO nova.virt.libvirt.driver [-] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Instance spawned successfully.
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.962 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.989 226109 DEBUG nova.compute.manager [req-f20cd0ff-9835-443e-a74a-1c80b576a6b9 req-68b4fc51-f0e0-4fee-b9ca-b225dbb77cf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Received event network-vif-plugged-040e5552-4bb0-419d-a471-f27509f99f8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.990 226109 DEBUG oslo_concurrency.lockutils [req-f20cd0ff-9835-443e-a74a-1c80b576a6b9 req-68b4fc51-f0e0-4fee-b9ca-b225dbb77cf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.990 226109 DEBUG oslo_concurrency.lockutils [req-f20cd0ff-9835-443e-a74a-1c80b576a6b9 req-68b4fc51-f0e0-4fee-b9ca-b225dbb77cf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.991 226109 DEBUG oslo_concurrency.lockutils [req-f20cd0ff-9835-443e-a74a-1c80b576a6b9 req-68b4fc51-f0e0-4fee-b9ca-b225dbb77cf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.991 226109 DEBUG nova.compute.manager [req-f20cd0ff-9835-443e-a74a-1c80b576a6b9 req-68b4fc51-f0e0-4fee-b9ca-b225dbb77cf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] No waiting events found dispatching network-vif-plugged-040e5552-4bb0-419d-a471-f27509f99f8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.991 226109 WARNING nova.compute.manager [req-f20cd0ff-9835-443e-a74a-1c80b576a6b9 req-68b4fc51-f0e0-4fee-b9ca-b225dbb77cf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Received unexpected event network-vif-plugged-040e5552-4bb0-419d-a471-f27509f99f8e for instance with vm_state building and task_state spawning.
Dec 06 06:58:33 compute-1 nova_compute[226101]: 2025-12-06 06:58:33.994 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.000 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.003 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.003 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.004 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.004 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.005 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.005 226109 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:34.024 139580 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 06:58:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:34.026 139580 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4oieq33_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 06:58:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.900 229991 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 06:58:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.903 229991 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 06:58:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.905 229991 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 06 06:58:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:33.905 229991 INFO oslo.privsep.daemon [-] privsep daemon running as pid 229991
Dec 06 06:58:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:34.028 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[75ea57b7-ea91-4160-9362-8f621bacdda1]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.032 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.032 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004313.951723, cb2d7ff9-9629-4807-b46f-1172e2f1f4f6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.032 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] VM Paused (Lifecycle Event)
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.063 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.066 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004313.9578779, cb2d7ff9-9629-4807-b46f-1172e2f1f4f6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.067 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] VM Resumed (Lifecycle Event)
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.076 226109 INFO nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Took 45.57 seconds to spawn the instance on the hypervisor.
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.077 226109 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.085 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.087 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.136 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.152 226109 INFO nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Took 47.39 seconds to build instance.
Dec 06 06:58:34 compute-1 nova_compute[226101]: 2025-12-06 06:58:34.170 226109 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 47.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:34.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:34.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:34.586 229991 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:34.586 229991 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:34.586 229991 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:35 compute-1 ceph-mon[81689]: pgmap v1119: 305 pgs: 305 active+clean; 306 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 50 KiB/s wr, 162 op/s
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.209 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ee6d1c-ee6b-4af8-9ea1-19948f0c3849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:35 compute-1 nova_compute[226101]: 2025-12-06 06:58:35.223 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.225 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[00c3a920-82c0-415d-a4c0-d7ae4e9d3d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:35 compute-1 NetworkManager[49031]: <info>  [1765004315.2275] manager: (tapfa805a2c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Dec 06 06:58:35 compute-1 systemd-udevd[230004]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.261 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4b2095-5a63-4f17-8f92-a39a8558fbb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.268 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[415bb460-df54-4523-b23a-e22b7efc5f4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:35 compute-1 NetworkManager[49031]: <info>  [1765004315.2922] device (tapfa805a2c-a0): carrier: link connected
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.299 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[715a80f5-5681-4ee4-b6cd-5b093f3aee43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.316 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[30c8ce70-99f4-4042-b058-4ab9d59d925f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa805a2c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:f0:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459479, 'reachable_time': 26771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230022, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.333 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f93cdb-7217-4d13-a731-8460d3319333]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:f092'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459479, 'tstamp': 459479}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230023, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.351 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c077ddb0-0d49-4be5-92a2-b58b2b440db4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa805a2c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:f0:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459479, 'reachable_time': 26771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 230024, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.383 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[eec63771-36d7-48e4-b811-620e9cab09f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.442 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7e4b26-c9e3-4171-a0fb-e250c9b49bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.444 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa805a2c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.445 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.446 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa805a2c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:35 compute-1 NetworkManager[49031]: <info>  [1765004315.4491] manager: (tapfa805a2c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec 06 06:58:35 compute-1 kernel: tapfa805a2c-a0: entered promiscuous mode
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.452 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa805a2c-a0, col_values=(('external_ids', {'iface-id': '34b96bd0-4cf5-4098-a5e5-d0a4d39a953b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:35 compute-1 nova_compute[226101]: 2025-12-06 06:58:35.453 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:35 compute-1 ovn_controller[130279]: 2025-12-06T06:58:35Z|00031|binding|INFO|Releasing lport 34b96bd0-4cf5-4098-a5e5-d0a4d39a953b from this chassis (sb_readonly=0)
Dec 06 06:58:35 compute-1 nova_compute[226101]: 2025-12-06 06:58:35.467 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.469 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fa805a2c-a79c-458b-b658-8e0534714a02.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fa805a2c-a79c-458b-b658-8e0534714a02.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.470 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[30d31478-b35a-4c7a-8fe8-a48b73e0caf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.471 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: global
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-fa805a2c-a79c-458b-b658-8e0534714a02
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/fa805a2c-a79c-458b-b658-8e0534714a02.pid.haproxy
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID fa805a2c-a79c-458b-b658-8e0534714a02
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 06:58:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:35.472 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'env', 'PROCESS_TAG=haproxy-fa805a2c-a79c-458b-b658-8e0534714a02', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fa805a2c-a79c-458b-b658-8e0534714a02.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 06:58:35 compute-1 podman[230056]: 2025-12-06 06:58:35.918190729 +0000 UTC m=+0.049233570 container create 22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 06:58:35 compute-1 systemd[1]: Started libpod-conmon-22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be.scope.
Dec 06 06:58:35 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:58:35 compute-1 podman[230056]: 2025-12-06 06:58:35.893299067 +0000 UTC m=+0.024341908 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 06:58:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b502fdf2bf0af3d445650f323029f3739d2bdddf3c602290e1b1850001e3c7f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 06:58:36 compute-1 podman[230056]: 2025-12-06 06:58:36.007792647 +0000 UTC m=+0.138835518 container init 22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 06:58:36 compute-1 podman[230056]: 2025-12-06 06:58:36.013263935 +0000 UTC m=+0.144306786 container start 22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:58:36 compute-1 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[230071]: [NOTICE]   (230075) : New worker (230077) forked
Dec 06 06:58:36 compute-1 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[230071]: [NOTICE]   (230075) : Loading success.
Dec 06 06:58:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:36.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:58:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:36.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:58:36 compute-1 nova_compute[226101]: 2025-12-06 06:58:36.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:36 compute-1 nova_compute[226101]: 2025-12-06 06:58:36.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:58:36 compute-1 ceph-mon[81689]: pgmap v1120: 305 pgs: 305 active+clean; 318 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 147 op/s
Dec 06 06:58:37 compute-1 nova_compute[226101]: 2025-12-06 06:58:37.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:38 compute-1 nova_compute[226101]: 2025-12-06 06:58:38.095 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:38.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:58:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:38.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:58:38 compute-1 nova_compute[226101]: 2025-12-06 06:58:38.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:38 compute-1 nova_compute[226101]: 2025-12-06 06:58:38.609 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:39 compute-1 ceph-mon[81689]: pgmap v1121: 305 pgs: 305 active+clean; 326 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.9 MiB/s wr, 206 op/s
Dec 06 06:58:39 compute-1 nova_compute[226101]: 2025-12-06 06:58:39.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:39 compute-1 nova_compute[226101]: 2025-12-06 06:58:39.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:39 compute-1 nova_compute[226101]: 2025-12-06 06:58:39.612 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:39 compute-1 nova_compute[226101]: 2025-12-06 06:58:39.613 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:39 compute-1 nova_compute[226101]: 2025-12-06 06:58:39.613 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:39 compute-1 nova_compute[226101]: 2025-12-06 06:58:39.613 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:58:39 compute-1 nova_compute[226101]: 2025-12-06 06:58:39.613 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:58:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2084576725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.037 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.101 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.102 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.104 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.104 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:58:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:40.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.273 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.307 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.309 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4680MB free_disk=20.83694839477539GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.309 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.309 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.430 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance b70ad842-3481-4e25-a66f-16f7ed5a6ab3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.431 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance cb2d7ff9-9629-4807-b46f-1172e2f1f4f6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.431 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.431 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:58:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:58:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:40.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.488 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:40 compute-1 ceph-mon[81689]: pgmap v1122: 305 pgs: 305 active+clean; 328 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.1 MiB/s wr, 242 op/s
Dec 06 06:58:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:58:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/467992479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.896 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.903 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.917 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.942 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:58:40 compute-1 nova_compute[226101]: 2025-12-06 06:58:40.942 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2084576725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/467992479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:41 compute-1 nova_compute[226101]: 2025-12-06 06:58:41.935 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:41 compute-1 nova_compute[226101]: 2025-12-06 06:58:41.936 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:41 compute-1 nova_compute[226101]: 2025-12-06 06:58:41.936 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:58:41 compute-1 nova_compute[226101]: 2025-12-06 06:58:41.936 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:58:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:42.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:42.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:42 compute-1 nova_compute[226101]: 2025-12-06 06:58:42.699 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-b70ad842-3481-4e25-a66f-16f7ed5a6ab3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:58:42 compute-1 nova_compute[226101]: 2025-12-06 06:58:42.700 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-b70ad842-3481-4e25-a66f-16f7ed5a6ab3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:58:42 compute-1 nova_compute[226101]: 2025-12-06 06:58:42.700 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 06:58:42 compute-1 nova_compute[226101]: 2025-12-06 06:58:42.700 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid b70ad842-3481-4e25-a66f-16f7ed5a6ab3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:42 compute-1 ceph-mon[81689]: pgmap v1123: 305 pgs: 305 active+clean; 328 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Dec 06 06:58:42 compute-1 nova_compute[226101]: 2025-12-06 06:58:42.949 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:58:43 compute-1 nova_compute[226101]: 2025-12-06 06:58:43.097 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:43 compute-1 nova_compute[226101]: 2025-12-06 06:58:43.623 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:43 compute-1 nova_compute[226101]: 2025-12-06 06:58:43.640 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-b70ad842-3481-4e25-a66f-16f7ed5a6ab3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:58:43 compute-1 nova_compute[226101]: 2025-12-06 06:58:43.641 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 06:58:43 compute-1 nova_compute[226101]: 2025-12-06 06:58:43.641 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:43 compute-1 nova_compute[226101]: 2025-12-06 06:58:43.642 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:44 compute-1 podman[230132]: 2025-12-06 06:58:44.087910895 +0000 UTC m=+0.078029997 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:58:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:44.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:44.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
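[editor's note] The anonymous "HEAD / HTTP/1.0" requests recurring every two seconds are load-balancer health probes against radosgw, one per peer address. A rough Python equivalent of a single probe; the host and port here are assumptions, and http.client speaks HTTP/1.1 where the probes in the log use HTTP/1.0:

    import http.client

    # Hypothetical radosgw endpoint; only the HEAD-returns-200 exchange is
    # taken from the log.
    conn = http.client.HTTPConnection('192.168.122.101', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 means the gateway is answering
    conn.close()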
Dec 06 06:58:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1246655193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:45 compute-1 nova_compute[226101]: 2025-12-06 06:58:45.274 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:46 compute-1 nova_compute[226101]: 2025-12-06 06:58:46.177 226109 DEBUG oslo_concurrency.lockutils [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:46 compute-1 nova_compute[226101]: 2025-12-06 06:58:46.179 226109 DEBUG oslo_concurrency.lockutils [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:46 compute-1 nova_compute[226101]: 2025-12-06 06:58:46.179 226109 DEBUG oslo_concurrency.lockutils [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:46 compute-1 nova_compute[226101]: 2025-12-06 06:58:46.180 226109 DEBUG oslo_concurrency.lockutils [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:46 compute-1 nova_compute[226101]: 2025-12-06 06:58:46.180 226109 DEBUG oslo_concurrency.lockutils [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:46 compute-1 nova_compute[226101]: 2025-12-06 06:58:46.181 226109 INFO nova.compute.manager [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Terminating instance
Dec 06 06:58:46 compute-1 nova_compute[226101]: 2025-12-06 06:58:46.182 226109 DEBUG nova.compute.manager [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 06:58:46 compute-1 ceph-mon[81689]: pgmap v1124: 305 pgs: 305 active+clean; 316 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Dec 06 06:58:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1330517347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1169068870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:46.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:58:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:46.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.099 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:48 compute-1 kernel: tap040e5552-4b (unregistering): left promiscuous mode
Dec 06 06:58:48 compute-1 NetworkManager[49031]: <info>  [1765004328.1443] device (tap040e5552-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 06:58:48 compute-1 ovn_controller[130279]: 2025-12-06T06:58:48Z|00032|binding|INFO|Releasing lport 040e5552-4bb0-419d-a471-f27509f99f8e from this chassis (sb_readonly=0)
Dec 06 06:58:48 compute-1 ovn_controller[130279]: 2025-12-06T06:58:48Z|00033|binding|INFO|Setting lport 040e5552-4bb0-419d-a471-f27509f99f8e down in Southbound
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.176 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:48 compute-1 ovn_controller[130279]: 2025-12-06T06:58:48Z|00034|binding|INFO|Removing iface tap040e5552-4b ovn-installed in OVS
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.178 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.184 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:08:b2 10.1.0.57 fdfe:381f:8400::36d'], port_security=['fa:16:3e:3f:08:b2 10.1.0.57 fdfe:381f:8400::36d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.57/26 fdfe:381f:8400::36d/64', 'neutron:device_id': 'cb2d7ff9-9629-4807-b46f-1172e2f1f4f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa805a2c-a79c-458b-b658-8e0534714a02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '066c314d67e347f6a49e8e3e27998441', 'neutron:revision_number': '4', 'neutron:security_group_ids': '460e28b2-b45f-4429-a2b9-8f57e45c4e5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a47e992-7383-43fa-bf34-745cbd8b74f1, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=040e5552-4bb0-419d-a471-f27509f99f8e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.185 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 040e5552-4bb0-419d-a471-f27509f99f8e in datapath fa805a2c-a79c-458b-b658-8e0534714a02 unbound from our chassis
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.187 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa805a2c-a79c-458b-b658-8e0534714a02, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.188 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[abe85d45-f63e-4a5d-b5e9-84a62e20a603]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.188 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 namespace which is not needed anymore
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.201 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:48 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec 06 06:58:48 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 13.766s CPU time.
Dec 06 06:58:48 compute-1 systemd-machined[190302]: Machine qemu-3-instance-00000004 terminated.
Dec 06 06:58:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:48.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:48 compute-1 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[230071]: [NOTICE]   (230075) : haproxy version is 2.8.14-c23fe91
Dec 06 06:58:48 compute-1 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[230071]: [NOTICE]   (230075) : path to executable is /usr/sbin/haproxy
Dec 06 06:58:48 compute-1 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[230071]: [WARNING]  (230075) : Exiting Master process...
Dec 06 06:58:48 compute-1 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[230071]: [WARNING]  (230075) : Exiting Master process...
Dec 06 06:58:48 compute-1 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[230071]: [ALERT]    (230075) : Current worker (230077) exited with code 143 (Terminated)
Dec 06 06:58:48 compute-1 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[230071]: [WARNING]  (230075) : All workers exited. Exiting... (0)
Dec 06 06:58:48 compute-1 systemd[1]: libpod-22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be.scope: Deactivated successfully.
Dec 06 06:58:48 compute-1 podman[230183]: 2025-12-06 06:58:48.315183751 +0000 UTC m=+0.041891152 container died 22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:58:48 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be-userdata-shm.mount: Deactivated successfully.
Dec 06 06:58:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-b502fdf2bf0af3d445650f323029f3739d2bdddf3c602290e1b1850001e3c7f4-merged.mount: Deactivated successfully.
Dec 06 06:58:48 compute-1 podman[230183]: 2025-12-06 06:58:48.345234612 +0000 UTC m=+0.071941993 container cleanup 22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 06:58:48 compute-1 systemd[1]: libpod-conmon-22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be.scope: Deactivated successfully.
Dec 06 06:58:48 compute-1 podman[230216]: 2025-12-06 06:58:48.400402061 +0000 UTC m=+0.038449899 container remove 22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.400 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.405 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.406 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8e93b441-c0e8-4f67-b968-7eaec5394c2a]: (4, ('Sat Dec  6 06:58:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 (22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be)\n22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be\nSat Dec  6 06:58:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 (22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be)\n22c85a184745f663c0bff0baa1d5a46491b51bf01abbab15c9afb4cfb1d085be\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.407 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[af1fe15b-1026-4176-87c3-d14176f7b79b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.408 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa805a2c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.410 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.413 226109 INFO nova.virt.libvirt.driver [-] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Instance destroyed successfully.
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.414 226109 DEBUG nova.objects.instance [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'resources' on Instance uuid cb2d7ff9-9629-4807-b46f-1172e2f1f4f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:48 compute-1 kernel: tapfa805a2c-a0: left promiscuous mode
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.423 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.428 226109 DEBUG nova.virt.libvirt.vif [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T06:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2024863607-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2024863607-2',id=4,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-12-06T06:58:34Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='066c314d67e347f6a49e8e3e27998441',ramdisk_id='',reservation_id='r-bb9k3jjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-1572395875',owner_user_name='tempest-AutoAllocateNetworkTest-1572395875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T06:58:34Z,user_data=None,user_id='e5122185c6194067bdb22d6ba8205dca',uuid=cb2d7ff9-9629-4807-b46f-1172e2f1f4f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "040e5552-4bb0-419d-a471-f27509f99f8e", "address": "fa:16:3e:3f:08:b2", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::36d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap040e5552-4b", "ovs_interfaceid": "040e5552-4bb0-419d-a471-f27509f99f8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.428 226109 DEBUG nova.network.os_vif_util [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converting VIF {"id": "040e5552-4bb0-419d-a471-f27509f99f8e", "address": "fa:16:3e:3f:08:b2", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::36d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap040e5552-4b", "ovs_interfaceid": "040e5552-4bb0-419d-a471-f27509f99f8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.429 226109 DEBUG nova.network.os_vif_util [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:08:b2,bridge_name='br-int',has_traffic_filtering=True,id=040e5552-4bb0-419d-a471-f27509f99f8e,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap040e5552-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.430 226109 DEBUG os_vif [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:08:b2,bridge_name='br-int',has_traffic_filtering=True,id=040e5552-4bb0-419d-a471-f27509f99f8e,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap040e5552-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.432 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.432 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9e49e5-ea42-4d13-8b89-8267a95f18a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.433 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap040e5552-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.434 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.435 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.437 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.440 226109 INFO os_vif [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:08:b2,bridge_name='br-int',has_traffic_filtering=True,id=040e5552-4bb0-419d-a471-f27509f99f8e,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap040e5552-4b')
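[editor's note] The unplug above reduces to one ovsdbapp transaction, DelPortCommand with if_exists=True so a repeat delete is a no-op. A sketch of issuing the same command directly with ovsdbapp, assuming the default OVS database socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))
    # Same arguments as the DelPortCommand logged above.
    api.del_port('tap040e5552-4b', bridge='br-int',
                 if_exists=True).execute(check_error=True)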
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.444 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1cccb5-e40b-48bf-ac2e-044fe7ecc946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.445 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a4aa6829-f3dd-4915-974f-9027362c99ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.460 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb50cce-4e45-44b6-9cff-279092ed2038]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459470, 'reachable_time': 25428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230253, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:48 compute-1 systemd[1]: run-netns-ovnmeta\x2dfa805a2c\x2da79c\x2d458b\x2db658\x2d8e0534714a02.mount: Deactivated successfully.
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.479 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 06:58:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:58:48.480 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[823efe7c-d0ef-46fb-83ad-a73e6229c0a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
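[editor's note] The namespace teardown above runs through neutron's privsep daemon, but the underlying operation is an ordinary netns removal. A minimal equivalent with pyroute2 (needs root; the namespace name is copied from the log):

    from pyroute2 import netns

    ns_name = 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02'
    # listnetns() guards against the namespace already being gone; remove()
    # unlinks /var/run/netns/<name>, as remove_netns in ip_lib reports above.
    if ns_name in netns.listnetns():
        netns.remove(ns_name)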
Dec 06 06:58:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:48.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.550 226109 DEBUG nova.compute.manager [req-5e2ac437-200f-4b5f-b760-712c217022e5 req-f33dd580-fdd2-49f9-a12f-672b7245bcbd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Received event network-vif-unplugged-040e5552-4bb0-419d-a471-f27509f99f8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.550 226109 DEBUG oslo_concurrency.lockutils [req-5e2ac437-200f-4b5f-b760-712c217022e5 req-f33dd580-fdd2-49f9-a12f-672b7245bcbd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.551 226109 DEBUG oslo_concurrency.lockutils [req-5e2ac437-200f-4b5f-b760-712c217022e5 req-f33dd580-fdd2-49f9-a12f-672b7245bcbd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.551 226109 DEBUG oslo_concurrency.lockutils [req-5e2ac437-200f-4b5f-b760-712c217022e5 req-f33dd580-fdd2-49f9-a12f-672b7245bcbd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.551 226109 DEBUG nova.compute.manager [req-5e2ac437-200f-4b5f-b760-712c217022e5 req-f33dd580-fdd2-49f9-a12f-672b7245bcbd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] No waiting events found dispatching network-vif-unplugged-040e5552-4bb0-419d-a471-f27509f99f8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 06:58:48 compute-1 nova_compute[226101]: 2025-12-06 06:58:48.551 226109 DEBUG nova.compute.manager [req-5e2ac437-200f-4b5f-b760-712c217022e5 req-f33dd580-fdd2-49f9-a12f-672b7245bcbd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Received event network-vif-unplugged-040e5552-4bb0-419d-a471-f27509f99f8e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Dec 06 06:58:48 compute-1 ceph-mon[81689]: pgmap v1125: 305 pgs: 305 active+clean; 310 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.2 MiB/s wr, 257 op/s
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.559297) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328559336, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2008, "num_deletes": 258, "total_data_size": 4647057, "memory_usage": 4725504, "flush_reason": "Manual Compaction"}
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328576966, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 3012158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20957, "largest_seqno": 22960, "table_properties": {"data_size": 3003971, "index_size": 4937, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17186, "raw_average_key_size": 19, "raw_value_size": 2987205, "raw_average_value_size": 3441, "num_data_blocks": 220, "num_entries": 868, "num_filter_entries": 868, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004171, "oldest_key_time": 1765004171, "file_creation_time": 1765004328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 17735 microseconds, and 6376 cpu microseconds.
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.577033) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 3012158 bytes OK
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.577050) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.579124) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.579167) EVENT_LOG_v1 {"time_micros": 1765004328579158, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.579187) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 4637999, prev total WAL file size 4637999, number of live WAL files 2.
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.580157) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323533' seq:72057594037927935, type:22 .. '6C6F676D00353037' seq:0, type:0; will stop at (end)
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(2941KB)], [42(7480KB)]
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328580196, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 10672469, "oldest_snapshot_seqno": -1}
Dec 06 06:58:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5192 keys, 10457299 bytes, temperature: kUnknown
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328636448, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 10457299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10420752, "index_size": 22485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 131385, "raw_average_key_size": 25, "raw_value_size": 10324873, "raw_average_value_size": 1988, "num_data_blocks": 924, "num_entries": 5192, "num_filter_entries": 5192, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765004328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.637813) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 10457299 bytes
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.639045) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.4 rd, 185.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 7.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(7.0) write-amplify(3.5) OK, records in: 5727, records dropped: 535 output_compression: NoCompression
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.639069) EVENT_LOG_v1 {"time_micros": 1765004328639060, "job": 24, "event": "compaction_finished", "compaction_time_micros": 56356, "compaction_time_cpu_micros": 22212, "output_level": 6, "num_output_files": 1, "total_output_size": 10457299, "num_input_records": 5727, "num_output_records": 5192, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328639720, "job": 24, "event": "table_file_deletion", "file_number": 44}
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328641005, "job": 24, "event": "table_file_deletion", "file_number": 42}
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.580110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.641128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.641134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.641135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.641137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:48.641139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
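[editor's note] The JOB 24 "compacted to:" line above packs the compaction accounting into a single line; its amplification figures follow directly from the logged megabyte counts, which is worth unpacking once:

    # Figures copied from the JOB 24 summary above (MB, as logged).
    in_l0, in_l6, out = 2.9, 7.3, 10.0
    write_amplify = out / in_l0                         # 10.0/2.9 ~= 3.45, logged "3.5"
    read_write_amplify = (in_l0 + in_l6 + out) / in_l0  # 20.2/2.9 ~= 6.97, logged "7.0"
    # Rate check: 10672469 input bytes / 56356 us ~= 189 MB/s, logged "189.4 rd".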
Dec 06 06:58:49 compute-1 ceph-mon[81689]: pgmap v1126: 305 pgs: 305 active+clean; 322 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.2 MiB/s wr, 242 op/s
Dec 06 06:58:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/913165207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3140234720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:49 compute-1 nova_compute[226101]: 2025-12-06 06:58:49.832 226109 INFO nova.virt.libvirt.driver [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Deleting instance files /var/lib/nova/instances/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_del
Dec 06 06:58:49 compute-1 nova_compute[226101]: 2025-12-06 06:58:49.833 226109 INFO nova.virt.libvirt.driver [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Deletion of /var/lib/nova/instances/cb2d7ff9-9629-4807-b46f-1172e2f1f4f6_del complete
Dec 06 06:58:49 compute-1 nova_compute[226101]: 2025-12-06 06:58:49.891 226109 INFO nova.compute.manager [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Took 3.71 seconds to destroy the instance on the hypervisor.
Dec 06 06:58:49 compute-1 nova_compute[226101]: 2025-12-06 06:58:49.892 226109 DEBUG oslo.service.loopingcall [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 06:58:49 compute-1 nova_compute[226101]: 2025-12-06 06:58:49.892 226109 DEBUG nova.compute.manager [-] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 06:58:49 compute-1 nova_compute[226101]: 2025-12-06 06:58:49.892 226109 DEBUG nova.network.neutron [-] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 06:58:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 06:58:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:50.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 06:58:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:50.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
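The beast lines above are radosgw's access log: remote client, user, timestamp, request line, HTTP status, response bytes, and request latency (here, HEAD / health probes from 192.168.122.100 and 192.168.122.102, all returning 200). A minimal Python sketch for pulling those fields out of a journal slice; the field layout is inferred from these lines alone, not from any documented radosgw format:

    import re, sys

    # Layout inferred from the beast lines above:
    # beast: <req-ptr>: <client> - <user> [<ts>] "<request>" <status> <bytes> ... latency=<secs>s
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[\d.]+)s')

    for line in sys.stdin:
        m = BEAST_RE.search(line)
        if m:
            print(m.group('client'), repr(m.group('req')),
                  m.group('status'), m.group('latency') + 's')

Fed the radosgw lines from this journal, it would confirm the probes all complete in a couple of milliseconds or less.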
Dec 06 06:58:50 compute-1 nova_compute[226101]: 2025-12-06 06:58:50.696 226109 DEBUG nova.compute.manager [req-0c20319e-00d5-4ba4-a9dd-3a7cd347a2d6 req-bb19ce47-9354-49e7-8803-de9ad1756d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Received event network-vif-plugged-040e5552-4bb0-419d-a471-f27509f99f8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:50 compute-1 nova_compute[226101]: 2025-12-06 06:58:50.697 226109 DEBUG oslo_concurrency.lockutils [req-0c20319e-00d5-4ba4-a9dd-3a7cd347a2d6 req-bb19ce47-9354-49e7-8803-de9ad1756d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:50 compute-1 nova_compute[226101]: 2025-12-06 06:58:50.697 226109 DEBUG oslo_concurrency.lockutils [req-0c20319e-00d5-4ba4-a9dd-3a7cd347a2d6 req-bb19ce47-9354-49e7-8803-de9ad1756d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:50 compute-1 nova_compute[226101]: 2025-12-06 06:58:50.698 226109 DEBUG oslo_concurrency.lockutils [req-0c20319e-00d5-4ba4-a9dd-3a7cd347a2d6 req-bb19ce47-9354-49e7-8803-de9ad1756d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:50 compute-1 nova_compute[226101]: 2025-12-06 06:58:50.698 226109 DEBUG nova.compute.manager [req-0c20319e-00d5-4ba4-a9dd-3a7cd347a2d6 req-bb19ce47-9354-49e7-8803-de9ad1756d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] No waiting events found dispatching network-vif-plugged-040e5552-4bb0-419d-a471-f27509f99f8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 06:58:50 compute-1 nova_compute[226101]: 2025-12-06 06:58:50.699 226109 WARNING nova.compute.manager [req-0c20319e-00d5-4ba4-a9dd-3a7cd347a2d6 req-bb19ce47-9354-49e7-8803-de9ad1756d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Received unexpected event network-vif-plugged-040e5552-4bb0-419d-a471-f27509f99f8e for instance with vm_state active and task_state deleting.
Dec 06 06:58:50 compute-1 ceph-mon[81689]: pgmap v1127: 305 pgs: 305 active+clean; 326 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 173 op/s
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.823770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330823896, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 283, "num_deletes": 251, "total_data_size": 75203, "memory_usage": 82072, "flush_reason": "Manual Compaction"}
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330826567, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 48844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22965, "largest_seqno": 23243, "table_properties": {"data_size": 46979, "index_size": 94, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4938, "raw_average_key_size": 18, "raw_value_size": 43312, "raw_average_value_size": 161, "num_data_blocks": 4, "num_entries": 269, "num_filter_entries": 269, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004329, "oldest_key_time": 1765004329, "file_creation_time": 1765004330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 2827 microseconds, and 1336 cpu microseconds.
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.826619) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 48844 bytes OK
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.826644) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.827915) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.827934) EVENT_LOG_v1 {"time_micros": 1765004330827929, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.827947) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 73067, prev total WAL file size 73067, number of live WAL files 2.
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.828428) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(47KB)], [45(10212KB)]
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330828564, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 10506143, "oldest_snapshot_seqno": -1}
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 4952 keys, 8462370 bytes, temperature: kUnknown
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330881478, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 8462370, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8429158, "index_size": 19726, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 127104, "raw_average_key_size": 25, "raw_value_size": 8339185, "raw_average_value_size": 1684, "num_data_blocks": 799, "num_entries": 4952, "num_filter_entries": 4952, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765004330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.881761) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8462370 bytes
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.883321) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.4 rd, 159.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.0 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(388.3) write-amplify(173.3) OK, records in: 5461, records dropped: 509 output_compression: NoCompression
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.883342) EVENT_LOG_v1 {"time_micros": 1765004330883333, "job": 26, "event": "compaction_finished", "compaction_time_micros": 52958, "compaction_time_cpu_micros": 17812, "output_level": 6, "num_output_files": 1, "total_output_size": 8462370, "num_input_records": 5461, "num_output_records": 4952, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330883498, "job": 26, "event": "table_file_deletion", "file_number": 47}
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330885751, "job": 26, "event": "table_file_deletion", "file_number": 45}
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.828284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.885897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.885907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.885909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.885912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-06:58:50.885914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
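The ceph-mon store.db activity above is RocksDB's own log: job 25 flushes a 283-entry memtable to L0 table #47 (48844 bytes), then job 26 manually compacts that table together with L6 table #45 into table #48, dropping 509 of 5461 records. Each EVENT_LOG_v1 line carries a valid JSON payload, so the sequence can be summarized mechanically; a sketch, assuming the input is a journal slice like the one above:

    import json, re, sys
    from collections import defaultdict

    EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})')

    # Group the structured EVENT_LOG_v1 payloads by RocksDB job id and
    # report what each job did and how long it spanned.
    jobs = defaultdict(list)
    for line in sys.stdin:
        m = EVENT_RE.search(line)
        if m:
            ev = json.loads(m.group(1))
            if ev.get('job') is not None:
                jobs[ev['job']].append(ev)

    for job, events in sorted(jobs.items()):
        kinds = [e['event'] for e in events]
        span = (events[-1]['time_micros'] - events[0]['time_micros']) / 1e6
        print(f'job {job}: {kinds} over {span:.3f}s')

On the lines above, job 26 spans about 55 ms, consistent with the 52958 compaction_time_micros it reports in compaction_finished.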
Dec 06 06:58:51 compute-1 nova_compute[226101]: 2025-12-06 06:58:51.494 226109 DEBUG nova.network.neutron [-] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:51 compute-1 nova_compute[226101]: 2025-12-06 06:58:51.518 226109 INFO nova.compute.manager [-] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Took 1.63 seconds to deallocate network for instance.
Dec 06 06:58:51 compute-1 nova_compute[226101]: 2025-12-06 06:58:51.560 226109 DEBUG oslo_concurrency.lockutils [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:51 compute-1 nova_compute[226101]: 2025-12-06 06:58:51.561 226109 DEBUG oslo_concurrency.lockutils [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:51 compute-1 nova_compute[226101]: 2025-12-06 06:58:51.630 226109 DEBUG oslo_concurrency.processutils [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:51 compute-1 nova_compute[226101]: 2025-12-06 06:58:51.670 226109 DEBUG nova.compute.manager [req-3483b59c-1055-478e-ae1f-d103c8950252 req-185abfe7-a66d-4cdd-9a8a-b4ec29741981 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Received event network-vif-deleted-040e5552-4bb0-419d-a471-f27509f99f8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:51 compute-1 ceph-mon[81689]: pgmap v1128: 305 pgs: 305 active+clean; 326 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 744 KiB/s rd, 2.2 MiB/s wr, 127 op/s
Dec 06 06:58:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:58:52 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/500948300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:52 compute-1 nova_compute[226101]: 2025-12-06 06:58:52.051 226109 DEBUG oslo_concurrency.processutils [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:52 compute-1 nova_compute[226101]: 2025-12-06 06:58:52.056 226109 DEBUG nova.compute.provider_tree [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:58:52 compute-1 nova_compute[226101]: 2025-12-06 06:58:52.075 226109 DEBUG nova.scheduler.client.report [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:58:52 compute-1 nova_compute[226101]: 2025-12-06 06:58:52.108 226109 DEBUG oslo_concurrency.lockutils [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:52 compute-1 nova_compute[226101]: 2025-12-06 06:58:52.129 226109 INFO nova.scheduler.client.report [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Deleted allocations for instance cb2d7ff9-9629-4807-b46f-1172e2f1f4f6
Dec 06 06:58:52 compute-1 nova_compute[226101]: 2025-12-06 06:58:52.220 226109 DEBUG oslo_concurrency.lockutils [None req-3176da86-af16-40db-826b-708e540ce702 e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "cb2d7ff9-9629-4807-b46f-1172e2f1f4f6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
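That closes the delete of instance cb2d7ff9-9629-4807-b46f-1172e2f1f4f6: 3.71 s to destroy it on the hypervisor, 1.63 s to deallocate its network, 6.041 s with the per-instance lock held overall. nova-compute prints those figures in a stable enough shape to scrape; a sketch, with regexes written against these lines only:

    import re, sys
    from collections import defaultdict

    TOOK_RE = re.compile(
        r'\[instance: (?P<uuid>[0-9a-f-]{36})\] Took (?P<secs>[\d.]+) '
        r'seconds to (?P<what>destroy the instance on the hypervisor|'
        r'deallocate network for instance)')
    LOCK_RE = re.compile(
        r'Lock "(?P<uuid>[0-9a-f-]{36})" "released" by '
        r'".*do_terminate_instance" :: held (?P<secs>[\d.]+)s')

    timings = defaultdict(dict)
    for line in sys.stdin:
        m = TOOK_RE.search(line)
        if m:
            timings[m.group('uuid')][m.group('what')] = float(m.group('secs'))
        m = LOCK_RE.search(line)
        if m:
            timings[m.group('uuid')]['terminate total'] = float(m.group('secs'))

    for uuid, t in timings.items():
        print(uuid, t)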
Dec 06 06:58:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:52.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:52.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:53 compute-1 nova_compute[226101]: 2025-12-06 06:58:53.101 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/500948300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:53 compute-1 nova_compute[226101]: 2025-12-06 06:58:53.435 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:54 compute-1 podman[230287]: 2025-12-06 06:58:54.080893667 +0000 UTC m=+0.059217469 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 06:58:54 compute-1 podman[230288]: 2025-12-06 06:58:54.094585566 +0000 UTC m=+0.073142215 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
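The two podman events above are the periodic healthcheck runs for the multipathd and ovn_metadata_agent containers, both healthy with a failing streak of 0. Podman flattens the container attributes into one parenthesized list, but the interesting keys can still be grabbed positionally; a sketch that assumes name=, health_status=, and health_failing_streak= appear in that order, as they do in these lines:

    import re, sys

    # Order of keys is an assumption taken from the two events above.
    HEALTH_RE = re.compile(
        r'container health_status .*?name=(?P<name>[^,]+),.*?'
        r'health_status=(?P<status>[^,]+),\s*'
        r'health_failing_streak=(?P<streak>\d+)')

    for line in sys.stdin:
        m = HEALTH_RE.search(line)
        if m:
            print(f"{m.group('name'):25s} {m.group('status'):10s} "
                  f"streak={m.group('streak')}")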
Dec 06 06:58:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:58:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:54.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:58:54 compute-1 ceph-mon[81689]: pgmap v1129: 305 pgs: 305 active+clean; 302 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 754 KiB/s rd, 2.2 MiB/s wr, 141 op/s
Dec 06 06:58:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:54.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:55 compute-1 sudo[230326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:55 compute-1 sudo[230326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-1 sudo[230326]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-1 sudo[230351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:58:55 compute-1 sudo[230351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-1 sudo[230351]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-1 sudo[230376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:55 compute-1 sudo[230376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-1 sudo[230376]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-1 sudo[230401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 06:58:55 compute-1 sudo[230401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-1 sudo[230401]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-1 sudo[230445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:55 compute-1 sudo[230445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-1 sudo[230445]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-1 sudo[230470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:58:55 compute-1 sudo[230470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-1 sudo[230470]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-1 sudo[230495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:55 compute-1 sudo[230495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-1 sudo[230495]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:56 compute-1 sudo[230520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:58:56 compute-1 sudo[230520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:56.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:56 compute-1 sudo[230520]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:56.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:56 compute-1 sudo[230575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:56 compute-1 sudo[230575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:56 compute-1 sudo[230575]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:56 compute-1 sudo[230600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:58:56 compute-1 sudo[230600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:56 compute-1 sudo[230600]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:56 compute-1 sudo[230625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:56 compute-1 sudo[230625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:56 compute-1 sudo[230625]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:56 compute-1 sudo[230650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 06:58:56 compute-1 sudo[230650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
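This sudo burst is cephadm's standard remote pattern: the mgr connects as ceph-admin, probes with /bin/true and /bin/which python3, then runs the deployed cephadm binary (check-host, gather-facts, and finally ceph-volume inventory inside a ceph container, which follows below). The audit lines make it easy to reconstruct what was actually executed; a sketch that tallies COMMAND= values for one user:

    import re, sys
    from collections import Counter

    SUDO_RE = re.compile(r'sudo\[\d+\]: (?P<user>\S+) : .*COMMAND=(?P<cmd>.+)$')

    counts = Counter()
    for line in sys.stdin:
        m = SUDO_RE.search(line)
        if m and m.group('user') == 'ceph-admin':
            counts[m.group('cmd').split()[0]] += 1   # binary only; args vary

    for cmd, n in counts.most_common():
        print(f'{n:4d}  {cmd}')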
Dec 06 06:58:57 compute-1 podman[230715]: 2025-12-06 06:58:57.050991654 +0000 UTC m=+0.045817447 container create bd6b65d880119c4f2ef448c3f7401a8d25735f639d514ef9daf9b1bd120c9d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 06:58:57 compute-1 systemd[1]: Started libpod-conmon-bd6b65d880119c4f2ef448c3f7401a8d25735f639d514ef9daf9b1bd120c9d8d.scope.
Dec 06 06:58:57 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:58:57 compute-1 podman[230715]: 2025-12-06 06:58:57.032098175 +0000 UTC m=+0.026923988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:58:57 compute-1 podman[230715]: 2025-12-06 06:58:57.135116025 +0000 UTC m=+0.129941848 container init bd6b65d880119c4f2ef448c3f7401a8d25735f639d514ef9daf9b1bd120c9d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 06:58:57 compute-1 podman[230715]: 2025-12-06 06:58:57.141667062 +0000 UTC m=+0.136492855 container start bd6b65d880119c4f2ef448c3f7401a8d25735f639d514ef9daf9b1bd120c9d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 06:58:57 compute-1 podman[230715]: 2025-12-06 06:58:57.146274446 +0000 UTC m=+0.141100259 container attach bd6b65d880119c4f2ef448c3f7401a8d25735f639d514ef9daf9b1bd120c9d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 06:58:57 compute-1 systemd[1]: libpod-bd6b65d880119c4f2ef448c3f7401a8d25735f639d514ef9daf9b1bd120c9d8d.scope: Deactivated successfully.
Dec 06 06:58:57 compute-1 stupefied_morse[230732]: 167 167
Dec 06 06:58:57 compute-1 podman[230715]: 2025-12-06 06:58:57.151809905 +0000 UTC m=+0.146635698 container died bd6b65d880119c4f2ef448c3f7401a8d25735f639d514ef9daf9b1bd120c9d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:58:57 compute-1 conmon[230732]: conmon bd6b65d880119c4f2ef4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd6b65d880119c4f2ef448c3f7401a8d25735f639d514ef9daf9b1bd120c9d8d.scope/container/memory.events
Dec 06 06:58:57 compute-1 systemd[1]: var-lib-containers-storage-overlay-0b93df7f9ca545ec1dd28909fe02357c1189e771b3cc0a2806e9f32d0ddbf53c-merged.mount: Deactivated successfully.
Dec 06 06:58:57 compute-1 podman[230715]: 2025-12-06 06:58:57.189892803 +0000 UTC m=+0.184718596 container remove bd6b65d880119c4f2ef448c3f7401a8d25735f639d514ef9daf9b1bd120c9d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 06:58:57 compute-1 systemd[1]: libpod-conmon-bd6b65d880119c4f2ef448c3f7401a8d25735f639d514ef9daf9b1bd120c9d8d.scope: Deactivated successfully.
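Helper containers like stupefied_morse above live for about a tenth of a second: podman logs create, init, start, attach, died, and remove as discrete events with nanosecond timestamps, so per-container lifetimes fall out of a simple diff. A sketch pairing create with died by container id:

    import re, sys
    from datetime import datetime

    EV_RE = re.compile(
        r'(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC '
        r'm=\+\S+ container (?P<event>\w+) (?P<cid>[0-9a-f]{64})')

    created = {}
    for line in sys.stdin:
        m = EV_RE.search(line)
        if not m:
            continue
        # Trim podman's 9-digit fraction to microseconds for fromisoformat().
        ts = datetime.fromisoformat(m.group('ts')[:26])
        if m.group('event') == 'create':
            created[m.group('cid')] = ts
        elif m.group('event') == 'died' and m.group('cid') in created:
            life = (ts - created[m.group('cid')]).total_seconds()
            print(m.group('cid')[:12], f'{life:.3f}s')

On the events above this yields roughly 0.101 s for bd6b65d88011.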
Dec 06 06:58:57 compute-1 podman[230756]: 2025-12-06 06:58:57.352407939 +0000 UTC m=+0.051838470 container create f4ae541a0f4793b1dc049f0b9d9b659a17d3dfbf8e28c1a0c0b94be81389fd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 06:58:57 compute-1 systemd[1]: Started libpod-conmon-f4ae541a0f4793b1dc049f0b9d9b659a17d3dfbf8e28c1a0c0b94be81389fd21.scope.
Dec 06 06:58:57 compute-1 systemd[1]: Started libcrun container.
Dec 06 06:58:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47054651fab596c45aebb8e61472d9e07ab116c01e01da05e188261cd393b7de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:58:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47054651fab596c45aebb8e61472d9e07ab116c01e01da05e188261cd393b7de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:58:57 compute-1 podman[230756]: 2025-12-06 06:58:57.325062511 +0000 UTC m=+0.024493092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:58:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47054651fab596c45aebb8e61472d9e07ab116c01e01da05e188261cd393b7de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:58:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47054651fab596c45aebb8e61472d9e07ab116c01e01da05e188261cd393b7de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:58:57 compute-1 podman[230756]: 2025-12-06 06:58:57.435033519 +0000 UTC m=+0.134464070 container init f4ae541a0f4793b1dc049f0b9d9b659a17d3dfbf8e28c1a0c0b94be81389fd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:58:57 compute-1 podman[230756]: 2025-12-06 06:58:57.444066663 +0000 UTC m=+0.143497194 container start f4ae541a0f4793b1dc049f0b9d9b659a17d3dfbf8e28c1a0c0b94be81389fd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:58:57 compute-1 podman[230756]: 2025-12-06 06:58:57.449590002 +0000 UTC m=+0.149020533 container attach f4ae541a0f4793b1dc049f0b9d9b659a17d3dfbf8e28c1a0c0b94be81389fd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_robinson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 06:58:57 compute-1 ceph-mon[81689]: pgmap v1130: 305 pgs: 305 active+clean; 279 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 404 KiB/s rd, 1.2 MiB/s wr, 102 op/s
Dec 06 06:58:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2044821323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:58 compute-1 nova_compute[226101]: 2025-12-06 06:58:58.104 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:58.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:58 compute-1 nova_compute[226101]: 2025-12-06 06:58:58.436 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:58:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:58.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]: [
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:     {
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:         "available": false,
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:         "ceph_device": false,
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:         "lsm_data": {},
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:         "lvs": [],
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:         "path": "/dev/sr0",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:         "rejected_reasons": [
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "Insufficient space (<5GB)",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "Has a FileSystem"
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:         ],
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:         "sys_api": {
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "actuators": null,
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "device_nodes": "sr0",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "devname": "sr0",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "human_readable_size": "482.00 KB",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "id_bus": "ata",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "model": "QEMU DVD-ROM",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "nr_requests": "2",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "parent": "/dev/sr0",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "partitions": {},
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "path": "/dev/sr0",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "removable": "1",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "rev": "2.5+",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "ro": "0",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "rotational": "1",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "sas_address": "",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "sas_device_handle": "",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "scheduler_mode": "mq-deadline",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "sectors": 0,
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "sectorsize": "2048",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "size": 493568.0,
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "support_discard": "2048",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "type": "disk",
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:             "vendor": "QEMU"
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:         }
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]:     }
Dec 06 06:58:58 compute-1 wizardly_robinson[230773]: ]
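That JSON block is the output of ceph-volume inventory --format=json-pretty --filter-for-batch, relayed line by line through the wizardly_robinson container: the only device on this node is the QEMU DVD-ROM at /dev/sr0, rejected for OSD use (under 5 GB, has a filesystem), which is why cephadm creates no OSDs here. To reassemble and filter it from a journal slice containing just these lines, a sketch:

    import json, re, sys

    # Strip the 'name[pid]: ' journal prefix, rebuild the JSON document,
    # then report each device's availability verdict.
    PREFIX_RE = re.compile(r'^.*?\w+\[\d+\]: ')

    payload = ''.join(PREFIX_RE.sub('', line) for line in sys.stdin)
    for dev in json.loads(payload):
        verdict = ('available' if dev['available']
                   else 'rejected: ' + '; '.join(dev['rejected_reasons']))
        print(dev['path'], dev['sys_api'].get('human_readable_size'), verdict)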
Dec 06 06:58:58 compute-1 systemd[1]: libpod-f4ae541a0f4793b1dc049f0b9d9b659a17d3dfbf8e28c1a0c0b94be81389fd21.scope: Deactivated successfully.
Dec 06 06:58:58 compute-1 systemd[1]: libpod-f4ae541a0f4793b1dc049f0b9d9b659a17d3dfbf8e28c1a0c0b94be81389fd21.scope: Consumed 1.119s CPU time.
Dec 06 06:58:58 compute-1 podman[231851]: 2025-12-06 06:58:58.592111136 +0000 UTC m=+0.026803544 container died f4ae541a0f4793b1dc049f0b9d9b659a17d3dfbf8e28c1a0c0b94be81389fd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 06:58:58 compute-1 systemd[1]: var-lib-containers-storage-overlay-47054651fab596c45aebb8e61472d9e07ab116c01e01da05e188261cd393b7de-merged.mount: Deactivated successfully.
Dec 06 06:58:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:58 compute-1 podman[231851]: 2025-12-06 06:58:58.636902935 +0000 UTC m=+0.071595333 container remove f4ae541a0f4793b1dc049f0b9d9b659a17d3dfbf8e28c1a0c0b94be81389fd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 06:58:58 compute-1 systemd[1]: libpod-conmon-f4ae541a0f4793b1dc049f0b9d9b659a17d3dfbf8e28c1a0c0b94be81389fd21.scope: Deactivated successfully.
Dec 06 06:58:58 compute-1 sudo[230650]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:58:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/400368129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:58 compute-1 ceph-mon[81689]: pgmap v1131: 305 pgs: 305 active+clean; 237 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 277 KiB/s rd, 123 KiB/s wr, 92 op/s
Dec 06 06:58:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1155014578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/400368129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:59 compute-1 ceph-mon[81689]: pgmap v1132: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 97 KiB/s wr, 63 op/s
Dec 06 06:58:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:58:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:58:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:58:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:58:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
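The burst of audit lines above is the mgr (compute-0.sfzyix) running its periodic refresh: config generate-minimal-conf, auth get for client.admin and client.bootstrap-osd, and an osd tree query filtered to destroyed OSDs, interleaved with the steady client.openstack df and mon dump polls. The cmd=[...] payload is itself JSON, so dispatches can be tallied per entity and command prefix; a sketch:

    import json, re, sys
    from collections import Counter

    AUDIT_RE = re.compile(
        r"entity='(?P<entity>[^']+)' cmd=(?P<cmd>\[.*\]): dispatch")

    counts = Counter()
    for line in sys.stdin:
        m = AUDIT_RE.search(line)
        if m:
            prefix = json.loads(m.group('cmd'))[0].get('prefix', '?')
            counts[m.group('entity'), prefix] += 1

    for (entity, prefix), n in counts.most_common():
        print(f'{n:4d}  {entity:25s} {prefix}')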
Dec 06 06:59:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:00.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:00.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2344733042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/959108537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:59:01.615 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:59:01.615 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:59:01.615 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:01 compute-1 ceph-mon[81689]: pgmap v1133: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 17 KiB/s wr, 56 op/s
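The pgmap lines give a one-line cluster pulse every second or two: PG states, logical data, raw used, available/total capacity, and client throughput. Here everything stays 305 active+clean while data drains from about 326 MiB down to 200 MiB following the instance delete. A sketch for extracting them from a journal slice:

    import re, sys

    PGMAP_RE = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; '
        r'(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, '
        r'(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail'
        r'(?:; (?P<rates>.*))?')

    for line in sys.stdin:
        m = PGMAP_RE.search(line)
        if m:
            print(m.group('ver'), m.group('data'), 'data |',
                  m.group('used'), 'used |', m.group('rates') or 'idle')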
Dec 06 06:59:02 compute-1 nova_compute[226101]: 2025-12-06 06:59:02.095 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:02.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:02 compute-1 nova_compute[226101]: 2025-12-06 06:59:02.410 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "d32579c4-62c8-41ac-9d01-b617cc7992ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:02 compute-1 nova_compute[226101]: 2025-12-06 06:59:02.410 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "d32579c4-62c8-41ac-9d01-b617cc7992ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:02 compute-1 nova_compute[226101]: 2025-12-06 06:59:02.449 226109 DEBUG nova.compute.manager [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 06:59:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:02.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:02 compute-1 nova_compute[226101]: 2025-12-06 06:59:02.574 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:02 compute-1 nova_compute[226101]: 2025-12-06 06:59:02.575 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:02 compute-1 nova_compute[226101]: 2025-12-06 06:59:02.585 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 06:59:02 compute-1 nova_compute[226101]: 2025-12-06 06:59:02.585 226109 INFO nova.compute.claims [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Claim successful on node compute-1.ctlplane.example.com
Dec 06 06:59:02 compute-1 nova_compute[226101]: 2025-12-06 06:59:02.704 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.104 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:59:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3611950374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.151 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.158 226109 DEBUG nova.compute.provider_tree [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.192 226109 DEBUG nova.scheduler.client.report [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.224 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.225 226109 DEBUG nova.compute.manager [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.280 226109 DEBUG nova.compute.manager [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.281 226109 DEBUG nova.network.neutron [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.307 226109 INFO nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.328 226109 DEBUG nova.compute.manager [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.410 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004328.4093268, cb2d7ff9-9629-4807-b46f-1172e2f1f4f6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.411 226109 INFO nova.compute.manager [-] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] VM Stopped (Lifecycle Event)
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.438 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.441 226109 DEBUG nova.compute.manager [None req-0c8de623-2a43-4745-85c5-fa911fe0545a - - - - - -] [instance: cb2d7ff9-9629-4807-b46f-1172e2f1f4f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.442 226109 DEBUG nova.compute.manager [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.443 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.444 226109 INFO nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Creating image(s)
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.470 226109 DEBUG nova.storage.rbd_utils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image d32579c4-62c8-41ac-9d01-b617cc7992ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.496 226109 DEBUG nova.storage.rbd_utils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image d32579c4-62c8-41ac-9d01-b617cc7992ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.521 226109 DEBUG nova.storage.rbd_utils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image d32579c4-62c8-41ac-9d01-b617cc7992ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.525 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.582 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.583 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.584 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.584 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.606 226109 DEBUG nova.storage.rbd_utils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image d32579c4-62c8-41ac-9d01-b617cc7992ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.609 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d32579c4-62c8-41ac-9d01-b617cc7992ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.727 226109 DEBUG nova.network.neutron [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.729 226109 DEBUG nova.compute.manager [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.901 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d32579c4-62c8-41ac-9d01-b617cc7992ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:03 compute-1 nova_compute[226101]: 2025-12-06 06:59:03.966 226109 DEBUG nova.storage.rbd_utils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] resizing rbd image d32579c4-62c8-41ac-9d01-b617cc7992ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 06:59:04 compute-1 ceph-mon[81689]: pgmap v1134: 305 pgs: 305 active+clean; 227 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 1.2 MiB/s wr, 73 op/s
Dec 06 06:59:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3611950374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/542679742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:59:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/542679742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.074 226109 DEBUG nova.objects.instance [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'migration_context' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.089 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.090 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Ensure instance console log exists: /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.090 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.090 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.090 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.091 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.094 226109 WARNING nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.099 226109 DEBUG nova.virt.libvirt.host [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.099 226109 DEBUG nova.virt.libvirt.host [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.101 226109 DEBUG nova.virt.libvirt.host [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.102 226109 DEBUG nova.virt.libvirt.host [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.102 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.103 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.103 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.103 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.103 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.103 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.103 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.104 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.104 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.104 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.104 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.104 226109 DEBUG nova.virt.hardware [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.107 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:04.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:04.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:59:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2887831352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.555 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "b70ad842-3481-4e25-a66f-16f7ed5a6ab3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.556 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "b70ad842-3481-4e25-a66f-16f7ed5a6ab3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.556 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "b70ad842-3481-4e25-a66f-16f7ed5a6ab3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.556 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "b70ad842-3481-4e25-a66f-16f7ed5a6ab3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.557 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "b70ad842-3481-4e25-a66f-16f7ed5a6ab3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.558 226109 INFO nova.compute.manager [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Terminating instance
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.559 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "refresh_cache-b70ad842-3481-4e25-a66f-16f7ed5a6ab3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.559 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquired lock "refresh_cache-b70ad842-3481-4e25-a66f-16f7ed5a6ab3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.559 226109 DEBUG nova.network.neutron [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.560 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.585 226109 DEBUG nova.storage.rbd_utils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image d32579c4-62c8-41ac-9d01-b617cc7992ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.590 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:04 compute-1 nova_compute[226101]: 2025-12-06 06:59:04.852 226109 DEBUG nova.network.neutron [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:59:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2887831352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:59:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2849377092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.093 226109 DEBUG nova.network.neutron [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.095 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.096 226109 DEBUG nova.objects.instance [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'pci_devices' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.110 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] End _get_guest_xml xml=<domain type="kvm">
Dec 06 06:59:05 compute-1 nova_compute[226101]:   <uuid>d32579c4-62c8-41ac-9d01-b617cc7992ed</uuid>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   <name>instance-0000000b</name>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   <metadata>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <nova:name>tempest-MigrationsAdminTest-server-822732855</nova:name>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 06:59:04</nova:creationTime>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <nova:user uuid="538aa592cfb04958ab11223ed2d98106">tempest-MigrationsAdminTest-541331030-project-member</nova:user>
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <nova:project uuid="fc6c493097a84d069d178020ca398a25">tempest-MigrationsAdminTest-541331030</nova:project>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   </metadata>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <system>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <entry name="serial">d32579c4-62c8-41ac-9d01-b617cc7992ed</entry>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <entry name="uuid">d32579c4-62c8-41ac-9d01-b617cc7992ed</entry>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     </system>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   <os>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   </os>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   <features>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <apic/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   </features>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   </clock>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   </cpu>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   <devices>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/d32579c4-62c8-41ac-9d01-b617cc7992ed_disk">
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       </source>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       </auth>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/d32579c4-62c8-41ac-9d01-b617cc7992ed_disk.config">
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       </source>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 06:59:05 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       </auth>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/console.log" append="off"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     </serial>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <video>
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     </video>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     </rng>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 06:59:05 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 06:59:05 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 06:59:05 compute-1 nova_compute[226101]:   </devices>
Dec 06 06:59:05 compute-1 nova_compute[226101]: </domain>
Dec 06 06:59:05 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.113 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Releasing lock "refresh_cache-b70ad842-3481-4e25-a66f-16f7ed5a6ab3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.114 226109 DEBUG nova.compute.manager [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.161 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.161 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.162 226109 INFO nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Using config drive
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.193 226109 DEBUG nova.storage.rbd_utils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image d32579c4-62c8-41ac-9d01-b617cc7992ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:05 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec 06 06:59:05 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 18.523s CPU time.
Dec 06 06:59:05 compute-1 systemd-machined[190302]: Machine qemu-1-instance-00000001 terminated.
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.313 226109 INFO nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Creating config drive at /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/disk.config
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.317 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt_oyf2mp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.337 226109 INFO nova.virt.libvirt.driver [-] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Instance destroyed successfully.
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.337 226109 DEBUG nova.objects.instance [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'resources' on Instance uuid b70ad842-3481-4e25-a66f-16f7ed5a6ab3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.442 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt_oyf2mp" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.467 226109 DEBUG nova.storage.rbd_utils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image d32579c4-62c8-41ac-9d01-b617cc7992ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.469 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/disk.config d32579c4-62c8-41ac-9d01-b617cc7992ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.631 226109 DEBUG oslo_concurrency.processutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/disk.config d32579c4-62c8-41ac-9d01-b617cc7992ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.632 226109 INFO nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Deleting local config drive /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/disk.config because it was imported into RBD.
Dec 06 06:59:05 compute-1 systemd-machined[190302]: New machine qemu-4-instance-0000000b.
Dec 06 06:59:05 compute-1 systemd[1]: Started Virtual Machine qemu-4-instance-0000000b.
Dec 06 06:59:05 compute-1 sudo[232204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:05 compute-1 sudo[232204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:05 compute-1 sudo[232204]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.780 226109 INFO nova.virt.libvirt.driver [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Deleting instance files /var/lib/nova/instances/b70ad842-3481-4e25-a66f-16f7ed5a6ab3_del
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.781 226109 INFO nova.virt.libvirt.driver [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Deletion of /var/lib/nova/instances/b70ad842-3481-4e25-a66f-16f7ed5a6ab3_del complete
Dec 06 06:59:05 compute-1 sudo[232237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:59:05 compute-1 sudo[232237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:05 compute-1 sudo[232237]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.865 226109 INFO nova.compute.manager [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Took 0.75 seconds to destroy the instance on the hypervisor.
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.866 226109 DEBUG oslo.service.loopingcall [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.866 226109 DEBUG nova.compute.manager [-] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 06:59:05 compute-1 nova_compute[226101]: 2025-12-06 06:59:05.866 226109 DEBUG nova.network.neutron [-] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 06:59:06 compute-1 ceph-mon[81689]: pgmap v1135: 305 pgs: 305 active+clean; 264 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 724 KiB/s rd, 2.4 MiB/s wr, 101 op/s
Dec 06 06:59:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2849377092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:59:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.090 226109 DEBUG nova.network.neutron [-] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.114 226109 DEBUG nova.network.neutron [-] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.141 226109 INFO nova.compute.manager [-] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Took 0.27 seconds to deallocate network for instance.
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.193 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.194 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.250 226109 DEBUG oslo_concurrency.processutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:06.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.271 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004346.2606592, d32579c4-62c8-41ac-9d01-b617cc7992ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.272 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] VM Resumed (Lifecycle Event)
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.276 226109 DEBUG nova.compute.manager [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.277 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.282 226109 INFO nova.virt.libvirt.driver [-] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance spawned successfully.
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.282 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.304 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.311 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.315 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.315 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.316 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.316 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.317 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.317 226109 DEBUG nova.virt.libvirt.driver [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.330 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.331 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004346.2610843, d32579c4-62c8-41ac-9d01-b617cc7992ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.331 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] VM Started (Lifecycle Event)
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.351 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.355 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.385 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] During sync_power_state the instance has a pending task (spawning). Skip.
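
For the "DB power_state: 0, VM power_state: 1" comparison in the two sync messages above: those integers are nova's power-state constants. A small lookup table to make the transition readable (values quoted from nova.compute.power_state as I recall them; treat this as an assumption, not a verified import):

    # Hypothetical mirror of nova.compute.power_state, used only to decode
    # the logged "0 -> 1" transition.
    POWER_STATES = {
        0: "NOSTATE",     # DB value before the first successful sync
        1: "RUNNING",     # what libvirt reports once the guest is up
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATES[0], "->", POWER_STATES[1])   # NOSTATE -> RUNNING
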
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.394 226109 INFO nova.compute.manager [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Took 2.95 seconds to spawn the instance on the hypervisor.
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.394 226109 DEBUG nova.compute.manager [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.458 226109 INFO nova.compute.manager [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Took 3.93 seconds to build instance.
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.475 226109 DEBUG oslo_concurrency.lockutils [None req-441aa4ae-4f9f-4dd1-993c-2c20e5d566a1 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "d32579c4-62c8-41ac-9d01-b617cc7992ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:06.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:59:06 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1650501530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.659 226109 DEBUG oslo_concurrency.processutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
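
The "ceph df --format=json" call that just returned in 0.409s is how the resource tracker sizes RBD-backed disk inventory. It can be reproduced and parsed with the standard library alone; the top-level "stats"/"pools" layout matches what current Ceph releases emit, but the exact field names are an assumption here, hence the defensive .get() access:

    # Sketch: run the logged ceph command and summarise the answer.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    df = json.loads(out)

    print("cluster avail bytes:", df.get("stats", {}).get("total_avail_bytes"))
    for pool in df.get("pools", []):                 # e.g. vms, volumes, images
        print(pool.get("name"), pool.get("stats", {}).get("bytes_used"))
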
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.663 226109 DEBUG nova.compute.provider_tree [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.676 226109 DEBUG nova.scheduler.client.report [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
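
The inventory dict above fully determines what placement will hand out: per resource class, usable capacity is (total - reserved) * allocation_ratio. Worked out for the reported values:

    # Capacity check for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83,
    # using the inventory exactly as logged.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")   # MEMORY_MB: 7168, VCPU: 32, DISK_GB: 17.1
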
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.694 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.716 226109 INFO nova.scheduler.client.report [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Deleted allocations for instance b70ad842-3481-4e25-a66f-16f7ed5a6ab3
Dec 06 06:59:06 compute-1 nova_compute[226101]: 2025-12-06 06:59:06.788 226109 DEBUG oslo_concurrency.lockutils [None req-f3426dde-c3c5-4fe6-bb1f-4e48b726373e e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "b70ad842-3481-4e25-a66f-16f7ed5a6ab3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3731078009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1650501530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:08 compute-1 ceph-mon[81689]: pgmap v1136: 305 pgs: 305 active+clean; 191 MiB data, 333 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 194 op/s
Dec 06 06:59:08 compute-1 nova_compute[226101]: 2025-12-06 06:59:08.138 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:08.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:08 compute-1 nova_compute[226101]: 2025-12-06 06:59:08.439 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:08.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3524332152' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:59:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3524332152' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:59:10 compute-1 ceph-mon[81689]: pgmap v1137: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 251 op/s
Dec 06 06:59:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:10.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:10.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:12 compute-1 ceph-mon[81689]: pgmap v1138: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Dec 06 06:59:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1905717733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:12 compute-1 nova_compute[226101]: 2025-12-06 06:59:12.158 226109 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:59:12 compute-1 nova_compute[226101]: 2025-12-06 06:59:12.158 226109 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquired lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:59:12 compute-1 nova_compute[226101]: 2025-12-06 06:59:12.159 226109 DEBUG nova.network.neutron [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 06:59:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:12.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:12.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:13 compute-1 nova_compute[226101]: 2025-12-06 06:59:13.140 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:13 compute-1 nova_compute[226101]: 2025-12-06 06:59:13.441 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:14.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:14 compute-1 ceph-mon[81689]: pgmap v1139: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 269 op/s
Dec 06 06:59:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:14.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:14 compute-1 nova_compute[226101]: 2025-12-06 06:59:14.709 226109 DEBUG nova.network.neutron [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:59:15 compute-1 nova_compute[226101]: 2025-12-06 06:59:15.064 226109 DEBUG nova.network.neutron [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:59:15 compute-1 nova_compute[226101]: 2025-12-06 06:59:15.088 226109 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Releasing lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:59:15 compute-1 podman[232327]: 2025-12-06 06:59:15.138121433 +0000 UTC m=+0.099915197 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 06:59:15 compute-1 nova_compute[226101]: 2025-12-06 06:59:15.193 226109 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 06:59:15 compute-1 nova_compute[226101]: 2025-12-06 06:59:15.194 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Creating file /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/7a41940a3999440bba4f1659f2ba62b1.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Dec 06 06:59:15 compute-1 nova_compute[226101]: 2025-12-06 06:59:15.194 226109 DEBUG oslo_concurrency.processutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/7a41940a3999440bba4f1659f2ba62b1.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:15 compute-1 nova_compute[226101]: 2025-12-06 06:59:15.694 226109 DEBUG oslo_concurrency.processutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/7a41940a3999440bba4f1659f2ba62b1.tmp" returned: 1 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:15 compute-1 nova_compute[226101]: 2025-12-06 06:59:15.695 226109 DEBUG oslo_concurrency.processutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed/7a41940a3999440bba4f1659f2ba62b1.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 06:59:15 compute-1 nova_compute[226101]: 2025-12-06 06:59:15.696 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Creating directory /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Dec 06 06:59:15 compute-1 nova_compute[226101]: 2025-12-06 06:59:15.696 226109 DEBUG oslo_concurrency.processutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:15 compute-1 nova_compute[226101]: 2025-12-06 06:59:15.911 226109 DEBUG oslo_concurrency.processutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
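
The failed touch followed by a successful mkdir above is a probe, not an error: nova checks whether the instance directory already exists on the migration target by touching a temp file over ssh (BatchMode=yes so a credential prompt fails fast instead of hanging), and creates the directory when the probe exits non-zero. The same pattern as a sketch (host and path from the log; the helper name and probe filename are mine):

    # Hypothetical helper reproducing the touch-then-mkdir fallback logged above.
    import subprocess

    def ensure_remote_dir(host: str, path: str) -> None:
        probe = ["ssh", "-o", "BatchMode=yes", host,
                 "touch", f"{path}/.probe.tmp"]
        if subprocess.run(probe).returncode != 0:     # rc=1 in the log
            subprocess.run(
                ["ssh", "-o", "BatchMode=yes", host, "mkdir", "-p", path],
                check=True)                           # rc=0 in the log

    ensure_remote_dir(
        "192.168.122.102",
        "/var/lib/nova/instances/d32579c4-62c8-41ac-9d01-b617cc7992ed")
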
Dec 06 06:59:15 compute-1 nova_compute[226101]: 2025-12-06 06:59:15.915 226109 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 06:59:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:16.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:16 compute-1 ceph-mon[81689]: pgmap v1140: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 255 op/s
Dec 06 06:59:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:16.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2128981368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:18 compute-1 nova_compute[226101]: 2025-12-06 06:59:18.187 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:18.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:18 compute-1 nova_compute[226101]: 2025-12-06 06:59:18.442 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:18.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:18 compute-1 ceph-mon[81689]: pgmap v1141: 305 pgs: 305 active+clean; 107 MiB data, 326 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 259 op/s
Dec 06 06:59:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:59:19.335 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:59:19 compute-1 nova_compute[226101]: 2025-12-06 06:59:19.336 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:59:19.337 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 06:59:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:20.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:20 compute-1 nova_compute[226101]: 2025-12-06 06:59:20.335 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004345.3310843, b70ad842-3481-4e25-a66f-16f7ed5a6ab3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:20 compute-1 nova_compute[226101]: 2025-12-06 06:59:20.335 226109 INFO nova.compute.manager [-] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] VM Stopped (Lifecycle Event)
Dec 06 06:59:20 compute-1 nova_compute[226101]: 2025-12-06 06:59:20.355 226109 DEBUG nova.compute.manager [None req-6382d0ed-f16e-49d4-a35d-f260426f7e85 - - - - - -] [instance: b70ad842-3481-4e25-a66f-16f7ed5a6ab3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:20.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:20 compute-1 ceph-mon[81689]: pgmap v1142: 305 pgs: 305 active+clean; 105 MiB data, 329 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 208 op/s
Dec 06 06:59:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:22.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:22.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:22 compute-1 ceph-mon[81689]: pgmap v1143: 305 pgs: 305 active+clean; 114 MiB data, 312 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.0 MiB/s wr, 138 op/s
Dec 06 06:59:23 compute-1 nova_compute[226101]: 2025-12-06 06:59:23.190 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 06:59:23.338 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:59:23 compute-1 nova_compute[226101]: 2025-12-06 06:59:23.443 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:24.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:24.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:24 compute-1 ceph-mon[81689]: pgmap v1144: 305 pgs: 305 active+clean; 119 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.2 MiB/s wr, 165 op/s
Dec 06 06:59:25 compute-1 podman[232355]: 2025-12-06 06:59:25.065362291 +0000 UTC m=+0.045952790 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 06:59:25 compute-1 podman[232354]: 2025-12-06 06:59:25.070663655 +0000 UTC m=+0.051479641 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 06 06:59:25 compute-1 nova_compute[226101]: 2025-12-06 06:59:25.957 226109 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 06:59:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:26.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:26.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:26 compute-1 ceph-mon[81689]: pgmap v1145: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 562 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Dec 06 06:59:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2692941592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:28 compute-1 nova_compute[226101]: 2025-12-06 06:59:28.191 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:28.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:28 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec 06 06:59:28 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Consumed 13.876s CPU time.
Dec 06 06:59:28 compute-1 systemd-machined[190302]: Machine qemu-4-instance-0000000b terminated.
Dec 06 06:59:28 compute-1 nova_compute[226101]: 2025-12-06 06:59:28.445 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:28.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:28 compute-1 ceph-mon[81689]: pgmap v1146: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 553 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Dec 06 06:59:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1654061355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:28 compute-1 nova_compute[226101]: 2025-12-06 06:59:28.971 226109 INFO nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance shutdown successfully after 13 seconds.
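
Taken together, the driver lines at 06:59:15 ("Shutting down instance from state 1"), 06:59:25 ("resending shutdown" after 10 seconds) and the success message above describe a poll-and-retry clean shutdown. A hedged reconstruction with the python libvirt bindings; the calls and constants are standard libvirt, while the retry and timeout values are parameters of the sketch, not nova's exact defaults:

    # Send an ACPI shutdown, re-send it periodically, and poll until the
    # domain reports SHUTOFF or the deadline passes.
    import time
    import libvirt

    def clean_shutdown(name: str, retry_every: int = 10, timeout: int = 60) -> bool:
        conn = libvirt.open("qemu:///system")
        dom = conn.lookupByName(name)
        deadline = time.time() + timeout
        next_signal = 0.0                   # fire the first request immediately
        while time.time() < deadline:
            if dom.info()[0] == libvirt.VIR_DOMAIN_SHUTOFF:
                return True                 # "shutdown successfully after N seconds"
            if time.time() >= next_signal:
                dom.shutdown()              # ACPI request; the guest may ignore it
                next_signal = time.time() + retry_every
            time.sleep(1)
        return False

    clean_shutdown("instance-0000000b")     # the domain named in this log
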
Dec 06 06:59:28 compute-1 nova_compute[226101]: 2025-12-06 06:59:28.977 226109 INFO nova.virt.libvirt.driver [-] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance destroyed successfully.
Dec 06 06:59:28 compute-1 nova_compute[226101]: 2025-12-06 06:59:28.981 226109 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:59:28 compute-1 nova_compute[226101]: 2025-12-06 06:59:28.982 226109 DEBUG nova.virt.libvirt.driver [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:59:29 compute-1 nova_compute[226101]: 2025-12-06 06:59:29.057 226109 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:59:29 compute-1 nova_compute[226101]: 2025-12-06 06:59:29.057 226109 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:59:29 compute-1 nova_compute[226101]: 2025-12-06 06:59:29.066 226109 INFO nova.compute.rpcapi [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Dec 06 06:59:29 compute-1 nova_compute[226101]: 2025-12-06 06:59:29.067 226109 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:59:29 compute-1 nova_compute[226101]: 2025-12-06 06:59:29.081 226109 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "d32579c4-62c8-41ac-9d01-b617cc7992ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:29 compute-1 nova_compute[226101]: 2025-12-06 06:59:29.082 226109 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "d32579c4-62c8-41ac-9d01-b617cc7992ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:29 compute-1 nova_compute[226101]: 2025-12-06 06:59:29.082 226109 DEBUG oslo_concurrency.lockutils [None req-e771028f-2a3b-412e-a830-2fbefb2a7f0c 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "d32579c4-62c8-41ac-9d01-b617cc7992ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
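
The Acquiring/Acquired/Released triples that bracket this whole section are oslo.concurrency's lockutils doing its bookkeeping. The same primitives in their two usual forms, with lock names taken from the log (the bodies are placeholders):

    # lockutils usage matching the log's lock messages.
    from oslo_concurrency import lockutils

    # Context-manager form, as around "compute-rpcapi-router" above.
    with lockutils.lock("compute-rpcapi-router"):
        pass  # critical section

    # Decorator form, as for the per-instance "-events" lock above.
    @lockutils.synchronized("d32579c4-62c8-41ac-9d01-b617cc7992ed-events")
    def clear_events():
        pass
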
Dec 06 06:59:29 compute-1 ceph-mon[81689]: pgmap v1147: 305 pgs: 305 active+clean; 144 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 415 KiB/s rd, 3.3 MiB/s wr, 111 op/s
Dec 06 06:59:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/423695454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:30.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:59:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.5 total, 600.0 interval
                                           Cumulative writes: 4148 writes, 23K keys, 4148 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 4148 writes, 4148 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1510 writes, 7414 keys, 1510 commit groups, 1.0 writes per commit group, ingest: 15.58 MB, 0.03 MB/s
                                           Interval WAL: 1510 writes, 1510 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     19.1      1.54              0.09        13    0.119       0      0       0.0       0.0
                                             L6      1/0    8.07 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.7     34.3     28.6      3.78              0.30        12    0.315     60K   6380       0.0       0.0
                                            Sum      1/0    8.07 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.7     24.4     25.9      5.33              0.39        25    0.213     60K   6380       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.0     79.5     80.5      0.81              0.15        12    0.067     32K   3096       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     34.3     28.6      3.78              0.30        12    0.315     60K   6380       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.7      1.50              0.09        12    0.125       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.029, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.08 MB/s write, 0.13 GB read, 0.07 MB/s read, 5.3 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.8 seconds
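[note] W-Amp in the level table is, to a first approximation, compaction bytes written divided by bytes flushed into L0; the two-decimal GB figures in the cumulative lines reproduce the Sum row only roughly:

    # Rough write-amplification check from the cumulative counters above.
    compaction_write_gb = 0.13   # "Cumulative compaction: 0.13 GB write"
    flush_gb = 0.029             # "Flush(GB): cumulative 0.029"
    print(round(compaction_write_gb / flush_gb, 1))  # ~4.5 vs 4.7 in the Sum row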
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 10.68 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000132 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(587,10.19 MB,3.35323%) FilterBlock(25,172.86 KB,0.0555289%) IndexBlock(25,324.83 KB,0.104347%) Misc(1,0.00 KB,0%)
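[note] The portion column is entry size over the 304.00 MB cache capacity, and the implausible occupancy value is just an unsigned -1, presumably a "not tracked" sentinel rather than a real entry count:

    # DataBlock portion: 10.19 MB of a 304.00 MB cache.
    print(round(10.19 / 304.00 * 100, 2))  # ~3.35 -> "3.35323%"
    # The occupancy figure above is 2**64 - 1, i.e. unsigned -1:
    print(2**64 - 1)                       # 18446744073709551615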
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 06:59:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:30.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
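[note] The anonymous "HEAD / HTTP/1.0" probes arriving every two seconds from 192.168.122.100 and .102 look like load-balancer health checks rather than user traffic. A small parser for the beast access lines, with the field layout inferred from the samples here (an assumption, not taken from the radosgw source):

    import re

    BEAST_RE = re.compile(
        r'beast: (?P<handle>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous '
            '[06/Dec/2025:06:59:30.538 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000027s')
    m = BEAST_RE.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))
    # 192.168.122.102 200 0.001000027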
Dec 06 06:59:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Dec 06 06:59:32 compute-1 ceph-mon[81689]: pgmap v1148: 305 pgs: 305 active+clean; 161 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 184 KiB/s rd, 2.0 MiB/s wr, 75 op/s
Dec 06 06:59:32 compute-1 ceph-mon[81689]: osdmap e157: 3 total, 3 up, 3 in
Dec 06 06:59:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:32.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:32.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:33 compute-1 nova_compute[226101]: 2025-12-06 06:59:33.192 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:33 compute-1 nova_compute[226101]: 2025-12-06 06:59:33.447 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
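[note] Converting the autotuner's byte counts to MiB shows where the monitor's memory target goes; notably kv_alloc equals the 304.00 MB RocksDB block-cache capacity in the dump above, which suggests (an inference from the numbers, not from the ceph source) that this allocation is what sizes the mon's key-value cache:

    for name, nbytes in [('cache_size', 1020054731),
                         ('inc_alloc', 348127232),
                         ('full_alloc', 348127232),
                         ('kv_alloc', 318767104)]:
        print(name, round(nbytes / 2**20, 1), 'MiB')
    # cache_size 972.8 MiB, inc/full_alloc 332.0 MiB, kv_alloc 304.0 MiB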
Dec 06 06:59:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1428329051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:34.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:34.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:35 compute-1 ceph-mon[81689]: pgmap v1150: 305 pgs: 305 active+clean; 167 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 140 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 06 06:59:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3323031020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:36.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:36.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:36 compute-1 nova_compute[226101]: 2025-12-06 06:59:36.925 226109 DEBUG oslo_concurrency.lockutils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "d32579c4-62c8-41ac-9d01-b617cc7992ed" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:36 compute-1 nova_compute[226101]: 2025-12-06 06:59:36.926 226109 DEBUG oslo_concurrency.lockutils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "d32579c4-62c8-41ac-9d01-b617cc7992ed" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:36 compute-1 nova_compute[226101]: 2025-12-06 06:59:36.926 226109 DEBUG nova.compute.manager [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Going to confirm migration 1 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
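[note] The Acquiring/acquired/released triple around do_confirm_resize is the standard oslo_concurrency pattern: a named lock serializes all operations on one instance UUID. A minimal sketch, assuming only the documented lockutils API:

    from oslo_concurrency import lockutils

    # Serialize work on a single instance; wait and hold durations are
    # logged at DEBUG exactly as in the lines above.
    @lockutils.synchronized('d32579c4-62c8-41ac-9d01-b617cc7992ed')
    def do_confirm_resize():
        pass

    do_confirm_resize()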
Dec 06 06:59:36 compute-1 ceph-mon[81689]: pgmap v1151: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 831 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Dec 06 06:59:37 compute-1 nova_compute[226101]: 2025-12-06 06:59:37.544 226109 DEBUG oslo_concurrency.lockutils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:59:37 compute-1 nova_compute[226101]: 2025-12-06 06:59:37.545 226109 DEBUG oslo_concurrency.lockutils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:59:37 compute-1 nova_compute[226101]: 2025-12-06 06:59:37.545 226109 DEBUG nova.network.neutron [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 06:59:37 compute-1 nova_compute[226101]: 2025-12-06 06:59:37.545 226109 DEBUG nova.objects.instance [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'info_cache' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:37 compute-1 nova_compute[226101]: 2025-12-06 06:59:37.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:37 compute-1 nova_compute[226101]: 2025-12-06 06:59:37.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
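[note] _reclaim_queued_deletes and its siblings are oslo_service periodic tasks; each run is logged, and a task can cheaply no-op when its config knob disables it, as here with reclaim_instance_interval <= 0. A minimal manager sketch, assuming the oslo_service.periodic_task API (DemoManager and _poll_something are illustrative names):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=10)
        def _poll_something(self, context):
            # logged as "Running periodic task DemoManager._poll_something"
            pass

    DemoManager().run_periodic_tasks(None)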
Dec 06 06:59:38 compute-1 nova_compute[226101]: 2025-12-06 06:59:38.008 226109 DEBUG nova.network.neutron [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:59:38 compute-1 ceph-mon[81689]: pgmap v1152: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Dec 06 06:59:38 compute-1 nova_compute[226101]: 2025-12-06 06:59:38.194 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:38.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:38 compute-1 nova_compute[226101]: 2025-12-06 06:59:38.450 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:38 compute-1 nova_compute[226101]: 2025-12-06 06:59:38.495 226109 DEBUG nova.network.neutron [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:59:38 compute-1 nova_compute[226101]: 2025-12-06 06:59:38.507 226109 DEBUG oslo_concurrency.lockutils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-d32579c4-62c8-41ac-9d01-b617cc7992ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:59:38 compute-1 nova_compute[226101]: 2025-12-06 06:59:38.508 226109 DEBUG nova.objects.instance [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'migration_context' on Instance uuid d32579c4-62c8-41ac-9d01-b617cc7992ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:38.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:38 compute-1 nova_compute[226101]: 2025-12-06 06:59:38.596 226109 DEBUG nova.storage.rbd_utils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] removing snapshot(nova-resize) on rbd image(d32579c4-62c8-41ac-9d01-b617cc7992ed_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
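[note] Confirming the resize deletes the nova-resize safety snapshot taken on the instance's root disk before the migration. A CLI-equivalent sketch of that remove_snap() call (the vms pool name is an assumption, carried over from the rbd import later in this log):

    import subprocess

    subprocess.check_call([
        'rbd', 'snap', 'rm',
        'vms/d32579c4-62c8-41ac-9d01-b617cc7992ed_disk@nova-resize',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])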
Dec 06 06:59:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:39 compute-1 nova_compute[226101]: 2025-12-06 06:59:39.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:39 compute-1 nova_compute[226101]: 2025-12-06 06:59:39.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:40.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:40.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:40 compute-1 nova_compute[226101]: 2025-12-06 06:59:40.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:40 compute-1 nova_compute[226101]: 2025-12-06 06:59:40.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Dec 06 06:59:40 compute-1 ceph-mon[81689]: pgmap v1153: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 181 op/s
Dec 06 06:59:40 compute-1 nova_compute[226101]: 2025-12-06 06:59:40.800 226109 DEBUG oslo_concurrency.lockutils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:40 compute-1 nova_compute[226101]: 2025-12-06 06:59:40.800 226109 DEBUG oslo_concurrency.lockutils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:40 compute-1 nova_compute[226101]: 2025-12-06 06:59:40.871 226109 DEBUG oslo_concurrency.processutils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:41 compute-1 sshd-session[232397]: Connection reset by authenticating user root 45.135.232.92 port 44174 [preauth]
Dec 06 06:59:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:59:41 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/237919553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.287 226109 DEBUG oslo_concurrency.processutils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
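[note] The resource tracker learns storage capacity by shelling out to ceph df, exactly as in the CMD lines above; each invocation also shows up as a client.openstack mon_command dispatch on the ceph-mon side. A thin wrapper sketch (which JSON fields the driver actually consumes is an assumption here):

    import json
    import subprocess

    def ceph_df(user='openstack', conf='/etc/ceph/ceph.conf'):
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', user, '--conf', conf])
        return json.loads(out)

    stats = ceph_df()
    print(stats['stats']['total_bytes'])  # cluster-wide capacity in bytes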
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.295 226109 DEBUG nova.compute.provider_tree [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.325 226109 DEBUG nova.scheduler.client.report [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
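[note] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the unchanged record above implies:

    inventory = {'MEMORY_MB': (7680, 512, 1.0),
                 'VCPU':      (8,    0,   4.0),
                 'DISK_GB':   (20,   1,   0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 17.1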
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.369 226109 DEBUG oslo_concurrency.lockutils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.492 226109 INFO nova.scheduler.client.report [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Deleted allocation for migration c599b82a-ac1d-47a0-919b-16ec8a9eb01d
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.577 226109 DEBUG oslo_concurrency.lockutils [None req-29a28ee7-170c-424a-b32b-49795120007c 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "d32579c4-62c8-41ac-9d01-b617cc7992ed" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.636 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.637 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.659 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.659 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.659 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.660 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:59:41 compute-1 nova_compute[226101]: 2025-12-06 06:59:41.660 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:59:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4098330128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:42 compute-1 nova_compute[226101]: 2025-12-06 06:59:42.143 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:42 compute-1 nova_compute[226101]: 2025-12-06 06:59:42.291 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:59:42 compute-1 nova_compute[226101]: 2025-12-06 06:59:42.292 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4959MB free_disk=20.921852111816406GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:59:42 compute-1 nova_compute[226101]: 2025-12-06 06:59:42.293 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:42 compute-1 nova_compute[226101]: 2025-12-06 06:59:42.293 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:42.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:42 compute-1 nova_compute[226101]: 2025-12-06 06:59:42.340 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:59:42 compute-1 nova_compute[226101]: 2025-12-06 06:59:42.341 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:59:42 compute-1 ceph-mon[81689]: osdmap e158: 3 total, 3 up, 3 in
Dec 06 06:59:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/237919553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:42 compute-1 nova_compute[226101]: 2025-12-06 06:59:42.381 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:42.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:42 compute-1 sshd-session[232455]: Invalid user manager from 45.135.232.92 port 44222
Dec 06 06:59:43 compute-1 sshd-session[232455]: Connection reset by invalid user manager 45.135.232.92 port 44222 [preauth]
Dec 06 06:59:43 compute-1 nova_compute[226101]: 2025-12-06 06:59:43.196 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:43 compute-1 nova_compute[226101]: 2025-12-06 06:59:43.452 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:43 compute-1 nova_compute[226101]: 2025-12-06 06:59:43.552 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004368.55126, d32579c4-62c8-41ac-9d01-b617cc7992ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:43 compute-1 nova_compute[226101]: 2025-12-06 06:59:43.553 226109 INFO nova.compute.manager [-] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] VM Stopped (Lifecycle Event)
Dec 06 06:59:43 compute-1 nova_compute[226101]: 2025-12-06 06:59:43.578 226109 DEBUG nova.compute.manager [None req-105b94d3-9e11-4403-8159-64689ce952cf - - - - - -] [instance: d32579c4-62c8-41ac-9d01-b617cc7992ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:44.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:44.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:44 compute-1 ceph-mon[81689]: pgmap v1155: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 582 KiB/s wr, 190 op/s
Dec 06 06:59:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4098330128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1155863205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:59:44 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2920678114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:44 compute-1 nova_compute[226101]: 2025-12-06 06:59:44.975 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:44 compute-1 nova_compute[226101]: 2025-12-06 06:59:44.981 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:59:45 compute-1 nova_compute[226101]: 2025-12-06 06:59:45.195 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:59:45 compute-1 nova_compute[226101]: 2025-12-06 06:59:45.305 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:59:45 compute-1 nova_compute[226101]: 2025-12-06 06:59:45.305 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:45 compute-1 ceph-mon[81689]: pgmap v1156: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 15 KiB/s wr, 177 op/s
Dec 06 06:59:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/323353182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:45 compute-1 nova_compute[226101]: 2025-12-06 06:59:45.845 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "2341fd69-a672-42e2-834b-0f7b8269c7ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:45 compute-1 nova_compute[226101]: 2025-12-06 06:59:45.845 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "2341fd69-a672-42e2-834b-0f7b8269c7ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:45 compute-1 nova_compute[226101]: 2025-12-06 06:59:45.863 226109 DEBUG nova.compute.manager [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 06:59:45 compute-1 nova_compute[226101]: 2025-12-06 06:59:45.945 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:45 compute-1 nova_compute[226101]: 2025-12-06 06:59:45.946 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:45 compute-1 nova_compute[226101]: 2025-12-06 06:59:45.953 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 06:59:45 compute-1 nova_compute[226101]: 2025-12-06 06:59:45.954 226109 INFO nova.compute.claims [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Claim successful on node compute-1.ctlplane.example.com
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.066 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:46 compute-1 podman[232507]: 2025-12-06 06:59:46.099131135 +0000 UTC m=+0.083265048 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
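[note] The health_status=healthy event is podman running the configured test command (/openstack/healthcheck) inside the ovn_controller container on its timer. The same probe can be triggered by hand; exit status 0 means healthy:

    import subprocess

    rc = subprocess.call(['podman', 'healthcheck', 'run', 'ovn_controller'])
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')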
Dec 06 06:59:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:46.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:46 compute-1 ceph-mon[81689]: pgmap v1157: 305 pgs: 305 active+clean; 170 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 670 KiB/s wr, 160 op/s
Dec 06 06:59:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2920678114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:59:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2461963196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:46.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.565 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.572 226109 DEBUG nova.compute.provider_tree [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.606 226109 DEBUG nova.scheduler.client.report [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.638 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.640 226109 DEBUG nova.compute.manager [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.690 226109 DEBUG nova.compute.manager [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.690 226109 DEBUG nova.network.neutron [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.711 226109 INFO nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.746 226109 DEBUG nova.compute.manager [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.848 226109 DEBUG nova.compute.manager [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.850 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.850 226109 INFO nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Creating image(s)
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.878 226109 DEBUG nova.storage.rbd_utils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.909 226109 DEBUG nova.storage.rbd_utils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.943 226109 DEBUG nova.storage.rbd_utils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:46 compute-1 nova_compute[226101]: 2025-12-06 06:59:46.948 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.012 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.013 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.014 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.014 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.041 226109 DEBUG nova.storage.rbd_utils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.044 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.258 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.259 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.522 226109 DEBUG nova.network.neutron [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.523 226109 DEBUG nova.compute.manager [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 06:59:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2461963196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.586 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.661 226109 DEBUG nova.storage.rbd_utils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] resizing rbd image 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
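[note] The import/resize pair seeds the instance's root disk: the cached base image is imported into the vms pool, then grown to the flavor's root disk (root_gb=1 for m1.nano, per the flavor dump further down). The resize target is exactly root_gb in bytes:

    print(1 * 1024**3)  # 1073741824, the resize target logged above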
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.784 226109 DEBUG nova.objects.instance [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'migration_context' on Instance uuid 2341fd69-a672-42e2-834b-0f7b8269c7ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.885 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.885 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Ensure instance console log exists: /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.886 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.886 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.886 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.888 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.894 226109 WARNING nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.900 226109 DEBUG nova.virt.libvirt.host [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.900 226109 DEBUG nova.virt.libvirt.host [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.905 226109 DEBUG nova.virt.libvirt.host [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.906 226109 DEBUG nova.virt.libvirt.host [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
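Note: the host.py lines report "CPU controller missing" for cgroups v1 and "CPU controller found" for cgroups v2, consistent with a unified-hierarchy host. A rough sketch of a v2 probe is below; reading /sys/fs/cgroup/cgroup.controllers is an assumption about the mechanism, not a transcription of nova's implementation.

# Hypothetical cgroups-v2 CPU controller probe matching the logged outcome.
from pathlib import Path

def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
    controllers = Path(root, 'cgroup.controllers')
    try:
        return 'cpu' in controllers.read_text().split()
    except OSError:
        # File absent: host is not on the unified cgroups-v2 hierarchy.
        return False

print(has_cgroupsv2_cpu_controller())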
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.908 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.908 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.909 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.909 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.909 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.909 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.909 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.910 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.910 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.910 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.911 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.911 226109 DEBUG nova.virt.hardware [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
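Note: the hardware.py trace walks from unset constraints ("limits 0:0:0", where 0 means no preference) to the single possible topology for one vCPU. The sketch below enumerates sockets*cores*threads factorizations under the 65536 caps printed above; it is a simplified, hypothetical stand-in for nova's _get_possible_cpu_topologies, not its actual code.

# Enumerate CPU topologies whose product equals the vCPU count.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    topos.append((s, c, t))
    return topos

print(possible_topologies(1))   # [(1, 1, 1)], matching the log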
Dec 06 06:59:47 compute-1 nova_compute[226101]: 2025-12-06 06:59:47.914 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.198 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:48.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
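Note: each radosgw "beast:" access line carries client, user, timestamp, request, status, size, and latency. The regex below pulls those fields out of the sample above; the field layout is inferred from these log samples only and may not cover every beast log variant.

# Field extraction for the beast access-log format seen in this journal.
import re

BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<size>\d+) .*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous '
        '[06/Dec/2025:06:59:48.322 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')
print(BEAST.match(line).groupdict())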
Dec 06 06:59:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:59:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1495032765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.376 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.402 226109 DEBUG nova.storage.rbd_utils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.409 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.453 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:48.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:48 compute-1 ceph-mon[81689]: pgmap v1158: 305 pgs: 305 active+clean; 182 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 MiB/s wr, 111 op/s
Dec 06 06:59:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1495032765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:59:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3413033794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.888 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
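Note: the driver shells out to "ceph mon dump --format=json" (twice here, once per RBD image probe) to learn the monitor endpoints that later appear as <host> elements in the guest XML. The sketch below replays the exact command from the log; it assumes the host has the client.openstack keyring and /etc/ceph/ceph.conf, and the 'mons'/'name'/'addr' JSON field names are an assumption about the mon dump output format.

# Replay the logged command and list monitor addresses.
import json
import subprocess

out = subprocess.run(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    capture_output=True, text=True, check=True,
).stdout
for mon in json.loads(out).get('mons', []):
    print(mon.get('name'), mon.get('addr'))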
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.890 226109 DEBUG nova.objects.instance [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2341fd69-a672-42e2-834b-0f7b8269c7ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.917 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] End _get_guest_xml xml=<domain type="kvm">
Dec 06 06:59:48 compute-1 nova_compute[226101]:   <uuid>2341fd69-a672-42e2-834b-0f7b8269c7ef</uuid>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   <name>instance-0000000d</name>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   <metadata>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <nova:name>tempest-MigrationsAdminTest-server-524387245</nova:name>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 06:59:47</nova:creationTime>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <nova:user uuid="538aa592cfb04958ab11223ed2d98106">tempest-MigrationsAdminTest-541331030-project-member</nova:user>
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <nova:project uuid="fc6c493097a84d069d178020ca398a25">tempest-MigrationsAdminTest-541331030</nova:project>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   </metadata>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <system>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <entry name="serial">2341fd69-a672-42e2-834b-0f7b8269c7ef</entry>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <entry name="uuid">2341fd69-a672-42e2-834b-0f7b8269c7ef</entry>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     </system>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   <os>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   </os>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   <features>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <apic/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   </features>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   </clock>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   </cpu>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   <devices>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/2341fd69-a672-42e2-834b-0f7b8269c7ef_disk">
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       </source>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       </auth>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/2341fd69-a672-42e2-834b-0f7b8269c7ef_disk.config">
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       </source>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 06:59:48 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       </auth>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     </disk>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/console.log" append="off"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     </serial>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <video>
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     </video>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     </rng>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 06:59:48 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 06:59:48 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 06:59:48 compute-1 nova_compute[226101]:   </devices>
Dec 06 06:59:48 compute-1 nova_compute[226101]: </domain>
Dec 06 06:59:48 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
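Note: the generated <domain> above defines two network disks in the "vms" RBD pool, each listing the three monitors resolved earlier. The standard-library sketch below parses that XML and recovers the source names and monitor endpoints; the embedded snippet is abbreviated from the logged document.

# Pull RBD sources and monitor endpoints back out of the domain XML.
import xml.etree.ElementTree as ET

xml = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/2341fd69-a672-42e2-834b-0f7b8269c7ef_disk">
        <host name="192.168.122.100" port="6789"/>
        <host name="192.168.122.102" port="6789"/>
        <host name="192.168.122.101" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""
root = ET.fromstring(xml)
for disk in root.iter('disk'):
    src = disk.find('source')
    hosts = [f"{h.get('name')}:{h.get('port')}" for h in src.iter('host')]
    print(src.get('name'), '->', ', '.join(hosts))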
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.968 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.969 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.969 226109 INFO nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Using config drive
Dec 06 06:59:48 compute-1 nova_compute[226101]: 2025-12-06 06:59:48.990 226109 DEBUG nova.storage.rbd_utils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:49 compute-1 nova_compute[226101]: 2025-12-06 06:59:49.167 226109 INFO nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Creating config drive at /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/disk.config
Dec 06 06:59:49 compute-1 nova_compute[226101]: 2025-12-06 06:59:49.173 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqji05w5s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:49 compute-1 nova_compute[226101]: 2025-12-06 06:59:49.304 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqji05w5s" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:49 compute-1 nova_compute[226101]: 2025-12-06 06:59:49.330 226109 DEBUG nova.storage.rbd_utils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:49 compute-1 nova_compute[226101]: 2025-12-06 06:59:49.334 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/disk.config 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:49 compute-1 sshd-session[232503]: Invalid user Admin from 45.135.232.92 port 44256
Dec 06 06:59:49 compute-1 nova_compute[226101]: 2025-12-06 06:59:49.503 226109 DEBUG oslo_concurrency.processutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/disk.config 2341fd69-a672-42e2-834b-0f7b8269c7ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:49 compute-1 nova_compute[226101]: 2025-12-06 06:59:49.504 226109 INFO nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Deleting local config drive /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/disk.config because it was imported into RBD.
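Note: the config-drive sequence above is three steps: build an ISO9660 image with mkisofs, import it into the "vms" pool with rbd import, then delete the local copy. The sketch below replays those steps via subprocess with paths and image names copied from the log (some cosmetic mkisofs flags omitted); it assumes mkisofs and rbd are on PATH and the client.openstack keyring is present.

# Replay of the logged config-drive flow: mkisofs -> rbd import -> cleanup.
import os
import subprocess

inst = '/var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef'
iso = f'{inst}/disk.config'

subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                '/tmp/tmpqji05w5s'], check=True)
subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                '2341fd69-a672-42e2-834b-0f7b8269c7ef_disk.config',
                '--image-format=2', '--id', 'openstack',
                '--conf', '/etc/ceph/ceph.conf'], check=True)
os.unlink(iso)  # local copy no longer needed once it lives in RBD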
Dec 06 06:59:49 compute-1 systemd-machined[190302]: New machine qemu-5-instance-0000000d.
Dec 06 06:59:49 compute-1 systemd[1]: Started Virtual Machine qemu-5-instance-0000000d.
Dec 06 06:59:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3413033794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:49 compute-1 sshd-session[232503]: Connection reset by invalid user Admin 45.135.232.92 port 44256 [preauth]
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.032 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004390.032037, 2341fd69-a672-42e2-834b-0f7b8269c7ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.034 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] VM Resumed (Lifecycle Event)
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.038 226109 DEBUG nova.compute.manager [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.039 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.045 226109 INFO nova.virt.libvirt.driver [-] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance spawned successfully.
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.046 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.057 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.063 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.067 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.067 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.068 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.068 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.068 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.069 226109 DEBUG nova.virt.libvirt.driver [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.113 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.113 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004390.0322208, 2341fd69-a672-42e2-834b-0f7b8269c7ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.113 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] VM Started (Lifecycle Event)
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.145 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.149 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
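Note: both "Synchronizing instance power state" lines compare DB power_state 0 with VM power_state 1 while task_state is still "spawning", and both syncs end in "Skip." A sketch of that guard is below; the 0/1 constants match the values printed above, and the skip-while-task-pending rule is inferred from the log messages, not quoted from nova's code.

# Hypothetical power-state sync guard implied by the logged behavior.
NOSTATE, RUNNING = 0, 1

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        return f'pending task ({task_state}), skip'
    if db_power_state != vm_power_state:
        return f'update DB: {db_power_state} -> {vm_power_state}'
    return 'in sync'

print(sync_power_state(NOSTATE, RUNNING, 'spawning'))  # skip, as logged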
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.154 226109 INFO nova.compute.manager [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Took 3.30 seconds to spawn the instance on the hypervisor.
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.154 226109 DEBUG nova.compute.manager [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.184 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.225 226109 INFO nova.compute.manager [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Took 4.30 seconds to build instance.
Dec 06 06:59:50 compute-1 nova_compute[226101]: 2025-12-06 06:59:50.245 226109 DEBUG oslo_concurrency.lockutils [None req-ac5bdc5b-630a-4edb-9f4a-bd153afc7e93 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "2341fd69-a672-42e2-834b-0f7b8269c7ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:50.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:50 compute-1 ovn_controller[130279]: 2025-12-06T06:59:50Z|00035|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 06:59:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:50.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:50 compute-1 ceph-mon[81689]: pgmap v1159: 305 pgs: 305 active+clean; 214 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.3 MiB/s wr, 153 op/s
Dec 06 06:59:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2009234372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2345918381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e159 e159: 3 total, 3 up, 3 in
Dec 06 06:59:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:52.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:52 compute-1 ceph-mon[81689]: pgmap v1160: 305 pgs: 305 active+clean; 234 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 946 KiB/s rd, 3.8 MiB/s wr, 154 op/s
Dec 06 06:59:52 compute-1 ceph-mon[81689]: osdmap e159: 3 total, 3 up, 3 in
Dec 06 06:59:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:52.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:53 compute-1 sshd-session[232896]: Connection reset by authenticating user root 45.135.232.92 port 46112 [preauth]
Dec 06 06:59:53 compute-1 nova_compute[226101]: 2025-12-06 06:59:53.199 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:53 compute-1 nova_compute[226101]: 2025-12-06 06:59:53.456 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:54 compute-1 nova_compute[226101]: 2025-12-06 06:59:54.290 226109 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:59:54 compute-1 nova_compute[226101]: 2025-12-06 06:59:54.291 226109 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:59:54 compute-1 nova_compute[226101]: 2025-12-06 06:59:54.291 226109 DEBUG nova.network.neutron [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 06:59:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:54.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:54.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:54 compute-1 nova_compute[226101]: 2025-12-06 06:59:54.670 226109 DEBUG nova.network.neutron [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:59:54 compute-1 nova_compute[226101]: 2025-12-06 06:59:54.897 226109 DEBUG nova.network.neutron [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:59:54 compute-1 ceph-mon[81689]: pgmap v1162: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.7 MiB/s wr, 196 op/s
Dec 06 06:59:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4292194976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:54 compute-1 nova_compute[226101]: 2025-12-06 06:59:54.915 226109 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:59:55 compute-1 nova_compute[226101]: 2025-12-06 06:59:55.037 226109 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 06:59:55 compute-1 nova_compute[226101]: 2025-12-06 06:59:55.037 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Creating file /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/dcb38827d5944a6e9a8685ce77050b79.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Dec 06 06:59:55 compute-1 nova_compute[226101]: 2025-12-06 06:59:55.038 226109 DEBUG oslo_concurrency.processutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/dcb38827d5944a6e9a8685ce77050b79.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:55 compute-1 nova_compute[226101]: 2025-12-06 06:59:55.460 226109 DEBUG oslo_concurrency.processutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/dcb38827d5944a6e9a8685ce77050b79.tmp" returned: 1 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:55 compute-1 nova_compute[226101]: 2025-12-06 06:59:55.461 226109 DEBUG oslo_concurrency.processutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef/dcb38827d5944a6e9a8685ce77050b79.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 06:59:55 compute-1 nova_compute[226101]: 2025-12-06 06:59:55.462 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Creating directory /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Dec 06 06:59:55 compute-1 nova_compute[226101]: 2025-12-06 06:59:55.462 226109 DEBUG oslo_concurrency.processutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:55 compute-1 sshd-session[232900]: Invalid user admin from 45.135.232.92 port 46126
Dec 06 06:59:55 compute-1 podman[232905]: 2025-12-06 06:59:55.614434795 +0000 UTC m=+0.049697763 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 06:59:55 compute-1 podman[232904]: 2025-12-06 06:59:55.626425828 +0000 UTC m=+0.063177516 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 06 06:59:55 compute-1 nova_compute[226101]: 2025-12-06 06:59:55.674 226109 DEBUG oslo_concurrency.processutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:55 compute-1 nova_compute[226101]: 2025-12-06 06:59:55.677 226109 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
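Note: migrate_disk_and_power_off first tries to touch a marker file on the destination over ssh; here the touch fails (exit 1, the instance directory does not exist there yet), so the driver falls back to mkdir -p before shutting the guest down. The sketch below shows the remote-touch-then-check-locally pattern this supports, which appears to be a shared-storage probe (if the marker also shows up locally, the instance path is shared). Host and path come from the log; probe_shared_storage() is a hypothetical helper, not nova's actual function.

# Hypothetical shared-storage probe built from the logged ssh commands.
import os
import subprocess
import uuid

def probe_shared_storage(host, inst_dir):
    marker = os.path.join(inst_dir, uuid.uuid4().hex + '.tmp')
    subprocess.run(['ssh', '-o', 'BatchMode=yes', host, 'touch', marker],
                   check=True)
    try:
        return os.path.exists(marker)  # True => inst_dir is shared storage
    finally:
        subprocess.run(['ssh', '-o', 'BatchMode=yes', host, 'rm', '-f', marker])

# probe_shared_storage('192.168.122.102',
#     '/var/lib/nova/instances/2341fd69-a672-42e2-834b-0f7b8269c7ef')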
Dec 06 06:59:55 compute-1 sshd-session[232900]: Connection reset by invalid user admin 45.135.232.92 port 46126 [preauth]
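Note: interleaved with the migration work, sshd records repeated invalid-user and preauth-reset attempts from 45.135.232.92. The sketch below tallies such attempts from journal text on stdin (e.g. piped from journalctl); the regex is keyed to the exact "Invalid user X from IP port N" message format shown in these lines.

# Count invalid-user ssh attempts per source IP from journal text.
import re
import sys
from collections import Counter

INVALID = re.compile(r'Invalid user (\S+) from (\S+) port \d+')
hits = Counter()
for line in sys.stdin:
    m = INVALID.search(line)
    if m:
        hits[m.group(2)] += 1
for ip, n in hits.most_common():
    print(f'{ip}: {n} attempts')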
Dec 06 06:59:56 compute-1 ceph-mon[81689]: pgmap v1163: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 222 op/s
Dec 06 06:59:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:56.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:56.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:58 compute-1 nova_compute[226101]: 2025-12-06 06:59:58.201 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:58.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:58 compute-1 nova_compute[226101]: 2025-12-06 06:59:58.457 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 06:59:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:58.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:59 compute-1 ceph-mon[81689]: pgmap v1164: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 226 op/s
Dec 06 06:59:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/843334347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:00.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:00.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:00 compute-1 ceph-mon[81689]: pgmap v1165: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 142 op/s
Dec 06 07:00:00 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 07:00:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:00:01.615 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:00:01.616 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:00:01.616 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
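[annotation] The three lines above are the standard oslo.concurrency trace for one lock lifecycle: "Acquiring", "acquired" (with wait time), "released" (with hold time), all emitted by the `inner` wrapper in lockutils.py. A minimal sketch of the pattern that produces them; the lock name is taken from the log, the body is a placeholder:

    # Guard a critical section with a named oslo.concurrency lock;
    # entering and leaving the decorated function emits exactly the
    # acquire/release DEBUG lines seen above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # critical section: runs while the named lock is held
        pass

    _check_child_processes()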
Dec 06 07:00:02 compute-1 ceph-mon[81689]: pgmap v1166: 305 pgs: 305 active+clean; 266 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 131 op/s
Dec 06 07:00:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:02.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:02.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:03 compute-1 nova_compute[226101]: 2025-12-06 07:00:03.202 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:03 compute-1 nova_compute[226101]: 2025-12-06 07:00:03.459 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:04.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:04 compute-1 ceph-mon[81689]: pgmap v1167: 305 pgs: 305 active+clean; 260 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 123 op/s
Dec 06 07:00:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:04.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:05 compute-1 ceph-mgr[82049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 07:00:05 compute-1 nova_compute[226101]: 2025-12-06 07:00:05.727 226109 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
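[annotation] The driver line above comes from nova's graceful-shutdown loop: it asks the guest to power off, polls, and re-sends the shutdown request while the domain is still running (libvirt power state 1). A hedged sketch of that poll-and-retry shape; the interval and timeout values and the guest API are illustrative assumptions, not nova's actual code:

    # Poll-and-retry shape behind "Instance in state 1 after 10 seconds -
    # resending shutdown" and the later "Instance shutdown successfully
    # after 15 seconds". Values and guest methods are assumptions.
    import time

    def clean_shutdown(guest, timeout=60, retry_interval=5):
        guest.shutdown()                  # first ACPI shutdown request
        waited = 0
        while waited < timeout:
            time.sleep(retry_interval)
            waited += retry_interval
            if not guest.is_running():    # libvirt state 1 == RUNNING
                return True               # clean shutdown succeeded
            guest.shutdown()              # re-send, as the log shows
        return False                      # caller falls back to destroy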
Dec 06 07:00:06 compute-1 sudo[232942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:06 compute-1 sudo[232942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:06 compute-1 sudo[232942]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:06 compute-1 sudo[232967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:00:06 compute-1 sudo[232967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:06 compute-1 sudo[232967]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:06 compute-1 sudo[232992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:06 compute-1 sudo[232992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:06 compute-1 sudo[232992]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:06 compute-1 sudo[233017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:00:06 compute-1 sudo[233017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:06.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:06.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3864384142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/807715224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:06 compute-1 sudo[233017]: pam_unix(sudo:session): session closed for user root
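[annotation] The ceph-admin sudo burst above is the cephadm orchestrator fingerprinting this host: verify passwordless sudo (/bin/true), locate python3, then run the staged cephadm binary with gather-facts. A hedged reproduction of the same sequence; the binary path is copied from the log, while the output handling (JSON with a "hostname" key) is an assumption:

    # Re-run the gather-facts sequence the sudo audit entries show.
    import json
    import subprocess

    subprocess.run(["sudo", "/bin/true"], check=True)
    python3 = subprocess.run(["sudo", "/bin/which", "python3"],
                             capture_output=True, text=True,
                             check=True).stdout.strip()
    facts = subprocess.run(
        ["sudo", python3,
         "/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/"
         "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
         "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True)
    print(json.loads(facts.stdout).get("hostname"))  # output key assumed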
Dec 06 07:00:08 compute-1 nova_compute[226101]: 2025-12-06 07:00:08.205 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:08 compute-1 ceph-mon[81689]: pgmap v1168: 305 pgs: 305 active+clean; 250 MiB data, 378 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 118 op/s
Dec 06 07:00:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:00:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:00:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:08.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:08 compute-1 nova_compute[226101]: 2025-12-06 07:00:08.461 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:08.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:09 compute-1 ceph-mon[81689]: pgmap v1169: 305 pgs: 305 active+clean; 239 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 116 op/s
Dec 06 07:00:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:00:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:00:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:00:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:00:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:00:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/197566583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/371361718' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:00:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/371361718' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
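[annotation] The audit entries above show client.openstack issuing structured JSON mon commands ("df", "osd pool get-quota") against the monitor. A hedged sketch of the same call through the python rados binding; only the client name and conf path are taken from the log, and a matching keyring is assumed to be in place:

    # Issue the same "df" mon command the audit log records for
    # client.openstack, and read the cluster totals from the reply.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        cmd = json.dumps({"prefix": "df", "format": "json"})
        ret, out, err = cluster.mon_command(cmd, b'')
        print(json.loads(out)["stats"]["total_avail_bytes"])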
Dec 06 07:00:09 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:00:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2009438146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:10.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:10.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:10 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec 06 07:00:10 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Consumed 14.554s CPU time.
Dec 06 07:00:10 compute-1 systemd-machined[190302]: Machine qemu-5-instance-0000000d terminated.
Dec 06 07:00:10 compute-1 nova_compute[226101]: 2025-12-06 07:00:10.901 226109 INFO nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance shutdown successfully after 15 seconds.
Dec 06 07:00:10 compute-1 nova_compute[226101]: 2025-12-06 07:00:10.908 226109 INFO nova.virt.libvirt.driver [-] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance destroyed successfully.
Dec 06 07:00:10 compute-1 nova_compute[226101]: 2025-12-06 07:00:10.911 226109 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:00:10 compute-1 nova_compute[226101]: 2025-12-06 07:00:10.912 226109 DEBUG nova.virt.libvirt.driver [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:00:11 compute-1 nova_compute[226101]: 2025-12-06 07:00:11.106 226109 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "2341fd69-a672-42e2-834b-0f7b8269c7ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:11 compute-1 nova_compute[226101]: 2025-12-06 07:00:11.107 226109 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "2341fd69-a672-42e2-834b-0f7b8269c7ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:11 compute-1 nova_compute[226101]: 2025-12-06 07:00:11.107 226109 DEBUG oslo_concurrency.lockutils [None req-b1fd0da5-b333-4628-8281-83852c8c87de 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "2341fd69-a672-42e2-834b-0f7b8269c7ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:11 compute-1 ceph-mon[81689]: pgmap v1170: 305 pgs: 305 active+clean; 278 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 150 op/s
Dec 06 07:00:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:12.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:12 compute-1 ceph-mon[81689]: pgmap v1171: 305 pgs: 305 active+clean; 287 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 165 op/s
Dec 06 07:00:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e160 e160: 3 total, 3 up, 3 in
Dec 06 07:00:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:12.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:13 compute-1 nova_compute[226101]: 2025-12-06 07:00:13.207 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:13 compute-1 nova_compute[226101]: 2025-12-06 07:00:13.486 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:14 compute-1 ceph-mon[81689]: osdmap e160: 3 total, 3 up, 3 in
Dec 06 07:00:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1111738318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:14.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:14.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:15 compute-1 nova_compute[226101]: 2025-12-06 07:00:15.714 226109 DEBUG oslo_concurrency.lockutils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "2341fd69-a672-42e2-834b-0f7b8269c7ef" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:15 compute-1 nova_compute[226101]: 2025-12-06 07:00:15.715 226109 DEBUG oslo_concurrency.lockutils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "2341fd69-a672-42e2-834b-0f7b8269c7ef" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:15 compute-1 nova_compute[226101]: 2025-12-06 07:00:15.715 226109 DEBUG nova.compute.manager [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Going to confirm migration 2 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Dec 06 07:00:16 compute-1 nova_compute[226101]: 2025-12-06 07:00:16.156 226109 DEBUG oslo_concurrency.lockutils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:00:16 compute-1 nova_compute[226101]: 2025-12-06 07:00:16.157 226109 DEBUG oslo_concurrency.lockutils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:00:16 compute-1 nova_compute[226101]: 2025-12-06 07:00:16.157 226109 DEBUG nova.network.neutron [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:00:16 compute-1 nova_compute[226101]: 2025-12-06 07:00:16.158 226109 DEBUG nova.objects.instance [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'info_cache' on Instance uuid 2341fd69-a672-42e2-834b-0f7b8269c7ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:16 compute-1 ceph-mon[81689]: pgmap v1173: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.1 MiB/s wr, 172 op/s
Dec 06 07:00:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3130597809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:16.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:16 compute-1 nova_compute[226101]: 2025-12-06 07:00:16.521 226109 DEBUG nova.network.neutron [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:00:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:16.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:17 compute-1 podman[233078]: 2025-12-06 07:00:17.118032411 +0000 UTC m=+0.099230140 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 06 07:00:17 compute-1 nova_compute[226101]: 2025-12-06 07:00:17.150 226109 DEBUG nova.network.neutron [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:00:17 compute-1 nova_compute[226101]: 2025-12-06 07:00:17.167 226109 DEBUG oslo_concurrency.lockutils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-2341fd69-a672-42e2-834b-0f7b8269c7ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:00:17 compute-1 nova_compute[226101]: 2025-12-06 07:00:17.167 226109 DEBUG nova.objects.instance [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'migration_context' on Instance uuid 2341fd69-a672-42e2-834b-0f7b8269c7ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:17 compute-1 ceph-mon[81689]: pgmap v1174: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.0 MiB/s wr, 229 op/s
Dec 06 07:00:17 compute-1 nova_compute[226101]: 2025-12-06 07:00:17.310 226109 DEBUG nova.storage.rbd_utils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] removing snapshot(nova-resize) on rbd image(2341fd69-a672-42e2-834b-0f7b8269c7ef_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
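[annotation] The remove_snap line above is the tail end of resize-confirm: nova drops the "nova-resize" snapshot it kept on the instance disk in case the resize had to be reverted. A hedged equivalent with the python rbd binding; the pool name 'vms' is an assumption (nova's default images_rbd_pool), while the image and snapshot names come from the log:

    # Delete the nova-resize snapshot on the instance's RBD disk, as
    # rbd_utils.remove_snap reports. Pool name is an assumption.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           '2341fd69-a672-42e2-834b-0f7b8269c7ef_disk') as img:
                if img.is_protected_snap('nova-resize'):
                    img.unprotect_snap('nova-resize')
                img.remove_snap('nova-resize')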
Dec 06 07:00:18 compute-1 nova_compute[226101]: 2025-12-06 07:00:18.208 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:18 compute-1 sudo[233140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:18 compute-1 sudo[233140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:18 compute-1 sudo[233140]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:18 compute-1 ceph-mon[81689]: pgmap v1175: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.3 MiB/s wr, 267 op/s
Dec 06 07:00:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:18 compute-1 sudo[233165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:00:18 compute-1 sudo[233165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:18 compute-1 sudo[233165]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:18.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:18 compute-1 nova_compute[226101]: 2025-12-06 07:00:18.489 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e161 e161: 3 total, 3 up, 3 in
Dec 06 07:00:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:18.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:19 compute-1 nova_compute[226101]: 2025-12-06 07:00:19.229 226109 DEBUG oslo_concurrency.lockutils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:19 compute-1 nova_compute[226101]: 2025-12-06 07:00:19.229 226109 DEBUG oslo_concurrency.lockutils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:19 compute-1 nova_compute[226101]: 2025-12-06 07:00:19.628 226109 DEBUG oslo_concurrency.processutils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:19 compute-1 ceph-mon[81689]: osdmap e161: 3 total, 3 up, 3 in
Dec 06 07:00:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:20.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:00:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/776762329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:20 compute-1 nova_compute[226101]: 2025-12-06 07:00:20.567 226109 DEBUG oslo_concurrency.processutils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.938s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
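[annotation] The CMD line above is nova shelling out to the ceph CLI via oslo.concurrency's processutils; the matching mon-side view is the client.openstack "df" audit entry at 07:00:20. A minimal reproduction of that call:

    # Run the exact command nova logs and parse its JSON the way the
    # RBD image backend does when sizing the DISK_GB inventory.
    import json
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])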
Dec 06 07:00:20 compute-1 nova_compute[226101]: 2025-12-06 07:00:20.574 226109 DEBUG nova.compute.provider_tree [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:00:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:20.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:20 compute-1 nova_compute[226101]: 2025-12-06 07:00:20.626 226109 DEBUG nova.scheduler.client.report [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
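[annotation] The inventory dict above maps directly to schedulable capacity in placement: capacity = (total - reserved) * allocation_ratio per resource class, so this host offers 7168 MB of RAM, 32 VCPUs, and 17.1 GB of disk to the scheduler. Worked out:

    # Placement capacity formula applied to the logged inventory:
    # capacity = (total - reserved) * allocation_ratio
    inv = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 17.1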
Dec 06 07:00:20 compute-1 ceph-mon[81689]: pgmap v1177: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 7.8 MiB/s rd, 418 KiB/s wr, 319 op/s
Dec 06 07:00:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/776762329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:21 compute-1 nova_compute[226101]: 2025-12-06 07:00:21.367 226109 DEBUG oslo_concurrency.lockutils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 2.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:21 compute-1 nova_compute[226101]: 2025-12-06 07:00:21.559 226109 INFO nova.scheduler.client.report [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Deleted allocation for migration 0b2af42c-f92a-4adc-b824-3a317ca631b0
Dec 06 07:00:21 compute-1 nova_compute[226101]: 2025-12-06 07:00:21.782 226109 DEBUG oslo_concurrency.lockutils [None req-be8940ad-4884-4f3c-a008-212e06681ccd 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "2341fd69-a672-42e2-834b-0f7b8269c7ef" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 6.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:22 compute-1 ceph-mon[81689]: pgmap v1178: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 8.0 MiB/s rd, 3.7 KiB/s wr, 318 op/s
Dec 06 07:00:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:22.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:22.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:23 compute-1 nova_compute[226101]: 2025-12-06 07:00:23.211 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:23 compute-1 nova_compute[226101]: 2025-12-06 07:00:23.490 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:00:23.569 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:00:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:00:23.569 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:00:23 compute-1 nova_compute[226101]: 2025-12-06 07:00:23.610 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:24.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:24.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:24 compute-1 ceph-mon[81689]: pgmap v1179: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 6.7 MiB/s rd, 3.6 KiB/s wr, 267 op/s
Dec 06 07:00:25 compute-1 nova_compute[226101]: 2025-12-06 07:00:25.903 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004410.9007587, 2341fd69-a672-42e2-834b-0f7b8269c7ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:00:25 compute-1 nova_compute[226101]: 2025-12-06 07:00:25.904 226109 INFO nova.compute.manager [-] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] VM Stopped (Lifecycle Event)
Dec 06 07:00:26 compute-1 ceph-mon[81689]: pgmap v1180: 305 pgs: 305 active+clean; 298 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 628 KiB/s wr, 174 op/s
Dec 06 07:00:26 compute-1 podman[233214]: 2025-12-06 07:00:26.079458004 +0000 UTC m=+0.047835242 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:00:26 compute-1 podman[233213]: 2025-12-06 07:00:26.088084426 +0000 UTC m=+0.062844726 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 06 07:00:26 compute-1 nova_compute[226101]: 2025-12-06 07:00:26.171 226109 DEBUG nova.compute.manager [None req-2ec1d87c-d78b-468c-8014-2406f0198d2d - - - - - -] [instance: 2341fd69-a672-42e2-834b-0f7b8269c7ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:00:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:26.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:26.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:28 compute-1 nova_compute[226101]: 2025-12-06 07:00:28.213 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e162 e162: 3 total, 3 up, 3 in
Dec 06 07:00:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:28.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:28 compute-1 nova_compute[226101]: 2025-12-06 07:00:28.493 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:00:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:28.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:00:29 compute-1 ceph-mon[81689]: pgmap v1181: 305 pgs: 305 active+clean; 325 MiB data, 430 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 130 op/s
Dec 06 07:00:29 compute-1 ceph-mon[81689]: osdmap e162: 3 total, 3 up, 3 in
Dec 06 07:00:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:30.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:30.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:30 compute-1 ceph-mon[81689]: pgmap v1183: 305 pgs: 305 active+clean; 336 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.4 MiB/s wr, 112 op/s
Dec 06 07:00:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:32.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:00:32.571 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:00:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:32.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:32 compute-1 ceph-mon[81689]: pgmap v1184: 305 pgs: 305 active+clean; 341 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 622 KiB/s rd, 4.9 MiB/s wr, 117 op/s
Dec 06 07:00:33 compute-1 nova_compute[226101]: 2025-12-06 07:00:33.216 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:33 compute-1 nova_compute[226101]: 2025-12-06 07:00:33.495 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:34.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:34.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:36 compute-1 nova_compute[226101]: 2025-12-06 07:00:36.260 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "8818a36b-f8ca-411f-9037-85036a64a941" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:36 compute-1 nova_compute[226101]: 2025-12-06 07:00:36.260 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "8818a36b-f8ca-411f-9037-85036a64a941" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:36 compute-1 nova_compute[226101]: 2025-12-06 07:00:36.291 226109 DEBUG nova.compute.manager [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:00:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:36.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:36 compute-1 nova_compute[226101]: 2025-12-06 07:00:36.463 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:36 compute-1 nova_compute[226101]: 2025-12-06 07:00:36.464 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:36 compute-1 nova_compute[226101]: 2025-12-06 07:00:36.473 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:00:36 compute-1 nova_compute[226101]: 2025-12-06 07:00:36.473 226109 INFO nova.compute.claims [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:00:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:36.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:36 compute-1 nova_compute[226101]: 2025-12-06 07:00:36.713 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:00:37 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/60258488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:37 compute-1 nova_compute[226101]: 2025-12-06 07:00:37.518 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.805s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:37 compute-1 nova_compute[226101]: 2025-12-06 07:00:37.525 226109 DEBUG nova.compute.provider_tree [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:00:37 compute-1 nova_compute[226101]: 2025-12-06 07:00:37.593 226109 DEBUG nova.scheduler.client.report [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:00:37 compute-1 ceph-mon[81689]: pgmap v1185: 305 pgs: 305 active+clean; 346 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 809 KiB/s rd, 5.0 MiB/s wr, 137 op/s
Dec 06 07:00:37 compute-1 nova_compute[226101]: 2025-12-06 07:00:37.982 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:37 compute-1 nova_compute[226101]: 2025-12-06 07:00:37.983 226109 DEBUG nova.compute.manager [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:00:38 compute-1 nova_compute[226101]: 2025-12-06 07:00:38.195 226109 DEBUG nova.compute.manager [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:00:38 compute-1 nova_compute[226101]: 2025-12-06 07:00:38.196 226109 DEBUG nova.network.neutron [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:00:38 compute-1 nova_compute[226101]: 2025-12-06 07:00:38.218 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:38 compute-1 nova_compute[226101]: 2025-12-06 07:00:38.382 226109 INFO nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:00:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:38.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:38 compute-1 nova_compute[226101]: 2025-12-06 07:00:38.439 226109 DEBUG nova.compute.manager [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:00:38 compute-1 nova_compute[226101]: 2025-12-06 07:00:38.495 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:38 compute-1 nova_compute[226101]: 2025-12-06 07:00:38.579 226109 DEBUG nova.network.neutron [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 07:00:38 compute-1 nova_compute[226101]: 2025-12-06 07:00:38.580 226109 DEBUG nova.compute.manager [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:00:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:38.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:38 compute-1 nova_compute[226101]: 2025-12-06 07:00:38.815 226109 DEBUG nova.compute.manager [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:00:38 compute-1 nova_compute[226101]: 2025-12-06 07:00:38.816 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:00:38 compute-1 nova_compute[226101]: 2025-12-06 07:00:38.817 226109 INFO nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Creating image(s)
Dec 06 07:00:38 compute-1 ceph-mon[81689]: pgmap v1186: 305 pgs: 305 active+clean; 351 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.4 MiB/s wr, 158 op/s
Dec 06 07:00:38 compute-1 ceph-mon[81689]: pgmap v1187: 305 pgs: 305 active+clean; 353 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Dec 06 07:00:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/60258488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.485 226109 DEBUG nova.storage.rbd_utils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 8818a36b-f8ca-411f-9037-85036a64a941_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.511 226109 DEBUG nova.storage.rbd_utils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 8818a36b-f8ca-411f-9037-85036a64a941_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.551 226109 DEBUG nova.storage.rbd_utils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 8818a36b-f8ca-411f-9037-85036a64a941_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.556 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.624 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
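The command above shows how nova probes its local image cache: qemu-img info runs under oslo_concurrency.prlimit so a malformed image cannot hang the service or exhaust its memory (address space capped at 1 GiB, CPU time at 30 s). A minimal sketch of the same invocation, assuming oslo.concurrency is installed and the cached base image from the log exists:

    # Sketch: inspect a cached base image the way the logged command does.
    # "virtual-size" and "format" are standard keys in qemu-img's JSON output.
    import json
    import subprocess

    path = '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef'
    out = subprocess.check_output([
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', path, '--force-share', '--output=json',
    ])
    info = json.loads(out)
    print(info['format'], info['virtual-size'])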
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.624 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.625 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.625 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.651 226109 DEBUG nova.storage.rbd_utils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 8818a36b-f8ca-411f-9037-85036a64a941_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:39 compute-1 nova_compute[226101]: 2025-12-06 07:00:39.655 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8818a36b-f8ca-411f-9037-85036a64a941_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:40 compute-1 ceph-mon[81689]: pgmap v1188: 305 pgs: 305 active+clean; 360 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 139 op/s
Dec 06 07:00:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:40.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:40 compute-1 nova_compute[226101]: 2025-12-06 07:00:40.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:40.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:41 compute-1 nova_compute[226101]: 2025-12-06 07:00:41.285 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8818a36b-f8ca-411f-9037-85036a64a941_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
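Taken together with the "does not exist" probes above, this is the RBD-backed root-disk flow: the Glance image is fetched once into the _base cache (under the image-hash lock), imported into the vms pool as <uuid>_disk, then grown to the flavor's root_gb (the resize to 1073741824 bytes, i.e. 1 GiB, is logged at 07:00:42 below). A sketch of the import-and-resize pair; nova shells out for the import as logged but performs the resize through the rbd Python bindings, so the CLI resize here is an equivalent, not what nova executes:

    # Sketch: import a cached base image into Ceph and grow it to the
    # flavor's root disk size, mirroring the steps visible in the log.
    import subprocess

    base = '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef'
    image = '8818a36b-f8ca-411f-9037-85036a64a941_disk'
    common = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, image,
                           '--image-format=2'] + common)
    # rbd accepts a G suffix on recent releases; 1G == the 1073741824
    # bytes logged by nova.storage.rbd_utils.
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', image,
                           '--size', '1G'] + common)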
Dec 06 07:00:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1361533499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:42 compute-1 nova_compute[226101]: 2025-12-06 07:00:42.113 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:42 compute-1 nova_compute[226101]: 2025-12-06 07:00:42.114 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:00:42 compute-1 nova_compute[226101]: 2025-12-06 07:00:42.114 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:00:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:42 compute-1 nova_compute[226101]: 2025-12-06 07:00:42.158 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:00:42 compute-1 nova_compute[226101]: 2025-12-06 07:00:42.158 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:00:42 compute-1 nova_compute[226101]: 2025-12-06 07:00:42.159 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:42 compute-1 nova_compute[226101]: 2025-12-06 07:00:42.162 226109 DEBUG nova.storage.rbd_utils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] resizing rbd image 8818a36b-f8ca-411f-9037-85036a64a941_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:00:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:42.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:42 compute-1 nova_compute[226101]: 2025-12-06 07:00:42.491 226109 DEBUG nova.objects.instance [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'migration_context' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:42 compute-1 nova_compute[226101]: 2025-12-06 07:00:42.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:42 compute-1 nova_compute[226101]: 2025-12-06 07:00:42.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:42.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:42 compute-1 ceph-mon[81689]: pgmap v1189: 305 pgs: 305 active+clean; 368 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 694 KiB/s rd, 693 KiB/s wr, 97 op/s
Dec 06 07:00:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3072397604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.148 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.148 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.149 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.149 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.149 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.169 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.169 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Ensure instance console log exists: /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.170 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.170 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.170 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.172 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.176 226109 WARNING nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.180 226109 DEBUG nova.virt.libvirt.host [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.181 226109 DEBUG nova.virt.libvirt.host [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.183 226109 DEBUG nova.virt.libvirt.host [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.184 226109 DEBUG nova.virt.libvirt.host [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.185 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.185 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T07:00:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0042aafa-1545-4d05-86d6-22053e265b25',id=26,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-11178841',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.186 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.186 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.186 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.186 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.186 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.187 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.187 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.187 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.187 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.187 226109 DEBUG nova.virt.hardware [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.190 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.220 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.497 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:00:43 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3478449055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:00:43 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2206640894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.616 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.634 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.657 226109 DEBUG nova.storage.rbd_utils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 8818a36b-f8ca-411f-9037-85036a64a941_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.661 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.832 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.833 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4917MB free_disk=20.805419921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.833 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:43 compute-1 nova_compute[226101]: 2025-12-06 07:00:43.833 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.240 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 8818a36b-f8ca-411f-9037-85036a64a941 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.240 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.241 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
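The used figures in this resource view line up with the single claimed instance: used_ram 640 MB is the 512 MB host reserve plus the flavor's 128 MB, and used_disk 1 GB with used_vcpus 1 matches root_gb=1 and vcpus=1 from the tempest resize flavor logged at 07:00:43.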
Dec 06 07:00:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:00:44 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/710089016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.302 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:44.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.481 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.820s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.484 226109 DEBUG nova.objects.instance [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.557 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:00:44 compute-1 nova_compute[226101]:   <uuid>8818a36b-f8ca-411f-9037-85036a64a941</uuid>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   <name>instance-0000000f</name>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <nova:name>tempest-MigrationsAdminTest-server-1896797625</nova:name>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:00:43</nova:creationTime>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <nova:flavor name="tempest-test_resize_flavor_-11178841">
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <nova:user uuid="538aa592cfb04958ab11223ed2d98106">tempest-MigrationsAdminTest-541331030-project-member</nova:user>
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <nova:project uuid="fc6c493097a84d069d178020ca398a25">tempest-MigrationsAdminTest-541331030</nova:project>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <system>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <entry name="serial">8818a36b-f8ca-411f-9037-85036a64a941</entry>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <entry name="uuid">8818a36b-f8ca-411f-9037-85036a64a941</entry>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     </system>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   <os>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   </os>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   <features>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   </features>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8818a36b-f8ca-411f-9037-85036a64a941_disk">
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       </source>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8818a36b-f8ca-411f-9037-85036a64a941_disk.config">
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       </source>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:00:44 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/console.log" append="off"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <video>
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     </video>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:00:44 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:00:44 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:00:44 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:00:44 compute-1 nova_compute[226101]: </domain>
Dec 06 07:00:44 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
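The XML above is handed to libvirt next: nova defines the domain and starts it, which is what produces the systemd-machined "New machine qemu-6-instance-0000000f" line at 07:00:46 below. Note that both <auth> elements assume a libvirt secret of type ceph with uuid 40a1bae4-cf76-5610-8dab-c75116dfe0bb already holding the client.openstack keyring; without it the RBD disks cannot attach. A reduced sketch of the define-and-start step, assuming the libvirt-python bindings and omitting nova's Guest wrapper, event registration, and error handling:

    # Sketch: define and boot a domain from the XML document logged above.
    # 'domain.xml' is a hypothetical file holding that <domain> element.
    import libvirt

    xml = open('domain.xml').read()
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)      # persist the definition (instance-0000000f)
    dom.create()                   # boot the guest
    print(dom.name(), 'running' if dom.isActive() else 'defined')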
Dec 06 07:00:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:44.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.753 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.753 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.754 226109 INFO nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Using config drive
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.773 226109 DEBUG nova.storage.rbd_utils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 8818a36b-f8ca-411f-9037-85036a64a941_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:00:44 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/817716717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.970 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.975 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:00:44 compute-1 nova_compute[226101]: 2025-12-06 07:00:44.996 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:00:45 compute-1 nova_compute[226101]: 2025-12-06 07:00:45.024 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:00:45 compute-1 nova_compute[226101]: 2025-12-06 07:00:45.024 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:45 compute-1 nova_compute[226101]: 2025-12-06 07:00:45.037 226109 INFO nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Creating config drive at /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/disk.config
Dec 06 07:00:45 compute-1 nova_compute[226101]: 2025-12-06 07:00:45.042 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpngg309v8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:45 compute-1 nova_compute[226101]: 2025-12-06 07:00:45.166 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpngg309v8" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
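The config drive built here is a plain ISO 9660 image: mkisofs packs a temporary directory of metadata files into an image whose volume label, config-2, is the marker cloud-init searches for when mounting the drive. A sketch reproducing the logged invocation; the input directory name is illustrative (the real one was the transient /tmp/tmpngg309v8):

    # Sketch: build a config-drive ISO with the same flags nova logged above.
    import subprocess

    subprocess.check_call([
        '/usr/bin/mkisofs',
        '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r',
        '-V', 'config-2',            # the volume label guests key on
        '/path/to/metadata-tree',    # hypothetical stand-in for the tmp dir
    ])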
Dec 06 07:00:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3478449055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2206640894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:45 compute-1 nova_compute[226101]: 2025-12-06 07:00:45.639 226109 DEBUG nova.storage.rbd_utils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 8818a36b-f8ca-411f-9037-85036a64a941_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:45 compute-1 nova_compute[226101]: 2025-12-06 07:00:45.643 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/disk.config 8818a36b-f8ca-411f-9037-85036a64a941_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:46 compute-1 nova_compute[226101]: 2025-12-06 07:00:46.009 226109 DEBUG oslo_concurrency.processutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/disk.config 8818a36b-f8ca-411f-9037-85036a64a941_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:46 compute-1 nova_compute[226101]: 2025-12-06 07:00:46.010 226109 INFO nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Deleting local config drive /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/disk.config because it was imported into RBD.
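[editor's note] The three entries above show the config drive being pushed into Ceph: rbd_utils confirms the image does not yet exist, `rbd import` copies the local file into the vms pool, and the local copy is deleted once the import succeeds. A compact sketch of that import-then-clean-up sequence, reusing the pool, client id, and conf path from the logged command; illustrative only, not nova.storage.rbd_utils:

    import os
    import subprocess

    def import_config_drive_to_rbd(local_path, image_name):
        # Matches the logged command: import into the "vms" pool as a
        # format-2 RBD image using the openstack client identity.
        subprocess.run(
            ["rbd", "import", "--pool", "vms", local_path, image_name,
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True,
        )
        # "Deleting local config drive ... because it was imported into RBD."
        os.remove(local_path)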
Dec 06 07:00:46 compute-1 nova_compute[226101]: 2025-12-06 07:00:46.017 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:46 compute-1 nova_compute[226101]: 2025-12-06 07:00:46.058 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:46 compute-1 nova_compute[226101]: 2025-12-06 07:00:46.059 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:46 compute-1 systemd-machined[190302]: New machine qemu-6-instance-0000000f.
Dec 06 07:00:46 compute-1 systemd[1]: Started Virtual Machine qemu-6-instance-0000000f.
Dec 06 07:00:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:46.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:46.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:46 compute-1 ceph-mon[81689]: pgmap v1190: 305 pgs: 305 active+clean; 377 MiB data, 486 MiB used, 21 GiB / 21 GiB avail; 636 KiB/s rd, 560 KiB/s wr, 87 op/s
Dec 06 07:00:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/710089016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2444872434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:46 compute-1 ceph-mon[81689]: pgmap v1191: 305 pgs: 305 active+clean; 350 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 490 KiB/s rd, 1.4 MiB/s wr, 85 op/s
Dec 06 07:00:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/817716717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3634300850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.109 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004447.1086674, 8818a36b-f8ca-411f-9037-85036a64a941 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.110 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] VM Resumed (Lifecycle Event)
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.111 226109 DEBUG nova.compute.manager [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.112 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.115 226109 INFO nova.virt.libvirt.driver [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance spawned successfully.
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.115 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:00:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.385 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.389 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.690 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.690 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.692 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.692 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.693 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:47 compute-1 nova_compute[226101]: 2025-12-06 07:00:47.693 226109 DEBUG nova.virt.libvirt.driver [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
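[editor's note] The six "Found default for ..." entries above show _register_undefined_instance_details filling in image properties the image never set. A dict-based stand-in for that behavior, with the defaults copied from the log; not nova's actual implementation, which derives them from the generated guest config:

    # Defaults exactly as logged for this instance.
    LIBVIRT_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined(image_props):
        # Only fill properties the image left undefined; never override.
        for key, default in LIBVIRT_DEFAULTS.items():
            image_props.setdefault(key, default)
        return image_props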
Dec 06 07:00:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4110431903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:47 compute-1 ceph-mon[81689]: pgmap v1192: 305 pgs: 305 active+clean; 360 MiB data, 476 MiB used, 21 GiB / 21 GiB avail; 276 KiB/s rd, 3.1 MiB/s wr, 114 op/s
Dec 06 07:00:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2989411430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:48 compute-1 podman[233664]: 2025-12-06 07:00:48.112308821 +0000 UTC m=+0.089487415 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:00:48 compute-1 nova_compute[226101]: 2025-12-06 07:00:48.141 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:00:48 compute-1 nova_compute[226101]: 2025-12-06 07:00:48.141 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004447.1093, 8818a36b-f8ca-411f-9037-85036a64a941 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:00:48 compute-1 nova_compute[226101]: 2025-12-06 07:00:48.142 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] VM Started (Lifecycle Event)
Dec 06 07:00:48 compute-1 nova_compute[226101]: 2025-12-06 07:00:48.222 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:48.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:48 compute-1 nova_compute[226101]: 2025-12-06 07:00:48.499 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:48.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:49 compute-1 nova_compute[226101]: 2025-12-06 07:00:49.668 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:00:49 compute-1 nova_compute[226101]: 2025-12-06 07:00:49.672 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:00:49 compute-1 nova_compute[226101]: 2025-12-06 07:00:49.756 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] During sync_power_state the instance has a pending task (spawning). Skip.
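[editor's note] The "Synchronizing instance power state" and "Skip" entries above capture the rule handle_lifecycle_event applies while the build is still in flight: the hypervisor already reports the VM running (power_state 1) but the DB still says 0, and sync is deferred because task_state is "spawning". A reduced sketch of that decision, assuming the numeric states from the log (0 = NOSTATE, 1 = RUNNING):

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:            # e.g. "spawning"
            return "skip"                     # "has a pending task ... Skip."
        if db_power_state != vm_power_state:
            return "update-db"                # reconcile DB with hypervisor
        return "in-sync"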
Dec 06 07:00:49 compute-1 nova_compute[226101]: 2025-12-06 07:00:49.847 226109 INFO nova.compute.manager [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Took 11.03 seconds to spawn the instance on the hypervisor.
Dec 06 07:00:49 compute-1 nova_compute[226101]: 2025-12-06 07:00:49.848 226109 DEBUG nova.compute.manager [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:00:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:50.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:50.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:50 compute-1 ceph-mon[81689]: pgmap v1193: 305 pgs: 305 active+clean; 360 MiB data, 476 MiB used, 21 GiB / 21 GiB avail; 149 KiB/s rd, 3.0 MiB/s wr, 104 op/s
Dec 06 07:00:51 compute-1 nova_compute[226101]: 2025-12-06 07:00:51.363 226109 INFO nova.compute.manager [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Took 14.93 seconds to build instance.
Dec 06 07:00:51 compute-1 nova_compute[226101]: 2025-12-06 07:00:51.902 226109 DEBUG oslo_concurrency.lockutils [None req-09a7cfb3-578b-4e6f-a2e6-136a063ac9ce 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "8818a36b-f8ca-411f-9037-85036a64a941" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
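[editor's note] The lockutils entry above closes the per-instance build lock after holding it for 15.642s, matching the acquire/hold/release bookkeeping oslo.concurrency prints throughout this log. A timing context manager that reproduces that bookkeeping in miniature; illustrative, not the oslo_concurrency API:

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def timed_lock(name):
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        lock.acquire()                        # "Acquiring lock ..."
        waited = time.monotonic() - t0
        t1 = time.monotonic()
        try:
            yield
        finally:
            lock.release()
            print(f'Lock "{name}" released :: waited {waited:.3f}s, '
                  f'held {time.monotonic() - t1:.3f}s')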
Dec 06 07:00:51 compute-1 ceph-mon[81689]: pgmap v1194: 305 pgs: 305 active+clean; 376 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Dec 06 07:00:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:52.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:52.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:53 compute-1 nova_compute[226101]: 2025-12-06 07:00:53.223 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:53 compute-1 nova_compute[226101]: 2025-12-06 07:00:53.500 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:54.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:54 compute-1 ceph-mon[81689]: pgmap v1195: 305 pgs: 305 active+clean; 376 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 155 op/s
Dec 06 07:00:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:54.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:56.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:56.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:56 compute-1 ceph-mon[81689]: pgmap v1196: 305 pgs: 305 active+clean; 341 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 167 op/s
Dec 06 07:00:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/487672312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:57 compute-1 podman[233692]: 2025-12-06 07:00:57.070338631 +0000 UTC m=+0.053066863 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:00:57 compute-1 podman[233693]: 2025-12-06 07:00:57.07806883 +0000 UTC m=+0.055279413 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:00:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:58 compute-1 nova_compute[226101]: 2025-12-06 07:00:58.224 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:58.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:58 compute-1 nova_compute[226101]: 2025-12-06 07:00:58.502 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:00:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:58.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:58 compute-1 ceph-mon[81689]: pgmap v1197: 305 pgs: 305 active+clean; 296 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 233 op/s
Dec 06 07:00:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1289067618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:59 compute-1 nova_compute[226101]: 2025-12-06 07:00:59.946 226109 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:00:59 compute-1 nova_compute[226101]: 2025-12-06 07:00:59.947 226109 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:00:59 compute-1 nova_compute[226101]: 2025-12-06 07:00:59.947 226109 DEBUG nova.network.neutron [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:01:00 compute-1 nova_compute[226101]: 2025-12-06 07:01:00.196 226109 DEBUG nova.network.neutron [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:01:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:00.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4229951688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:00.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:01.617 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:01.617 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:01.617 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:01 compute-1 CROND[233730]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 07:01:01 compute-1 run-parts[233733]: (/etc/cron.hourly) starting 0anacron
Dec 06 07:01:01 compute-1 run-parts[233739]: (/etc/cron.hourly) finished 0anacron
Dec 06 07:01:01 compute-1 CROND[233729]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 07:01:02 compute-1 ceph-mon[81689]: pgmap v1198: 305 pgs: 305 active+clean; 296 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 617 KiB/s wr, 173 op/s
Dec 06 07:01:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:02 compute-1 nova_compute[226101]: 2025-12-06 07:01:02.166 226109 DEBUG nova.network.neutron [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:01:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:02.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:02.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:03 compute-1 ceph-mon[81689]: pgmap v1199: 305 pgs: 305 active+clean; 302 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 208 op/s
Dec 06 07:01:03 compute-1 nova_compute[226101]: 2025-12-06 07:01:03.225 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:03 compute-1 nova_compute[226101]: 2025-12-06 07:01:03.541 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:04 compute-1 nova_compute[226101]: 2025-12-06 07:01:04.202 226109 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:01:04 compute-1 ceph-mon[81689]: pgmap v1200: 305 pgs: 305 active+clean; 308 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 184 op/s
Dec 06 07:01:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:04.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:04.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:05 compute-1 nova_compute[226101]: 2025-12-06 07:01:05.889 226109 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 07:01:05 compute-1 nova_compute[226101]: 2025-12-06 07:01:05.890 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Creating file /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/a8fc3519f21f4e15a0cbec02f9e38772.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Dec 06 07:01:05 compute-1 nova_compute[226101]: 2025-12-06 07:01:05.890 226109 DEBUG oslo_concurrency.processutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/a8fc3519f21f4e15a0cbec02f9e38772.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:06 compute-1 nova_compute[226101]: 2025-12-06 07:01:06.409 226109 DEBUG oslo_concurrency.processutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/a8fc3519f21f4e15a0cbec02f9e38772.tmp" returned: 1 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:06 compute-1 nova_compute[226101]: 2025-12-06 07:01:06.410 226109 DEBUG oslo_concurrency.processutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/a8fc3519f21f4e15a0cbec02f9e38772.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 07:01:06 compute-1 nova_compute[226101]: 2025-12-06 07:01:06.410 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Creating directory /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Dec 06 07:01:06 compute-1 nova_compute[226101]: 2025-12-06 07:01:06.411 226109 DEBUG oslo_concurrency.processutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:06.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:06 compute-1 nova_compute[226101]: 2025-12-06 07:01:06.626 226109 DEBUG oslo_concurrency.processutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
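[editor's note] The processutils entries from 07:01:05 to 07:01:06 show migrate_disk_and_power_off probing the destination host over ssh: the `touch` of a marker file fails with rc 1 because the instance directory does not exist there yet, so remotefs falls back to `mkdir -p`. A sketch of that probe-then-create pattern; the host and directory come from the log, while the marker filename here is hypothetical:

    import subprocess

    def ensure_remote_dir(host, path):
        marker = f"{path}/.probe.tmp"         # hypothetical marker name
        probe = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", host, "touch", marker])
        if probe.returncode != 0:             # directory missing remotely
            subprocess.run(
                ["ssh", "-o", "BatchMode=yes", host, "mkdir", "-p", path],
                check=True)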
Dec 06 07:01:06 compute-1 nova_compute[226101]: 2025-12-06 07:01:06.630 226109 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:01:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:06.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:06 compute-1 ceph-mon[81689]: pgmap v1201: 305 pgs: 305 active+clean; 323 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 200 op/s
Dec 06 07:01:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/347649379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:08 compute-1 nova_compute[226101]: 2025-12-06 07:01:08.228 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:08.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:08 compute-1 nova_compute[226101]: 2025-12-06 07:01:08.542 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:01:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2183247011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:01:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:01:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2183247011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:01:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:08.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:09 compute-1 ceph-mon[81689]: pgmap v1202: 305 pgs: 305 active+clean; 331 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Dec 06 07:01:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2183247011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:01:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2183247011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:01:09 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec 06 07:01:09 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000f.scope: Consumed 13.657s CPU time.
Dec 06 07:01:09 compute-1 systemd-machined[190302]: Machine qemu-6-instance-0000000f terminated.
Dec 06 07:01:09 compute-1 nova_compute[226101]: 2025-12-06 07:01:09.645 226109 INFO nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance shutdown successfully after 3 seconds.
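[editor's note] The entries from "Shutting down instance from state 1" (07:01:06) through "Instance shutdown successfully after 3 seconds" trace the graceful path of _clean_shutdown: request a guest power-off, then poll until the domain disappears or a timeout expires. A sketch of that wait loop, where `guest` is a hypothetical object exposing shutdown() and is_active():

    import time

    def clean_shutdown(guest, timeout=60):
        guest.shutdown()                      # graceful power-off request
        for waited in range(timeout):
            if not guest.is_active():
                print(f"Instance shutdown successfully after {waited} seconds.")
                return True
            time.sleep(1)
        return False                          # caller falls back to destroy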
Dec 06 07:01:09 compute-1 nova_compute[226101]: 2025-12-06 07:01:09.650 226109 INFO nova.virt.libvirt.driver [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance destroyed successfully.
Dec 06 07:01:09 compute-1 nova_compute[226101]: 2025-12-06 07:01:09.655 226109 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:01:09 compute-1 nova_compute[226101]: 2025-12-06 07:01:09.656 226109 DEBUG nova.virt.libvirt.driver [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:01:10 compute-1 nova_compute[226101]: 2025-12-06 07:01:10.028 226109 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "8818a36b-f8ca-411f-9037-85036a64a941-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:10 compute-1 nova_compute[226101]: 2025-12-06 07:01:10.029 226109 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "8818a36b-f8ca-411f-9037-85036a64a941-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:10 compute-1 nova_compute[226101]: 2025-12-06 07:01:10.029 226109 DEBUG oslo_concurrency.lockutils [None req-a5e4b5dd-27d7-4e35-b18c-720c4b7c81fb 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "8818a36b-f8ca-411f-9037-85036a64a941-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:10 compute-1 ceph-mon[81689]: pgmap v1203: 305 pgs: 305 active+clean; 331 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 357 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Dec 06 07:01:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:10.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:10.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:12.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:12 compute-1 ceph-mon[81689]: pgmap v1204: 305 pgs: 305 active+clean; 331 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 370 KiB/s rd, 3.9 MiB/s wr, 153 op/s
Dec 06 07:01:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:12.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:13 compute-1 nova_compute[226101]: 2025-12-06 07:01:13.230 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:13 compute-1 nova_compute[226101]: 2025-12-06 07:01:13.543 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/698066887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2348888828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3837711917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:14.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:14.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e163 e163: 3 total, 3 up, 3 in
Dec 06 07:01:15 compute-1 ceph-mon[81689]: pgmap v1205: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 340 KiB/s rd, 1.4 MiB/s wr, 137 op/s
Dec 06 07:01:15 compute-1 ceph-mon[81689]: osdmap e163: 3 total, 3 up, 3 in
Dec 06 07:01:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:16.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:16 compute-1 ceph-mon[81689]: pgmap v1207: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 222 KiB/s rd, 590 KiB/s wr, 94 op/s
Dec 06 07:01:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1114402516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:16.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4001666700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:18 compute-1 nova_compute[226101]: 2025-12-06 07:01:18.274 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:18.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:18 compute-1 nova_compute[226101]: 2025-12-06 07:01:18.545 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:18 compute-1 sudo[233746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:18 compute-1 sudo[233746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:18 compute-1 sudo[233746]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:18 compute-1 sudo[233777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:01:18 compute-1 sudo[233777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:18 compute-1 sudo[233777]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:18 compute-1 sudo[233816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:18 compute-1 sudo[233816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:18 compute-1 podman[233770]: 2025-12-06 07:01:18.674830322 +0000 UTC m=+0.107978306 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:01:18 compute-1 sudo[233816]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:18.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:18 compute-1 sudo[233844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:01:18 compute-1 sudo[233844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:19 compute-1 sudo[233844]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:19 compute-1 ceph-mon[81689]: pgmap v1208: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 45 KiB/s wr, 103 op/s
Dec 06 07:01:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:20.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:20.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:21 compute-1 ceph-mon[81689]: pgmap v1209: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 45 KiB/s wr, 103 op/s
Dec 06 07:01:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:22.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:22.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:23 compute-1 ceph-mon[81689]: pgmap v1210: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 19 KiB/s wr, 244 op/s
Dec 06 07:01:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:01:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:01:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:23 compute-1 nova_compute[226101]: 2025-12-06 07:01:23.199 226109 INFO nova.compute.manager [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Swapping old allocation on dict_keys(['466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83']) held by migration 9dc19b80-b16f-48ae-b0fd-0e7051a28cdf for instance
Dec 06 07:01:23 compute-1 nova_compute[226101]: 2025-12-06 07:01:23.242 226109 DEBUG nova.scheduler.client.report [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Overwriting current allocation {'allocations': {'6d00757a-082f-486d-ae84-869a2ba2e6e7': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 11}}, 'project_id': 'fc6c493097a84d069d178020ca398a25', 'user_id': '538aa592cfb04958ab11223ed2d98106', 'consumer_generation': 1} on consumer 8818a36b-f8ca-411f-9037-85036a64a941 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
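The "Overwriting current allocation ... on consumer" line is nova's report client re-pointing the instance's placement allocations at the source node while the resize is reverted. A hedged sketch of the underlying REST call, a PUT to /allocations/{consumer_uuid}; the endpoint and token are placeholders, and only the UUIDs and resource figures come from the log line above:

    import requests

    PLACEMENT = "http://placement.example.com"  # hypothetical endpoint
    TOKEN = "..."                               # hypothetical keystone token
    consumer = "8818a36b-f8ca-411f-9037-85036a64a941"

    payload = {
        "allocations": {
            # resource provider holding the allocation, from the log line
            "6d00757a-082f-486d-ae84-869a2ba2e6e7": {
                "resources": {"DISK_GB": 1, "MEMORY_MB": 192, "VCPU": 1},
            },
        },
        "project_id": "fc6c493097a84d069d178020ca398a25",
        "user_id": "538aa592cfb04958ab11223ed2d98106",
        # consumer_generation guards against concurrent writers
        # (available from placement microversion 1.28 on).
        "consumer_generation": 1,
    }
    resp = requests.put(f"{PLACEMENT}/allocations/{consumer}", json=payload,
                        headers={"X-Auth-Token": TOKEN,
                                 "OpenStack-API-Version": "placement 1.28"})
    resp.raise_for_status()  # placement answers 204 No Content on success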
Dec 06 07:01:23 compute-1 nova_compute[226101]: 2025-12-06 07:01:23.277 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:23 compute-1 nova_compute[226101]: 2025-12-06 07:01:23.527 226109 DEBUG oslo_concurrency.lockutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:01:23 compute-1 nova_compute[226101]: 2025-12-06 07:01:23.528 226109 DEBUG oslo_concurrency.lockutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:01:23 compute-1 nova_compute[226101]: 2025-12-06 07:01:23.528 226109 DEBUG nova.network.neutron [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:01:23 compute-1 nova_compute[226101]: 2025-12-06 07:01:23.546 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:23 compute-1 nova_compute[226101]: 2025-12-06 07:01:23.865 226109 DEBUG nova.network.neutron [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:01:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:23.874 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:01:23 compute-1 nova_compute[226101]: 2025-12-06 07:01:23.874 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:23.874 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
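The "Matched UPDATE: SbGlobalUpdateEvent" line shows an ovsdbapp row event firing on the SB_Global table. A sketch of how such an event class is declared, assuming only the tuple printed in the log (events=('update',), table='SB_Global', no conditions); the handler body is illustrative, not the agent's actual code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='SB_Global', conditions=None --
            # exactly what the journal line above reports.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.event_name = self.__class__.__name__

        def run(self, event, row, old):
            # row carries the new columns (nb_cfg=7 above); old carries the
            # previous values of whatever changed (nb_cfg=6 above).
            print("nb_cfg moved %s -> %s" % (getattr(old, 'nb_cfg', '?'), row.nb_cfg))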
Dec 06 07:01:24 compute-1 nova_compute[226101]: 2025-12-06 07:01:24.320 226109 DEBUG nova.network.neutron [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:01:24 compute-1 nova_compute[226101]: 2025-12-06 07:01:24.339 226109 DEBUG oslo_concurrency.lockutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
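The Acquiring/Acquired/Releasing triple around "refresh_cache-<uuid>" is oslo.concurrency's named-lock helper serializing cache refreshes for one instance. A minimal sketch of the same pattern, with the lock name taken from the log and the body standing in for the real refresh:

    from oslo_concurrency import lockutils

    instance_uuid = "8818a36b-f8ca-411f-9037-85036a64a941"
    with lockutils.lock("refresh_cache-%s" % instance_uuid):
        # only one thread per service rebuilds this instance's
        # network info cache at a time
        pass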
Dec 06 07:01:24 compute-1 nova_compute[226101]: 2025-12-06 07:01:24.340 226109 DEBUG nova.virt.libvirt.driver [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Dec 06 07:01:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:24.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:24.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:24.876 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
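The DbSetCommand above writes external_ids:neutron:ovn-metadata-sb-cfg=7 onto the agent's Chassis_Private row once the delayed update fires. A sketch of the equivalent one-off CLI call; it assumes ovn-sbctl on this host can reach the southbound DB, and the quoting of the colon-bearing key is the part most likely to need adjusting:

    import subprocess

    record = "03fe054d-d727-4af3-9c5e-92e57505f242"  # row UUID from the log
    # The key contains colons, so it is passed double-quoted for the
    # ovsdb parser; with a subprocess list the quotes reach ovn-sbctl intact.
    subprocess.run(["ovn-sbctl", "set", "Chassis_Private", record,
                    'external_ids:"neutron:ovn-metadata-sb-cfg"=7'],
                   check=True)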
Dec 06 07:01:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:26.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:26.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:01:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:01:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:01:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/938090136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
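The df command dispatched for client.openstack is how the OpenStack services poll pool capacity. A sketch of issuing the same mon command from the CLI and reading per-pool usage; it assumes the ceph.conf path and client id shown elsewhere in this log:

    import json
    import subprocess

    raw = subprocess.run(["ceph", "df", "--format=json",
                          "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                         check=True, capture_output=True, text=True).stdout
    for pool in json.loads(raw)["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])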
Dec 06 07:01:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:27 compute-1 nova_compute[226101]: 2025-12-06 07:01:27.179 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004469.5329099, 8818a36b-f8ca-411f-9037-85036a64a941 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:01:27 compute-1 nova_compute[226101]: 2025-12-06 07:01:27.179 226109 INFO nova.compute.manager [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] VM Stopped (Lifecycle Event)
Dec 06 07:01:27 compute-1 nova_compute[226101]: 2025-12-06 07:01:27.485 226109 DEBUG nova.compute.manager [None req-5844aad2-8d71-4f27-9d7d-8a1328523dbf - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:01:27 compute-1 nova_compute[226101]: 2025-12-06 07:01:27.489 226109 DEBUG nova.storage.rbd_utils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rolling back rbd image(8818a36b-f8ca-411f-9037-85036a64a941_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Dec 06 07:01:27 compute-1 nova_compute[226101]: 2025-12-06 07:01:27.492 226109 DEBUG nova.compute.manager [None req-5844aad2-8d71-4f27-9d7d-8a1328523dbf - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:01:27 compute-1 nova_compute[226101]: 2025-12-06 07:01:27.519 226109 INFO nova.compute.manager [None req-5844aad2-8d71-4f27-9d7d-8a1328523dbf - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Dec 06 07:01:28 compute-1 podman[233936]: 2025-12-06 07:01:28.087370448 +0000 UTC m=+0.071694816 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:01:28 compute-1 podman[233937]: 2025-12-06 07:01:28.088388916 +0000 UTC m=+0.065641783 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:01:28 compute-1 ceph-mon[81689]: pgmap v1211: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 18 KiB/s wr, 262 op/s
Dec 06 07:01:28 compute-1 ceph-mon[81689]: pgmap v1212: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 17 KiB/s wr, 251 op/s
Dec 06 07:01:28 compute-1 ceph-mon[81689]: pgmap v1213: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 15 KiB/s wr, 229 op/s
Dec 06 07:01:28 compute-1 nova_compute[226101]: 2025-12-06 07:01:28.336 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:28.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:28 compute-1 sudo[233973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:28 compute-1 sudo[233973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:28 compute-1 sudo[233973]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:28 compute-1 nova_compute[226101]: 2025-12-06 07:01:28.500 226109 DEBUG nova.storage.rbd_utils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] removing snapshot(nova-resize) on rbd image(8818a36b-f8ca-411f-9037-85036a64a941_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
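Between these two rbd_utils lines the driver rolled the root disk back to the nova-resize snapshot and then deleted that snapshot. A sketch of the same pair of operations through the python rados/rbd bindings; the pool name "vms" matches the <source> element in the guest XML further down:

    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx, "8818a36b-f8ca-411f-9037-85036a64a941_disk") as image:
                image.rollback_to_snap("nova-resize")  # revert disk contents
                image.remove_snap("nova-resize")       # then drop the snapshot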
Dec 06 07:01:28 compute-1 sudo[234016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:01:28 compute-1 nova_compute[226101]: 2025-12-06 07:01:28.548 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:28 compute-1 sudo[234016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:28 compute-1 sudo[234016]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:28.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:30.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e164 e164: 3 total, 3 up, 3 in
Dec 06 07:01:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:30.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:31 compute-1 ceph-mon[81689]: pgmap v1214: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 341 B/s wr, 191 op/s
Dec 06 07:01:31 compute-1 ceph-mon[81689]: osdmap e164: 3 total, 3 up, 3 in
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.124 226109 DEBUG nova.virt.libvirt.driver [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.128 226109 WARNING nova.virt.libvirt.driver [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.133 226109 DEBUG nova.virt.libvirt.host [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.135 226109 DEBUG nova.virt.libvirt.host [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.137 226109 DEBUG nova.virt.libvirt.host [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.138 226109 DEBUG nova.virt.libvirt.host [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.139 226109 DEBUG nova.virt.libvirt.driver [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.139 226109 DEBUG nova.virt.hardware [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T07:00:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='0042aafa-1545-4d05-86d6-22053e265b25',id=26,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-11178841',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.139 226109 DEBUG nova.virt.hardware [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.140 226109 DEBUG nova.virt.hardware [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.140 226109 DEBUG nova.virt.hardware [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.140 226109 DEBUG nova.virt.hardware [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.140 226109 DEBUG nova.virt.hardware [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.140 226109 DEBUG nova.virt.hardware [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.141 226109 DEBUG nova.virt.hardware [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.141 226109 DEBUG nova.virt.hardware [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.141 226109 DEBUG nova.virt.hardware [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.141 226109 DEBUG nova.virt.hardware [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
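The hardware.py lines above reduce to a small search: every sockets*cores*threads factorization of the vCPU count that fits the limits, sorted against the preferred topology. A toy version of that enumeration (nova's real implementation also handles NUMA constraints and thread ordering, which this skips); for vcpus=1 it collapses to the single (1, 1, 1) answer logged:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log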
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.141 226109 DEBUG nova.objects.instance [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.213 226109 DEBUG oslo_concurrency.processutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:01:31 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1034660562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.708 226109 DEBUG oslo_concurrency.processutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:31 compute-1 nova_compute[226101]: 2025-12-06 07:01:31.737 226109 DEBUG oslo_concurrency.processutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:01:32 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/573904402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.193 226109 DEBUG oslo_concurrency.processutils [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
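The two "ceph mon dump" subprocess runs above feed the monitor list that becomes the <host> elements in the domain XML that follows. A sketch using the same oslo.concurrency helper the log names; the "addr" field of each mon is what ends up as the host/port pairs:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    print([m["addr"] for m in json.loads(out)["mons"]])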
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.197 226109 DEBUG nova.virt.libvirt.driver [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:01:32 compute-1 nova_compute[226101]:   <uuid>8818a36b-f8ca-411f-9037-85036a64a941</uuid>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   <name>instance-0000000f</name>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <nova:name>tempest-MigrationsAdminTest-server-1896797625</nova:name>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:01:31</nova:creationTime>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <nova:flavor name="tempest-test_resize_flavor_-11178841">
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <nova:user uuid="538aa592cfb04958ab11223ed2d98106">tempest-MigrationsAdminTest-541331030-project-member</nova:user>
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <nova:project uuid="fc6c493097a84d069d178020ca398a25">tempest-MigrationsAdminTest-541331030</nova:project>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <system>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <entry name="serial">8818a36b-f8ca-411f-9037-85036a64a941</entry>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <entry name="uuid">8818a36b-f8ca-411f-9037-85036a64a941</entry>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     </system>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   <os>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   </os>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   <features>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   </features>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8818a36b-f8ca-411f-9037-85036a64a941_disk">
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       </source>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8818a36b-f8ca-411f-9037-85036a64a941_disk.config">
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       </source>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:01:32 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941/console.log" append="off"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <video>
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     </video>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <input type="keyboard" bus="usb"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:01:32 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:01:32 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:01:32 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:01:32 compute-1 nova_compute[226101]: </domain>
Dec 06 07:01:32 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
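Once _get_guest_xml returns, the driver hands this XML to libvirt, which is why systemd-machined registers qemu-7-instance-0000000f a few lines below. A minimal sketch of that hand-off with the libvirt python binding; the XML file path is hypothetical and stands in for the document printed above:

    import libvirt

    # hypothetical file holding the <domain> XML printed above
    xml = open("/tmp/instance-0000000f.xml").read()
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)  # persistent definition
        dom.create()               # boot the guest
    finally:
        conn.close()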
Dec 06 07:01:32 compute-1 ceph-mon[81689]: pgmap v1216: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 615 KiB/s rd, 102 B/s wr, 78 op/s
Dec 06 07:01:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1034660562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/573904402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:32 compute-1 systemd-machined[190302]: New machine qemu-7-instance-0000000f.
Dec 06 07:01:32 compute-1 systemd[1]: Started Virtual Machine qemu-7-instance-0000000f.
Dec 06 07:01:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:32.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:32.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.831 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004492.8311598, 8818a36b-f8ca-411f-9037-85036a64a941 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.832 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] VM Resumed (Lifecycle Event)
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.836 226109 DEBUG nova.compute.manager [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.839 226109 INFO nova.virt.libvirt.driver [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance running successfully.
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.841 226109 DEBUG nova.virt.libvirt.driver [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.864 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.867 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.923 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.924 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004492.8324697, 8818a36b-f8ca-411f-9037-85036a64a941 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:01:32 compute-1 nova_compute[226101]: 2025-12-06 07:01:32.924 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] VM Started (Lifecycle Event)
Dec 06 07:01:33 compute-1 nova_compute[226101]: 2025-12-06 07:01:33.019 226109 INFO nova.compute.manager [None req-26a485bc-18e9-4d48-b72c-3ef42ef584a2 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Updating instance to original state: 'active'
Dec 06 07:01:33 compute-1 nova_compute[226101]: 2025-12-06 07:01:33.026 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:01:33 compute-1 nova_compute[226101]: 2025-12-06 07:01:33.029 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:01:33 compute-1 nova_compute[226101]: 2025-12-06 07:01:33.316 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Dec 06 07:01:33 compute-1 nova_compute[226101]: 2025-12-06 07:01:33.338 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:33 compute-1 nova_compute[226101]: 2025-12-06 07:01:33.550 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:34.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:34.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:34 compute-1 ceph-mon[81689]: pgmap v1217: 305 pgs: 305 active+clean; 335 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 394 KiB/s wr, 75 op/s
Dec 06 07:01:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1559190735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:36.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:36.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:37 compute-1 ceph-mon[81689]: pgmap v1218: 305 pgs: 305 active+clean; 340 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 137 op/s
Dec 06 07:01:38 compute-1 nova_compute[226101]: 2025-12-06 07:01:38.340 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:38.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:38 compute-1 nova_compute[226101]: 2025-12-06 07:01:38.551 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:38.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:39 compute-1 ceph-mon[81689]: pgmap v1219: 305 pgs: 305 active+clean; 358 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 223 op/s
Dec 06 07:01:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3772612913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:39 compute-1 nova_compute[226101]: 2025-12-06 07:01:39.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:40.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:40.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:41 compute-1 ceph-mon[81689]: pgmap v1220: 305 pgs: 305 active+clean; 358 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 223 op/s
Dec 06 07:01:41 compute-1 nova_compute[226101]: 2025-12-06 07:01:41.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:41 compute-1 nova_compute[226101]: 2025-12-06 07:01:41.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:01:41 compute-1 nova_compute[226101]: 2025-12-06 07:01:41.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:01:41 compute-1 nova_compute[226101]: 2025-12-06 07:01:41.794 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:01:41 compute-1 nova_compute[226101]: 2025-12-06 07:01:41.795 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:01:41 compute-1 nova_compute[226101]: 2025-12-06 07:01:41.795 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:01:41 compute-1 nova_compute[226101]: 2025-12-06 07:01:41.795 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:41 compute-1 nova_compute[226101]: 2025-12-06 07:01:41.945 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:01:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e165 e165: 3 total, 3 up, 3 in
Dec 06 07:01:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:42 compute-1 nova_compute[226101]: 2025-12-06 07:01:42.295 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:01:42 compute-1 nova_compute[226101]: 2025-12-06 07:01:42.308 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:01:42 compute-1 nova_compute[226101]: 2025-12-06 07:01:42.309 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:01:42 compute-1 nova_compute[226101]: 2025-12-06 07:01:42.309 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:42 compute-1 nova_compute[226101]: 2025-12-06 07:01:42.309 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:01:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:42.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:42 compute-1 nova_compute[226101]: 2025-12-06 07:01:42.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:42 compute-1 nova_compute[226101]: 2025-12-06 07:01:42.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:42.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:42 compute-1 ceph-mon[81689]: pgmap v1221: 305 pgs: 305 active+clean; 401 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.1 MiB/s wr, 235 op/s
Dec 06 07:01:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1749293316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2910208125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:42 compute-1 ceph-mon[81689]: osdmap e165: 3 total, 3 up, 3 in
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.288 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.288 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.305 226109 DEBUG nova.compute.manager [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.341 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.382 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.383 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.388 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.388 226109 INFO nova.compute.claims [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.539 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.559 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:43 compute-1 nova_compute[226101]: 2025-12-06 07:01:43.613 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:44.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:44.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1607848207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/982138029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1956691532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:01:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1242976262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:45 compute-1 nova_compute[226101]: 2025-12-06 07:01:45.875 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:45 compute-1 nova_compute[226101]: 2025-12-06 07:01:45.881 226109 DEBUG nova.compute.provider_tree [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:01:45 compute-1 nova_compute[226101]: 2025-12-06 07:01:45.900 226109 DEBUG nova.scheduler.client.report [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:01:45 compute-1 nova_compute[226101]: 2025-12-06 07:01:45.937 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:45 compute-1 nova_compute[226101]: 2025-12-06 07:01:45.938 226109 DEBUG nova.compute.manager [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:01:45 compute-1 nova_compute[226101]: 2025-12-06 07:01:45.942 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 2.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:45 compute-1 nova_compute[226101]: 2025-12-06 07:01:45.942 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:45 compute-1 nova_compute[226101]: 2025-12-06 07:01:45.942 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:01:45 compute-1 nova_compute[226101]: 2025-12-06 07:01:45.942 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:46 compute-1 nova_compute[226101]: 2025-12-06 07:01:46.004 226109 DEBUG nova.compute.manager [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:01:46 compute-1 nova_compute[226101]: 2025-12-06 07:01:46.004 226109 DEBUG nova.network.neutron [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:01:46 compute-1 nova_compute[226101]: 2025-12-06 07:01:46.023 226109 INFO nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:01:46 compute-1 nova_compute[226101]: 2025-12-06 07:01:46.042 226109 DEBUG nova.compute.manager [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:01:46 compute-1 nova_compute[226101]: 2025-12-06 07:01:46.124 226109 DEBUG nova.compute.manager [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:01:46 compute-1 nova_compute[226101]: 2025-12-06 07:01:46.125 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:01:46 compute-1 nova_compute[226101]: 2025-12-06 07:01:46.126 226109 INFO nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Creating image(s)
Dec 06 07:01:46 compute-1 ceph-mon[81689]: pgmap v1223: 305 pgs: 305 active+clean; 426 MiB data, 500 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.8 MiB/s wr, 207 op/s
Dec 06 07:01:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1879664780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2971980504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1355878612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2755476702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1242976262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:46.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:46.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:47 compute-1 nova_compute[226101]: 2025-12-06 07:01:47.142 226109 DEBUG nova.storage.rbd_utils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:47 compute-1 nova_compute[226101]: 2025-12-06 07:01:47.171 226109 DEBUG nova.storage.rbd_utils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:01:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3194696246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:48 compute-1 ceph-mon[81689]: pgmap v1224: 305 pgs: 305 active+clean; 457 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 182 op/s
Dec 06 07:01:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:48.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.683 226109 DEBUG nova.storage.rbd_utils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.687 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.714 226109 DEBUG nova.policy [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3cae056210a400fa5e3495fe827d29a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.718 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.722 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.779s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:48.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.764 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.765 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.765 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.766 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:48 compute-1 podman[234261]: 2025-12-06 07:01:48.85575066 +0000 UTC m=+0.082170539 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.925 226109 DEBUG nova.storage.rbd_utils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.929 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.993 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:01:48 compute-1 nova_compute[226101]: 2025-12-06 07:01:48.993 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:01:49 compute-1 ceph-mon[81689]: pgmap v1225: 305 pgs: 305 active+clean; 466 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 178 KiB/s rd, 4.8 MiB/s wr, 98 op/s
Dec 06 07:01:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3194696246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:49 compute-1 nova_compute[226101]: 2025-12-06 07:01:49.139 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:01:49 compute-1 nova_compute[226101]: 2025-12-06 07:01:49.140 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4785MB free_disk=20.764747619628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:01:49 compute-1 nova_compute[226101]: 2025-12-06 07:01:49.140 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:49 compute-1 nova_compute[226101]: 2025-12-06 07:01:49.141 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:49 compute-1 nova_compute[226101]: 2025-12-06 07:01:49.453 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 8818a36b-f8ca-411f-9037-85036a64a941 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:01:49 compute-1 nova_compute[226101]: 2025-12-06 07:01:49.453 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 09a05ccc-abca-47d8-8e32-6e53adb95d4d actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:01:49 compute-1 nova_compute[226101]: 2025-12-06 07:01:49.453 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:01:49 compute-1 nova_compute[226101]: 2025-12-06 07:01:49.454 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:01:49 compute-1 nova_compute[226101]: 2025-12-06 07:01:49.532 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:01:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/135893021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:49 compute-1 nova_compute[226101]: 2025-12-06 07:01:49.969 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:49 compute-1 nova_compute[226101]: 2025-12-06 07:01:49.975 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:01:50 compute-1 nova_compute[226101]: 2025-12-06 07:01:50.006 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:01:50 compute-1 ceph-mon[81689]: pgmap v1226: 305 pgs: 305 active+clean; 466 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 178 KiB/s rd, 4.8 MiB/s wr, 98 op/s
Dec 06 07:01:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/135893021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:50 compute-1 nova_compute[226101]: 2025-12-06 07:01:50.286 226109 DEBUG nova.network.neutron [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Successfully created port: 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:01:50 compute-1 nova_compute[226101]: 2025-12-06 07:01:50.337 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:01:50 compute-1 nova_compute[226101]: 2025-12-06 07:01:50.337 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:50.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:50.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:51 compute-1 nova_compute[226101]: 2025-12-06 07:01:51.392 226109 DEBUG nova.network.neutron [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Successfully updated port: 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:01:51 compute-1 nova_compute[226101]: 2025-12-06 07:01:51.415 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:51 compute-1 nova_compute[226101]: 2025-12-06 07:01:51.466 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "refresh_cache-09a05ccc-abca-47d8-8e32-6e53adb95d4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:01:51 compute-1 nova_compute[226101]: 2025-12-06 07:01:51.467 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquired lock "refresh_cache-09a05ccc-abca-47d8-8e32-6e53adb95d4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:01:51 compute-1 nova_compute[226101]: 2025-12-06 07:01:51.467 226109 DEBUG nova.network.neutron [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:01:51 compute-1 nova_compute[226101]: 2025-12-06 07:01:51.472 226109 DEBUG nova.storage.rbd_utils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] resizing rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:01:51 compute-1 nova_compute[226101]: 2025-12-06 07:01:51.975 226109 DEBUG nova.objects.instance [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'migration_context' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:52 compute-1 nova_compute[226101]: 2025-12-06 07:01:52.061 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:01:52 compute-1 nova_compute[226101]: 2025-12-06 07:01:52.062 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Ensure instance console log exists: /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:01:52 compute-1 nova_compute[226101]: 2025-12-06 07:01:52.062 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:52 compute-1 nova_compute[226101]: 2025-12-06 07:01:52.062 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:52 compute-1 nova_compute[226101]: 2025-12-06 07:01:52.063 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:52 compute-1 nova_compute[226101]: 2025-12-06 07:01:52.206 226109 DEBUG nova.network.neutron [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:01:52 compute-1 nova_compute[226101]: 2025-12-06 07:01:52.337 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:52 compute-1 nova_compute[226101]: 2025-12-06 07:01:52.338 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:52.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:52 compute-1 ceph-mon[81689]: pgmap v1227: 305 pgs: 305 active+clean; 503 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.6 MiB/s wr, 170 op/s
Dec 06 07:01:52 compute-1 nova_compute[226101]: 2025-12-06 07:01:52.614 226109 DEBUG nova.compute.manager [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-changed-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:01:52 compute-1 nova_compute[226101]: 2025-12-06 07:01:52.614 226109 DEBUG nova.compute.manager [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Refreshing instance network info cache due to event network-changed-0e3b84b1-ee05-4dbf-a372-b2a404592bf9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:01:52 compute-1 nova_compute[226101]: 2025-12-06 07:01:52.614 226109 DEBUG oslo_concurrency.lockutils [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-09a05ccc-abca-47d8-8e32-6e53adb95d4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:01:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:52.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.344 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.657 226109 DEBUG nova.network.neutron [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Updating instance_info_cache with network_info: [{"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.721 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2352722957' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.807 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Releasing lock "refresh_cache-09a05ccc-abca-47d8-8e32-6e53adb95d4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.807 226109 DEBUG nova.compute.manager [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance network_info: |[{"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.808 226109 DEBUG oslo_concurrency.lockutils [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-09a05ccc-abca-47d8-8e32-6e53adb95d4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.808 226109 DEBUG nova.network.neutron [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Refreshing network info cache for port 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.812 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Start _get_guest_xml network_info=[{"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.815 226109 WARNING nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.820 226109 DEBUG nova.virt.libvirt.host [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.821 226109 DEBUG nova.virt.libvirt.host [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.824 226109 DEBUG nova.virt.libvirt.host [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.824 226109 DEBUG nova.virt.libvirt.host [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.825 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU mode 'custom' model 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.826 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.826 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.826 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.827 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.827 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.827 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.827 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.828 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.828 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.828 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.828 226109 DEBUG nova.virt.hardware [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
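[Editor's note] The sequence above shows topology selection for the 1-vCPU m1.nano flavor: preferences of 0:0:0 mean "unset" and the limits default to 65536, so the enumeration collapses to the single 1:1:1 topology. A minimal sketch of the enumeration idea, not nova.virt.hardware's actual implementation:

```python
# Sketch: factor a vCPU count into (sockets, cores, threads) triples within
# the limits, mirroring the "Build topologies ... Got 1 possible topologies"
# lines above. Illustrative only.
from collections import namedtuple

VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    topos.append(VirtCPUTopology(s, c, t))
    return topos

# For vcpus=1 this yields exactly one topology, matching the log:
# [VirtCPUTopology(sockets=1, cores=1, threads=1)]
print(possible_topologies(1))
```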
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.831 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:53 compute-1 nova_compute[226101]: 2025-12-06 07:01:53.915 226109 DEBUG nova.compute.manager [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.054 226109 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.054 226109 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
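[Editor's note] The "compute_resources" lock bracketing the resize claim (acquired here, released at 07:01:55.469 below after 1.415s held) is an oslo.concurrency lock. A minimal sketch of the decorator form; the lock name comes from the log, the function body is illustrative:

```python
# Sketch of the locking pattern in the two lines above.
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def resize_claim():
    # Runs with the lock held -- hence the paired "acquired ... waited"
    # and "released ... held" messages around the claim.
    pass

resize_claim()
```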
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.178 226109 DEBUG nova.objects.instance [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'pci_requests' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.194 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.194 226109 INFO nova.compute.claims [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.195 226109 DEBUG nova.objects.instance [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'resources' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.207 226109 DEBUG nova.objects.instance [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'numa_topology' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.235 226109 DEBUG nova.objects.instance [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'pci_devices' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.309 226109 INFO nova.compute.resource_tracker [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Updating resource usage from migration bd54e0e3-85e3-4797-b6c2-599f4f8510ad
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.310 226109 DEBUG nova.compute.resource_tracker [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Starting to track incoming migration bd54e0e3-85e3-4797-b6c2-599f4f8510ad with flavor 25848a18-11d9-4f11-80b5-5d005675c76d _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Dec 06 07:01:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:01:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1737184725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.386 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
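[Editor's note] The mon dump that just returned is how the driver learns the monitor endpoints that reappear as <host> elements in the guest disk XML below. A sketch that reruns the logged command and extracts them; the JSON field names ("mons", "addr") are an assumption about the ceph output format, not a guaranteed schema:

```python
# Sketch: run the same "ceph mon dump" command the log shows and print the
# monitor addresses later embedded in the RBD <source> elements.
import json
import subprocess

out = subprocess.check_output([
    "ceph", "mon", "dump", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
])
mon_map = json.loads(out)
for mon in mon_map["mons"]:
    print(mon["name"], mon["addr"])  # e.g. "a 192.168.122.100:6789/0"
```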
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.416 226109 DEBUG nova.storage.rbd_utils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.422 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.491 226109 DEBUG oslo_concurrency.processutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:54.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:54 compute-1 ceph-mon[81689]: pgmap v1228: 305 pgs: 305 active+clean; 511 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 176 op/s
Dec 06 07:01:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2918283785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1737184725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:54.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:01:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/769131874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.880 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.882 226109 DEBUG nova.virt.libvirt.vif [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1679126586',display_name='tempest-ServersAdminTestJSON-server-1679126586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1679126586',id=19,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ujgtrz05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:01:46Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=09a05ccc-abca-47d8-8e32-6e53adb95d4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.883 226109 DEBUG nova.network.os_vif_util [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.884 226109 DEBUG nova.network.os_vif_util [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.886 226109 DEBUG nova.objects.instance [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'pci_devices' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:01:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/763879712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.952 226109 DEBUG oslo_concurrency.processutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:54 compute-1 nova_compute[226101]: 2025-12-06 07:01:54.957 226109 DEBUG nova.compute.provider_tree [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.017 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:01:55 compute-1 nova_compute[226101]:   <uuid>09a05ccc-abca-47d8-8e32-6e53adb95d4d</uuid>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   <name>instance-00000013</name>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <nova:name>tempest-ServersAdminTestJSON-server-1679126586</nova:name>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:01:53</nova:creationTime>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <nova:user uuid="a3cae056210a400fa5e3495fe827d29a">tempest-ServersAdminTestJSON-1902776367-project-member</nova:user>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <nova:project uuid="b6179a8b65c2484eb7ca1e068d93a58c">tempest-ServersAdminTestJSON-1902776367</nova:project>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <nova:port uuid="0e3b84b1-ee05-4dbf-a372-b2a404592bf9">
Dec 06 07:01:55 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <system>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <entry name="serial">09a05ccc-abca-47d8-8e32-6e53adb95d4d</entry>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <entry name="uuid">09a05ccc-abca-47d8-8e32-6e53adb95d4d</entry>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     </system>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   <os>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   </os>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   <features>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   </features>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk">
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       </source>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config">
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       </source>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:01:55 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:7b:2f:12"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <target dev="tap0e3b84b1-ee"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/console.log" append="off"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <video>
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     </video>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:01:55 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:01:55 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:01:55 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:01:55 compute-1 nova_compute[226101]: </domain>
Dec 06 07:01:55 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.019 226109 DEBUG nova.compute.manager [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Preparing to wait for external event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.020 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.020 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.020 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.021 226109 DEBUG nova.virt.libvirt.vif [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1679126586',display_name='tempest-ServersAdminTestJSON-server-1679126586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1679126586',id=19,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ujgtrz05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:01:46Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=09a05ccc-abca-47d8-8e32-6e53adb95d4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.022 226109 DEBUG nova.network.os_vif_util [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.022 226109 DEBUG nova.network.os_vif_util [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.023 226109 DEBUG os_vif [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.024 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.025 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.025 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.029 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.030 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e3b84b1-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.030 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e3b84b1-ee, col_values=(('external_ids', {'iface-id': '0e3b84b1-ee05-4dbf-a372-b2a404592bf9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:2f:12', 'vm-uuid': '09a05ccc-abca-47d8-8e32-6e53adb95d4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
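[Editor's note] The two ovsdbapp transactions above (idempotent bridge/port creation, then external_ids on the Interface row) have direct ovs-vsctl equivalents. A sketch driving them from Python, with all values copied from the log:

```python
# Sketch: CLI equivalents of AddBridgeCommand/AddPortCommand (may_exist=True)
# and DbSetCommand on the Interface record.
import subprocess

PORT = "tap0e3b84b1-ee"
subprocess.check_call([
    "ovs-vsctl", "--may-exist", "add-br", "br-int",
    "--", "set", "Bridge", "br-int", "datapath_type=system",
])
subprocess.check_call([
    "ovs-vsctl", "--may-exist", "add-port", "br-int", PORT,
    "--", "set", "Interface", PORT,
    "external_ids:iface-id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9",
    "external_ids:iface-status=active",
    "external_ids:attached-mac=fa:16:3e:7b:2f:12",
    "external_ids:vm-uuid=09a05ccc-abca-47d8-8e32-6e53adb95d4d",
])
```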
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.061 226109 DEBUG nova.scheduler.client.report [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.076 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:55 compute-1 NetworkManager[49031]: <info>  [1765004515.0770] manager: (tap0e3b84b1-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.079 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.084 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.086 226109 INFO os_vif [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee')
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.469 226109 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.470 226109 INFO nova.compute.manager [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Migrating
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.484 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.484 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.484 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No VIF found with MAC fa:16:3e:7b:2f:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.485 226109 INFO nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Using config drive
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.516 226109 DEBUG nova.storage.rbd_utils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.634 226109 DEBUG nova.network.neutron [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Updated VIF entry in instance network info cache for port 0e3b84b1-ee05-4dbf-a372-b2a404592bf9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.634 226109 DEBUG nova.network.neutron [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Updating instance_info_cache with network_info: [{"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:01:55 compute-1 nova_compute[226101]: 2025-12-06 07:01:55.752 226109 DEBUG oslo_concurrency.lockutils [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-09a05ccc-abca-47d8-8e32-6e53adb95d4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:01:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/769131874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/763879712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.088 226109 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Creating tmpfile /var/lib/nova/instances/tmp5bh77yv5 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.198 226109 DEBUG nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5bh77yv5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
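[Editor's note] The tmpfile at 07:01:56.088 is a simple shared-storage probe for the live-migration destination check: the source drops a marker file under the instances path and the peer tests whether it can see it. A minimal sketch of the idea, not the driver's exact code:

```python
# Sketch of the shared-storage probe: create a marker file on the source;
# the destination host then checks os.path.exists() on the same path. If it
# is visible there, /var/lib/nova/instances is shared storage.
import os
import tempfile

fd, path = tempfile.mkstemp(dir="/var/lib/nova/instances")
os.close(fd)
print(os.path.basename(path))  # e.g. "tmp5bh77yv5", passed to the peer
```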
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.234 226109 INFO nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Creating config drive at /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.242 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp39o0knq1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.373 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp39o0knq1" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.402 226109 DEBUG nova.storage.rbd_utils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.407 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:56.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.553 226109 DEBUG oslo_concurrency.processutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.554 226109 INFO nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Deleting local config drive /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config because it was imported into RBD.
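[Editor's note] The config-drive sequence just logged (mkisofs, rbd existence check, rbd import, local cleanup) can be replayed with the exact commands from the log. A sketch; /tmp/tmp39o0knq1 is simply the ephemeral staging directory the service happened to use:

```python
# Sketch: build the config-2 ISO, import it into the "vms" pool, then remove
# the local copy ("Deleting local config drive ... imported into RBD").
import os
import subprocess

uuid = "09a05ccc-abca-47d8-8e32-6e53adb95d4d"
iso = f"/var/lib/nova/instances/{uuid}/disk.config"

subprocess.check_call([
    "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
    "-allow-multidot", "-l", "-publisher",
    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmp39o0knq1",
])
subprocess.check_call([
    "rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
    "--image-format=2", "--id", "openstack",
    "--conf", "/etc/ceph/ceph.conf",
])
os.unlink(iso)
```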
Dec 06 07:01:56 compute-1 kernel: tap0e3b84b1-ee: entered promiscuous mode
Dec 06 07:01:56 compute-1 NetworkManager[49031]: <info>  [1765004516.6087] manager: (tap0e3b84b1-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Dec 06 07:01:56 compute-1 ovn_controller[130279]: 2025-12-06T07:01:56Z|00036|binding|INFO|Claiming lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for this chassis.
Dec 06 07:01:56 compute-1 ovn_controller[130279]: 2025-12-06T07:01:56Z|00037|binding|INFO|0e3b84b1-ee05-4dbf-a372-b2a404592bf9: Claiming fa:16:3e:7b:2f:12 10.100.0.5
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.610 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.613 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.617 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:56 compute-1 systemd-udevd[234580]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:01:56 compute-1 systemd-machined[190302]: New machine qemu-8-instance-00000013.
Dec 06 07:01:56 compute-1 NetworkManager[49031]: <info>  [1765004516.6546] device (tap0e3b84b1-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:01:56 compute-1 NetworkManager[49031]: <info>  [1765004516.6558] device (tap0e3b84b1-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:01:56 compute-1 systemd[1]: Started Virtual Machine qemu-8-instance-00000013.
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.682 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:56 compute-1 ovn_controller[130279]: 2025-12-06T07:01:56Z|00038|binding|INFO|Setting lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 ovn-installed in OVS
Dec 06 07:01:56 compute-1 nova_compute[226101]: 2025-12-06 07:01:56.686 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:56 compute-1 ovn_controller[130279]: 2025-12-06T07:01:56Z|00039|binding|INFO|Setting lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 up in Southbound
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.690 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:2f:12 10.100.0.5'], port_security=['fa:16:3e:7b:2f:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '09a05ccc-abca-47d8-8e32-6e53adb95d4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=0e3b84b1-ee05-4dbf-a372-b2a404592bf9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.691 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 in datapath 9451d867-0aba-464d-b4d9-f947b887e903 bound to our chassis
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.693 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.706 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b7205ba3-8cc9-4bdb-8d83-92a593e43a69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.707 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9451d867-01 in ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.708 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9451d867-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.708 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[07683fed-adb5-4884-97d6-29cba9573c59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.709 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6eaee100-d55a-43ae-8fd5-9f5f47ce49d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.723 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[13e0f431-faf1-4211-bd46-7f569bbfb5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.746 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[de529a4f-cc4d-44e5-b5ff-7b8c9ef4ebea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:56.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.771 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[897c3edc-d7ba-48fb-8105-d7e2ea7a82b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.776 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e7a92a-cfee-4dc8-8e16-c2eab49f8345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 NetworkManager[49031]: <info>  [1765004516.7768] manager: (tap9451d867-00): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.815 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7610b822-0879-4ef4-ae57-b1cec56d05ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.818 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c4379d77-b99c-41b2-98cd-e047e09cde2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 NetworkManager[49031]: <info>  [1765004516.8423] device (tap9451d867-00): carrier: link connected
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.846 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[9aeffbf4-c99d-47db-a89b-22f16d6d2d6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.864 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[caba4045-719d-4830-bb3b-5cc299a70a02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479634, 'reachable_time': 43555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234613, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.882 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cede0c-31d1-408b-a11b-964cf542b262]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:45e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479634, 'tstamp': 479634}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234615, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 ceph-mon[81689]: pgmap v1229: 305 pgs: 305 active+clean; 537 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.3 MiB/s wr, 243 op/s
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.902 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fd514809-972c-4c91-85cf-4fb258fb47f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479634, 'reachable_time': 43555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 234616, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:56.942 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[95930b96-2155-407f-a194-903f1e24817c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:57.015 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3d2d2ea8-5e42-4915-9f46-067872c9be57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:57.017 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:57.017 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:57.018 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9451d867-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:57 compute-1 nova_compute[226101]: 2025-12-06 07:01:57.020 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:57 compute-1 NetworkManager[49031]: <info>  [1765004517.0213] manager: (tap9451d867-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Dec 06 07:01:57 compute-1 kernel: tap9451d867-00: entered promiscuous mode
Dec 06 07:01:57 compute-1 nova_compute[226101]: 2025-12-06 07:01:57.022 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:57.025 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9451d867-00, col_values=(('external_ids', {'iface-id': 'fed07814-3a76-4798-8d3b-90759d15a8cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:57 compute-1 ovn_controller[130279]: 2025-12-06T07:01:57Z|00040|binding|INFO|Releasing lport fed07814-3a76-4798-8d3b-90759d15a8cf from this chassis (sb_readonly=0)
Dec 06 07:01:57 compute-1 nova_compute[226101]: 2025-12-06 07:01:57.026 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:57 compute-1 nova_compute[226101]: 2025-12-06 07:01:57.043 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:57.045 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9451d867-0aba-464d-b4d9-f947b887e903.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9451d867-0aba-464d-b4d9-f947b887e903.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:57.046 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dd859df1-2da8-4cc1-82fc-de9f73c5f6d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:57.047 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/9451d867-0aba-464d-b4d9-f947b887e903.pid.haproxy
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:01:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:01:57.049 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'env', 'PROCESS_TAG=haproxy-9451d867-0aba-464d-b4d9-f947b887e903', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9451d867-0aba-464d-b4d9-f947b887e903.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:01:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:57 compute-1 nova_compute[226101]: 2025-12-06 07:01:57.462 226109 DEBUG nova.compute.manager [req-268e00aa-3d2f-42dc-93f6-a9730acabf50 req-7ae15428-3b2b-4991-a7bd-cf78c39f25cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:01:57 compute-1 nova_compute[226101]: 2025-12-06 07:01:57.462 226109 DEBUG oslo_concurrency.lockutils [req-268e00aa-3d2f-42dc-93f6-a9730acabf50 req-7ae15428-3b2b-4991-a7bd-cf78c39f25cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:57 compute-1 nova_compute[226101]: 2025-12-06 07:01:57.463 226109 DEBUG oslo_concurrency.lockutils [req-268e00aa-3d2f-42dc-93f6-a9730acabf50 req-7ae15428-3b2b-4991-a7bd-cf78c39f25cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:57 compute-1 nova_compute[226101]: 2025-12-06 07:01:57.463 226109 DEBUG oslo_concurrency.lockutils [req-268e00aa-3d2f-42dc-93f6-a9730acabf50 req-7ae15428-3b2b-4991-a7bd-cf78c39f25cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:57 compute-1 nova_compute[226101]: 2025-12-06 07:01:57.464 226109 DEBUG nova.compute.manager [req-268e00aa-3d2f-42dc-93f6-a9730acabf50 req-7ae15428-3b2b-4991-a7bd-cf78c39f25cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Processing event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:01:57 compute-1 podman[234668]: 2025-12-06 07:01:57.421674057 +0000 UTC m=+0.028540981 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:01:57 compute-1 sshd-session[234623]: Accepted publickey for nova from 192.168.122.100 port 41344 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:01:57 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Dec 06 07:01:57 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec 06 07:01:57 compute-1 systemd-logind[788]: New session 51 of user nova.
Dec 06 07:01:57 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec 06 07:01:57 compute-1 systemd[1]: Starting User Manager for UID 42436...
Dec 06 07:01:57 compute-1 systemd[234683]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:01:58 compute-1 systemd[234683]: Queued start job for default target Main User Target.
Dec 06 07:01:58 compute-1 systemd[234683]: Created slice User Application Slice.
Dec 06 07:01:58 compute-1 systemd[234683]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:01:58 compute-1 systemd[234683]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 07:01:58 compute-1 systemd[234683]: Reached target Paths.
Dec 06 07:01:58 compute-1 systemd[234683]: Reached target Timers.
Dec 06 07:01:58 compute-1 systemd[234683]: Starting D-Bus User Message Bus Socket...
Dec 06 07:01:58 compute-1 systemd[234683]: Starting Create User's Volatile Files and Directories...
Dec 06 07:01:58 compute-1 systemd[234683]: Finished Create User's Volatile Files and Directories.
Dec 06 07:01:58 compute-1 systemd[234683]: Listening on D-Bus User Message Bus Socket.
Dec 06 07:01:58 compute-1 systemd[234683]: Reached target Sockets.
Dec 06 07:01:58 compute-1 systemd[234683]: Reached target Basic System.
Dec 06 07:01:58 compute-1 systemd[234683]: Reached target Main User Target.
Dec 06 07:01:58 compute-1 systemd[234683]: Startup finished in 129ms.
Dec 06 07:01:58 compute-1 systemd[1]: Started User Manager for UID 42436.
Dec 06 07:01:58 compute-1 systemd[1]: Started Session 51 of User nova.
Dec 06 07:01:58 compute-1 sshd-session[234623]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:01:58 compute-1 sshd-session[234698]: Received disconnect from 192.168.122.100 port 41344:11: disconnected by user
Dec 06 07:01:58 compute-1 sshd-session[234698]: Disconnected from user nova 192.168.122.100 port 41344
Dec 06 07:01:58 compute-1 sshd-session[234623]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:01:58 compute-1 systemd[1]: session-51.scope: Deactivated successfully.
Dec 06 07:01:58 compute-1 systemd-logind[788]: Session 51 logged out. Waiting for processes to exit.
Dec 06 07:01:58 compute-1 systemd-logind[788]: Removed session 51.
Dec 06 07:01:58 compute-1 nova_compute[226101]: 2025-12-06 07:01:58.188 226109 DEBUG nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5bh77yv5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='714f2e5b-135b-4f7e-9c62-3e1849c5e151',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Dec 06 07:01:58 compute-1 podman[234707]: 2025-12-06 07:01:58.247465474 +0000 UTC m=+0.062954160 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:01:58 compute-1 nova_compute[226101]: 2025-12-06 07:01:58.259 226109 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:01:58 compute-1 nova_compute[226101]: 2025-12-06 07:01:58.259 226109 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquired lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:01:58 compute-1 nova_compute[226101]: 2025-12-06 07:01:58.259 226109 DEBUG nova.network.neutron [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:01:58 compute-1 podman[234709]: 2025-12-06 07:01:58.27031287 +0000 UTC m=+0.085744255 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:01:58 compute-1 sshd-session[234708]: Accepted publickey for nova from 192.168.122.100 port 41350 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:01:58 compute-1 systemd-logind[788]: New session 53 of user nova.
Dec 06 07:01:58 compute-1 systemd[1]: Started Session 53 of User nova.
Dec 06 07:01:58 compute-1 sshd-session[234708]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:01:58 compute-1 nova_compute[226101]: 2025-12-06 07:01:58.346 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:58 compute-1 sshd-session[234747]: Received disconnect from 192.168.122.100 port 41350:11: disconnected by user
Dec 06 07:01:58 compute-1 sshd-session[234747]: Disconnected from user nova 192.168.122.100 port 41350
Dec 06 07:01:58 compute-1 sshd-session[234708]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:01:58 compute-1 systemd[1]: session-53.scope: Deactivated successfully.
Dec 06 07:01:58 compute-1 systemd-logind[788]: Session 53 logged out. Waiting for processes to exit.
Dec 06 07:01:58 compute-1 systemd-logind[788]: Removed session 53.
Dec 06 07:01:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:01:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:58.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:01:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:01:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:58.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:59 compute-1 nova_compute[226101]: 2025-12-06 07:01:59.829 226109 DEBUG nova.compute.manager [req-3e2ac350-9278-4435-a95e-7c3909209f55 req-986e63b2-4294-4622-9db0-cb97d4ffdf01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:01:59 compute-1 nova_compute[226101]: 2025-12-06 07:01:59.829 226109 DEBUG oslo_concurrency.lockutils [req-3e2ac350-9278-4435-a95e-7c3909209f55 req-986e63b2-4294-4622-9db0-cb97d4ffdf01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:59 compute-1 nova_compute[226101]: 2025-12-06 07:01:59.829 226109 DEBUG oslo_concurrency.lockutils [req-3e2ac350-9278-4435-a95e-7c3909209f55 req-986e63b2-4294-4622-9db0-cb97d4ffdf01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:59 compute-1 nova_compute[226101]: 2025-12-06 07:01:59.830 226109 DEBUG oslo_concurrency.lockutils [req-3e2ac350-9278-4435-a95e-7c3909209f55 req-986e63b2-4294-4622-9db0-cb97d4ffdf01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:59 compute-1 nova_compute[226101]: 2025-12-06 07:01:59.830 226109 DEBUG nova.compute.manager [req-3e2ac350-9278-4435-a95e-7c3909209f55 req-986e63b2-4294-4622-9db0-cb97d4ffdf01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] No waiting events found dispatching network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:01:59 compute-1 nova_compute[226101]: 2025-12-06 07:01:59.830 226109 WARNING nova.compute.manager [req-3e2ac350-9278-4435-a95e-7c3909209f55 req-986e63b2-4294-4622-9db0-cb97d4ffdf01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received unexpected event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for instance with vm_state building and task_state spawning.
Dec 06 07:02:00 compute-1 nova_compute[226101]: 2025-12-06 07:02:00.118 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:00.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:00.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:01.618 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:01.619 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:01.619 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:01 compute-1 podman[234668]: 2025-12-06 07:02:01.998049924 +0000 UTC m=+4.604916828 container create c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:02:02 compute-1 systemd[1]: Started libpod-conmon-c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93.scope.
Dec 06 07:02:02 compute-1 ceph-mon[81689]: pgmap v1230: 305 pgs: 305 active+clean; 527 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.6 MiB/s wr, 264 op/s
Dec 06 07:02:02 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:02:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70562644c80cfddda4f17cf0e108617f615e580ed262d789f30e61305bfccaef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:02 compute-1 podman[234668]: 2025-12-06 07:02:02.441684666 +0000 UTC m=+5.048551590 container init c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 07:02:02 compute-1 podman[234668]: 2025-12-06 07:02:02.447990856 +0000 UTC m=+5.054857760 container start c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:02:02 compute-1 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[234760]: [NOTICE]   (234764) : New worker (234766) forked
Dec 06 07:02:02 compute-1 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[234760]: [NOTICE]   (234764) : Loading success.
Dec 06 07:02:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:02.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:02.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.826 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004522.8262382, 09a05ccc-abca-47d8-8e32-6e53adb95d4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.827 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] VM Started (Lifecycle Event)
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.829 226109 DEBUG nova.compute.manager [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.833 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.836 226109 INFO nova.virt.libvirt.driver [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance spawned successfully.
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.836 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.910 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.913 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.920 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.920 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.920 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.921 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.921 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.921 226109 DEBUG nova.virt.libvirt.driver [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.992 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.992 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004522.8263423, 09a05ccc-abca-47d8-8e32-6e53adb95d4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:02 compute-1 nova_compute[226101]: 2025-12-06 07:02:02.992 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] VM Paused (Lifecycle Event)
Dec 06 07:02:03 compute-1 nova_compute[226101]: 2025-12-06 07:02:03.023 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:03 compute-1 nova_compute[226101]: 2025-12-06 07:02:03.026 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004522.832022, 09a05ccc-abca-47d8-8e32-6e53adb95d4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:03 compute-1 nova_compute[226101]: 2025-12-06 07:02:03.026 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] VM Resumed (Lifecycle Event)
Dec 06 07:02:03 compute-1 nova_compute[226101]: 2025-12-06 07:02:03.102 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:03 compute-1 nova_compute[226101]: 2025-12-06 07:02:03.105 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:03 compute-1 ceph-mon[81689]: pgmap v1231: 305 pgs: 305 active+clean; 527 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.2 MiB/s wr, 250 op/s
Dec 06 07:02:03 compute-1 ceph-mon[81689]: pgmap v1232: 305 pgs: 305 active+clean; 473 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.2 MiB/s wr, 331 op/s
Dec 06 07:02:03 compute-1 nova_compute[226101]: 2025-12-06 07:02:03.178 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:02:03 compute-1 nova_compute[226101]: 2025-12-06 07:02:03.207 226109 INFO nova.compute.manager [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Took 17.08 seconds to spawn the instance on the hypervisor.
Dec 06 07:02:03 compute-1 nova_compute[226101]: 2025-12-06 07:02:03.208 226109 DEBUG nova.compute.manager [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:03 compute-1 nova_compute[226101]: 2025-12-06 07:02:03.328 226109 INFO nova.compute.manager [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Took 19.98 seconds to build instance.
Dec 06 07:02:03 compute-1 nova_compute[226101]: 2025-12-06 07:02:03.348 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:03 compute-1 nova_compute[226101]: 2025-12-06 07:02:03.370 226109 DEBUG oslo_concurrency.lockutils [None req-e501c515-e0da-42da-bffe-cdb183672980 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.103 226109 DEBUG nova.network.neutron [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Updating instance_info_cache with network_info: [{"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:04.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.735 226109 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Releasing lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.737 226109 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5bh77yv5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='714f2e5b-135b-4f7e-9c62-3e1849c5e151',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.737 226109 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Creating instance directory: /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.738 226109 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Ensure instance console log exists: /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.738 226109 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.739 226109 DEBUG nova.virt.libvirt.vif [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-720825537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-720825537',id=17,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:01:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-od28yehe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:01:50Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=714f2e5b-135b-4f7e-9c62-3e1849c5e151,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.740 226109 DEBUG nova.network.os_vif_util [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converting VIF {"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.740 226109 DEBUG nova.network.os_vif_util [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.740 226109 DEBUG os_vif [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.741 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.741 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.742 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.744 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.744 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ba0fb02-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.745 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ba0fb02-de, col_values=(('external_ids', {'iface-id': '8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:c8:b4', 'vm-uuid': '714f2e5b-135b-4f7e-9c62-3e1849c5e151'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:04 compute-1 NetworkManager[49031]: <info>  [1765004524.7471] manager: (tap8ba0fb02-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.748 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.753 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.754 226109 INFO os_vif [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de')
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.754 226109 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Dec 06 07:02:04 compute-1 nova_compute[226101]: 2025-12-06 07:02:04.755 226109 DEBUG nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5bh77yv5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='714f2e5b-135b-4f7e-9c62-3e1849c5e151',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Dec 06 07:02:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:04.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:05 compute-1 ceph-mon[81689]: pgmap v1233: 305 pgs: 305 active+clean; 475 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.1 MiB/s wr, 256 op/s
Dec 06 07:02:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:06.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:06.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:07 compute-1 ceph-mon[81689]: pgmap v1234: 305 pgs: 305 active+clean; 492 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.5 MiB/s wr, 300 op/s
Dec 06 07:02:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:08 compute-1 ceph-mon[81689]: pgmap v1235: 305 pgs: 305 active+clean; 513 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.7 MiB/s wr, 273 op/s
Dec 06 07:02:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1796991372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:08 compute-1 nova_compute[226101]: 2025-12-06 07:02:08.350 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:08 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Dec 06 07:02:08 compute-1 systemd[234683]: Activating special unit Exit the Session...
Dec 06 07:02:08 compute-1 systemd[234683]: Stopped target Main User Target.
Dec 06 07:02:08 compute-1 systemd[234683]: Stopped target Basic System.
Dec 06 07:02:08 compute-1 systemd[234683]: Stopped target Paths.
Dec 06 07:02:08 compute-1 systemd[234683]: Stopped target Sockets.
Dec 06 07:02:08 compute-1 systemd[234683]: Stopped target Timers.
Dec 06 07:02:08 compute-1 systemd[234683]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:02:08 compute-1 systemd[234683]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 07:02:08 compute-1 systemd[234683]: Closed D-Bus User Message Bus Socket.
Dec 06 07:02:08 compute-1 systemd[234683]: Stopped Create User's Volatile Files and Directories.
Dec 06 07:02:08 compute-1 systemd[234683]: Removed slice User Application Slice.
Dec 06 07:02:08 compute-1 systemd[234683]: Reached target Shutdown.
Dec 06 07:02:08 compute-1 systemd[234683]: Finished Exit the Session.
Dec 06 07:02:08 compute-1 systemd[234683]: Reached target Exit the Session.
Dec 06 07:02:08 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Dec 06 07:02:08 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Dec 06 07:02:08 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec 06 07:02:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:08.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:08 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec 06 07:02:08 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec 06 07:02:08 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec 06 07:02:08 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Dec 06 07:02:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:08.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2399199750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:02:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2399199750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:02:09 compute-1 nova_compute[226101]: 2025-12-06 07:02:09.748 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:10 compute-1 nova_compute[226101]: 2025-12-06 07:02:10.349 226109 DEBUG nova.network.neutron [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Port 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Dec 06 07:02:10 compute-1 nova_compute[226101]: 2025-12-06 07:02:10.351 226109 DEBUG nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5bh77yv5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='714f2e5b-135b-4f7e-9c62-3e1849c5e151',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Dec 06 07:02:10 compute-1 ceph-mon[81689]: pgmap v1236: 305 pgs: 305 active+clean; 513 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.0 MiB/s wr, 217 op/s
Dec 06 07:02:10 compute-1 systemd[1]: Starting libvirt proxy daemon...
Dec 06 07:02:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:10 compute-1 systemd[1]: Started libvirt proxy daemon.
Dec 06 07:02:10 compute-1 kernel: tap8ba0fb02-de: entered promiscuous mode
Dec 06 07:02:10 compute-1 NetworkManager[49031]: <info>  [1765004530.7006] manager: (tap8ba0fb02-de): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec 06 07:02:10 compute-1 nova_compute[226101]: 2025-12-06 07:02:10.703 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:10 compute-1 ovn_controller[130279]: 2025-12-06T07:02:10Z|00041|binding|INFO|Claiming lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for this additional chassis.
Dec 06 07:02:10 compute-1 ovn_controller[130279]: 2025-12-06T07:02:10Z|00042|binding|INFO|8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b: Claiming fa:16:3e:3d:c8:b4 10.100.0.14
Dec 06 07:02:10 compute-1 ovn_controller[130279]: 2025-12-06T07:02:10Z|00043|binding|INFO|Claiming lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 for this additional chassis.
Dec 06 07:02:10 compute-1 ovn_controller[130279]: 2025-12-06T07:02:10Z|00044|binding|INFO|5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4: Claiming fa:16:3e:cb:d9:12 19.80.0.179
Dec 06 07:02:10 compute-1 systemd-machined[190302]: New machine qemu-9-instance-00000011.
Dec 06 07:02:10 compute-1 systemd-udevd[234823]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:02:10 compute-1 NetworkManager[49031]: <info>  [1765004530.7472] device (tap8ba0fb02-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:02:10 compute-1 NetworkManager[49031]: <info>  [1765004530.7478] device (tap8ba0fb02-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:02:10 compute-1 systemd[1]: Started Virtual Machine qemu-9-instance-00000011.
Dec 06 07:02:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:10.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:10 compute-1 nova_compute[226101]: 2025-12-06 07:02:10.802 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:10 compute-1 ovn_controller[130279]: 2025-12-06T07:02:10Z|00045|binding|INFO|Setting lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b ovn-installed in OVS
Dec 06 07:02:10 compute-1 nova_compute[226101]: 2025-12-06 07:02:10.810 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:12 compute-1 nova_compute[226101]: 2025-12-06 07:02:12.226 226109 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:12 compute-1 nova_compute[226101]: 2025-12-06 07:02:12.228 226109 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquired lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:12 compute-1 nova_compute[226101]: 2025-12-06 07:02:12.228 226109 DEBUG nova.network.neutron [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:02:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:12 compute-1 nova_compute[226101]: 2025-12-06 07:02:12.531 226109 DEBUG nova.network.neutron [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:02:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:12.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:12 compute-1 ceph-mon[81689]: pgmap v1237: 305 pgs: 305 active+clean; 551 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.9 MiB/s wr, 318 op/s
Dec 06 07:02:12 compute-1 nova_compute[226101]: 2025-12-06 07:02:12.759 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004532.758942, 714f2e5b-135b-4f7e-9c62-3e1849c5e151 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:12 compute-1 nova_compute[226101]: 2025-12-06 07:02:12.759 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] VM Started (Lifecycle Event)
Dec 06 07:02:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:12.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:12 compute-1 nova_compute[226101]: 2025-12-06 07:02:12.795 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.232 226109 DEBUG nova.network.neutron [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.261 226109 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Releasing lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.336 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004533.336569, 714f2e5b-135b-4f7e-9c62-3e1849c5e151 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.337 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] VM Resumed (Lifecycle Event)
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.352 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.399 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.402 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.407 226109 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.409 226109 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.410 226109 INFO nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Creating image(s)
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.451 226109 DEBUG nova.storage.rbd_utils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] creating snapshot(nova-resize) on rbd image(345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.491 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Dec 06 07:02:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e166 e166: 3 total, 3 up, 3 in
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.799 226109 DEBUG nova.objects.instance [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.967 226109 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.967 226109 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Ensure instance console log exists: /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.968 226109 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.968 226109 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.969 226109 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.970 226109 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.974 226109 WARNING nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.979 226109 DEBUG nova.virt.libvirt.host [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.980 226109 DEBUG nova.virt.libvirt.host [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.983 226109 DEBUG nova.virt.libvirt.host [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.984 226109 DEBUG nova.virt.libvirt.host [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.985 226109 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.986 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.986 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.987 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.987 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.987 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.988 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.988 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.988 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.988 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.989 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.989 226109 DEBUG nova.virt.hardware [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:02:13 compute-1 nova_compute[226101]: 2025-12-06 07:02:13.989 226109 DEBUG nova.objects.instance [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:14 compute-1 nova_compute[226101]: 2025-12-06 07:02:14.042 226109 DEBUG oslo_concurrency.processutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:02:14 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1114765484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:14 compute-1 nova_compute[226101]: 2025-12-06 07:02:14.496 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "5cc7f5c4-dc50-4137-af58-60294ea57208" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:14 compute-1 nova_compute[226101]: 2025-12-06 07:02:14.498 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:14 compute-1 nova_compute[226101]: 2025-12-06 07:02:14.514 226109 DEBUG oslo_concurrency.processutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 07:02:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:14.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 07:02:14 compute-1 nova_compute[226101]: 2025-12-06 07:02:14.547 226109 DEBUG oslo_concurrency.processutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:14 compute-1 nova_compute[226101]: 2025-12-06 07:02:14.752 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:14.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:14 compute-1 nova_compute[226101]: 2025-12-06 07:02:14.789 226109 DEBUG nova.compute.manager [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:02:14 compute-1 ceph-mon[81689]: pgmap v1238: 305 pgs: 305 active+clean; 557 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 6.3 MiB/s wr, 258 op/s
Dec 06 07:02:14 compute-1 ceph-mon[81689]: osdmap e166: 3 total, 3 up, 3 in
Dec 06 07:02:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1114765484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:02:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1398766661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.028 226109 DEBUG oslo_concurrency.processutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.031 226109 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:02:15 compute-1 nova_compute[226101]:   <uuid>345d5d4a-3a34-4809-9ae4-60a579c5e49a</uuid>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   <name>instance-00000012</name>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <nova:name>tempest-MigrationsAdminTest-server-941592718</nova:name>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:02:13</nova:creationTime>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <nova:user uuid="538aa592cfb04958ab11223ed2d98106">tempest-MigrationsAdminTest-541331030-project-member</nova:user>
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <nova:project uuid="fc6c493097a84d069d178020ca398a25">tempest-MigrationsAdminTest-541331030</nova:project>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <system>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <entry name="serial">345d5d4a-3a34-4809-9ae4-60a579c5e49a</entry>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <entry name="uuid">345d5d4a-3a34-4809-9ae4-60a579c5e49a</entry>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     </system>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   <os>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   </os>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   <features>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   </features>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk">
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       </source>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk.config">
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       </source>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:02:15 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/console.log" append="off"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <video>
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     </video>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:02:15 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:02:15 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:02:15 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:02:15 compute-1 nova_compute[226101]: </domain>
Dec 06 07:02:15 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
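
The <metadata> element in the domain XML that _get_guest_xml logged above is Nova's own annotation of the guest (package version, instance name, flavor, owner), namespaced under http://openstack.org/xmlns/libvirt/nova/1.1. As a rough sketch, the same fields can be pulled back out with the standard library; this assumes the XML has been saved to a file such as domain.xml with the journal prefixes stripped, and the helper name is ours:

    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    def nova_metadata(path):
        # Parse the libvirt domain XML and extract the Nova annotations.
        root = ET.parse(path).getroot()
        inst = root.find("metadata/nova:instance", NOVA_NS)
        flavor = inst.find("nova:flavor", NOVA_NS)
        return {
            "name": inst.findtext("nova:name", namespaces=NOVA_NS),
            "flavor": flavor.get("name"),
            "memory_mb": int(flavor.findtext("nova:memory", namespaces=NOVA_NS)),
            "vcpus": int(flavor.findtext("nova:vcpus", namespaces=NOVA_NS)),
            "project": inst.find("nova:owner/nova:project", NOVA_NS).text,
        }

    print(nova_metadata("domain.xml"))
    # {'name': 'tempest-MigrationsAdminTest-server-941592718', 'flavor': 'm1.nano',
    #  'memory_mb': 128, 'vcpus': 1, 'project': 'tempest-MigrationsAdminTest-541331030'}
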
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.036 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.037 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.054 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.054 226109 INFO nova.compute.claims [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.459 226109 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.460 226109 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.461 226109 INFO nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Using config drive
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00046|binding|INFO|Claiming lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for this chassis.
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00047|binding|INFO|8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b: Claiming fa:16:3e:3d:c8:b4 10.100.0.14
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00048|binding|INFO|Claiming lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 for this chassis.
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00049|binding|INFO|5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4: Claiming fa:16:3e:cb:d9:12 19.80.0.179
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00050|binding|INFO|Setting lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b up in Southbound
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00051|binding|INFO|Setting lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 up in Southbound
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.547 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:d9:12 19.80.0.179'], port_security=['fa:16:3e:cb:d9:12 19.80.0.179'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1920716484', 'neutron:cidrs': '19.80.0.179/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1920716484', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c735218e-8fbf-4d82-b453-bc3944800b8e, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.550 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:c8:b4 10.100.0.14'], port_security=['fa:16:3e:3d:c8:b4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1426450350', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '714f2e5b-135b-4f7e-9c62-3e1849c5e151', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1426450350', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '10', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.552 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 in datapath 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed bound to our chassis
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00052|binding|INFO|Claiming lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for this additional chassis.
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.554 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00053|binding|INFO|8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b: Claiming fa:16:3e:3d:c8:b4 10.100.0.14
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00054|binding|INFO|Claiming lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 for this additional chassis.
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00055|binding|INFO|5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4: Claiming fa:16:3e:cb:d9:12 19.80.0.179
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00056|binding|INFO|Removing lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b ovn-installed in OVS
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00057|binding|INFO|Setting lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b down in Southbound
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00058|binding|INFO|Setting lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 down in Southbound
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00059|binding|INFO|Setting lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b ovn-installed in OVS
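
The ovn-controller churn above (claim for this chassis, lports up, then re-claim "for this additional chassis" and lports down again) is consistent with the live migration in progress: with multi-chassis bindings and the rarp activation strategy, the destination holds the port as an additional chassis until the guest activates there. The underlying Port_Binding rows can be inspected directly in the Southbound DB; a minimal sketch shelling out to ovn-sbctl (assuming it is on PATH and can reach the SB DB; the helper is ours, and column values keep OVSDB's JSON encoding for sets and maps):

    import json
    import subprocess

    def port_binding_rows(logical_port):
        # Fetch matching Port_Binding rows from the OVN Southbound DB.
        out = subprocess.check_output(
            ["ovn-sbctl", "--format=json", "find", "Port_Binding",
             "logical_port=" + logical_port])
        table = json.loads(out)
        # "headings" names the columns, "data" holds one list per row.
        return [dict(zip(table["headings"], row)) for row in table["data"]]

    for row in port_binding_rows("5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4"):
        print(row["logical_port"], row["up"], row["chassis"])
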
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.557 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.558 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:15 compute-1 systemd-machined[190302]: New machine qemu-10-instance-00000012.
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.565 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:d9:12 19.80.0.179'], port_security=['fa:16:3e:cb:d9:12 19.80.0.179'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1920716484', 'neutron:cidrs': '19.80.0.179/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1920716484', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c735218e-8fbf-4d82-b453-bc3944800b8e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4) old=Port_Binding(up=[True], additional_chassis=[], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.567 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:c8:b4 10.100.0.14'], port_security=['fa:16:3e:3d:c8:b4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1426450350', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '714f2e5b-135b-4f7e-9c62-3e1849c5e151', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1426450350', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '11', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b) old=Port_Binding(up=[True], additional_chassis=[], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.567 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[98e7e597-9b0b-4e8a-bd04-309767bd59b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.568 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a9cf9f3-d1 in ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.571 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a9cf9f3-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.571 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3a62aec9-4bb5-4e19-b185-c42bacd56679]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.572 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f360156e-68f8-47dd-8c63-f7ce1c1f8df4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 systemd[1]: Started Virtual Machine qemu-10-instance-00000012.
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.582 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.590 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[4f873167-c311-4059-bad3-0d931853082a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.615 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9a76622a-b5cb-4262-8eea-d7a5cb916dce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.643 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[0df6f5cf-581c-44aa-a9e6-1f5cdfd3e367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 NetworkManager[49031]: <info>  [1765004535.6523] manager: (tap4a9cf9f3-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.651 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a38ee907-82a1-473f-9656-dad514fb805a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 systemd-udevd[235049]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.695 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[749db318-f0ea-4d99-bace-e1b0c350b81c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.702 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[03b28933-210c-4c54-9715-0c85739c7f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 NetworkManager[49031]: <info>  [1765004535.7246] device (tap4a9cf9f3-d0): carrier: link connected
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.728 226109 INFO nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Post operation of migration started
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.730 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[75766977-e296-4101-aa5d-e3e9b0ec4a34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.746 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b392ec4d-03b6-49ca-aaa8-b75fd32d4897]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a9cf9f3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:5f:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481522, 'reachable_time': 35134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235088, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.763 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2366d8-6978-4514-9f62-edf3d23c8a94]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:5f2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481522, 'tstamp': 481522}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235089, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.783 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[91612496-50d5-4e1b-b1b2-b0e67207ede3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a9cf9f3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:5f:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481522, 'reachable_time': 35134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235090, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
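
The two RTM_NEWLINK replies above are netlink dumps of the freshly created veth half tap4a9cf9f3-d1, fetched through neutron's privsep daemon inside the ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed namespace. Roughly the same query can be made directly with pyroute2, which is what neutron's ip_lib wraps; a sketch assuming the namespace still exists on the host:

    from pyroute2 import NetNS

    # List links inside the OVN metadata namespace, approximately what the
    # privsep daemon returned in the log above.
    ns = NetNS("ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed")
    try:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"),
                  link.get_attr("IFLA_ADDRESS"),
                  link["state"])
    finally:
        ns.close()
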
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.820 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e94cf9be-773e-4479-be02-6c71c5d13c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.862 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[72729346-bef7-455d-9f1d-79dd7a5d1c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.864 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a9cf9f3-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.864 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.865 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a9cf9f3-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:15 compute-1 NetworkManager[49031]: <info>  [1765004535.8671] manager: (tap4a9cf9f3-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.866 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:15 compute-1 kernel: tap4a9cf9f3-d0: entered promiscuous mode
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.869 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.874 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a9cf9f3-d0, col_values=(('external_ids', {'iface-id': '46c2af8b-f787-403f-aef5-72ec2f87b6fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:15 compute-1 ovn_controller[130279]: 2025-12-06T07:02:15Z|00060|binding|INFO|Releasing lport 46c2af8b-f787-403f-aef5-72ec2f87b6fc from this chassis (sb_readonly=0)
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.876 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.878 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.879 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[568685ef-0ffa-451c-8687-9ef7e4b80458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.879 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.pid.haproxy
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:15.880 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'env', 'PROCESS_TAG=haproxy-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
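
The generated haproxy configuration is the whole of the OVN metadata proxy: inside the namespace it binds the link-local metadata address 169.254.169.254:80, forwards each request over the unix socket /var/lib/neutron/metadata_proxy to the metadata agent, and tags it with X-OVN-Network-ID so the agent can tell which network the requesting instance sits on. Only two parts of the listener stanza vary; a throwaway re-rendering (template and values mirror the log, the code itself is ours):

    from string import Template

    # The listener stanza from the logged haproxy_cfg, parameterized on the
    # two values that differ per network.
    LISTENER = Template(
        "listen listener\n"
        "    bind 169.254.169.254:80\n"
        "    server metadata $socket\n"
        "    http-request add-header X-OVN-Network-ID $network_id\n")

    print(LISTENER.substitute(
        socket="/var/lib/neutron/metadata_proxy",
        network_id="4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed"))
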
Dec 06 07:02:15 compute-1 nova_compute[226101]: 2025-12-06 07:02:15.889 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:02:16 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1673630881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Dec 06 07:02:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1398766661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.236900) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536236928, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2480, "num_deletes": 254, "total_data_size": 5963982, "memory_usage": 6052816, "flush_reason": "Manual Compaction"}
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.242 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
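
The 0.660s above is the round trip for nova's RBD backend sizing the storage pool with ceph df before it reports inventory. The same call can be replayed and its JSON parsed; a sketch assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable (field names per ceph's JSON output as we understand it):

    import json
    import subprocess

    # Replay the exact command nova ran above and summarize per-pool usage.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    for pool in df["pools"]:
        stats = pool["stats"]
        print("%s: %.1f GiB used, %.1f GiB max avail"
              % (pool["name"], stats["bytes_used"] / 2**30,
                 stats["max_avail"] / 2**30))
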
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.250 226109 DEBUG nova.compute.provider_tree [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.270 226109 DEBUG nova.scheduler.client.report [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
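
Placement turns each inventory record into schedulable capacity as (total - reserved) * allocation_ratio, so the figures nova just reported work out as below (a quick check; values copied from the log line above):

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 17.1
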
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.313 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.313 226109 DEBUG nova.compute.manager [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:02:16 compute-1 podman[235131]: 2025-12-06 07:02:16.248948694 +0000 UTC m=+0.035427151 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536380234, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 3869021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23248, "largest_seqno": 25723, "table_properties": {"data_size": 3858904, "index_size": 6419, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21670, "raw_average_key_size": 20, "raw_value_size": 3838448, "raw_average_value_size": 3683, "num_data_blocks": 282, "num_entries": 1042, "num_filter_entries": 1042, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004331, "oldest_key_time": 1765004331, "file_creation_time": 1765004536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 143452 microseconds, and 8090 cpu microseconds.
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.403 226109 DEBUG nova.compute.manager [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.403 226109 DEBUG nova.network.neutron [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.430 226109 INFO nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.380306) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 3869021 bytes OK
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.380375) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.442151) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.442204) EVENT_LOG_v1 {"time_micros": 1765004536442194, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.442230) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 5952980, prev total WAL file size 5952980, number of live WAL files 2.
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.443619) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(3778KB)], [48(8264KB)]
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536443648, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 12331391, "oldest_snapshot_seqno": -1}
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.452 226109 DEBUG nova.compute.manager [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:02:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:16.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.576 226109 DEBUG nova.compute.manager [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.577 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.577 226109 INFO nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Creating image(s)
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5467 keys, 10329030 bytes, temperature: kUnknown
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536623942, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 10329030, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10290883, "index_size": 23381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 138800, "raw_average_key_size": 25, "raw_value_size": 10190502, "raw_average_value_size": 1864, "num_data_blocks": 954, "num_entries": 5467, "num_filter_entries": 5467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765004536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.631 226109 DEBUG nova.storage.rbd_utils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 5cc7f5c4-dc50-4137-af58-60294ea57208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.668 226109 DEBUG nova.storage.rbd_utils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 5cc7f5c4-dc50-4137-af58-60294ea57208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.697 226109 DEBUG nova.storage.rbd_utils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 5cc7f5c4-dc50-4137-af58-60294ea57208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
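
The three "does not exist" probes above are nova.storage.rbd_utils checking whether the target image is already in the vms pool before creating it. The probe amounts to "try to open it"; a simplified sketch with the ceph python bindings (rados/rbd, shipped with ceph; pool and image names taken from the log, error handling trimmed):

    import rados
    import rbd

    # Opening a missing RBD image raises ImageNotFound -- that is the probe.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "5cc7f5c4-dc50-4137-af58-60294ea57208_disk"):
            print("image exists")
    except rbd.ImageNotFound:
        print("rbd image does not exist")
    finally:
        ioctx.close()
        cluster.shutdown()
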
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.702 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.624284) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10329030 bytes
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.716972) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.4 rd, 57.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.1 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5994, records dropped: 527 output_compression: NoCompression
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.717013) EVENT_LOG_v1 {"time_micros": 1765004536716999, "job": 28, "event": "compaction_finished", "compaction_time_micros": 180404, "compaction_time_cpu_micros": 25946, "output_level": 6, "num_output_files": 1, "total_output_size": 10329030, "num_input_records": 5994, "num_output_records": 5467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
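Each EVENT_LOG_v1 record above is a complete JSON object embedded after the marker, so compaction history can be recovered from a captured journal with a short script. A minimal sketch; the journal.log path and the chosen fields are illustrative assumptions:

import json
import re

# rocksdb appends the JSON payload directly after the EVENT_LOG_v1 marker.
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})$")

def compaction_events(path):
    """Yield parsed rocksdb 'compaction_finished' events from a saved journal."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if not m:
                continue
            try:
                event = json.loads(m.group(1))
            except json.JSONDecodeError:
                continue  # skip wrapped or truncated lines
            if event.get("event") == "compaction_finished":
                yield event

for ev in compaction_events("journal.log"):
    dropped = ev["num_input_records"] - ev["num_output_records"]
    print(f"job {ev['job']}: {ev['total_output_size']} bytes out, "
          f"{dropped} of {ev['num_input_records']} records dropped")

For the job 28 record above this prints 10329030 bytes out and 527 of 5994 records dropped, matching the human-readable compaction summary two lines earlier.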
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536717849, "job": 28, "event": "table_file_deletion", "file_number": 50}
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536719414, "job": 28, "event": "table_file_deletion", "file_number": 48}
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.443558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.719472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.719476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.719478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.719480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:02:16.719482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.723 226109 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.724 226109 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquired lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.724 226109 DEBUG nova.network.neutron [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:02:16 compute-1 podman[235131]: 2025-12-06 07:02:16.742500465 +0000 UTC m=+0.528978902 container create e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.773 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.774 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.775 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.775 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:02:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:16.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
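The beast line is radosgw's access log: client, user, timestamp, request, status, bytes, then latency. A minimal parsing sketch; the regex is written against the samples in this capture, so treat it as illustrative rather than a format spec:

import re

BEAST_RE = re.compile(
    r'beast: \S+ (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous '
        '[06/Dec/2025:07:02:16.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000026s')
m = BEAST_RE.search(line)
if m:
    # Prints: 192.168.122.102 HEAD / HTTP/1.0 200 0.001000026
    print(m.group("client"), m.group("request"),
          m.group("status"), m.group("latency"))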
Dec 06 07:02:16 compute-1 systemd[1]: Started libpod-conmon-e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702.scope.
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.804 226109 DEBUG nova.storage.rbd_utils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 5cc7f5c4-dc50-4137-af58-60294ea57208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.811 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5cc7f5c4-dc50-4137-af58-60294ea57208_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:16 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:02:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bfd56ee089a9dc21ca5228028a75cd18666a3fdba96cc9fbf38a850e2c6b4a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.840 226109 DEBUG nova.policy [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3cae056210a400fa5e3495fe827d29a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.865 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004536.8654196, 345d5d4a-3a34-4809-9ae4-60a579c5e49a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.866 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] VM Resumed (Lifecycle Event)
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.869 226109 DEBUG nova.compute.manager [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.873 226109 INFO nova.virt.libvirt.driver [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance running successfully.
Dec 06 07:02:16 compute-1 virtqemud[225710]: argument unsupported: QEMU guest agent is not configured
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.876 226109 DEBUG nova.virt.libvirt.guest [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.877 226109 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.901 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.909 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:16 compute-1 podman[235131]: 2025-12-06 07:02:16.939614157 +0000 UTC m=+0.726092624 container init e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:02:16 compute-1 podman[235131]: 2025-12-06 07:02:16.94636572 +0000 UTC m=+0.732844147 container start e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.950 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.951 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004536.868378, 345d5d4a-3a34-4809-9ae4-60a579c5e49a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.951 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] VM Started (Lifecycle Event)
Dec 06 07:02:16 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235252]: [NOTICE]   (235277) : New worker (235279) forked
Dec 06 07:02:16 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235252]: [NOTICE]   (235277) : Loading success.
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.983 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:16 compute-1 nova_compute[226101]: 2025-12-06 07:02:16.986 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:17.018 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 unbound from our chassis
Dec 06 07:02:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:17.020 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dfa287bf-10c3-40fc-8071-37bb7f801357, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:02:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:17.021 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[38cb1c21-e7b6-44be-95c3-77b61d9f6fa4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:17.021 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 in datapath 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed unbound from our chassis
Dec 06 07:02:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:17.023 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:02:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:17.023 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7da38e00-07d6-4bdb-b532-4c1dbbcfbfe9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:17.023 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed namespace which is not needed anymore
Dec 06 07:02:17 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235252]: [NOTICE]   (235277) : haproxy version is 2.8.14-c23fe91
Dec 06 07:02:17 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235252]: [NOTICE]   (235277) : path to executable is /usr/sbin/haproxy
Dec 06 07:02:17 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235252]: [WARNING]  (235277) : Exiting Master process...
Dec 06 07:02:17 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235252]: [ALERT]    (235277) : Current worker (235279) exited with code 143 (Terminated)
Dec 06 07:02:17 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235252]: [WARNING]  (235277) : All workers exited. Exiting... (0)
Dec 06 07:02:17 compute-1 systemd[1]: libpod-e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702.scope: Deactivated successfully.
Dec 06 07:02:17 compute-1 podman[235304]: 2025-12-06 07:02:17.234532716 +0000 UTC m=+0.127049558 container died e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:02:17 compute-1 ceph-mon[81689]: pgmap v1240: 305 pgs: 305 active+clean; 559 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.2 MiB/s wr, 247 op/s
Dec 06 07:02:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1673630881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:17 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702-userdata-shm.mount: Deactivated successfully.
Dec 06 07:02:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-2bfd56ee089a9dc21ca5228028a75cd18666a3fdba96cc9fbf38a850e2c6b4a5-merged.mount: Deactivated successfully.
Dec 06 07:02:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:17 compute-1 podman[235304]: 2025-12-06 07:02:17.697583192 +0000 UTC m=+0.590100024 container cleanup e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:02:17 compute-1 systemd[1]: libpod-conmon-e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702.scope: Deactivated successfully.
Dec 06 07:02:17 compute-1 podman[235337]: 2025-12-06 07:02:17.914684055 +0000 UTC m=+0.193530007 container remove e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:02:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:17.921 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d6597e73-adb0-494b-9578-33d52ebe3315]: (4, ('Sat Dec  6 07:02:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed (e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702)\ne6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702\nSat Dec  6 07:02:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed (e6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702)\ne6cf047eb2b82b18ee2637bf06b7f897559a9093fce7acbbe2d631004a23d702\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:17.923 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5b31e4ac-2a3b-4c75-8539-f247182a536d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:17.924 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a9cf9f3-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:17.932 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5cc7f5c4-dc50-4137-af58-60294ea57208_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:17.998 226109 DEBUG nova.storage.rbd_utils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] resizing rbd image 5cc7f5c4-dc50-4137-af58-60294ea57208_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
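The rbd import above, followed immediately by the resize to the flavor's 1073741824-byte root disk, is the path nova's RBD image backend takes when it cannot CoW-clone the base image. A minimal sketch of the same two steps under those assumptions, shelling out for the import (flags copied from the logged command) and using the python3-rados/python3-rbd bindings, assumed installed, for the resize:

import subprocess

import rados
import rbd

BASE = "/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef"
IMAGE = "5cc7f5c4-dc50-4137-af58-60294ea57208_disk"

# Step 1: import the flat base file into the vms pool (there is no
# python-rbd equivalent for import, so reuse the CLI exactly as logged).
subprocess.run(
    ["rbd", "import", "--pool", "vms", BASE, IMAGE,
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)

# Step 2: grow the image to 1073741824 bytes, matching the resize line above.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, IMAGE) as image:
            image.resize(1073741824)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()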
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.101 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:18 compute-1 kernel: tap4a9cf9f3-d0: left promiscuous mode
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.115 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:18.118 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d276ff89-0111-45db-84bb-83da95490c98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:18.133 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3e333c2b-5747-47a9-a7bf-f57df4e0283d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:18.135 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c6999344-dc6b-426e-b6b8-53abaf2ab736]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:18.150 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[57f43947-fc2f-4f7c-ba94-a0d2de141ac7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481514, 'reachable_time': 41895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235405, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:18.153 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:02:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:18.154 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[f97f605a-1fc3-41cb-9259-2b3b05fa74e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:18 compute-1 systemd[1]: run-netns-ovnmeta\x2d4a9cf9f3\x2dd63a\x2d4198\x2da2a7\x2db24331e0d8ed.mount: Deactivated successfully.
Dec 06 07:02:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:18.155 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 unbound from our chassis
Dec 06 07:02:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:18.157 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dfa287bf-10c3-40fc-8071-37bb7f801357, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:02:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:18.158 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[950f6162-a7cf-41ec-81f8-ed4c62cacc93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.339 226109 DEBUG nova.objects.instance [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'migration_context' on Instance uuid 5cc7f5c4-dc50-4137-af58-60294ea57208 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.345 226109 DEBUG nova.network.neutron [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Successfully created port: b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.355 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.357 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.358 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Ensure instance console log exists: /var/lib/nova/instances/5cc7f5c4-dc50-4137-af58-60294ea57208/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.358 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.358 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.358 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:18 compute-1 ovn_controller[130279]: 2025-12-06T07:02:18Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:2f:12 10.100.0.5
Dec 06 07:02:18 compute-1 ovn_controller[130279]: 2025-12-06T07:02:18Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:2f:12 10.100.0.5
Dec 06 07:02:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:18.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.550 226109 DEBUG nova.network.neutron [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Updating instance_info_cache with network_info: [{"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.586 226109 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Releasing lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.605 226109 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.606 226109 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.606 226109 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.611 226109 INFO nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Dec 06 07:02:18 compute-1 virtqemud[225710]: Domain id=9 name='instance-00000011' uuid=714f2e5b-135b-4f7e-9c62-3e1849c5e151 is tainted: custom-monitor
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.733 226109 DEBUG oslo_concurrency.lockutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.733 226109 DEBUG oslo_concurrency.lockutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.734 226109 DEBUG nova.network.neutron [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:02:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:18.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:18 compute-1 ceph-mon[81689]: pgmap v1241: 305 pgs: 305 active+clean; 578 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Dec 06 07:02:18 compute-1 nova_compute[226101]: 2025-12-06 07:02:18.973 226109 DEBUG nova.network.neutron [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:02:19 compute-1 podman[235424]: 2025-12-06 07:02:19.102372985 +0000 UTC m=+0.087470618 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.391 226109 DEBUG nova.network.neutron [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.410 226109 DEBUG oslo_concurrency.lockutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:19 compute-1 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000012.scope: Deactivated successfully.
Dec 06 07:02:19 compute-1 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000012.scope: Consumed 3.183s CPU time.
Dec 06 07:02:19 compute-1 systemd-machined[190302]: Machine qemu-10-instance-00000012 terminated.
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.618 226109 INFO nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Dec 06 07:02:19 compute-1 ovn_controller[130279]: 2025-12-06T07:02:19Z|00061|binding|INFO|Claiming lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for this chassis.
Dec 06 07:02:19 compute-1 ovn_controller[130279]: 2025-12-06T07:02:19Z|00062|binding|INFO|8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b: Claiming fa:16:3e:3d:c8:b4 10.100.0.14
Dec 06 07:02:19 compute-1 ovn_controller[130279]: 2025-12-06T07:02:19Z|00063|binding|INFO|Claiming lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 for this chassis.
Dec 06 07:02:19 compute-1 ovn_controller[130279]: 2025-12-06T07:02:19Z|00064|binding|INFO|5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4: Claiming fa:16:3e:cb:d9:12 19.80.0.179
Dec 06 07:02:19 compute-1 ovn_controller[130279]: 2025-12-06T07:02:19Z|00065|binding|INFO|Removing lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b ovn-installed in OVS
Dec 06 07:02:19 compute-1 ovn_controller[130279]: 2025-12-06T07:02:19Z|00066|binding|INFO|Setting lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b ovn-installed in OVS
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.644 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.649 226109 INFO nova.virt.libvirt.driver [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance destroyed successfully.
Dec 06 07:02:19 compute-1 ovn_controller[130279]: 2025-12-06T07:02:19Z|00067|binding|INFO|Setting lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b up in Southbound
Dec 06 07:02:19 compute-1 ovn_controller[130279]: 2025-12-06T07:02:19Z|00068|binding|INFO|Setting lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 up in Southbound
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.649 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:d9:12 19.80.0.179'], port_security=['fa:16:3e:cb:d9:12 19.80.0.179'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1920716484', 'neutron:cidrs': '19.80.0.179/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1920716484', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c735218e-8fbf-4d82-b453-bc3944800b8e, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4) old=Port_Binding(additional_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.650 226109 DEBUG nova.objects.instance [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'resources' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.651 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:c8:b4 10.100.0.14'], port_security=['fa:16:3e:3d:c8:b4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1426450350', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '714f2e5b-135b-4f7e-9c62-3e1849c5e151', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1426450350', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '13', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b) old=Port_Binding(additional_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.652 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 in datapath 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed bound to our chassis
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.653 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.664 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ff33c352-2f79-42f0-9be5-bb7c5ab3e848]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.665 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a9cf9f3-d1 in ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.667 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a9cf9f3-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.667 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c2895f6f-e56e-4505-b204-35ef937f9632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.668 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[25085e93-5ba5-482b-905d-e63590c6d4a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.679 226109 DEBUG oslo_concurrency.lockutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.679 226109 DEBUG oslo_concurrency.lockutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.680 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6fa3e7-647d-436f-9db6-a23bebf6fde0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.702 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6cfde1-b54e-45e2-98db-0a82d59d22c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.704 226109 DEBUG nova.objects.instance [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'migration_context' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.729 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c59f33ae-3b3e-4e62-94b3-53db58215d5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 systemd-udevd[235450]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:02:19 compute-1 NetworkManager[49031]: <info>  [1765004539.7358] manager: (tap4a9cf9f3-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.734 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[92e506ce-bf32-4b21-89bb-581717892f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.755 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.768 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d8943953-7238-4ff8-bc3b-f2d77fae372d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.771 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[a8420a2d-0b1c-42fd-b61e-490f901f3680]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.777 226109 DEBUG nova.network.neutron [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Successfully updated port: b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:02:19 compute-1 NetworkManager[49031]: <info>  [1765004539.7919] device (tap4a9cf9f3-d0): carrier: link connected
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.791 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "refresh_cache-5cc7f5c4-dc50-4137-af58-60294ea57208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.792 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquired lock "refresh_cache-5cc7f5c4-dc50-4137-af58-60294ea57208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.792 226109 DEBUG nova.network.neutron [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.799 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[77ba9a35-7d3d-48ef-85a4-342487e376b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.815 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b69465e7-63b4-4649-972d-41facbde815e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a9cf9f3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:5f:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481929, 'reachable_time': 30184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235478, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.832 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[caf61159-f22e-44f3-aa19-295aaae9769b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:5f2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481929, 'tstamp': 481929}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235479, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.846 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[033bb07d-9ece-4e35-ad5e-806bc25e48b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a9cf9f3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:5f:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481929, 'reachable_time': 30184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235480, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.871 226109 DEBUG oslo_concurrency.processutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.875 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2834f522-3692-4aca-bde5-8295aaf03a59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.929 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[17ff0b70-f2ae-439b-b2bd-46f2695d5439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 NetworkManager[49031]: <info>  [1765004539.9336] manager: (tap4a9cf9f3-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.930 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a9cf9f3-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.930 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.931 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a9cf9f3-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:19 compute-1 kernel: tap4a9cf9f3-d0: entered promiscuous mode
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.933 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.948 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a9cf9f3-d0, col_values=(('external_ids', {'iface-id': '46c2af8b-f787-403f-aef5-72ec2f87b6fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.950 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.950 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:19 compute-1 ovn_controller[130279]: 2025-12-06T07:02:19Z|00069|binding|INFO|Releasing lport 46c2af8b-f787-403f-aef5-72ec2f87b6fc from this chassis (sb_readonly=0)
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.961 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.962 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1f302d16-44c0-43cb-8dbf-0edee9466586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.963 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.pid.haproxy
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:02:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:19.963 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'env', 'PROCESS_TAG=haproxy-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:02:19 compute-1 nova_compute[226101]: 2025-12-06 07:02:19.965 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.083 226109 DEBUG nova.network.neutron [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:02:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:02:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/239897114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.310 226109 DEBUG oslo_concurrency.processutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:20 compute-1 podman[235532]: 2025-12-06 07:02:20.31581408 +0000 UTC m=+0.046231962 container create 8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.316 226109 DEBUG nova.compute.provider_tree [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.342 226109 DEBUG nova.scheduler.client.report [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:02:20 compute-1 systemd[1]: Started libpod-conmon-8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95.scope.
Dec 06 07:02:20 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:02:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f67531613c60a361bdc4d48b6a2fb49712313fe4ae9b2a0bac386b5fbf6b4c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:20 compute-1 podman[235532]: 2025-12-06 07:02:20.290577947 +0000 UTC m=+0.020995849 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:02:20 compute-1 podman[235532]: 2025-12-06 07:02:20.394994283 +0000 UTC m=+0.125412165 container init 8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:02:20 compute-1 podman[235532]: 2025-12-06 07:02:20.400564853 +0000 UTC m=+0.130982735 container start 8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.400 226109 DEBUG oslo_concurrency.lockutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:20 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235549]: [NOTICE]   (235553) : New worker (235555) forked
Dec 06 07:02:20 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235549]: [NOTICE]   (235553) : Loading success.
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.457 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 unbound from our chassis
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.458 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfa287bf-10c3-40fc-8071-37bb7f801357
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.469 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9cea53b4-9e50-46ef-b671-78d331114009]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.470 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdfa287bf-11 in ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.472 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdfa287bf-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.472 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7ff3ed-1c8b-4b50-9f04-6964a1edade0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.472 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[241d8846-ea70-4c94-9401-2031ae6a7996]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.482 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ff54d810-2fcf-4622-91ce-49c313ce81fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.495 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[08341f6a-e2a9-4013-9707-1d7ec7dfc3c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.519 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8bf17e-25d9-4dd6-afef-9c7cf48b302f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.524 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6a1bcc-7f73-4904-8a42-0f6524f9a3d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 NetworkManager[49031]: <info>  [1765004540.5254] manager: (tapdfa287bf-10): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Dec 06 07:02:20 compute-1 systemd-udevd[235471]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:02:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:20.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.553 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[a00c166f-77c0-4ae6-88cd-d682cf78dd9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.556 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[55f4cb5d-859f-49f5-bf98-22e4fddbb066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 NetworkManager[49031]: <info>  [1765004540.5776] device (tapdfa287bf-10): carrier: link connected
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.582 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[86d76716-8b36-4ef4-86e0-d851b6a716c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.599 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7d099d-3766-4c4d-924c-c327f1a03cdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfa287bf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482008, 'reachable_time': 32535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235574, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.613 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7d1c8b-629c-4d45-9b97-739c92a76cdc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:a341'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482008, 'tstamp': 482008}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235575, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.624 226109 INFO nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.629 226109 DEBUG nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.631 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c18abe2e-de2a-4169-b0ec-e9b7bd914511]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfa287bf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482008, 'reachable_time': 32535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235576, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.659 226109 DEBUG nova.objects.instance [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.659 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b9601282-3335-4ea4-b9d6-94ec3589edad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.708 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f2dd03a9-0874-4a0d-a79d-c93e00e6963f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.709 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfa287bf-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.710 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.710 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfa287bf-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.711 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:20 compute-1 kernel: tapdfa287bf-10: entered promiscuous mode
Dec 06 07:02:20 compute-1 NetworkManager[49031]: <info>  [1765004540.7125] manager: (tapdfa287bf-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.713 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.714 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfa287bf-10, col_values=(('external_ids', {'iface-id': 'a8b489de-cf80-4c12-869a-5e807cdbba8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.715 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:20 compute-1 ovn_controller[130279]: 2025-12-06T07:02:20Z|00070|binding|INFO|Releasing lport a8b489de-cf80-4c12-869a-5e807cdbba8c from this chassis (sb_readonly=0)
Dec 06 07:02:20 compute-1 nova_compute[226101]: 2025-12-06 07:02:20.729 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.730 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.731 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7d28fc-9637-4279-8cbe-71824e055daa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.732 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-dfa287bf-10c3-40fc-8071-37bb7f801357
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID dfa287bf-10c3-40fc-8071-37bb7f801357
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:02:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:20.732 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'env', 'PROCESS_TAG=haproxy-dfa287bf-10c3-40fc-8071-37bb7f801357', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dfa287bf-10c3-40fc-8071-37bb7f801357.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:02:20 compute-1 ceph-mon[81689]: pgmap v1242: 305 pgs: 305 active+clean; 578 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Dec 06 07:02:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/239897114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:20.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:21 compute-1 podman[235608]: 2025-12-06 07:02:21.139251076 +0000 UTC m=+0.053750755 container create 13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:02:21 compute-1 systemd[1]: Started libpod-conmon-13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863.scope.
Dec 06 07:02:21 compute-1 podman[235608]: 2025-12-06 07:02:21.113300394 +0000 UTC m=+0.027800063 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:02:21 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.221 226109 DEBUG nova.compute.manager [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.223 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.223 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.223 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.223 226109 DEBUG nova.compute.manager [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.224 226109 WARNING nova.compute.manager [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received unexpected event network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with vm_state active and task_state None.
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.224 226109 DEBUG nova.compute.manager [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.224 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.224 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.224 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.224 226109 DEBUG nova.compute.manager [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.225 226109 WARNING nova.compute.manager [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received unexpected event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with vm_state active and task_state None.
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.225 226109 DEBUG nova.compute.manager [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Received event network-changed-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6893fcdd7b343bdd3dd9a29330912c9b0e5a69dc2fe13e661381c8c172a2c97d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.225 226109 DEBUG nova.compute.manager [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Refreshing instance network info cache due to event network-changed-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.225 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5cc7f5c4-dc50-4137-af58-60294ea57208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:21 compute-1 podman[235608]: 2025-12-06 07:02:21.237507565 +0000 UTC m=+0.152007224 container init 13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:02:21 compute-1 podman[235608]: 2025-12-06 07:02:21.243495586 +0000 UTC m=+0.157995235 container start 13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:02:21 compute-1 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[235623]: [NOTICE]   (235627) : New worker (235629) forked
Dec 06 07:02:21 compute-1 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[235623]: [NOTICE]   (235627) : Loading success.
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.321 226109 DEBUG nova.network.neutron [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Updating instance_info_cache with network_info: [{"id": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "address": "fa:16:3e:cf:d3:3d", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7a77c83-fd", "ovs_interfaceid": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.347 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Releasing lock "refresh_cache-5cc7f5c4-dc50-4137-af58-60294ea57208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.347 226109 DEBUG nova.compute.manager [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Instance network_info: |[{"id": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "address": "fa:16:3e:cf:d3:3d", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7a77c83-fd", "ovs_interfaceid": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.348 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5cc7f5c4-dc50-4137-af58-60294ea57208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.348 226109 DEBUG nova.network.neutron [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Refreshing network info cache for port b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.350 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Start _get_guest_xml network_info=[{"id": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "address": "fa:16:3e:cf:d3:3d", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7a77c83-fd", "ovs_interfaceid": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.356 226109 WARNING nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.362 226109 DEBUG nova.virt.libvirt.host [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.363 226109 DEBUG nova.virt.libvirt.host [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.366 226109 DEBUG nova.virt.libvirt.host [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.367 226109 DEBUG nova.virt.libvirt.host [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.368 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.368 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.368 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.369 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.369 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.369 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.369 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.369 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.370 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.370 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.370 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.370 226109 DEBUG nova.virt.hardware [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
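The hardware-topology trace above starts from unconstrained flavor/image limits (0:0:0 preferences, 65536 caps) and ends with the single factorization 1:1:1 for a one-vCPU guest. The underlying step is just enumerating sockets*cores*threads factorizations of the vCPU count within the caps; a minimal sketch of that enumeration (illustrative only, not nova.virt.hardware itself):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate (sockets, cores, threads) with s * c * t == vcpus,
        # each factor within its cap.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            rem = vcpus // s
            for c in range(1, min(rem, max_cores) + 1):
                if rem % c:
                    continue
                t = rem // c
                if t <= max_threads:
                    topos.append((s, c, t))
        return topos

    # For the m1.nano flavor above (vcpus=1) this yields [(1, 1, 1)], matching
    # "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".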
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.373 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:02:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2358499230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.873 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.896 226109 DEBUG nova.storage.rbd_utils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 5cc7f5c4-dc50-4137-af58-60294ea57208_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:21 compute-1 nova_compute[226101]: 2025-12-06 07:02:21.900 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:02:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/13529011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.324 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
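Both "ceph mon dump" round-trips above go through oslo_concurrency.processutils, which logs the command, runs it in a subprocess, and logs the return code and wall time. A minimal equivalent, reusing the exact command line from the log (processutils.execute raises ProcessExecutionError on a non-zero exit instead of returning it):

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    # The monitor addresses parsed here are the three 192.168.122.x hosts
    # that reappear in the <source protocol="rbd"> elements of the guest XML below.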
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.325 226109 DEBUG nova.virt.libvirt.vif [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:02:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1006486405',display_name='tempest-ServersAdminTestJSON-server-1006486405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1006486405',id=21,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-teupi66o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:02:16Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=5cc7f5c4-dc50-4137-af58-60294ea57208,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "address": "fa:16:3e:cf:d3:3d", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7a77c83-fd", "ovs_interfaceid": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.326 226109 DEBUG nova.network.os_vif_util [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "address": "fa:16:3e:cf:d3:3d", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7a77c83-fd", "ovs_interfaceid": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.326 226109 DEBUG nova.network.os_vif_util [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:d3:3d,bridge_name='br-int',has_traffic_filtering=True,id=b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7a77c83-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.328 226109 DEBUG nova.objects.instance [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5cc7f5c4-dc50-4137-af58-60294ea57208 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.366 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:02:22 compute-1 nova_compute[226101]:   <uuid>5cc7f5c4-dc50-4137-af58-60294ea57208</uuid>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   <name>instance-00000015</name>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <nova:name>tempest-ServersAdminTestJSON-server-1006486405</nova:name>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:02:21</nova:creationTime>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <nova:user uuid="a3cae056210a400fa5e3495fe827d29a">tempest-ServersAdminTestJSON-1902776367-project-member</nova:user>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <nova:project uuid="b6179a8b65c2484eb7ca1e068d93a58c">tempest-ServersAdminTestJSON-1902776367</nova:project>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <nova:port uuid="b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a">
Dec 06 07:02:22 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <system>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <entry name="serial">5cc7f5c4-dc50-4137-af58-60294ea57208</entry>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <entry name="uuid">5cc7f5c4-dc50-4137-af58-60294ea57208</entry>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     </system>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   <os>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   </os>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   <features>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   </features>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/5cc7f5c4-dc50-4137-af58-60294ea57208_disk">
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       </source>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/5cc7f5c4-dc50-4137-af58-60294ea57208_disk.config">
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       </source>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:02:22 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:cf:d3:3d"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <target dev="tapb7a77c83-fd"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/5cc7f5c4-dc50-4137-af58-60294ea57208/console.log" append="off"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <video>
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     </video>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:02:22 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:02:22 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:02:22 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:02:22 compute-1 nova_compute[226101]: </domain>
Dec 06 07:02:22 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
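The domain XML dumped between "Start _get_guest_xml" and the line above is what the libvirt driver subsequently hands to libvirtd. A minimal sketch of defining and booting a guest from such XML with the libvirt-python bindings (the qemu:///system URI and the file name holding the XML are assumptions for illustration):

    import libvirt

    with open('instance-00000015.xml') as f:  # hypothetical copy of the XML above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persistent definition, as Nova uses
        dom.create()               # actually boot the guest
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()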
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.367 226109 DEBUG nova.compute.manager [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Preparing to wait for external event network-vif-plugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.367 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.367 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.367 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.368 226109 DEBUG nova.virt.libvirt.vif [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:02:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1006486405',display_name='tempest-ServersAdminTestJSON-server-1006486405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1006486405',id=21,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-teupi66o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:02:16Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=5cc7f5c4-dc50-4137-af58-60294ea57208,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "address": "fa:16:3e:cf:d3:3d", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7a77c83-fd", "ovs_interfaceid": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.368 226109 DEBUG nova.network.os_vif_util [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "address": "fa:16:3e:cf:d3:3d", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7a77c83-fd", "ovs_interfaceid": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.369 226109 DEBUG nova.network.os_vif_util [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:d3:3d,bridge_name='br-int',has_traffic_filtering=True,id=b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7a77c83-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.369 226109 DEBUG os_vif [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:d3:3d,bridge_name='br-int',has_traffic_filtering=True,id=b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7a77c83-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.370 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.370 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.370 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.373 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.373 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7a77c83-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.374 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7a77c83-fd, col_values=(('external_ids', {'iface-id': 'b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:d3:3d', 'vm-uuid': '5cc7f5c4-dc50-4137-af58-60294ea57208'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.375 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:22 compute-1 NetworkManager[49031]: <info>  [1765004542.3764] manager: (tapb7a77c83-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.378 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.384 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.385 226109 INFO os_vif [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:d3:3d,bridge_name='br-int',has_traffic_filtering=True,id=b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7a77c83-fd')
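The plug sequence above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface row) is os-vif's OVS plugin speaking OVSDB through ovsdbapp. The same end state can be reproduced by hand with the stock ovs-vsctl CLI; a sketch via subprocess, with the port name, iface-id, MAC, and VM UUID copied from the transaction log (an equivalent for inspection, not the code path Nova actually takes):

    import subprocess

    port = 'tapb7a77c83-fd'
    iface_id = 'b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a'
    mac = 'fa:16:3e:cf:d3:3d'
    vm_uuid = '5cc7f5c4-dc50-4137-af58-60294ea57208'

    # AddBridgeCommand(may_exist=True) -> "Transaction caused no change" above,
    # because br-int already exists.
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-br', 'br-int'], check=True)

    # AddPortCommand + DbSetCommand in one ovs-vsctl transaction.
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port, '--',
         'set', 'Interface', port,
         'external_ids:iface-id=%s' % iface_id,
         'external_ids:iface-status=active',
         'external_ids:attached-mac=%s' % mac,
         'external_ids:vm-uuid=%s' % vm_uuid],
        check=True)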
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.454 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.454 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.455 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No VIF found with MAC fa:16:3e:cf:d3:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.455 226109 INFO nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Using config drive
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.479 226109 DEBUG nova.storage.rbd_utils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 5cc7f5c4-dc50-4137-af58-60294ea57208_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:22.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.759 226109 DEBUG nova.network.neutron [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Updated VIF entry in instance network info cache for port b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.760 226109 DEBUG nova.network.neutron [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Updating instance_info_cache with network_info: [{"id": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "address": "fa:16:3e:cf:d3:3d", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7a77c83-fd", "ovs_interfaceid": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:22 compute-1 ceph-mon[81689]: pgmap v1243: 305 pgs: 305 active+clean; 635 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.9 MiB/s wr, 259 op/s
Dec 06 07:02:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2358499230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1622845979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/13529011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.781 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5cc7f5c4-dc50-4137-af58-60294ea57208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.782 226109 DEBUG nova.compute.manager [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.782 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.783 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.783 226109 DEBUG oslo_concurrency.lockutils [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.783 226109 DEBUG nova.compute.manager [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.783 226109 WARNING nova.compute.manager [req-91b55c37-a496-43d0-baa6-f2d94bb1e319 req-40532f79-537b-486b-9d0f-c9ecf77c8ba9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received unexpected event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with vm_state active and task_state None.
Dec 06 07:02:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e167 e167: 3 total, 3 up, 3 in
Dec 06 07:02:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:22.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.973 226109 INFO nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Creating config drive at /var/lib/nova/instances/5cc7f5c4-dc50-4137-af58-60294ea57208/disk.config
Dec 06 07:02:22 compute-1 nova_compute[226101]: 2025-12-06 07:02:22.978 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5cc7f5c4-dc50-4137-af58-60294ea57208/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3ywqsg8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.102 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5cc7f5c4-dc50-4137-af58-60294ea57208/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3ywqsg8" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.133 226109 DEBUG nova.storage.rbd_utils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 5cc7f5c4-dc50-4137-af58-60294ea57208_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.138 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5cc7f5c4-dc50-4137-af58-60294ea57208/disk.config 5cc7f5c4-dc50-4137-af58-60294ea57208_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.303 226109 DEBUG oslo_concurrency.processutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5cc7f5c4-dc50-4137-af58-60294ea57208/disk.config 5cc7f5c4-dc50-4137-af58-60294ea57208_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.304 226109 INFO nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Deleting local config drive /var/lib/nova/instances/5cc7f5c4-dc50-4137-af58-60294ea57208/disk.config because it was imported into RBD.
Dec 06 07:02:23 compute-1 kernel: tapb7a77c83-fd: entered promiscuous mode
Dec 06 07:02:23 compute-1 ovn_controller[130279]: 2025-12-06T07:02:23Z|00071|binding|INFO|Claiming lport b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a for this chassis.
Dec 06 07:02:23 compute-1 ovn_controller[130279]: 2025-12-06T07:02:23Z|00072|binding|INFO|b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a: Claiming fa:16:3e:cf:d3:3d 10.100.0.11
Dec 06 07:02:23 compute-1 NetworkManager[49031]: <info>  [1765004543.3455] manager: (tapb7a77c83-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.342 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.351 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:d3:3d 10.100.0.11'], port_security=['fa:16:3e:cf:d3:3d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5cc7f5c4-dc50-4137-af58-60294ea57208', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.353 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a in datapath 9451d867-0aba-464d-b4d9-f947b887e903 bound to our chassis
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.355 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:02:23 compute-1 ovn_controller[130279]: 2025-12-06T07:02:23Z|00073|binding|INFO|Setting lport b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a ovn-installed in OVS
Dec 06 07:02:23 compute-1 ovn_controller[130279]: 2025-12-06T07:02:23Z|00074|binding|INFO|Setting lport b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a up in Southbound
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.366 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.370 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c53039c6-6eef-4a29-a165-d8ccb56f2c0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:23 compute-1 systemd-udevd[235773]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:02:23 compute-1 systemd-machined[190302]: New machine qemu-11-instance-00000015.
Dec 06 07:02:23 compute-1 NetworkManager[49031]: <info>  [1765004543.3852] device (tapb7a77c83-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:02:23 compute-1 NetworkManager[49031]: <info>  [1765004543.3867] device (tapb7a77c83-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.388 226109 DEBUG nova.compute.manager [req-3015a0e9-cc13-4682-b7ed-a70d482ea92e req-50302b78-9fce-4c73-b723-5f557e421f0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.388 226109 DEBUG oslo_concurrency.lockutils [req-3015a0e9-cc13-4682-b7ed-a70d482ea92e req-50302b78-9fce-4c73-b723-5f557e421f0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.389 226109 DEBUG oslo_concurrency.lockutils [req-3015a0e9-cc13-4682-b7ed-a70d482ea92e req-50302b78-9fce-4c73-b723-5f557e421f0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.389 226109 DEBUG oslo_concurrency.lockutils [req-3015a0e9-cc13-4682-b7ed-a70d482ea92e req-50302b78-9fce-4c73-b723-5f557e421f0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.389 226109 DEBUG nova.compute.manager [req-3015a0e9-cc13-4682-b7ed-a70d482ea92e req-50302b78-9fce-4c73-b723-5f557e421f0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.389 226109 WARNING nova.compute.manager [req-3015a0e9-cc13-4682-b7ed-a70d482ea92e req-50302b78-9fce-4c73-b723-5f557e421f0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received unexpected event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with vm_state active and task_state None.
Dec 06 07:02:23 compute-1 systemd[1]: Started Virtual Machine qemu-11-instance-00000015.
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.400 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[953264f5-e116-45e7-a804-588e87775648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.404 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[46771285-cb94-49c3-a45d-01edc0042aa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.432 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[04a09783-4953-4ced-ae31-538208181021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.447 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4a00b436-1e01-483b-960e-d9f7abae0f3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479634, 'reachable_time': 43555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235784, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.460 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f8207b39-e29a-41a2-bc3b-c9b02e9e92fa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479647, 'tstamp': 479647}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235787, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479651, 'tstamp': 479651}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235787, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.462 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:23 compute-1 nova_compute[226101]: 2025-12-06 07:02:23.463 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.465 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9451d867-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.465 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.466 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9451d867-00, col_values=(('external_ids', {'iface-id': 'fed07814-3a76-4798-8d3b-90759d15a8cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:23.466 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:02:23 compute-1 ceph-mon[81689]: osdmap e167: 3 total, 3 up, 3 in
Dec 06 07:02:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3168415423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1485497034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:24 compute-1 nova_compute[226101]: 2025-12-06 07:02:24.046 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004544.0457406, 5cc7f5c4-dc50-4137-af58-60294ea57208 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:24 compute-1 nova_compute[226101]: 2025-12-06 07:02:24.047 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] VM Started (Lifecycle Event)
Dec 06 07:02:24 compute-1 nova_compute[226101]: 2025-12-06 07:02:24.103 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:24 compute-1 nova_compute[226101]: 2025-12-06 07:02:24.106 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004544.0458567, 5cc7f5c4-dc50-4137-af58-60294ea57208 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:24 compute-1 nova_compute[226101]: 2025-12-06 07:02:24.107 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] VM Paused (Lifecycle Event)
Dec 06 07:02:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:24.114 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:24 compute-1 nova_compute[226101]: 2025-12-06 07:02:24.115 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:24.115 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:02:24 compute-1 nova_compute[226101]: 2025-12-06 07:02:24.135 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:24 compute-1 nova_compute[226101]: 2025-12-06 07:02:24.138 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:24 compute-1 nova_compute[226101]: 2025-12-06 07:02:24.161 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:02:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:24.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:24.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:24 compute-1 ceph-mon[81689]: pgmap v1245: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.2 MiB/s wr, 260 op/s
Dec 06 07:02:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/54737310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.936 226109 DEBUG nova.compute.manager [req-cc89b50b-7552-4c3b-bce8-d381c9d65d15 req-ec2e5f9d-4165-4690-916f-ee0ea84fafd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Received event network-vif-plugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.937 226109 DEBUG oslo_concurrency.lockutils [req-cc89b50b-7552-4c3b-bce8-d381c9d65d15 req-ec2e5f9d-4165-4690-916f-ee0ea84fafd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.937 226109 DEBUG oslo_concurrency.lockutils [req-cc89b50b-7552-4c3b-bce8-d381c9d65d15 req-ec2e5f9d-4165-4690-916f-ee0ea84fafd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.937 226109 DEBUG oslo_concurrency.lockutils [req-cc89b50b-7552-4c3b-bce8-d381c9d65d15 req-ec2e5f9d-4165-4690-916f-ee0ea84fafd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.937 226109 DEBUG nova.compute.manager [req-cc89b50b-7552-4c3b-bce8-d381c9d65d15 req-ec2e5f9d-4165-4690-916f-ee0ea84fafd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Processing event network-vif-plugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.938 226109 DEBUG nova.compute.manager [req-cc89b50b-7552-4c3b-bce8-d381c9d65d15 req-ec2e5f9d-4165-4690-916f-ee0ea84fafd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Received event network-vif-plugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.938 226109 DEBUG oslo_concurrency.lockutils [req-cc89b50b-7552-4c3b-bce8-d381c9d65d15 req-ec2e5f9d-4165-4690-916f-ee0ea84fafd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.938 226109 DEBUG oslo_concurrency.lockutils [req-cc89b50b-7552-4c3b-bce8-d381c9d65d15 req-ec2e5f9d-4165-4690-916f-ee0ea84fafd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.938 226109 DEBUG oslo_concurrency.lockutils [req-cc89b50b-7552-4c3b-bce8-d381c9d65d15 req-ec2e5f9d-4165-4690-916f-ee0ea84fafd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.938 226109 DEBUG nova.compute.manager [req-cc89b50b-7552-4c3b-bce8-d381c9d65d15 req-ec2e5f9d-4165-4690-916f-ee0ea84fafd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] No waiting events found dispatching network-vif-plugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.938 226109 WARNING nova.compute.manager [req-cc89b50b-7552-4c3b-bce8-d381c9d65d15 req-ec2e5f9d-4165-4690-916f-ee0ea84fafd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Received unexpected event network-vif-plugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a for instance with vm_state building and task_state spawning.
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.939 226109 DEBUG nova.compute.manager [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.944 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004545.942785, 5cc7f5c4-dc50-4137-af58-60294ea57208 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.945 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] VM Resumed (Lifecycle Event)
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.946 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.952 226109 INFO nova.virt.libvirt.driver [-] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Instance spawned successfully.
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.953 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.975 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.983 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.988 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.989 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.989 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.990 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.990 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:25 compute-1 nova_compute[226101]: 2025-12-06 07:02:25.991 226109 DEBUG nova.virt.libvirt.driver [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:26 compute-1 nova_compute[226101]: 2025-12-06 07:02:26.019 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:02:26 compute-1 nova_compute[226101]: 2025-12-06 07:02:26.060 226109 INFO nova.compute.manager [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Took 9.48 seconds to spawn the instance on the hypervisor.
Dec 06 07:02:26 compute-1 nova_compute[226101]: 2025-12-06 07:02:26.061 226109 DEBUG nova.compute.manager [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:26 compute-1 nova_compute[226101]: 2025-12-06 07:02:26.126 226109 INFO nova.compute.manager [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Took 11.14 seconds to build instance.
Dec 06 07:02:26 compute-1 nova_compute[226101]: 2025-12-06 07:02:26.147 226109 DEBUG oslo_concurrency.lockutils [None req-2da219fa-b8ac-4637-8ffc-53aa673664e0 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:02:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:26.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:02:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:26.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:26 compute-1 ceph-mon[81689]: pgmap v1246: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.8 MiB/s wr, 229 op/s
Dec 06 07:02:27 compute-1 nova_compute[226101]: 2025-12-06 07:02:27.375 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:28 compute-1 ceph-mon[81689]: pgmap v1247: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Dec 06 07:02:28 compute-1 nova_compute[226101]: 2025-12-06 07:02:28.370 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:28.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:28 compute-1 sudo[235831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:28 compute-1 sudo[235831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:28 compute-1 sudo[235831]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:28.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:28 compute-1 sudo[235868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:02:28 compute-1 sudo[235868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:28 compute-1 sudo[235868]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:28 compute-1 podman[235856]: 2025-12-06 07:02:28.835174547 +0000 UTC m=+0.063441197 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:02:28 compute-1 podman[235855]: 2025-12-06 07:02:28.840956613 +0000 UTC m=+0.073229301 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:02:28 compute-1 sudo[235917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:28 compute-1 sudo[235917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:28 compute-1 sudo[235917]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:28 compute-1 sudo[235942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:02:28 compute-1 sudo[235942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:29 compute-1 sudo[235942]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:30 compute-1 ceph-mon[81689]: pgmap v1248: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Dec 06 07:02:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:02:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:02:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:02:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:02:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:02:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:02:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1688128300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:30.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:30.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:30 compute-1 nova_compute[226101]: 2025-12-06 07:02:30.935 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "8818a36b-f8ca-411f-9037-85036a64a941" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:30 compute-1 nova_compute[226101]: 2025-12-06 07:02:30.937 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "8818a36b-f8ca-411f-9037-85036a64a941" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:30 compute-1 nova_compute[226101]: 2025-12-06 07:02:30.937 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "8818a36b-f8ca-411f-9037-85036a64a941-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:30 compute-1 nova_compute[226101]: 2025-12-06 07:02:30.937 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "8818a36b-f8ca-411f-9037-85036a64a941-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:30 compute-1 nova_compute[226101]: 2025-12-06 07:02:30.938 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "8818a36b-f8ca-411f-9037-85036a64a941-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:30 compute-1 nova_compute[226101]: 2025-12-06 07:02:30.939 226109 INFO nova.compute.manager [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Terminating instance
Dec 06 07:02:30 compute-1 nova_compute[226101]: 2025-12-06 07:02:30.940 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:30 compute-1 nova_compute[226101]: 2025-12-06 07:02:30.940 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:30 compute-1 nova_compute[226101]: 2025-12-06 07:02:30.940 226109 DEBUG nova.network.neutron [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:02:31 compute-1 nova_compute[226101]: 2025-12-06 07:02:31.821 226109 DEBUG nova.network.neutron [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:02:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 e168: 3 total, 3 up, 3 in
Dec 06 07:02:32 compute-1 nova_compute[226101]: 2025-12-06 07:02:32.225 226109 DEBUG nova.network.neutron [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:32 compute-1 nova_compute[226101]: 2025-12-06 07:02:32.254 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-8818a36b-f8ca-411f-9037-85036a64a941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:32 compute-1 nova_compute[226101]: 2025-12-06 07:02:32.255 226109 DEBUG nova.compute.manager [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:02:32 compute-1 nova_compute[226101]: 2025-12-06 07:02:32.392 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:32 compute-1 ceph-mon[81689]: pgmap v1249: 305 pgs: 305 active+clean; 583 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 480 KiB/s wr, 214 op/s
Dec 06 07:02:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:32.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:32.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:32 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec 06 07:02:32 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000f.scope: Consumed 15.218s CPU time.
Dec 06 07:02:32 compute-1 systemd-machined[190302]: Machine qemu-7-instance-0000000f terminated.
Dec 06 07:02:33 compute-1 nova_compute[226101]: 2025-12-06 07:02:33.082 226109 INFO nova.virt.libvirt.driver [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance destroyed successfully.
Dec 06 07:02:33 compute-1 nova_compute[226101]: 2025-12-06 07:02:33.082 226109 DEBUG nova.objects.instance [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'resources' on Instance uuid 8818a36b-f8ca-411f-9037-85036a64a941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:02:33.118 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:33 compute-1 nova_compute[226101]: 2025-12-06 07:02:33.372 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:33 compute-1 ceph-mon[81689]: osdmap e168: 3 total, 3 up, 3 in
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.107 226109 INFO nova.virt.libvirt.driver [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Deleting instance files /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941_del
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.108 226109 INFO nova.virt.libvirt.driver [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Deletion of /var/lib/nova/instances/8818a36b-f8ca-411f-9037-85036a64a941_del complete
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.188 226109 INFO nova.compute.manager [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Took 1.93 seconds to destroy the instance on the hypervisor.
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.188 226109 DEBUG oslo.service.loopingcall [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.189 226109 DEBUG nova.compute.manager [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.189 226109 DEBUG nova.network.neutron [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.533 226109 DEBUG nova.network.neutron [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.558 226109 DEBUG nova.network.neutron [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:34.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.578 226109 INFO nova.compute.manager [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Took 0.39 seconds to deallocate network for instance.
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.648 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004539.6470103, 345d5d4a-3a34-4809-9ae4-60a579c5e49a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.648 226109 INFO nova.compute.manager [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] VM Stopped (Lifecycle Event)
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.668 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.669 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.744 226109 DEBUG nova.compute.manager [None req-f7fcb24a-da24-49bd-a3f5-a38b8c210d42 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:34 compute-1 nova_compute[226101]: 2025-12-06 07:02:34.775 226109 DEBUG oslo_concurrency.processutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:34.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:34 compute-1 ceph-mon[81689]: pgmap v1251: 305 pgs: 305 active+clean; 569 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 34 KiB/s wr, 224 op/s
Dec 06 07:02:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3285884426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:35 compute-1 sudo[236040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:35 compute-1 sudo[236040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:35 compute-1 sudo[236040]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:35 compute-1 sudo[236065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:02:35 compute-1 sudo[236065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:35 compute-1 sudo[236065]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:02:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/401815971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:35 compute-1 nova_compute[226101]: 2025-12-06 07:02:35.255 226109 DEBUG oslo_concurrency.processutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:35 compute-1 nova_compute[226101]: 2025-12-06 07:02:35.263 226109 DEBUG nova.compute.provider_tree [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:02:35 compute-1 nova_compute[226101]: 2025-12-06 07:02:35.431 226109 DEBUG nova.scheduler.client.report [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:02:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:02:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:02:35 compute-1 ceph-mon[81689]: pgmap v1252: 305 pgs: 305 active+clean; 551 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 703 KiB/s wr, 220 op/s
Dec 06 07:02:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/401815971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:35 compute-1 nova_compute[226101]: 2025-12-06 07:02:35.981 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:36 compute-1 nova_compute[226101]: 2025-12-06 07:02:36.163 226109 INFO nova.scheduler.client.report [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Deleted allocations for instance 8818a36b-f8ca-411f-9037-85036a64a941
Dec 06 07:02:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:02:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:36.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:02:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:02:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:36.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:02:37 compute-1 nova_compute[226101]: 2025-12-06 07:02:37.397 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:37 compute-1 nova_compute[226101]: 2025-12-06 07:02:37.959 226109 DEBUG oslo_concurrency.lockutils [None req-eb96fc9d-8ab3-4f74-acee-991f9c62354b 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "8818a36b-f8ca-411f-9037-85036a64a941" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:38 compute-1 ceph-mon[81689]: pgmap v1253: 305 pgs: 305 active+clean; 534 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.1 MiB/s wr, 253 op/s
Dec 06 07:02:38 compute-1 nova_compute[226101]: 2025-12-06 07:02:38.374 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:02:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:38.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:02:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:38.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:39 compute-1 nova_compute[226101]: 2025-12-06 07:02:39.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:40 compute-1 ceph-mon[81689]: pgmap v1254: 305 pgs: 305 active+clean; 534 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.1 MiB/s wr, 253 op/s
Dec 06 07:02:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:40.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:02:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:40.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:02:40 compute-1 ovn_controller[130279]: 2025-12-06T07:02:40Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:d3:3d 10.100.0.11
Dec 06 07:02:40 compute-1 ovn_controller[130279]: 2025-12-06T07:02:40Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:d3:3d 10.100.0.11
Dec 06 07:02:41 compute-1 nova_compute[226101]: 2025-12-06 07:02:41.638 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:41 compute-1 nova_compute[226101]: 2025-12-06 07:02:41.639 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:41 compute-1 nova_compute[226101]: 2025-12-06 07:02:41.639 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:02:42 compute-1 nova_compute[226101]: 2025-12-06 07:02:42.402 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:42 compute-1 ceph-mon[81689]: pgmap v1255: 305 pgs: 305 active+clean; 548 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.8 MiB/s wr, 127 op/s
Dec 06 07:02:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:02:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:42.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:02:42 compute-1 nova_compute[226101]: 2025-12-06 07:02:42.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:42 compute-1 nova_compute[226101]: 2025-12-06 07:02:42.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:42 compute-1 nova_compute[226101]: 2025-12-06 07:02:42.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:02:42 compute-1 nova_compute[226101]: 2025-12-06 07:02:42.621 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:02:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:02:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:42.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:02:43 compute-1 nova_compute[226101]: 2025-12-06 07:02:43.407 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:43 compute-1 nova_compute[226101]: 2025-12-06 07:02:43.621 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:43 compute-1 nova_compute[226101]: 2025-12-06 07:02:43.621 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:02:43 compute-1 nova_compute[226101]: 2025-12-06 07:02:43.622 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:02:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1762971457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1640493818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:44 compute-1 nova_compute[226101]: 2025-12-06 07:02:44.271 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-09a05ccc-abca-47d8-8e32-6e53adb95d4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:44 compute-1 nova_compute[226101]: 2025-12-06 07:02:44.272 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-09a05ccc-abca-47d8-8e32-6e53adb95d4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:44 compute-1 nova_compute[226101]: 2025-12-06 07:02:44.272 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:02:44 compute-1 nova_compute[226101]: 2025-12-06 07:02:44.272 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:44.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:44 compute-1 ceph-mon[81689]: pgmap v1256: 305 pgs: 305 active+clean; 557 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.2 MiB/s wr, 114 op/s
Dec 06 07:02:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3876989622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1988349296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:44.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/968125397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:02:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:46.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:02:46 compute-1 ceph-mon[81689]: pgmap v1257: 305 pgs: 305 active+clean; 538 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 137 op/s
Dec 06 07:02:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1248195527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:02:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:46.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.069 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Updating instance_info_cache with network_info: [{"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.102 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-09a05ccc-abca-47d8-8e32-6e53adb95d4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.102 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.103 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.103 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.103 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.103 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.139 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.140 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.140 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.140 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.141 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.403 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:02:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/801178798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.620 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2579202930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/801178798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.732 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.732 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.735 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.736 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.738 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.738 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.902 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.903 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4363MB free_disk=20.710674285888672GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.904 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:47 compute-1 nova_compute[226101]: 2025-12-06 07:02:47.904 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.079 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004553.0788252, 8818a36b-f8ca-411f-9037-85036a64a941 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.080 226109 INFO nova.compute.manager [-] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] VM Stopped (Lifecycle Event)
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.134 226109 DEBUG nova.compute.manager [None req-45811e8e-0aed-4467-8c83-ef0319898dc1 - - - - - -] [instance: 8818a36b-f8ca-411f-9037-85036a64a941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.409 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.526 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 09a05ccc-abca-47d8-8e32-6e53adb95d4d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.527 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 714f2e5b-135b-4f7e-9c62-3e1849c5e151 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.527 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 5cc7f5c4-dc50-4137-af58-60294ea57208 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.527 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.527 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:02:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:48.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.637 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:02:48 compute-1 ceph-mon[81689]: pgmap v1258: 305 pgs: 305 active+clean; 532 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.1 MiB/s wr, 193 op/s
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.767 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.768 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.810 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:02:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:48.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:48 compute-1 nova_compute[226101]: 2025-12-06 07:02:48.889 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:02:49 compute-1 nova_compute[226101]: 2025-12-06 07:02:49.054 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:02:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2702856638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:49 compute-1 nova_compute[226101]: 2025-12-06 07:02:49.513 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:49 compute-1 nova_compute[226101]: 2025-12-06 07:02:49.520 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:02:49 compute-1 nova_compute[226101]: 2025-12-06 07:02:49.557 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:02:49 compute-1 nova_compute[226101]: 2025-12-06 07:02:49.608 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:02:49 compute-1 nova_compute[226101]: 2025-12-06 07:02:49.609 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:49 compute-1 nova_compute[226101]: 2025-12-06 07:02:49.609 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:49 compute-1 nova_compute[226101]: 2025-12-06 07:02:49.610 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:02:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2702856638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:50 compute-1 podman[236137]: 2025-12-06 07:02:50.155731201 +0000 UTC m=+0.140520852 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:02:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:50.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:50 compute-1 ceph-mon[81689]: pgmap v1259: 305 pgs: 305 active+clean; 532 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 3.9 MiB/s wr, 146 op/s
Dec 06 07:02:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/46249981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.112 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.144 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4028376923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3290223524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.925 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.951 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Triggering sync for uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.952 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Triggering sync for uuid 5cc7f5c4-dc50-4137-af58-60294ea57208 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.952 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Triggering sync for uuid 714f2e5b-135b-4f7e-9c62-3e1849c5e151 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.952 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.953 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.954 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "5cc7f5c4-dc50-4137-af58-60294ea57208" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.954 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.954 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:51 compute-1 nova_compute[226101]: 2025-12-06 07:02:51.955 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:52 compute-1 nova_compute[226101]: 2025-12-06 07:02:52.002 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:52 compute-1 nova_compute[226101]: 2025-12-06 07:02:52.005 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:52 compute-1 nova_compute[226101]: 2025-12-06 07:02:52.006 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:52 compute-1 nova_compute[226101]: 2025-12-06 07:02:52.407 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:52.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:52.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:53 compute-1 ceph-mon[81689]: pgmap v1260: 305 pgs: 305 active+clean; 457 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 217 op/s
Dec 06 07:02:53 compute-1 nova_compute[226101]: 2025-12-06 07:02:53.442 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:02:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:54.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:02:54 compute-1 ceph-mon[81689]: pgmap v1261: 305 pgs: 305 active+clean; 451 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 202 op/s
Dec 06 07:02:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:56.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:56 compute-1 ceph-mon[81689]: pgmap v1262: 305 pgs: 305 active+clean; 451 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 192 op/s
Dec 06 07:02:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:56.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:57 compute-1 nova_compute[226101]: 2025-12-06 07:02:57.409 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2544000975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:58 compute-1 nova_compute[226101]: 2025-12-06 07:02:58.444 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:58.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:02:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:58.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:58 compute-1 ceph-mon[81689]: pgmap v1263: 305 pgs: 305 active+clean; 451 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 158 op/s
Dec 06 07:02:59 compute-1 podman[236163]: 2025-12-06 07:02:59.076049534 +0000 UTC m=+0.052093140 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:02:59 compute-1 podman[236164]: 2025-12-06 07:02:59.08475505 +0000 UTC m=+0.054689462 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
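The two podman lines above are periodic container health checks: each container's configured test (the /openstack/healthcheck script mounted into the container per the config_data) ran and reported health_status=healthy with a failing streak of 0. The same check can be driven by hand; a sketch using the container name from the log (the inspect template path can vary across podman versions):

    import subprocess

    # Run the configured healthcheck once, then read back the recorded state.
    subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'], check=True)
    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}',
         'multipathd'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # e.g. "healthy"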
Dec 06 07:03:00 compute-1 nova_compute[226101]: 2025-12-06 07:03:00.354 226109 INFO nova.compute.manager [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Rebuilding instance
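At 07:03:00 nova begins rebuilding instance 09a05ccc-abca-47d8-8e32-6e53adb95d4d: the lines that follow shut the guest down, unplug its VIF, delete the instance files, and re-create the root disk from the image. From the client side, a rebuild like this is requested roughly as in this sketch (the trigger itself is not in the log, so it is an assumption; credentials are assumed in the environment, and the image UUID appears further down as image_ref):

    import subprocess

    # Hypothetical client-side trigger for the rebuild seen in this log.
    subprocess.run(
        ['openstack', 'server', 'rebuild',
         '--image', '412dd61d-1b1e-439f-b7f9-7e7c4e42924c',
         '09a05ccc-abca-47d8-8e32-6e53adb95d4d'],
        check=True)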
Dec 06 07:03:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:00.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:00 compute-1 ceph-mon[81689]: pgmap v1264: 305 pgs: 305 active+clean; 451 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 24 KiB/s wr, 88 op/s
Dec 06 07:03:00 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 06 07:03:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:00.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:01 compute-1 nova_compute[226101]: 2025-12-06 07:03:01.309 226109 DEBUG nova.objects.instance [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:01 compute-1 nova_compute[226101]: 2025-12-06 07:03:01.358 226109 DEBUG nova.compute.manager [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:01 compute-1 nova_compute[226101]: 2025-12-06 07:03:01.420 226109 DEBUG nova.objects.instance [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'pci_requests' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:01 compute-1 nova_compute[226101]: 2025-12-06 07:03:01.437 226109 DEBUG nova.objects.instance [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'pci_devices' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:01 compute-1 nova_compute[226101]: 2025-12-06 07:03:01.452 226109 DEBUG nova.objects.instance [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'resources' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:01 compute-1 nova_compute[226101]: 2025-12-06 07:03:01.513 226109 DEBUG nova.objects.instance [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'migration_context' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:01 compute-1 nova_compute[226101]: 2025-12-06 07:03:01.529 226109 DEBUG nova.objects.instance [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:03:01 compute-1 nova_compute[226101]: 2025-12-06 07:03:01.532 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:03:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:01.619 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:01.619 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:01.620 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:02.125 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:02.126 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:03:02 compute-1 nova_compute[226101]: 2025-12-06 07:03:02.126 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:02 compute-1 nova_compute[226101]: 2025-12-06 07:03:02.438 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:02.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:02 compute-1 ceph-mon[81689]: pgmap v1265: 305 pgs: 305 active+clean; 472 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 132 op/s
Dec 06 07:03:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:02.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:03 compute-1 nova_compute[226101]: 2025-12-06 07:03:03.445 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:03 compute-1 kernel: tap0e3b84b1-ee (unregistering): left promiscuous mode
Dec 06 07:03:03 compute-1 NetworkManager[49031]: <info>  [1765004583.7880] device (tap0e3b84b1-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:03:03 compute-1 ovn_controller[130279]: 2025-12-06T07:03:03Z|00075|binding|INFO|Releasing lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 from this chassis (sb_readonly=0)
Dec 06 07:03:03 compute-1 nova_compute[226101]: 2025-12-06 07:03:03.797 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:03 compute-1 ovn_controller[130279]: 2025-12-06T07:03:03Z|00076|binding|INFO|Setting lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 down in Southbound
Dec 06 07:03:03 compute-1 ovn_controller[130279]: 2025-12-06T07:03:03Z|00077|binding|INFO|Removing iface tap0e3b84b1-ee ovn-installed in OVS
Dec 06 07:03:03 compute-1 nova_compute[226101]: 2025-12-06 07:03:03.800 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:03 compute-1 nova_compute[226101]: 2025-12-06 07:03:03.814 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:03 compute-1 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000013.scope: Deactivated successfully.
Dec 06 07:03:03 compute-1 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000013.scope: Consumed 16.551s CPU time.
Dec 06 07:03:03 compute-1 systemd-machined[190302]: Machine qemu-8-instance-00000013 terminated.
Dec 06 07:03:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:03.980 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:2f:12 10.100.0.5'], port_security=['fa:16:3e:7b:2f:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '09a05ccc-abca-47d8-8e32-6e53adb95d4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=0e3b84b1-ee05-4dbf-a372-b2a404592bf9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:03.981 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 in datapath 9451d867-0aba-464d-b4d9-f947b887e903 unbound from our chassis
Dec 06 07:03:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:03.983 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:03:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:04.001 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[955b38e2-73e8-4ba2-b3bd-a9e908f1fc15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:04.029 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[eac1f82e-1f1c-4a46-ac7d-084d7aefb728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:04.032 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[15d4cbf1-8426-49e8-985c-52b7d50d05ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:04.058 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[21b729a2-9453-44b8-9f9b-086601be7149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:04.074 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8935fae1-24b8-45d5-a343-06557e714075]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479634, 'reachable_time': 43555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236223, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:04.088 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8f79ee-00d3-4b8f-9185-02ee8743cd12]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479647, 'tstamp': 479647}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236224, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479651, 'tstamp': 479651}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236224, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
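Having unbound the instance port, the agent re-provisions metadata for network 9451d867-0aba-464d-b4d9-f947b887e903; the privsep replies above are netlink dumps taken inside the namespace named in their 'target' field. A sketch for inspecting that namespace on the host (namespace name and expected addresses taken from the log):

    import subprocess

    ns = 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903'
    # Expect tap9451d867-01 carrying 10.100.0.2/28 plus the 169.254.169.254/32
    # metadata address, per the RTM_NEWADDR dump above.
    subprocess.run(['ip', 'netns', 'exec', ns, 'ip', 'addr'], check=True)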
Dec 06 07:03:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:04.090 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.092 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.095 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:04.096 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9451d867-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:04.096 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:04.097 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9451d867-00, col_values=(('external_ids', {'iface-id': 'fed07814-3a76-4798-8d3b-90759d15a8cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:04.097 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
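The three ovsdbapp transactions at 07:03:04 reconcile the metadata tap: delete tap9451d867-00 from br-ex if present, add it to br-int (already there, hence "Transaction caused no change"), and pin its external_ids:iface-id. As a sketch, the ovs-vsctl equivalents, with port, bridge, and iface-id taken from the logged commands:

    import subprocess

    def vsctl(*args):
        subprocess.run(['ovs-vsctl', *args], check=True)

    vsctl('--if-exists', 'del-port', 'br-ex', 'tap9451d867-00')
    vsctl('--may-exist', 'add-port', 'br-int', 'tap9451d867-00')
    vsctl('set', 'Interface', 'tap9451d867-00',
          'external_ids:iface-id=fed07814-3a76-4798-8d3b-90759d15a8cf')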
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.402 226109 DEBUG nova.compute.manager [req-adfc3d59-6e3f-4147-8c87-144caa16a297 req-8cf8ae97-5edf-4da6-8e95-19798797cb8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-unplugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.403 226109 DEBUG oslo_concurrency.lockutils [req-adfc3d59-6e3f-4147-8c87-144caa16a297 req-8cf8ae97-5edf-4da6-8e95-19798797cb8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.403 226109 DEBUG oslo_concurrency.lockutils [req-adfc3d59-6e3f-4147-8c87-144caa16a297 req-8cf8ae97-5edf-4da6-8e95-19798797cb8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.404 226109 DEBUG oslo_concurrency.lockutils [req-adfc3d59-6e3f-4147-8c87-144caa16a297 req-8cf8ae97-5edf-4da6-8e95-19798797cb8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.404 226109 DEBUG nova.compute.manager [req-adfc3d59-6e3f-4147-8c87-144caa16a297 req-8cf8ae97-5edf-4da6-8e95-19798797cb8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] No waiting events found dispatching network-vif-unplugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.404 226109 WARNING nova.compute.manager [req-adfc3d59-6e3f-4147-8c87-144caa16a297 req-8cf8ae97-5edf-4da6-8e95-19798797cb8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received unexpected event network-vif-unplugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for instance with vm_state error and task_state rebuilding.
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.553 226109 INFO nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance shutdown successfully after 3 seconds.
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.559 226109 INFO nova.virt.libvirt.driver [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance destroyed successfully.
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.563 226109 INFO nova.virt.libvirt.driver [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance destroyed successfully.
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.564 226109 DEBUG nova.virt.libvirt.vif [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1679126586',display_name='tempest-ServersAdminTestJSON-server-1679126586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1679126586',id=19,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:02:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ujgtrz05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:02:59Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=09a05ccc-abca-47d8-8e32-6e53adb95d4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.564 226109 DEBUG nova.network.os_vif_util [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.565 226109 DEBUG nova.network.os_vif_util [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.566 226109 DEBUG os_vif [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.567 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.568 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e3b84b1-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:04.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.625 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.627 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:04 compute-1 nova_compute[226101]: 2025-12-06 07:03:04.632 226109 INFO os_vif [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee')
Dec 06 07:03:04 compute-1 ceph-mon[81689]: pgmap v1266: 305 pgs: 305 active+clean; 480 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 809 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 07:03:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:04.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:05 compute-1 nova_compute[226101]: 2025-12-06 07:03:05.258 226109 INFO nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Deleting instance files /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d_del
Dec 06 07:03:05 compute-1 nova_compute[226101]: 2025-12-06 07:03:05.258 226109 INFO nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Deletion of /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d_del complete
Dec 06 07:03:05 compute-1 nova_compute[226101]: 2025-12-06 07:03:05.481 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:03:05 compute-1 nova_compute[226101]: 2025-12-06 07:03:05.481 226109 INFO nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Creating image(s)
Dec 06 07:03:05 compute-1 nova_compute[226101]: 2025-12-06 07:03:05.509 226109 DEBUG nova.storage.rbd_utils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:05 compute-1 nova_compute[226101]: 2025-12-06 07:03:05.537 226109 DEBUG nova.storage.rbd_utils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:05 compute-1 nova_compute[226101]: 2025-12-06 07:03:05.562 226109 DEBUG nova.storage.rbd_utils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:05 compute-1 nova_compute[226101]: 2025-12-06 07:03:05.565 226109 DEBUG oslo_concurrency.lockutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:05 compute-1 nova_compute[226101]: 2025-12-06 07:03:05.565 226109 DEBUG oslo_concurrency.lockutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:06 compute-1 ceph-mon[81689]: pgmap v1267: 305 pgs: 305 active+clean; 448 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 06 07:03:06 compute-1 nova_compute[226101]: 2025-12-06 07:03:06.133 226109 DEBUG nova.virt.libvirt.imagebackend [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/412dd61d-1b1e-439f-b7f9-7e7c4e42924c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/412dd61d-1b1e-439f-b7f9-7e7c4e42924c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:03:06 compute-1 nova_compute[226101]: 2025-12-06 07:03:06.569 226109 DEBUG nova.compute.manager [req-12b2ef1d-3fdb-4398-889d-559d84aadd77 req-52122747-e27e-4fb3-a821-f2511b54694b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:06 compute-1 nova_compute[226101]: 2025-12-06 07:03:06.570 226109 DEBUG oslo_concurrency.lockutils [req-12b2ef1d-3fdb-4398-889d-559d84aadd77 req-52122747-e27e-4fb3-a821-f2511b54694b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:06 compute-1 nova_compute[226101]: 2025-12-06 07:03:06.570 226109 DEBUG oslo_concurrency.lockutils [req-12b2ef1d-3fdb-4398-889d-559d84aadd77 req-52122747-e27e-4fb3-a821-f2511b54694b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:06 compute-1 nova_compute[226101]: 2025-12-06 07:03:06.570 226109 DEBUG oslo_concurrency.lockutils [req-12b2ef1d-3fdb-4398-889d-559d84aadd77 req-52122747-e27e-4fb3-a821-f2511b54694b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:06 compute-1 nova_compute[226101]: 2025-12-06 07:03:06.571 226109 DEBUG nova.compute.manager [req-12b2ef1d-3fdb-4398-889d-559d84aadd77 req-52122747-e27e-4fb3-a821-f2511b54694b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] No waiting events found dispatching network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:06 compute-1 nova_compute[226101]: 2025-12-06 07:03:06.571 226109 WARNING nova.compute.manager [req-12b2ef1d-3fdb-4398-889d-559d84aadd77 req-52122747-e27e-4fb3-a821-f2511b54694b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received unexpected event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for instance with vm_state error and task_state rebuild_spawning.
Dec 06 07:03:06 compute-1 nova_compute[226101]: 2025-12-06 07:03:06.603 226109 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Creating tmpfile /var/lib/nova/instances/tmp9cr5jvub to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Dec 06 07:03:06 compute-1 nova_compute[226101]: 2025-12-06 07:03:06.605 226109 DEBUG nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp9cr5jvub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
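Interleaved with the rebuild, request req-c2fcd6f4 runs the destination-side checks for a live migration of instance 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce; the tmpfile created above is how nova tests whether source and destination share the instance path (here is_shared_instance_path comes back False, while the RBD backend gives shared block storage). The log shows only the destination checks, so the client-side trigger below is an assumption, sketched with the openstack CLI:

    import subprocess

    # Hypothetical trigger for the check_can_live_migrate_destination /
    # pre_live_migration flow seen in this log.
    subprocess.run(
        ['openstack', 'server', 'migrate', '--live-migration',
         '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce'],
        check=True)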
Dec 06 07:03:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:06.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:06.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:07 compute-1 nova_compute[226101]: 2025-12-06 07:03:07.981 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.057 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.060 226109 DEBUG nova.virt.images [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] 412dd61d-1b1e-439f-b7f9-7e7c4e42924c was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.061 226109 DEBUG nova.privsep.utils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.062 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
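For the rebuild, nova fetched the glance image into the _base cache, found it to be qcow2, and converts it to raw before pushing it into RBD; the info call runs under an address-space/CPU prlimit wrapper (1 GiB, 30 s). The same two steps by hand, as a sketch with paths and flags copied from the logged commands:

    import subprocess

    base = '/var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737'

    # Inspect the downloaded image.
    subprocess.run(
        ['qemu-img', 'info', base + '.part', '--force-share',
         '--output=json'],
        check=True)

    # Convert qcow2 -> raw, bypassing the host page cache (-t none).
    subprocess.run(
        ['qemu-img', 'convert', '-t', 'none', '-O', 'raw', '-f', 'qcow2',
         base + '.part', base + '.converted'],
        check=True)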
Dec 06 07:03:08 compute-1 ceph-mon[81689]: pgmap v1268: 305 pgs: 305 active+clean; 405 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.311 226109 DEBUG nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp9cr5jvub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.344 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.344 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquired lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.345 226109 DEBUG nova.network.neutron [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.386 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.391 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.448 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.461 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.462 226109 DEBUG oslo_concurrency.lockutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.490 226109 DEBUG nova.storage.rbd_utils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:08 compute-1 nova_compute[226101]: 2025-12-06 07:03:08.494 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:08.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:08.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:09.129 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4229558082' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:03:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4229558082' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.250 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.756s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.325 226109 DEBUG nova.storage.rbd_utils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] resizing rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
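The rbd_utils calls above are the Rbd image backend materializing the root disk: confirm the target image does not yet exist, import the flattened base file into the `vms` pool, then grow it to the flavor's 1 GiB root disk. Nova performs the resize through the librbd Python binding; a CLI-equivalent sketch of both steps, with pool, image name, and credentials as logged:

import subprocess

POOL = "vms"
IMAGE = "09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk"
BASE = "/var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737"
CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

# Import the raw base file as a format-2 RBD image (same command as logged).
subprocess.run(
    ["rbd", "import", "--pool", POOL, BASE, IMAGE, "--image-format=2", *CEPH],
    check=True,
)
# Grow it to the flavor root disk; --size defaults to MiB, so 1024 == 1 GiB
# (the 1073741824 bytes logged above).
subprocess.run(
    ["rbd", "resize", "--pool", POOL, "--image", IMAGE, "--size", "1024", *CEPH],
    check=True,
)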
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.419 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.420 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Ensure instance console log exists: /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.420 226109 DEBUG oslo_concurrency.lockutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.421 226109 DEBUG oslo_concurrency.lockutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.421 226109 DEBUG oslo_concurrency.lockutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.423 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Start _get_guest_xml network_info=[{"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.428 226109 WARNING nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.436 226109 DEBUG nova.virt.libvirt.host [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.437 226109 DEBUG nova.virt.libvirt.host [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.445 226109 DEBUG nova.virt.libvirt.host [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.445 226109 DEBUG nova.virt.libvirt.host [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
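The two probes above decide whether Nova may apply CPU shares/quota to the guest: it looks for a `cpu` controller first in the cgroups v1 hierarchy, then in the unified v2 hierarchy; on this host only v2 provides one. A sketch of the v2 check, assuming the standard unified mount point:

from pathlib import Path

def has_cgroupsv2_cpu_controller() -> bool:
    # On a unified (v2) hierarchy, the root cgroup lists its available
    # controllers, space-separated, in this file.
    f = Path("/sys/fs/cgroup/cgroup.controllers")
    return f.is_file() and "cpu" in f.read_text().split()

print(has_cgroupsv2_cpu_controller())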
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.446 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.446 226109 DEBUG nova.virt.hardware [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.447 226109 DEBUG nova.virt.hardware [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.447 226109 DEBUG nova.virt.hardware [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.447 226109 DEBUG nova.virt.hardware [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.447 226109 DEBUG nova.virt.hardware [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.448 226109 DEBUG nova.virt.hardware [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.448 226109 DEBUG nova.virt.hardware [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.448 226109 DEBUG nova.virt.hardware [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.448 226109 DEBUG nova.virt.hardware [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.448 226109 DEBUG nova.virt.hardware [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.449 226109 DEBUG nova.virt.hardware [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
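With no topology hints from the flavor or image (all limits and preferences above are 0:0:0), Nova enumerates every sockets x cores x threads factorization of the vCPU count that fits the 65536-per-dimension caps and sorts the candidates by preference; for one vCPU the only candidate is 1:1:1, matching the log. A simplified sketch of that enumeration (not Nova's exact code):

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    # Every (sockets, cores, threads) triple whose product is exactly vcpus;
    # each dimension is additionally capped by its limit.
    return [
        (s, c, t)
        for s in range(1, min(vcpus, max_sockets) + 1)
        for c in range(1, min(vcpus, max_cores) + 1)
        for t in range(1, min(vcpus, max_threads) + 1)
        if s * c * t == vcpus
    ]

print(possible_topologies(1))   # [(1, 1, 1)], as chosen above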
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.449 226109 DEBUG nova.objects.instance [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.467 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.625 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:03:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3468614630' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.886 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.910 226109 DEBUG nova.storage.rbd_utils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:09 compute-1 nova_compute[226101]: 2025-12-06 07:03:09.914 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:10 compute-1 ceph-mon[81689]: pgmap v1269: 305 pgs: 305 active+clean; 405 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Dec 06 07:03:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3468614630' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:03:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2512659534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.327 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
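The `ceph mon dump` round-trips above are how Nova discovers the monitor endpoints that appear as <host> elements in the disk sources of the guest XML below. A sketch of extracting them from the JSON output; the `mons[].addr` field shown is the legacy v1 address, and the exact layout can vary across Ceph releases:

import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
monmap = json.loads(out)
# "addr" looks like "192.168.122.100:6789/0"; keep the host:port part.
hosts = [m["addr"].rsplit("/", 1)[0] for m in monmap["mons"]]
print(hosts)   # e.g. ['192.168.122.100:6789', ...]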
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.330 226109 DEBUG nova.virt.libvirt.vif [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1679126586',display_name='tempest-ServersAdminTestJSON-server-1679126586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1679126586',id=19,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:02:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ujgtrz05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:03:05Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=09a05ccc-abca-47d8-8e32-6e53adb95d4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.330 226109 DEBUG nova.network.os_vif_util [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.331 226109 DEBUG nova.network.os_vif_util [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.335 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:03:10 compute-1 nova_compute[226101]:   <uuid>09a05ccc-abca-47d8-8e32-6e53adb95d4d</uuid>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   <name>instance-00000013</name>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <nova:name>tempest-ServersAdminTestJSON-server-1679126586</nova:name>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:03:09</nova:creationTime>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <nova:user uuid="a3cae056210a400fa5e3495fe827d29a">tempest-ServersAdminTestJSON-1902776367-project-member</nova:user>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <nova:project uuid="b6179a8b65c2484eb7ca1e068d93a58c">tempest-ServersAdminTestJSON-1902776367</nova:project>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <nova:port uuid="0e3b84b1-ee05-4dbf-a372-b2a404592bf9">
Dec 06 07:03:10 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <system>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <entry name="serial">09a05ccc-abca-47d8-8e32-6e53adb95d4d</entry>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <entry name="uuid">09a05ccc-abca-47d8-8e32-6e53adb95d4d</entry>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     </system>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   <os>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   </os>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   <features>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   </features>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk">
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       </source>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config">
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       </source>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:03:10 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:7b:2f:12"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <target dev="tap0e3b84b1-ee"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/console.log" append="off"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <video>
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     </video>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:03:10 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:03:10 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:03:10 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:03:10 compute-1 nova_compute[226101]: </domain>
Dec 06 07:03:10 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
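The XML document above is what the driver hands to libvirt next. A minimal sketch of defining and booting it with the libvirt Python binding, assuming the XML has been saved to a local file; Nova's driver wraps this step in event waiting and rollback handling:

import libvirt

conn = libvirt.open("qemu:///system")
try:
    with open("domain.xml") as f:       # the guest XML shown above
        dom = conn.defineXML(f.read())  # persist the domain definition
    dom.create()                        # boot it (equivalent of `virsh start`)
finally:
    conn.close()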
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.336 226109 DEBUG nova.virt.libvirt.vif [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1679126586',display_name='tempest-ServersAdminTestJSON-server-1679126586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1679126586',id=19,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:02:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ujgtrz05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:03:05Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=09a05ccc-abca-47d8-8e32-6e53adb95d4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.336 226109 DEBUG nova.network.os_vif_util [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.337 226109 DEBUG nova.network.os_vif_util [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.337 226109 DEBUG os_vif [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.338 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.339 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.339 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.342 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.342 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e3b84b1-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.343 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e3b84b1-ee, col_values=(('external_ids', {'iface-id': '0e3b84b1-ee05-4dbf-a372-b2a404592bf9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:2f:12', 'vm-uuid': '09a05ccc-abca-47d8-8e32-6e53adb95d4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:10 compute-1 NetworkManager[49031]: <info>  [1765004590.3460] manager: (tap0e3b84b1-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.347 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.350 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.351 226109 INFO os_vif [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee')
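The plug above is three idempotent OVSDB operations: ensure br-int exists, add the tap device as a port, and stamp the Interface row with the Neutron external_ids that let ovn-controller bind the port. The same transaction expressed through ovs-vsctl from Python, with values as logged; os-vif itself goes through ovsdbapp, not the CLI:

import subprocess

PORT = "tap0e3b84b1-ee"
IFACE_ID = "0e3b84b1-ee05-4dbf-a372-b2a404592bf9"
MAC = "fa:16:3e:7b:2f:12"
VM_UUID = "09a05ccc-abca-47d8-8e32-6e53adb95d4d"

# --may-exist keeps both commands idempotent, like the logged transaction.
subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int"], check=True)
subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT,
     "--", "set", "Interface", PORT,
     f"external_ids:iface-id={IFACE_ID}",
     "external_ids:iface-status=active",
     f"external_ids:attached-mac={MAC}",
     f"external_ids:vm-uuid={VM_UUID}"],
    check=True,
)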
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.424 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.425 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.425 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No VIF found with MAC fa:16:3e:7b:2f:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.426 226109 INFO nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Using config drive
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.453 226109 DEBUG nova.storage.rbd_utils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.475 226109 DEBUG nova.objects.instance [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.522 226109 DEBUG nova.objects.instance [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'keypairs' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:03:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:10.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.640 226109 DEBUG nova.network.neutron [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating instance_info_cache with network_info: [{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.667 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Releasing lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.669 226109 DEBUG os_brick.utils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:03:10 compute-1 nova_compute[226101]: 2025-12-06 07:03:10.670 226109 INFO oslo.privsep.daemon [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp8n6tu7co/privsep.sock']
Dec 06 07:03:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:10.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.173 226109 INFO nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Creating config drive at /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.178 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplt0i7ovs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2512659534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.302 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplt0i7ovs" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.333 226109 DEBUG nova.storage.rbd_utils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.336 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.355 226109 INFO oslo.privsep.daemon [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Spawned new privsep daemon via rootwrap
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.222 236517 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.226 236517 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.228 236517 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.229 236517 INFO oslo.privsep.daemon [-] privsep daemon running as pid 236517
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.359 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[97a7ddd6-d545-4931-9a44-36f58ad0091e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.450 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.463 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.463 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[b8571432-2fec-4237-9634-e23191662fb6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.465 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.472 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.473 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[c6277c8b-89c0-4e95-9315-a94b0491702b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.475 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.485 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.486 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[2a80aef1-588d-4a38-a525-5bd928ded313]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.488 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[43bf976a-b7f1-433d-b358-1c9d9b1b96de]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.489 226109 DEBUG oslo_concurrency.processutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.508 226109 DEBUG oslo_concurrency.processutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.509 226109 INFO nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Deleting local config drive /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config because it was imported into RBD.
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.510 226109 DEBUG oslo_concurrency.processutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.512 226109 DEBUG os_brick.initiator.connectors.lightos [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.513 226109 DEBUG os_brick.initiator.connectors.lightos [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.513 226109 DEBUG os_brick.initiator.connectors.lightos [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.513 226109 DEBUG os_brick.utils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] <== get_connector_properties: return (844ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:03:11 compute-1 kernel: tap0e3b84b1-ee: entered promiscuous mode
Dec 06 07:03:11 compute-1 NetworkManager[49031]: <info>  [1765004591.5550] manager: (tap0e3b84b1-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Dec 06 07:03:11 compute-1 ovn_controller[130279]: 2025-12-06T07:03:11Z|00078|binding|INFO|Claiming lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for this chassis.
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.556 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:11 compute-1 ovn_controller[130279]: 2025-12-06T07:03:11Z|00079|binding|INFO|0e3b84b1-ee05-4dbf-a372-b2a404592bf9: Claiming fa:16:3e:7b:2f:12 10.100.0.5
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.568 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:2f:12 10.100.0.5'], port_security=['fa:16:3e:7b:2f:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '09a05ccc-abca-47d8-8e32-6e53adb95d4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=0e3b84b1-ee05-4dbf-a372-b2a404592bf9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.570 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 in datapath 9451d867-0aba-464d-b4d9-f947b887e903 bound to our chassis
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.571 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:03:11 compute-1 ovn_controller[130279]: 2025-12-06T07:03:11Z|00080|binding|INFO|Setting lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 ovn-installed in OVS
Dec 06 07:03:11 compute-1 ovn_controller[130279]: 2025-12-06T07:03:11Z|00081|binding|INFO|Setting lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 up in Southbound
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.575 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.579 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:11 compute-1 systemd-udevd[236576]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.585 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c866c4e5-0e3d-41bc-8a46-852b7cae1f3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:11 compute-1 systemd-machined[190302]: New machine qemu-12-instance-00000013.
Dec 06 07:03:11 compute-1 NetworkManager[49031]: <info>  [1765004591.5926] device (tap0e3b84b1-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:03:11 compute-1 NetworkManager[49031]: <info>  [1765004591.5933] device (tap0e3b84b1-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:03:11 compute-1 systemd[1]: Started Virtual Machine qemu-12-instance-00000013.
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.612 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[811d77a6-ecec-4708-9d21-90ecf20d5706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.614 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[41dc0183-d10d-4313-b36b-373d38873b3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.641 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[a02ec684-ee8e-458f-a5fd-6b98ef44d895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.660 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[39486935-e6b3-4024-8ff9-20f6123adf07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479634, 'reachable_time': 43555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236586, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.675 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e5966669-9fef-4d2f-8805-f9fe68f6ca43]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479647, 'tstamp': 479647}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236590, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479651, 'tstamp': 479651}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236590, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.677 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.679 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:11 compute-1 nova_compute[226101]: 2025-12-06 07:03:11.679 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.680 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9451d867-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.680 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.680 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9451d867-00, col_values=(('external_ids', {'iface-id': 'fed07814-3a76-4798-8d3b-90759d15a8cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:11.681 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.127 226109 DEBUG nova.compute.manager [req-d153b169-fc98-497c-9358-374175d1146d req-ea254036-405d-463c-b24c-05778777c27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.127 226109 DEBUG oslo_concurrency.lockutils [req-d153b169-fc98-497c-9358-374175d1146d req-ea254036-405d-463c-b24c-05778777c27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.128 226109 DEBUG oslo_concurrency.lockutils [req-d153b169-fc98-497c-9358-374175d1146d req-ea254036-405d-463c-b24c-05778777c27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.128 226109 DEBUG oslo_concurrency.lockutils [req-d153b169-fc98-497c-9358-374175d1146d req-ea254036-405d-463c-b24c-05778777c27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.128 226109 DEBUG nova.compute.manager [req-d153b169-fc98-497c-9358-374175d1146d req-ea254036-405d-463c-b24c-05778777c27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] No waiting events found dispatching network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.128 226109 WARNING nova.compute.manager [req-d153b169-fc98-497c-9358-374175d1146d req-ea254036-405d-463c-b24c-05778777c27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received unexpected event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for instance with vm_state error and task_state rebuild_spawning.
Dec 06 07:03:12 compute-1 ceph-mon[81689]: pgmap v1270: 305 pgs: 305 active+clean; 444 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 202 op/s
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.348 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 09a05ccc-abca-47d8-8e32-6e53adb95d4d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.350 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004592.3477142, 09a05ccc-abca-47d8-8e32-6e53adb95d4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.350 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] VM Resumed (Lifecycle Event)
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.352 226109 DEBUG nova.compute.manager [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.352 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.355 226109 INFO nova.virt.libvirt.driver [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance spawned successfully.
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.356 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.446 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.449 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.476 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.477 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004592.3481965, 09a05ccc-abca-47d8-8e32-6e53adb95d4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.477 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] VM Started (Lifecycle Event)
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.479 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.479 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.480 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.480 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.480 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.481 226109 DEBUG nova.virt.libvirt.driver [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.493 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.500 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.533 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.586 226109 DEBUG nova.compute.manager [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:12.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.647 226109 DEBUG oslo_concurrency.lockutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.647 226109 DEBUG oslo_concurrency.lockutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.648 226109 DEBUG nova.objects.instance [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:03:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:12 compute-1 nova_compute[226101]: 2025-12-06 07:03:12.705 226109 DEBUG oslo_concurrency.lockutils [None req-97a87fb3-39ae-4efc-a1f5-164ff934a6ae a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/566616693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.450 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.623 226109 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp9cr5jvub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={82a6fabc-b798-4571-a13e-f9c67a3f9413='1f7e2e99-fed4-47ad-9a87-fb8a1b33200d'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.624 226109 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Creating instance directory: /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.624 226109 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Ensure instance console log exists: /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.624 226109 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.624 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.625 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.626 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.630 226109 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.631 226109 DEBUG nova.virt.libvirt.vif [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:02:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-509443515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-509443515',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-m8zqnud8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:03:00Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=4dd1863a-3cb1-4e42-8611-ed9587a0b6ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.632 226109 DEBUG nova.network.os_vif_util [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converting VIF {"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.632 226109 DEBUG nova.network.os_vif_util [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.633 226109 DEBUG os_vif [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.633 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.634 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.634 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.636 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.637 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap510079bd-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.637 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap510079bd-5c, col_values=(('external_ids', {'iface-id': '510079bd-5c00-4b8a-b2eb-d63f0ffc3d69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:77:d0', 'vm-uuid': '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:13 compute-1 NetworkManager[49031]: <info>  [1765004593.6395] manager: (tap510079bd-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.638 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.642 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.644 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.645 226109 INFO os_vif [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c')
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.648 226109 DEBUG nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Dec 06 07:03:13 compute-1 nova_compute[226101]: 2025-12-06 07:03:13.648 226109 DEBUG nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp9cr5jvub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={82a6fabc-b798-4571-a13e-f9c67a3f9413='1f7e2e99-fed4-47ad-9a87-fb8a1b33200d'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Dec 06 07:03:14 compute-1 nova_compute[226101]: 2025-12-06 07:03:14.362 226109 DEBUG nova.compute.manager [req-0958acce-cb7e-4868-915e-a952e9ef6bf9 req-ae308861-2ed0-4360-8d7e-c7133d3915fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:14 compute-1 nova_compute[226101]: 2025-12-06 07:03:14.363 226109 DEBUG oslo_concurrency.lockutils [req-0958acce-cb7e-4868-915e-a952e9ef6bf9 req-ae308861-2ed0-4360-8d7e-c7133d3915fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:14 compute-1 nova_compute[226101]: 2025-12-06 07:03:14.363 226109 DEBUG oslo_concurrency.lockutils [req-0958acce-cb7e-4868-915e-a952e9ef6bf9 req-ae308861-2ed0-4360-8d7e-c7133d3915fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:14 compute-1 nova_compute[226101]: 2025-12-06 07:03:14.363 226109 DEBUG oslo_concurrency.lockutils [req-0958acce-cb7e-4868-915e-a952e9ef6bf9 req-ae308861-2ed0-4360-8d7e-c7133d3915fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:14 compute-1 nova_compute[226101]: 2025-12-06 07:03:14.364 226109 DEBUG nova.compute.manager [req-0958acce-cb7e-4868-915e-a952e9ef6bf9 req-ae308861-2ed0-4360-8d7e-c7133d3915fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] No waiting events found dispatching network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:14 compute-1 nova_compute[226101]: 2025-12-06 07:03:14.364 226109 WARNING nova.compute.manager [req-0958acce-cb7e-4868-915e-a952e9ef6bf9 req-ae308861-2ed0-4360-8d7e-c7133d3915fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received unexpected event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for instance with vm_state active and task_state None.
Dec 06 07:03:14 compute-1 ceph-mon[81689]: pgmap v1271: 305 pgs: 305 active+clean; 451 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 160 op/s
Dec 06 07:03:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:14.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:14.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:15 compute-1 nova_compute[226101]: 2025-12-06 07:03:15.843 226109 DEBUG nova.network.neutron [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Dec 06 07:03:16 compute-1 nova_compute[226101]: 2025-12-06 07:03:16.137 226109 DEBUG nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp9cr5jvub',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={82a6fabc-b798-4571-a13e-f9c67a3f9413='1f7e2e99-fed4-47ad-9a87-fb8a1b33200d'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Dec 06 07:03:16 compute-1 kernel: tap510079bd-5c: entered promiscuous mode
Dec 06 07:03:16 compute-1 NetworkManager[49031]: <info>  [1765004596.3859] manager: (tap510079bd-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Dec 06 07:03:16 compute-1 nova_compute[226101]: 2025-12-06 07:03:16.389 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:16 compute-1 ovn_controller[130279]: 2025-12-06T07:03:16Z|00082|binding|INFO|Claiming lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for this additional chassis.
Dec 06 07:03:16 compute-1 ovn_controller[130279]: 2025-12-06T07:03:16Z|00083|binding|INFO|510079bd-5c00-4b8a-b2eb-d63f0ffc3d69: Claiming fa:16:3e:30:77:d0 10.100.0.8
Dec 06 07:03:16 compute-1 ovn_controller[130279]: 2025-12-06T07:03:16Z|00084|binding|INFO|Setting lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 ovn-installed in OVS
Dec 06 07:03:16 compute-1 nova_compute[226101]: 2025-12-06 07:03:16.415 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:16 compute-1 systemd-machined[190302]: New machine qemu-13-instance-00000017.
Dec 06 07:03:16 compute-1 systemd[1]: Started Virtual Machine qemu-13-instance-00000017.
Dec 06 07:03:16 compute-1 systemd-udevd[236650]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:03:16 compute-1 NetworkManager[49031]: <info>  [1765004596.4690] device (tap510079bd-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:03:16 compute-1 NetworkManager[49031]: <info>  [1765004596.4699] device (tap510079bd-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:03:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:03:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:16.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:03:16 compute-1 ceph-mon[81689]: pgmap v1272: 305 pgs: 305 active+clean; 465 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.0 MiB/s wr, 199 op/s
Dec 06 07:03:16 compute-1 nova_compute[226101]: 2025-12-06 07:03:16.685 226109 INFO nova.compute.manager [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Rebuilding instance
Dec 06 07:03:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:16.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:17 compute-1 nova_compute[226101]: 2025-12-06 07:03:17.157 226109 DEBUG nova.objects.instance [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:17 compute-1 nova_compute[226101]: 2025-12-06 07:03:17.172 226109 DEBUG nova.compute.manager [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:17 compute-1 nova_compute[226101]: 2025-12-06 07:03:17.241 226109 DEBUG nova.objects.instance [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'pci_requests' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:17 compute-1 nova_compute[226101]: 2025-12-06 07:03:17.264 226109 DEBUG nova.objects.instance [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'pci_devices' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:17 compute-1 nova_compute[226101]: 2025-12-06 07:03:17.293 226109 DEBUG nova.objects.instance [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'resources' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:17 compute-1 nova_compute[226101]: 2025-12-06 07:03:17.328 226109 DEBUG nova.objects.instance [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'migration_context' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:17 compute-1 nova_compute[226101]: 2025-12-06 07:03:17.355 226109 DEBUG nova.objects.instance [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:03:17 compute-1 nova_compute[226101]: 2025-12-06 07:03:17.360 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:03:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:18 compute-1 nova_compute[226101]: 2025-12-06 07:03:18.209 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004598.2093859, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:18 compute-1 nova_compute[226101]: 2025-12-06 07:03:18.210 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Started (Lifecycle Event)
Dec 06 07:03:18 compute-1 nova_compute[226101]: 2025-12-06 07:03:18.310 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:18 compute-1 nova_compute[226101]: 2025-12-06 07:03:18.453 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:18 compute-1 nova_compute[226101]: 2025-12-06 07:03:18.639 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:18 compute-1 ceph-mon[81689]: pgmap v1273: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.0 MiB/s wr, 236 op/s
Dec 06 07:03:18 compute-1 nova_compute[226101]: 2025-12-06 07:03:18.695 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004598.6950295, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:18 compute-1 nova_compute[226101]: 2025-12-06 07:03:18.696 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Resumed (Lifecycle Event)
Dec 06 07:03:18 compute-1 nova_compute[226101]: 2025-12-06 07:03:18.739 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:18 compute-1 nova_compute[226101]: 2025-12-06 07:03:18.743 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:03:18 compute-1 nova_compute[226101]: 2025-12-06 07:03:18.765 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com
Dec 06 07:03:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:18.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:20.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:20.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:20 compute-1 ovn_controller[130279]: 2025-12-06T07:03:20Z|00085|binding|INFO|Claiming lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for this chassis.
Dec 06 07:03:20 compute-1 ovn_controller[130279]: 2025-12-06T07:03:20Z|00086|binding|INFO|510079bd-5c00-4b8a-b2eb-d63f0ffc3d69: Claiming fa:16:3e:30:77:d0 10.100.0.8
Dec 06 07:03:20 compute-1 ovn_controller[130279]: 2025-12-06T07:03:20Z|00087|binding|INFO|Setting lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 up in Southbound
Dec 06 07:03:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:20.892 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:77:d0 10.100.0.8'], port_security=['fa:16:3e:30:77:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '9', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:20.894 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 bound to our chassis
Dec 06 07:03:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:20.897 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfa287bf-10c3-40fc-8071-37bb7f801357
Dec 06 07:03:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:20.921 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0acb90-d6f7-4dec-b912-2ed1d39bcdae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:20.954 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[0919e0af-eda8-4b07-adf5-14c1d265b6dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:20.958 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[90e98502-1ab5-42dd-9ed9-bb88926a4728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:20.987 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ca691990-da80-400c-8e29-36c0aaa7d39d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:21.010 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7c20afbe-dfcb-4257-a5e0-089d954b68bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfa287bf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 5, 'rx_bytes': 868, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 5, 'rx_bytes': 868, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482008, 'reachable_time': 32535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236707, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:21.033 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd59dd9-4ddd-44b7-9cff-e9768af1192a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdfa287bf-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482018, 'tstamp': 482018}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236708, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdfa287bf-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482020, 'tstamp': 482020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236708, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:21.035 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfa287bf-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:21 compute-1 nova_compute[226101]: 2025-12-06 07:03:21.036 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:21 compute-1 nova_compute[226101]: 2025-12-06 07:03:21.039 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:21.038 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfa287bf-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:21.039 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:21.039 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfa287bf-10, col_values=(('external_ids', {'iface-id': 'a8b489de-cf80-4c12-869a-5e807cdbba8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:21.039 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:21 compute-1 podman[236706]: 2025-12-06 07:03:21.151525343 +0000 UTC m=+0.131576520 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:03:21 compute-1 nova_compute[226101]: 2025-12-06 07:03:21.342 226109 INFO nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Post operation of migration started
Dec 06 07:03:21 compute-1 ceph-mon[81689]: pgmap v1274: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Dec 06 07:03:21 compute-1 nova_compute[226101]: 2025-12-06 07:03:21.778 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:03:21 compute-1 nova_compute[226101]: 2025-12-06 07:03:21.778 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquired lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:03:21 compute-1 nova_compute[226101]: 2025-12-06 07:03:21.778 226109 DEBUG nova.network.neutron [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:03:22 compute-1 ceph-mon[81689]: pgmap v1275: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 182 op/s
Dec 06 07:03:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:03:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:22.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:03:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:22.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:23 compute-1 nova_compute[226101]: 2025-12-06 07:03:23.455 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:23 compute-1 nova_compute[226101]: 2025-12-06 07:03:23.677 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:24 compute-1 nova_compute[226101]: 2025-12-06 07:03:24.433 226109 DEBUG nova.network.neutron [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating instance_info_cache with network_info: [{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:03:24 compute-1 ceph-mon[81689]: pgmap v1276: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 144 op/s
Dec 06 07:03:24 compute-1 nova_compute[226101]: 2025-12-06 07:03:24.453 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Releasing lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:03:24 compute-1 nova_compute[226101]: 2025-12-06 07:03:24.467 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:24 compute-1 nova_compute[226101]: 2025-12-06 07:03:24.468 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:24 compute-1 nova_compute[226101]: 2025-12-06 07:03:24.468 226109 DEBUG oslo_concurrency.lockutils [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:24 compute-1 nova_compute[226101]: 2025-12-06 07:03:24.471 226109 INFO nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Dec 06 07:03:24 compute-1 virtqemud[225710]: Domain id=13 name='instance-00000017' uuid=4dd1863a-3cb1-4e42-8611-ed9587a0b6ce is tainted: custom-monitor
Dec 06 07:03:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:03:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:24.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:03:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:24.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:25 compute-1 nova_compute[226101]: 2025-12-06 07:03:25.478 226109 INFO nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Dec 06 07:03:26 compute-1 ovn_controller[130279]: 2025-12-06T07:03:26Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:2f:12 10.100.0.5
Dec 06 07:03:26 compute-1 ovn_controller[130279]: 2025-12-06T07:03:26Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:2f:12 10.100.0.5
Dec 06 07:03:26 compute-1 nova_compute[226101]: 2025-12-06 07:03:26.484 226109 INFO nova.virt.libvirt.driver [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Dec 06 07:03:26 compute-1 nova_compute[226101]: 2025-12-06 07:03:26.488 226109 DEBUG nova.compute.manager [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:26 compute-1 nova_compute[226101]: 2025-12-06 07:03:26.507 226109 DEBUG nova.objects.instance [None req-c2fcd6f4-b0d0-4f9c-bbbb-814408eff4b8 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:03:26 compute-1 ceph-mon[81689]: pgmap v1277: 305 pgs: 305 active+clean; 487 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 147 op/s
Dec 06 07:03:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:26.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:26.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:27 compute-1 nova_compute[226101]: 2025-12-06 07:03:27.405 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:03:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:28 compute-1 nova_compute[226101]: 2025-12-06 07:03:28.456 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:28 compute-1 ceph-mon[81689]: pgmap v1278: 305 pgs: 305 active+clean; 502 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 100 op/s
Dec 06 07:03:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/877678940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:28.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:28 compute-1 nova_compute[226101]: 2025-12-06 07:03:28.679 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:28.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2969829256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:29 compute-1 kernel: tap0e3b84b1-ee (unregistering): left promiscuous mode
Dec 06 07:03:29 compute-1 NetworkManager[49031]: <info>  [1765004609.6550] device (tap0e3b84b1-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:03:29 compute-1 nova_compute[226101]: 2025-12-06 07:03:29.664 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:29 compute-1 ovn_controller[130279]: 2025-12-06T07:03:29Z|00088|binding|INFO|Releasing lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 from this chassis (sb_readonly=0)
Dec 06 07:03:29 compute-1 ovn_controller[130279]: 2025-12-06T07:03:29Z|00089|binding|INFO|Setting lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 down in Southbound
Dec 06 07:03:29 compute-1 ovn_controller[130279]: 2025-12-06T07:03:29Z|00090|binding|INFO|Removing iface tap0e3b84b1-ee ovn-installed in OVS
Dec 06 07:03:29 compute-1 nova_compute[226101]: 2025-12-06 07:03:29.669 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.673 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:2f:12 10.100.0.5'], port_security=['fa:16:3e:7b:2f:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '09a05ccc-abca-47d8-8e32-6e53adb95d4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=0e3b84b1-ee05-4dbf-a372-b2a404592bf9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.674 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 in datapath 9451d867-0aba-464d-b4d9-f947b887e903 unbound from our chassis
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.675 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:03:29 compute-1 nova_compute[226101]: 2025-12-06 07:03:29.684 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.693 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b1c8f9-afb8-4d7c-963f-97a0f1be597a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:29 compute-1 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000013.scope: Deactivated successfully.
Dec 06 07:03:29 compute-1 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000013.scope: Consumed 14.573s CPU time.
Dec 06 07:03:29 compute-1 systemd-machined[190302]: Machine qemu-12-instance-00000013 terminated.
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.731 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[abeb7938-8a54-45d3-900a-089e3c3eb913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.734 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[9d36a73a-dc52-46ce-ad5b-77ba848677df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:29 compute-1 podman[236741]: 2025-12-06 07:03:29.743086052 +0000 UTC m=+0.064155097 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:03:29 compute-1 podman[236739]: 2025-12-06 07:03:29.752382394 +0000 UTC m=+0.075870233 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.759 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8dc6eb-7f95-4aab-b402-54eba4a9da62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.774 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1d850e5c-3ef0-4854-b48e-6bddd71086d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479634, 'reachable_time': 43555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236788, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.789 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[277e88eb-9a6c-46b2-91d5-fc7f2050c5af]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479647, 'tstamp': 479647}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236789, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479651, 'tstamp': 479651}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236789, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.791 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:29 compute-1 nova_compute[226101]: 2025-12-06 07:03:29.792 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:29 compute-1 nova_compute[226101]: 2025-12-06 07:03:29.796 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.796 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9451d867-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.796 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.797 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9451d867-00, col_values=(('external_ids', {'iface-id': 'fed07814-3a76-4798-8d3b-90759d15a8cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:29.797 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:29 compute-1 nova_compute[226101]: 2025-12-06 07:03:29.879 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:29 compute-1 nova_compute[226101]: 2025-12-06 07:03:29.884 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.073 226109 DEBUG nova.compute.manager [req-3f5d6975-fbfb-4fe8-823f-8881c39761cb req-c258fd19-c304-4b97-aafe-f8fc56ff30c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-unplugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.074 226109 DEBUG oslo_concurrency.lockutils [req-3f5d6975-fbfb-4fe8-823f-8881c39761cb req-c258fd19-c304-4b97-aafe-f8fc56ff30c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.074 226109 DEBUG oslo_concurrency.lockutils [req-3f5d6975-fbfb-4fe8-823f-8881c39761cb req-c258fd19-c304-4b97-aafe-f8fc56ff30c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.074 226109 DEBUG oslo_concurrency.lockutils [req-3f5d6975-fbfb-4fe8-823f-8881c39761cb req-c258fd19-c304-4b97-aafe-f8fc56ff30c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.074 226109 DEBUG nova.compute.manager [req-3f5d6975-fbfb-4fe8-823f-8881c39761cb req-c258fd19-c304-4b97-aafe-f8fc56ff30c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] No waiting events found dispatching network-vif-unplugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.075 226109 WARNING nova.compute.manager [req-3f5d6975-fbfb-4fe8-823f-8881c39761cb req-c258fd19-c304-4b97-aafe-f8fc56ff30c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received unexpected event network-vif-unplugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for instance with vm_state active and task_state rebuilding.
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.212 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Check if temp file /var/lib/nova/instances/tmpeswar0ys exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.212 226109 DEBUG nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpeswar0ys',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.418 226109 INFO nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance shutdown successfully after 13 seconds.
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.423 226109 INFO nova.virt.libvirt.driver [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance destroyed successfully.
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.428 226109 INFO nova.virt.libvirt.driver [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance destroyed successfully.
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.429 226109 DEBUG nova.virt.libvirt.vif [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1679126586',display_name='tempest-ServersAdminTestJSON-server-1679126586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1679126586',id=19,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ujgtrz05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:03:15Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=09a05ccc-abca-47d8-8e32-6e53adb95d4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.429 226109 DEBUG nova.network.os_vif_util [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.429 226109 DEBUG nova.network.os_vif_util [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.430 226109 DEBUG os_vif [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.431 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.432 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e3b84b1-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.433 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.435 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:03:30 compute-1 nova_compute[226101]: 2025-12-06 07:03:30.436 226109 INFO os_vif [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee')
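The unplug above is os-vif talking to the local ovsdb through ovsdbapp: one transaction containing DelPortCommand(port=tap0e3b84b1-ee, bridge=br-int, if_exists=True), with the POLLIN/timeout lines being the IDL poller underneath. The same transaction can be issued directly; the endpoint and timeout below are assumptions, and the calls follow ovsdbapp's documented usage:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # connect to the local Open vSwitch database (endpoint is an assumption)
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # the DelPortCommand from the log; if_exists=True makes a missing
    # port a no-op instead of an error
    api.del_port('tap0e3b84b1-ee', bridge='br-int', if_exists=True).execute(
        check_error=True)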
Dec 06 07:03:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:30.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:30 compute-1 ceph-mon[81689]: pgmap v1279: 305 pgs: 305 active+clean; 502 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Dec 06 07:03:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:30.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:32 compute-1 ceph-mon[81689]: pgmap v1280: 305 pgs: 305 active+clean; 489 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.197 226109 DEBUG nova.compute.manager [req-f58e151a-2334-4897-9a15-a8c683265cc8 req-b76f49a9-633b-4d13-b2bc-468eff01d617 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.198 226109 DEBUG oslo_concurrency.lockutils [req-f58e151a-2334-4897-9a15-a8c683265cc8 req-b76f49a9-633b-4d13-b2bc-468eff01d617 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.198 226109 DEBUG oslo_concurrency.lockutils [req-f58e151a-2334-4897-9a15-a8c683265cc8 req-b76f49a9-633b-4d13-b2bc-468eff01d617 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.198 226109 DEBUG oslo_concurrency.lockutils [req-f58e151a-2334-4897-9a15-a8c683265cc8 req-b76f49a9-633b-4d13-b2bc-468eff01d617 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.198 226109 DEBUG nova.compute.manager [req-f58e151a-2334-4897-9a15-a8c683265cc8 req-b76f49a9-633b-4d13-b2bc-468eff01d617 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] No waiting events found dispatching network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.199 226109 WARNING nova.compute.manager [req-f58e151a-2334-4897-9a15-a8c683265cc8 req-b76f49a9-633b-4d13-b2bc-468eff01d617 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received unexpected event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for instance with vm_state active and task_state rebuilding.
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.254 226109 INFO nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Deleting instance files /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d_del
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.255 226109 INFO nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Deletion of /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d_del complete
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.412 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.413 226109 INFO nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Creating image(s)
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.436 226109 DEBUG nova.storage.rbd_utils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.461 226109 DEBUG nova.storage.rbd_utils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.488 226109 DEBUG nova.storage.rbd_utils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.492 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.546 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
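Worth noting in the qemu-img invocation above: nova runs it under oslo_concurrency.prlimit with a 1 GiB address-space cap (--as=1073741824) and a 30 s CPU cap (--cpu=30), a guard against corrupt or crafted images that make qemu-img balloon or spin. Reproducing the call through processutils looks roughly like this (same path as in the log; the kwargs are oslo.concurrency's documented ones):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/'
        '890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json',
        # becomes: python3 -m oslo_concurrency.prlimit --as=... --cpu=30 -- ...
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,
                                           cpu_time=30),
        env_variables={'LC_ALL': 'C', 'LANG': 'C'})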
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.548 226109 DEBUG oslo_concurrency.lockutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.548 226109 DEBUG oslo_concurrency.lockutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.549 226109 DEBUG oslo_concurrency.lockutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.576 226109 DEBUG nova.storage.rbd_utils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.580 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:32.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:32.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.925 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:32 compute-1 nova_compute[226101]: 2025-12-06 07:03:32.990 226109 DEBUG nova.storage.rbd_utils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] resizing rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
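With the rbd image backend, "Creating image(s)" reduces to the two steps just logged: import the locally cached base file into the vms pool, then grow the new image to the flavor's 1 GiB root disk (1073741824 bytes). The import command below is verbatim from the log; nova performs the resize through librbd, so the CLI form of step 2 is my equivalent, not what nova executed:

    import subprocess

    # step 1: exactly the command in the log above
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         '/var/lib/nova/instances/_base/'
         '890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
         '09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)

    # step 2: grow to root_gb; 1G equals the 1073741824 bytes in the log
    subprocess.run(
        ['rbd', 'resize', '--pool', 'vms',
         '09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk', '--size', '1G',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)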
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.091 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.091 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Ensure instance console log exists: /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.092 226109 DEBUG oslo_concurrency.lockutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.092 226109 DEBUG oslo_concurrency.lockutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.092 226109 DEBUG oslo_concurrency.lockutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.094 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Start _get_guest_xml network_info=[{"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.099 226109 WARNING nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.107 226109 DEBUG nova.virt.libvirt.host [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.107 226109 DEBUG nova.virt.libvirt.host [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.113 226109 DEBUG nova.virt.libvirt.host [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.114 226109 DEBUG nova.virt.libvirt.host [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
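The two probes above decide whether CPU shares/quota tuning can be applied to the guest: the cgroups v1 check looks for a mounted cpu controller hierarchy (missing on this host), while the v2 check consults the unified hierarchy's controller list (found). A minimal stand-in for the v2 probe; nova's exact implementation differs, this only illustrates the mechanism:

    def has_cgroupsv2_cpu_controller():
        # on a unified-hierarchy host, /sys/fs/cgroup/cgroup.controllers
        # holds a space-separated list such as "cpuset cpu io memory pids"
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no cgroups v2 unified hierarchy mounted here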
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.115 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.115 226109 DEBUG nova.virt.hardware [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.115 226109 DEBUG nova.virt.hardware [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.115 226109 DEBUG nova.virt.hardware [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.116 226109 DEBUG nova.virt.hardware [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.116 226109 DEBUG nova.virt.hardware [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.116 226109 DEBUG nova.virt.hardware [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.116 226109 DEBUG nova.virt.hardware [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.117 226109 DEBUG nova.virt.hardware [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.117 226109 DEBUG nova.virt.hardware [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.117 226109 DEBUG nova.virt.hardware [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.117 226109 DEBUG nova.virt.hardware [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
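The topology walk above is mechanical: with no flavor or image constraints (preferences 0:0:0, limits 65536 per dimension), nova enumerates every sockets/cores/threads factorization of the vCPU count and sorts the candidates by preference; for a single vCPU the only factorization is 1:1:1. A simplified version of the enumeration (the real code also weighs preferences and NUMA placement):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # every (sockets, cores, threads) triple whose product == vcpus
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log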
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.117 226109 DEBUG nova.objects.instance [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.134 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.458 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:03:33 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1048102116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.638 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.670 226109 DEBUG nova.storage.rbd_utils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:33 compute-1 nova_compute[226101]: 2025-12-06 07:03:33.676 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:34 compute-1 ceph-mon[81689]: pgmap v1281: 305 pgs: 305 active+clean; 477 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Dec 06 07:03:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1048102116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:03:34 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/492606586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.175 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
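The pair of "ceph mon dump --format=json" runs collects the monitor addresses that reappear below as the <host> elements of both rbd disks. Extracting them looks roughly like this; the exact JSON layout varies across Ceph releases (newer clusters prefix addresses with v1:/v2:), so treat the parsing as a sketch:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, check=True, text=True).stdout

    addrs = []
    for mon in json.loads(out)['mons']:
        addr = mon['addr'].split('/')[0]          # drop the trailing /nonce
        addr = addr.removeprefix('v1:').removeprefix('v2:')
        addrs.append(addr)                        # e.g. '192.168.122.100:6789'
    print(addrs)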
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.178 226109 DEBUG nova.virt.libvirt.vif [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1679126586',display_name='tempest-ServersAdminTestJSON-server-1679126586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1679126586',id=19,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ujgtrz05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:03:32Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=09a05ccc-abca-47d8-8e32-6e53adb95d4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.178 226109 DEBUG nova.network.os_vif_util [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.179 226109 DEBUG nova.network.os_vif_util [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.182 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:03:34 compute-1 nova_compute[226101]:   <uuid>09a05ccc-abca-47d8-8e32-6e53adb95d4d</uuid>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   <name>instance-00000013</name>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <nova:name>tempest-ServersAdminTestJSON-server-1679126586</nova:name>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:03:33</nova:creationTime>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <nova:user uuid="a3cae056210a400fa5e3495fe827d29a">tempest-ServersAdminTestJSON-1902776367-project-member</nova:user>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <nova:project uuid="b6179a8b65c2484eb7ca1e068d93a58c">tempest-ServersAdminTestJSON-1902776367</nova:project>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <nova:port uuid="0e3b84b1-ee05-4dbf-a372-b2a404592bf9">
Dec 06 07:03:34 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <system>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <entry name="serial">09a05ccc-abca-47d8-8e32-6e53adb95d4d</entry>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <entry name="uuid">09a05ccc-abca-47d8-8e32-6e53adb95d4d</entry>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     </system>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   <os>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   </os>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   <features>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   </features>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk">
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       </source>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config">
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       </source>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:03:34 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:7b:2f:12"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <target dev="tap0e3b84b1-ee"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/console.log" append="off"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <video>
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     </video>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:03:34 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:03:34 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:03:34 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:03:34 compute-1 nova_compute[226101]: </domain>
Dec 06 07:03:34 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
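A few things stand out in the generated domain: both disks are type="network" rbd sources carrying all three monitor addresses from the mon dump (no local qcow2 is touched), the interface is a bare type="ethernet" tap because OVN wires the bridge side outside libvirt, and the q35 machine gets a stack of pcie-root-port controllers as hotplug headroom. Pulling the rbd endpoints back out of such XML is a stdlib one-liner; domain_xml below is a stand-in for the document just logged:

    import xml.etree.ElementTree as ET

    root = ET.fromstring(domain_xml)  # the <domain> element from the log
    for src in root.findall('./devices/disk/source[@protocol="rbd"]'):
        hosts = [(h.get('name'), h.get('port')) for h in src.findall('host')]
        print(src.get('name'), hosts)
    # prints each rbd source name with its three (ip, port) monitor pairs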
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.183 226109 DEBUG nova.virt.libvirt.vif [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1679126586',display_name='tempest-ServersAdminTestJSON-server-1679126586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1679126586',id=19,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ujgtrz05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:03:32Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=09a05ccc-abca-47d8-8e32-6e53adb95d4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.183 226109 DEBUG nova.network.os_vif_util [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.184 226109 DEBUG nova.network.os_vif_util [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.184 226109 DEBUG os_vif [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.185 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.185 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.186 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.188 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.188 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e3b84b1-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.189 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e3b84b1-ee, col_values=(('external_ids', {'iface-id': '0e3b84b1-ee05-4dbf-a372-b2a404592bf9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:2f:12', 'vm-uuid': '09a05ccc-abca-47d8-8e32-6e53adb95d4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.222 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:34 compute-1 NetworkManager[49031]: <info>  [1765004614.2227] manager: (tap0e3b84b1-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.226 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.227 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.228 226109 INFO os_vif [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee')
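The AddBridgeCommand, AddPortCommand and DbSetCommand lines above are os-vif driving Open vSwitch through ovsdbapp's native OVSDB IDL rather than shelling out to ovs-vsctl; the "[POLLIN] on fd 25/27" wakeups are that long-lived IDL session. A minimal sketch of the same three commands, assuming a local ovsdb-server socket and a fresh connection (os-vif itself reuses one), with the identifiers copied from the log:

    # Sketch only, not nova's actual plug path. OVSDB socket path and the
    # 10 s timeout are assumptions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed local ovsdb-server
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Idempotent: "Transaction caused no change" when br-int exists.
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap0e3b84b1-ee', may_exist=True))
        # external_ids is what ovn-controller matches to claim the lport.
        txn.add(api.db_set(
            'Interface', 'tap0e3b84b1-ee',
            ('external_ids', {
                'iface-id': '0e3b84b1-ee05-4dbf-a372-b2a404592bf9',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:7b:2f:12',
                'vm-uuid': '09a05ccc-abca-47d8-8e32-6e53adb95d4d'})))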
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.285 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.285 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.285 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No VIF found with MAC fa:16:3e:7b:2f:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.286 226109 INFO nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Using config drive
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.311 226109 DEBUG nova.storage.rbd_utils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.339 226109 DEBUG nova.objects.instance [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.398 226109 DEBUG nova.objects.instance [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'keypairs' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:03:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:34.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:03:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:03:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:34.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.909 226109 INFO nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Creating config drive at /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config
Dec 06 07:03:34 compute-1 nova_compute[226101]: 2025-12-06 07:03:34.914 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzrann8rx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.047 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzrann8rx" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.075 226109 DEBUG nova.storage.rbd_utils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.080 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/492606586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/299817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.230 226109 DEBUG oslo_concurrency.processutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config 09a05ccc-abca-47d8-8e32-6e53adb95d4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.231 226109 INFO nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Deleting local config drive /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d/disk.config because it was imported into RBD.
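The config-drive sequence above is: build an ISO9660 image with mkisofs (the volume label config-2 is what cloud-init probes for), then, because this deployment uses RBD-backed instance storage, import it into the Ceph vms pool and delete the local copy. A sketch of those two commands driven through oslo_concurrency.processutils, the helper the log itself names; paths, pool and client id are taken from the log, and the /tmp source directory is the throwaway metadata tree nova generated:

    # Illustration of the two logged commands, not nova's code.
    from oslo_concurrency import processutils

    inst = '09a05ccc-abca-47d8-8e32-6e53adb95d4d'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # 1. Build the ISO9660 config drive from the generated metadata tree.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher', 'OpenStack Compute',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpzrann8rx')

    # 2. Import into the RBD "vms" pool; the local file is then deleted.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')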
Dec 06 07:03:35 compute-1 kernel: tap0e3b84b1-ee: entered promiscuous mode
Dec 06 07:03:35 compute-1 NetworkManager[49031]: <info>  [1765004615.2808] manager: (tap0e3b84b1-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Dec 06 07:03:35 compute-1 systemd-udevd[237119]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:03:35 compute-1 ovn_controller[130279]: 2025-12-06T07:03:35Z|00091|binding|INFO|Claiming lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for this chassis.
Dec 06 07:03:35 compute-1 ovn_controller[130279]: 2025-12-06T07:03:35Z|00092|binding|INFO|0e3b84b1-ee05-4dbf-a372-b2a404592bf9: Claiming fa:16:3e:7b:2f:12 10.100.0.5
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.315 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:35 compute-1 NetworkManager[49031]: <info>  [1765004615.3266] device (tap0e3b84b1-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:03:35 compute-1 NetworkManager[49031]: <info>  [1765004615.3282] device (tap0e3b84b1-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.328 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:2f:12 10.100.0.5'], port_security=['fa:16:3e:7b:2f:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '09a05ccc-abca-47d8-8e32-6e53adb95d4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '7', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=0e3b84b1-ee05-4dbf-a372-b2a404592bf9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.329 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 in datapath 9451d867-0aba-464d-b4d9-f947b887e903 bound to our chassis
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.332 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:03:35 compute-1 ovn_controller[130279]: 2025-12-06T07:03:35Z|00093|binding|INFO|Setting lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 ovn-installed in OVS
Dec 06 07:03:35 compute-1 ovn_controller[130279]: 2025-12-06T07:03:35Z|00094|binding|INFO|Setting lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 up in Southbound
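The metadata agent reacted because an ovsdbapp RowEvent matched the Port_Binding update in which the chassis column went from empty (old=Port_Binding(chassis=[])) to this chassis. A simplified sketch of such an event class, assuming the agent's southbound IDL is already watching Port_Binding; neutron's real PortBindingUpdatedEvent does more filtering, and its run() is what provisions metadata for the datapath:

    # Simplified event class; the run() body here is illustrative only.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBoundEvent(row_event.RowEvent):
        def __init__(self):
            # Match updates on the southbound Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the chassis column transitions from unset to
            # set, i.e. the logical port was just claimed by a chassis.
            return bool(hasattr(old, 'chassis')
                        and not old.chassis and row.chassis)

        def run(self, event, row, old):
            print('lport %s bound in datapath %s'
                  % (row.logical_port, row.datapath.uuid))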
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.333 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.345 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3b15431d-1d73-43e4-a1e0-245ddc1b5112]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:35 compute-1 systemd-machined[190302]: New machine qemu-14-instance-00000013.
Dec 06 07:03:35 compute-1 systemd[1]: Started Virtual Machine qemu-14-instance-00000013.
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.381 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5979b289-76b6-486a-a6de-065e29af7ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.384 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[904bc8ed-a5c7-4ec0-82c2-b0fdfe9ea2af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:35 compute-1 sudo[237126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:35 compute-1 sudo[237126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:35 compute-1 sudo[237126]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.416 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[05d3c490-b0ac-4906-a5f6-4ff5482ba4ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.427 226109 DEBUG nova.compute.manager [req-7df51c6f-cf53-470f-986f-46aee545a6e9 req-268bdfb6-d2b6-4d37-862a-08ae0997916a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.428 226109 DEBUG oslo_concurrency.lockutils [req-7df51c6f-cf53-470f-986f-46aee545a6e9 req-268bdfb6-d2b6-4d37-862a-08ae0997916a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.429 226109 DEBUG oslo_concurrency.lockutils [req-7df51c6f-cf53-470f-986f-46aee545a6e9 req-268bdfb6-d2b6-4d37-862a-08ae0997916a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.429 226109 DEBUG oslo_concurrency.lockutils [req-7df51c6f-cf53-470f-986f-46aee545a6e9 req-268bdfb6-d2b6-4d37-862a-08ae0997916a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.429 226109 DEBUG nova.compute.manager [req-7df51c6f-cf53-470f-986f-46aee545a6e9 req-268bdfb6-d2b6-4d37-862a-08ae0997916a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.429 226109 DEBUG nova.compute.manager [req-7df51c6f-cf53-470f-986f-46aee545a6e9 req-268bdfb6-d2b6-4d37-862a-08ae0997916a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.434 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cb91e587-ea8a-4b1f-a51a-900f95adf82e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479634, 'reachable_time': 43555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237163, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.451 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[38ba8d25-924a-4bf9-916a-97c9dcda8993]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479647, 'tstamp': 479647}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237179, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479651, 'tstamp': 479651}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237179, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.453 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.454 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.455 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.457 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9451d867-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.457 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.458 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9451d867-00, col_values=(('external_ids', {'iface-id': 'fed07814-3a76-4798-8d3b-90759d15a8cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:35.458 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:35 compute-1 sudo[237160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:03:35 compute-1 sudo[237160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:35 compute-1 sudo[237160]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:35 compute-1 sudo[237188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:35 compute-1 sudo[237188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:35 compute-1 sudo[237188]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:35 compute-1 sudo[237213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:03:35 compute-1 sudo[237213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.867 226109 DEBUG nova.compute.manager [req-51a6af58-5d7e-422c-b17d-e5d096693d95 req-600364be-9744-4788-a597-31f6bb9769d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.868 226109 DEBUG oslo_concurrency.lockutils [req-51a6af58-5d7e-422c-b17d-e5d096693d95 req-600364be-9744-4788-a597-31f6bb9769d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.868 226109 DEBUG oslo_concurrency.lockutils [req-51a6af58-5d7e-422c-b17d-e5d096693d95 req-600364be-9744-4788-a597-31f6bb9769d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.868 226109 DEBUG oslo_concurrency.lockutils [req-51a6af58-5d7e-422c-b17d-e5d096693d95 req-600364be-9744-4788-a597-31f6bb9769d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.869 226109 DEBUG nova.compute.manager [req-51a6af58-5d7e-422c-b17d-e5d096693d95 req-600364be-9744-4788-a597-31f6bb9769d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] No waiting events found dispatching network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:35 compute-1 nova_compute[226101]: 2025-12-06 07:03:35.869 226109 WARNING nova.compute.manager [req-51a6af58-5d7e-422c-b17d-e5d096693d95 req-600364be-9744-4788-a597-31f6bb9769d2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received unexpected event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for instance with vm_state active and task_state rebuild_spawning.
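The "No waiting events found" / "Received unexpected event" pair is the pop-or-warn pattern: neutron's network-vif-plugged notification only unblocks a task that registered a waiter beforehand, and this rebuild never registered one because the port was already active. A toy sketch of that registry, loosely modeled on nova.compute.manager.InstanceEvents; the class shape and the threading primitive are simplifications (nova runs under eventlet):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._waiters = {}             # (instance_uuid, event) -> Event
            self._lock = threading.Lock()  # stands in for the "-events" lock

        def prepare(self, uuid, name):
            # A task calls this *before* triggering the external action.
            ev = threading.Event()
            with self._lock:
                self._waiters[(uuid, name)] = ev
            return ev

        def pop_instance_event(self, uuid, name):
            with self._lock:
                return self._waiters.pop((uuid, name), None)

    registry = InstanceEvents()
    waiter = registry.pop_instance_event(
        '09a05ccc-abca-47d8-8e32-6e53adb95d4d',
        'network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9')
    if waiter is None:
        print('Received unexpected event for instance')  # the WARNING above
    else:
        waiter.set()  # wakes the task blocked in wait_for_instance_event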
Dec 06 07:03:36 compute-1 sudo[237213]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:36 compute-1 ceph-mon[81689]: pgmap v1282: 305 pgs: 305 active+clean; 459 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 348 KiB/s rd, 2.7 MiB/s wr, 103 op/s
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.242 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 09a05ccc-abca-47d8-8e32-6e53adb95d4d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.242 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004616.2419417, 09a05ccc-abca-47d8-8e32-6e53adb95d4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.243 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] VM Resumed (Lifecycle Event)
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.246 226109 DEBUG nova.compute.manager [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.246 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.249 226109 INFO nova.virt.libvirt.driver [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance spawned successfully.
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.250 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.477 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.488 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.493 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.494 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.494 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.495 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.495 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.495 226109 DEBUG nova.virt.libvirt.driver [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.527 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.527 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004616.2452526, 09a05ccc-abca-47d8-8e32-6e53adb95d4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.528 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] VM Started (Lifecycle Event)
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.566 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.570 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.593 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.603 226109 DEBUG nova.compute.manager [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:03:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:36.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.669 226109 DEBUG oslo_concurrency.lockutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.669 226109 DEBUG oslo_concurrency.lockutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.670 226109 DEBUG nova.objects.instance [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.726 226109 DEBUG oslo_concurrency.lockutils [None req-d53b464c-3d52-4f3c-8341-bba5de6e5e8d a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.756 226109 INFO nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Took 5.79 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.757 226109 DEBUG nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.797 226109 DEBUG nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpeswar0ys',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4dd1863a-3cb1-4e42-8611-ed9587a0b6ce',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(180afc56-4a01-4d3b-b7b6-eba0219610c0),old_vol_attachment_ids={82a6fabc-b798-4571-a13e-f9c67a3f9413='48e3400e-42f6-4554-8920-994a6676cae0'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.799 226109 DEBUG nova.objects.instance [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lazy-loading 'migration_context' on Instance uuid 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.801 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.802 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.802 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.824 226109 DEBUG nova.virt.libvirt.migration [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Find same serial number: pos=1, serial=82a6fabc-b798-4571-a13e-f9c67a3f9413 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.825 226109 DEBUG nova.virt.libvirt.vif [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:02:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-509443515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-509443515',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-m8zqnud8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:03:26Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=4dd1863a-3cb1-4e42-8611-ed9587a0b6ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.826 226109 DEBUG nova.network.os_vif_util [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converting VIF {"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.826 226109 DEBUG nova.network.os_vif_util [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.827 226109 DEBUG nova.virt.libvirt.migration [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating guest XML with vif config: <interface type="ethernet">
Dec 06 07:03:36 compute-1 nova_compute[226101]:   <mac address="fa:16:3e:30:77:d0"/>
Dec 06 07:03:36 compute-1 nova_compute[226101]:   <model type="virtio"/>
Dec 06 07:03:36 compute-1 nova_compute[226101]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:03:36 compute-1 nova_compute[226101]:   <mtu size="1442"/>
Dec 06 07:03:36 compute-1 nova_compute[226101]:   <target dev="tap510079bd-5c"/>
Dec 06 07:03:36 compute-1 nova_compute[226101]: </interface>
Dec 06 07:03:36 compute-1 nova_compute[226101]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
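Before invoking the migrate API, nova rewrites the cached guest XML so the interface definition matches what the destination prepared during pre_live_migration, matching devices by MAC address. A rough illustration of that splice using lxml; the XML snippets are trimmed from this log, and nova's real _update_vif_xml handles many more element types:

    # Conceptual sketch of the MAC-matched interface replacement.
    from lxml import etree

    guest_xml = """<domain><devices>
      <interface type="bridge">
        <mac address="fa:16:3e:30:77:d0"/>
        <source bridge="br-int"/>
      </interface>
    </devices></domain>"""

    dest_vif_xml = """<interface type="ethernet">
      <mac address="fa:16:3e:30:77:d0"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap510079bd-5c"/>
    </interface>"""

    doc = etree.fromstring(guest_xml)
    new_iface = etree.fromstring(dest_vif_xml)
    mac = new_iface.find('mac').get('address')
    for iface in doc.findall('./devices/interface'):
        # Replace the source-side definition whose MAC matches the VIF.
        if iface.find('mac').get('address') == mac:
            iface.getparent().replace(iface, new_iface)
    print(etree.tostring(doc, pretty_print=True).decode())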
Dec 06 07:03:36 compute-1 nova_compute[226101]: 2025-12-06 07:03:36.827 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Dec 06 07:03:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:36.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.305 226109 DEBUG nova.virt.libvirt.migration [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.305 226109 INFO nova.virt.libvirt.migration [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Increasing downtime to 50 ms after 0 sec elapsed time
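The step list in the "Current None elapsed 0 steps [...]" line is nova's downtime ramp: libvirt is allowed to pause the guest for 50 ms at first, increasing by 45 ms every 150 s up to a 500 ms cap. A reconstruction of the generator, assuming the default tunables live_migration_downtime=500 ms, live_migration_downtime_steps=10 and live_migration_downtime_delay=75 s per GiB with roughly 2 GiB of data to move; under those assumptions it reproduces the logged schedule exactly:

    # Sketch based on nova.virt.libvirt.migration.downtime_steps; the
    # default values and data_gb=2 are assumptions inferred from the log.
    def downtime_steps(data_gb, downtime=500, steps=10, delay=75):
        delay = delay * data_gb                 # seconds between increases
        base = downtime / steps                 # first allowed downtime (ms)
        offset = (downtime - base) / steps      # per-step increment (ms)
        for i in range(steps + 1):
            yield (int(delay * i), int(base + offset * i))

    sched = list(downtime_steps(2))
    assert sched[:2] == [(0, 50), (150, 95)]    # matches the logged list
    assert sched[-1] == (1500, 500)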
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.410 226109 INFO nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.578 226109 DEBUG nova.compute.manager [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.579 226109 DEBUG oslo_concurrency.lockutils [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.579 226109 DEBUG oslo_concurrency.lockutils [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.579 226109 DEBUG oslo_concurrency.lockutils [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.579 226109 DEBUG nova.compute.manager [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.579 226109 WARNING nova.compute.manager [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received unexpected event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with vm_state active and task_state migrating.
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.579 226109 DEBUG nova.compute.manager [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-changed-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.580 226109 DEBUG nova.compute.manager [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Refreshing instance network info cache due to event network-changed-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.580 226109 DEBUG oslo_concurrency.lockutils [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.580 226109 DEBUG oslo_concurrency.lockutils [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.580 226109 DEBUG nova.network.neutron [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Refreshing network info cache for port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:03:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.906 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004617.9060938, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.906 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Paused (Lifecycle Event)
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.915 226109 DEBUG nova.virt.libvirt.migration [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.915 226109 DEBUG nova.virt.libvirt.migration [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.930 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.933 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.952 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] During sync_power_state the instance has a pending task (migrating). Skip.
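The numeric states in the sync message above map to the constants in nova.compute.power_state: the database still holds the pre-pause RUNNING state while libvirt reports the domain PAUSED for the final memory copy, and the pending migrating task_state makes the sync a no-op:

    # Power-state codes as defined in nova/compute/power_state.py.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    # "current DB power_state: 1" -> RUNNING (pre-migration record)
    # "VM power_state: 3"         -> PAUSED  (domain paused for final sync)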
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.982 226109 DEBUG nova.compute.manager [req-147d2a96-7ecc-457c-9c3a-a885413a33c0 req-82680ac2-05f1-4bba-873a-fa5008d43b22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.983 226109 DEBUG oslo_concurrency.lockutils [req-147d2a96-7ecc-457c-9c3a-a885413a33c0 req-82680ac2-05f1-4bba-873a-fa5008d43b22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.983 226109 DEBUG oslo_concurrency.lockutils [req-147d2a96-7ecc-457c-9c3a-a885413a33c0 req-82680ac2-05f1-4bba-873a-fa5008d43b22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.983 226109 DEBUG oslo_concurrency.lockutils [req-147d2a96-7ecc-457c-9c3a-a885413a33c0 req-82680ac2-05f1-4bba-873a-fa5008d43b22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.984 226109 DEBUG nova.compute.manager [req-147d2a96-7ecc-457c-9c3a-a885413a33c0 req-82680ac2-05f1-4bba-873a-fa5008d43b22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] No waiting events found dispatching network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:37 compute-1 nova_compute[226101]: 2025-12-06 07:03:37.984 226109 WARNING nova.compute.manager [req-147d2a96-7ecc-457c-9c3a-a885413a33c0 req-82680ac2-05f1-4bba-873a-fa5008d43b22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received unexpected event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for instance with vm_state error and task_state None.
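Both "unexpected event" warnings above come from the same dispatch pattern: the manager serializes on a per-instance "<uuid>-events" lock, tries to pop a waiter registered by an in-flight operation, and warns when nothing is waiting for the event. A rough, illustrative sketch of that pop-or-warn flow, not Nova's actual code in nova/compute/manager.py:

    import threading
    from collections import defaultdict

    # Illustrative pop-or-warn dispatch; names and structure are a sketch,
    # not Nova's implementation.
    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = defaultdict(dict)  # uuid -> {event name: Event}

        def pop_instance_event(self, uuid, name):
            # "Acquiring lock '<uuid>-events' ... acquired ... released"
            with self._lock:
                return self._waiters[uuid].pop(name, None)

    def external_instance_event(events, uuid, name, vm_state, task_state):
        waiter = events.pop_instance_event(uuid, name)
        if waiter is None:
            # "No waiting events found dispatching <name>" -> WARNING
            print(f"Received unexpected event {name} for instance with "
                  f"vm_state {vm_state} and task_state {task_state}.")
        else:
            waiter.set()  # wake the operation blocked on this event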
Dec 06 07:03:38 compute-1 kernel: tap510079bd-5c (unregistering): left promiscuous mode
Dec 06 07:03:38 compute-1 NetworkManager[49031]: <info>  [1765004618.0971] device (tap510079bd-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.105 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:38 compute-1 ovn_controller[130279]: 2025-12-06T07:03:38Z|00095|binding|INFO|Releasing lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 from this chassis (sb_readonly=0)
Dec 06 07:03:38 compute-1 ovn_controller[130279]: 2025-12-06T07:03:38Z|00096|binding|INFO|Setting lport 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 down in Southbound
Dec 06 07:03:38 compute-1 ovn_controller[130279]: 2025-12-06T07:03:38Z|00097|binding|INFO|Removing iface tap510079bd-5c ovn-installed in OVS
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.108 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.115 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:77:d0 10.100.0.8'], port_security=['fa:16:3e:30:77:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '9f96b960-b4f2-40bd-ae99-08121f5e8b78'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '16', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.117 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 unbound from our chassis
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.118 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfa287bf-10c3-40fc-8071-37bb7f801357
Dec 06 07:03:38 compute-1 ceph-mon[81689]: pgmap v1283: 305 pgs: 305 active+clean; 484 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.8 MiB/s wr, 126 op/s
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.133 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.134 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e0011045-a602-4774-814d-c9703e46513f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.162 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[0da90dc2-0d92-458a-855f-9f74f1a21baf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.165 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b8daaa-9310-4ef9-9129-fbe6acf444c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:38 compute-1 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000017.scope: Deactivated successfully.
Dec 06 07:03:38 compute-1 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000017.scope: Consumed 3.212s CPU time.
Dec 06 07:03:38 compute-1 systemd-machined[190302]: Machine qemu-13-instance-00000017 terminated.
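In the scope names above, \x2d is systemd's unit-name escape for a literal '-'; the scope is "machine-" plus the escaped machine name, which decodes to the machine that systemd-machined reports terminated (systemd-escape --unescape performs the same decoding):

    # Decode the systemd unit-name escaping seen in the scope name above.
    scope = r'machine-qemu\x2d13\x2dinstance\x2d00000017.scope'
    print(scope.replace(r'\x2d', '-'))
    # machine-qemu-13-instance-00000017.scope, i.e. machine qemu-13-instance-00000017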
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.188 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8e02fa0d-4f08-41ba-b632-304b95dd48b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.203 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[87358106-ac66-4242-859e-ddc08a1c3a5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfa287bf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 29, 'tx_packets': 7, 'rx_bytes': 1498, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 29, 'tx_packets': 7, 'rx_bytes': 1498, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482008, 'reachable_time': 32535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237324, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.217 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[28c61324-23eb-44e4-827c-7552e1c2446b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdfa287bf-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482018, 'tstamp': 482018}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237325, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdfa287bf-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482020, 'tstamp': 482020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237325, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.218 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfa287bf-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.219 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.223 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.224 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfa287bf-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.224 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.224 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfa287bf-10, col_values=(('external_ids', {'iface-id': 'a8b489de-cf80-4c12-869a-5e807cdbba8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:38.225 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:38 compute-1 virtqemud[225710]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-82a6fabc-b798-4571-a13e-f9c67a3f9413: No such file or directory
Dec 06 07:03:38 compute-1 virtqemud[225710]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-82a6fabc-b798-4571-a13e-f9c67a3f9413: No such file or directory
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.304 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.304 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.305 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.417 226109 DEBUG nova.virt.libvirt.guest [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '4dd1863a-3cb1-4e42-8611-ed9587a0b6ce' (instance-00000017) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.418 226109 INFO nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migration operation has completed
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.418 226109 INFO nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] _post_live_migration() is started..
Dec 06 07:03:38 compute-1 nova_compute[226101]: 2025-12-06 07:03:38.460 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:38.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:38.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.222 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.338 226109 DEBUG nova.network.neutron [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updated VIF entry in instance network info cache for port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.339 226109 DEBUG nova.network.neutron [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating instance_info_cache with network_info: [{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.385 226109 DEBUG oslo_concurrency.lockutils [req-a25a78a7-0b0b-4622-a4e9-080175982091 req-d5a2707f-a0fb-4b5e-8e00-c183defaad18 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4dd1863a-3cb1-4e42-8611-ed9587a0b6ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.650 226109 DEBUG nova.compute.manager [req-44aec3ce-68f6-4b20-b6e4-7f573a4fb3a7 req-dc9e603d-33c0-489a-8af5-35a362136231 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.650 226109 DEBUG oslo_concurrency.lockutils [req-44aec3ce-68f6-4b20-b6e4-7f573a4fb3a7 req-dc9e603d-33c0-489a-8af5-35a362136231 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.650 226109 DEBUG oslo_concurrency.lockutils [req-44aec3ce-68f6-4b20-b6e4-7f573a4fb3a7 req-dc9e603d-33c0-489a-8af5-35a362136231 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.651 226109 DEBUG oslo_concurrency.lockutils [req-44aec3ce-68f6-4b20-b6e4-7f573a4fb3a7 req-dc9e603d-33c0-489a-8af5-35a362136231 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.651 226109 DEBUG nova.compute.manager [req-44aec3ce-68f6-4b20-b6e4-7f573a4fb3a7 req-dc9e603d-33c0-489a-8af5-35a362136231 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.651 226109 DEBUG nova.compute.manager [req-44aec3ce-68f6-4b20-b6e4-7f573a4fb3a7 req-dc9e603d-33c0-489a-8af5-35a362136231 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.849 226109 DEBUG nova.network.neutron [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Activated binding for port 510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.850 226109 DEBUG nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.851 226109 DEBUG nova.virt.libvirt.vif [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:02:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-509443515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-509443515',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:00Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-m8zqnud8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:03:29Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=4dd1863a-3cb1-4e42-8611-ed9587a0b6ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.851 226109 DEBUG nova.network.os_vif_util [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converting VIF {"id": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "address": "fa:16:3e:30:77:d0", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap510079bd-5c", "ovs_interfaceid": "510079bd-5c00-4b8a-b2eb-d63f0ffc3d69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.852 226109 DEBUG nova.network.os_vif_util [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.852 226109 DEBUG os_vif [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.858 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.859 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap510079bd-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.860 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.862 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.864 226109 INFO os_vif [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:77:d0,bridge_name='br-int',has_traffic_filtering=True,id=510079bd-5c00-4b8a-b2eb-d63f0ffc3d69,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap510079bd-5c')
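The unplug above is carried out through ovsdbapp (the DelPortCommand transaction a few lines earlier) and has the same effect as "ovs-vsctl --if-exists del-port br-int tap510079bd-5c". A standalone sketch of the same call, assuming the default local ovsdb-server socket path (not taken from this node's configuration):

    # Standalone sketch of the logged DelPortCommand via ovsdbapp; the
    # socket path is an assumption, and this is an illustration rather
    # than the actual call site in os-vif.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    ovs.del_port('tap510079bd-5c', bridge='br-int',
                 if_exists=True).execute(check_error=True)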
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.865 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.865 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.865 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.866 226109 DEBUG nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.866 226109 INFO nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Deleting instance files /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_del
Dec 06 07:03:39 compute-1 nova_compute[226101]: 2025-12-06 07:03:39.866 226109 INFO nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Deletion of /var/lib/nova/instances/4dd1863a-3cb1-4e42-8611-ed9587a0b6ce_del complete
Dec 06 07:03:40 compute-1 ceph-mon[81689]: pgmap v1284: 305 pgs: 305 active+clean; 484 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 2.7 MiB/s wr, 104 op/s
Dec 06 07:03:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:03:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:03:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:03:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:03:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.641 226109 DEBUG nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.642 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.642 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.642 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.642 226109 DEBUG nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.642 226109 DEBUG nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-unplugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.643 226109 DEBUG nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.643 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.643 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.643 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.644 226109 DEBUG nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.644 226109 WARNING nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received unexpected event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with vm_state active and task_state migrating.
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.644 226109 DEBUG nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.644 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.644 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.645 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.645 226109 DEBUG nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.645 226109 WARNING nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received unexpected event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with vm_state active and task_state migrating.
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.645 226109 DEBUG nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.645 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.646 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.646 226109 DEBUG oslo_concurrency.lockutils [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.646 226109 DEBUG nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:40 compute-1 nova_compute[226101]: 2025-12-06 07:03:40.646 226109 WARNING nova.compute.manager [req-ed2b4eec-6170-40b5-8b7e-2ff4ca4989b9 req-4cc1ea2c-3147-43ed-8eba-2863aa365519 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received unexpected event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with vm_state active and task_state migrating.
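
The Acquiring/acquired/released triplets above are oslo.concurrency's named-lock helper serializing access to the per-instance event queue: when no waiter has registered for an event, pop_instance_event finds nothing and Nova emits the "Received unexpected event" warning (expected here, since the instance is mid-migration). A minimal sketch of that pattern, assuming only the public oslo_concurrency API; the queue structure and handler body are hypothetical, not Nova's actual code:

    # Sketch of the per-instance event lock pattern seen in the log above.
    # pending_events and the _pop_event body are hypothetical; only the
    # lockutils.synchronized decorator is the real oslo_concurrency API.
    from oslo_concurrency import lockutils

    pending_events = {}  # instance_uuid -> {event_name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        @lockutils.synchronized(instance_uuid + '-events')
        def _pop_event():
            # A waiter exists only if the wait side registered first;
            # otherwise this returns None, which the manager reports as
            # "No waiting events found" followed by the warning above.
            return pending_events.get(instance_uuid, {}).pop(event_name, None)
        return _pop_event()
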
Dec 06 07:03:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:03:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:40.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:03:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:40.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:42 compute-1 ceph-mon[81689]: pgmap v1285: 305 pgs: 305 active+clean; 423 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Dec 06 07:03:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:42.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:42.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
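
The anonymous "HEAD / HTTP/1.0" requests landing every ~2 s from 192.168.122.100 and .102 look like periodic load-balancer health probes against the radosgw beast frontend; the prober's identity, and the host and port in this sketch, are assumptions:

    # Reproduces the anonymous HEAD probe pattern logged above.
    # Host and port are assumptions; RGW may listen elsewhere here.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.101", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 indicates the frontend is up
    conn.close()
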
Dec 06 07:03:43 compute-1 nova_compute[226101]: 2025-12-06 07:03:43.461 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:43 compute-1 nova_compute[226101]: 2025-12-06 07:03:43.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:43 compute-1 nova_compute[226101]: 2025-12-06 07:03:43.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:43 compute-1 nova_compute[226101]: 2025-12-06 07:03:43.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:03:43 compute-1 nova_compute[226101]: 2025-12-06 07:03:43.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:43 compute-1 nova_compute[226101]: 2025-12-06 07:03:43.640 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:43 compute-1 nova_compute[226101]: 2025-12-06 07:03:43.640 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:43 compute-1 nova_compute[226101]: 2025-12-06 07:03:43.640 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:43 compute-1 nova_compute[226101]: 2025-12-06 07:03:43.640 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:03:43 compute-1 nova_compute[226101]: 2025-12-06 07:03:43.641 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:03:44 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2765538678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.086 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
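
The resource audit sizes the RBD-backed storage by shelling out to ceph df (0.445 s here) and reading the JSON report. A minimal equivalent probe, assuming the ceph CLI is present; the keys read below come from the cluster-wide "stats" section of the --format=json output:

    # Equivalent of the "ceph df --format=json" probe logged above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 1024 ** 3)
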
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.173 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.173 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.176 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.176 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.179 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.179 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.333 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.334 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4231MB free_disk=20.830738067626953GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.335 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.335 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.451 226109 INFO nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Updating resource usage from migration 180afc56-4a01-4d3b-b7b6-eba0219610c0
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.478 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 09a05ccc-abca-47d8-8e32-6e53adb95d4d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.478 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 714f2e5b-135b-4f7e-9c62-3e1849c5e151 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.478 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 5cc7f5c4-dc50-4137-af58-60294ea57208 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.479 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Migration 180afc56-4a01-4d3b-b7b6-eba0219610c0 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.479 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.479 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.606 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:44.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:44 compute-1 ceph-mon[81689]: pgmap v1286: 305 pgs: 305 active+clean; 405 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Dec 06 07:03:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2765538678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/847334170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.811 226109 DEBUG nova.compute.manager [req-80468758-4b5a-4ac8-a195-1d3a1fb29486 req-f4e8ed7d-75f1-4eba-8457-7352e83880fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.812 226109 DEBUG oslo_concurrency.lockutils [req-80468758-4b5a-4ac8-a195-1d3a1fb29486 req-f4e8ed7d-75f1-4eba-8457-7352e83880fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.812 226109 DEBUG oslo_concurrency.lockutils [req-80468758-4b5a-4ac8-a195-1d3a1fb29486 req-f4e8ed7d-75f1-4eba-8457-7352e83880fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.812 226109 DEBUG oslo_concurrency.lockutils [req-80468758-4b5a-4ac8-a195-1d3a1fb29486 req-f4e8ed7d-75f1-4eba-8457-7352e83880fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.813 226109 DEBUG nova.compute.manager [req-80468758-4b5a-4ac8-a195-1d3a1fb29486 req-f4e8ed7d-75f1-4eba-8457-7352e83880fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] No waiting events found dispatching network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.813 226109 WARNING nova.compute.manager [req-80468758-4b5a-4ac8-a195-1d3a1fb29486 req-f4e8ed7d-75f1-4eba-8457-7352e83880fb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Received unexpected event network-vif-plugged-510079bd-5c00-4b8a-b2eb-d63f0ffc3d69 for instance with vm_state active and task_state migrating.
Dec 06 07:03:44 compute-1 nova_compute[226101]: 2025-12-06 07:03:44.862 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:44.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:03:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3834391616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.106 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.112 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.137 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
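
Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio, so the 8 physical vCPUs above advertise as 32 schedulable VCPU units. Worked out against the exact values logged:

    # Placement capacity formula applied to the inventory logged above:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1
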
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.173 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.173 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.568 226109 DEBUG oslo_concurrency.lockutils [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "5cc7f5c4-dc50-4137-af58-60294ea57208" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.568 226109 DEBUG oslo_concurrency.lockutils [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.568 226109 DEBUG oslo_concurrency.lockutils [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.569 226109 DEBUG oslo_concurrency.lockutils [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.569 226109 DEBUG oslo_concurrency.lockutils [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.570 226109 INFO nova.compute.manager [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Terminating instance
Dec 06 07:03:45 compute-1 nova_compute[226101]: 2025-12-06 07:03:45.571 226109 DEBUG nova.compute.manager [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:03:45 compute-1 sudo[237384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:45 compute-1 sudo[237384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:45 compute-1 sudo[237384]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:45 compute-1 sudo[237409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:03:45 compute-1 sudo[237409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:45 compute-1 sudo[237409]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2531916716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3834391616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1796179177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:46 compute-1 kernel: tapb7a77c83-fd (unregistering): left promiscuous mode
Dec 06 07:03:46 compute-1 NetworkManager[49031]: <info>  [1765004626.0298] device (tapb7a77c83-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.041 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:46 compute-1 ovn_controller[130279]: 2025-12-06T07:03:46Z|00098|binding|INFO|Releasing lport b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a from this chassis (sb_readonly=0)
Dec 06 07:03:46 compute-1 ovn_controller[130279]: 2025-12-06T07:03:46Z|00099|binding|INFO|Setting lport b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a down in Southbound
Dec 06 07:03:46 compute-1 ovn_controller[130279]: 2025-12-06T07:03:46Z|00100|binding|INFO|Removing iface tapb7a77c83-fd ovn-installed in OVS
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.045 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:d3:3d 10.100.0.11'], port_security=['fa:16:3e:cf:d3:3d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5cc7f5c4-dc50-4137-af58-60294ea57208', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.047 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a in datapath 9451d867-0aba-464d-b4d9-f947b887e903 unbound from our chassis
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.049 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.054 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.066 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0bad24d5-abc2-4de6-a307-7f86119c0d21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.092 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7d39bf29-4b83-4887-bd96-ba2590836abc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:46 compute-1 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000015.scope: Deactivated successfully.
Dec 06 07:03:46 compute-1 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000015.scope: Consumed 17.526s CPU time.
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.095 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1cf92e-ca60-4f1a-90bd-de2d285c3d24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:46 compute-1 systemd-machined[190302]: Machine qemu-11-instance-00000015 terminated.
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.123 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa2cf06-518b-4d3a-a0ce-4599dba66c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.137 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[22f7ecc9-ad69-44b3-85b3-60802b3df114]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479634, 'reachable_time': 43555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237445, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.152 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[789ef157-2ba7-4a82-9a3e-af9db8d203d1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479647, 'tstamp': 479647}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237446, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9451d867-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479651, 'tstamp': 479651}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237446, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.154 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.155 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.160 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.161 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9451d867-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.161 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.161 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9451d867-00, col_values=(('external_ids', {'iface-id': 'fed07814-3a76-4798-8d3b-90759d15a8cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:46.162 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
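
The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) map one-to-one onto ovs-vsctl operations; the agent issues them directly over the OVSDB IDL rather than the CLI, so this subprocess sketch only illustrates the same effect, with the bridge, port, and iface-id copied from the log:

    # ovs-vsctl equivalents of the transactions logged above (illustrative;
    # the metadata agent speaks the OVSDB protocol, not the CLI).
    import subprocess

    def vsctl(*args):
        subprocess.run(["ovs-vsctl", *args], check=True)

    vsctl("--if-exists", "del-port", "br-ex", "tap9451d867-00")   # DelPortCommand
    vsctl("--may-exist", "add-port", "br-int", "tap9451d867-00")  # AddPortCommand
    vsctl("set", "Interface", "tap9451d867-00",                   # DbSetCommand
          "external_ids:iface-id=fed07814-3a76-4798-8d3b-90759d15a8cf")
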
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.167 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.167 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.167 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.195 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.208 226109 INFO nova.virt.libvirt.driver [-] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Instance destroyed successfully.
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.209 226109 DEBUG nova.objects.instance [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'resources' on Instance uuid 5cc7f5c4-dc50-4137-af58-60294ea57208 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.230 226109 DEBUG nova.virt.libvirt.vif [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:02:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1006486405',display_name='tempest-ServersAdminTestJSON-server-1006486405',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1006486405',id=21,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:02:26Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-teupi66o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:02:26Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=5cc7f5c4-dc50-4137-af58-60294ea57208,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "address": "fa:16:3e:cf:d3:3d", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7a77c83-fd", "ovs_interfaceid": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.231 226109 DEBUG nova.network.os_vif_util [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "address": "fa:16:3e:cf:d3:3d", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7a77c83-fd", "ovs_interfaceid": "b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.232 226109 DEBUG nova.network.os_vif_util [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:d3:3d,bridge_name='br-int',has_traffic_filtering=True,id=b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7a77c83-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.233 226109 DEBUG os_vif [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:d3:3d,bridge_name='br-int',has_traffic_filtering=True,id=b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7a77c83-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.235 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.236 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7a77c83-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.238 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.239 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.241 226109 INFO os_vif [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:d3:3d,bridge_name='br-int',has_traffic_filtering=True,id=b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7a77c83-fd')
Dec 06 07:03:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:03:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:46.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.717 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.719 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:03:46 compute-1 nova_compute[226101]: 2025-12-06 07:03:46.719 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:03:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:46.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:47 compute-1 ceph-mon[81689]: pgmap v1287: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 157 op/s
Dec 06 07:03:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.783 226109 DEBUG nova.compute.manager [req-3ab4474b-d8b1-4429-a490-3e79cc23fd7b req-467f395a-78e2-47af-8bcb-b25211d42241 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Received event network-vif-unplugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.784 226109 DEBUG oslo_concurrency.lockutils [req-3ab4474b-d8b1-4429-a490-3e79cc23fd7b req-467f395a-78e2-47af-8bcb-b25211d42241 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.784 226109 DEBUG oslo_concurrency.lockutils [req-3ab4474b-d8b1-4429-a490-3e79cc23fd7b req-467f395a-78e2-47af-8bcb-b25211d42241 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.785 226109 DEBUG oslo_concurrency.lockutils [req-3ab4474b-d8b1-4429-a490-3e79cc23fd7b req-467f395a-78e2-47af-8bcb-b25211d42241 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.785 226109 DEBUG nova.compute.manager [req-3ab4474b-d8b1-4429-a490-3e79cc23fd7b req-467f395a-78e2-47af-8bcb-b25211d42241 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] No waiting events found dispatching network-vif-unplugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.785 226109 DEBUG nova.compute.manager [req-3ab4474b-d8b1-4429-a490-3e79cc23fd7b req-467f395a-78e2-47af-8bcb-b25211d42241 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Received event network-vif-unplugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.887 226109 INFO nova.virt.libvirt.driver [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Deleting instance files /var/lib/nova/instances/5cc7f5c4-dc50-4137-af58-60294ea57208_del
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.888 226109 INFO nova.virt.libvirt.driver [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Deletion of /var/lib/nova/instances/5cc7f5c4-dc50-4137-af58-60294ea57208_del complete
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.979 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.979 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.980 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "4dd1863a-3cb1-4e42-8611-ed9587a0b6ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:47 compute-1 nova_compute[226101]: 2025-12-06 07:03:47.999 226109 INFO nova.compute.manager [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Took 2.43 seconds to destroy the instance on the hypervisor.
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.000 226109 DEBUG oslo.service.loopingcall [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.000 226109 DEBUG nova.compute.manager [-] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.000 226109 DEBUG nova.network.neutron [-] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.019 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.019 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.020 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.020 226109 DEBUG nova.compute.resource_tracker [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.020 226109 DEBUG oslo_concurrency.processutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:48 compute-1 ceph-mon[81689]: pgmap v1288: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 127 op/s
Dec 06 07:03:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:03:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3055950552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.464 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.468 226109 DEBUG oslo_concurrency.processutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.623 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.624 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.627 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.627 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:03:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:03:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:48.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.833 226109 WARNING nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.834 226109 DEBUG nova.compute.resource_tracker [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4386MB free_disk=20.83074188232422GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.834 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.835 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.899 226109 DEBUG nova.compute.resource_tracker [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Migration for instance 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 07:03:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:03:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:48.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.931 226109 DEBUG nova.compute.resource_tracker [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.952 226109 DEBUG nova.compute.resource_tracker [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Instance 09a05ccc-abca-47d8-8e32-6e53adb95d4d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.953 226109 DEBUG nova.compute.resource_tracker [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Instance 714f2e5b-135b-4f7e-9c62-3e1849c5e151 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.953 226109 DEBUG nova.compute.resource_tracker [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Instance 5cc7f5c4-dc50-4137-af58-60294ea57208 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.953 226109 DEBUG nova.compute.resource_tracker [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Migration 180afc56-4a01-4d3b-b7b6-eba0219610c0 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.953 226109 DEBUG nova.compute.resource_tracker [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:03:48 compute-1 nova_compute[226101]: 2025-12-06 07:03:48.953 226109 DEBUG nova.compute.resource_tracker [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.043 226109 DEBUG oslo_concurrency.processutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3055950552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2535004301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/948676523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:03:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/729169714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.493 226109 DEBUG oslo_concurrency.processutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.499 226109 DEBUG nova.compute.provider_tree [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.516 226109 DEBUG nova.scheduler.client.report [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.556 226109 DEBUG nova.compute.resource_tracker [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.557 226109 DEBUG oslo_concurrency.lockutils [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.563 226109 INFO nova.compute.manager [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Migrating instance to compute-2.ctlplane.example.com finished successfully.
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.668 226109 INFO nova.scheduler.client.report [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Deleted allocation for migration 180afc56-4a01-4d3b-b7b6-eba0219610c0
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.669 226109 DEBUG nova.virt.libvirt.driver [None req-5fc21d04-82e3-43dc-91cc-cf8bb9b7a810 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.827 226109 DEBUG nova.network.neutron [-] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.863 226109 INFO nova.compute.manager [-] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Took 1.86 seconds to deallocate network for instance.
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.914 226109 DEBUG oslo_concurrency.lockutils [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.914 226109 DEBUG oslo_concurrency.lockutils [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.967 226109 DEBUG nova.compute.manager [req-a874328c-d7a1-4881-98d3-ac9c2259fc79 req-462eb5df-08f8-4299-88bf-28c7fe3e9e69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Received event network-vif-plugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.968 226109 DEBUG oslo_concurrency.lockutils [req-a874328c-d7a1-4881-98d3-ac9c2259fc79 req-462eb5df-08f8-4299-88bf-28c7fe3e9e69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.968 226109 DEBUG oslo_concurrency.lockutils [req-a874328c-d7a1-4881-98d3-ac9c2259fc79 req-462eb5df-08f8-4299-88bf-28c7fe3e9e69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.968 226109 DEBUG oslo_concurrency.lockutils [req-a874328c-d7a1-4881-98d3-ac9c2259fc79 req-462eb5df-08f8-4299-88bf-28c7fe3e9e69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.968 226109 DEBUG nova.compute.manager [req-a874328c-d7a1-4881-98d3-ac9c2259fc79 req-462eb5df-08f8-4299-88bf-28c7fe3e9e69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] No waiting events found dispatching network-vif-plugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.969 226109 WARNING nova.compute.manager [req-a874328c-d7a1-4881-98d3-ac9c2259fc79 req-462eb5df-08f8-4299-88bf-28c7fe3e9e69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Received unexpected event network-vif-plugged-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a for instance with vm_state deleted and task_state None.
Dec 06 07:03:49 compute-1 nova_compute[226101]: 2025-12-06 07:03:49.994 226109 DEBUG oslo_concurrency.processutils [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.038 226109 DEBUG nova.compute.manager [req-c19e76be-3679-498a-ae45-abee87440d20 req-5dfe6a9d-277f-4d9c-85f0-2d5a6d85de92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Received event network-vif-deleted-b7a77c83-fd9d-452b-8a7d-bd5fa4125c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.201 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Updating instance_info_cache with network_info: [{"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.220 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.220 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.221 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.221 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.222 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.222 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:03:50 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/828642717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.428 226109 DEBUG oslo_concurrency.processutils [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.433 226109 DEBUG nova.compute.provider_tree [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.447 226109 DEBUG nova.scheduler.client.report [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.473 226109 DEBUG oslo_concurrency.lockutils [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.499 226109 INFO nova.scheduler.client.report [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Deleted allocations for instance 5cc7f5c4-dc50-4137-af58-60294ea57208
Dec 06 07:03:50 compute-1 ovn_controller[130279]: 2025-12-06T07:03:50Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:2f:12 10.100.0.5
Dec 06 07:03:50 compute-1 ovn_controller[130279]: 2025-12-06T07:03:50Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:2f:12 10.100.0.5
Dec 06 07:03:50 compute-1 nova_compute[226101]: 2025-12-06 07:03:50.605 226109 DEBUG oslo_concurrency.lockutils [None req-db29f879-8228-4d49-a50b-552867535539 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "5cc7f5c4-dc50-4137-af58-60294ea57208" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:50.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:50 compute-1 ceph-mon[81689]: pgmap v1289: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 99 op/s
Dec 06 07:03:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/729169714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2473430784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:50.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:51 compute-1 nova_compute[226101]: 2025-12-06 07:03:51.238 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:52 compute-1 podman[237545]: 2025-12-06 07:03:52.100307781 +0000 UTC m=+0.082435271 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:03:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/828642717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:52.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:52.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:53 compute-1 ceph-mon[81689]: pgmap v1290: 305 pgs: 305 active+clean; 397 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 171 op/s
Dec 06 07:03:53 compute-1 nova_compute[226101]: 2025-12-06 07:03:53.303 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004618.3023102, 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:53 compute-1 nova_compute[226101]: 2025-12-06 07:03:53.303 226109 INFO nova.compute.manager [-] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] VM Stopped (Lifecycle Event)
Dec 06 07:03:53 compute-1 nova_compute[226101]: 2025-12-06 07:03:53.324 226109 DEBUG nova.compute.manager [None req-2171adab-20ac-46f7-b24a-98c4df6512c2 - - - - - -] [instance: 4dd1863a-3cb1-4e42-8611-ed9587a0b6ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:53 compute-1 nova_compute[226101]: 2025-12-06 07:03:53.465 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:54 compute-1 ceph-mon[81689]: pgmap v1291: 305 pgs: 305 active+clean; 397 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.8 MiB/s wr, 147 op/s
Dec 06 07:03:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:54.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:54.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:03:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2320804594' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:03:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:03:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2320804594' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:03:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/761473893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3015818047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2320804594' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:03:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2320804594' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:03:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3510907800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:55 compute-1 nova_compute[226101]: 2025-12-06 07:03:55.826 226109 DEBUG oslo_concurrency.lockutils [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:55 compute-1 nova_compute[226101]: 2025-12-06 07:03:55.827 226109 DEBUG oslo_concurrency.lockutils [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:55 compute-1 nova_compute[226101]: 2025-12-06 07:03:55.827 226109 DEBUG oslo_concurrency.lockutils [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:55 compute-1 nova_compute[226101]: 2025-12-06 07:03:55.827 226109 DEBUG oslo_concurrency.lockutils [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:55 compute-1 nova_compute[226101]: 2025-12-06 07:03:55.827 226109 DEBUG oslo_concurrency.lockutils [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:55 compute-1 nova_compute[226101]: 2025-12-06 07:03:55.828 226109 INFO nova.compute.manager [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Terminating instance
Dec 06 07:03:55 compute-1 nova_compute[226101]: 2025-12-06 07:03:55.829 226109 DEBUG nova.compute.manager [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:03:55 compute-1 kernel: tap8ba0fb02-de (unregistering): left promiscuous mode
Dec 06 07:03:55 compute-1 NetworkManager[49031]: <info>  [1765004635.9949] device (tap8ba0fb02-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:03:56 compute-1 ovn_controller[130279]: 2025-12-06T07:03:56Z|00101|binding|INFO|Releasing lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b from this chassis (sb_readonly=0)
Dec 06 07:03:56 compute-1 ovn_controller[130279]: 2025-12-06T07:03:56Z|00102|binding|INFO|Setting lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b down in Southbound
Dec 06 07:03:56 compute-1 ovn_controller[130279]: 2025-12-06T07:03:56Z|00103|binding|INFO|Releasing lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 from this chassis (sb_readonly=0)
Dec 06 07:03:56 compute-1 ovn_controller[130279]: 2025-12-06T07:03:56Z|00104|binding|INFO|Setting lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 down in Southbound
Dec 06 07:03:56 compute-1 ovn_controller[130279]: 2025-12-06T07:03:56Z|00105|binding|INFO|Removing iface tap8ba0fb02-de ovn-installed in OVS
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.006 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:56 compute-1 ovn_controller[130279]: 2025-12-06T07:03:56Z|00106|binding|INFO|Releasing lport a8b489de-cf80-4c12-869a-5e807cdbba8c from this chassis (sb_readonly=0)
Dec 06 07:03:56 compute-1 ovn_controller[130279]: 2025-12-06T07:03:56Z|00107|binding|INFO|Releasing lport fed07814-3a76-4798-8d3b-90759d15a8cf from this chassis (sb_readonly=0)
Dec 06 07:03:56 compute-1 ovn_controller[130279]: 2025-12-06T07:03:56Z|00108|binding|INFO|Releasing lport 46c2af8b-f787-403f-aef5-72ec2f87b6fc from this chassis (sb_readonly=0)
Dec 06 07:03:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:56.015 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:d9:12 19.80.0.179'], port_security=['fa:16:3e:cb:d9:12 19.80.0.179'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1920716484', 'neutron:cidrs': '19.80.0.179/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1920716484', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '7', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c735218e-8fbf-4d82-b453-bc3944800b8e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:56.018 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:c8:b4 10.100.0.14'], port_security=['fa:16:3e:3d:c8:b4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1426450350', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '714f2e5b-135b-4f7e-9c62-3e1849c5e151', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1426450350', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '15', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:56.020 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 in datapath 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed unbound from our chassis
Dec 06 07:03:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:56.024 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:03:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:56.025 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac2d191-db82-4f11-b344-b717a94d1004]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:56.026 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed namespace which is not needed anymore
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.033 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:56 compute-1 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000011.scope: Deactivated successfully.
Dec 06 07:03:56 compute-1 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000011.scope: Consumed 5.228s CPU time.
Dec 06 07:03:56 compute-1 systemd-machined[190302]: Machine qemu-9-instance-00000011 terminated.
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.163 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.248 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.251 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.261 226109 INFO nova.virt.libvirt.driver [-] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Instance destroyed successfully.
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.261 226109 DEBUG nova.objects.instance [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lazy-loading 'resources' on Instance uuid 714f2e5b-135b-4f7e-9c62-3e1849c5e151 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.302 226109 DEBUG nova.virt.libvirt.vif [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-720825537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-720825537',id=17,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:01:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-od28yehe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:02:20Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=714f2e5b-135b-4f7e-9c62-3e1849c5e151,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.303 226109 DEBUG nova.network.os_vif_util [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converting VIF {"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.303 226109 DEBUG nova.network.os_vif_util [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.304 226109 DEBUG os_vif [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.305 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.305 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ba0fb02-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.306 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.309 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.312 226109 INFO os_vif [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de')
Dec 06 07:03:56 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235549]: [NOTICE]   (235553) : haproxy version is 2.8.14-c23fe91
Dec 06 07:03:56 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235549]: [NOTICE]   (235553) : path to executable is /usr/sbin/haproxy
Dec 06 07:03:56 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235549]: [WARNING]  (235553) : Exiting Master process...
Dec 06 07:03:56 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235549]: [WARNING]  (235553) : Exiting Master process...
Dec 06 07:03:56 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235549]: [ALERT]    (235553) : Current worker (235555) exited with code 143 (Terminated)
Dec 06 07:03:56 compute-1 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[235549]: [WARNING]  (235553) : All workers exited. Exiting... (0)
Dec 06 07:03:56 compute-1 systemd[1]: libpod-8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95.scope: Deactivated successfully.
Dec 06 07:03:56 compute-1 podman[237597]: 2025-12-06 07:03:56.368504734 +0000 UTC m=+0.187953876 container died 8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.461 226109 DEBUG oslo_concurrency.lockutils [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.462 226109 DEBUG oslo_concurrency.lockutils [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.462 226109 DEBUG oslo_concurrency.lockutils [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.462 226109 DEBUG oslo_concurrency.lockutils [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.462 226109 DEBUG oslo_concurrency.lockutils [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.463 226109 INFO nova.compute.manager [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Terminating instance
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.464 226109 DEBUG nova.compute.manager [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:03:56 compute-1 ceph-mon[81689]: pgmap v1292: 305 pgs: 305 active+clean; 346 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 386 KiB/s rd, 3.8 MiB/s wr, 165 op/s
Dec 06 07:03:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3527159582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:56.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:56 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95-userdata-shm.mount: Deactivated successfully.
Dec 06 07:03:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-1f67531613c60a361bdc4d48b6a2fb49712313fe4ae9b2a0bac386b5fbf6b4c0-merged.mount: Deactivated successfully.
Dec 06 07:03:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:56.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:56 compute-1 kernel: tap0e3b84b1-ee (unregistering): left promiscuous mode
Dec 06 07:03:56 compute-1 NetworkManager[49031]: <info>  [1765004636.9628] device (tap0e3b84b1-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:03:56 compute-1 podman[237597]: 2025-12-06 07:03:56.966309636 +0000 UTC m=+0.785758788 container cleanup 8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:03:56 compute-1 ovn_controller[130279]: 2025-12-06T07:03:56Z|00109|binding|INFO|Releasing lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 from this chassis (sb_readonly=0)
Dec 06 07:03:56 compute-1 ovn_controller[130279]: 2025-12-06T07:03:56Z|00110|binding|INFO|Setting lport 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 down in Southbound
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.972 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:56 compute-1 ovn_controller[130279]: 2025-12-06T07:03:56Z|00111|binding|INFO|Removing iface tap0e3b84b1-ee ovn-installed in OVS
Dec 06 07:03:56 compute-1 systemd[1]: libpod-conmon-8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95.scope: Deactivated successfully.
Dec 06 07:03:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:56.979 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:2f:12 10.100.0.5'], port_security=['fa:16:3e:7b:2f:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '09a05ccc-abca-47d8-8e32-6e53adb95d4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=0e3b84b1-ee05-4dbf-a372-b2a404592bf9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:56 compute-1 nova_compute[226101]: 2025-12-06 07:03:56.991 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:57 compute-1 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000013.scope: Deactivated successfully.
Dec 06 07:03:57 compute-1 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000013.scope: Consumed 14.183s CPU time.
Dec 06 07:03:57 compute-1 systemd-machined[190302]: Machine qemu-14-instance-00000013 terminated.
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.042 226109 DEBUG nova.compute.manager [req-be47d269-5394-4cfa-a5b1-97e41baec233 req-27f82b5a-f168-49a2-b1a2-d6aa15bdeead 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.042 226109 DEBUG oslo_concurrency.lockutils [req-be47d269-5394-4cfa-a5b1-97e41baec233 req-27f82b5a-f168-49a2-b1a2-d6aa15bdeead 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.043 226109 DEBUG oslo_concurrency.lockutils [req-be47d269-5394-4cfa-a5b1-97e41baec233 req-27f82b5a-f168-49a2-b1a2-d6aa15bdeead 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.043 226109 DEBUG oslo_concurrency.lockutils [req-be47d269-5394-4cfa-a5b1-97e41baec233 req-27f82b5a-f168-49a2-b1a2-d6aa15bdeead 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.043 226109 DEBUG nova.compute.manager [req-be47d269-5394-4cfa-a5b1-97e41baec233 req-27f82b5a-f168-49a2-b1a2-d6aa15bdeead 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.043 226109 DEBUG nova.compute.manager [req-be47d269-5394-4cfa-a5b1-97e41baec233 req-27f82b5a-f168-49a2-b1a2-d6aa15bdeead 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:03:57 compute-1 NetworkManager[49031]: <info>  [1765004637.0803] manager: (tap0e3b84b1-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.081 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.085 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.097 226109 INFO nova.virt.libvirt.driver [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Instance destroyed successfully.
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.097 226109 DEBUG nova.objects.instance [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'resources' on Instance uuid 09a05ccc-abca-47d8-8e32-6e53adb95d4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.254 226109 DEBUG nova.virt.libvirt.vif [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1679126586',display_name='tempest-ServersAdminTestJSON-server-1679126586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1679126586',id=19,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ujgtrz05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:03:38Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=09a05ccc-abca-47d8-8e32-6e53adb95d4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.254 226109 DEBUG nova.network.os_vif_util [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "address": "fa:16:3e:7b:2f:12", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e3b84b1-ee", "ovs_interfaceid": "0e3b84b1-ee05-4dbf-a372-b2a404592bf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.255 226109 DEBUG nova.network.os_vif_util [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.256 226109 DEBUG os_vif [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.258 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.258 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e3b84b1-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.261 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.263 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.265 226109 INFO os_vif [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:2f:12,bridge_name='br-int',has_traffic_filtering=True,id=0e3b84b1-ee05-4dbf-a372-b2a404592bf9,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e3b84b1-ee')
Dec 06 07:03:57 compute-1 podman[237658]: 2025-12-06 07:03:57.379201135 +0000 UTC m=+0.385951492 container remove 8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.384 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7b09e2ae-1c06-4a55-9ad1-bc3562b398c2]: (4, ('Sat Dec  6 07:03:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed (8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95)\n8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95\nSat Dec  6 07:03:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed (8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95)\n8893d37eaaeb0143d24be008c87332b97cfa6f16b1904f344945e265004e9d95\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.386 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3397b617-7a9f-4bfe-8455-19b9d3c787e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.388 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a9cf9f3-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:57 compute-1 kernel: tap4a9cf9f3-d0: left promiscuous mode
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.390 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.406 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.408 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[21c3e863-9cf0-45ee-a873-dcb3ba304339]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.434 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4a6367-cd3c-4601-9e0a-8d84d97657ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.435 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7bf068-8ed2-4880-a768-18f839d954f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.449 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[108a072c-6115-4b19-8206-721b6fbc952f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481922, 'reachable_time': 25894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237702, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.451 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:03:57 compute-1 systemd[1]: run-netns-ovnmeta\x2d4a9cf9f3\x2dd63a\x2d4198\x2da2a7\x2db24331e0d8ed.mount: Deactivated successfully.
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.452 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[89379b86-b2c8-4807-8cfd-aab81c9d310e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.453 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 unbound from our chassis
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.455 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dfa287bf-10c3-40fc-8071-37bb7f801357, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.455 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8cfb1f-cf00-4746-a13a-bbe8b7be206e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:57.456 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 namespace which is not needed anymore
Dec 06 07:03:57 compute-1 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[235623]: [NOTICE]   (235627) : haproxy version is 2.8.14-c23fe91
Dec 06 07:03:57 compute-1 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[235623]: [NOTICE]   (235627) : path to executable is /usr/sbin/haproxy
Dec 06 07:03:57 compute-1 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[235623]: [WARNING]  (235627) : Exiting Master process...
Dec 06 07:03:57 compute-1 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[235623]: [WARNING]  (235627) : Exiting Master process...
Dec 06 07:03:57 compute-1 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[235623]: [ALERT]    (235627) : Current worker (235629) exited with code 143 (Terminated)
Dec 06 07:03:57 compute-1 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[235623]: [WARNING]  (235627) : All workers exited. Exiting... (0)
Dec 06 07:03:57 compute-1 systemd[1]: libpod-13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863.scope: Deactivated successfully.
Dec 06 07:03:57 compute-1 podman[237720]: 2025-12-06 07:03:57.61301014 +0000 UTC m=+0.078797742 container died 13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:03:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:57 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863-userdata-shm.mount: Deactivated successfully.
Dec 06 07:03:57 compute-1 systemd[1]: var-lib-containers-storage-overlay-6893fcdd7b343bdd3dd9a29330912c9b0e5a69dc2fe13e661381c8c172a2c97d-merged.mount: Deactivated successfully.
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.821 226109 DEBUG nova.compute.manager [req-6ab78bee-54d6-4e6c-b719-5f065ebee48f req-856977ea-81fc-4296-8b81-29af52b3383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-unplugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.822 226109 DEBUG oslo_concurrency.lockutils [req-6ab78bee-54d6-4e6c-b719-5f065ebee48f req-856977ea-81fc-4296-8b81-29af52b3383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.822 226109 DEBUG oslo_concurrency.lockutils [req-6ab78bee-54d6-4e6c-b719-5f065ebee48f req-856977ea-81fc-4296-8b81-29af52b3383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.822 226109 DEBUG oslo_concurrency.lockutils [req-6ab78bee-54d6-4e6c-b719-5f065ebee48f req-856977ea-81fc-4296-8b81-29af52b3383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.823 226109 DEBUG nova.compute.manager [req-6ab78bee-54d6-4e6c-b719-5f065ebee48f req-856977ea-81fc-4296-8b81-29af52b3383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] No waiting events found dispatching network-vif-unplugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:57 compute-1 nova_compute[226101]: 2025-12-06 07:03:57.823 226109 DEBUG nova.compute.manager [req-6ab78bee-54d6-4e6c-b719-5f065ebee48f req-856977ea-81fc-4296-8b81-29af52b3383e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-unplugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:03:57 compute-1 podman[237720]: 2025-12-06 07:03:57.860394963 +0000 UTC m=+0.326182565 container cleanup 13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:03:57 compute-1 systemd[1]: libpod-conmon-13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863.scope: Deactivated successfully.
Dec 06 07:03:58 compute-1 podman[237751]: 2025-12-06 07:03:58.150222734 +0000 UTC m=+0.271889896 container remove 13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.155 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d73c6131-8a81-484b-a8ff-5e5ab061a009]: (4, ('Sat Dec  6 07:03:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 (13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863)\n13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863\nSat Dec  6 07:03:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 (13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863)\n13d8f4d573b979bb8462451dd824424a871f82a9581c2540a3155325d08ea863\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.156 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[33358512-5181-4ad2-966d-185b09f5d64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.157 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfa287bf-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:58 compute-1 nova_compute[226101]: 2025-12-06 07:03:58.158 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:58 compute-1 kernel: tapdfa287bf-10: left promiscuous mode
Dec 06 07:03:58 compute-1 nova_compute[226101]: 2025-12-06 07:03:58.172 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:58 compute-1 nova_compute[226101]: 2025-12-06 07:03:58.173 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.175 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c129ff84-4e91-4b25-9708-67dc70ea0f73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.194 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c54b5d25-b8a6-425f-ad07-e5faf3672dfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.195 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1896d2e6-82f7-41c7-b327-cc876e7fc929]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.211 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c41b9b11-6033-4862-8d94-dd39f1647f70]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482001, 'reachable_time': 22523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237766, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.212 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.212 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6df052-6557-4ad9-819f-afdc418d919d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.213 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 0e3b84b1-ee05-4dbf-a372-b2a404592bf9 in datapath 9451d867-0aba-464d-b4d9-f947b887e903 unbound from our chassis
Dec 06 07:03:58 compute-1 systemd[1]: run-netns-ovnmeta\x2ddfa287bf\x2d10c3\x2d40fc\x2d8071\x2d37bb7f801357.mount: Deactivated successfully.
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.214 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9451d867-0aba-464d-b4d9-f947b887e903, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.215 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6562763c-7bbd-4a5f-9d41-d5ae1d567d82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:58.215 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 namespace which is not needed anymore
Dec 06 07:03:58 compute-1 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[234760]: [NOTICE]   (234764) : haproxy version is 2.8.14-c23fe91
Dec 06 07:03:58 compute-1 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[234760]: [NOTICE]   (234764) : path to executable is /usr/sbin/haproxy
Dec 06 07:03:58 compute-1 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[234760]: [WARNING]  (234764) : Exiting Master process...
Dec 06 07:03:58 compute-1 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[234760]: [ALERT]    (234764) : Current worker (234766) exited with code 143 (Terminated)
Dec 06 07:03:58 compute-1 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[234760]: [WARNING]  (234764) : All workers exited. Exiting... (0)
Dec 06 07:03:58 compute-1 systemd[1]: libpod-c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93.scope: Deactivated successfully.
Dec 06 07:03:58 compute-1 podman[237785]: 2025-12-06 07:03:58.382488107 +0000 UTC m=+0.096735038 container died c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:03:58 compute-1 nova_compute[226101]: 2025-12-06 07:03:58.466 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:58 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93-userdata-shm.mount: Deactivated successfully.
Dec 06 07:03:58 compute-1 systemd[1]: var-lib-containers-storage-overlay-70562644c80cfddda4f17cf0e108617f615e580ed262d789f30e61305bfccaef-merged.mount: Deactivated successfully.
Dec 06 07:03:58 compute-1 ceph-mon[81689]: pgmap v1293: 305 pgs: 305 active+clean; 246 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 176 op/s
Dec 06 07:03:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:58.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:58 compute-1 podman[237785]: 2025-12-06 07:03:58.778722866 +0000 UTC m=+0.492969787 container cleanup c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:03:58 compute-1 systemd[1]: libpod-conmon-c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93.scope: Deactivated successfully.
Dec 06 07:03:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:03:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:58.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:59 compute-1 podman[237816]: 2025-12-06 07:03:59.151404647 +0000 UTC m=+0.353693999 container remove c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:03:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:59.157 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0f058ca4-00f1-4aeb-9331-4d18de020841]: (4, ('Sat Dec  6 07:03:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 (c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93)\nc17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93\nSat Dec  6 07:03:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 (c17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93)\nc17a0adc08371bc3fc4736193b65e2698491521cb244a32614f28e45b50b9e93\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:59.159 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbb09ec-abe2-43e4-8265-e4be6b7c2e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:59.160 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.161 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:59 compute-1 kernel: tap9451d867-00: left promiscuous mode
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.176 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:59.179 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d137be-001e-42fe-9219-058b8d8f3383]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:59.203 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[97ecde96-448c-4aaa-924e-ae7e5b148f70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.204 226109 DEBUG nova.compute.manager [req-d5cae0d0-b82b-4e71-ac22-85dc125d3958 req-1addc40f-5cf5-42cd-a75a-a099c8799820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.204 226109 DEBUG oslo_concurrency.lockutils [req-d5cae0d0-b82b-4e71-ac22-85dc125d3958 req-1addc40f-5cf5-42cd-a75a-a099c8799820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.204 226109 DEBUG oslo_concurrency.lockutils [req-d5cae0d0-b82b-4e71-ac22-85dc125d3958 req-1addc40f-5cf5-42cd-a75a-a099c8799820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.204 226109 DEBUG oslo_concurrency.lockutils [req-d5cae0d0-b82b-4e71-ac22-85dc125d3958 req-1addc40f-5cf5-42cd-a75a-a099c8799820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.205 226109 DEBUG nova.compute.manager [req-d5cae0d0-b82b-4e71-ac22-85dc125d3958 req-1addc40f-5cf5-42cd-a75a-a099c8799820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.205 226109 WARNING nova.compute.manager [req-d5cae0d0-b82b-4e71-ac22-85dc125d3958 req-1addc40f-5cf5-42cd-a75a-a099c8799820 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received unexpected event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with vm_state active and task_state deleting.
Dec 06 07:03:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:59.205 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb4afed-d587-4cfa-833d-4fac96284aca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:59.218 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[781968f3-3666-4ef8-bbe1-f7887f5f3b7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479627, 'reachable_time': 39355, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237834, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:59.221 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:03:59 compute-1 systemd[1]: run-netns-ovnmeta\x2d9451d867\x2d0aba\x2d464d\x2db4d9\x2df947b887e903.mount: Deactivated successfully.
Dec 06 07:03:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:03:59.221 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[c0df0124-40de-49a6-b09f-6cb2db76b854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.993 226109 DEBUG nova.compute.manager [req-5497b308-09fe-4263-8948-acc8a6c4da08 req-c9f9db6e-7654-48b5-8568-b6d6d6542e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.994 226109 DEBUG oslo_concurrency.lockutils [req-5497b308-09fe-4263-8948-acc8a6c4da08 req-c9f9db6e-7654-48b5-8568-b6d6d6542e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.994 226109 DEBUG oslo_concurrency.lockutils [req-5497b308-09fe-4263-8948-acc8a6c4da08 req-c9f9db6e-7654-48b5-8568-b6d6d6542e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.994 226109 DEBUG oslo_concurrency.lockutils [req-5497b308-09fe-4263-8948-acc8a6c4da08 req-c9f9db6e-7654-48b5-8568-b6d6d6542e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.994 226109 DEBUG nova.compute.manager [req-5497b308-09fe-4263-8948-acc8a6c4da08 req-c9f9db6e-7654-48b5-8568-b6d6d6542e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] No waiting events found dispatching network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:59 compute-1 nova_compute[226101]: 2025-12-06 07:03:59.995 226109 WARNING nova.compute.manager [req-5497b308-09fe-4263-8948-acc8a6c4da08 req-c9f9db6e-7654-48b5-8568-b6d6d6542e0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received unexpected event network-vif-plugged-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 for instance with vm_state active and task_state deleting.
Dec 06 07:04:00 compute-1 podman[237837]: 2025-12-06 07:04:00.073482981 +0000 UTC m=+0.058638267 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:04:00 compute-1 podman[237838]: 2025-12-06 07:04:00.087790969 +0000 UTC m=+0.070664743 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:04:00 compute-1 nova_compute[226101]: 2025-12-06 07:04:00.546 226109 INFO nova.virt.libvirt.driver [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Deleting instance files /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d_del
Dec 06 07:04:00 compute-1 nova_compute[226101]: 2025-12-06 07:04:00.546 226109 INFO nova.virt.libvirt.driver [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Deletion of /var/lib/nova/instances/09a05ccc-abca-47d8-8e32-6e53adb95d4d_del complete
Dec 06 07:04:00 compute-1 nova_compute[226101]: 2025-12-06 07:04:00.631 226109 INFO nova.compute.manager [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Took 4.17 seconds to destroy the instance on the hypervisor.
Dec 06 07:04:00 compute-1 nova_compute[226101]: 2025-12-06 07:04:00.632 226109 DEBUG oslo.service.loopingcall [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:04:00 compute-1 nova_compute[226101]: 2025-12-06 07:04:00.632 226109 DEBUG nova.compute.manager [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:04:00 compute-1 nova_compute[226101]: 2025-12-06 07:04:00.632 226109 DEBUG nova.network.neutron [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:04:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:00.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:00 compute-1 ceph-mon[81689]: pgmap v1294: 305 pgs: 305 active+clean; 246 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 176 op/s
Dec 06 07:04:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:00.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:01 compute-1 nova_compute[226101]: 2025-12-06 07:04:01.190 226109 INFO nova.virt.libvirt.driver [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Deleting instance files /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151_del
Dec 06 07:04:01 compute-1 nova_compute[226101]: 2025-12-06 07:04:01.191 226109 INFO nova.virt.libvirt.driver [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Deletion of /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151_del complete
Dec 06 07:04:01 compute-1 nova_compute[226101]: 2025-12-06 07:04:01.201 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004626.2008781, 5cc7f5c4-dc50-4137-af58-60294ea57208 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:01 compute-1 nova_compute[226101]: 2025-12-06 07:04:01.202 226109 INFO nova.compute.manager [-] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] VM Stopped (Lifecycle Event)
Dec 06 07:04:01 compute-1 nova_compute[226101]: 2025-12-06 07:04:01.235 226109 DEBUG nova.compute.manager [None req-19ecdd6a-d083-4c65-b4e1-5e7bd8f2947a - - - - - -] [instance: 5cc7f5c4-dc50-4137-af58-60294ea57208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:01 compute-1 nova_compute[226101]: 2025-12-06 07:04:01.263 226109 INFO nova.compute.manager [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Took 5.43 seconds to destroy the instance on the hypervisor.
Dec 06 07:04:01 compute-1 nova_compute[226101]: 2025-12-06 07:04:01.263 226109 DEBUG oslo.service.loopingcall [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:04:01 compute-1 nova_compute[226101]: 2025-12-06 07:04:01.263 226109 DEBUG nova.compute.manager [-] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:04:01 compute-1 nova_compute[226101]: 2025-12-06 07:04:01.263 226109 DEBUG nova.network.neutron [-] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:04:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:01.619 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:01.619 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:01.620 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1303928982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:02 compute-1 nova_compute[226101]: 2025-12-06 07:04:02.262 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:02.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:02 compute-1 ceph-mon[81689]: pgmap v1295: 305 pgs: 305 active+clean; 134 MiB data, 422 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 247 op/s
Dec 06 07:04:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:02.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:02 compute-1 nova_compute[226101]: 2025-12-06 07:04:02.984 226109 DEBUG nova.network.neutron [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.015 226109 INFO nova.compute.manager [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Took 2.38 seconds to deallocate network for instance.
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.065 226109 DEBUG nova.compute.manager [req-4a93dc06-6ce0-4f71-b48e-ecf2c28dcb50 req-b109ada8-a5ea-4ee6-81a9-512e8b99c504 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Received event network-vif-deleted-0e3b84b1-ee05-4dbf-a372-b2a404592bf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.083 226109 DEBUG oslo_concurrency.lockutils [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.083 226109 DEBUG oslo_concurrency.lockutils [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.175 226109 DEBUG oslo_concurrency.processutils [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.467 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:04:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/199556350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.593 226109 DEBUG oslo_concurrency.processutils [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.600 226109 DEBUG nova.compute.provider_tree [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.621 226109 DEBUG nova.scheduler.client.report [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.675 226109 DEBUG oslo_concurrency.lockutils [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.715 226109 INFO nova.scheduler.client.report [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Deleted allocations for instance 09a05ccc-abca-47d8-8e32-6e53adb95d4d
Dec 06 07:04:03 compute-1 nova_compute[226101]: 2025-12-06 07:04:03.831 226109 DEBUG oslo_concurrency.lockutils [None req-15829dc5-5556-46d1-9acc-49850fb7a65a a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "09a05ccc-abca-47d8-8e32-6e53adb95d4d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1730755814' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/199556350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3035910838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:04 compute-1 nova_compute[226101]: 2025-12-06 07:04:04.245 226109 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Creating tmpfile /var/lib/nova/instances/tmpfww7d9mr to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Dec 06 07:04:04 compute-1 nova_compute[226101]: 2025-12-06 07:04:04.246 226109 DEBUG nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfww7d9mr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Dec 06 07:04:04 compute-1 nova_compute[226101]: 2025-12-06 07:04:04.278 226109 DEBUG nova.network.neutron [-] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:04:04 compute-1 nova_compute[226101]: 2025-12-06 07:04:04.388 226109 INFO nova.compute.manager [-] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Took 3.12 seconds to deallocate network for instance.
Dec 06 07:04:04 compute-1 nova_compute[226101]: 2025-12-06 07:04:04.506 226109 DEBUG oslo_concurrency.lockutils [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:04 compute-1 nova_compute[226101]: 2025-12-06 07:04:04.506 226109 DEBUG oslo_concurrency.lockutils [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:04 compute-1 nova_compute[226101]: 2025-12-06 07:04:04.575 226109 DEBUG oslo_concurrency.processutils [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:04.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:04.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:04 compute-1 ceph-mon[81689]: pgmap v1296: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 791 KiB/s wr, 203 op/s
Dec 06 07:04:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:04:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3958060182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:05 compute-1 nova_compute[226101]: 2025-12-06 07:04:05.009 226109 DEBUG oslo_concurrency.processutils [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:05 compute-1 nova_compute[226101]: 2025-12-06 07:04:05.015 226109 DEBUG nova.compute.provider_tree [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:04:05 compute-1 nova_compute[226101]: 2025-12-06 07:04:05.032 226109 DEBUG nova.scheduler.client.report [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:04:05 compute-1 nova_compute[226101]: 2025-12-06 07:04:05.055 226109 DEBUG oslo_concurrency.lockutils [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:05 compute-1 nova_compute[226101]: 2025-12-06 07:04:05.079 226109 INFO nova.scheduler.client.report [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Deleted allocations for instance 714f2e5b-135b-4f7e-9c62-3e1849c5e151
Dec 06 07:04:05 compute-1 nova_compute[226101]: 2025-12-06 07:04:05.155 226109 DEBUG oslo_concurrency.lockutils [None req-487ea88b-02f6-493c-9217-6df21b62df75 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3958060182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:05 compute-1 ceph-mon[81689]: pgmap v1297: 305 pgs: 305 active+clean; 101 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 679 KiB/s wr, 226 op/s
Dec 06 07:04:06 compute-1 nova_compute[226101]: 2025-12-06 07:04:06.018 226109 DEBUG nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfww7d9mr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76601abc-9380-4d0e-8360-39afb25adf0c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Dec 06 07:04:06 compute-1 nova_compute[226101]: 2025-12-06 07:04:06.047 226109 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:04:06 compute-1 nova_compute[226101]: 2025-12-06 07:04:06.047 226109 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquired lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:04:06 compute-1 nova_compute[226101]: 2025-12-06 07:04:06.048 226109 DEBUG nova.network.neutron [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:04:06 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:06.104 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:04:06 compute-1 nova_compute[226101]: 2025-12-06 07:04:06.105 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:06 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:06.106 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:04:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:06.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:06.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.265 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.832 226109 DEBUG nova.network.neutron [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Updating instance_info_cache with network_info: [{"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.853 226109 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Releasing lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.854 226109 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfww7d9mr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76601abc-9380-4d0e-8360-39afb25adf0c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.855 226109 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Creating instance directory: /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.855 226109 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Ensure instance console log exists: /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.856 226109 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.857 226109 DEBUG nova.virt.libvirt.vif [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:03:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1401760543',display_name='tempest-LiveMigrationTest-server-1401760543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1401760543',id=24,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-5qbwc0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:03:59Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=76601abc-9380-4d0e-8360-39afb25adf0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.857 226109 DEBUG nova.network.os_vif_util [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converting VIF {"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.857 226109 DEBUG nova.network.os_vif_util [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.858 226109 DEBUG os_vif [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.859 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.859 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.859 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.862 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.862 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60375867-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.862 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60375867-89, col_values=(('external_ids', {'iface-id': '60375867-89f6-4607-b20c-d94ca837383e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:4d:99', 'vm-uuid': '76601abc-9380-4d0e-8360-39afb25adf0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.863 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:07 compute-1 NetworkManager[49031]: <info>  [1765004647.8646] manager: (tap60375867-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.866 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.870 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.870 226109 INFO os_vif [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89')
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.871 226109 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Dec 06 07:04:07 compute-1 nova_compute[226101]: 2025-12-06 07:04:07.871 226109 DEBUG nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfww7d9mr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76601abc-9380-4d0e-8360-39afb25adf0c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Dec 06 07:04:08 compute-1 ceph-mon[81689]: pgmap v1298: 305 pgs: 305 active+clean; 134 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 192 op/s
Dec 06 07:04:08 compute-1 nova_compute[226101]: 2025-12-06 07:04:08.469 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:08.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:08 compute-1 nova_compute[226101]: 2025-12-06 07:04:08.901 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:08.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1565463242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:04:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1565463242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:04:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3303593592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:10 compute-1 nova_compute[226101]: 2025-12-06 07:04:10.221 226109 DEBUG nova.network.neutron [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Port 60375867-89f6-4607-b20c-d94ca837383e updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Dec 06 07:04:10 compute-1 nova_compute[226101]: 2025-12-06 07:04:10.222 226109 DEBUG nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfww7d9mr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76601abc-9380-4d0e-8360-39afb25adf0c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Dec 06 07:04:10 compute-1 ceph-mon[81689]: pgmap v1299: 305 pgs: 305 active+clean; 134 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 06 07:04:10 compute-1 kernel: tap60375867-89: entered promiscuous mode
Dec 06 07:04:10 compute-1 NetworkManager[49031]: <info>  [1765004650.4641] manager: (tap60375867-89): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Dec 06 07:04:10 compute-1 ovn_controller[130279]: 2025-12-06T07:04:10Z|00112|binding|INFO|Claiming lport 60375867-89f6-4607-b20c-d94ca837383e for this additional chassis.
Dec 06 07:04:10 compute-1 nova_compute[226101]: 2025-12-06 07:04:10.464 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:10 compute-1 ovn_controller[130279]: 2025-12-06T07:04:10Z|00113|binding|INFO|60375867-89f6-4607-b20c-d94ca837383e: Claiming fa:16:3e:2e:4d:99 10.100.0.6
Dec 06 07:04:10 compute-1 ovn_controller[130279]: 2025-12-06T07:04:10Z|00114|binding|INFO|Claiming lport 40dca971-8880-4c3a-a5fd-8d055d31de88 for this additional chassis.
Dec 06 07:04:10 compute-1 ovn_controller[130279]: 2025-12-06T07:04:10Z|00115|binding|INFO|40dca971-8880-4c3a-a5fd-8d055d31de88: Claiming fa:16:3e:7f:91:51 19.80.0.39
Dec 06 07:04:10 compute-1 systemd-udevd[237936]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:04:10 compute-1 systemd-machined[190302]: New machine qemu-15-instance-00000018.
Dec 06 07:04:10 compute-1 NetworkManager[49031]: <info>  [1765004650.5094] device (tap60375867-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:04:10 compute-1 NetworkManager[49031]: <info>  [1765004650.5103] device (tap60375867-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:04:10 compute-1 nova_compute[226101]: 2025-12-06 07:04:10.579 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:10 compute-1 nova_compute[226101]: 2025-12-06 07:04:10.585 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:10 compute-1 systemd[1]: Started Virtual Machine qemu-15-instance-00000018.
Dec 06 07:04:10 compute-1 ovn_controller[130279]: 2025-12-06T07:04:10Z|00116|binding|INFO|Setting lport 60375867-89f6-4607-b20c-d94ca837383e ovn-installed in OVS
Dec 06 07:04:10 compute-1 nova_compute[226101]: 2025-12-06 07:04:10.591 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:10.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:04:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:10.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:04:11 compute-1 nova_compute[226101]: 2025-12-06 07:04:11.145 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004651.1452725, 76601abc-9380-4d0e-8360-39afb25adf0c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:11 compute-1 nova_compute[226101]: 2025-12-06 07:04:11.146 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] VM Started (Lifecycle Event)
Dec 06 07:04:11 compute-1 nova_compute[226101]: 2025-12-06 07:04:11.162 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:11 compute-1 nova_compute[226101]: 2025-12-06 07:04:11.259 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004636.2583926, 714f2e5b-135b-4f7e-9c62-3e1849c5e151 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:11 compute-1 nova_compute[226101]: 2025-12-06 07:04:11.260 226109 INFO nova.compute.manager [-] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] VM Stopped (Lifecycle Event)
Dec 06 07:04:11 compute-1 nova_compute[226101]: 2025-12-06 07:04:11.294 226109 DEBUG nova.compute.manager [None req-352081a3-86ae-4110-89ec-b850fbd4bc3c - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:11 compute-1 nova_compute[226101]: 2025-12-06 07:04:11.632 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004651.6323614, 76601abc-9380-4d0e-8360-39afb25adf0c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:11 compute-1 nova_compute[226101]: 2025-12-06 07:04:11.633 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] VM Resumed (Lifecycle Event)
Dec 06 07:04:11 compute-1 nova_compute[226101]: 2025-12-06 07:04:11.656 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:11 compute-1 nova_compute[226101]: 2025-12-06 07:04:11.659 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:04:11 compute-1 nova_compute[226101]: 2025-12-06 07:04:11.682 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Dec 06 07:04:12 compute-1 nova_compute[226101]: 2025-12-06 07:04:12.095 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004637.094612, 09a05ccc-abca-47d8-8e32-6e53adb95d4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:12 compute-1 nova_compute[226101]: 2025-12-06 07:04:12.096 226109 INFO nova.compute.manager [-] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] VM Stopped (Lifecycle Event)
Dec 06 07:04:12 compute-1 nova_compute[226101]: 2025-12-06 07:04:12.120 226109 DEBUG nova.compute.manager [None req-f8c8c222-8a3e-4f22-bbfd-41eeb738c3e7 - - - - - -] [instance: 09a05ccc-abca-47d8-8e32-6e53adb95d4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:12 compute-1 ceph-mon[81689]: pgmap v1300: 305 pgs: 305 active+clean; 99 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Dec 06 07:04:12 compute-1 ovn_controller[130279]: 2025-12-06T07:04:12Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:4d:99 10.100.0.6
Dec 06 07:04:12 compute-1 ovn_controller[130279]: 2025-12-06T07:04:12Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:4d:99 10.100.0.6
Dec 06 07:04:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:12.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:12 compute-1 nova_compute[226101]: 2025-12-06 07:04:12.864 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:12.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:13 compute-1 ovn_controller[130279]: 2025-12-06T07:04:13Z|00117|binding|INFO|Claiming lport 60375867-89f6-4607-b20c-d94ca837383e for this chassis.
Dec 06 07:04:13 compute-1 ovn_controller[130279]: 2025-12-06T07:04:13Z|00118|binding|INFO|60375867-89f6-4607-b20c-d94ca837383e: Claiming fa:16:3e:2e:4d:99 10.100.0.6
Dec 06 07:04:13 compute-1 ovn_controller[130279]: 2025-12-06T07:04:13Z|00119|binding|INFO|Claiming lport 40dca971-8880-4c3a-a5fd-8d055d31de88 for this chassis.
Dec 06 07:04:13 compute-1 ovn_controller[130279]: 2025-12-06T07:04:13Z|00120|binding|INFO|40dca971-8880-4c3a-a5fd-8d055d31de88: Claiming fa:16:3e:7f:91:51 19.80.0.39
Dec 06 07:04:13 compute-1 ovn_controller[130279]: 2025-12-06T07:04:13Z|00121|binding|INFO|Setting lport 60375867-89f6-4607-b20c-d94ca837383e up in Southbound
Dec 06 07:04:13 compute-1 ovn_controller[130279]: 2025-12-06T07:04:13Z|00122|binding|INFO|Setting lport 40dca971-8880-4c3a-a5fd-8d055d31de88 up in Southbound
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.076 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:91:51 19.80.0.39'], port_security=['fa:16:3e:7f:91:51 19.80.0.39'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['60375867-89f6-4607-b20c-d94ca837383e'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-308547339', 'neutron:cidrs': '19.80.0.39/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-870f583a-cbfc-4c59-b592-cf3095306ec5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-308547339', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=8aebe1f8-cfb2-4cb1-a24a-2cc56e9a461b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=40dca971-8880-4c3a-a5fd-8d055d31de88) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.078 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:4d:99 10.100.0.6'], port_security=['fa:16:3e:2e:4d:99 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-2114195499', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '76601abc-9380-4d0e-8360-39afb25adf0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9238b9b5-08f5-4634-bd05-370e3192b201', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-2114195499', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '11', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db7e9816-53a6-4d9a-be4a-e5a8dcf8a64b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=60375867-89f6-4607-b20c-d94ca837383e) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.079 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 40dca971-8880-4c3a-a5fd-8d055d31de88 in datapath 870f583a-cbfc-4c59-b592-cf3095306ec5 bound to our chassis
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.080 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 870f583a-cbfc-4c59-b592-cf3095306ec5
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.090 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0b9be9-2395-4b9d-8d96-7612c77c0f71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.091 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap870f583a-c1 in ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.093 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap870f583a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.093 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5436aa-8149-42cd-9c77-0ca8d25a54df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.094 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c16fa608-fcad-44b9-b1af-a36872a809f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.105 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c6336c-e4aa-447e-a2f2-2ca79dbc5c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.128 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3d4e15-c663-4985-b28b-a3c13c779492]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.158 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0560d9-02bd-46b6-a50c-52a96960cbbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 NetworkManager[49031]: <info>  [1765004653.1647] manager: (tap870f583a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.164 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[61d77306-2cd6-4caa-9375-94ab0ea35028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.198 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[79181949-ecad-48f7-9951-c6c14723bfa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.200 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[27b3ff4b-5ad1-4316-87d8-7cf299578f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 NetworkManager[49031]: <info>  [1765004653.2210] device (tap870f583a-c0): carrier: link connected
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.227 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[52723bbc-eff2-42f6-8237-d7c46b70f448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.242 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[037d49de-eb1c-4fd6-be1c-3b311fa0db50]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap870f583a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:d9:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493272, 'reachable_time': 22815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238011, 'error': None, 'target': 'ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.257 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6e574d91-b9b9-40de-9a44-cac46334087d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:d987'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493272, 'tstamp': 493272}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238012, 'error': None, 'target': 'ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.273 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[639d68c1-6718-428a-b50a-f352726a7107]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap870f583a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:d9:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493272, 'reachable_time': 22815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238013, 'error': None, 'target': 'ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.302 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fde790-e7eb-42b0-a928-c4d46fbe15c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 nova_compute[226101]: 2025-12-06 07:04:13.304 226109 INFO nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Post operation of migration started
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.352 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3aecfdb5-1632-4b86-b14a-f59745a99fb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.354 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap870f583a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.354 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.354 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap870f583a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:13 compute-1 nova_compute[226101]: 2025-12-06 07:04:13.356 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:13 compute-1 NetworkManager[49031]: <info>  [1765004653.3567] manager: (tap870f583a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec 06 07:04:13 compute-1 kernel: tap870f583a-c0: entered promiscuous mode
Dec 06 07:04:13 compute-1 nova_compute[226101]: 2025-12-06 07:04:13.358 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.359 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap870f583a-c0, col_values=(('external_ids', {'iface-id': 'a37358dd-1cb9-4fbc-9a91-e10a5094a9ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:13 compute-1 nova_compute[226101]: 2025-12-06 07:04:13.360 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:13 compute-1 ovn_controller[130279]: 2025-12-06T07:04:13Z|00123|binding|INFO|Releasing lport a37358dd-1cb9-4fbc-9a91-e10a5094a9ac from this chassis (sb_readonly=0)
Dec 06 07:04:13 compute-1 nova_compute[226101]: 2025-12-06 07:04:13.374 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.374 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/870f583a-cbfc-4c59-b592-cf3095306ec5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/870f583a-cbfc-4c59-b592-cf3095306ec5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.375 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d4dc58e8-bab6-428b-a2e5-9dec2a8930f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.376 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-870f583a-cbfc-4c59-b592-cf3095306ec5
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/870f583a-cbfc-4c59-b592-cf3095306ec5.pid.haproxy
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 870f583a-cbfc-4c59-b592-cf3095306ec5
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.377 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5', 'env', 'PROCESS_TAG=haproxy-870f583a-cbfc-4c59-b592-cf3095306ec5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/870f583a-cbfc-4c59-b592-cf3095306ec5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:04:13 compute-1 nova_compute[226101]: 2025-12-06 07:04:13.471 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:13 compute-1 podman[238046]: 2025-12-06 07:04:13.761498791 +0000 UTC m=+0.058649427 container create 438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:04:13 compute-1 systemd[1]: Started libpod-conmon-438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f.scope.
Dec 06 07:04:13 compute-1 podman[238046]: 2025-12-06 07:04:13.725940289 +0000 UTC m=+0.023090945 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:04:13 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:04:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621cb883df4683de3225142a619fa06eb98e4d5682cef7fca5ace7548996c618/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:13 compute-1 podman[238046]: 2025-12-06 07:04:13.872081593 +0000 UTC m=+0.169232239 container init 438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:04:13 compute-1 podman[238046]: 2025-12-06 07:04:13.883030198 +0000 UTC m=+0.180180824 container start 438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:04:13 compute-1 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[238061]: [NOTICE]   (238065) : New worker (238067) forked
Dec 06 07:04:13 compute-1 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[238061]: [NOTICE]   (238065) : Loading success.
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.967 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 60375867-89f6-4607-b20c-d94ca837383e in datapath 9238b9b5-08f5-4634-bd05-370e3192b201 unbound from our chassis
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.969 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9238b9b5-08f5-4634-bd05-370e3192b201
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.981 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[89190bf3-8b3f-420f-9a77-74ce6137fa89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.982 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9238b9b5-01 in ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.984 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9238b9b5-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.984 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4e50b4c4-6240-4ce3-92dd-7eb88ac7170c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.985 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7d518503-a4da-488d-88e8-2b07ed8a7fad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:13.998 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cf7732-1a1c-4118-a3d3-7ecba3dfc9a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.015 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0907bc15-1da4-4b8b-b790-93e6a8f4cf72]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.046 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5a72d4b3-c687-40d9-9275-652fbd076f2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.052 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4c6f74-cf27-49e4-b144-aed84a33507c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 NetworkManager[49031]: <info>  [1765004654.0535] manager: (tap9238b9b5-00): new Veth device (/org/freedesktop/NetworkManager/Devices/52)
Dec 06 07:04:14 compute-1 systemd-udevd[238083]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.079 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7de6e850-1824-42ea-bd1f-d658e75d9ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.083 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[704b8291-49f2-455a-ad0d-00d2be67905e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.108 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:14 compute-1 NetworkManager[49031]: <info>  [1765004654.1130] device (tap9238b9b5-00): carrier: link connected
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.120 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[74df9ecb-f151-4aa1-b440-f340bf482f20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.136 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4063e6-3fbf-449c-8bc8-99e23fefd9fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9238b9b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:4f:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493361, 'reachable_time': 21436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238103, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.151 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe55051-972c-4b43-845a-38c34430c058]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:4ff3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493361, 'tstamp': 493361}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238104, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.164 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[538c35b8-e5cf-4756-97f6-df4d6ec9c5e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9238b9b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:4f:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493361, 'reachable_time': 21436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238105, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.186 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4fef14eb-af57-4b2f-ad66-f855198269f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.237 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[28032355-09af-4f94-a0bf-270c55d0b22f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.238 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9238b9b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.239 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.239 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9238b9b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:14 compute-1 nova_compute[226101]: 2025-12-06 07:04:14.241 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:14 compute-1 NetworkManager[49031]: <info>  [1765004654.2426] manager: (tap9238b9b5-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec 06 07:04:14 compute-1 kernel: tap9238b9b5-00: entered promiscuous mode
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.245 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9238b9b5-00, col_values=(('external_ids', {'iface-id': '5c223717-35ae-4662-bf3f-55f7a73b7a9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:14 compute-1 nova_compute[226101]: 2025-12-06 07:04:14.244 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:14 compute-1 nova_compute[226101]: 2025-12-06 07:04:14.246 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:14 compute-1 ovn_controller[130279]: 2025-12-06T07:04:14Z|00124|binding|INFO|Releasing lport 5c223717-35ae-4662-bf3f-55f7a73b7a9a from this chassis (sb_readonly=0)
Dec 06 07:04:14 compute-1 nova_compute[226101]: 2025-12-06 07:04:14.260 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.260 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.261 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[69cd3ef3-8e58-4be2-89fe-c1d45134dcbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.262 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-9238b9b5-08f5-4634-bd05-370e3192b201
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 9238b9b5-08f5-4634-bd05-370e3192b201
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:04:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:14.262 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'env', 'PROCESS_TAG=haproxy-9238b9b5-08f5-4634-bd05-370e3192b201', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9238b9b5-08f5-4634-bd05-370e3192b201.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:04:14 compute-1 ceph-mon[81689]: pgmap v1301: 305 pgs: 305 active+clean; 97 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 197 op/s
Dec 06 07:04:14 compute-1 nova_compute[226101]: 2025-12-06 07:04:14.557 226109 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:04:14 compute-1 nova_compute[226101]: 2025-12-06 07:04:14.558 226109 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquired lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:04:14 compute-1 nova_compute[226101]: 2025-12-06 07:04:14.558 226109 DEBUG nova.network.neutron [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:04:14 compute-1 podman[238137]: 2025-12-06 07:04:14.588157064 +0000 UTC m=+0.044463824 container create a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:04:14 compute-1 systemd[1]: Started libpod-conmon-a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355.scope.
Dec 06 07:04:14 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:04:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fa7706b32bd6c46a442ecf1b7f3a953ef20d950836e1cf163c26b67ab42128b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:14 compute-1 podman[238137]: 2025-12-06 07:04:14.565009807 +0000 UTC m=+0.021316587 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:04:14 compute-1 podman[238137]: 2025-12-06 07:04:14.663492602 +0000 UTC m=+0.119799372 container init a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:04:14 compute-1 podman[238137]: 2025-12-06 07:04:14.668658951 +0000 UTC m=+0.124965711 container start a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:04:14 compute-1 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[238154]: [NOTICE]   (238158) : New worker (238160) forked
Dec 06 07:04:14 compute-1 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[238154]: [NOTICE]   (238158) : Loading success.
Dec 06 07:04:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:14.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:14.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:16 compute-1 ceph-mon[81689]: pgmap v1302: 305 pgs: 305 active+clean; 110 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.2 MiB/s wr, 203 op/s
Dec 06 07:04:16 compute-1 nova_compute[226101]: 2025-12-06 07:04:16.466 226109 DEBUG nova.network.neutron [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Updating instance_info_cache with network_info: [{"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:04:16 compute-1 nova_compute[226101]: 2025-12-06 07:04:16.488 226109 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Releasing lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:04:16 compute-1 nova_compute[226101]: 2025-12-06 07:04:16.503 226109 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:16 compute-1 nova_compute[226101]: 2025-12-06 07:04:16.503 226109 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:16 compute-1 nova_compute[226101]: 2025-12-06 07:04:16.503 226109 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:16 compute-1 nova_compute[226101]: 2025-12-06 07:04:16.508 226109 INFO nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Dec 06 07:04:16 compute-1 virtqemud[225710]: Domain id=15 name='instance-00000018' uuid=76601abc-9380-4d0e-8360-39afb25adf0c is tainted: custom-monitor
Dec 06 07:04:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:16.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:16.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:17 compute-1 nova_compute[226101]: 2025-12-06 07:04:17.516 226109 INFO nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Dec 06 07:04:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:17 compute-1 nova_compute[226101]: 2025-12-06 07:04:17.866 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:18 compute-1 ceph-mon[81689]: pgmap v1303: 305 pgs: 305 active+clean; 121 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 186 op/s
Dec 06 07:04:18 compute-1 nova_compute[226101]: 2025-12-06 07:04:18.473 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:18 compute-1 nova_compute[226101]: 2025-12-06 07:04:18.521 226109 INFO nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Dec 06 07:04:18 compute-1 nova_compute[226101]: 2025-12-06 07:04:18.525 226109 DEBUG nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:18 compute-1 nova_compute[226101]: 2025-12-06 07:04:18.548 226109 DEBUG nova.objects.instance [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:04:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:18.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:18.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:20 compute-1 ceph-mon[81689]: pgmap v1304: 305 pgs: 305 active+clean; 121 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Dec 06 07:04:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/886785863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/136576715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:20.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:20.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/413554628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:22.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:22 compute-1 nova_compute[226101]: 2025-12-06 07:04:22.867 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:22 compute-1 ceph-mon[81689]: pgmap v1305: 305 pgs: 305 active+clean; 126 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 155 op/s
Dec 06 07:04:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3895071870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1359816576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:22.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:23 compute-1 podman[238169]: 2025-12-06 07:04:23.113456661 +0000 UTC m=+0.100534751 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 07:04:23 compute-1 nova_compute[226101]: 2025-12-06 07:04:23.474 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:24.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:24.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:25 compute-1 ceph-mon[81689]: pgmap v1306: 305 pgs: 305 active+clean; 133 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 344 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Dec 06 07:04:26 compute-1 ceph-mon[81689]: pgmap v1307: 305 pgs: 305 active+clean; 153 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 310 KiB/s rd, 2.6 MiB/s wr, 85 op/s
Dec 06 07:04:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:26.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:26.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:27 compute-1 nova_compute[226101]: 2025-12-06 07:04:27.869 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:28 compute-1 nova_compute[226101]: 2025-12-06 07:04:28.477 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:28 compute-1 ceph-mon[81689]: pgmap v1308: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 99 op/s
Dec 06 07:04:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:28.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:28.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:30.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:30 compute-1 ceph-mon[81689]: pgmap v1309: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Dec 06 07:04:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:30.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:31 compute-1 podman[238196]: 2025-12-06 07:04:31.072147889 +0000 UTC m=+0.053686573 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:04:31 compute-1 podman[238197]: 2025-12-06 07:04:31.080567877 +0000 UTC m=+0.057382243 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 06 07:04:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1089812738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:32.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:32 compute-1 nova_compute[226101]: 2025-12-06 07:04:32.870 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:32.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:33 compute-1 ceph-mon[81689]: pgmap v1310: 305 pgs: 305 active+clean; 128 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 135 op/s
Dec 06 07:04:33 compute-1 nova_compute[226101]: 2025-12-06 07:04:33.478 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:34 compute-1 ceph-mon[81689]: pgmap v1311: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.6 MiB/s wr, 133 op/s
Dec 06 07:04:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:34.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:34.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:36 compute-1 ceph-mon[81689]: pgmap v1312: 305 pgs: 305 active+clean; 145 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 147 op/s
Dec 06 07:04:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:36.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:36.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:37 compute-1 nova_compute[226101]: 2025-12-06 07:04:37.871 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:38 compute-1 ceph-mon[81689]: pgmap v1313: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 140 op/s
Dec 06 07:04:38 compute-1 nova_compute[226101]: 2025-12-06 07:04:38.514 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:38.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:38.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/554912748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:40 compute-1 nova_compute[226101]: 2025-12-06 07:04:40.408 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "673ce976-9b77-47c7-8433-9050600c9b19" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:40 compute-1 nova_compute[226101]: 2025-12-06 07:04:40.409 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "673ce976-9b77-47c7-8433-9050600c9b19" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:40 compute-1 nova_compute[226101]: 2025-12-06 07:04:40.436 226109 DEBUG nova.compute.manager [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:04:40 compute-1 nova_compute[226101]: 2025-12-06 07:04:40.586 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:40 compute-1 nova_compute[226101]: 2025-12-06 07:04:40.587 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:40 compute-1 nova_compute[226101]: 2025-12-06 07:04:40.595 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:04:40 compute-1 nova_compute[226101]: 2025-12-06 07:04:40.596 226109 INFO nova.compute.claims [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:04:40 compute-1 ceph-mon[81689]: pgmap v1314: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 06 07:04:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1753776930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:40.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:40 compute-1 nova_compute[226101]: 2025-12-06 07:04:40.918 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:40.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:04:41 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3843001736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.374 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.379 226109 DEBUG nova.compute.provider_tree [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.402 226109 DEBUG nova.scheduler.client.report [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.440 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.441 226109 DEBUG nova.compute.manager [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.491 226109 DEBUG nova.compute.manager [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.492 226109 DEBUG nova.network.neutron [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.513 226109 INFO nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.551 226109 DEBUG nova.compute.manager [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.885 226109 DEBUG nova.compute.manager [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.887 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.887 226109 INFO nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Creating image(s)
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.919 226109 DEBUG nova.storage.rbd_utils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 673ce976-9b77-47c7-8433-9050600c9b19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.947 226109 DEBUG nova.storage.rbd_utils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 673ce976-9b77-47c7-8433-9050600c9b19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.975 226109 DEBUG nova.storage.rbd_utils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 673ce976-9b77-47c7-8433-9050600c9b19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:41 compute-1 nova_compute[226101]: 2025-12-06 07:04:41.978 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2461733571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3843001736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:42 compute-1 nova_compute[226101]: 2025-12-06 07:04:42.051 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:42 compute-1 nova_compute[226101]: 2025-12-06 07:04:42.052 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:42 compute-1 nova_compute[226101]: 2025-12-06 07:04:42.053 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:42 compute-1 nova_compute[226101]: 2025-12-06 07:04:42.053 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:42 compute-1 nova_compute[226101]: 2025-12-06 07:04:42.076 226109 DEBUG nova.storage.rbd_utils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 673ce976-9b77-47c7-8433-9050600c9b19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:42 compute-1 nova_compute[226101]: 2025-12-06 07:04:42.079 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 673ce976-9b77-47c7-8433-9050600c9b19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:42 compute-1 nova_compute[226101]: 2025-12-06 07:04:42.137 226109 DEBUG nova.network.neutron [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 07:04:42 compute-1 nova_compute[226101]: 2025-12-06 07:04:42.137 226109 DEBUG nova.compute.manager [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:04:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:42.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:42 compute-1 nova_compute[226101]: 2025-12-06 07:04:42.872 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:42.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:43 compute-1 nova_compute[226101]: 2025-12-06 07:04:43.516 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:44 compute-1 ceph-mon[81689]: pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 06 07:04:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/468274952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4113499709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:44 compute-1 nova_compute[226101]: 2025-12-06 07:04:44.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:44 compute-1 nova_compute[226101]: 2025-12-06 07:04:44.613 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:44 compute-1 nova_compute[226101]: 2025-12-06 07:04:44.613 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:44 compute-1 nova_compute[226101]: 2025-12-06 07:04:44.613 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:44 compute-1 nova_compute[226101]: 2025-12-06 07:04:44.613 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:04:44 compute-1 nova_compute[226101]: 2025-12-06 07:04:44.614 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:44.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:44.816 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:04:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:44.818 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:04:44 compute-1 nova_compute[226101]: 2025-12-06 07:04:44.818 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:44.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:04:45.820 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:45 compute-1 sudo[238369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:45 compute-1 sudo[238369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:45 compute-1 sudo[238369]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:45 compute-1 sudo[238394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:04:45 compute-1 sudo[238394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:45 compute-1 sudo[238394]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:46 compute-1 ceph-mon[81689]: pgmap v1316: 305 pgs: 305 active+clean; 172 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.043 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 673ce976-9b77-47c7-8433-9050600c9b19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.964s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
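The import that just returned (exit 0 after 3.964s) copies the cached base file into the Ceph 'vms' pool as <instance_uuid>_disk. The same step can be replayed by hand with the arguments taken verbatim from the "Running cmd (subprocess)" line above; this assumes the rbd CLI and the client.openstack keyring are available on the host:

    # Re-running nova's logged import step; every argument is copied
    # from the log, nothing is invented.
    import subprocess

    subprocess.run([
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '673ce976-9b77-47c7-8433-9050600c9b19_disk',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf',
    ], check=True)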
Dec 06 07:04:46 compute-1 sudo[238419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:46 compute-1 sudo[238419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:46 compute-1 sudo[238419]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:46 compute-1 sudo[238461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:04:46 compute-1 sudo[238461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.119 226109 DEBUG nova.storage.rbd_utils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] resizing rbd image 673ce976-9b77-47c7-8433-9050600c9b19_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
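The resize target is simply the flavor's root disk expressed in bytes; the m1.nano flavor dumped a few lines below carries root_gb=1:

    # 1 GiB in bytes matches the logged resize target exactly.
    root_gb = 1
    assert root_gb * 1024 ** 3 == 1073741824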
Dec 06 07:04:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:04:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3020370546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.368 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.754s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.414 226109 DEBUG nova.objects.instance [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lazy-loading 'migration_context' on Instance uuid 673ce976-9b77-47c7-8433-9050600c9b19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.435 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.435 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Ensure instance console log exists: /var/lib/nova/instances/673ce976-9b77-47c7-8433-9050600c9b19/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.436 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.436 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.436 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.437 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.443 226109 WARNING nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.447 226109 DEBUG nova.virt.libvirt.host [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.448 226109 DEBUG nova.virt.libvirt.host [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.451 226109 DEBUG nova.virt.libvirt.host [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.452 226109 DEBUG nova.virt.libvirt.host [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.453 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.453 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.453 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.454 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.454 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.454 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.454 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.454 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.455 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.455 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.455 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.455 226109 DEBUG nova.virt.hardware [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
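The topology walk above starts from no constraints (preferred 0:0:0, limits 65536 each) and ends with the only factorization of 1 vCPU: 1 socket x 1 core x 1 thread. A toy version of that search, which mirrors the idea in nova.virt.hardware rather than its exact code:

    # Enumerate sockets*cores*threads factorizations of the vCPU count.
    import itertools

    def possible_topologies(vcpus, limit=65536):
        bound = min(vcpus, limit) + 1
        for s, c, t in itertools.product(range(1, bound), repeat=3):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged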
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.457 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.484 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.485 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:04:46 compute-1 sudo[238461]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.671 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.672 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4668MB free_disk=20.93905258178711GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.672 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.673 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:46.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.763 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 76601abc-9380-4d0e-8360-39afb25adf0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.764 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 673ce976-9b77-47c7-8433-9050600c9b19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.764 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.764 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
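The numbers in this view cross-check against the rest of the log: two m1.nano guests are tracked with placement allocations (76601abc-... and 673ce976-..., each 128 MB / 1 vCPU / 1 GB root), and the inventory a few lines below reserves 512 MB of host memory. A quick sanity check, assuming those two guests are the whole story on this host:

    guests = 2
    assert 512 + guests * 128 == 768   # used_ram (MB)
    assert guests * 1 == 2             # used_vcpus
    assert guests * 1 == 2             # used_disk (GB)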
Dec 06 07:04:46 compute-1 nova_compute[226101]: 2025-12-06 07:04:46.849 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:47.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:04:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2509531797' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:47 compute-1 nova_compute[226101]: 2025-12-06 07:04:47.120 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:47 compute-1 ceph-mon[81689]: pgmap v1317: 305 pgs: 305 active+clean; 225 MiB data, 461 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 4.6 MiB/s wr, 49 op/s
Dec 06 07:04:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1474950124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2687729709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3020370546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:04:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:04:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:04:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3726549628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:47 compute-1 nova_compute[226101]: 2025-12-06 07:04:47.574 226109 DEBUG nova.storage.rbd_utils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 673ce976-9b77-47c7-8433-9050600c9b19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:47 compute-1 nova_compute[226101]: 2025-12-06 07:04:47.579 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:04:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3821259062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:47 compute-1 nova_compute[226101]: 2025-12-06 07:04:47.865 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:47 compute-1 nova_compute[226101]: 2025-12-06 07:04:47.870 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:04:47 compute-1 nova_compute[226101]: 2025-12-06 07:04:47.874 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:47 compute-1 nova_compute[226101]: 2025-12-06 07:04:47.895 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
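Placement turns an inventory like this into schedulable capacity as (total - reserved) * allocation_ratio per resource class, so the logged figures allow 32 vCPUs, 7168 MB of RAM and about 17 GB of disk to be allocated on this node:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(cap, 1))   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1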
Dec 06 07:04:47 compute-1 nova_compute[226101]: 2025-12-06 07:04:47.922 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:04:47 compute-1 nova_compute[226101]: 2025-12-06 07:04:47.923 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:04:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2894877082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.038 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.040 226109 DEBUG nova.objects.instance [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 673ce976-9b77-47c7-8433-9050600c9b19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.064 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:04:48 compute-1 nova_compute[226101]:   <uuid>673ce976-9b77-47c7-8433-9050600c9b19</uuid>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   <name>instance-0000001d</name>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-466229730</nova:name>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:04:46</nova:creationTime>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <nova:user uuid="1a0ca5a46a9442b1845863069ff295f4">tempest-ServersAdminNegativeTestJSON-1345610286-project-member</nova:user>
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <nova:project uuid="a055b1b8e2e54e4a81cfca74765ddcb1">tempest-ServersAdminNegativeTestJSON-1345610286</nova:project>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <system>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <entry name="serial">673ce976-9b77-47c7-8433-9050600c9b19</entry>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <entry name="uuid">673ce976-9b77-47c7-8433-9050600c9b19</entry>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     </system>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   <os>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   </os>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   <features>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   </features>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/673ce976-9b77-47c7-8433-9050600c9b19_disk">
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       </source>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/673ce976-9b77-47c7-8433-9050600c9b19_disk.config">
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       </source>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:04:48 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/673ce976-9b77-47c7-8433-9050600c9b19/console.log" append="off"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <video>
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     </video>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:04:48 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:04:48 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:04:48 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:04:48 compute-1 nova_compute[226101]: </domain>
Dec 06 07:04:48 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
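Two details worth noting in the generated XML: both disks are type="network" RBD sources listing the three monitor endpoints, which is what the repeated 'ceph mon dump' calls in this trace were for, and the <auth> element references the libvirt Ceph secret whose uuid matches the cluster fsid in the cephadm path logged above. The host list can be reproduced from the same command; the 'mons'/'addr' JSON field names below are an assumption based on common Ceph releases, not something this log shows:

    # Derive the <host> entries from "ceph mon dump --format=json".
    import json, subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    for mon in json.loads(out)['mons']:
        # 'addr' is host:port/nonce, e.g. "192.168.122.100:6789/0"
        print(mon['addr'].split('/')[0])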
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.383 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.384 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.384 226109 INFO nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Using config drive
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.408 226109 DEBUG nova.storage.rbd_utils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 673ce976-9b77-47c7-8433-9050600c9b19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.517 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.657 226109 INFO nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Creating config drive at /var/lib/nova/instances/673ce976-9b77-47c7-8433-9050600c9b19/disk.config
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.662 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/673ce976-9b77-47c7-8433-9050600c9b19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa_5voh2w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2509531797' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:04:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:04:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:04:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:04:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/599834630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:48.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.786 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/673ce976-9b77-47c7-8433-9050600c9b19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa_5voh2w" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
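The mkisofs invocation is logged with its argv joined by spaces, which is why the multi-word -publisher value appears unquoted above; the process was actually executed as an argument list. Replayed the same way (the /tmp source directory below is a hypothetical stand-in for nova's temporary metadata dir, which no longer exists):

    import subprocess

    subprocess.run([
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/673ce976-9b77-47c7-8433-9050600c9b19/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/metadata-dir',   # hypothetical stand-in for the logged tempdir
    ], check=True)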
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.814 226109 DEBUG nova.storage.rbd_utils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 673ce976-9b77-47c7-8433-9050600c9b19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.819 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/673ce976-9b77-47c7-8433-9050600c9b19/disk.config 673ce976-9b77-47c7-8433-9050600c9b19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.923 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.924 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.947 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.947 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.948 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:04:48 compute-1 nova_compute[226101]: 2025-12-06 07:04:48.964 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:04:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:49.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:49 compute-1 nova_compute[226101]: 2025-12-06 07:04:49.023 226109 DEBUG oslo_concurrency.processutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/673ce976-9b77-47c7-8433-9050600c9b19/disk.config 673ce976-9b77-47c7-8433-9050600c9b19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:49 compute-1 nova_compute[226101]: 2025-12-06 07:04:49.024 226109 INFO nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Deleting local config drive /var/lib/nova/instances/673ce976-9b77-47c7-8433-9050600c9b19/disk.config because it was imported into RBD.
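With the ISO imported into the vms pool the local copy is redundant, hence the deletion; the cdrom <source> in the XML above already points at the RBD object. Its presence can be verified with the same credentials, assuming the rbd CLI is available:

    import subprocess

    subprocess.run(['rbd', 'info', '--pool', 'vms',
                    '673ce976-9b77-47c7-8433-9050600c9b19_disk.config',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)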
Dec 06 07:04:49 compute-1 nova_compute[226101]: 2025-12-06 07:04:49.095 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:04:49 compute-1 nova_compute[226101]: 2025-12-06 07:04:49.097 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:04:49 compute-1 systemd-machined[190302]: New machine qemu-16-instance-0000001d.
Dec 06 07:04:49 compute-1 nova_compute[226101]: 2025-12-06 07:04:49.097 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:04:49 compute-1 nova_compute[226101]: 2025-12-06 07:04:49.097 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 76601abc-9380-4d0e-8360-39afb25adf0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:04:49 compute-1 systemd[1]: Started Virtual Machine qemu-16-instance-0000001d.
Dec 06 07:04:49 compute-1 ceph-mon[81689]: pgmap v1318: 305 pgs: 305 active+clean; 260 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 4.6 MiB/s wr, 67 op/s
Dec 06 07:04:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3821259062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2894877082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/119544535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.023 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004690.0235047, 673ce976-9b77-47c7-8433-9050600c9b19 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.025 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] VM Resumed (Lifecycle Event)
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.026 226109 DEBUG nova.compute.manager [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.026 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.029 226109 INFO nova.virt.libvirt.driver [-] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Instance spawned successfully.
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.030 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.047 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.053 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.056 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.056 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.057 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.057 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.058 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.058 226109 DEBUG nova.virt.libvirt.driver [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.080 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] During sync_power_state the instance has a pending task (spawning). Skip.
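[Annotation] The "Synchronizing instance power state" and "pending task (spawning). Skip." entries above illustrate a simple rule: power-state sync defers whenever the instance still has a task in flight. A hedged sketch of that rule, not Nova's actual code, using the states visible in the journal:

```python
# Sketch of the skip rule logged above: sync is deferred while a task
# (here 'spawning') is pending, even though DB power_state=0 disagrees
# with the hypervisor's power_state=1. Illustrative logic only.
def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:      # e.g. 'spawning'
        return "skip"               # "has a pending task ... Skip."
    if db_power_state != vm_power_state:
        return "update-db"
    return "in-sync"

assert sync_power_state(0, 1, "spawning") == "skip"
```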
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.081 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004690.0242782, 673ce976-9b77-47c7-8433-9050600c9b19 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.081 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] VM Started (Lifecycle Event)
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.113 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.116 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.127 226109 INFO nova.compute.manager [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Took 8.24 seconds to spawn the instance on the hypervisor.
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.127 226109 DEBUG nova.compute.manager [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.139 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.217 226109 INFO nova.compute.manager [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Took 9.65 seconds to build instance.
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.254 226109 DEBUG oslo_concurrency.lockutils [None req-65eec87f-e5a2-494f-b45d-f6bfad2341cb 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "673ce976-9b77-47c7-8433-9050600c9b19" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
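[Annotation] The lock lifecycle in this section ("Acquiring lock", "Acquired lock", and the release above, held 9.845s around the 9.65s build) is oslo.concurrency's lockutils keyed by instance UUID. A minimal sketch of the pattern; the lock name is copied from the journal and the body is a hypothetical stand-in:

```python
# Sketch of the lockutils pattern behind the Acquiring/Acquired/released
# lines above. The guarded function is a placeholder, not Nova's code.
from oslo_concurrency import lockutils

def do_build_and_run_instance():
    pass  # stand-in for _locked_do_build_and_run_instance's body

with lockutils.lock("673ce976-9b77-47c7-8433-9050600c9b19"):
    # The journal reports this lock held for 9.845s around the build.
    do_build_and_run_instance()
```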
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.445 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Updating instance_info_cache with network_info: [{"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.459 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.459 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.459 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.459 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.459 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.460 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.460 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.460 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:04:50 compute-1 nova_compute[226101]: 2025-12-06 07:04:50.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
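[Annotation] The burst of "Running periodic task ComputeManager._*" entries above comes from oslo.service's periodic task machinery. A standalone sketch of that mechanism follows; the task names mirror the journal, but the spacing values and the empty bodies are assumptions, not Nova's defaults.

```python
# Minimal sketch of oslo.service periodic tasks, the mechanism emitting
# the "Running periodic task ..." DEBUG lines above. Spacings are assumed.
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)
    def _poll_rebooting_instances(self, context):
        pass

    @periodic_task.periodic_task(spacing=60)
    def _reclaim_queued_deletes(self, context):
        # Nova skips this task when CONF.reclaim_instance_interval <= 0,
        # which is what the "skipping..." line above reports.
        pass

mgr = Manager()
mgr.run_periodic_tasks(context=None)
```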
Dec 06 07:04:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:50.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:50 compute-1 ceph-mon[81689]: pgmap v1319: 305 pgs: 305 active+clean; 260 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 3.6 MiB/s wr, 47 op/s
Dec 06 07:04:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e169 e169: 3 total, 3 up, 3 in
Dec 06 07:04:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:51.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:51 compute-1 ceph-mon[81689]: osdmap e169: 3 total, 3 up, 3 in
Dec 06 07:04:51 compute-1 ceph-mon[81689]: pgmap v1321: 305 pgs: 305 active+clean; 272 MiB data, 498 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.5 MiB/s wr, 175 op/s
Dec 06 07:04:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:52 compute-1 nova_compute[226101]: 2025-12-06 07:04:52.876 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:04:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:53.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:04:53 compute-1 nova_compute[226101]: 2025-12-06 07:04:53.519 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:53 compute-1 sudo[238777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:53 compute-1 sudo[238777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:53 compute-1 sudo[238777]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:53 compute-1 nova_compute[226101]: 2025-12-06 07:04:53.992 226109 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Creating tmpfile /var/lib/nova/instances/tmpnrrgb899 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Dec 06 07:04:53 compute-1 nova_compute[226101]: 2025-12-06 07:04:53.994 226109 DEBUG nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpnrrgb899',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
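[Annotation] The tmpfile entry above (`/var/lib/nova/instances/tmpnrrgb899`, later carried as `filename='tmpnrrgb899'` in the migrate data) is the shared-storage probe for live migration: the destination drops a marker file under the instances path and the source checks whether it can see it. A hedged sketch of that probe; the helper names are illustrative, not Nova's exact internals:

```python
# Sketch of the shared-storage test behind _create_shared_storage_test_file:
# destination creates a marker, source looks for it. Illustrative helpers.
import os
import tempfile

INSTANCES_PATH = "/var/lib/nova/instances"

def create_test_file():
    # Destination host: create a tmpnrrgb899-style marker file.
    fd, path = tempfile.mkstemp(prefix="tmp", dir=INSTANCES_PATH)
    os.close(fd)
    return os.path.basename(path)  # sent to the source in migrate_data

def check_test_file(filename):
    # Source host: seeing the marker means both hosts share the path.
    return os.path.exists(os.path.join(INSTANCES_PATH, filename))

# In this journal the probe came back negative (is_shared_instance_path=
# False below), while the RBD backend still yields is_shared_block_storage=True.
```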
Dec 06 07:04:54 compute-1 sudo[238808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:04:54 compute-1 sudo[238808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:54 compute-1 sudo[238808]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:54 compute-1 podman[238801]: 2025-12-06 07:04:54.050935883 +0000 UTC m=+0.081320541 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:04:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:54.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:55.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:55 compute-1 nova_compute[226101]: 2025-12-06 07:04:55.055 226109 DEBUG nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpnrrgb899',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Dec 06 07:04:55 compute-1 ceph-mon[81689]: pgmap v1322: 305 pgs: 305 active+clean; 272 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 6.4 MiB/s rd, 5.2 MiB/s wr, 247 op/s
Dec 06 07:04:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:04:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:04:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2603884885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:55 compute-1 nova_compute[226101]: 2025-12-06 07:04:55.081 226109 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:04:55 compute-1 nova_compute[226101]: 2025-12-06 07:04:55.082 226109 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquired lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:04:55 compute-1 nova_compute[226101]: 2025-12-06 07:04:55.082 226109 DEBUG nova.network.neutron [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:04:56 compute-1 ceph-mon[81689]: pgmap v1323: 305 pgs: 305 active+clean; 280 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 3.0 MiB/s wr, 326 op/s
Dec 06 07:04:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/152312170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.285 226109 DEBUG nova.network.neutron [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating instance_info_cache with network_info: [{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.324 226109 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Releasing lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.326 226109 DEBUG os_brick.utils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.327 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.336 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.337 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[615ee4ec-44fd-47b6-9b99-b9f3731bb453]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.338 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.344 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.345 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[fb500e94-0bab-47b6-8582-237d126cb3c4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.346 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.353 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.353 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[a41732bd-f429-4906-952f-c8913576fb2b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.354 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc59d19-4f08-4f60-82d6-714273a89f73]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.355 226109 DEBUG oslo_concurrency.processutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.374 226109 DEBUG oslo_concurrency.processutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.377 226109 DEBUG os_brick.initiator.connectors.lightos [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.377 226109 DEBUG os_brick.initiator.connectors.lightos [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.377 226109 DEBUG os_brick.initiator.connectors.lightos [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:04:56 compute-1 nova_compute[226101]: 2025-12-06 07:04:56.378 226109 DEBUG os_brick.utils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] <== get_connector_properties: return (51ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
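[Annotation] The trace above ("==> get_connector_properties" through "<== get_connector_properties") is os-brick assembling the volume connector for the migration destination. A sketch of the same call through os-brick's public helper; every argument value is copied from the "==> call" log line:

```python
# Sketch: the get_connector_properties call traced above, reproduced via
# os-brick. Argument values are taken verbatim from the journal.
from os_brick.initiator import connector

props = connector.get_connector_properties(
    root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
    my_ip="192.168.122.101",
    multipath=True,
    enforce_multipath=True,
    host="compute-1.ctlplane.example.com",
)
# Internally this shells out to `multipathd show status`, reads
# /etc/iscsi/initiatorname.iscsi, runs `findmnt -v / -n -o SOURCE` and
# `nvme version`, matching the subprocess lines above, and returns the
# dict logged at "<== get_connector_properties".
print(props["initiator"], props["nqn"])
```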
Dec 06 07:04:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:56.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:57.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2591720672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/32107808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4064367949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.349 226109 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpnrrgb899',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={2b74c89f-1018-4ad4-8af8-84109979b9c7='b2a91c5a-7187-4625-b7d3-9dd1c6ec4d90'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.351 226109 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Creating instance directory: /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.352 226109 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Ensure instance console log exists: /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.352 226109 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.356 226109 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.358 226109 DEBUG nova.virt.libvirt.vif [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:04:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1567617044',display_name='tempest-LiveMigrationTest-server-1567617044',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1567617044',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:04:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-i5n1flwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:04:50Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=1712064c-6ba8-4660-972f-1e827a40781a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.358 226109 DEBUG nova.network.os_vif_util [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converting VIF {"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.359 226109 DEBUG nova.network.os_vif_util [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.360 226109 DEBUG os_vif [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.361 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.362 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.362 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.366 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.367 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e200480-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.368 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e200480-de, col_values=(('external_ids', {'iface-id': '9e200480-deff-4f39-9945-230d157ac906', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:3c:15', 'vm-uuid': '1712064c-6ba8-4660-972f-1e827a40781a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.369 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:57 compute-1 NetworkManager[49031]: <info>  [1765004697.3707] manager: (tap9e200480-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.372 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.377 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.378 226109 INFO os_vif [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de')
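[Annotation] The ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand on the Interface row) are the IDL-native form of ordinary ovs-vsctl operations. A sketch of the equivalent CLI calls driven from Python; bridge, port, and external_ids values are copied from the logged transaction, while the subprocess wrapper is illustrative:

```python
# Sketch: the vif plug above expressed as ovs-vsctl calls. Values come
# from the AddBridgeCommand/AddPortCommand/DbSetCommand log entries.
import subprocess

def vsctl(*args):
    subprocess.run(("ovs-vsctl",) + args, check=True)

vsctl("--may-exist", "add-br", "br-int")            # "caused no change" here
vsctl("set", "Bridge", "br-int", "datapath_type=system")
vsctl("--may-exist", "add-port", "br-int", "tap9e200480-de")
vsctl("set", "Interface", "tap9e200480-de",
      "external_ids:iface-id=9e200480-deff-4f39-9945-230d157ac906",
      "external_ids:iface-status=active",
      "external_ids:attached-mac=fa:16:3e:e9:3c:15",
      "external_ids:vm-uuid=1712064c-6ba8-4660-972f-1e827a40781a")
```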
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.381 226109 DEBUG nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Dec 06 07:04:57 compute-1 nova_compute[226101]: 2025-12-06 07:04:57.381 226109 DEBUG nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpnrrgb899',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={2b74c89f-1018-4ad4-8af8-84109979b9c7='b2a91c5a-7187-4625-b7d3-9dd1c6ec4d90'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Dec 06 07:04:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:58 compute-1 ceph-mon[81689]: pgmap v1324: 305 pgs: 305 active+clean; 305 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 3.1 MiB/s wr, 306 op/s
Dec 06 07:04:58 compute-1 nova_compute[226101]: 2025-12-06 07:04:58.521 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:58 compute-1 nova_compute[226101]: 2025-12-06 07:04:58.674 226109 DEBUG nova.network.neutron [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Port 9e200480-deff-4f39-9945-230d157ac906 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Dec 06 07:04:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:58.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:58 compute-1 nova_compute[226101]: 2025-12-06 07:04:58.843 226109 DEBUG nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpnrrgb899',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={2b74c89f-1018-4ad4-8af8-84109979b9c7='b2a91c5a-7187-4625-b7d3-9dd1c6ec4d90'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Dec 06 07:04:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:04:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:04:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:59.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:04:59 compute-1 kernel: tap9e200480-de: entered promiscuous mode
Dec 06 07:04:59 compute-1 NetworkManager[49031]: <info>  [1765004699.1240] manager: (tap9e200480-de): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Dec 06 07:04:59 compute-1 ovn_controller[130279]: 2025-12-06T07:04:59Z|00125|binding|INFO|Claiming lport 9e200480-deff-4f39-9945-230d157ac906 for this additional chassis.
Dec 06 07:04:59 compute-1 ovn_controller[130279]: 2025-12-06T07:04:59Z|00126|binding|INFO|9e200480-deff-4f39-9945-230d157ac906: Claiming fa:16:3e:e9:3c:15 10.100.0.13
Dec 06 07:04:59 compute-1 nova_compute[226101]: 2025-12-06 07:04:59.146 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:59 compute-1 ovn_controller[130279]: 2025-12-06T07:04:59Z|00127|binding|INFO|Setting lport 9e200480-deff-4f39-9945-230d157ac906 ovn-installed in OVS
Dec 06 07:04:59 compute-1 nova_compute[226101]: 2025-12-06 07:04:59.154 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:59 compute-1 nova_compute[226101]: 2025-12-06 07:04:59.155 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:59 compute-1 systemd-machined[190302]: New machine qemu-17-instance-0000001b.
Dec 06 07:04:59 compute-1 systemd-udevd[238877]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:04:59 compute-1 systemd[1]: Started Virtual Machine qemu-17-instance-0000001b.
Dec 06 07:04:59 compute-1 NetworkManager[49031]: <info>  [1765004699.1895] device (tap9e200480-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:04:59 compute-1 NetworkManager[49031]: <info>  [1765004699.1902] device (tap9e200480-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:04:59 compute-1 nova_compute[226101]: 2025-12-06 07:04:59.952 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004699.9516222, 1712064c-6ba8-4660-972f-1e827a40781a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:59 compute-1 nova_compute[226101]: 2025-12-06 07:04:59.954 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Started (Lifecycle Event)
Dec 06 07:04:59 compute-1 nova_compute[226101]: 2025-12-06 07:04:59.972 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:00 compute-1 nova_compute[226101]: 2025-12-06 07:05:00.336 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004700.3357003, 1712064c-6ba8-4660-972f-1e827a40781a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:05:00 compute-1 nova_compute[226101]: 2025-12-06 07:05:00.336 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Resumed (Lifecycle Event)
Dec 06 07:05:00 compute-1 nova_compute[226101]: 2025-12-06 07:05:00.363 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:00 compute-1 nova_compute[226101]: 2025-12-06 07:05:00.366 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:05:00 compute-1 nova_compute[226101]: 2025-12-06 07:05:00.388 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com
Dec 06 07:05:00 compute-1 ceph-mon[81689]: pgmap v1325: 305 pgs: 305 active+clean; 305 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 3.1 MiB/s wr, 306 op/s
Dec 06 07:05:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:00.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:01.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:01.620 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:01.621 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:01.623 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:02 compute-1 podman[238929]: 2025-12-06 07:05:02.10345219 +0000 UTC m=+0.061789372 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:05:02 compute-1 podman[238928]: 2025-12-06 07:05:02.113835951 +0000 UTC m=+0.072174023 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 07:05:02 compute-1 nova_compute[226101]: 2025-12-06 07:05:02.409 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:02 compute-1 ceph-mon[81689]: pgmap v1326: 305 pgs: 305 active+clean; 369 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.8 MiB/s wr, 346 op/s
Dec 06 07:05:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2461479803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:02.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:02 compute-1 ovn_controller[130279]: 2025-12-06T07:05:02Z|00128|binding|INFO|Claiming lport 9e200480-deff-4f39-9945-230d157ac906 for this chassis.
Dec 06 07:05:02 compute-1 ovn_controller[130279]: 2025-12-06T07:05:02Z|00129|binding|INFO|9e200480-deff-4f39-9945-230d157ac906: Claiming fa:16:3e:e9:3c:15 10.100.0.13
Dec 06 07:05:02 compute-1 ovn_controller[130279]: 2025-12-06T07:05:02Z|00130|binding|INFO|Setting lport 9e200480-deff-4f39-9945-230d157ac906 up in Southbound
Dec 06 07:05:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:02.883 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:3c:15 10.100.0.13'], port_security=['fa:16:3e:e9:3c:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1712064c-6ba8-4660-972f-1e827a40781a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9238b9b5-08f5-4634-bd05-370e3192b201', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '11', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db7e9816-53a6-4d9a-be4a-e5a8dcf8a64b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9e200480-deff-4f39-9945-230d157ac906) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:05:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:02.884 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9e200480-deff-4f39-9945-230d157ac906 in datapath 9238b9b5-08f5-4634-bd05-370e3192b201 bound to our chassis
Dec 06 07:05:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:02.885 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9238b9b5-08f5-4634-bd05-370e3192b201
Dec 06 07:05:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:02.900 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f9dc8380-19cc-44d9-9b14-1613aea9a972]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:02.931 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2b3677-26db-4818-a140-b92725f5d901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:02.934 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6dfb297a-37bd-474a-a28a-1f37b00e136d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:02.968 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb2fa42-38fb-4a57-90b9-a89b7cd17c78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:02.987 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[74d16acf-c972-4029-88d2-538a3af4be5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9238b9b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:4f:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 6, 'rx_bytes': 952, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 6, 'rx_bytes': 952, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493361, 'reachable_time': 29325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238971, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:03.007 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5e777c09-4a74-4e02-9774-77f7cea29dd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9238b9b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493371, 'tstamp': 493371}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238972, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9238b9b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493373, 'tstamp': 493373}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238972, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:03.009 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9238b9b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:03 compute-1 nova_compute[226101]: 2025-12-06 07:05:03.011 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:03 compute-1 nova_compute[226101]: 2025-12-06 07:05:03.012 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:03.012 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9238b9b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:03.012 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:05:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:03.013 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9238b9b5-00, col_values=(('external_ids', {'iface-id': '5c223717-35ae-4662-bf3f-55f7a73b7a9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:03.013 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:05:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:03.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:03 compute-1 nova_compute[226101]: 2025-12-06 07:05:03.179 226109 INFO nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Post operation of migration started
Dec 06 07:05:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3823637738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:03 compute-1 nova_compute[226101]: 2025-12-06 07:05:03.566 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:03 compute-1 nova_compute[226101]: 2025-12-06 07:05:03.581 226109 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:05:03 compute-1 nova_compute[226101]: 2025-12-06 07:05:03.582 226109 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquired lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:05:03 compute-1 nova_compute[226101]: 2025-12-06 07:05:03.582 226109 DEBUG nova.network.neutron [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:05:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:04.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:05.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:05 compute-1 ceph-mon[81689]: pgmap v1327: 305 pgs: 305 active+clean; 392 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.5 MiB/s wr, 295 op/s
Dec 06 07:05:05 compute-1 nova_compute[226101]: 2025-12-06 07:05:05.786 226109 DEBUG nova.network.neutron [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating instance_info_cache with network_info: [{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:05:05 compute-1 nova_compute[226101]: 2025-12-06 07:05:05.812 226109 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Releasing lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:05:05 compute-1 nova_compute[226101]: 2025-12-06 07:05:05.833 226109 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:05 compute-1 nova_compute[226101]: 2025-12-06 07:05:05.834 226109 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:05 compute-1 nova_compute[226101]: 2025-12-06 07:05:05.834 226109 DEBUG oslo_concurrency.lockutils [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:05 compute-1 nova_compute[226101]: 2025-12-06 07:05:05.838 226109 INFO nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Dec 06 07:05:05 compute-1 virtqemud[225710]: Domain id=17 name='instance-0000001b' uuid=1712064c-6ba8-4660-972f-1e827a40781a is tainted: custom-monitor
Dec 06 07:05:06 compute-1 ceph-mon[81689]: pgmap v1328: 305 pgs: 305 active+clean; 429 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 8.1 MiB/s wr, 286 op/s
Dec 06 07:05:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:06 compute-1 nova_compute[226101]: 2025-12-06 07:05:06.847 226109 INFO nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Dec 06 07:05:07 compute-1 ovn_controller[130279]: 2025-12-06T07:05:07Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e9:3c:15 10.100.0.13
Dec 06 07:05:07 compute-1 ovn_controller[130279]: 2025-12-06T07:05:07Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e9:3c:15 10.100.0.13
Dec 06 07:05:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:07.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.412 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/338766413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.814 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "673ce976-9b77-47c7-8433-9050600c9b19" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.815 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "673ce976-9b77-47c7-8433-9050600c9b19" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.815 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "673ce976-9b77-47c7-8433-9050600c9b19-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.816 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "673ce976-9b77-47c7-8433-9050600c9b19-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.816 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "673ce976-9b77-47c7-8433-9050600c9b19-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.817 226109 INFO nova.compute.manager [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Terminating instance
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.818 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "refresh_cache-673ce976-9b77-47c7-8433-9050600c9b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.818 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquired lock "refresh_cache-673ce976-9b77-47c7-8433-9050600c9b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.818 226109 DEBUG nova.network.neutron [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.853 226109 INFO nova.virt.libvirt.driver [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.857 226109 DEBUG nova.compute.manager [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:07 compute-1 nova_compute[226101]: 2025-12-06 07:05:07.885 226109 DEBUG nova.objects.instance [None req-7217ce72-b1f2-4333-8cf6-95e5e446949b daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:05:08 compute-1 nova_compute[226101]: 2025-12-06 07:05:08.570 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:08 compute-1 nova_compute[226101]: 2025-12-06 07:05:08.592 226109 DEBUG nova.network.neutron [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:05:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:08 compute-1 nova_compute[226101]: 2025-12-06 07:05:08.935 226109 DEBUG nova.network.neutron [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:05:08 compute-1 nova_compute[226101]: 2025-12-06 07:05:08.952 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Releasing lock "refresh_cache-673ce976-9b77-47c7-8433-9050600c9b19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:05:08 compute-1 nova_compute[226101]: 2025-12-06 07:05:08.954 226109 DEBUG nova.compute.manager [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:05:08 compute-1 ceph-mon[81689]: pgmap v1329: 305 pgs: 305 active+clean; 435 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 9.8 MiB/s wr, 317 op/s
Dec 06 07:05:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:09.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:09 compute-1 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Dec 06 07:05:09 compute-1 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001d.scope: Consumed 13.952s CPU time.
Dec 06 07:05:09 compute-1 systemd-machined[190302]: Machine qemu-16-instance-0000001d terminated.
Dec 06 07:05:10 compute-1 nova_compute[226101]: 2025-12-06 07:05:10.176 226109 INFO nova.virt.libvirt.driver [-] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Instance destroyed successfully.
Dec 06 07:05:10 compute-1 nova_compute[226101]: 2025-12-06 07:05:10.176 226109 DEBUG nova.objects.instance [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lazy-loading 'resources' on Instance uuid 673ce976-9b77-47c7-8433-9050600c9b19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:10.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:11.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2765720048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1546777161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:05:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1546777161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:05:12 compute-1 ceph-mon[81689]: pgmap v1330: 305 pgs: 305 active+clean; 435 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 9.0 MiB/s wr, 302 op/s
Dec 06 07:05:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4125302750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:12 compute-1 nova_compute[226101]: 2025-12-06 07:05:12.251 226109 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Check if temp file /var/lib/nova/instances/tmpwy14j4_p exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Dec 06 07:05:12 compute-1 nova_compute[226101]: 2025-12-06 07:05:12.252 226109 DEBUG nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwy14j4_p',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Dec 06 07:05:12 compute-1 nova_compute[226101]: 2025-12-06 07:05:12.413 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:12.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:13.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:13 compute-1 ceph-mon[81689]: pgmap v1331: 305 pgs: 305 active+clean; 422 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 9.1 MiB/s wr, 384 op/s
Dec 06 07:05:13 compute-1 nova_compute[226101]: 2025-12-06 07:05:13.572 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:14 compute-1 ceph-mon[81689]: pgmap v1332: 305 pgs: 305 active+clean; 425 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.0 MiB/s wr, 299 op/s
Dec 06 07:05:14 compute-1 nova_compute[226101]: 2025-12-06 07:05:14.660 226109 INFO nova.virt.libvirt.driver [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Deleting instance files /var/lib/nova/instances/673ce976-9b77-47c7-8433-9050600c9b19_del
Dec 06 07:05:14 compute-1 nova_compute[226101]: 2025-12-06 07:05:14.661 226109 INFO nova.virt.libvirt.driver [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Deletion of /var/lib/nova/instances/673ce976-9b77-47c7-8433-9050600c9b19_del complete
Dec 06 07:05:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:14.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:14 compute-1 nova_compute[226101]: 2025-12-06 07:05:14.881 226109 INFO nova.compute.manager [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Took 5.93 seconds to destroy the instance on the hypervisor.
Dec 06 07:05:14 compute-1 nova_compute[226101]: 2025-12-06 07:05:14.881 226109 DEBUG oslo.service.loopingcall [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:05:14 compute-1 nova_compute[226101]: 2025-12-06 07:05:14.882 226109 DEBUG nova.compute.manager [-] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:05:14 compute-1 nova_compute[226101]: 2025-12-06 07:05:14.882 226109 DEBUG nova.network.neutron [-] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:05:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:15.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.055 226109 DEBUG nova.network.neutron [-] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.073 226109 DEBUG nova.network.neutron [-] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.088 226109 INFO nova.compute.manager [-] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Took 0.21 seconds to deallocate network for instance.
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.143 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.144 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.234 226109 DEBUG oslo_concurrency.processutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2455542892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.680 226109 DEBUG oslo_concurrency.processutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.686 226109 DEBUG nova.compute.provider_tree [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.710 226109 DEBUG nova.scheduler.client.report [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.748 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.772 226109 INFO nova.scheduler.client.report [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Deleted allocations for instance 673ce976-9b77-47c7-8433-9050600c9b19
Dec 06 07:05:15 compute-1 nova_compute[226101]: 2025-12-06 07:05:15.845 226109 DEBUG oslo_concurrency.lockutils [None req-e1052e2f-70c2-49f8-9747-912872c93a69 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "673ce976-9b77-47c7-8433-9050600c9b19" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.150 226109 DEBUG nova.compute.manager [req-996c7093-e0f3-48ef-8cc8-1538d652f9b5 req-7c6c2f29-029f-4d51-b84d-5b84361e67c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.151 226109 DEBUG oslo_concurrency.lockutils [req-996c7093-e0f3-48ef-8cc8-1538d652f9b5 req-7c6c2f29-029f-4d51-b84d-5b84361e67c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.151 226109 DEBUG oslo_concurrency.lockutils [req-996c7093-e0f3-48ef-8cc8-1538d652f9b5 req-7c6c2f29-029f-4d51-b84d-5b84361e67c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.151 226109 DEBUG oslo_concurrency.lockutils [req-996c7093-e0f3-48ef-8cc8-1538d652f9b5 req-7c6c2f29-029f-4d51-b84d-5b84361e67c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.152 226109 DEBUG nova.compute.manager [req-996c7093-e0f3-48ef-8cc8-1538d652f9b5 req-7c6c2f29-029f-4d51-b84d-5b84361e67c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.152 226109 DEBUG nova.compute.manager [req-996c7093-e0f3-48ef-8cc8-1538d652f9b5 req-7c6c2f29-029f-4d51-b84d-5b84361e67c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:05:16 compute-1 ceph-mon[81689]: pgmap v1333: 305 pgs: 305 active+clean; 386 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.2 MiB/s wr, 266 op/s
Dec 06 07:05:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/759470460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2455542892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:16.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.871 226109 INFO nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Took 3.95 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.871 226109 DEBUG nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.886 226109 DEBUG nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwy14j4_p',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='1712064c-6ba8-4660-972f-1e827a40781a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(acdf4d1e-bc89-49b1-a922-9c7aca3be85b),old_vol_attachment_ids={2b74c89f-1018-4ad4-8af8-84109979b9c7='9c9d8800-f1b1-4d13-bc69-4ef58df68604'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.890 226109 DEBUG nova.objects.instance [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lazy-loading 'migration_context' on Instance uuid 1712064c-6ba8-4660-972f-1e827a40781a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.892 226109 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.894 226109 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.894 226109 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.912 226109 DEBUG nova.virt.libvirt.migration [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Find same serial number: pos=1, serial=2b74c89f-1018-4ad4-8af8-84109979b9c7 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.914 226109 DEBUG nova.virt.libvirt.vif [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:04:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1567617044',display_name='tempest-LiveMigrationTest-server-1567617044',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1567617044',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:04:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-i5n1flwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:05:07Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=1712064c-6ba8-4660-972f-1e827a40781a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.914 226109 DEBUG nova.network.os_vif_util [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converting VIF {"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.915 226109 DEBUG nova.network.os_vif_util [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.915 226109 DEBUG nova.virt.libvirt.migration [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating guest XML with vif config: <interface type="ethernet">
Dec 06 07:05:16 compute-1 nova_compute[226101]:   <mac address="fa:16:3e:e9:3c:15"/>
Dec 06 07:05:16 compute-1 nova_compute[226101]:   <model type="virtio"/>
Dec 06 07:05:16 compute-1 nova_compute[226101]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:05:16 compute-1 nova_compute[226101]:   <mtu size="1442"/>
Dec 06 07:05:16 compute-1 nova_compute[226101]:   <target dev="tap9e200480-de"/>
Dec 06 07:05:16 compute-1 nova_compute[226101]: </interface>
Dec 06 07:05:16 compute-1 nova_compute[226101]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Dec 06 07:05:16 compute-1 nova_compute[226101]: 2025-12-06 07:05:16.916 226109 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Dec 06 07:05:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:05:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:17.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:05:17 compute-1 nova_compute[226101]: 2025-12-06 07:05:17.397 226109 DEBUG nova.virt.libvirt.migration [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:05:17 compute-1 nova_compute[226101]: 2025-12-06 07:05:17.397 226109 INFO nova.virt.libvirt.migration [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Increasing downtime to 50 ms after 0 sec elapsed time
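
The step schedule in the two lines above pairs elapsed migration time (seconds) with a permitted downtime target (milliseconds): the driver starts at 50 ms and ratchets toward 500 ms as the migration runs. A minimal Python sketch of how such a schedule can be produced, with the cap (500 ms), step count (10), and spacing (150 s) read off the logged tuples; the function name and defaults are illustrative, not Nova's verbatim code.

    # Sketch: reproduce the (elapsed_s, downtime_ms) schedule seen in the log.
    # Assumed inputs (read off the logged tuples): 500 ms cap, 10 steps,
    # one step every 150 s. Illustrative only.
    def downtime_steps(max_downtime_ms=500, steps=10, step_delay_s=150):
        base = max_downtime_ms // steps                # first target: 50 ms
        increment = (max_downtime_ms - base) // steps  # 45 ms per step
        for i in range(steps + 1):
            yield (i * step_delay_s, base + i * increment)

    schedule = list(downtime_steps())
    assert schedule[:2] == [(0, 50), (150, 95)]
    assert schedule[-1] == (1500, 500)
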
Dec 06 07:05:17 compute-1 nova_compute[226101]: 2025-12-06 07:05:17.420 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:17 compute-1 nova_compute[226101]: 2025-12-06 07:05:17.479 226109 INFO nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Dec 06 07:05:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e170 e170: 3 total, 3 up, 3 in
Dec 06 07:05:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:17 compute-1 nova_compute[226101]: 2025-12-06 07:05:17.983 226109 DEBUG nova.virt.libvirt.migration [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:05:17 compute-1 nova_compute[226101]: 2025-12-06 07:05:17.984 226109 DEBUG nova.virt.libvirt.migration [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.264 226109 DEBUG nova.compute.manager [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.265 226109 DEBUG oslo_concurrency.lockutils [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.265 226109 DEBUG oslo_concurrency.lockutils [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.265 226109 DEBUG oslo_concurrency.lockutils [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.266 226109 DEBUG nova.compute.manager [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.266 226109 WARNING nova.compute.manager [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state active and task_state migrating.
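
The acquire/pop/release sequence on the "<uuid>-events" lock above, followed by "No waiting events found" and the unexpected-event warning, is the usual shape of a dispatcher that matches externally delivered network events against registered waiters. A minimal sketch of that pattern, assuming a plain dict of waiters keyed by (instance, event name); the class and method names are illustrative stand-ins, not the nova.compute.manager implementation.

    import threading

    class InstanceEvents:
        # Sketch of pop-under-lock event dispatch (illustrative stand-in).
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

        def pop_instance_event(self, instance_uuid, event_name):
            # Mirrors the logged acquire/pop/release on "<uuid>-events".
            with self._lock:
                return self._waiters.pop((instance_uuid, event_name), None)

        def dispatch(self, instance_uuid, event_name):
            waiter = self.pop_instance_event(instance_uuid, event_name)
            if waiter is None:
                # "No waiting events found dispatching ...": the event is
                # logged as unexpected and otherwise ignored.
                print(f"unexpected event {event_name} for {instance_uuid}")
            else:
                waiter.set()  # wake the thread blocked on this event
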
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.266 226109 DEBUG nova.compute.manager [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-changed-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.267 226109 DEBUG nova.compute.manager [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Refreshing instance network info cache due to event network-changed-9e200480-deff-4f39-9945-230d157ac906. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.267 226109 DEBUG oslo_concurrency.lockutils [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.267 226109 DEBUG oslo_concurrency.lockutils [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.267 226109 DEBUG nova.network.neutron [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Refreshing network info cache for port 9e200480-deff-4f39-9945-230d157ac906 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.359 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004718.3592176, 1712064c-6ba8-4660-972f-1e827a40781a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.360 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Paused (Lifecycle Event)
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.378 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.382 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.403 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] During sync_power_state the instance has a pending task (migrating). Skip.
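
The lines above show the guard that keeps a lifecycle event from clobbering state mid-migration: the hypervisor reports the guest paused (VM power_state 3) while the database still says running (DB power_state 1), and the sync is skipped because a task is pending. A compact sketch of that decision, with the power-state codes taken from the log; the function illustrates the rule, not the manager's code.

    RUNNING, PAUSED = 1, 3   # power-state codes as they appear in the log

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # "During sync_power_state the instance has a pending task ... Skip."
        if task_state is not None:
            return db_power_state      # leave the DB record untouched
        return vm_power_state          # otherwise converge on the VM state

    # The situation logged above: paused by migration, task_state='migrating'.
    assert sync_power_state(RUNNING, PAUSED, "migrating") == RUNNING
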
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.487 226109 DEBUG nova.virt.libvirt.migration [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.488 226109 DEBUG nova.virt.libvirt.migration [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.574 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:18 compute-1 ceph-mon[81689]: pgmap v1334: 305 pgs: 305 active+clean; 346 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 224 op/s
Dec 06 07:05:18 compute-1 ceph-mon[81689]: osdmap e170: 3 total, 3 up, 3 in
Dec 06 07:05:18 compute-1 kernel: tap9e200480-de (unregistering): left promiscuous mode
Dec 06 07:05:18 compute-1 NetworkManager[49031]: <info>  [1765004718.7563] device (tap9e200480-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.765 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:18 compute-1 ovn_controller[130279]: 2025-12-06T07:05:18Z|00131|binding|INFO|Releasing lport 9e200480-deff-4f39-9945-230d157ac906 from this chassis (sb_readonly=0)
Dec 06 07:05:18 compute-1 ovn_controller[130279]: 2025-12-06T07:05:18Z|00132|binding|INFO|Setting lport 9e200480-deff-4f39-9945-230d157ac906 down in Southbound
Dec 06 07:05:18 compute-1 ovn_controller[130279]: 2025-12-06T07:05:18Z|00133|binding|INFO|Removing iface tap9e200480-de ovn-installed in OVS
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.768 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.777 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:3c:15 10.100.0.13'], port_security=['fa:16:3e:e9:3c:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '9f96b960-b4f2-40bd-ae99-08121f5e8b78'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1712064c-6ba8-4660-972f-1e827a40781a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9238b9b5-08f5-4634-bd05-370e3192b201', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '18', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db7e9816-53a6-4d9a-be4a-e5a8dcf8a64b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9e200480-deff-4f39-9945-230d157ac906) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.778 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9e200480-deff-4f39-9945-230d157ac906 in datapath 9238b9b5-08f5-4634-bd05-370e3192b201 unbound from our chassis
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.779 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9238b9b5-08f5-4634-bd05-370e3192b201
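
The matched Port_Binding update above carries old=Port_Binding(up=[True], chassis=[...]) against a new row whose chassis column is cleared and whose up flag is [False]; that delta is what lets the metadata agent conclude the port is now unbound from this chassis. A small sketch of such a match predicate, assuming row objects exposing the chassis and up columns as the log prints them; the attribute access is illustrative, not the ovsdbapp event API.

    def unbound_from_our_chassis(row, old, our_chassis):
        # Sketch: the old row was bound here (non-empty chassis naming us) ...
        was_ours = bool(getattr(old, "chassis", [])) and \
            old.chassis[0].name == our_chassis
        # ... and the update cleared the binding / marked the port down.
        now_gone = not row.chassis or row.up == [False]
        return was_ours and now_gone
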
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.781 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.794 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe53a1c-bfc2-4242-9367-f77d4136e07e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:18.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:18 compute-1 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Dec 06 07:05:18 compute-1 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001b.scope: Consumed 5.917s CPU time.
Dec 06 07:05:18 compute-1 systemd-machined[190302]: Machine qemu-17-instance-0000001b terminated.
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.823 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[aabf9dbf-458a-4f3e-a0b8-b7b9e5ac1988]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.826 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e6660b64-cc32-4cab-98cc-9781b679509e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.853 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[89f7675d-eb53-4008-9e6b-d660451696d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.867 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3b341c-0b09-4f33-9c3d-40febd45f2c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9238b9b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:4f:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 8, 'rx_bytes': 1036, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 8, 'rx_bytes': 1036, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493361, 'reachable_time': 29325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239033, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.883 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1ecb7e-639d-4aac-8097-0a2206654424]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9238b9b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493371, 'tstamp': 493371}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239034, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9238b9b5-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493373, 'tstamp': 493373}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239034, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.884 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9238b9b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.886 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.890 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.891 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9238b9b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.891 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.891 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9238b9b5-00, col_values=(('external_ids', {'iface-id': '5c223717-35ae-4662-bf3f-55f7a73b7a9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:18.892 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
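
The three transactions above move the metadata tap off br-ex, ensure it sits on br-int, and pin its external_ids:iface-id (the latter two commit as no-ops because the port is already in place). The same plumbing can be expressed with the ovs-vsctl CLI; a sketch via subprocess, assuming ovs-vsctl is available on the host, keeping the if_exists/may_exist semantics logged.

    import subprocess

    def vsctl(*args):
        subprocess.run(["ovs-vsctl", *args], check=True)

    # DelPortCommand(port=tap9238b9b5-00, bridge=br-ex, if_exists=True)
    vsctl("--if-exists", "del-port", "br-ex", "tap9238b9b5-00")
    # AddPortCommand(bridge=br-int, port=tap9238b9b5-00, may_exist=True)
    vsctl("--may-exist", "add-port", "br-int", "tap9238b9b5-00")
    # DbSetCommand(table=Interface, record=tap9238b9b5-00, external_ids=...)
    vsctl("set", "Interface", "tap9238b9b5-00",
          "external_ids:iface-id=5c223717-35ae-4662-bf3f-55f7a73b7a9a")
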
Dec 06 07:05:18 compute-1 virtqemud[225710]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-2b74c89f-1018-4ad4-8af8-84109979b9c7: No such file or directory
Dec 06 07:05:18 compute-1 virtqemud[225710]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-2b74c89f-1018-4ad4-8af8-84109979b9c7: No such file or directory
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.910 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.916 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.927 226109 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.927 226109 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.927 226109 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.990 226109 DEBUG nova.virt.libvirt.guest [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '1712064c-6ba8-4660-972f-1e827a40781a' (instance-0000001b) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.990 226109 INFO nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migration operation has completed
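
"Domain has shutdown/gone away: Domain not found" followed by "Migration operation has completed" shows how the monitor reads a vanished source domain as success: once the guest is running on the destination, libvirt removes it from the source, and the next job poll raises a no-domain error. A sketch of that check with the libvirt Python bindings, assuming an open connection; the helper name is illustrative.

    import libvirt

    def migration_finished_on_source(conn, instance_uuid):
        # Sketch: poll the domain's job; a missing domain on the source is
        # treated as "migration operation has completed" (as logged above).
        try:
            dom = conn.lookupByUUIDString(instance_uuid)
            dom.jobInfo()   # domain still present: progress counters return
            return False
        except libvirt.libvirtError as e:
            if e.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
                return True
            raise
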
Dec 06 07:05:18 compute-1 nova_compute[226101]: 2025-12-06 07:05:18.990 226109 INFO nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] _post_live_migration() is started..
Dec 06 07:05:19 compute-1 nova_compute[226101]: 2025-12-06 07:05:19.012 226109 DEBUG nova.compute.manager [req-4c0f1461-7ec7-4612-b24a-dcf009c2a1b8 req-d901a29c-327f-4ad8-b23f-2acde85fe03f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:05:19 compute-1 nova_compute[226101]: 2025-12-06 07:05:19.012 226109 DEBUG oslo_concurrency.lockutils [req-4c0f1461-7ec7-4612-b24a-dcf009c2a1b8 req-d901a29c-327f-4ad8-b23f-2acde85fe03f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:19 compute-1 nova_compute[226101]: 2025-12-06 07:05:19.012 226109 DEBUG oslo_concurrency.lockutils [req-4c0f1461-7ec7-4612-b24a-dcf009c2a1b8 req-d901a29c-327f-4ad8-b23f-2acde85fe03f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:19 compute-1 nova_compute[226101]: 2025-12-06 07:05:19.013 226109 DEBUG oslo_concurrency.lockutils [req-4c0f1461-7ec7-4612-b24a-dcf009c2a1b8 req-d901a29c-327f-4ad8-b23f-2acde85fe03f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:19 compute-1 nova_compute[226101]: 2025-12-06 07:05:19.013 226109 DEBUG nova.compute.manager [req-4c0f1461-7ec7-4612-b24a-dcf009c2a1b8 req-d901a29c-327f-4ad8-b23f-2acde85fe03f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:05:19 compute-1 nova_compute[226101]: 2025-12-06 07:05:19.013 226109 DEBUG nova.compute.manager [req-4c0f1461-7ec7-4612-b24a-dcf009c2a1b8 req-d901a29c-327f-4ad8-b23f-2acde85fe03f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:05:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:05:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:19.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:05:19 compute-1 nova_compute[226101]: 2025-12-06 07:05:19.645 226109 DEBUG nova.network.neutron [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updated VIF entry in instance network info cache for port 9e200480-deff-4f39-9945-230d157ac906. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:05:19 compute-1 nova_compute[226101]: 2025-12-06 07:05:19.645 226109 DEBUG nova.network.neutron [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Updating instance_info_cache with network_info: [{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:05:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e171 e171: 3 total, 3 up, 3 in
Dec 06 07:05:19 compute-1 nova_compute[226101]: 2025-12-06 07:05:19.663 226109 DEBUG oslo_concurrency.lockutils [req-48fb3ccf-873c-4236-98f9-862a3f53c053 req-597510aa-a56f-4984-a719-007715968b56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1712064c-6ba8-4660-972f-1e827a40781a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.434 226109 DEBUG nova.compute.manager [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.434 226109 DEBUG oslo_concurrency.lockutils [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.435 226109 DEBUG oslo_concurrency.lockutils [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.435 226109 DEBUG oslo_concurrency.lockutils [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.435 226109 DEBUG nova.compute.manager [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.435 226109 DEBUG nova.compute.manager [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-unplugged-9e200480-deff-4f39-9945-230d157ac906 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.436 226109 DEBUG nova.compute.manager [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.436 226109 DEBUG oslo_concurrency.lockutils [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.436 226109 DEBUG oslo_concurrency.lockutils [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.436 226109 DEBUG oslo_concurrency.lockutils [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.436 226109 DEBUG nova.compute.manager [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.437 226109 WARNING nova.compute.manager [req-56800b18-7f34-47e1-ac80-6cf59b92a102 req-677daf43-5091-4019-9c83-2ef7a2c1e4f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state active and task_state migrating.
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.586 226109 DEBUG nova.network.neutron [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Activated binding for port 9e200480-deff-4f39-9945-230d157ac906 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.587 226109 DEBUG nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.588 226109 DEBUG nova.virt.libvirt.vif [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:04:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1567617044',display_name='tempest-LiveMigrationTest-server-1567617044',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1567617044',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:04:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-i5n1flwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:05:11Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=1712064c-6ba8-4660-972f-1e827a40781a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.588 226109 DEBUG nova.network.os_vif_util [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converting VIF {"id": "9e200480-deff-4f39-9945-230d157ac906", "address": "fa:16:3e:e9:3c:15", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e200480-de", "ovs_interfaceid": "9e200480-deff-4f39-9945-230d157ac906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.589 226109 DEBUG nova.network.os_vif_util [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.589 226109 DEBUG os_vif [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.590 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.591 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e200480-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.592 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.593 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.596 226109 INFO os_vif [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:3c:15,bridge_name='br-int',has_traffic_filtering=True,id=9e200480-deff-4f39-9945-230d157ac906,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e200480-de')
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.596 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.597 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.597 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.597 226109 DEBUG nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.597 226109 INFO nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Deleting instance files /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a_del
Dec 06 07:05:20 compute-1 nova_compute[226101]: 2025-12-06 07:05:20.598 226109 INFO nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Deletion of /var/lib/nova/instances/1712064c-6ba8-4660-972f-1e827a40781a_del complete
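
The pair of lines above removes the staged copy of the instance directory: the logged path carries a _del suffix, the convention of renaming the directory aside so the live name disappears atomically before the slower recursive delete runs. A sketch of that rename-then-remove cleanup, assuming the instances path shown in the log; the helper is illustrative.

    import os
    import shutil

    def delete_instance_files(uuid, base="/var/lib/nova/instances"):
        # Sketch: detach the directory from its live name first, then remove
        # the staged "<uuid>_del" copy, as in the two log lines above.
        live = os.path.join(base, uuid)
        staged = live + "_del"
        if os.path.exists(live):
            os.rename(live, staged)   # atomic within one filesystem
        shutil.rmtree(staged, ignore_errors=True)

    delete_instance_files("1712064c-6ba8-4660-972f-1e827a40781a")
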
Dec 06 07:05:20 compute-1 ceph-mon[81689]: pgmap v1336: 305 pgs: 305 active+clean; 346 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 208 KiB/s wr, 147 op/s
Dec 06 07:05:20 compute-1 ceph-mon[81689]: osdmap e171: 3 total, 3 up, 3 in
Dec 06 07:05:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e172 e172: 3 total, 3 up, 3 in
Dec 06 07:05:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:20.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:21.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.137 226109 DEBUG nova.compute.manager [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.137 226109 DEBUG oslo_concurrency.lockutils [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.137 226109 DEBUG oslo_concurrency.lockutils [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.137 226109 DEBUG oslo_concurrency.lockutils [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.137 226109 DEBUG nova.compute.manager [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.138 226109 WARNING nova.compute.manager [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state active and task_state migrating.
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.138 226109 DEBUG nova.compute.manager [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.138 226109 DEBUG oslo_concurrency.lockutils [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.138 226109 DEBUG oslo_concurrency.lockutils [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.138 226109 DEBUG oslo_concurrency.lockutils [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.139 226109 DEBUG nova.compute.manager [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.139 226109 WARNING nova.compute.manager [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state active and task_state migrating.
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.139 226109 DEBUG nova.compute.manager [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.139 226109 DEBUG oslo_concurrency.lockutils [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.139 226109 DEBUG oslo_concurrency.lockutils [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.140 226109 DEBUG oslo_concurrency.lockutils [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.140 226109 DEBUG nova.compute.manager [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] No waiting events found dispatching network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:05:21 compute-1 nova_compute[226101]: 2025-12-06 07:05:21.140 226109 WARNING nova.compute.manager [req-0d61f30f-bb8b-4d96-b647-2d041d6ce39b req-33345371-a381-4d66-bfd7-5020f875ccf9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Received unexpected event network-vif-plugged-9e200480-deff-4f39-9945-230d157ac906 for instance with vm_state active and task_state migrating.
Dec 06 07:05:22 compute-1 ceph-mon[81689]: osdmap e172: 3 total, 3 up, 3 in
Dec 06 07:05:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:22.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:23.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:23 compute-1 nova_compute[226101]: 2025-12-06 07:05:23.576 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:24 compute-1 ceph-mon[81689]: pgmap v1339: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 417 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 8.3 MiB/s wr, 266 op/s
Dec 06 07:05:24 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Dec 06 07:05:24 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:24.748073) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:05:24 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Dec 06 07:05:24 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724748111, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2544, "num_deletes": 509, "total_data_size": 5182815, "memory_usage": 5244720, "flush_reason": "Manual Compaction"}
Dec 06 07:05:24 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Dec 06 07:05:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:24.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:25.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:25 compute-1 podman[239046]: 2025-12-06 07:05:25.118533596 +0000 UTC m=+0.099852923 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.174 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004710.173127, 673ce976-9b77-47c7-8433-9050600c9b19 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.174 226109 INFO nova.compute.manager [-] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] VM Stopped (Lifecycle Event)
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.196 226109 DEBUG nova.compute.manager [None req-c4d20edf-b3af-479d-b546-5b5119f6d42f - - - - - -] [instance: 673ce976-9b77-47c7-8433-9050600c9b19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004725285134, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 2368532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25729, "largest_seqno": 28267, "table_properties": {"data_size": 2360433, "index_size": 4081, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 23222, "raw_average_key_size": 19, "raw_value_size": 2340778, "raw_average_value_size": 2014, "num_data_blocks": 180, "num_entries": 1162, "num_filter_entries": 1162, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004536, "oldest_key_time": 1765004536, "file_creation_time": 1765004724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 537121 microseconds, and 5408 cpu microseconds.
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.285191) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 2368532 bytes OK
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.285215) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.288667) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.288706) EVENT_LOG_v1 {"time_micros": 1765004725288697, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.288733) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5170642, prev total WAL file size 5170642, number of live WAL files 2.
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.290498) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353036' seq:72057594037927935, type:22 .. '6C6F676D00373538' seq:0, type:0; will stop at (end)
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(2313KB)], [51(10086KB)]
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004725290586, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 12697562, "oldest_snapshot_seqno": -1}
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5658 keys, 9730081 bytes, temperature: kUnknown
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004725489766, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 9730081, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9692057, "index_size": 22746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 144700, "raw_average_key_size": 25, "raw_value_size": 9589913, "raw_average_value_size": 1694, "num_data_blocks": 921, "num_entries": 5658, "num_filter_entries": 5658, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765004725, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.490099) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 9730081 bytes
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.507852) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.7 rd, 48.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.9 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(9.5) write-amplify(4.1) OK, records in: 6629, records dropped: 971 output_compression: NoCompression
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.507896) EVENT_LOG_v1 {"time_micros": 1765004725507879, "job": 30, "event": "compaction_finished", "compaction_time_micros": 199278, "compaction_time_cpu_micros": 30772, "output_level": 6, "num_output_files": 1, "total_output_size": 9730081, "num_input_records": 6629, "num_output_records": 5658, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.290337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.508070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.508077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.508079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.508082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:25.508084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:25 compute-1 ceph-mon[81689]: pgmap v1340: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 452 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 12 MiB/s wr, 264 op/s
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.549 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "1712064c-6ba8-4660-972f-1e827a40781a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.549 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.550 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "1712064c-6ba8-4660-972f-1e827a40781a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.569 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.569 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.569 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.569 226109 DEBUG nova.compute.resource_tracker [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.570 226109 DEBUG oslo_concurrency.processutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:25 compute-1 nova_compute[226101]: 2025-12-06 07:05:25.593 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:25 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1417600619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.020 226109 DEBUG oslo_concurrency.processutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.081 226109 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.082 226109 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.232 226109 WARNING nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.233 226109 DEBUG nova.compute.resource_tracker [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4640MB free_disk=20.851757049560547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.233 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.233 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.306 226109 DEBUG nova.compute.resource_tracker [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Migration for instance 1712064c-6ba8-4660-972f-1e827a40781a refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.328 226109 DEBUG nova.compute.resource_tracker [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.374 226109 DEBUG nova.compute.resource_tracker [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Instance 76601abc-9380-4d0e-8360-39afb25adf0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.374 226109 DEBUG nova.compute.resource_tracker [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Migration acdf4d1e-bc89-49b1-a922-9c7aca3be85b is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.375 226109 DEBUG nova.compute.resource_tracker [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.375 226109 DEBUG nova.compute.resource_tracker [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:05:26 compute-1 nova_compute[226101]: 2025-12-06 07:05:26.419 226109 DEBUG oslo_concurrency.processutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:26.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:26 compute-1 ceph-mon[81689]: pgmap v1341: 305 pgs: 305 active+clean; 458 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 9.6 MiB/s wr, 223 op/s
Dec 06 07:05:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1417600619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:27.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e173 e173: 3 total, 3 up, 3 in
Dec 06 07:05:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2483436212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:27 compute-1 nova_compute[226101]: 2025-12-06 07:05:27.232 226109 DEBUG oslo_concurrency.processutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.813s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:27 compute-1 nova_compute[226101]: 2025-12-06 07:05:27.237 226109 DEBUG nova.compute.provider_tree [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:05:27 compute-1 nova_compute[226101]: 2025-12-06 07:05:27.264 226109 DEBUG nova.scheduler.client.report [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:05:27 compute-1 nova_compute[226101]: 2025-12-06 07:05:27.295 226109 DEBUG nova.compute.resource_tracker [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:05:27 compute-1 nova_compute[226101]: 2025-12-06 07:05:27.295 226109 DEBUG oslo_concurrency.lockutils [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:27 compute-1 nova_compute[226101]: 2025-12-06 07:05:27.331 226109 INFO nova.compute.manager [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Migrating instance to compute-2.ctlplane.example.com finished successfully.
Dec 06 07:05:27 compute-1 nova_compute[226101]: 2025-12-06 07:05:27.425 226109 INFO nova.scheduler.client.report [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Deleted allocation for migration acdf4d1e-bc89-49b1-a922-9c7aca3be85b
Dec 06 07:05:27 compute-1 nova_compute[226101]: 2025-12-06 07:05:27.425 226109 DEBUG nova.virt.libvirt.driver [None req-9e1e7cc8-64bd-4145-90a5-d4ac65535ddd daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Dec 06 07:05:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:28 compute-1 ceph-mon[81689]: pgmap v1342: 305 pgs: 305 active+clean; 458 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 9.0 MiB/s wr, 226 op/s
Dec 06 07:05:28 compute-1 ceph-mon[81689]: osdmap e173: 3 total, 3 up, 3 in
Dec 06 07:05:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2483436212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3401685630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:28 compute-1 nova_compute[226101]: 2025-12-06 07:05:28.577 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:28.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:29.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:05:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4211555932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:29 compute-1 ovn_controller[130279]: 2025-12-06T07:05:29Z|00134|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:05:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4211555932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:30 compute-1 ceph-mon[81689]: pgmap v1344: 305 pgs: 305 active+clean; 458 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.5 MiB/s wr, 183 op/s
Dec 06 07:05:30 compute-1 nova_compute[226101]: 2025-12-06 07:05:30.595 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:05:30 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3560226663' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:05:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:05:30 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3560226663' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:05:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:30.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:05:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:31.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:05:31 compute-1 nova_compute[226101]: 2025-12-06 07:05:31.579 226109 DEBUG oslo_concurrency.lockutils [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:31 compute-1 nova_compute[226101]: 2025-12-06 07:05:31.580 226109 DEBUG oslo_concurrency.lockutils [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:31 compute-1 nova_compute[226101]: 2025-12-06 07:05:31.580 226109 DEBUG oslo_concurrency.lockutils [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:31 compute-1 nova_compute[226101]: 2025-12-06 07:05:31.580 226109 DEBUG oslo_concurrency.lockutils [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:31 compute-1 nova_compute[226101]: 2025-12-06 07:05:31.580 226109 DEBUG oslo_concurrency.lockutils [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:31 compute-1 nova_compute[226101]: 2025-12-06 07:05:31.581 226109 INFO nova.compute.manager [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Terminating instance
Dec 06 07:05:31 compute-1 nova_compute[226101]: 2025-12-06 07:05:31.582 226109 DEBUG nova.compute.manager [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:05:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3560226663' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:05:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3560226663' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:05:31 compute-1 kernel: tap60375867-89 (unregistering): left promiscuous mode
Dec 06 07:05:31 compute-1 NetworkManager[49031]: <info>  [1765004731.7472] device (tap60375867-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:05:31 compute-1 ovn_controller[130279]: 2025-12-06T07:05:31Z|00135|binding|INFO|Releasing lport 60375867-89f6-4607-b20c-d94ca837383e from this chassis (sb_readonly=0)
Dec 06 07:05:31 compute-1 ovn_controller[130279]: 2025-12-06T07:05:31Z|00136|binding|INFO|Setting lport 60375867-89f6-4607-b20c-d94ca837383e down in Southbound
Dec 06 07:05:31 compute-1 ovn_controller[130279]: 2025-12-06T07:05:31Z|00137|binding|INFO|Releasing lport 40dca971-8880-4c3a-a5fd-8d055d31de88 from this chassis (sb_readonly=0)
Dec 06 07:05:31 compute-1 ovn_controller[130279]: 2025-12-06T07:05:31Z|00138|binding|INFO|Setting lport 40dca971-8880-4c3a-a5fd-8d055d31de88 down in Southbound
Dec 06 07:05:31 compute-1 ovn_controller[130279]: 2025-12-06T07:05:31Z|00139|binding|INFO|Removing iface tap60375867-89 ovn-installed in OVS
Dec 06 07:05:31 compute-1 nova_compute[226101]: 2025-12-06 07:05:31.759 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:31.763 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:91:51 19.80.0.39'], port_security=['fa:16:3e:7f:91:51 19.80.0.39'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['60375867-89f6-4607-b20c-d94ca837383e'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-308547339', 'neutron:cidrs': '19.80.0.39/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-870f583a-cbfc-4c59-b592-cf3095306ec5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-308547339', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '5', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=8aebe1f8-cfb2-4cb1-a24a-2cc56e9a461b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=40dca971-8880-4c3a-a5fd-8d055d31de88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:05:31 compute-1 ovn_controller[130279]: 2025-12-06T07:05:31Z|00140|binding|INFO|Releasing lport a37358dd-1cb9-4fbc-9a91-e10a5094a9ac from this chassis (sb_readonly=0)
Dec 06 07:05:31 compute-1 ovn_controller[130279]: 2025-12-06T07:05:31Z|00141|binding|INFO|Releasing lport 5c223717-35ae-4662-bf3f-55f7a73b7a9a from this chassis (sb_readonly=0)
Dec 06 07:05:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:31.765 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:4d:99 10.100.0.6'], port_security=['fa:16:3e:2e:4d:99 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-2114195499', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '76601abc-9380-4d0e-8360-39afb25adf0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9238b9b5-08f5-4634-bd05-370e3192b201', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-2114195499', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '11', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db7e9816-53a6-4d9a-be4a-e5a8dcf8a64b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=60375867-89f6-4607-b20c-d94ca837383e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:05:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:31.766 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 40dca971-8880-4c3a-a5fd-8d055d31de88 in datapath 870f583a-cbfc-4c59-b592-cf3095306ec5 unbound from our chassis
Dec 06 07:05:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:31.767 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 870f583a-cbfc-4c59-b592-cf3095306ec5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:05:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:31.768 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[133e459d-3dba-4acb-b684-2e8fa9a0ef0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:31.769 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5 namespace which is not needed anymore
Dec 06 07:05:31 compute-1 nova_compute[226101]: 2025-12-06 07:05:31.788 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:31 compute-1 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000018.scope: Deactivated successfully.
Dec 06 07:05:31 compute-1 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000018.scope: Consumed 5.721s CPU time.
Dec 06 07:05:31 compute-1 systemd-machined[190302]: Machine qemu-15-instance-00000018 terminated.
Dec 06 07:05:31 compute-1 nova_compute[226101]: 2025-12-06 07:05:31.861 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:32 compute-1 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[238061]: [NOTICE]   (238065) : haproxy version is 2.8.14-c23fe91
Dec 06 07:05:32 compute-1 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[238061]: [NOTICE]   (238065) : path to executable is /usr/sbin/haproxy
Dec 06 07:05:32 compute-1 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[238061]: [WARNING]  (238065) : Exiting Master process...
Dec 06 07:05:32 compute-1 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[238061]: [ALERT]    (238065) : Current worker (238067) exited with code 143 (Terminated)
Dec 06 07:05:32 compute-1 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[238061]: [WARNING]  (238065) : All workers exited. Exiting... (0)
Dec 06 07:05:32 compute-1 systemd[1]: libpod-438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f.scope: Deactivated successfully.
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.017 226109 INFO nova.virt.libvirt.driver [-] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Instance destroyed successfully.
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.018 226109 DEBUG nova.objects.instance [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lazy-loading 'resources' on Instance uuid 76601abc-9380-4d0e-8360-39afb25adf0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:32 compute-1 podman[239141]: 2025-12-06 07:05:32.02312958 +0000 UTC m=+0.173618887 container died 438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.038 226109 DEBUG nova.virt.libvirt.vif [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:03:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1401760543',display_name='tempest-LiveMigrationTest-server-1401760543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1401760543',id=24,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-5qbwc0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:04:18Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=76601abc-9380-4d0e-8360-39afb25adf0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.039 226109 DEBUG nova.network.os_vif_util [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converting VIF {"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.039 226109 DEBUG nova.network.os_vif_util [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.040 226109 DEBUG os_vif [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.042 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.043 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60375867-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.046 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.049 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.051 226109 INFO os_vif [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89')
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.067 226109 DEBUG nova.compute.manager [req-44cae948-59d1-4c24-8f0d-73b108f4fe95 req-df174d50-32da-4473-be59-09308e45e520 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-unplugged-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.067 226109 DEBUG oslo_concurrency.lockutils [req-44cae948-59d1-4c24-8f0d-73b108f4fe95 req-df174d50-32da-4473-be59-09308e45e520 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.067 226109 DEBUG oslo_concurrency.lockutils [req-44cae948-59d1-4c24-8f0d-73b108f4fe95 req-df174d50-32da-4473-be59-09308e45e520 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.067 226109 DEBUG oslo_concurrency.lockutils [req-44cae948-59d1-4c24-8f0d-73b108f4fe95 req-df174d50-32da-4473-be59-09308e45e520 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.068 226109 DEBUG nova.compute.manager [req-44cae948-59d1-4c24-8f0d-73b108f4fe95 req-df174d50-32da-4473-be59-09308e45e520 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] No waiting events found dispatching network-vif-unplugged-60375867-89f6-4607-b20c-d94ca837383e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.068 226109 DEBUG nova.compute.manager [req-44cae948-59d1-4c24-8f0d-73b108f4fe95 req-df174d50-32da-4473-be59-09308e45e520 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-unplugged-60375867-89f6-4607-b20c-d94ca837383e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:05:32 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f-userdata-shm.mount: Deactivated successfully.
Dec 06 07:05:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-621cb883df4683de3225142a619fa06eb98e4d5682cef7fca5ace7548996c618-merged.mount: Deactivated successfully.
Dec 06 07:05:32 compute-1 podman[239141]: 2025-12-06 07:05:32.474371657 +0000 UTC m=+0.624860934 container cleanup 438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:05:32 compute-1 ceph-mon[81689]: pgmap v1345: 305 pgs: 305 active+clean; 375 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 806 KiB/s rd, 2.3 MiB/s wr, 86 op/s
Dec 06 07:05:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/837476368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:32 compute-1 systemd[1]: libpod-conmon-438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f.scope: Deactivated successfully.
Dec 06 07:05:32 compute-1 podman[239199]: 2025-12-06 07:05:32.540128186 +0000 UTC m=+0.051205655 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 07:05:32 compute-1 podman[239205]: 2025-12-06 07:05:32.548626606 +0000 UTC m=+0.042564112 container remove 438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.552 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[70a8219c-74f2-43a5-ba33-30bd703bf830]: (4, ('Sat Dec  6 07:05:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5 (438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f)\n438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f\nSat Dec  6 07:05:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5 (438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f)\n438fa7b058192c2879dd423b95503845a46c724577fe28a3bc97bcc98ec4744f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.554 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2cb2d3-1ea5-4d75-a695-4791a7500b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.554 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap870f583a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.556 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:32 compute-1 kernel: tap870f583a-c0: left promiscuous mode
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.558 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:32 compute-1 podman[239198]: 2025-12-06 07:05:32.560774045 +0000 UTC m=+0.076098119 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible)
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.561 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b90636a2-1baf-4d48-93d5-b2a263fe9a46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.572 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.576 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e7aae6a1-4e49-470d-8e0d-c96f79e0e233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.577 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb2614d-e1e2-4f3d-a47d-e27ad55c2341]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.593 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4afa8d93-35a3-430d-9905-3e233b6ece2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493265, 'reachable_time': 17228, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239251, 'error': None, 'target': 'ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 systemd[1]: run-netns-ovnmeta\x2d870f583a\x2dcbfc\x2d4c59\x2db592\x2dcf3095306ec5.mount: Deactivated successfully.
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.597 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.598 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ef48f93f-f38e-41e1-9bff-d9a3b8dcb3fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.598 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 60375867-89f6-4607-b20c-d94ca837383e in datapath 9238b9b5-08f5-4634-bd05-370e3192b201 unbound from our chassis
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.600 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9238b9b5-08f5-4634-bd05-370e3192b201, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.600 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[623a452c-d8f6-44f8-88da-8899460f4631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.601 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 namespace which is not needed anymore
Dec 06 07:05:32 compute-1 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[238154]: [NOTICE]   (238158) : haproxy version is 2.8.14-c23fe91
Dec 06 07:05:32 compute-1 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[238154]: [NOTICE]   (238158) : path to executable is /usr/sbin/haproxy
Dec 06 07:05:32 compute-1 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[238154]: [WARNING]  (238158) : Exiting Master process...
Dec 06 07:05:32 compute-1 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[238154]: [ALERT]    (238158) : Current worker (238160) exited with code 143 (Terminated)
Dec 06 07:05:32 compute-1 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[238154]: [WARNING]  (238158) : All workers exited. Exiting... (0)
Dec 06 07:05:32 compute-1 systemd[1]: libpod-a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355.scope: Deactivated successfully.
Dec 06 07:05:32 compute-1 podman[239269]: 2025-12-06 07:05:32.730091945 +0000 UTC m=+0.045233194 container died a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 06 07:05:32 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355-userdata-shm.mount: Deactivated successfully.
Dec 06 07:05:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-3fa7706b32bd6c46a442ecf1b7f3a953ef20d950836e1cf163c26b67ab42128b-merged.mount: Deactivated successfully.
Dec 06 07:05:32 compute-1 podman[239269]: 2025-12-06 07:05:32.771803394 +0000 UTC m=+0.086944633 container cleanup a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:05:32 compute-1 systemd[1]: libpod-conmon-a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355.scope: Deactivated successfully.
Dec 06 07:05:32 compute-1 podman[239298]: 2025-12-06 07:05:32.825391533 +0000 UTC m=+0.036625811 container remove a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:05:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:32.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.830 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7c25871e-4bd1-4abf-9327-de7c8bb077fc]: (4, ('Sat Dec  6 07:05:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 (a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355)\na1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355\nSat Dec  6 07:05:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 (a1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355)\na1c527807ff7e9e38a1eaea2bf4f80e43b321f4c9cf9f6525b74bdfdf674a355\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.832 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[190289e2-fbe0-4d60-a81b-74fcb9fdad63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.832 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9238b9b5-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.834 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:32 compute-1 kernel: tap9238b9b5-00: left promiscuous mode
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.836 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.840 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[87062be5-3cff-4576-aaa5-30a2b2f9cf0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 nova_compute[226101]: 2025-12-06 07:05:32.847 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.863 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[186cc63c-8891-45f9-b22f-3a07467171db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.864 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[adf2ef84-0ac1-4c90-97f6-cba27ebcf045]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.881 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd84969-e392-42ee-87c7-83684c223f43]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493354, 'reachable_time': 17172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239313, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.884 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:05:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:32.884 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[53e7a485-f092-4cce-896c-1c42a4a29ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:05:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:33.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:33 compute-1 nova_compute[226101]: 2025-12-06 07:05:33.111 226109 INFO nova.virt.libvirt.driver [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Deleting instance files /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c_del
Dec 06 07:05:33 compute-1 nova_compute[226101]: 2025-12-06 07:05:33.112 226109 INFO nova.virt.libvirt.driver [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Deletion of /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c_del complete
Dec 06 07:05:33 compute-1 nova_compute[226101]: 2025-12-06 07:05:33.203 226109 INFO nova.compute.manager [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Took 1.62 seconds to destroy the instance on the hypervisor.
Dec 06 07:05:33 compute-1 nova_compute[226101]: 2025-12-06 07:05:33.204 226109 DEBUG oslo.service.loopingcall [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:05:33 compute-1 nova_compute[226101]: 2025-12-06 07:05:33.204 226109 DEBUG nova.compute.manager [-] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:05:33 compute-1 nova_compute[226101]: 2025-12-06 07:05:33.204 226109 DEBUG nova.network.neutron [-] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:05:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:33 compute-1 systemd[1]: run-netns-ovnmeta\x2d9238b9b5\x2d08f5\x2d4634\x2dbd05\x2d370e3192b201.mount: Deactivated successfully.
Dec 06 07:05:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3394481438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:33.517 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:05:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:33.517 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:05:33 compute-1 nova_compute[226101]: 2025-12-06 07:05:33.517 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:33 compute-1 nova_compute[226101]: 2025-12-06 07:05:33.579 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:33 compute-1 nova_compute[226101]: 2025-12-06 07:05:33.926 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004718.9251242, 1712064c-6ba8-4660-972f-1e827a40781a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:05:33 compute-1 nova_compute[226101]: 2025-12-06 07:05:33.926 226109 INFO nova.compute.manager [-] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] VM Stopped (Lifecycle Event)
Dec 06 07:05:33 compute-1 nova_compute[226101]: 2025-12-06 07:05:33.946 226109 DEBUG nova.compute.manager [None req-ccbaf842-eaae-46f3-a279-4bb5597393d1 - - - - - -] [instance: 1712064c-6ba8-4660-972f-1e827a40781a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:34 compute-1 nova_compute[226101]: 2025-12-06 07:05:34.139 226109 DEBUG nova.compute.manager [req-5bc3a04c-1b3a-4f6c-b53b-77d6811284c0 req-4753584a-2f72-4211-82d4-1dd2e62742ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:05:34 compute-1 nova_compute[226101]: 2025-12-06 07:05:34.139 226109 DEBUG oslo_concurrency.lockutils [req-5bc3a04c-1b3a-4f6c-b53b-77d6811284c0 req-4753584a-2f72-4211-82d4-1dd2e62742ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:34 compute-1 nova_compute[226101]: 2025-12-06 07:05:34.140 226109 DEBUG oslo_concurrency.lockutils [req-5bc3a04c-1b3a-4f6c-b53b-77d6811284c0 req-4753584a-2f72-4211-82d4-1dd2e62742ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:34 compute-1 nova_compute[226101]: 2025-12-06 07:05:34.140 226109 DEBUG oslo_concurrency.lockutils [req-5bc3a04c-1b3a-4f6c-b53b-77d6811284c0 req-4753584a-2f72-4211-82d4-1dd2e62742ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:34 compute-1 nova_compute[226101]: 2025-12-06 07:05:34.140 226109 DEBUG nova.compute.manager [req-5bc3a04c-1b3a-4f6c-b53b-77d6811284c0 req-4753584a-2f72-4211-82d4-1dd2e62742ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] No waiting events found dispatching network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:05:34 compute-1 nova_compute[226101]: 2025-12-06 07:05:34.140 226109 WARNING nova.compute.manager [req-5bc3a04c-1b3a-4f6c-b53b-77d6811284c0 req-4753584a-2f72-4211-82d4-1dd2e62742ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received unexpected event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e for instance with vm_state active and task_state deleting.
Dec 06 07:05:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:05:34.519 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:34 compute-1 ceph-mon[81689]: pgmap v1346: 305 pgs: 305 active+clean; 331 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 30 KiB/s wr, 90 op/s
Dec 06 07:05:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1654068104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3395349547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:34.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:34.941372) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734941544, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 401, "num_deletes": 252, "total_data_size": 354159, "memory_usage": 363016, "flush_reason": "Manual Compaction"}
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734945918, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 233059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28272, "largest_seqno": 28668, "table_properties": {"data_size": 230750, "index_size": 409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6069, "raw_average_key_size": 19, "raw_value_size": 226001, "raw_average_value_size": 712, "num_data_blocks": 18, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004725, "oldest_key_time": 1765004725, "file_creation_time": 1765004734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 4602 microseconds, and 2024 cpu microseconds.
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:34.945990) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 233059 bytes OK
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:34.946022) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:34.947163) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:34.947180) EVENT_LOG_v1 {"time_micros": 1765004734947173, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:34.947206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 351544, prev total WAL file size 351544, number of live WAL files 2.
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734947826, "job": 31, "event": "table_file_deletion", "file_number": 53}
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734949754, "job": 31, "event": "table_file_deletion", "file_number": 51}
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:34.950027) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(227KB)], [54(9502KB)]
Dec 06 07:05:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734950069, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 9963140, "oldest_snapshot_seqno": -1}
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5458 keys, 7990695 bytes, temperature: kUnknown
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004735013603, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 7990695, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7955548, "index_size": 20381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 141266, "raw_average_key_size": 25, "raw_value_size": 7858367, "raw_average_value_size": 1439, "num_data_blocks": 813, "num_entries": 5458, "num_filter_entries": 5458, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765004734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:35.013842) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 7990695 bytes
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:35.015105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.7 rd, 125.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.3 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(77.0) write-amplify(34.3) OK, records in: 5975, records dropped: 517 output_compression: NoCompression
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:35.015121) EVENT_LOG_v1 {"time_micros": 1765004735015114, "job": 32, "event": "compaction_finished", "compaction_time_micros": 63582, "compaction_time_cpu_micros": 21514, "output_level": 6, "num_output_files": 1, "total_output_size": 7990695, "num_input_records": 5975, "num_output_records": 5458, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004735015255, "job": 32, "event": "table_file_deletion", "file_number": 56}
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004735016749, "job": 32, "event": "table_file_deletion", "file_number": 54}
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:34.949927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:35.016778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:35.016782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:35.016783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:35.016785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:05:35.016786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
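The six "Manual compaction starting" stamps above are queued column-family compactions draining one at a time; JOB 32 just before them shows a full cycle: one L0 file (227 KB, file 56) plus one L6 file (9502 KB, file 54) in, one 7,990,695-byte table out, write-amplify 34.3, read-write-amplify 77.0. Those factors are plain ratios over byte counts already present in these lines; a quick check in Python (formulas as I read RocksDB's definitions, with the KB-rounded sizes from the summary line, hence the ~0.1 drift):

    # Amplification figures for ceph-mon's RocksDB JOB 32, from the lines above.
    l0_in = 227 * 1024          # input L0 file 000056 (227 KB)
    l6_in = 9502 * 1024         # input L6 file 000054 (9502 KB)
    out = 7_990_695             # output file 000057, from table_file_creation

    write_amp = out / l0_in                    # bytes written per byte entering L0
    rw_amp = (l0_in + l6_in + out) / l0_in     # bytes read + written per byte entering L0

    print(f"write-amplify {write_amp:.1f}")    # ~34.4 (log: 34.3)
    print(f"read-write-amplify {rw_amp:.1f}")  # ~77.2 (log: 77.0)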
Dec 06 07:05:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 07:05:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:35.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
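These radosgw "beast:" lines recur every ~2 s from 192.168.122.100 and .102, always anonymous "HEAD / HTTP/1.0" with status 200 and near-zero latency: the signature of external health probes rather than S3 traffic. For eyeballing their rate or latency, a throwaway parser for this exact line shape (the regex is inferred from these lines, not an official beast format):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous '
            '[06/Dec/2025:07:05:35.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.002000053s')
    m = BEAST.search(line)
    print(m['ip'], m['req'], m['status'], float(m['lat']))
    # 192.168.122.102 HEAD / HTTP/1.0 200 0.002000053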
Dec 06 07:05:35 compute-1 nova_compute[226101]: 2025-12-06 07:05:35.523 226109 DEBUG nova.network.neutron [-] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:05:35 compute-1 nova_compute[226101]: 2025-12-06 07:05:35.544 226109 INFO nova.compute.manager [-] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Took 2.34 seconds to deallocate network for instance.
Dec 06 07:05:35 compute-1 nova_compute[226101]: 2025-12-06 07:05:35.587 226109 DEBUG oslo_concurrency.lockutils [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:35 compute-1 nova_compute[226101]: 2025-12-06 07:05:35.588 226109 DEBUG oslo_concurrency.lockutils [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:35 compute-1 nova_compute[226101]: 2025-12-06 07:05:35.639 226109 DEBUG oslo_concurrency.processutils [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2942269010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1388583674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:36 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3011554113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:36 compute-1 nova_compute[226101]: 2025-12-06 07:05:36.048 226109 DEBUG oslo_concurrency.processutils [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
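Each resource-tracker pass shells out to `ceph df` (the exact command is on the "Running cmd" line, and the mon's audit log answers with the matching "prefix": "df" dispatch). Stripped of the oslo_concurrency wrapper, the call is roughly the following; the JSON key names match recent Ceph releases but treat them as an assumption:

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        timeout=30,
    )
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])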
Dec 06 07:05:36 compute-1 nova_compute[226101]: 2025-12-06 07:05:36.054 226109 DEBUG nova.compute.provider_tree [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:05:36 compute-1 nova_compute[226101]: 2025-12-06 07:05:36.071 226109 DEBUG nova.scheduler.client.report [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
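The inventory dict reported above fixes the schedulable capacity: Placement treats each resource class as (total - reserved) * allocation_ratio, which for this 8-vCPU / 7680 MB / 20 GB node gives 32 VCPUs, 7168 MB of RAM and ~17.1 GB of disk:

    # Capacity math for the inventory reported above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, round(cap, 2))
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1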
Dec 06 07:05:36 compute-1 nova_compute[226101]: 2025-12-06 07:05:36.089 226109 DEBUG oslo_concurrency.lockutils [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:36 compute-1 nova_compute[226101]: 2025-12-06 07:05:36.112 226109 INFO nova.scheduler.client.report [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Deleted allocations for instance 76601abc-9380-4d0e-8360-39afb25adf0c
Dec 06 07:05:36 compute-1 nova_compute[226101]: 2025-12-06 07:05:36.201 226109 DEBUG oslo_concurrency.lockutils [None req-2dd7b9f3-e265-425c-9634-611752f6b9d4 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:36.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:37 compute-1 nova_compute[226101]: 2025-12-06 07:05:37.047 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:37.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:37 compute-1 ceph-mon[81689]: pgmap v1347: 305 pgs: 305 active+clean; 287 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 154 op/s
Dec 06 07:05:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1868520597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3011554113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:38 compute-1 nova_compute[226101]: 2025-12-06 07:05:38.581 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:38 compute-1 ceph-mon[81689]: pgmap v1348: 305 pgs: 305 active+clean; 266 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.8 MiB/s wr, 275 op/s
Dec 06 07:05:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:38.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:39.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e174 e174: 3 total, 3 up, 3 in
Dec 06 07:05:40 compute-1 ceph-mon[81689]: pgmap v1349: 305 pgs: 305 active+clean; 266 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 230 op/s
Dec 06 07:05:40 compute-1 ceph-mon[81689]: osdmap e174: 3 total, 3 up, 3 in
Dec 06 07:05:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3372856054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:41.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:05:41 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/526554932' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:05:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:05:41 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/526554932' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:05:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/526554932' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:05:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/526554932' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
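These audit lines expose the wire format of mon commands: a JSON object with a "prefix" plus arguments, dispatched against the peon. The same two calls can be issued from Python through librados' mon_command; a sketch assuming python3-rados and the client.openstack keyring seen above:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], ret, out[:60])
    finally:
        cluster.shutdown()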
Dec 06 07:05:42 compute-1 nova_compute[226101]: 2025-12-06 07:05:42.051 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:42 compute-1 ceph-mon[81689]: pgmap v1351: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 162 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 8.2 MiB/s rd, 6.8 MiB/s wr, 435 op/s
Dec 06 07:05:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:42.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:43.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:43 compute-1 nova_compute[226101]: 2025-12-06 07:05:43.584 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:44 compute-1 ceph-mon[81689]: pgmap v1352: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 121 MiB data, 450 MiB used, 21 GiB / 21 GiB avail; 9.4 MiB/s rd, 6.8 MiB/s wr, 471 op/s
Dec 06 07:05:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:44.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:45.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:45 compute-1 nova_compute[226101]: 2025-12-06 07:05:45.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:45 compute-1 nova_compute[226101]: 2025-12-06 07:05:45.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:45 compute-1 nova_compute[226101]: 2025-12-06 07:05:45.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:05:45 compute-1 nova_compute[226101]: 2025-12-06 07:05:45.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:45 compute-1 nova_compute[226101]: 2025-12-06 07:05:45.618 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:45 compute-1 nova_compute[226101]: 2025-12-06 07:05:45.618 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:45 compute-1 nova_compute[226101]: 2025-12-06 07:05:45.619 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
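The acquire/held/released triplets around "compute_resources" come from oslo.concurrency's lockutils, which nova uses to serialize every resource-tracker mutation behind one named semaphore. Reduced to its skeleton (decorator and usage as in oslo.concurrency; the function itself is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_usage():
        # Every function decorated with the same lock name runs exclusively,
        # emitting "acquired ... waited Ns" / "released ... held Ns" debug
        # lines like the ones above.
        ...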
Dec 06 07:05:45 compute-1 nova_compute[226101]: 2025-12-06 07:05:45.619 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:05:45 compute-1 nova_compute[226101]: 2025-12-06 07:05:45.619 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/249549195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.102 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:46 compute-1 ceph-mon[81689]: pgmap v1353: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.6 MiB/s wr, 400 op/s
Dec 06 07:05:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/249549195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2187097515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.254 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.255 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4849MB free_disk=20.9427490234375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
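The hypervisor resource view above enumerates 11 PCI functions; grouping them by vendor shows the split between emulated Intel chipset functions and virtio devices. A quick tally (two entries transcribed from the line above, fields trimmed; the other nine elided):

    from collections import Counter

    # Subset of the pci_devices list logged above.
    pci_devices = [
        {"address": "0000:00:01.3", "vendor_id": "8086", "product_id": "7113"},
        {"address": "0000:00:06.0", "vendor_id": "1af4", "product_id": "1005"},
        # ... 9 more entries as in the log ...
    ]
    print(Counter(d["vendor_id"] for d in pci_devices))
    # Over the full logged list: 5 x 8086 (PIIX chipset functions), 6 x 1af4 (virtio)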
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.255 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.256 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.336 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.337 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.358 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/173234693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.758 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.765 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.782 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.807 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:05:46 compute-1 nova_compute[226101]: 2025-12-06 07:05:46.807 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:46.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:47 compute-1 nova_compute[226101]: 2025-12-06 07:05:47.016 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004732.0154762, 76601abc-9380-4d0e-8360-39afb25adf0c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:05:47 compute-1 nova_compute[226101]: 2025-12-06 07:05:47.017 226109 INFO nova.compute.manager [-] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] VM Stopped (Lifecycle Event)
Dec 06 07:05:47 compute-1 nova_compute[226101]: 2025-12-06 07:05:47.050 226109 DEBUG nova.compute.manager [None req-30916654-4ee1-408b-a1e6-8b93ccdaf333 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:47 compute-1 nova_compute[226101]: 2025-12-06 07:05:47.089 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:47.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/173234693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3455870158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e175 e175: 3 total, 3 up, 3 in
Dec 06 07:05:47 compute-1 nova_compute[226101]: 2025-12-06 07:05:47.807 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:47 compute-1 nova_compute[226101]: 2025-12-06 07:05:47.808 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:48 compute-1 ceph-mon[81689]: pgmap v1354: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 34 KiB/s wr, 266 op/s
Dec 06 07:05:48 compute-1 ceph-mon[81689]: osdmap e175: 3 total, 3 up, 3 in
Dec 06 07:05:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/109364209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3546733337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:48 compute-1 nova_compute[226101]: 2025-12-06 07:05:48.587 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:48 compute-1 nova_compute[226101]: 2025-12-06 07:05:48.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:48 compute-1 nova_compute[226101]: 2025-12-06 07:05:48.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:05:48 compute-1 nova_compute[226101]: 2025-12-06 07:05:48.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:05:48 compute-1 nova_compute[226101]: 2025-12-06 07:05:48.666 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:05:48 compute-1 nova_compute[226101]: 2025-12-06 07:05:48.667 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:48.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:49.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:49 compute-1 nova_compute[226101]: 2025-12-06 07:05:49.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:50 compute-1 ceph-mon[81689]: pgmap v1356: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 36 KiB/s wr, 282 op/s
Dec 06 07:05:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:50.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:05:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:51.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:05:51 compute-1 nova_compute[226101]: 2025-12-06 07:05:51.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:52 compute-1 nova_compute[226101]: 2025-12-06 07:05:52.093 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:52 compute-1 ceph-mon[81689]: pgmap v1357: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 110 op/s
Dec 06 07:05:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:52.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:05:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:53.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:05:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:53 compute-1 nova_compute[226101]: 2025-12-06 07:05:53.589 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:54 compute-1 sudo[239382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:54 compute-1 sudo[239382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:54 compute-1 sudo[239382]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:54 compute-1 sudo[239407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:05:54 compute-1 sudo[239407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:54 compute-1 sudo[239407]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:54 compute-1 sudo[239432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:54 compute-1 sudo[239432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:54 compute-1 sudo[239432]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:54 compute-1 sudo[239457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:05:54 compute-1 sudo[239457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:54 compute-1 ceph-mon[81689]: pgmap v1358: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 635 KiB/s rd, 13 KiB/s wr, 56 op/s
Dec 06 07:05:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:54.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:54 compute-1 sudo[239457]: pam_unix(sudo:session): session closed for user root
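The sudo burst above is cephadm's host-facts cycle: the orchestrator first validates passwordless sudo (/bin/true), locates python3, then runs the fsid-scoped, digest-named copy of the cephadm binary with gather-facts. Reproduced by hand it would look like the sketch below (path exactly as in the audit line; that the output is a JSON fact dump is an assumption, not quoted from the log):

    import subprocess

    # Digest-named cephadm copy, per the sudo audit line above.
    cephadm = ("/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    facts = subprocess.check_output(
        ["sudo", "/bin/python3", cephadm, "--timeout", "895", "gather-facts"],
        text=True,
    )
    print(facts[:200])  # host inventory: hostname, NICs, disks, memory, ...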
Dec 06 07:05:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:05:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:55.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.172 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Acquiring lock "1433f316-bd4f-4481-870f-80c64bc51202" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.173 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "1433f316-bd4f-4481-870f-80c64bc51202" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.194 226109 DEBUG nova.compute.manager [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.265 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.265 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.274 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.274 226109 INFO nova.compute.claims [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.449 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:05:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:05:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:05:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:05:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:05:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:05:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/30733717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.897 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.902 226109 DEBUG nova.compute.provider_tree [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.917 226109 DEBUG nova.scheduler.client.report [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.938 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.938 226109 DEBUG nova.compute.manager [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.979 226109 DEBUG nova.compute.manager [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.980 226109 DEBUG nova.network.neutron [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:05:55 compute-1 nova_compute[226101]: 2025-12-06 07:05:55.998 226109 INFO nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.013 226109 DEBUG nova.compute.manager [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:05:56 compute-1 podman[239534]: 2025-12-06 07:05:56.115410496 +0000 UTC m=+0.099692777 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.145 226109 DEBUG nova.compute.manager [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.148 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.149 226109 INFO nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Creating image(s)
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.183 226109 DEBUG nova.storage.rbd_utils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] rbd image 1433f316-bd4f-4481-870f-80c64bc51202_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.218 226109 DEBUG nova.storage.rbd_utils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] rbd image 1433f316-bd4f-4481-870f-80c64bc51202_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.252 226109 DEBUG nova.storage.rbd_utils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] rbd image 1433f316-bd4f-4481-870f-80c64bc51202_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.257 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.319 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
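Note the guard rails on that qemu-img probe: oslo wraps it in prlimit with a 1 GiB address-space cap and 30 s of CPU (--as=1073741824 --cpu=30), so a malformed base image cannot wedge the compute agent. The same invocation through oslo.concurrency directly, as a sketch:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef",
        "--force-share", "--output=json",
        # Same limits the log line shows.
        prlimit=processutils.ProcessLimits(address_space=1073741824, cpu_time=30),
    )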
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.321 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.322 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.322 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.348 226109 DEBUG nova.storage.rbd_utils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] rbd image 1433f316-bd4f-4481-870f-80c64bc51202_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.353 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1433f316-bd4f-4481-870f-80c64bc51202_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.518 226109 DEBUG nova.network.neutron [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.519 226109 DEBUG nova.compute.manager [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:05:56 compute-1 ceph-mon[81689]: pgmap v1359: 305 pgs: 305 active+clean; 122 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 636 KiB/s rd, 31 KiB/s wr, 56 op/s
Dec 06 07:05:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/30733717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:56.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:56 compute-1 nova_compute[226101]: 2025-12-06 07:05:56.946 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1433f316-bd4f-4481-870f-80c64bc51202_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.014 226109 DEBUG nova.storage.rbd_utils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] resizing rbd image 1433f316-bd4f-4481-870f-80c64bc51202_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:05:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:57.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.124 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.170 226109 DEBUG nova.objects.instance [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lazy-loading 'migration_context' on Instance uuid 1433f316-bd4f-4481-870f-80c64bc51202 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.194 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.194 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Ensure instance console log exists: /var/lib/nova/instances/1433f316-bd4f-4481-870f-80c64bc51202/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.195 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.195 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.196 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.197 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.201 226109 WARNING nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.207 226109 DEBUG nova.virt.libvirt.host [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.207 226109 DEBUG nova.virt.libvirt.host [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.210 226109 DEBUG nova.virt.libvirt.host [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.211 226109 DEBUG nova.virt.libvirt.host [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.212 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.212 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.212 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.213 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.213 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.213 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.213 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.214 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.214 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.214 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.214 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.214 226109 DEBUG nova.virt.hardware [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.217 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e176 e176: 3 total, 3 up, 3 in
Dec 06 07:05:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:05:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1169664722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.629 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.661 226109 DEBUG nova.storage.rbd_utils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] rbd image 1433f316-bd4f-4481-870f-80c64bc51202_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:57 compute-1 nova_compute[226101]: 2025-12-06 07:05:57.667 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:05:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/273254576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.122 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.125 226109 DEBUG nova.objects.instance [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1433f316-bd4f-4481-870f-80c64bc51202 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.152 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:05:58 compute-1 nova_compute[226101]:   <uuid>1433f316-bd4f-4481-870f-80c64bc51202</uuid>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   <name>instance-00000021</name>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerExternalEventsTest-server-572286910</nova:name>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:05:57</nova:creationTime>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <nova:user uuid="5e66cd58a9a94a0d852ed7c8b792e595">tempest-ServerExternalEventsTest-1157073264-project-member</nova:user>
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <nova:project uuid="dad6457c27494a8a9e5f2004a841cf60">tempest-ServerExternalEventsTest-1157073264</nova:project>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <system>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <entry name="serial">1433f316-bd4f-4481-870f-80c64bc51202</entry>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <entry name="uuid">1433f316-bd4f-4481-870f-80c64bc51202</entry>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     </system>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   <os>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   </os>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   <features>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   </features>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1433f316-bd4f-4481-870f-80c64bc51202_disk">
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       </source>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1433f316-bd4f-4481-870f-80c64bc51202_disk.config">
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       </source>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:05:58 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/1433f316-bd4f-4481-870f-80c64bc51202/console.log" append="off"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <video>
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     </video>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:05:58 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:05:58 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:05:58 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:05:58 compute-1 nova_compute[226101]: </domain>
Dec 06 07:05:58 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.223 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.224 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.224 226109 INFO nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Using config drive
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.245 226109 DEBUG nova.storage.rbd_utils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] rbd image 1433f316-bd4f-4481-870f-80c64bc51202_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.481 226109 INFO nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Creating config drive at /var/lib/nova/instances/1433f316-bd4f-4481-870f-80c64bc51202/disk.config
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.486 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1433f316-bd4f-4481-870f-80c64bc51202/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsxppoyjb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:58 compute-1 ceph-mon[81689]: pgmap v1360: 305 pgs: 305 active+clean; 122 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 637 KiB/s rd, 35 KiB/s wr, 57 op/s
Dec 06 07:05:58 compute-1 ceph-mon[81689]: osdmap e176: 3 total, 3 up, 3 in
Dec 06 07:05:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1169664722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/273254576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e177 e177: 3 total, 3 up, 3 in
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.592 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.612 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1433f316-bd4f-4481-870f-80c64bc51202/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsxppoyjb" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.638 226109 DEBUG nova.storage.rbd_utils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] rbd image 1433f316-bd4f-4481-870f-80c64bc51202_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.641 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1433f316-bd4f-4481-870f-80c64bc51202/disk.config 1433f316-bd4f-4481-870f-80c64bc51202_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:58.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.876 226109 DEBUG oslo_concurrency.processutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1433f316-bd4f-4481-870f-80c64bc51202/disk.config 1433f316-bd4f-4481-870f-80c64bc51202_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:58 compute-1 nova_compute[226101]: 2025-12-06 07:05:58.877 226109 INFO nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Deleting local config drive /var/lib/nova/instances/1433f316-bd4f-4481-870f-80c64bc51202/disk.config because it was imported into RBD.
Dec 06 07:05:58 compute-1 systemd-machined[190302]: New machine qemu-18-instance-00000021.
Dec 06 07:05:58 compute-1 systemd[1]: Started Virtual Machine qemu-18-instance-00000021.
Dec 06 07:05:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:05:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:59.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.494 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004759.4935894, 1433f316-bd4f-4481-870f-80c64bc51202 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.494 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] VM Resumed (Lifecycle Event)
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.498 226109 DEBUG nova.compute.manager [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.498 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.504 226109 INFO nova.virt.libvirt.driver [-] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Instance spawned successfully.
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.504 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.546 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.550 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.550 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.551 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.551 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.552 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.552 226109 DEBUG nova.virt.libvirt.driver [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.556 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.591 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.592 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004759.4944496, 1433f316-bd4f-4481-870f-80c64bc51202 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.592 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] VM Started (Lifecycle Event)
Dec 06 07:05:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e178 e178: 3 total, 3 up, 3 in
Dec 06 07:05:59 compute-1 ceph-mon[81689]: osdmap e177: 3 total, 3 up, 3 in
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.619 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.622 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.648 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.650 226109 INFO nova.compute.manager [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Took 3.50 seconds to spawn the instance on the hypervisor.
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.650 226109 DEBUG nova.compute.manager [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.707 226109 INFO nova.compute.manager [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Took 4.47 seconds to build instance.
Dec 06 07:05:59 compute-1 nova_compute[226101]: 2025-12-06 07:05:59.726 226109 DEBUG oslo_concurrency.lockutils [None req-8cbd0056-b6eb-4af7-9bcd-699b7611db08 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "1433f316-bd4f-4481-870f-80c64bc51202" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.544 226109 DEBUG nova.compute.manager [None req-e55afc03-60e8-4b79-960c-21efce2b690d 51ddcd9aef3f4694b510bfe0c16a2eb5 9f25949b55a44fdd965efc6e6f4460e8 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.546 226109 DEBUG nova.compute.manager [None req-e55afc03-60e8-4b79-960c-21efce2b690d 51ddcd9aef3f4694b510bfe0c16a2eb5 9f25949b55a44fdd965efc6e6f4460e8 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.546 226109 DEBUG oslo_concurrency.lockutils [None req-e55afc03-60e8-4b79-960c-21efce2b690d 51ddcd9aef3f4694b510bfe0c16a2eb5 9f25949b55a44fdd965efc6e6f4460e8 - - default default] Acquiring lock "refresh_cache-1433f316-bd4f-4481-870f-80c64bc51202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.546 226109 DEBUG oslo_concurrency.lockutils [None req-e55afc03-60e8-4b79-960c-21efce2b690d 51ddcd9aef3f4694b510bfe0c16a2eb5 9f25949b55a44fdd965efc6e6f4460e8 - - default default] Acquired lock "refresh_cache-1433f316-bd4f-4481-870f-80c64bc51202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.546 226109 DEBUG nova.network.neutron [None req-e55afc03-60e8-4b79-960c-21efce2b690d 51ddcd9aef3f4694b510bfe0c16a2eb5 9f25949b55a44fdd965efc6e6f4460e8 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:06:00 compute-1 ceph-mon[81689]: pgmap v1363: 305 pgs: 305 active+clean; 122 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 126 KiB/s rd, 27 KiB/s wr, 15 op/s
Dec 06 07:06:00 compute-1 ceph-mon[81689]: osdmap e178: 3 total, 3 up, 3 in
Dec 06 07:06:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:06:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:06:00 compute-1 sudo[239903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:06:00 compute-1 sudo[239903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:06:00 compute-1 sudo[239903]: pam_unix(sudo:session): session closed for user root
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.750 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Acquiring lock "1433f316-bd4f-4481-870f-80c64bc51202" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.751 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "1433f316-bd4f-4481-870f-80c64bc51202" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.751 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Acquiring lock "1433f316-bd4f-4481-870f-80c64bc51202-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.752 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "1433f316-bd4f-4481-870f-80c64bc51202-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.752 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "1433f316-bd4f-4481-870f-80c64bc51202-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.753 226109 INFO nova.compute.manager [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Terminating instance
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.754 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Acquiring lock "refresh_cache-1433f316-bd4f-4481-870f-80c64bc51202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:00 compute-1 nova_compute[226101]: 2025-12-06 07:06:00.765 226109 DEBUG nova.network.neutron [None req-e55afc03-60e8-4b79-960c-21efce2b690d 51ddcd9aef3f4694b510bfe0c16a2eb5 9f25949b55a44fdd965efc6e6f4460e8 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:00 compute-1 sudo[239928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:06:00 compute-1 sudo[239928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:06:00 compute-1 sudo[239928]: pam_unix(sudo:session): session closed for user root
Dec 06 07:06:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:00.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:06:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:01.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:06:01 compute-1 nova_compute[226101]: 2025-12-06 07:06:01.170 226109 DEBUG nova.network.neutron [None req-e55afc03-60e8-4b79-960c-21efce2b690d 51ddcd9aef3f4694b510bfe0c16a2eb5 9f25949b55a44fdd965efc6e6f4460e8 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:01 compute-1 nova_compute[226101]: 2025-12-06 07:06:01.186 226109 DEBUG oslo_concurrency.lockutils [None req-e55afc03-60e8-4b79-960c-21efce2b690d 51ddcd9aef3f4694b510bfe0c16a2eb5 9f25949b55a44fdd965efc6e6f4460e8 - - default default] Releasing lock "refresh_cache-1433f316-bd4f-4481-870f-80c64bc51202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:01 compute-1 nova_compute[226101]: 2025-12-06 07:06:01.187 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Acquired lock "refresh_cache-1433f316-bd4f-4481-870f-80c64bc51202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:01 compute-1 nova_compute[226101]: 2025-12-06 07:06:01.187 226109 DEBUG nova.network.neutron [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:06:01 compute-1 nova_compute[226101]: 2025-12-06 07:06:01.617 226109 DEBUG nova.network.neutron [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:06:01.621 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:06:01.622 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:06:01.622 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.003 226109 DEBUG nova.network.neutron [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.021 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Releasing lock "refresh_cache-1433f316-bd4f-4481-870f-80c64bc51202" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.022 226109 DEBUG nova.compute.manager [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:06:02 compute-1 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000021.scope: Deactivated successfully.
Dec 06 07:06:02 compute-1 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000021.scope: Consumed 3.200s CPU time.
Dec 06 07:06:02 compute-1 systemd-machined[190302]: Machine qemu-18-instance-00000021 terminated.
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.126 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.241 226109 INFO nova.virt.libvirt.driver [-] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Instance destroyed successfully.
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.241 226109 DEBUG nova.objects.instance [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lazy-loading 'resources' on Instance uuid 1433f316-bd4f-4481-870f-80c64bc51202 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.628 226109 INFO nova.virt.libvirt.driver [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Deleting instance files /var/lib/nova/instances/1433f316-bd4f-4481-870f-80c64bc51202_del
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.629 226109 INFO nova.virt.libvirt.driver [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Deletion of /var/lib/nova/instances/1433f316-bd4f-4481-870f-80c64bc51202_del complete
Dec 06 07:06:02 compute-1 ceph-mon[81689]: pgmap v1365: 305 pgs: 305 active+clean; 209 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.3 MiB/s wr, 184 op/s
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.724 226109 INFO nova.compute.manager [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Took 0.70 seconds to destroy the instance on the hypervisor.
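The teardown sequence above runs hypervisor-side: the qemu machine scope deactivates, systemd-machined reports the machine terminated, the libvirt driver logs "Instance destroyed successfully", and the `_del` staging directory is removed. A minimal libvirt-python sketch of just the destroy-and-undefine step, using the domain name from the systemd scope (an illustration under those assumptions, not nova's driver code, which also handles volumes, VIFs, and instance files):

```python
# Sketch of the hypervisor-side teardown logged above, via
# libvirt-python. Domain name taken from the machine scope name;
# nova's _shutdown_instance does considerably more than this.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName("instance-00000021")
    if dom.isActive():
        dom.destroy()   # hard power-off; the scope then deactivates
    dom.undefine()      # drop the persistent domain definition
finally:
    conn.close()
```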
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.725 226109 DEBUG oslo.service.loopingcall [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.725 226109 DEBUG nova.compute.manager [-] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.725 226109 DEBUG nova.network.neutron [-] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:06:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:02.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
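The recurring radosgw beast entries are anonymous "HEAD /" probes from the other cluster hosts, i.e., load-balancer-style health checks that return 200 while RGW is up. A sketch of issuing the same probe (the RGW port is an assumption here; the log does not record it):

```python
# Sketch of the anonymous health probe the beast frontend logs above:
# an unauthenticated "HEAD /" expected to return 200. The port (8080)
# is an assumption; substitute the deployment's RGW endpoint.
import http.client

conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)  # expect 200, matching the journal entries
conn.close()
```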
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.955 226109 DEBUG nova.network.neutron [-] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:02 compute-1 nova_compute[226101]: 2025-12-06 07:06:02.983 226109 DEBUG nova.network.neutron [-] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:03 compute-1 nova_compute[226101]: 2025-12-06 07:06:03.004 226109 INFO nova.compute.manager [-] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Took 0.28 seconds to deallocate network for instance.
Dec 06 07:06:03 compute-1 nova_compute[226101]: 2025-12-06 07:06:03.065 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:03 compute-1 nova_compute[226101]: 2025-12-06 07:06:03.066 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:03 compute-1 podman[239974]: 2025-12-06 07:06:03.084770522 +0000 UTC m=+0.069522392 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:06:03 compute-1 podman[239975]: 2025-12-06 07:06:03.084821854 +0000 UTC m=+0.059493311 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Dec 06 07:06:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:06:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:03.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:06:03 compute-1 nova_compute[226101]: 2025-12-06 07:06:03.153 226109 DEBUG oslo_concurrency.processutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/931065436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:03 compute-1 nova_compute[226101]: 2025-12-06 07:06:03.584 226109 DEBUG oslo_concurrency.processutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
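The subprocess pair above shows nova shelling out to `ceph df --format=json` (the resource tracker's disk-usage refresh), with the mon audit log recording the matching `{"prefix": "df", "format": "json"}` dispatch. A hedged sketch of running the same query and reading its output (field names follow the usual `ceph df` JSON schema; requires a reachable cluster and the `openstack` keyring):

```python
# Sketch: issue the same "ceph df" query nova_compute logs above and
# summarize capacity. Field names per the standard ceph df JSON schema;
# assumes the cluster and client.openstack keyring are reachable.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
)
df = json.loads(out)

stats = df["stats"]
print("total bytes:", stats["total_bytes"],
      "avail bytes:", stats["total_avail_bytes"])
for pool in df["pools"]:
    print(pool["name"], pool["stats"]["bytes_used"])
```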
Dec 06 07:06:03 compute-1 nova_compute[226101]: 2025-12-06 07:06:03.591 226109 DEBUG nova.compute.provider_tree [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:03 compute-1 nova_compute[226101]: 2025-12-06 07:06:03.593 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:03 compute-1 nova_compute[226101]: 2025-12-06 07:06:03.611 226109 DEBUG nova.scheduler.client.report [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
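The inventory dict logged above is what bounds scheduling on this node: placement's documented capacity rule is (total - reserved) * allocation_ratio per resource class. A quick worked check against the logged values:

```python
# Worked check of placement's capacity rule applied to the inventory
# logged above: capacity = (total - reserved) * allocation_ratio.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
```

So this host can hold allocations up to 32 VCPU, 7168 MB of RAM, and 17.1 GB of disk against provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83.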
Dec 06 07:06:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4019730247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/931065436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:03 compute-1 nova_compute[226101]: 2025-12-06 07:06:03.658 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:03 compute-1 nova_compute[226101]: 2025-12-06 07:06:03.724 226109 INFO nova.scheduler.client.report [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Deleted allocations for instance 1433f316-bd4f-4481-870f-80c64bc51202
Dec 06 07:06:03 compute-1 nova_compute[226101]: 2025-12-06 07:06:03.823 226109 DEBUG oslo_concurrency.lockutils [None req-55d8102b-8a49-4db3-9366-0bb7d41b0f93 5e66cd58a9a94a0d852ed7c8b792e595 dad6457c27494a8a9e5f2004a841cf60 - - default default] Lock "1433f316-bd4f-4481-870f-80c64bc51202" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:04.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:05 compute-1 ceph-mon[81689]: pgmap v1366: 305 pgs: 305 active+clean; 250 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 341 op/s
Dec 06 07:06:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:05.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:06 compute-1 ceph-mon[81689]: pgmap v1367: 305 pgs: 305 active+clean; 197 MiB data, 476 MiB used, 21 GiB / 21 GiB avail; 9.4 MiB/s rd, 9.1 MiB/s wr, 352 op/s
Dec 06 07:06:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:06.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:07.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:07 compute-1 nova_compute[226101]: 2025-12-06 07:06:07.131 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e179 e179: 3 total, 3 up, 3 in
Dec 06 07:06:07 compute-1 nova_compute[226101]: 2025-12-06 07:06:07.687 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquiring lock "dd7ff314-b789-4550-ab9a-44dc02948350" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:07 compute-1 nova_compute[226101]: 2025-12-06 07:06:07.688 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:07 compute-1 nova_compute[226101]: 2025-12-06 07:06:07.688 226109 INFO nova.compute.manager [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Unshelving
Dec 06 07:06:07 compute-1 nova_compute[226101]: 2025-12-06 07:06:07.820 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:07 compute-1 nova_compute[226101]: 2025-12-06 07:06:07.821 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:07 compute-1 nova_compute[226101]: 2025-12-06 07:06:07.826 226109 DEBUG nova.objects.instance [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'pci_requests' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:07 compute-1 nova_compute[226101]: 2025-12-06 07:06:07.849 226109 DEBUG nova.objects.instance [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'numa_topology' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:07 compute-1 nova_compute[226101]: 2025-12-06 07:06:07.874 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:06:07 compute-1 nova_compute[226101]: 2025-12-06 07:06:07.874 226109 INFO nova.compute.claims [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:06:08 compute-1 nova_compute[226101]: 2025-12-06 07:06:08.056 226109 DEBUG oslo_concurrency.processutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:08 compute-1 ceph-mon[81689]: pgmap v1368: 305 pgs: 305 active+clean; 122 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 8.3 MiB/s rd, 8.0 MiB/s wr, 340 op/s
Dec 06 07:06:08 compute-1 ceph-mon[81689]: osdmap e179: 3 total, 3 up, 3 in
Dec 06 07:06:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2488208435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:08 compute-1 nova_compute[226101]: 2025-12-06 07:06:08.516 226109 DEBUG oslo_concurrency.processutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:08 compute-1 nova_compute[226101]: 2025-12-06 07:06:08.523 226109 DEBUG nova.compute.provider_tree [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:08 compute-1 nova_compute[226101]: 2025-12-06 07:06:08.543 226109 DEBUG nova.scheduler.client.report [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:06:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:08 compute-1 nova_compute[226101]: 2025-12-06 07:06:08.569 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:08 compute-1 nova_compute[226101]: 2025-12-06 07:06:08.594 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:06:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4213712616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:06:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:06:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4213712616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:06:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:08.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:09.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2488208435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2825645883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4213712616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:06:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4213712616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:06:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2450534043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:09 compute-1 nova_compute[226101]: 2025-12-06 07:06:09.625 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquiring lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:09 compute-1 nova_compute[226101]: 2025-12-06 07:06:09.626 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquired lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:09 compute-1 nova_compute[226101]: 2025-12-06 07:06:09.626 226109 DEBUG nova.network.neutron [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:06:10 compute-1 ceph-mon[81689]: pgmap v1370: 305 pgs: 305 active+clean; 122 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.2 MiB/s wr, 305 op/s
Dec 06 07:06:10 compute-1 nova_compute[226101]: 2025-12-06 07:06:10.621 226109 DEBUG nova.network.neutron [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:10.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:11 compute-1 nova_compute[226101]: 2025-12-06 07:06:11.087 226109 DEBUG nova.network.neutron [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:11.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:11 compute-1 nova_compute[226101]: 2025-12-06 07:06:11.628 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Releasing lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:11 compute-1 nova_compute[226101]: 2025-12-06 07:06:11.630 226109 DEBUG nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:06:11 compute-1 nova_compute[226101]: 2025-12-06 07:06:11.631 226109 INFO nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Creating image(s)
Dec 06 07:06:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:06:11.658 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:06:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:06:11.659 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:06:11 compute-1 nova_compute[226101]: 2025-12-06 07:06:11.661 226109 DEBUG nova.storage.rbd_utils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:11 compute-1 nova_compute[226101]: 2025-12-06 07:06:11.665 226109 DEBUG nova.objects.instance [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'trusted_certs' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:11 compute-1 nova_compute[226101]: 2025-12-06 07:06:11.669 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:11 compute-1 nova_compute[226101]: 2025-12-06 07:06:11.725 226109 DEBUG nova.storage.rbd_utils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:11 compute-1 nova_compute[226101]: 2025-12-06 07:06:11.751 226109 DEBUG nova.storage.rbd_utils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:11 compute-1 nova_compute[226101]: 2025-12-06 07:06:11.755 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquiring lock "38ac52abd1fc0fa364968ae21317812ffdddd816" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:11 compute-1 nova_compute[226101]: 2025-12-06 07:06:11.755 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "38ac52abd1fc0fa364968ae21317812ffdddd816" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:11 compute-1 ovn_controller[130279]: 2025-12-06T07:06:11Z|00142|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:06:12 compute-1 nova_compute[226101]: 2025-12-06 07:06:12.133 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:12 compute-1 ceph-mon[81689]: pgmap v1371: 305 pgs: 305 active+clean; 155 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 198 op/s
Dec 06 07:06:12 compute-1 nova_compute[226101]: 2025-12-06 07:06:12.543 226109 DEBUG nova.virt.libvirt.imagebackend [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/11dbd9ca-4725-400f-ade9-f5493c82f6e2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/11dbd9ca-4725-400f-ade9-f5493c82f6e2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:06:12 compute-1 nova_compute[226101]: 2025-12-06 07:06:12.598 226109 DEBUG nova.virt.libvirt.imagebackend [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/11dbd9ca-4725-400f-ade9-f5493c82f6e2/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:06:12 compute-1 nova_compute[226101]: 2025-12-06 07:06:12.598 226109 DEBUG nova.storage.rbd_utils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] cloning images/11dbd9ca-4725-400f-ade9-f5493c82f6e2@snap to None/dd7ff314-b789-4550-ab9a-44dc02948350_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:06:12 compute-1 nova_compute[226101]: 2025-12-06 07:06:12.715 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "38ac52abd1fc0fa364968ae21317812ffdddd816" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:12 compute-1 nova_compute[226101]: 2025-12-06 07:06:12.853 226109 DEBUG nova.objects.instance [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'migration_context' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:12.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:12 compute-1 nova_compute[226101]: 2025-12-06 07:06:12.926 226109 DEBUG nova.storage.rbd_utils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] flattening vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:06:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:13.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.272 226109 DEBUG nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Image rbd:vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
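The lines above show the RBD-backed unshelve path end to end: the driver selects the glance location `rbd://.../images/11dbd9ca-.../snap`, clones it into `vms/dd7ff314-..._disk`, then flattens the clone so the root disk no longer references the shelved-image snapshot. A hedged sketch of the same clone-then-flatten sequence with the rados/rbd Python bindings (pool and image names copied from the log; client id and conffile mirror the `--id openstack` CLI calls; the parent snapshot must be protected for the clone to succeed):

```python
# Sketch of the clone-then-flatten sequence logged above, using the
# rados/rbd Python bindings. Names come from the journal; connection
# details mirror the "ceph ... --id openstack" calls.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    with cluster.open_ioctx("images") as src, \
         cluster.open_ioctx("vms") as dst:
        rbd.RBD().clone(src, "11dbd9ca-4725-400f-ade9-f5493c82f6e2", "snap",
                        dst, "dd7ff314-b789-4550-ab9a-44dc02948350_disk")
        # Flatten copies parent data into the child, detaching it from
        # the glance snapshot (the "flattened successfully" line above).
        with rbd.Image(dst,
                       "dd7ff314-b789-4550-ab9a-44dc02948350_disk") as img:
            img.flatten()
finally:
    cluster.shutdown()
```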
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.273 226109 DEBUG nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.273 226109 DEBUG nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Ensure instance console log exists: /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.273 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.274 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.274 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.276 226109 DEBUG nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:05:42Z,direct_url=<?>,disk_format='raw',id=11dbd9ca-4725-400f-ade9-f5493c82f6e2,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-316061423-shelved',owner='e2384cf38a13417c9220db3aafff6b24',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:06:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.280 226109 WARNING nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.285 226109 DEBUG nova.virt.libvirt.host [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.286 226109 DEBUG nova.virt.libvirt.host [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.290 226109 DEBUG nova.virt.libvirt.host [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.291 226109 DEBUG nova.virt.libvirt.host [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.292 226109 DEBUG nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.292 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:05:42Z,direct_url=<?>,disk_format='raw',id=11dbd9ca-4725-400f-ade9-f5493c82f6e2,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-316061423-shelved',owner='e2384cf38a13417c9220db3aafff6b24',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:06:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.293 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.293 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.293 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.294 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.294 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.294 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.295 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.295 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.295 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.295 226109 DEBUG nova.virt.hardware [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
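The nova.virt.hardware lines above trace topology selection: with flavor and image limits/preferences all 0:0:0 and the default maxima of 65536, the only topology built for 1 vCPU is sockets=1, cores=1, threads=1. A simplified version of that enumeration rule, i.e., all (sockets, cores, threads) factorizations of the vCPU count within the limits (nova adds preference-based sorting on top):

```python
# Simplified form of the enumeration logged above: every
# (sockets, cores, threads) triple whose product equals the vCPU
# count, within the given maxima.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        for cores in range(1, min(vcpus, max_cores) + 1):
            for threads in range(1, min(vcpus, max_threads) + 1):
                if sockets * cores * threads == vcpus:
                    yield (sockets, cores, threads)

print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
```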
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.296 226109 DEBUG nova.objects.instance [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'vcpu_model' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.315 226109 DEBUG oslo_concurrency.processutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.595 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:13 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2867888828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.748 226109 DEBUG oslo_concurrency.processutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.776 226109 DEBUG nova.storage.rbd_utils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:13 compute-1 nova_compute[226101]: 2025-12-06 07:06:13.780 226109 DEBUG oslo_concurrency.processutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:14 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2772567101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.205 226109 DEBUG oslo_concurrency.processutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
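The `ceph mon dump --format=json` calls above are how the driver discovers the monitor addresses that reappear as the `<host>` elements inside the rbd `<source>` stanzas of the guest XML further down. A sketch of extracting those addresses from the mon dump JSON (field names per the usual mon dump schema; `public_addr` is typically "ip:port/nonce", hence the split, though formatting can vary by release):

```python
# Sketch: pull monitor addresses from "ceph mon dump --format=json",
# the same query logged above. These become the <host name=... port=...>
# entries in the rbd disk XML. public_addr looks like "ip:port/nonce".
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
)
for mon in json.loads(out)["mons"]:
    host, port_nonce = mon["public_addr"].rsplit(":", 1)
    print(host, port_nonce.split("/")[0])  # e.g. 192.168.122.100 6789
```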
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.208 226109 DEBUG nova.objects.instance [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'pci_devices' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.230 226109 DEBUG nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:06:14 compute-1 nova_compute[226101]:   <uuid>dd7ff314-b789-4550-ab9a-44dc02948350</uuid>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   <name>instance-0000001c</name>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-316061423</nova:name>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:06:13</nova:creationTime>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <nova:user uuid="3c3859554f86419a941a5924e80b88de">tempest-UnshelveToHostMultiNodesTest-311268056-project-member</nova:user>
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <nova:project uuid="e2384cf38a13417c9220db3aafff6b24">tempest-UnshelveToHostMultiNodesTest-311268056</nova:project>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="11dbd9ca-4725-400f-ade9-f5493c82f6e2"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <system>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <entry name="serial">dd7ff314-b789-4550-ab9a-44dc02948350</entry>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <entry name="uuid">dd7ff314-b789-4550-ab9a-44dc02948350</entry>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     </system>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   <os>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   </os>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   <features>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   </features>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk">
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       </source>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk.config">
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       </source>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:06:14 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/console.log" append="off"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <video>
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     </video>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <input type="keyboard" bus="usb"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:06:14 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:06:14 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:06:14 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:06:14 compute-1 nova_compute[226101]: </domain>
Dec 06 07:06:14 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
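The domain XML dumped above embeds Nova-specific metadata (flavor, owner, root image) in its <metadata> element under the nova XML namespace. A minimal sketch of pulling those fields back out with Python's standard library; the namespace URI below is the one Nova conventionally declares (adjust it to whatever xmlns:nova the dump actually carries), and DOMAIN_XML is a placeholder for the full <domain>...</domain> text above.

    # Sketch: extract the Nova metadata from a libvirt domain XML dump.
    # DOMAIN_XML stands in for the <domain>...</domain> text above.
    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.0"}  # assumed URI

    def nova_metadata(domain_xml):
        root = ET.fromstring(domain_xml)
        meta = root.find("metadata/nova:instance", NOVA_NS)
        flavor = meta.find("nova:flavor", NOVA_NS)
        owner = meta.find("nova:owner", NOVA_NS)
        return {
            "flavor": flavor.get("name"),  # "m1.nano" in the dump above
            "memory_mb": int(flavor.findtext("nova:memory", namespaces=NOVA_NS)),
            "vcpus": int(flavor.findtext("nova:vcpus", namespaces=NOVA_NS)),
            "project": owner.find("nova:project", NOVA_NS).get("uuid"),
        }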
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.292 226109 DEBUG nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.293 226109 DEBUG nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.293 226109 INFO nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Using config drive
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.318 226109 DEBUG nova.storage.rbd_utils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.335 226109 DEBUG nova.objects.instance [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'ec2_ids' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:14 compute-1 ceph-mon[81689]: pgmap v1372: 305 pgs: 305 active+clean; 184 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 655 KiB/s rd, 3.0 MiB/s wr, 118 op/s
Dec 06 07:06:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2867888828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2772567101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.371 226109 DEBUG nova.objects.instance [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'keypairs' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.731 226109 INFO nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Creating config drive at /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.736 226109 DEBUG oslo_concurrency.processutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps0b9wty8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.865 226109 DEBUG oslo_concurrency.processutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps0b9wty8" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
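The two processutils lines above show how the config drive is produced: Nova stages the metadata tree in a temporary directory (/tmp/tmps0b9wty8 here) and shells out to mkisofs with a fixed flag set and the config-2 volume label that cloud-init probes for. A minimal sketch of the same invocation, assuming mkisofs is installed; staging_dir and out_iso are hypothetical arguments.

    # Sketch: build a config-2 ISO with the flags from the command above.
    import subprocess

    def make_config_drive(staging_dir, out_iso):
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", out_iso,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute",
             "-quiet", "-J", "-r",
             "-V", "config-2",   # volume label cloud-init searches for
             staging_dir],
            check=True)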
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.892 226109 DEBUG nova.storage.rbd_utils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:14.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:14 compute-1 nova_compute[226101]: 2025-12-06 07:06:14.895 226109 DEBUG oslo_concurrency.processutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config dd7ff314-b789-4550-ab9a-44dc02948350_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:15.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.141 226109 DEBUG oslo_concurrency.processutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config dd7ff314-b789-4550-ab9a-44dc02948350_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.142 226109 INFO nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Deleting local config drive /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config because it was imported into RBD.
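With RBD as the images backend the ISO does not stay on local disk: it is imported into the vms pool and the local file is deleted, exactly as the import command and the "Deleting local config drive" line above record. A minimal sketch of that hand-off via the rbd CLI, reusing the flags from the logged command; treat the paths and pool name as examples.

    # Sketch: import a local file into RBD, then drop the redundant local copy.
    import os
    import subprocess

    def import_to_rbd(local_path, pool, image_name):
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_path, image_name,
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True)
        os.unlink(local_path)  # the RBD image is now the only copy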
Dec 06 07:06:15 compute-1 systemd-machined[190302]: New machine qemu-19-instance-0000001c.
Dec 06 07:06:15 compute-1 systemd[1]: Started Virtual Machine qemu-19-instance-0000001c.
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.897 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004775.8964791, dd7ff314-b789-4550-ab9a-44dc02948350 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.897 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] VM Resumed (Lifecycle Event)
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.900 226109 DEBUG nova.compute.manager [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.901 226109 DEBUG nova.virt.libvirt.driver [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.904 226109 INFO nova.virt.libvirt.driver [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance spawned successfully.
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.918 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.921 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.948 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.948 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004775.900352, dd7ff314-b789-4550-ab9a-44dc02948350 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.948 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] VM Started (Lifecycle Event)
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.970 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.975 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:15 compute-1 nova_compute[226101]: 2025-12-06 07:06:15.995 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] During sync_power_state the instance has a pending task (spawning). Skip.
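The two "Synchronizing instance power state" entries above are worth unpacking: the database still records power_state 4 (SHUTDOWN, left over from the shelved-offloaded period) while the hypervisor reports 1 (RUNNING), but because task_state is still 'spawning' the sync is skipped rather than fighting the in-flight unshelve. A minimal sketch of that guard, using the constants nova.compute.power_state defines.

    # Sketch of the "pending task ... Skip" guard seen above.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4  # nova.compute.power_state

    def should_sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # An in-flight operation (here: 'spawning') owns the instance.
            return False
        return db_power_state != vm_power_state

    # Values from the log: DB says SHUTDOWN(4), hypervisor says RUNNING(1).
    assert should_sync_power_state(SHUTDOWN, RUNNING, "spawning") is False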
Dec 06 07:06:16 compute-1 ceph-mon[81689]: pgmap v1373: 305 pgs: 305 active+clean; 247 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.9 MiB/s wr, 119 op/s
Dec 06 07:06:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1077149545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1384302522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e180 e180: 3 total, 3 up, 3 in
Dec 06 07:06:16 compute-1 nova_compute[226101]: 2025-12-06 07:06:16.780 226109 DEBUG nova.compute.manager [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:16 compute-1 nova_compute[226101]: 2025-12-06 07:06:16.861 226109 DEBUG oslo_concurrency.lockutils [None req-b5be6769-6fb4-4faa-9e27-249ef7cc3e16 fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 9.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
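The release line above closes the per-instance lock that do_unshelve_instance held for the full 9.173s operation; every "Acquiring lock"/"released" pair in this log comes from oslo.concurrency's lockutils. A minimal sketch of serializing work on one instance the same way, assuming oslo.concurrency is available (Nova reaches it through its own synchronized helper).

    # Sketch: per-instance critical section via oslo.concurrency, which emits
    # acquire/release debug lines like the ones in this log.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = "dd7ff314-b789-4550-ab9a-44dc02948350"  # from the log

    @lockutils.synchronized(INSTANCE_UUID)
    def do_unshelve_instance():
        # spawn the guest, build the config drive, power it on ...
        pass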
Dec 06 07:06:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:16.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:17 compute-1 nova_compute[226101]: 2025-12-06 07:06:17.136 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:17.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:17 compute-1 nova_compute[226101]: 2025-12-06 07:06:17.240 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004762.238692, 1433f316-bd4f-4481-870f-80c64bc51202 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:17 compute-1 nova_compute[226101]: 2025-12-06 07:06:17.240 226109 INFO nova.compute.manager [-] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] VM Stopped (Lifecycle Event)
Dec 06 07:06:17 compute-1 nova_compute[226101]: 2025-12-06 07:06:17.277 226109 DEBUG nova.compute.manager [None req-9f3952ba-1c56-4150-9e72-7973a5737055 - - - - - -] [instance: 1433f316-bd4f-4481-870f-80c64bc51202] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:17 compute-1 ceph-mon[81689]: osdmap e180: 3 total, 3 up, 3 in
Dec 06 07:06:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/117782657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1542511494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:06:17.661 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.308 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "dd7ff314-b789-4550-ab9a-44dc02948350" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.308 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.309 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "dd7ff314-b789-4550-ab9a-44dc02948350-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.309 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.309 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.310 226109 INFO nova.compute.manager [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Terminating instance
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.310 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.311 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquired lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.311 226109 DEBUG nova.network.neutron [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.451 226109 DEBUG nova.network.neutron [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.597 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:18 compute-1 ceph-mon[81689]: pgmap v1375: 305 pgs: 305 active+clean; 272 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 9.1 MiB/s wr, 198 op/s
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.730 226109 DEBUG nova.network.neutron [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.766 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Releasing lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.767 226109 DEBUG nova.compute.manager [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:06:18 compute-1 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Dec 06 07:06:18 compute-1 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001c.scope: Consumed 3.702s CPU time.
Dec 06 07:06:18 compute-1 systemd-machined[190302]: Machine qemu-19-instance-0000001c terminated.
Dec 06 07:06:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:18.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.987 226109 INFO nova.virt.libvirt.driver [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance destroyed successfully.
Dec 06 07:06:18 compute-1 nova_compute[226101]: 2025-12-06 07:06:18.988 226109 DEBUG nova.objects.instance [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lazy-loading 'resources' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.050 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.051 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.085 226109 DEBUG nova.compute.manager [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:06:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:19.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.164 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.165 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.170 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.170 226109 INFO nova.compute.claims [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.331 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.410 226109 INFO nova.virt.libvirt.driver [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Deleting instance files /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350_del
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.412 226109 INFO nova.virt.libvirt.driver [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Deletion of /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350_del complete
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.515 226109 INFO nova.compute.manager [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Took 0.75 seconds to destroy the instance on the hypervisor.
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.515 226109 DEBUG oslo.service.loopingcall [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.516 226109 DEBUG nova.compute.manager [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.516 226109 DEBUG nova.network.neutron [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.631 226109 DEBUG nova.network.neutron [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.652 226109 DEBUG nova.network.neutron [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.703 226109 INFO nova.compute.manager [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Took 0.19 seconds to deallocate network for instance.
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.746 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1535216654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.779 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
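The 0.448s command above is the RBD image backend asking Ceph for pool capacity while the resource tracker claims resources for the new instance. A minimal sketch of issuing the same query and reading the vms pool's stats; the JSON keys ('pools', 'name', 'stats') follow ceph df's current output format and should be verified against your Ceph release.

    # Sketch: query pool usage the way the resource tracker's ceph df call does.
    import json
    import subprocess

    def pool_stats(pool="vms"):
        out = subprocess.run(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            check=True, capture_output=True, text=True).stdout
        for p in json.loads(out)["pools"]:
            if p["name"] == pool:
                return p["stats"]  # e.g. bytes_used, max_avail
        raise LookupError("pool %r not reported by ceph df" % pool)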
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.784 226109 DEBUG nova.compute.provider_tree [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.799 226109 DEBUG nova.scheduler.client.report [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
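The inventory dict in the line above is what this host reports to Placement; the capacity the scheduler can actually consume per resource class is (total - reserved) * allocation_ratio, which is how 8 host VCPUs become 32 schedulable ones here. A worked sketch over the logged values:

    # Sketch: effective schedulable capacity from the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1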
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.827 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.827 226109 DEBUG nova.compute.manager [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.830 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.894 226109 DEBUG oslo_concurrency.processutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.943 226109 DEBUG nova.compute.manager [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.944 226109 DEBUG nova.network.neutron [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:06:19 compute-1 nova_compute[226101]: 2025-12-06 07:06:19.962 226109 INFO nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.015 226109 DEBUG nova.compute.manager [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.133 226109 DEBUG nova.compute.manager [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.135 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.135 226109 INFO nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Creating image(s)
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.162 226109 DEBUG nova.storage.rbd_utils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.190 226109 DEBUG nova.storage.rbd_utils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.217 226109 DEBUG nova.storage.rbd_utils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.221 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.242 226109 DEBUG nova.network.neutron [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.242 226109 DEBUG nova.compute.manager [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.281 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
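Before importing the cached base image, Nova probes it with qemu-img info and wraps the call in oslo_concurrency.prlimit (1 GiB address space, 30 s CPU), since qemu-img parses untrusted image headers. A minimal sketch mirroring the logged command; the path argument would be the cached base image from the log.

    # Sketch: resource-limited image probe, as in the prlimit-wrapped call above.
    import json
    import subprocess

    def image_info(path):
        out = subprocess.run(
            ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
             "--as=1073741824", "--cpu=30", "--",
             "env", "LC_ALL=C", "LANG=C",
             "qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)  # keys include 'format' and 'virtual-size'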
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.282 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.282 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.283 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.316 226109 DEBUG nova.storage.rbd_utils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.320 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1582107465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.386 226109 DEBUG oslo_concurrency.processutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.395 226109 DEBUG nova.compute.provider_tree [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.419 226109 DEBUG nova.scheduler.client.report [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.449 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.473 226109 INFO nova.scheduler.client.report [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Deleted allocations for instance dd7ff314-b789-4550-ab9a-44dc02948350
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.571 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.603 226109 DEBUG oslo_concurrency.lockutils [None req-6a74ee4b-c2c7-4898-8eb7-998d0ca71368 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:20 compute-1 ceph-mon[81689]: pgmap v1376: 305 pgs: 305 active+clean; 272 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 8.9 MiB/s wr, 193 op/s
Dec 06 07:06:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1535216654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1582107465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.639 226109 DEBUG nova.storage.rbd_utils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] resizing rbd image 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
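After the base image lands in the vms pool it is grown to the flavor's root disk: m1.nano's root_gb of 1 becomes the 1073741824 bytes in the resize line above. A minimal sketch of that step through the librbd Python bindings (the interface nova.storage.rbd_utils uses); connection parameters and the image name are taken from the log.

    # Sketch: grow an RBD image to the flavor root size via librbd.
    import rados
    import rbd

    size_bytes = 1 * 1024 ** 3  # flavor root_gb=1 -> 1073741824 bytes

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk") as img:
                img.resize(size_bytes)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()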
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.735 226109 DEBUG nova.objects.instance [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'migration_context' on Instance uuid 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.765 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.766 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Ensure instance console log exists: /var/lib/nova/instances/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.766 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.767 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.767 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.769 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.772 226109 WARNING nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.777 226109 DEBUG nova.virt.libvirt.host [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.778 226109 DEBUG nova.virt.libvirt.host [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.781 226109 DEBUG nova.virt.libvirt.host [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.782 226109 DEBUG nova.virt.libvirt.host [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.783 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.783 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.783 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.784 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.784 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.784 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.784 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.785 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.785 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.785 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.785 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.786 226109 DEBUG nova.virt.hardware [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:06:20 compute-1 nova_compute[226101]: 2025-12-06 07:06:20.788 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.003000079s ======
Dec 06 07:06:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:20.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Dec 06 07:06:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:06:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:21.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:06:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/182648734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.208 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.234 226109 DEBUG nova.storage.rbd_utils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.237 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2621079158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/182648734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.681 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.683 226109 DEBUG nova.objects.instance [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.708 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:06:21 compute-1 nova_compute[226101]:   <uuid>1f74f35d-e7fa-4c39-bcd6-85af6d7255bf</uuid>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   <name>instance-00000024</name>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <nova:name>tempest-ServersOnMultiNodesTest-server-694788108</nova:name>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:06:20</nova:creationTime>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <nova:user uuid="21af856fbd8843c2969956a9587ca48a">tempest-ServersOnMultiNodesTest-646911698-project-member</nova:user>
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <nova:project uuid="2c1f48b58dd04b828c83d6350cc4e13d">tempest-ServersOnMultiNodesTest-646911698</nova:project>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <system>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <entry name="serial">1f74f35d-e7fa-4c39-bcd6-85af6d7255bf</entry>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <entry name="uuid">1f74f35d-e7fa-4c39-bcd6-85af6d7255bf</entry>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     </system>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   <os>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   </os>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   <features>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   </features>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk">
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       </source>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk.config">
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       </source>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:06:21 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf/console.log" append="off"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <video>
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     </video>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:06:21 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:06:21 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:06:21 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:06:21 compute-1 nova_compute[226101]: </domain>
Dec 06 07:06:21 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.758 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.758 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.759 226109 INFO nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Using config drive
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.782 226109 DEBUG nova.storage.rbd_utils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.951 226109 INFO nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Creating config drive at /var/lib/nova/instances/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf/disk.config
Dec 06 07:06:21 compute-1 nova_compute[226101]: 2025-12-06 07:06:21.956 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgjpte4u5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.089 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgjpte4u5" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.125 226109 DEBUG nova.storage.rbd_utils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.128 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf/disk.config 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.150 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.287 226109 DEBUG oslo_concurrency.processutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf/disk.config 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.288 226109 INFO nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Deleting local config drive /var/lib/nova/instances/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf/disk.config because it was imported into RBD.
Dec 06 07:06:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 e181: 3 total, 3 up, 3 in
Dec 06 07:06:22 compute-1 systemd-machined[190302]: New machine qemu-20-instance-00000024.
Dec 06 07:06:22 compute-1 systemd[1]: Started Virtual Machine qemu-20-instance-00000024.
Dec 06 07:06:22 compute-1 ceph-mon[81689]: pgmap v1377: 305 pgs: 305 active+clean; 169 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.4 MiB/s wr, 335 op/s
Dec 06 07:06:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2621079158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:22 compute-1 ceph-mon[81689]: osdmap e181: 3 total, 3 up, 3 in
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.709 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004782.7088122, 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.709 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] VM Resumed (Lifecycle Event)
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.712 226109 DEBUG nova.compute.manager [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.712 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.716 226109 INFO nova.virt.libvirt.driver [-] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Instance spawned successfully.
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.716 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.733 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.737 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.741 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.742 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.742 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.743 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.743 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.744 226109 DEBUG nova.virt.libvirt.driver [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.768 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.769 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004782.7117033, 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.769 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] VM Started (Lifecycle Event)
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.794 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.797 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.803 226109 INFO nova.compute.manager [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Took 2.67 seconds to spawn the instance on the hypervisor.
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.803 226109 DEBUG nova.compute.manager [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.848 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.895 226109 INFO nova.compute.manager [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Took 3.76 seconds to build instance.
Dec 06 07:06:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:22.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:22 compute-1 nova_compute[226101]: 2025-12-06 07:06:22.912 226109 DEBUG oslo_concurrency.lockutils [None req-a8a22506-7728-4e37-a66a-9f107c9c7b82 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 3.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:23.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:23 compute-1 nova_compute[226101]: 2025-12-06 07:06:23.599 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:24 compute-1 ceph-mon[81689]: pgmap v1379: 305 pgs: 305 active+clean; 162 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 9.0 MiB/s rd, 5.4 MiB/s wr, 385 op/s
Dec 06 07:06:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:06:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:24.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:06:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:25.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:26 compute-1 ceph-mon[81689]: pgmap v1380: 305 pgs: 305 active+clean; 181 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.4 MiB/s wr, 422 op/s
Dec 06 07:06:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:26.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:27 compute-1 podman[240857]: 2025-12-06 07:06:27.141483451 +0000 UTC m=+0.124198598 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:06:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:27.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:27 compute-1 nova_compute[226101]: 2025-12-06 07:06:27.191 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:28 compute-1 nova_compute[226101]: 2025-12-06 07:06:28.600 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:28 compute-1 ceph-mon[81689]: pgmap v1381: 305 pgs: 305 active+clean; 181 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 9.2 MiB/s rd, 2.2 MiB/s wr, 410 op/s
Dec 06 07:06:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3325075958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4219645129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:28.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:06:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:29.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:06:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2676091815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:30.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:31 compute-1 ceph-mon[81689]: pgmap v1382: 305 pgs: 305 active+clean; 181 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 9.2 MiB/s rd, 2.2 MiB/s wr, 410 op/s
Dec 06 07:06:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:31.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:32 compute-1 nova_compute[226101]: 2025-12-06 07:06:32.194 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:32 compute-1 ceph-mon[81689]: pgmap v1383: 305 pgs: 305 active+clean; 283 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.2 MiB/s wr, 303 op/s
Dec 06 07:06:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3699045936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2700628066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1686269888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3337467087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:32.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:33.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2455388943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1436394406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:33 compute-1 nova_compute[226101]: 2025-12-06 07:06:33.633 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:33 compute-1 nova_compute[226101]: 2025-12-06 07:06:33.985 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004778.9849164, dd7ff314-b789-4550-ab9a-44dc02948350 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:33 compute-1 nova_compute[226101]: 2025-12-06 07:06:33.986 226109 INFO nova.compute.manager [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] VM Stopped (Lifecycle Event)
Dec 06 07:06:34 compute-1 nova_compute[226101]: 2025-12-06 07:06:34.005 226109 DEBUG nova.compute.manager [None req-291fa658-9d74-4fb0-a7f1-77c347631d94 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:34 compute-1 podman[240884]: 2025-12-06 07:06:34.086221266 +0000 UTC m=+0.054014500 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:06:34 compute-1 podman[240883]: 2025-12-06 07:06:34.09450298 +0000 UTC m=+0.069555672 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Dec 06 07:06:34 compute-1 ceph-mon[81689]: pgmap v1384: 305 pgs: 305 active+clean; 312 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 7.0 MiB/s wr, 262 op/s
Dec 06 07:06:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:34.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:35.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:36 compute-1 ceph-mon[81689]: pgmap v1385: 305 pgs: 305 active+clean; 362 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 9.0 MiB/s wr, 346 op/s
Dec 06 07:06:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:36.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:37.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:37 compute-1 nova_compute[226101]: 2025-12-06 07:06:37.197 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:38 compute-1 ceph-mon[81689]: pgmap v1386: 305 pgs: 305 active+clean; 406 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 12 MiB/s wr, 498 op/s
Dec 06 07:06:38 compute-1 nova_compute[226101]: 2025-12-06 07:06:38.636 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:38.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:06:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:39.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:06:39 compute-1 nova_compute[226101]: 2025-12-06 07:06:39.377 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "aed327d9-06a6-43da-9e9d-575c5dc0af5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:39 compute-1 nova_compute[226101]: 2025-12-06 07:06:39.378 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "aed327d9-06a6-43da-9e9d-575c5dc0af5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:39 compute-1 nova_compute[226101]: 2025-12-06 07:06:39.413 226109 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:06:39 compute-1 nova_compute[226101]: 2025-12-06 07:06:39.519 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:39 compute-1 nova_compute[226101]: 2025-12-06 07:06:39.519 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:39 compute-1 nova_compute[226101]: 2025-12-06 07:06:39.529 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:06:39 compute-1 nova_compute[226101]: 2025-12-06 07:06:39.530 226109 INFO nova.compute.claims [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:06:39 compute-1 nova_compute[226101]: 2025-12-06 07:06:39.695 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/790145495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.114 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
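[annotation] The subprocess above is nova's RBD image backend asking Ceph for pool and cluster usage before claiming disk. A sketch of consuming that same command's JSON output; the "stats"/"total_avail_bytes" key names follow the usual `ceph df --format=json` schema and are stated here as an assumption:

    # Run "ceph df --format=json" as logged above and read cluster totals.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]          # assumed schema key
    avail_gib = stats["total_avail_bytes"] / 1024 ** 3
    print(f"available: {avail_gib:.1f} GiB")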
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.120 226109 DEBUG nova.compute.provider_tree [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.138 226109 DEBUG nova.scheduler.client.report [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
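[annotation] The inventory dict logged above determines how much placement will schedule onto this node; the commonly stated capacity rule is (total - reserved) * allocation_ratio, consumed in min_unit/max_unit/step_size increments. Applying that rule to the logged figures, as arithmetic only:

    # Effective schedulable capacity from the inventory logged above,
    # using placement's usual rule: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1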
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.165 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.193 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "4d9584e0-fddd-493b-86a5-2cdb9087cce3" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.193 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "4d9584e0-fddd-493b-86a5-2cdb9087cce3" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.218 226109 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] No node specified, defaulting to compute-1.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.320 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "4d9584e0-fddd-493b-86a5-2cdb9087cce3" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.321 226109 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.398 226109 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.398 226109 DEBUG nova.network.neutron [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.431 226109 INFO nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.468 226109 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:06:40 compute-1 ceph-mon[81689]: pgmap v1387: 305 pgs: 305 active+clean; 406 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 12 MiB/s wr, 409 op/s
Dec 06 07:06:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/790145495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3249629797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.625 226109 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.628 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.629 226109 INFO nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Creating image(s)
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.657 226109 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.683 226109 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.710 226109 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.715 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.777 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
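[annotation] The prlimit wrapper above caps the qemu-img child at 1 GiB of address space (--as=1073741824) and 30 s of CPU, a guard against malformed images stalling or ballooning the probe. Stripped of the wrapper, the probe itself is just `qemu-img info` in JSON mode; a sketch using the documented "format" and "virtual-size" fields:

    # Plain version of the image probe logged above.
    import json
    import subprocess

    out = subprocess.check_output(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/_base/"
         "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef"])
    info = json.loads(out)
    print(info["format"], info["virtual-size"])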
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.778 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.779 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.779 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.802 226109 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:40 compute-1 nova_compute[226101]: 2025-12-06 07:06:40.805 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:40.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.042 226109 DEBUG nova.network.neutron [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.043 226109 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:06:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:41.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.475 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.547 226109 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] resizing rbd image aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
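[annotation] The two steps above stream the cached base image into the "vms" pool, then grow the resulting RBD image to the flavor's 1 GiB root disk. Per the rbd_utils.py line, nova performs the resize through the librbd Python binding; the CLI equivalent below is a sketch only, mirroring the logged import command:

    # Import the cached base image, then resize it to 1 GiB (sketch).
    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef")
    name = "aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk"
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", base, name,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    subprocess.check_call(
        ["rbd", "resize", "--pool", "vms", "--image", name,
         "--size", "1024",  # rbd sizes default to MiB; 1024 MiB = 1 GiB
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])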
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.627 226109 DEBUG nova.objects.instance [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'migration_context' on Instance uuid aed327d9-06a6-43da-9e9d-575c5dc0af5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.642 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.642 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Ensure instance console log exists: /var/lib/nova/instances/aed327d9-06a6-43da-9e9d-575c5dc0af5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.643 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.643 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.643 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.645 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.648 226109 WARNING nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.653 226109 DEBUG nova.virt.libvirt.host [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.653 226109 DEBUG nova.virt.libvirt.host [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.656 226109 DEBUG nova.virt.libvirt.host [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.656 226109 DEBUG nova.virt.libvirt.host [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.657 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.657 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.658 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.658 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.658 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.658 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.659 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.659 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.659 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.659 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.660 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.660 226109 DEBUG nova.virt.hardware [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
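[annotation] The topology lines above show the search space: with no flavor or image constraints, the limits stay at 65536 per dimension, and the builder enumerates factorizations of the vcpu count into sockets x cores x threads. For this 1-vcpu m1.nano flavor the only possibility is 1:1:1, exactly as logged. A sketch of that enumeration (illustrative, not nova's implementation):

    # Enumerate sockets*cores*threads factorizations of a vcpu count.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)]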
Dec 06 07:06:41 compute-1 nova_compute[226101]: 2025-12-06 07:06:41.662 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/770132395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:42 compute-1 nova_compute[226101]: 2025-12-06 07:06:42.099 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
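[annotation] The mon map fetched above is what supplies the three monitor <host> entries in the generated domain XML further down. A parsing sketch for the same command; the "mons"/"public_addr" layout is the usual `ceph mon dump --format=json` schema, assumed here, with the trailing "/nonce" stripped from each address:

    # Extract monitor endpoints from "ceph mon dump --format=json".
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mons = json.loads(out)["mons"]            # assumed schema key
    hosts = [m["public_addr"].split("/")[0] for m in mons]
    print(hosts)  # e.g. ['192.168.122.100:6789', ...]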
Dec 06 07:06:42 compute-1 nova_compute[226101]: 2025-12-06 07:06:42.125 226109 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:42 compute-1 nova_compute[226101]: 2025-12-06 07:06:42.129 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:42 compute-1 nova_compute[226101]: 2025-12-06 07:06:42.201 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1413870301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:42 compute-1 nova_compute[226101]: 2025-12-06 07:06:42.553 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:42 compute-1 nova_compute[226101]: 2025-12-06 07:06:42.554 226109 DEBUG nova.objects.instance [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'pci_devices' on Instance uuid aed327d9-06a6-43da-9e9d-575c5dc0af5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:42 compute-1 nova_compute[226101]: 2025-12-06 07:06:42.581 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:06:42 compute-1 nova_compute[226101]:   <uuid>aed327d9-06a6-43da-9e9d-575c5dc0af5c</uuid>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   <name>instance-00000028</name>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <nova:name>tempest-ServersOnMultiNodesTest-server-1985926905-1</nova:name>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:06:41</nova:creationTime>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <nova:user uuid="21af856fbd8843c2969956a9587ca48a">tempest-ServersOnMultiNodesTest-646911698-project-member</nova:user>
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <nova:project uuid="2c1f48b58dd04b828c83d6350cc4e13d">tempest-ServersOnMultiNodesTest-646911698</nova:project>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <system>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <entry name="serial">aed327d9-06a6-43da-9e9d-575c5dc0af5c</entry>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <entry name="uuid">aed327d9-06a6-43da-9e9d-575c5dc0af5c</entry>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     </system>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   <os>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   </os>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   <features>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   </features>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk">
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       </source>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk.config">
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       </source>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:06:42 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/aed327d9-06a6-43da-9e9d-575c5dc0af5c/console.log" append="off"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <video>
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     </video>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:06:42 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:06:42 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:06:42 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:06:42 compute-1 nova_compute[226101]: </domain>
Dec 06 07:06:42 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
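[annotation] The network-type RBD <disk> element in the XML above is the piece that ties the mon dump and the imported image together: source name "pool/image", one <host> per monitor, and a cephx secret reference. Rebuilding just that element with the standard library, values copied from this log rather than queried live:

    # Rebuild the rbd <disk> element from the domain XML above.
    import xml.etree.ElementTree as ET

    disk = ET.Element("disk", type="network", device="disk")
    ET.SubElement(disk, "driver", type="raw", cache="none")
    src = ET.SubElement(disk, "source", protocol="rbd",
                        name="vms/aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk")
    for ip in ("192.168.122.100", "192.168.122.102", "192.168.122.101"):
        ET.SubElement(src, "host", name=ip, port="6789")
    auth = ET.SubElement(disk, "auth", username="openstack")
    ET.SubElement(auth, "secret", type="ceph",
                  uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb")
    ET.SubElement(disk, "target", dev="vda", bus="virtio")
    print(ET.tostring(disk, encoding="unicode"))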
Dec 06 07:06:42 compute-1 ceph-mon[81689]: pgmap v1388: 305 pgs: 305 active+clean; 400 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 12 MiB/s wr, 505 op/s
Dec 06 07:06:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/770132395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2056994777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1413870301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:42 compute-1 nova_compute[226101]: 2025-12-06 07:06:42.660 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:42 compute-1 nova_compute[226101]: 2025-12-06 07:06:42.661 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:42 compute-1 nova_compute[226101]: 2025-12-06 07:06:42.661 226109 INFO nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Using config drive
Dec 06 07:06:42 compute-1 nova_compute[226101]: 2025-12-06 07:06:42.684 226109 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:42.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:43 compute-1 nova_compute[226101]: 2025-12-06 07:06:43.098 226109 INFO nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Creating config drive at /var/lib/nova/instances/aed327d9-06a6-43da-9e9d-575c5dc0af5c/disk.config
Dec 06 07:06:43 compute-1 nova_compute[226101]: 2025-12-06 07:06:43.102 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aed327d9-06a6-43da-9e9d-575c5dc0af5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4bj1rpel execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:43.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:43 compute-1 nova_compute[226101]: 2025-12-06 07:06:43.231 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aed327d9-06a6-43da-9e9d-575c5dc0af5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4bj1rpel" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
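[annotation] The mkisofs run above packs a staging directory of metadata into an ISO9660 volume labelled "config-2", which the guest later detects as its config drive; the oslo log renders the multi-word publisher unquoted, but it is passed as a single argument. A sketch of the same invocation in list form, which keeps that publisher string intact (the /tmp staging path is from the log and no longer exists):

    # Config-drive build as logged above, with proper argument quoting.
    import subprocess

    subprocess.check_call([
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/"
              "aed327d9-06a6-43da-9e9d-575c5dc0af5c/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmp4bj1rpel",  # staging dir path taken from the log
    ])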
Dec 06 07:06:43 compute-1 nova_compute[226101]: 2025-12-06 07:06:43.259 226109 DEBUG nova.storage.rbd_utils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:43 compute-1 nova_compute[226101]: 2025-12-06 07:06:43.263 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aed327d9-06a6-43da-9e9d-575c5dc0af5c/disk.config aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:43 compute-1 nova_compute[226101]: 2025-12-06 07:06:43.484 226109 DEBUG oslo_concurrency.processutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aed327d9-06a6-43da-9e9d-575c5dc0af5c/disk.config aed327d9-06a6-43da-9e9d-575c5dc0af5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:43 compute-1 nova_compute[226101]: 2025-12-06 07:06:43.485 226109 INFO nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Deleting local config drive /var/lib/nova/instances/aed327d9-06a6-43da-9e9d-575c5dc0af5c/disk.config because it was imported into RBD.
Dec 06 07:06:43 compute-1 systemd-machined[190302]: New machine qemu-21-instance-00000028.
Dec 06 07:06:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:43 compute-1 systemd[1]: Started Virtual Machine qemu-21-instance-00000028.
Dec 06 07:06:43 compute-1 nova_compute[226101]: 2025-12-06 07:06:43.638 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3914379697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4056199244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.067 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004804.0672588, aed327d9-06a6-43da-9e9d-575c5dc0af5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.068 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] VM Resumed (Lifecycle Event)
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.070 226109 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.070 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.073 226109 INFO nova.virt.libvirt.driver [-] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Instance spawned successfully.
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.073 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.098 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.100 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.100 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.101 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.101 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.102 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.102 226109 DEBUG nova.virt.libvirt.driver [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.106 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.145 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.145 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004804.0704722, aed327d9-06a6-43da-9e9d-575c5dc0af5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.146 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] VM Started (Lifecycle Event)
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.177 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.181 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.241 226109 INFO nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Took 3.61 seconds to spawn the instance on the hypervisor.
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.241 226109 DEBUG nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.242 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.314 226109 INFO nova.compute.manager [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Took 4.83 seconds to build instance.
Dec 06 07:06:44 compute-1 nova_compute[226101]: 2025-12-06 07:06:44.335 226109 DEBUG oslo_concurrency.lockutils [None req-86a2d933-a6eb-4a8f-aa86-f181b7d1dc29 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "aed327d9-06a6-43da-9e9d-575c5dc0af5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:44 compute-1 ceph-mon[81689]: pgmap v1389: 305 pgs: 305 active+clean; 388 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.9 MiB/s wr, 491 op/s
Dec 06 07:06:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:44.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:45.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:45 compute-1 nova_compute[226101]: 2025-12-06 07:06:45.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:46 compute-1 nova_compute[226101]: 2025-12-06 07:06:46.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:46 compute-1 ceph-mon[81689]: pgmap v1390: 305 pgs: 305 active+clean; 423 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 9.6 MiB/s wr, 515 op/s
Dec 06 07:06:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:46.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:47.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:47 compute-1 nova_compute[226101]: 2025-12-06 07:06:47.204 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:47 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 06 07:06:47 compute-1 nova_compute[226101]: 2025-12-06 07:06:47.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:48 compute-1 nova_compute[226101]: 2025-12-06 07:06:48.640 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:48 compute-1 ceph-mon[81689]: pgmap v1391: 305 pgs: 305 active+clean; 444 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 7.9 MiB/s wr, 488 op/s
Dec 06 07:06:48 compute-1 nova_compute[226101]: 2025-12-06 07:06:48.940 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:48 compute-1 nova_compute[226101]: 2025-12-06 07:06:48.941 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:06:48 compute-1 nova_compute[226101]: 2025-12-06 07:06:48.941 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:06:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:48.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:49.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:06:49.268 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:06:49 compute-1 nova_compute[226101]: 2025-12-06 07:06:49.268 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:06:49.269 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:06:49 compute-1 nova_compute[226101]: 2025-12-06 07:06:49.745 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:49 compute-1 nova_compute[226101]: 2025-12-06 07:06:49.745 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:49 compute-1 nova_compute[226101]: 2025-12-06 07:06:49.745 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:06:49 compute-1 nova_compute[226101]: 2025-12-06 07:06:49.746 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1309178670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:50 compute-1 nova_compute[226101]: 2025-12-06 07:06:50.230 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "aed327d9-06a6-43da-9e9d-575c5dc0af5c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:50 compute-1 nova_compute[226101]: 2025-12-06 07:06:50.231 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "aed327d9-06a6-43da-9e9d-575c5dc0af5c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:50 compute-1 nova_compute[226101]: 2025-12-06 07:06:50.231 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "aed327d9-06a6-43da-9e9d-575c5dc0af5c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:50 compute-1 nova_compute[226101]: 2025-12-06 07:06:50.231 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "aed327d9-06a6-43da-9e9d-575c5dc0af5c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:50 compute-1 nova_compute[226101]: 2025-12-06 07:06:50.231 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "aed327d9-06a6-43da-9e9d-575c5dc0af5c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:50 compute-1 nova_compute[226101]: 2025-12-06 07:06:50.232 226109 INFO nova.compute.manager [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Terminating instance
Dec 06 07:06:50 compute-1 nova_compute[226101]: 2025-12-06 07:06:50.233 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "refresh_cache-aed327d9-06a6-43da-9e9d-575c5dc0af5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:50 compute-1 nova_compute[226101]: 2025-12-06 07:06:50.233 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquired lock "refresh_cache-aed327d9-06a6-43da-9e9d-575c5dc0af5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:50 compute-1 nova_compute[226101]: 2025-12-06 07:06:50.233 226109 DEBUG nova.network.neutron [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:06:50 compute-1 nova_compute[226101]: 2025-12-06 07:06:50.568 226109 DEBUG nova.network.neutron [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:50 compute-1 nova_compute[226101]: 2025-12-06 07:06:50.602 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 07:06:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:50.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.024 226109 DEBUG nova.network.neutron [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.087 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Releasing lock "refresh_cache-aed327d9-06a6-43da-9e9d-575c5dc0af5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.087 226109 DEBUG nova.compute.manager [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:06:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:51.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:51 compute-1 ceph-mon[81689]: pgmap v1392: 305 pgs: 305 active+clean; 444 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.4 MiB/s wr, 257 op/s
Dec 06 07:06:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1891790592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.212 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:51 compute-1 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000028.scope: Deactivated successfully.
Dec 06 07:06:51 compute-1 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000028.scope: Consumed 7.741s CPU time.
Dec 06 07:06:51 compute-1 systemd-machined[190302]: Machine qemu-21-instance-00000028 terminated.
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.240 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.240 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.240 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.241 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.241 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.241 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.241 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.268 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.269 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.269 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.269 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.269 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.306 226109 INFO nova.virt.libvirt.driver [-] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Instance destroyed successfully.
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.307 226109 DEBUG nova.objects.instance [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'resources' on Instance uuid aed327d9-06a6-43da-9e9d-575c5dc0af5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3186190025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.697 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.794 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.794 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.798 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.798 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.928 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.929 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4515MB free_disk=20.73162841796875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.929 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.930 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.951 226109 INFO nova.virt.libvirt.driver [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Deleting instance files /var/lib/nova/instances/aed327d9-06a6-43da-9e9d-575c5dc0af5c_del
Dec 06 07:06:51 compute-1 nova_compute[226101]: 2025-12-06 07:06:51.952 226109 INFO nova.virt.libvirt.driver [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Deletion of /var/lib/nova/instances/aed327d9-06a6-43da-9e9d-575c5dc0af5c_del complete
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.077 226109 INFO nova.compute.manager [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Took 0.99 seconds to destroy the instance on the hypervisor.
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.078 226109 DEBUG oslo.service.loopingcall [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.078 226109 DEBUG nova.compute.manager [-] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.079 226109 DEBUG nova.network.neutron [-] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.085 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.085 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance aed327d9-06a6-43da-9e9d-575c5dc0af5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.086 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.086 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.148 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.208 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:52 compute-1 ceph-mon[81689]: pgmap v1393: 305 pgs: 305 active+clean; 501 MiB data, 727 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.9 MiB/s wr, 481 op/s
Dec 06 07:06:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3186190025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.262 226109 DEBUG nova.network.neutron [-] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.287 226109 DEBUG nova.network.neutron [-] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.304 226109 INFO nova.compute.manager [-] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Took 0.23 seconds to deallocate network for instance.
Dec 06 07:06:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:52 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3908023371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.575 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:52 compute-1 nova_compute[226101]: 2025-12-06 07:06:52.580 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:52.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:53.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3908023371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:53 compute-1 nova_compute[226101]: 2025-12-06 07:06:53.689 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:54 compute-1 nova_compute[226101]: 2025-12-06 07:06:54.131 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:54 compute-1 ceph-mon[81689]: pgmap v1394: 305 pgs: 305 active+clean; 528 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 9.4 MiB/s wr, 426 op/s
Dec 06 07:06:54 compute-1 nova_compute[226101]: 2025-12-06 07:06:54.269 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:06:54 compute-1 nova_compute[226101]: 2025-12-06 07:06:54.331 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:06:54 compute-1 nova_compute[226101]: 2025-12-06 07:06:54.332 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:54 compute-1 nova_compute[226101]: 2025-12-06 07:06:54.332 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:54 compute-1 nova_compute[226101]: 2025-12-06 07:06:54.428 226109 DEBUG oslo_concurrency.processutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4094548463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:54 compute-1 nova_compute[226101]: 2025-12-06 07:06:54.850 226109 DEBUG oslo_concurrency.processutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:54 compute-1 nova_compute[226101]: 2025-12-06 07:06:54.860 226109 DEBUG nova.compute.provider_tree [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:54 compute-1 nova_compute[226101]: 2025-12-06 07:06:54.891 226109 DEBUG nova.scheduler.client.report [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:06:54 compute-1 nova_compute[226101]: 2025-12-06 07:06:54.927 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:06:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:54.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:06:54 compute-1 nova_compute[226101]: 2025-12-06 07:06:54.968 226109 INFO nova.scheduler.client.report [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Deleted allocations for instance aed327d9-06a6-43da-9e9d-575c5dc0af5c
Dec 06 07:06:55 compute-1 nova_compute[226101]: 2025-12-06 07:06:55.065 226109 DEBUG oslo_concurrency.lockutils [None req-6072bed8-ea4a-46cc-a7a3-ff3a5513ac79 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "aed327d9-06a6-43da-9e9d-575c5dc0af5c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:55.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:06:55.271 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:06:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4114717483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4094548463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:55 compute-1 nova_compute[226101]: 2025-12-06 07:06:55.332 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:55 compute-1 nova_compute[226101]: 2025-12-06 07:06:55.333 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:56 compute-1 ceph-mon[81689]: pgmap v1395: 305 pgs: 305 active+clean; 478 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 8.9 MiB/s wr, 411 op/s
Dec 06 07:06:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/199118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1662777040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:56.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:57.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:57 compute-1 nova_compute[226101]: 2025-12-06 07:06:57.209 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:58 compute-1 podman[241375]: 2025-12-06 07:06:58.091895017 +0000 UTC m=+0.077853124 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 07:06:58 compute-1 ceph-mon[81689]: pgmap v1396: 305 pgs: 305 active+clean; 438 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.6 MiB/s wr, 387 op/s
Dec 06 07:06:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:58 compute-1 nova_compute[226101]: 2025-12-06 07:06:58.692 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:58.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:06:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:59.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:00 compute-1 ceph-mon[81689]: pgmap v1397: 305 pgs: 305 active+clean; 438 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.6 MiB/s wr, 304 op/s
Dec 06 07:07:00 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 06 07:07:00 compute-1 nova_compute[226101]: 2025-12-06 07:07:00.646 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:00 compute-1 nova_compute[226101]: 2025-12-06 07:07:00.647 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:00 compute-1 nova_compute[226101]: 2025-12-06 07:07:00.662 226109 DEBUG nova.compute.manager [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:07:00 compute-1 nova_compute[226101]: 2025-12-06 07:07:00.729 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:00 compute-1 nova_compute[226101]: 2025-12-06 07:07:00.729 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:00 compute-1 nova_compute[226101]: 2025-12-06 07:07:00.737 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:07:00 compute-1 nova_compute[226101]: 2025-12-06 07:07:00.738 226109 INFO nova.compute.claims [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:07:00 compute-1 nova_compute[226101]: 2025-12-06 07:07:00.891 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:00.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:00 compute-1 sudo[241404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:00 compute-1 sudo[241404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:00 compute-1 sudo[241404]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:01 compute-1 sudo[241429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:07:01 compute-1 sudo[241429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:01 compute-1 sudo[241429]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:01 compute-1 sudo[241473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:01 compute-1 sudo[241473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:01 compute-1 sudo[241473]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:01 compute-1 sudo[241498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:07:01 compute-1 sudo[241498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:01.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:07:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2620495894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.324 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.330 226109 DEBUG nova.compute.provider_tree [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:07:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2620495894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.401 226109 DEBUG nova.scheduler.client.report [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.431 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.432 226109 DEBUG nova.compute.manager [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.499 226109 DEBUG nova.compute.manager [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.500 226109 DEBUG nova.network.neutron [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.519 226109 INFO nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.538 226109 DEBUG nova.compute.manager [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:07:01 compute-1 sudo[241498]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:01.622 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:01.622 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:01.623 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.632 226109 DEBUG nova.compute.manager [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.633 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.633 226109 INFO nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Creating image(s)
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.658 226109 DEBUG nova.storage.rbd_utils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.681 226109 DEBUG nova.storage.rbd_utils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.708 226109 DEBUG nova.storage.rbd_utils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.712 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.747 226109 DEBUG nova.policy [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7c2f91caf1444c68dcf6bd66966d67e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c2236fb6443441618c69ad660b0932dd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.778 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.778 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.779 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.779 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.805 226109 DEBUG nova.storage.rbd_utils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:01 compute-1 nova_compute[226101]: 2025-12-06 07:07:01.808 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:02 compute-1 nova_compute[226101]: 2025-12-06 07:07:02.054 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:02 compute-1 nova_compute[226101]: 2025-12-06 07:07:02.125 226109 DEBUG nova.storage.rbd_utils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] resizing rbd image e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:07:02 compute-1 nova_compute[226101]: 2025-12-06 07:07:02.227 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:02 compute-1 nova_compute[226101]: 2025-12-06 07:07:02.232 226109 DEBUG nova.objects.instance [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'migration_context' on Instance uuid e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:07:02 compute-1 ceph-mon[81689]: pgmap v1398: 305 pgs: 305 active+clean; 368 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.6 MiB/s wr, 326 op/s
Dec 06 07:07:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:02.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:02 compute-1 nova_compute[226101]: 2025-12-06 07:07:02.998 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:07:02 compute-1 nova_compute[226101]: 2025-12-06 07:07:02.999 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Ensure instance console log exists: /var/lib/nova/instances/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:07:03 compute-1 nova_compute[226101]: 2025-12-06 07:07:02.999 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:03 compute-1 nova_compute[226101]: 2025-12-06 07:07:03.000 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:03 compute-1 nova_compute[226101]: 2025-12-06 07:07:03.000 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:03.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/686088067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:03 compute-1 nova_compute[226101]: 2025-12-06 07:07:03.694 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:04 compute-1 ceph-mon[81689]: pgmap v1399: 305 pgs: 305 active+clean; 314 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 1.1 MiB/s wr, 125 op/s
Dec 06 07:07:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:07:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:07:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:07:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:07:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:07:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2557369398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1887594548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:04 compute-1 nova_compute[226101]: 2025-12-06 07:07:04.720 226109 DEBUG nova.network.neutron [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Successfully created port: bf5bb400-e5e8-47f6-bb49-2b370d100cd3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:07:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:04.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:05 compute-1 podman[241723]: 2025-12-06 07:07:05.070703641 +0000 UTC m=+0.051246096 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:07:05 compute-1 podman[241722]: 2025-12-06 07:07:05.076254571 +0000 UTC m=+0.058635346 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:07:05 compute-1 nova_compute[226101]: 2025-12-06 07:07:05.079 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:05 compute-1 nova_compute[226101]: 2025-12-06 07:07:05.080 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:05 compute-1 nova_compute[226101]: 2025-12-06 07:07:05.080 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "1f74f35d-e7fa-4c39-bcd6-85af6d7255bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:05 compute-1 nova_compute[226101]: 2025-12-06 07:07:05.081 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "1f74f35d-e7fa-4c39-bcd6-85af6d7255bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:05 compute-1 nova_compute[226101]: 2025-12-06 07:07:05.081 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "1f74f35d-e7fa-4c39-bcd6-85af6d7255bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:05 compute-1 nova_compute[226101]: 2025-12-06 07:07:05.082 226109 INFO nova.compute.manager [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Terminating instance
Dec 06 07:07:05 compute-1 nova_compute[226101]: 2025-12-06 07:07:05.084 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "refresh_cache-1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:07:05 compute-1 nova_compute[226101]: 2025-12-06 07:07:05.084 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquired lock "refresh_cache-1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:07:05 compute-1 nova_compute[226101]: 2025-12-06 07:07:05.084 226109 DEBUG nova.network.neutron [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:07:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:05.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:05 compute-1 nova_compute[226101]: 2025-12-06 07:07:05.301 226109 DEBUG nova.network.neutron [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.304 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004811.303578, aed327d9-06a6-43da-9e9d-575c5dc0af5c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.305 226109 INFO nova.compute.manager [-] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] VM Stopped (Lifecycle Event)
Dec 06 07:07:06 compute-1 ceph-mon[81689]: pgmap v1400: 305 pgs: 305 active+clean; 222 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 560 KiB/s wr, 130 op/s
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.578 226109 DEBUG nova.compute.manager [None req-944bc9d6-2c8a-4ec3-80c7-f6e33b507362 - - - - - -] [instance: aed327d9-06a6-43da-9e9d-575c5dc0af5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.688 226109 DEBUG nova.network.neutron [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.708 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Releasing lock "refresh_cache-1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.710 226109 DEBUG nova.compute.manager [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.786 226109 DEBUG nova.network.neutron [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Successfully updated port: bf5bb400-e5e8-47f6-bb49-2b370d100cd3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.812 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.813 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquired lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.814 226109 DEBUG nova.network.neutron [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.894 226109 DEBUG nova.compute.manager [req-cf2df072-a6db-43dc-8bfd-ce158e08d543 req-dfa6169d-9ec4-4902-a2a9-dc824bf39646 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Received event network-changed-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.894 226109 DEBUG nova.compute.manager [req-cf2df072-a6db-43dc-8bfd-ce158e08d543 req-dfa6169d-9ec4-4902-a2a9-dc824bf39646 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Refreshing instance network info cache due to event network-changed-bf5bb400-e5e8-47f6-bb49-2b370d100cd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.895 226109 DEBUG oslo_concurrency.lockutils [req-cf2df072-a6db-43dc-8bfd-ce158e08d543 req-dfa6169d-9ec4-4902-a2a9-dc824bf39646 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:07:06 compute-1 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000024.scope: Deactivated successfully.
Dec 06 07:07:06 compute-1 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000024.scope: Consumed 14.126s CPU time.
Dec 06 07:07:06 compute-1 systemd-machined[190302]: Machine qemu-20-instance-00000024 terminated.
Dec 06 07:07:06 compute-1 nova_compute[226101]: 2025-12-06 07:07:06.964 226109 DEBUG nova.network.neutron [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:07:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:06.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.137 226109 INFO nova.virt.libvirt.driver [-] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Instance destroyed successfully.
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.138 226109 DEBUG nova.objects.instance [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'resources' on Instance uuid 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:07:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:07.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.230 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.538 226109 INFO nova.virt.libvirt.driver [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Deleting instance files /var/lib/nova/instances/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_del
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.539 226109 INFO nova.virt.libvirt.driver [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Deletion of /var/lib/nova/instances/1f74f35d-e7fa-4c39-bcd6-85af6d7255bf_del complete
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.585 226109 INFO nova.compute.manager [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Took 0.88 seconds to destroy the instance on the hypervisor.
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.586 226109 DEBUG oslo.service.loopingcall [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.586 226109 DEBUG nova.compute.manager [-] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.587 226109 DEBUG nova.network.neutron [-] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.716 226109 DEBUG nova.network.neutron [-] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.736 226109 DEBUG nova.network.neutron [-] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.754 226109 INFO nova.compute.manager [-] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Took 0.17 seconds to deallocate network for instance.
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.760 226109 DEBUG nova.network.neutron [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Updating instance_info_cache with network_info: [{"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.807 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Releasing lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.808 226109 DEBUG nova.compute.manager [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Instance network_info: |[{"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.809 226109 DEBUG oslo_concurrency.lockutils [req-cf2df072-a6db-43dc-8bfd-ce158e08d543 req-dfa6169d-9ec4-4902-a2a9-dc824bf39646 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.809 226109 DEBUG nova.network.neutron [req-cf2df072-a6db-43dc-8bfd-ce158e08d543 req-dfa6169d-9ec4-4902-a2a9-dc824bf39646 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Refreshing network info cache for port bf5bb400-e5e8-47f6-bb49-2b370d100cd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.815 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Start _get_guest_xml network_info=[{"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.822 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.823 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.825 226109 WARNING nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.838 226109 DEBUG nova.virt.libvirt.host [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.840 226109 DEBUG nova.virt.libvirt.host [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.844 226109 DEBUG nova.virt.libvirt.host [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.844 226109 DEBUG nova.virt.libvirt.host [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.846 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.846 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.846 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.847 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.847 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.847 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.847 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.848 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.848 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.848 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.848 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.849 226109 DEBUG nova.virt.hardware [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.851 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:07 compute-1 nova_compute[226101]: 2025-12-06 07:07:07.911 226109 DEBUG oslo_concurrency.processutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:07:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2724217004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.287 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.312 226109 DEBUG nova.storage.rbd_utils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.316 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:07:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1639568485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.350 226109 DEBUG oslo_concurrency.processutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.355 226109 DEBUG nova.compute.provider_tree [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.372 226109 DEBUG nova.scheduler.client.report [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.409 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:08 compute-1 ceph-mon[81689]: pgmap v1401: 305 pgs: 305 active+clean; 246 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 134 op/s
Dec 06 07:07:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2724217004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1639568485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.435 226109 INFO nova.scheduler.client.report [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Deleted allocations for instance 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf
Dec 06 07:07:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.565 226109 DEBUG oslo_concurrency.lockutils [None req-6d97add7-2bcc-46be-a5c4-7d7ec3eaea98 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "1f74f35d-e7fa-4c39-bcd6-85af6d7255bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.696 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:07:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/255844533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.727 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.729 226109 DEBUG nova.virt.libvirt.vif [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:06:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1113807590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1113807590',id=42,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxNM3sYiKzp1+NUb/2De95mJs+SH1m1mxJqrRUBmO5hfmaPmDEbehrvD5V1R8B784do2XvfbohRCGUURA5SWZg3+xgRCk33hYLHpa3KVfm+1jfWf3JbwWLhSjPluoLz2w==',key_name='tempest-keypair-1560414487',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c2236fb6443441618c69ad660b0932dd',ramdisk_id='',reservation_id='r-0udoz3m9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:07:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7c2f91caf1444c68dcf6bd66966d67e',uuid=e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.729 226109 DEBUG nova.network.os_vif_util [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converting VIF {"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.730 226109 DEBUG nova.network.os_vif_util [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:ca:f4,bridge_name='br-int',has_traffic_filtering=True,id=bf5bb400-e5e8-47f6-bb49-2b370d100cd3,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5bb400-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
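The converter's target is an os-vif versioned object. The sketch below builds an equivalent VIFOpenVSwitch by hand, with an abbreviated field set rather than the exact population nova performs:

    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # registers the loaded plugins (ovs, noop, ...)
    v = vif_obj.VIFOpenVSwitch(
        id='bf5bb400-e5e8-47f6-bb49-2b370d100cd3',
        address='fa:16:3e:8c:ca:f4',
        bridge_name='br-int',
        vif_name='tapbf5bb400-e5',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        plugin='ovs')
    # nova passes an object like this, plus an InstanceInfo, to os_vif.plug().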
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.731 226109 DEBUG nova.objects.instance [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'pci_devices' on Instance uuid e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.764 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:07:08 compute-1 nova_compute[226101]:   <uuid>e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e</uuid>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   <name>instance-0000002a</name>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-1113807590</nova:name>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:07:07</nova:creationTime>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <nova:user uuid="b7c2f91caf1444c68dcf6bd66966d67e">tempest-UpdateMultiattachVolumeNegativeTest-1086119788-project-member</nova:user>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <nova:project uuid="c2236fb6443441618c69ad660b0932dd">tempest-UpdateMultiattachVolumeNegativeTest-1086119788</nova:project>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <nova:port uuid="bf5bb400-e5e8-47f6-bb49-2b370d100cd3">
Dec 06 07:07:08 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <system>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <entry name="serial">e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e</entry>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <entry name="uuid">e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e</entry>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     </system>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   <os>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   </os>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   <features>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   </features>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk">
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       </source>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk.config">
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       </source>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:07:08 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:8c:ca:f4"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <target dev="tapbf5bb400-e5"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e/console.log" append="off"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <video>
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     </video>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:07:08 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:07:08 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:07:08 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:07:08 compute-1 nova_compute[226101]: </domain>
Dec 06 07:07:08 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
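The generated XML above is what the driver ultimately submits to libvirt. A hand-run equivalent with the libvirt Python bindings, assuming the XML has been saved to a local file (the define-then-create split mirrors how nova launches persistent guests):

    import libvirt

    conn = libvirt.open('qemu:///system')
    xml = open('instance-0000002a.xml').read()  # hypothetical local copy
    dom = conn.defineXML(xml)   # register the persistent domain
    dom.create()                # boot it
    print(dom.name(), dom.isActive())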
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.766 226109 DEBUG nova.compute.manager [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Preparing to wait for external event network-vif-plugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.767 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.767 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.768 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.768 226109 DEBUG nova.virt.libvirt.vif [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:06:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1113807590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1113807590',id=42,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxNM3sYiKzp1+NUb/2De95mJs+SH1m1mxJqrRUBmO5hfmaPmDEbehrvD5V1R8B784do2XvfbohRCGUURA5SWZg3+xgRCk33hYLHpa3KVfm+1jfWf3JbwWLhSjPluoLz2w==',key_name='tempest-keypair-1560414487',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c2236fb6443441618c69ad660b0932dd',ramdisk_id='',reservation_id='r-0udoz3m9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:07:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7c2f91caf1444c68dcf6bd66966d67e',uuid=e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.769 226109 DEBUG nova.network.os_vif_util [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converting VIF {"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.769 226109 DEBUG nova.network.os_vif_util [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:ca:f4,bridge_name='br-int',has_traffic_filtering=True,id=bf5bb400-e5e8-47f6-bb49-2b370d100cd3,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5bb400-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.770 226109 DEBUG os_vif [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:ca:f4,bridge_name='br-int',has_traffic_filtering=True,id=bf5bb400-e5e8-47f6-bb49-2b370d100cd3,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5bb400-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.771 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.771 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.772 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.775 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.776 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5bb400-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.776 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf5bb400-e5, col_values=(('external_ids', {'iface-id': 'bf5bb400-e5e8-47f6-bb49-2b370d100cd3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:ca:f4', 'vm-uuid': 'e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
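The two ovsdbapp commands in this transaction are equivalent to a single ovs-vsctl invocation; a sketch reusing the values from the log, assuming ovs-vsctl can reach the local OVSDB socket:

    import subprocess

    subprocess.run([
        'ovs-vsctl',
        '--', '--may-exist', 'add-port', 'br-int', 'tapbf5bb400-e5',
        '--', 'set', 'Interface', 'tapbf5bb400-e5',
        'external_ids:iface-id=bf5bb400-e5e8-47f6-bb49-2b370d100cd3',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:8c:ca:f4',
        'external_ids:vm-uuid=e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e',
    ], check=True)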
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.778 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:08 compute-1 NetworkManager[49031]: <info>  [1765004828.7794] manager: (tapbf5bb400-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.780 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.784 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.785 226109 INFO os_vif [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:ca:f4,bridge_name='br-int',has_traffic_filtering=True,id=bf5bb400-e5e8-47f6-bb49-2b370d100cd3,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5bb400-e5')
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.840 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.840 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.841 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No VIF found with MAC fa:16:3e:8c:ca:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.841 226109 INFO nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Using config drive
Dec 06 07:07:08 compute-1 nova_compute[226101]: 2025-12-06 07:07:08.864 226109 DEBUG nova.storage.rbd_utils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:08.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:09.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
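The two anonymous HEAD / requests look like frontend health probes. A hedged reproduction; the beast listener's address and port do not appear in this excerpt, so RGW_ENDPOINT below is a placeholder to substitute:

    import subprocess

    # Expect an HTTP 200 back, matching the statuses logged above.
    subprocess.run(['curl', '-sI', 'http://RGW_ENDPOINT/'], check=True)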
Dec 06 07:07:09 compute-1 sudo[241888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:09 compute-1 sudo[241888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:09 compute-1 sudo[241888]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:09 compute-1 sudo[241913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:07:09 compute-1 sudo[241913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:09 compute-1 sudo[241913]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1832287170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:07:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1832287170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:07:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/255844533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:09 compute-1 nova_compute[226101]: 2025-12-06 07:07:09.960 226109 INFO nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Creating config drive at /var/lib/nova/instances/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e/disk.config
Dec 06 07:07:09 compute-1 nova_compute[226101]: 2025-12-06 07:07:09.965 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwif6p37p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:09 compute-1 nova_compute[226101]: 2025-12-06 07:07:09.991 226109 DEBUG nova.network.neutron [req-cf2df072-a6db-43dc-8bfd-ce158e08d543 req-dfa6169d-9ec4-4902-a2a9-dc824bf39646 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Updated VIF entry in instance network info cache for port bf5bb400-e5e8-47f6-bb49-2b370d100cd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:07:09 compute-1 nova_compute[226101]: 2025-12-06 07:07:09.993 226109 DEBUG nova.network.neutron [req-cf2df072-a6db-43dc-8bfd-ce158e08d543 req-dfa6169d-9ec4-4902-a2a9-dc824bf39646 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Updating instance_info_cache with network_info: [{"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.014 226109 DEBUG oslo_concurrency.lockutils [req-cf2df072-a6db-43dc-8bfd-ce158e08d543 req-dfa6169d-9ec4-4902-a2a9-dc824bf39646 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.100 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwif6p37p" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
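In the logged command the publisher string is printed unquoted, but it is passed as a single argument. An argv-level sketch of the same ISO9660 config-drive build, with paths copied from the log and the staging directory assumed to hold the metadata tree nova rendered:

    import subprocess

    subprocess.run([
        '/usr/bin/mkisofs', '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpwif6p37p',
    ], check=True)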
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.131 226109 DEBUG nova.storage.rbd_utils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] rbd image e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.134 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e/disk.config e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.293 226109 DEBUG oslo_concurrency.processutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e/disk.config e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.293 226109 INFO nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Deleting local config drive /var/lib/nova/instances/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e/disk.config because it was imported into RBD.
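After the import the local file is discarded and RBD holds the only copy. A quick existence check with the same client identity, which should print the image header for the .config image in the vms pool:

    import subprocess

    subprocess.run([
        'rbd', 'info', '--pool', 'vms',
        'e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_disk.config',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ], check=True)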
Dec 06 07:07:10 compute-1 kernel: tapbf5bb400-e5: entered promiscuous mode
Dec 06 07:07:10 compute-1 NetworkManager[49031]: <info>  [1765004830.3414] manager: (tapbf5bb400-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.385 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:10 compute-1 ovn_controller[130279]: 2025-12-06T07:07:10Z|00143|binding|INFO|Claiming lport bf5bb400-e5e8-47f6-bb49-2b370d100cd3 for this chassis.
Dec 06 07:07:10 compute-1 ovn_controller[130279]: 2025-12-06T07:07:10Z|00144|binding|INFO|bf5bb400-e5e8-47f6-bb49-2b370d100cd3: Claiming fa:16:3e:8c:ca:f4 10.100.0.12
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.393 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:10 compute-1 systemd-machined[190302]: New machine qemu-22-instance-0000002a.
Dec 06 07:07:10 compute-1 systemd[1]: Started Virtual Machine qemu-22-instance-0000002a.
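libvirt registers each running guest with systemd-machined under qemu-<domain id>-<domain name>, so the machine can be inspected directly (assumes machinectl on the same host):

    import subprocess

    subprocess.run(['machinectl', 'status', 'qemu-22-instance-0000002a'],
                   check=True)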
Dec 06 07:07:10 compute-1 ceph-mon[81689]: pgmap v1402: 305 pgs: 305 active+clean; 246 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 74 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Dec 06 07:07:10 compute-1 systemd-udevd[241991]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:07:10 compute-1 NetworkManager[49031]: <info>  [1765004830.4424] device (tapbf5bb400-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:07:10 compute-1 NetworkManager[49031]: <info>  [1765004830.4430] device (tapbf5bb400-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:07:10 compute-1 ovn_controller[130279]: 2025-12-06T07:07:10Z|00145|binding|INFO|Setting lport bf5bb400-e5e8-47f6-bb49-2b370d100cd3 ovn-installed in OVS
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.466 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:10 compute-1 ovn_controller[130279]: 2025-12-06T07:07:10Z|00146|binding|INFO|Setting lport bf5bb400-e5e8-47f6-bb49-2b370d100cd3 up in Southbound
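The claim/up sequence should now be reflected in the OVN southbound database; a sketch querying the Port_Binding row for this logical port, assuming ovn-sbctl can reach the SB DB from this host:

    import subprocess

    subprocess.run([
        'ovn-sbctl', 'find', 'Port_Binding',
        'logical_port=bf5bb400-e5e8-47f6-bb49-2b370d100cd3',
    ], check=True)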
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.586 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:ca:f4 10.100.0.12'], port_security=['fa:16:3e:8c:ca:f4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c2236fb6443441618c69ad660b0932dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1faad6c4-429b-4e24-95b0-2a5a82e4cdc8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed024f0c-371e-4b3f-951f-85c5c8e0d63d, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=bf5bb400-e5e8-47f6-bb49-2b370d100cd3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.587 139580 INFO neutron.agent.ovn.metadata.agent [-] Port bf5bb400-e5e8-47f6-bb49-2b370d100cd3 in datapath 81801c54-c543-4e0f-94c1-cdf24ddb6bce bound to our chassis
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.589 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81801c54-c543-4e0f-94c1-cdf24ddb6bce
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.600 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5720aaaf-2510-43c1-ac57-80d2e0697943]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.602 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81801c54-c1 in ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.605 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81801c54-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.605 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ff9eafaa-db63-424a-91ed-a5f8b02e3168]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.606 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[903c7a6b-4be9-4477-a735-62a7919f0717]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.621 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[e8bfb477-a35f-4dd1-aad4-f63bdb17ba17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.634 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[287a3261-a07d-42db-acc0-1fe9e3026a6d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.659 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ec559dc0-e499-457b-8a03-cf0e50df1d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.665 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e66dc063-4d90-480b-922f-5b339e873851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 NetworkManager[49031]: <info>  [1765004830.6660] manager: (tap81801c54-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.696 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbdb585-ff00-4359-b533-835f99b41492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.700 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[29545050-856f-4dfe-8262-f27b9b65355a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 NetworkManager[49031]: <info>  [1765004830.7246] device (tap81801c54-c0): carrier: link connected
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.729 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f7c794-cbeb-4093-98f1-ec55ecd9ab59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.745 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c81d92-ffdb-4878-9944-0f8168f1cb36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81801c54-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:5f:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511022, 'reachable_time': 32979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242058, 'error': None, 'target': 'ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.760 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0486b7-9165-4972-a4be-8bc75ad388f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:5f24'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511022, 'tstamp': 511022}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242061, 'error': None, 'target': 'ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.776 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[70f32a2f-f8e1-4ce0-97e3-30b5f35aa24e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81801c54-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:5f:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511022, 'reachable_time': 32979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242065, 'error': None, 'target': 'ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.803 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d8cd39cf-d171-48fa-be62-05ea755a96a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.848 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004830.8478074, e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.848 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] VM Started (Lifecycle Event)
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.856 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[941dc231-c52a-447d-9d22-6400f9091f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.857 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81801c54-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.857 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.857 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81801c54-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.859 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:10 compute-1 NetworkManager[49031]: <info>  [1765004830.8596] manager: (tap81801c54-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec 06 07:07:10 compute-1 kernel: tap81801c54-c0: entered promiscuous mode
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.860 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.864 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81801c54-c0, col_values=(('external_ids', {'iface-id': '6cb7c4fe-88f9-400c-a7b5-d8d7714c7b41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.865 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:10 compute-1 ovn_controller[130279]: 2025-12-06T07:07:10Z|00147|binding|INFO|Releasing lport 6cb7c4fe-88f9-400c-a7b5-d8d7714c7b41 from this chassis (sb_readonly=0)
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.866 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81801c54-c543-4e0f-94c1-cdf24ddb6bce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81801c54-c543-4e0f-94c1-cdf24ddb6bce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.866 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6bafab9d-d41f-4eb7-89b1-0eb32c304df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.867 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-81801c54-c543-4e0f-94c1-cdf24ddb6bce
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/81801c54-c543-4e0f-94c1-cdf24ddb6bce.pid.haproxy
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 81801c54-c543-4e0f-94c1-cdf24ddb6bce
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:07:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:10.868 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'env', 'PROCESS_TAG=haproxy-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81801c54-c543-4e0f-94c1-cdf24ddb6bce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:07:10 compute-1 nova_compute[226101]: 2025-12-06 07:07:10.879 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:11.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.511 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.679 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004830.8491812, e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.679 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] VM Paused (Lifecycle Event)
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.719 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.723 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.748 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:07:11 compute-1 podman[242099]: 2025-12-06 07:07:11.770641049 +0000 UTC m=+0.053981160 container create 3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:07:11 compute-1 systemd[1]: Started libpod-conmon-3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a.scope.
Dec 06 07:07:11 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:07:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364a49b1693336ba2a3227da2c0cf613ab90c6ac2678b9a21e84a05627f3cd3c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:11 compute-1 podman[242099]: 2025-12-06 07:07:11.73996501 +0000 UTC m=+0.023305161 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:07:11 compute-1 podman[242099]: 2025-12-06 07:07:11.844935007 +0000 UTC m=+0.128275118 container init 3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:07:11 compute-1 podman[242099]: 2025-12-06 07:07:11.850063706 +0000 UTC m=+0.133403827 container start 3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 07:07:11 compute-1 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[242114]: [NOTICE]   (242118) : New worker (242120) forked
Dec 06 07:07:11 compute-1 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[242114]: [NOTICE]   (242118) : Loading success.
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.956 226109 DEBUG nova.compute.manager [req-dec073fb-1c0c-4d00-9308-9d965413e77d req-f8cc1446-ad50-483b-81bd-087b9a9dc917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Received event network-vif-plugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.957 226109 DEBUG oslo_concurrency.lockutils [req-dec073fb-1c0c-4d00-9308-9d965413e77d req-f8cc1446-ad50-483b-81bd-087b9a9dc917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.957 226109 DEBUG oslo_concurrency.lockutils [req-dec073fb-1c0c-4d00-9308-9d965413e77d req-f8cc1446-ad50-483b-81bd-087b9a9dc917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.957 226109 DEBUG oslo_concurrency.lockutils [req-dec073fb-1c0c-4d00-9308-9d965413e77d req-f8cc1446-ad50-483b-81bd-087b9a9dc917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.957 226109 DEBUG nova.compute.manager [req-dec073fb-1c0c-4d00-9308-9d965413e77d req-f8cc1446-ad50-483b-81bd-087b9a9dc917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Processing event network-vif-plugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.958 226109 DEBUG nova.compute.manager [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.961 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004831.961147, e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.961 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] VM Resumed (Lifecycle Event)
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.963 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.965 226109 INFO nova.virt.libvirt.driver [-] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Instance spawned successfully.
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.965 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.990 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:11 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.995 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:07:12 compute-1 nova_compute[226101]: 2025-12-06 07:07:11.999 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:12 compute-1 nova_compute[226101]: 2025-12-06 07:07:12.000 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:12 compute-1 nova_compute[226101]: 2025-12-06 07:07:12.000 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:12 compute-1 nova_compute[226101]: 2025-12-06 07:07:12.000 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:12 compute-1 nova_compute[226101]: 2025-12-06 07:07:12.001 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:12 compute-1 nova_compute[226101]: 2025-12-06 07:07:12.001 226109 DEBUG nova.virt.libvirt.driver [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:12 compute-1 nova_compute[226101]: 2025-12-06 07:07:12.050 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:07:12 compute-1 nova_compute[226101]: 2025-12-06 07:07:12.118 226109 INFO nova.compute.manager [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Took 10.49 seconds to spawn the instance on the hypervisor.
Dec 06 07:07:12 compute-1 nova_compute[226101]: 2025-12-06 07:07:12.119 226109 DEBUG nova.compute.manager [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:12 compute-1 nova_compute[226101]: 2025-12-06 07:07:12.246 226109 INFO nova.compute.manager [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Took 11.54 seconds to build instance.
Dec 06 07:07:12 compute-1 nova_compute[226101]: 2025-12-06 07:07:12.272 226109 DEBUG oslo_concurrency.lockutils [None req-9b554eb2-f32b-41b9-9c90-1ce678608dce b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:12 compute-1 ceph-mon[81689]: pgmap v1403: 305 pgs: 305 active+clean; 191 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 1.8 MiB/s wr, 137 op/s
Dec 06 07:07:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:12.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:13.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:13 compute-1 nova_compute[226101]: 2025-12-06 07:07:13.728 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:13 compute-1 nova_compute[226101]: 2025-12-06 07:07:13.778 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:14 compute-1 ceph-mon[81689]: pgmap v1404: 305 pgs: 305 active+clean; 167 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 06 07:07:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:14.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:15 compute-1 nova_compute[226101]: 2025-12-06 07:07:15.201 226109 DEBUG nova.compute.manager [req-7d2c77b0-1d48-4195-9e3f-2c5a21dc1b31 req-e99d72e6-0cc9-43f7-bc8c-4a6f3a7efd89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Received event network-vif-plugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:15 compute-1 nova_compute[226101]: 2025-12-06 07:07:15.202 226109 DEBUG oslo_concurrency.lockutils [req-7d2c77b0-1d48-4195-9e3f-2c5a21dc1b31 req-e99d72e6-0cc9-43f7-bc8c-4a6f3a7efd89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:15 compute-1 nova_compute[226101]: 2025-12-06 07:07:15.203 226109 DEBUG oslo_concurrency.lockutils [req-7d2c77b0-1d48-4195-9e3f-2c5a21dc1b31 req-e99d72e6-0cc9-43f7-bc8c-4a6f3a7efd89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:15 compute-1 nova_compute[226101]: 2025-12-06 07:07:15.203 226109 DEBUG oslo_concurrency.lockutils [req-7d2c77b0-1d48-4195-9e3f-2c5a21dc1b31 req-e99d72e6-0cc9-43f7-bc8c-4a6f3a7efd89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:15 compute-1 nova_compute[226101]: 2025-12-06 07:07:15.203 226109 DEBUG nova.compute.manager [req-7d2c77b0-1d48-4195-9e3f-2c5a21dc1b31 req-e99d72e6-0cc9-43f7-bc8c-4a6f3a7efd89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] No waiting events found dispatching network-vif-plugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:07:15 compute-1 nova_compute[226101]: 2025-12-06 07:07:15.203 226109 WARNING nova.compute.manager [req-7d2c77b0-1d48-4195-9e3f-2c5a21dc1b31 req-e99d72e6-0cc9-43f7-bc8c-4a6f3a7efd89 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Received unexpected event network-vif-plugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 for instance with vm_state active and task_state None.
Dec 06 07:07:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:15.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:16.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:17 compute-1 ceph-mon[81689]: pgmap v1405: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 623 KiB/s rd, 1.8 MiB/s wr, 123 op/s
Dec 06 07:07:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:17.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:18 compute-1 ceph-mon[81689]: pgmap v1406: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 122 op/s
Dec 06 07:07:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:18 compute-1 nova_compute[226101]: 2025-12-06 07:07:18.729 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:18 compute-1 nova_compute[226101]: 2025-12-06 07:07:18.779 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:18.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:19.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:20 compute-1 ceph-mon[81689]: pgmap v1407: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 102 op/s
Dec 06 07:07:20 compute-1 nova_compute[226101]: 2025-12-06 07:07:20.592 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:20 compute-1 NetworkManager[49031]: <info>  [1765004840.5960] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/60)
Dec 06 07:07:20 compute-1 NetworkManager[49031]: <info>  [1765004840.5973] device (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 07:07:20 compute-1 NetworkManager[49031]: <info>  [1765004840.5985] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/61)
Dec 06 07:07:20 compute-1 NetworkManager[49031]: <info>  [1765004840.5990] device (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 07:07:20 compute-1 NetworkManager[49031]: <info>  [1765004840.6006] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec 06 07:07:20 compute-1 NetworkManager[49031]: <info>  [1765004840.6016] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Dec 06 07:07:20 compute-1 NetworkManager[49031]: <info>  [1765004840.6023] device (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 06 07:07:20 compute-1 NetworkManager[49031]: <info>  [1765004840.6028] device (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 06 07:07:20 compute-1 nova_compute[226101]: 2025-12-06 07:07:20.745 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:20 compute-1 ovn_controller[130279]: 2025-12-06T07:07:20Z|00148|binding|INFO|Releasing lport 6cb7c4fe-88f9-400c-a7b5-d8d7714c7b41 from this chassis (sb_readonly=0)
Dec 06 07:07:20 compute-1 nova_compute[226101]: 2025-12-06 07:07:20.761 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:20.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:21.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:22 compute-1 nova_compute[226101]: 2025-12-06 07:07:22.137 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004827.1359732, 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:07:22 compute-1 nova_compute[226101]: 2025-12-06 07:07:22.138 226109 INFO nova.compute.manager [-] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] VM Stopped (Lifecycle Event)
Dec 06 07:07:22 compute-1 ceph-mon[81689]: pgmap v1408: 305 pgs: 305 active+clean; 149 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 114 op/s
Dec 06 07:07:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:22.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:23.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:23 compute-1 nova_compute[226101]: 2025-12-06 07:07:23.635 226109 DEBUG nova.compute.manager [req-2cad4352-3f71-446f-b9be-e5fb3fc71adb req-8f15b234-43b0-4dac-b1af-db6db212f6c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Received event network-changed-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:23 compute-1 nova_compute[226101]: 2025-12-06 07:07:23.636 226109 DEBUG nova.compute.manager [req-2cad4352-3f71-446f-b9be-e5fb3fc71adb req-8f15b234-43b0-4dac-b1af-db6db212f6c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Refreshing instance network info cache due to event network-changed-bf5bb400-e5e8-47f6-bb49-2b370d100cd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:07:23 compute-1 nova_compute[226101]: 2025-12-06 07:07:23.637 226109 DEBUG oslo_concurrency.lockutils [req-2cad4352-3f71-446f-b9be-e5fb3fc71adb req-8f15b234-43b0-4dac-b1af-db6db212f6c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:07:23 compute-1 nova_compute[226101]: 2025-12-06 07:07:23.637 226109 DEBUG oslo_concurrency.lockutils [req-2cad4352-3f71-446f-b9be-e5fb3fc71adb req-8f15b234-43b0-4dac-b1af-db6db212f6c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:07:23 compute-1 nova_compute[226101]: 2025-12-06 07:07:23.637 226109 DEBUG nova.network.neutron [req-2cad4352-3f71-446f-b9be-e5fb3fc71adb req-8f15b234-43b0-4dac-b1af-db6db212f6c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Refreshing network info cache for port bf5bb400-e5e8-47f6-bb49-2b370d100cd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:07:23 compute-1 nova_compute[226101]: 2025-12-06 07:07:23.702 226109 DEBUG nova.compute.manager [None req-2995085b-b995-423c-a547-ecaeffdf44b8 - - - - - -] [instance: 1f74f35d-e7fa-4c39-bcd6-85af6d7255bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:23 compute-1 nova_compute[226101]: 2025-12-06 07:07:23.731 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:23 compute-1 nova_compute[226101]: 2025-12-06 07:07:23.780 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:24 compute-1 ceph-mon[81689]: pgmap v1409: 305 pgs: 305 active+clean; 121 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.5 KiB/s wr, 97 op/s
Dec 06 07:07:24 compute-1 nova_compute[226101]: 2025-12-06 07:07:24.511 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:24.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:25.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:25 compute-1 ovn_controller[130279]: 2025-12-06T07:07:25Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:ca:f4 10.100.0.12
Dec 06 07:07:25 compute-1 ovn_controller[130279]: 2025-12-06T07:07:25Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:ca:f4 10.100.0.12
Dec 06 07:07:26 compute-1 ceph-mon[81689]: pgmap v1410: 305 pgs: 305 active+clean; 96 MiB data, 500 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 115 op/s
Dec 06 07:07:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/781877424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:26.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:27.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:27 compute-1 nova_compute[226101]: 2025-12-06 07:07:27.347 226109 DEBUG nova.network.neutron [req-2cad4352-3f71-446f-b9be-e5fb3fc71adb req-8f15b234-43b0-4dac-b1af-db6db212f6c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Updated VIF entry in instance network info cache for port bf5bb400-e5e8-47f6-bb49-2b370d100cd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:07:27 compute-1 nova_compute[226101]: 2025-12-06 07:07:27.348 226109 DEBUG nova.network.neutron [req-2cad4352-3f71-446f-b9be-e5fb3fc71adb req-8f15b234-43b0-4dac-b1af-db6db212f6c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Updating instance_info_cache with network_info: [{"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:27 compute-1 nova_compute[226101]: 2025-12-06 07:07:27.389 226109 DEBUG oslo_concurrency.lockutils [req-2cad4352-3f71-446f-b9be-e5fb3fc71adb req-8f15b234-43b0-4dac-b1af-db6db212f6c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:07:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:28 compute-1 ceph-mon[81689]: pgmap v1411: 305 pgs: 305 active+clean; 115 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Dec 06 07:07:28 compute-1 nova_compute[226101]: 2025-12-06 07:07:28.733 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:28 compute-1 nova_compute[226101]: 2025-12-06 07:07:28.781 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:29.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:29 compute-1 podman[242130]: 2025-12-06 07:07:29.09549271 +0000 UTC m=+0.079990893 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:07:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:29.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:30 compute-1 ceph-mon[81689]: pgmap v1412: 305 pgs: 305 active+clean; 115 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Dec 06 07:07:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:31.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:31.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:32 compute-1 ceph-mon[81689]: pgmap v1413: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 06 07:07:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:33.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:33.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:33 compute-1 nova_compute[226101]: 2025-12-06 07:07:33.734 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:33 compute-1 nova_compute[226101]: 2025-12-06 07:07:33.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:35.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:35 compute-1 ceph-mon[81689]: pgmap v1414: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 07:07:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:35.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:36 compute-1 podman[242157]: 2025-12-06 07:07:36.084362847 +0000 UTC m=+0.057272719 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:07:36 compute-1 podman[242156]: 2025-12-06 07:07:36.084506901 +0000 UTC m=+0.060839075 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, container_name=multipathd)
Dec 06 07:07:36 compute-1 ovn_controller[130279]: 2025-12-06T07:07:36Z|00149|binding|INFO|Releasing lport 6cb7c4fe-88f9-400c-a7b5-d8d7714c7b41 from this chassis (sb_readonly=0)
Dec 06 07:07:36 compute-1 nova_compute[226101]: 2025-12-06 07:07:36.726 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:37.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:37 compute-1 ceph-mon[81689]: pgmap v1415: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 07:07:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:37.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3861690964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:38 compute-1 ceph-mon[81689]: pgmap v1416: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 1.2 MiB/s wr, 45 op/s
Dec 06 07:07:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:38 compute-1 nova_compute[226101]: 2025-12-06 07:07:38.736 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:38 compute-1 nova_compute[226101]: 2025-12-06 07:07:38.783 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:39.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:39.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:40 compute-1 ceph-mon[81689]: pgmap v1417: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 60 KiB/s wr, 5 op/s
Dec 06 07:07:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:41.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:41.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:42 compute-1 ceph-mon[81689]: pgmap v1418: 305 pgs: 305 active+clean; 129 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 399 KiB/s wr, 9 op/s
Dec 06 07:07:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:43.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:43.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:43 compute-1 nova_compute[226101]: 2025-12-06 07:07:43.739 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:43 compute-1 nova_compute[226101]: 2025-12-06 07:07:43.784 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:44 compute-1 ceph-mon[81689]: pgmap v1419: 305 pgs: 305 active+clean; 134 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.8 KiB/s rd, 610 KiB/s wr, 5 op/s
Dec 06 07:07:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:45.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:45.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:46 compute-1 nova_compute[226101]: 2025-12-06 07:07:46.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:46 compute-1 nova_compute[226101]: 2025-12-06 07:07:46.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:46 compute-1 ceph-mon[81689]: pgmap v1420: 305 pgs: 305 active+clean; 163 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Dec 06 07:07:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:47.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:47.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4060825194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:48 compute-1 nova_compute[226101]: 2025-12-06 07:07:48.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:48 compute-1 nova_compute[226101]: 2025-12-06 07:07:48.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:48 compute-1 nova_compute[226101]: 2025-12-06 07:07:48.740 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:48 compute-1 nova_compute[226101]: 2025-12-06 07:07:48.782 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:48 compute-1 nova_compute[226101]: 2025-12-06 07:07:48.783 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:48 compute-1 nova_compute[226101]: 2025-12-06 07:07:48.783 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:48 compute-1 nova_compute[226101]: 2025-12-06 07:07:48.783 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:07:48 compute-1 nova_compute[226101]: 2025-12-06 07:07:48.784 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:48 compute-1 nova_compute[226101]: 2025-12-06 07:07:48.804 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:48 compute-1 ceph-mon[81689]: pgmap v1421: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:07:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3405180293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3892501038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2210710338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:49.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:07:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2129314298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:49 compute-1 nova_compute[226101]: 2025-12-06 07:07:49.235 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:49.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:49 compute-1 nova_compute[226101]: 2025-12-06 07:07:49.570 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:07:49 compute-1 nova_compute[226101]: 2025-12-06 07:07:49.570 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:07:49 compute-1 nova_compute[226101]: 2025-12-06 07:07:49.739 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:07:49 compute-1 nova_compute[226101]: 2025-12-06 07:07:49.740 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4631MB free_disk=20.921947479248047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:07:49 compute-1 nova_compute[226101]: 2025-12-06 07:07:49.740 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:49 compute-1 nova_compute[226101]: 2025-12-06 07:07:49.740 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1525017526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2129314298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:50.040 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:07:50 compute-1 nova_compute[226101]: 2025-12-06 07:07:50.040 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:07:50.041 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:07:50 compute-1 nova_compute[226101]: 2025-12-06 07:07:50.163 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:07:50 compute-1 nova_compute[226101]: 2025-12-06 07:07:50.163 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:07:50 compute-1 nova_compute[226101]: 2025-12-06 07:07:50.164 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:07:50 compute-1 nova_compute[226101]: 2025-12-06 07:07:50.237 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:07:50 compute-1 nova_compute[226101]: 2025-12-06 07:07:50.368 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:07:50 compute-1 nova_compute[226101]: 2025-12-06 07:07:50.368 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:07:50 compute-1 nova_compute[226101]: 2025-12-06 07:07:50.386 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:07:50 compute-1 nova_compute[226101]: 2025-12-06 07:07:50.435 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:07:50 compute-1 nova_compute[226101]: 2025-12-06 07:07:50.748 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:51 compute-1 ceph-mon[81689]: pgmap v1422: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:07:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2763915287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:51.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:07:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/237580491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:51 compute-1 nova_compute[226101]: 2025-12-06 07:07:51.173 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:51 compute-1 nova_compute[226101]: 2025-12-06 07:07:51.175 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:51 compute-1 nova_compute[226101]: 2025-12-06 07:07:51.180 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:07:51 compute-1 nova_compute[226101]: 2025-12-06 07:07:51.226 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:07:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:51.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:51 compute-1 nova_compute[226101]: 2025-12-06 07:07:51.553 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:07:51 compute-1 nova_compute[226101]: 2025-12-06 07:07:51.553 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:51 compute-1 nova_compute[226101]: 2025-12-06 07:07:51.554 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:51 compute-1 nova_compute[226101]: 2025-12-06 07:07:51.554 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:07:52 compute-1 ceph-mon[81689]: pgmap v1423: 305 pgs: 305 active+clean; 203 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 3.2 MiB/s wr, 33 op/s
Dec 06 07:07:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/237580491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2213247620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:52 compute-1 nova_compute[226101]: 2025-12-06 07:07:52.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:52 compute-1 nova_compute[226101]: 2025-12-06 07:07:52.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:07:52 compute-1 nova_compute[226101]: 2025-12-06 07:07:52.608 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:07:52 compute-1 nova_compute[226101]: 2025-12-06 07:07:52.609 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:52 compute-1 nova_compute[226101]: 2025-12-06 07:07:52.609 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:52 compute-1 nova_compute[226101]: 2025-12-06 07:07:52.609 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:52 compute-1 nova_compute[226101]: 2025-12-06 07:07:52.609 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:07:52 compute-1 nova_compute[226101]: 2025-12-06 07:07:52.610 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:53.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:53.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:53 compute-1 nova_compute[226101]: 2025-12-06 07:07:53.611 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:53 compute-1 nova_compute[226101]: 2025-12-06 07:07:53.743 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:53 compute-1 nova_compute[226101]: 2025-12-06 07:07:53.806 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:54 compute-1 ceph-mon[81689]: pgmap v1424: 305 pgs: 305 active+clean; 213 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.2 MiB/s wr, 57 op/s
Dec 06 07:07:54 compute-1 nova_compute[226101]: 2025-12-06 07:07:54.503 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:54 compute-1 nova_compute[226101]: 2025-12-06 07:07:54.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:54 compute-1 nova_compute[226101]: 2025-12-06 07:07:54.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:07:54 compute-1 nova_compute[226101]: 2025-12-06 07:07:54.621 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:07:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:55.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:55.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:56 compute-1 ceph-mon[81689]: pgmap v1425: 305 pgs: 305 active+clean; 185 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.0 MiB/s wr, 103 op/s
Dec 06 07:07:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/797452098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3883785481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:57.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4143644919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:57.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:57 compute-1 nova_compute[226101]: 2025-12-06 07:07:57.603 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:58 compute-1 ceph-mon[81689]: pgmap v1426: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Dec 06 07:07:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:58 compute-1 nova_compute[226101]: 2025-12-06 07:07:58.744 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:58 compute-1 nova_compute[226101]: 2025-12-06 07:07:58.808 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:59.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:07:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:59.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:59 compute-1 nova_compute[226101]: 2025-12-06 07:07:59.570 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:00.043 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:00 compute-1 podman[242240]: 2025-12-06 07:08:00.081218179 +0000 UTC m=+0.072400708 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 06 07:08:00 compute-1 ceph-mon[81689]: pgmap v1427: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Dec 06 07:08:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:01.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:01.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:01.624 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:01.625 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:01.625 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:08:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:03.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:08:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:03.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:03 compute-1 ceph-mon[81689]: pgmap v1428: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Dec 06 07:08:03 compute-1 nova_compute[226101]: 2025-12-06 07:08:03.745 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:03 compute-1 nova_compute[226101]: 2025-12-06 07:08:03.810 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:04 compute-1 ceph-mon[81689]: pgmap v1429: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 424 KiB/s wr, 131 op/s
Dec 06 07:08:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:05.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:05.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:05 compute-1 ovn_controller[130279]: 2025-12-06T07:08:05Z|00150|binding|INFO|Releasing lport 6cb7c4fe-88f9-400c-a7b5-d8d7714c7b41 from this chassis (sb_readonly=0)
Dec 06 07:08:06 compute-1 nova_compute[226101]: 2025-12-06 07:08:06.029 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:06 compute-1 nova_compute[226101]: 2025-12-06 07:08:06.345 226109 DEBUG nova.compute.manager [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Received event network-changed-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:08:06 compute-1 nova_compute[226101]: 2025-12-06 07:08:06.345 226109 DEBUG nova.compute.manager [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Refreshing instance network info cache due to event network-changed-bf5bb400-e5e8-47f6-bb49-2b370d100cd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:08:06 compute-1 nova_compute[226101]: 2025-12-06 07:08:06.345 226109 DEBUG oslo_concurrency.lockutils [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:08:06 compute-1 nova_compute[226101]: 2025-12-06 07:08:06.345 226109 DEBUG oslo_concurrency.lockutils [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:08:06 compute-1 nova_compute[226101]: 2025-12-06 07:08:06.346 226109 DEBUG nova.network.neutron [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Refreshing network info cache for port bf5bb400-e5e8-47f6-bb49-2b370d100cd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:08:06 compute-1 ceph-mon[81689]: pgmap v1430: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 30 KiB/s wr, 144 op/s
Dec 06 07:08:06 compute-1 ovn_controller[130279]: 2025-12-06T07:08:06Z|00151|binding|INFO|Releasing lport 6cb7c4fe-88f9-400c-a7b5-d8d7714c7b41 from this chassis (sb_readonly=0)
Dec 06 07:08:06 compute-1 nova_compute[226101]: 2025-12-06 07:08:06.976 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:07.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:07 compute-1 podman[242268]: 2025-12-06 07:08:07.091031371 +0000 UTC m=+0.073412225 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Dec 06 07:08:07 compute-1 podman[242267]: 2025-12-06 07:08:07.102737378 +0000 UTC m=+0.082027868 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Dec 06 07:08:07 compute-1 nova_compute[226101]: 2025-12-06 07:08:07.182 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:07.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:08 compute-1 ceph-mon[81689]: pgmap v1431: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 18 KiB/s wr, 120 op/s
Dec 06 07:08:08 compute-1 nova_compute[226101]: 2025-12-06 07:08:08.750 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:08 compute-1 nova_compute[226101]: 2025-12-06 07:08:08.813 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:09.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:09.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:09 compute-1 nova_compute[226101]: 2025-12-06 07:08:09.482 226109 DEBUG nova.network.neutron [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Updated VIF entry in instance network info cache for port bf5bb400-e5e8-47f6-bb49-2b370d100cd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:08:09 compute-1 nova_compute[226101]: 2025-12-06 07:08:09.482 226109 DEBUG nova.network.neutron [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Updating instance_info_cache with network_info: [{"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:08:09 compute-1 nova_compute[226101]: 2025-12-06 07:08:09.509 226109 DEBUG oslo_concurrency.lockutils [req-114b27e9-d542-47fd-becc-efc2c82f3e4f req-791bc8f7-a0e8-480f-ad94-3aea36736175 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:08:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2977333581' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:08:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2977333581' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:08:09 compute-1 sudo[242306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:09 compute-1 sudo[242306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:09 compute-1 sudo[242306]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:09 compute-1 sudo[242331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:08:09 compute-1 sudo[242331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:09 compute-1 sudo[242331]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:09 compute-1 sudo[242356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:09 compute-1 sudo[242356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:09 compute-1 sudo[242356]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:09 compute-1 sudo[242381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 07:08:09 compute-1 sudo[242381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:10 compute-1 podman[242473]: 2025-12-06 07:08:10.216523429 +0000 UTC m=+0.056133249 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 07:08:10 compute-1 podman[242473]: 2025-12-06 07:08:10.322685937 +0000 UTC m=+0.162295757 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:08:10 compute-1 ceph-mon[81689]: pgmap v1432: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Dec 06 07:08:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:10 compute-1 sudo[242381]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:10 compute-1 sudo[242598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:10 compute-1 sudo[242598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:10 compute-1 sudo[242598]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:10 compute-1 sudo[242623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:08:10 compute-1 sudo[242623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:10 compute-1 sudo[242623]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:10 compute-1 sudo[242648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:10 compute-1 sudo[242648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:10 compute-1 sudo[242648]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:10 compute-1 sudo[242673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:08:10 compute-1 sudo[242673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:11.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:11.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:11 compute-1 sudo[242673]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:08:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:08:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:08:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:08:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:08:12 compute-1 ceph-mon[81689]: pgmap v1433: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Dec 06 07:08:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:13.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:13 compute-1 nova_compute[226101]: 2025-12-06 07:08:13.751 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:13 compute-1 nova_compute[226101]: 2025-12-06 07:08:13.815 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:14 compute-1 ceph-mon[81689]: pgmap v1434: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 66 op/s
Dec 06 07:08:14 compute-1 nova_compute[226101]: 2025-12-06 07:08:14.909 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:15.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:15.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:17.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e182 e182: 3 total, 3 up, 3 in
Dec 06 07:08:18 compute-1 ceph-mon[81689]: pgmap v1435: 305 pgs: 305 active+clean; 177 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 957 KiB/s wr, 83 op/s
Dec 06 07:08:18 compute-1 nova_compute[226101]: 2025-12-06 07:08:18.754 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:18 compute-1 nova_compute[226101]: 2025-12-06 07:08:18.817 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:19.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:19 compute-1 sudo[242730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:19 compute-1 sudo[242730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:19 compute-1 sudo[242730]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:19 compute-1 sudo[242755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:08:19 compute-1 sudo[242755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:19 compute-1 sudo[242755]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:19.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:19 compute-1 ceph-mon[81689]: pgmap v1436: 305 pgs: 305 active+clean; 192 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Dec 06 07:08:19 compute-1 ceph-mon[81689]: osdmap e182: 3 total, 3 up, 3 in
Dec 06 07:08:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:21.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:21 compute-1 ceph-mon[81689]: pgmap v1438: 305 pgs: 305 active+clean; 192 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Dec 06 07:08:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:21.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:21 compute-1 nova_compute[226101]: 2025-12-06 07:08:21.331 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:08:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2405.4 total, 600.0 interval
                                           Cumulative writes: 17K writes, 71K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 17K writes, 5445 syncs, 3.24 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9756 writes, 40K keys, 9756 commit groups, 1.0 writes per commit group, ingest: 42.33 MB, 0.07 MB/s
                                           Interval WAL: 9756 writes, 3787 syncs, 2.58 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:08:23 compute-1 ceph-mon[81689]: pgmap v1439: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 95 op/s
Dec 06 07:08:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:23.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:23.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:23 compute-1 nova_compute[226101]: 2025-12-06 07:08:23.787 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:23 compute-1 nova_compute[226101]: 2025-12-06 07:08:23.818 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:24 compute-1 ceph-mon[81689]: pgmap v1440: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 95 op/s
Dec 06 07:08:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:25.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:25.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:25 compute-1 nova_compute[226101]: 2025-12-06 07:08:25.953 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:26 compute-1 ceph-mon[81689]: pgmap v1441: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 1.4 MiB/s wr, 71 op/s
Dec 06 07:08:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/191893823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:27.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:27.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:28 compute-1 ceph-mon[81689]: pgmap v1442: 305 pgs: 305 active+clean; 215 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 606 KiB/s wr, 33 op/s
Dec 06 07:08:28 compute-1 nova_compute[226101]: 2025-12-06 07:08:28.790 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:28 compute-1 nova_compute[226101]: 2025-12-06 07:08:28.820 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:29.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.209 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Acquiring lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.210 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.225 226109 DEBUG nova.compute.manager [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.322 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.322 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.330 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.330 226109 INFO nova.compute.claims [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:08:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:29.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.469 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:08:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2037535136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.898 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.904 226109 DEBUG nova.compute.provider_tree [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.923 226109 DEBUG nova.scheduler.client.report [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.952 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:29 compute-1 nova_compute[226101]: 2025-12-06 07:08:29.954 226109 DEBUG nova.compute.manager [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.025 226109 DEBUG nova.compute.manager [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.026 226109 DEBUG nova.network.neutron [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.059 226109 INFO nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.076 226109 DEBUG nova.compute.manager [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.180 226109 DEBUG nova.compute.manager [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.181 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.182 226109 INFO nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Creating image(s)
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.210 226109 DEBUG nova.storage.rbd_utils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] rbd image f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.237 226109 DEBUG nova.storage.rbd_utils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] rbd image f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.266 226109 DEBUG nova.storage.rbd_utils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] rbd image f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.269 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.296 226109 DEBUG nova.policy [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '98f77d99921448508eb3ac7bf8c4fa34', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '87ee3f11e794407692b09a4b77c91610', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.346 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.347 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.348 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.348 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.372 226109 DEBUG nova.storage.rbd_utils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] rbd image f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.375 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.708 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:30 compute-1 nova_compute[226101]: 2025-12-06 07:08:30.774 226109 DEBUG nova.storage.rbd_utils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] resizing rbd image f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:08:30 compute-1 ceph-mon[81689]: pgmap v1443: 305 pgs: 305 active+clean; 215 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 125 KiB/s rd, 527 KiB/s wr, 29 op/s
Dec 06 07:08:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2037535136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:31 compute-1 nova_compute[226101]: 2025-12-06 07:08:31.089 226109 DEBUG nova.objects.instance [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lazy-loading 'migration_context' on Instance uuid f9624722-30f1-4e59-ae6a-347ad7a8a4fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:31.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:31 compute-1 podman[242952]: 2025-12-06 07:08:31.103151868 +0000 UTC m=+0.085629975 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:08:31 compute-1 nova_compute[226101]: 2025-12-06 07:08:31.110 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:08:31 compute-1 nova_compute[226101]: 2025-12-06 07:08:31.111 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Ensure instance console log exists: /var/lib/nova/instances/f9624722-30f1-4e59-ae6a-347ad7a8a4fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:08:31 compute-1 nova_compute[226101]: 2025-12-06 07:08:31.111 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:31 compute-1 nova_compute[226101]: 2025-12-06 07:08:31.111 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:31 compute-1 nova_compute[226101]: 2025-12-06 07:08:31.112 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:31.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:31 compute-1 nova_compute[226101]: 2025-12-06 07:08:31.337 226109 DEBUG nova.network.neutron [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Successfully created port: 41ed45c4-775a-4f28-9048-e796534620db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:08:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1502816875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3750598982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:32 compute-1 nova_compute[226101]: 2025-12-06 07:08:32.709 226109 DEBUG nova.network.neutron [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Successfully updated port: 41ed45c4-775a-4f28-9048-e796534620db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:08:32 compute-1 nova_compute[226101]: 2025-12-06 07:08:32.731 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Acquiring lock "refresh_cache-f9624722-30f1-4e59-ae6a-347ad7a8a4fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:08:32 compute-1 nova_compute[226101]: 2025-12-06 07:08:32.732 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Acquired lock "refresh_cache-f9624722-30f1-4e59-ae6a-347ad7a8a4fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:08:32 compute-1 nova_compute[226101]: 2025-12-06 07:08:32.732 226109 DEBUG nova.network.neutron [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:08:32 compute-1 nova_compute[226101]: 2025-12-06 07:08:32.813 226109 DEBUG nova.compute.manager [req-c356b816-f42c-4329-8476-9919e427877e req-4d6950cd-adac-4b5f-99b0-d63dd79e32ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Received event network-changed-41ed45c4-775a-4f28-9048-e796534620db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:08:32 compute-1 nova_compute[226101]: 2025-12-06 07:08:32.813 226109 DEBUG nova.compute.manager [req-c356b816-f42c-4329-8476-9919e427877e req-4d6950cd-adac-4b5f-99b0-d63dd79e32ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Refreshing instance network info cache due to event network-changed-41ed45c4-775a-4f28-9048-e796534620db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:08:32 compute-1 nova_compute[226101]: 2025-12-06 07:08:32.813 226109 DEBUG oslo_concurrency.lockutils [req-c356b816-f42c-4329-8476-9919e427877e req-4d6950cd-adac-4b5f-99b0-d63dd79e32ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f9624722-30f1-4e59-ae6a-347ad7a8a4fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:08:32 compute-1 nova_compute[226101]: 2025-12-06 07:08:32.888 226109 DEBUG nova.network.neutron [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:08:33 compute-1 ceph-mon[81689]: pgmap v1444: 305 pgs: 305 active+clean; 254 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 1.9 MiB/s wr, 54 op/s
Dec 06 07:08:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:33.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:33.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:33 compute-1 nova_compute[226101]: 2025-12-06 07:08:33.791 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:33 compute-1 nova_compute[226101]: 2025-12-06 07:08:33.821 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:34 compute-1 ceph-mon[81689]: pgmap v1445: 305 pgs: 305 active+clean; 262 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 2.5 MiB/s wr, 29 op/s
Dec 06 07:08:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:35.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.196 226109 DEBUG nova.network.neutron [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Updating instance_info_cache with network_info: [{"id": "41ed45c4-775a-4f28-9048-e796534620db", "address": "fa:16:3e:7f:6f:da", "network": {"id": "ced2dfbe-6872-45ca-a2ac-dc0d6b713369", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-298856787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87ee3f11e794407692b09a4b77c91610", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41ed45c4-77", "ovs_interfaceid": "41ed45c4-775a-4f28-9048-e796534620db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.213 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Releasing lock "refresh_cache-f9624722-30f1-4e59-ae6a-347ad7a8a4fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.213 226109 DEBUG nova.compute.manager [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Instance network_info: |[{"id": "41ed45c4-775a-4f28-9048-e796534620db", "address": "fa:16:3e:7f:6f:da", "network": {"id": "ced2dfbe-6872-45ca-a2ac-dc0d6b713369", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-298856787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87ee3f11e794407692b09a4b77c91610", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41ed45c4-77", "ovs_interfaceid": "41ed45c4-775a-4f28-9048-e796534620db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.213 226109 DEBUG oslo_concurrency.lockutils [req-c356b816-f42c-4329-8476-9919e427877e req-4d6950cd-adac-4b5f-99b0-d63dd79e32ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f9624722-30f1-4e59-ae6a-347ad7a8a4fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.213 226109 DEBUG nova.network.neutron [req-c356b816-f42c-4329-8476-9919e427877e req-4d6950cd-adac-4b5f-99b0-d63dd79e32ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Refreshing network info cache for port 41ed45c4-775a-4f28-9048-e796534620db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.216 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Start _get_guest_xml network_info=[{"id": "41ed45c4-775a-4f28-9048-e796534620db", "address": "fa:16:3e:7f:6f:da", "network": {"id": "ced2dfbe-6872-45ca-a2ac-dc0d6b713369", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-298856787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87ee3f11e794407692b09a4b77c91610", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41ed45c4-77", "ovs_interfaceid": "41ed45c4-775a-4f28-9048-e796534620db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.221 226109 WARNING nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.226 226109 DEBUG nova.virt.libvirt.host [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.226 226109 DEBUG nova.virt.libvirt.host [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.232 226109 DEBUG nova.virt.libvirt.host [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.233 226109 DEBUG nova.virt.libvirt.host [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.234 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.234 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.234 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.235 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.235 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.235 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.235 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.235 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.236 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.236 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.236 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.236 226109 DEBUG nova.virt.hardware [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
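[editor's note] The eight hardware.py lines above are Nova's CPU-topology negotiation: with no flavor or image constraints (limits and preferences all 0:0:0), the search space is capped at 65536 per dimension, and the only (sockets, cores, threads) triple whose product equals the single vCPU is 1:1:1. A rough, illustrative sketch of that enumeration follows — not Nova's actual implementation, names are made up:

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, capped by the per-dimension maxima seen in the log above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Possible topologies" above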
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.239 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:35.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:08:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2803480774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.658 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
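[editor's note] The "ceph mon dump" round trip above is how the RBD image backend discovers monitor addresses; the host:port pairs it extracts reappear verbatim in the <source protocol="rbd"> elements of the guest XML further down. A hedged reconstruction of that lookup — the JSON field layout is assumed from current Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    # Each mon entry carries an "addr" such as "192.168.122.100:6789/0";
    # dropping the trailing nonce yields the host:port pairs for libvirt.
    mons = json.loads(out)["mons"]
    print([m["addr"].split("/")[0] for m in mons])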
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.692 226109 DEBUG nova.storage.rbd_utils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] rbd image f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:35 compute-1 nova_compute[226101]: 2025-12-06 07:08:35.697 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:08:36 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/442084149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.104 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.106 226109 DEBUG nova.virt.libvirt.vif [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:08:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1003383917',display_name='tempest-ImagesOneServerTestJSON-server-1003383917',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1003383917',id=46,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='87ee3f11e794407692b09a4b77c91610',ramdisk_id='',reservation_id='r-uj0nvdxv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1888020106',owner_user_name='tempest-ImagesOneServerTestJSON-1888020106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:08:30Z,user_data=None,user_id='98f77d99921448508eb3ac7bf8c4fa34',uuid=f9624722-30f1-4e59-ae6a-347ad7a8a4fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41ed45c4-775a-4f28-9048-e796534620db", "address": "fa:16:3e:7f:6f:da", "network": {"id": "ced2dfbe-6872-45ca-a2ac-dc0d6b713369", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-298856787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87ee3f11e794407692b09a4b77c91610", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41ed45c4-77", "ovs_interfaceid": "41ed45c4-775a-4f28-9048-e796534620db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.107 226109 DEBUG nova.network.os_vif_util [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Converting VIF {"id": "41ed45c4-775a-4f28-9048-e796534620db", "address": "fa:16:3e:7f:6f:da", "network": {"id": "ced2dfbe-6872-45ca-a2ac-dc0d6b713369", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-298856787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87ee3f11e794407692b09a4b77c91610", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41ed45c4-77", "ovs_interfaceid": "41ed45c4-775a-4f28-9048-e796534620db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.108 226109 DEBUG nova.network.os_vif_util [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:6f:da,bridge_name='br-int',has_traffic_filtering=True,id=41ed45c4-775a-4f28-9048-e796534620db,network=Network(ced2dfbe-6872-45ca-a2ac-dc0d6b713369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41ed45c4-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.109 226109 DEBUG nova.objects.instance [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lazy-loading 'pci_devices' on Instance uuid f9624722-30f1-4e59-ae6a-347ad7a8a4fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.130 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:08:36 compute-1 nova_compute[226101]:   <uuid>f9624722-30f1-4e59-ae6a-347ad7a8a4fd</uuid>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   <name>instance-0000002e</name>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <nova:name>tempest-ImagesOneServerTestJSON-server-1003383917</nova:name>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:08:35</nova:creationTime>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <nova:user uuid="98f77d99921448508eb3ac7bf8c4fa34">tempest-ImagesOneServerTestJSON-1888020106-project-member</nova:user>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <nova:project uuid="87ee3f11e794407692b09a4b77c91610">tempest-ImagesOneServerTestJSON-1888020106</nova:project>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <nova:port uuid="41ed45c4-775a-4f28-9048-e796534620db">
Dec 06 07:08:36 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <system>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <entry name="serial">f9624722-30f1-4e59-ae6a-347ad7a8a4fd</entry>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <entry name="uuid">f9624722-30f1-4e59-ae6a-347ad7a8a4fd</entry>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     </system>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   <os>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   </os>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   <features>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   </features>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk">
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       </source>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk.config">
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       </source>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:08:36 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:7f:6f:da"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <target dev="tap41ed45c4-77"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/f9624722-30f1-4e59-ae6a-347ad7a8a4fd/console.log" append="off"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <video>
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     </video>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:08:36 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:08:36 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:08:36 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:08:36 compute-1 nova_compute[226101]: </domain>
Dec 06 07:08:36 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
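[editor's note] Dumped domain XML like the block above can be audited offline, for example to confirm which Ceph monitors a guest was wired to. A minimal standard-library sketch, assuming the XML was saved to a hypothetical file instance-0000002e.xml:

    import xml.etree.ElementTree as ET

    tree = ET.parse("instance-0000002e.xml")
    for disk in tree.findall("./devices/disk"):
        source = disk.find("source")
        if source is not None and source.get("protocol") == "rbd":
            hosts = [f'{h.get("name")}:{h.get("port")}' for h in source.findall("host")]
            # e.g. vms/f9624722-..._disk ['192.168.122.100:6789', ...]
            print(source.get("name"), hosts)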
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.131 226109 DEBUG nova.compute.manager [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Preparing to wait for external event network-vif-plugged-41ed45c4-775a-4f28-9048-e796534620db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.132 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Acquiring lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.132 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.132 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
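[editor's note] The acquire/release pair above is oslo.concurrency's named-lock idiom, guarding the per-instance event dict while the network-vif-plugged event is registered. The same primitive in isolation looks like this — a sketch, not Nova's code:

    from oslo_concurrency import lockutils

    with lockutils.lock("f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events"):
        # critical section: create or fetch the pending
        # network-vif-plugged event for this instance
        pass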
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.133 226109 DEBUG nova.virt.libvirt.vif [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:08:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1003383917',display_name='tempest-ImagesOneServerTestJSON-server-1003383917',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1003383917',id=46,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='87ee3f11e794407692b09a4b77c91610',ramdisk_id='',reservation_id='r-uj0nvdxv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1888020106',owner_user_name='tempest-ImagesOneServerTestJSON-1888020106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:08:30Z,user_data=None,user_id='98f77d99921448508eb3ac7bf8c4fa34',uuid=f9624722-30f1-4e59-ae6a-347ad7a8a4fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41ed45c4-775a-4f28-9048-e796534620db", "address": "fa:16:3e:7f:6f:da", "network": {"id": "ced2dfbe-6872-45ca-a2ac-dc0d6b713369", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-298856787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87ee3f11e794407692b09a4b77c91610", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41ed45c4-77", "ovs_interfaceid": "41ed45c4-775a-4f28-9048-e796534620db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.133 226109 DEBUG nova.network.os_vif_util [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Converting VIF {"id": "41ed45c4-775a-4f28-9048-e796534620db", "address": "fa:16:3e:7f:6f:da", "network": {"id": "ced2dfbe-6872-45ca-a2ac-dc0d6b713369", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-298856787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87ee3f11e794407692b09a4b77c91610", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41ed45c4-77", "ovs_interfaceid": "41ed45c4-775a-4f28-9048-e796534620db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.134 226109 DEBUG nova.network.os_vif_util [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:6f:da,bridge_name='br-int',has_traffic_filtering=True,id=41ed45c4-775a-4f28-9048-e796534620db,network=Network(ced2dfbe-6872-45ca-a2ac-dc0d6b713369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41ed45c4-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.134 226109 DEBUG os_vif [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:6f:da,bridge_name='br-int',has_traffic_filtering=True,id=41ed45c4-775a-4f28-9048-e796534620db,network=Network(ced2dfbe-6872-45ca-a2ac-dc0d6b713369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41ed45c4-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.135 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.135 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.136 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.139 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.139 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41ed45c4-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.140 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41ed45c4-77, col_values=(('external_ids', {'iface-id': '41ed45c4-775a-4f28-9048-e796534620db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:6f:da', 'vm-uuid': 'f9624722-30f1-4e59-ae6a-347ad7a8a4fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.141 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:36 compute-1 NetworkManager[49031]: <info>  [1765004916.1423] manager: (tap41ed45c4-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.143 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.147 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.148 226109 INFO os_vif [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:6f:da,bridge_name='br-int',has_traffic_filtering=True,id=41ed45c4-775a-4f28-9048-e796534620db,network=Network(ced2dfbe-6872-45ca-a2ac-dc0d6b713369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41ed45c4-77')
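[editor's note] The three ovsdbapp commands logged during the plug (AddBridgeCommand, AddPortCommand, DbSetCommand) map onto a single ovs-vsctl transaction, which is handy for reproducing the wiring by hand. A hedged equivalent:

    import subprocess

    subprocess.check_call([
        "ovs-vsctl",
        "--may-exist", "add-br", "br-int",
        "--", "set", "Bridge", "br-int", "datapath_type=system",
        "--", "--may-exist", "add-port", "br-int", "tap41ed45c4-77",
        "--", "set", "Interface", "tap41ed45c4-77",
        "external_ids:iface-id=41ed45c4-775a-4f28-9048-e796534620db",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:7f:6f:da",
        "external_ids:vm-uuid=f9624722-30f1-4e59-ae6a-347ad7a8a4fd",
    ])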
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.198 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.199 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.200 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] No VIF found with MAC fa:16:3e:7f:6f:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.200 226109 INFO nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Using config drive
Dec 06 07:08:36 compute-1 ceph-mon[81689]: pgmap v1446: 305 pgs: 305 active+clean; 293 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 62 op/s
Dec 06 07:08:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2803480774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/442084149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e183 e183: 3 total, 3 up, 3 in
Dec 06 07:08:36 compute-1 nova_compute[226101]: 2025-12-06 07:08:36.253 226109 DEBUG nova.storage.rbd_utils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] rbd image f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.025 226109 INFO nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Creating config drive at /var/lib/nova/instances/f9624722-30f1-4e59-ae6a-347ad7a8a4fd/disk.config
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.030 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f9624722-30f1-4e59-ae6a-347ad7a8a4fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4wce5m9x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:37.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.155 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f9624722-30f1-4e59-ae6a-347ad7a8a4fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4wce5m9x" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
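[editor's note] processutils logs the argv space-joined, so the multi-word -publisher value ("OpenStack Compute 27.5.2-...") appears unquoted above even though it is passed as a single argument. A sketch of the config-drive build those two lines bracket; the staging-directory contents are illustrative placeholders, not what Nova actually wrote:

    import json, os, subprocess, tempfile

    staging = tempfile.mkdtemp()
    meta_dir = os.path.join(staging, "openstack", "latest")
    os.makedirs(meta_dir)
    with open(os.path.join(meta_dir, "meta_data.json"), "w") as f:
        json.dump({"uuid": "f9624722-30f1-4e59-ae6a-347ad7a8a4fd"}, f)

    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2", staging,
    ])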
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.200 226109 DEBUG nova.storage.rbd_utils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] rbd image f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.205 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f9624722-30f1-4e59-ae6a-347ad7a8a4fd/disk.config f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.242 226109 DEBUG nova.network.neutron [req-c356b816-f42c-4329-8476-9919e427877e req-4d6950cd-adac-4b5f-99b0-d63dd79e32ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Updated VIF entry in instance network info cache for port 41ed45c4-775a-4f28-9048-e796534620db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.244 226109 DEBUG nova.network.neutron [req-c356b816-f42c-4329-8476-9919e427877e req-4d6950cd-adac-4b5f-99b0-d63dd79e32ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Updating instance_info_cache with network_info: [{"id": "41ed45c4-775a-4f28-9048-e796534620db", "address": "fa:16:3e:7f:6f:da", "network": {"id": "ced2dfbe-6872-45ca-a2ac-dc0d6b713369", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-298856787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87ee3f11e794407692b09a4b77c91610", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41ed45c4-77", "ovs_interfaceid": "41ed45c4-775a-4f28-9048-e796534620db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:08:37 compute-1 ceph-mon[81689]: osdmap e183: 3 total, 3 up, 3 in
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.272 226109 DEBUG oslo_concurrency.lockutils [req-c356b816-f42c-4329-8476-9919e427877e req-4d6950cd-adac-4b5f-99b0-d63dd79e32ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f9624722-30f1-4e59-ae6a-347ad7a8a4fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:08:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:37.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.392 226109 DEBUG oslo_concurrency.processutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f9624722-30f1-4e59-ae6a-347ad7a8a4fd/disk.config f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.393 226109 INFO nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Deleting local config drive /var/lib/nova/instances/f9624722-30f1-4e59-ae6a-347ad7a8a4fd/disk.config because it was imported into RBD.
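[editor's note] Because the ephemeral backend is RBD, the freshly built ISO is immediately pushed into the vms pool as a format-2 image and the local copy dropped once the CLI exits 0. Reconstructed as a sketch:

    import os, subprocess

    local = "/var/lib/nova/instances/f9624722-30f1-4e59-ae6a-347ad7a8a4fd/disk.config"
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", local,
        "f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    os.unlink(local)  # mirrors "Deleting local config drive ..." above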
Dec 06 07:08:37 compute-1 kernel: tap41ed45c4-77: entered promiscuous mode
Dec 06 07:08:37 compute-1 NetworkManager[49031]: <info>  [1765004917.4472] manager: (tap41ed45c4-77): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Dec 06 07:08:37 compute-1 ovn_controller[130279]: 2025-12-06T07:08:37Z|00152|binding|INFO|Claiming lport 41ed45c4-775a-4f28-9048-e796534620db for this chassis.
Dec 06 07:08:37 compute-1 ovn_controller[130279]: 2025-12-06T07:08:37Z|00153|binding|INFO|41ed45c4-775a-4f28-9048-e796534620db: Claiming fa:16:3e:7f:6f:da 10.100.0.12
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.449 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.453 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:6f:da 10.100.0.12'], port_security=['fa:16:3e:7f:6f:da 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f9624722-30f1-4e59-ae6a-347ad7a8a4fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ced2dfbe-6872-45ca-a2ac-dc0d6b713369', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87ee3f11e794407692b09a4b77c91610', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0edf0cc8-9296-46d9-b9a9-54c30b2ef4f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=198f7e4a-c850-4807-898b-519c5756f8ea, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=41ed45c4-775a-4f28-9048-e796534620db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.455 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 41ed45c4-775a-4f28-9048-e796534620db in datapath ced2dfbe-6872-45ca-a2ac-dc0d6b713369 bound to our chassis
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.456 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ced2dfbe-6872-45ca-a2ac-dc0d6b713369
Dec 06 07:08:37 compute-1 ovn_controller[130279]: 2025-12-06T07:08:37Z|00154|binding|INFO|Setting lport 41ed45c4-775a-4f28-9048-e796534620db ovn-installed in OVS
Dec 06 07:08:37 compute-1 ovn_controller[130279]: 2025-12-06T07:08:37Z|00155|binding|INFO|Setting lport 41ed45c4-775a-4f28-9048-e796534620db up in Southbound
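[editor's note] The claim/up transition ovn-controller reports here can be double-checked against the Southbound database; a hedged one-liner wrapper around ovn-sbctl:

    import subprocess

    print(subprocess.check_output([
        "ovn-sbctl", "find", "Port_Binding",
        "logical_port=41ed45c4-775a-4f28-9048-e796534620db",
    ]).decode())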
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.470 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0c89671b-ef5b-419c-8fca-12dc730238d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.471 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapced2dfbe-61 in ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
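[editor's note] The veth provisioning announced here pairs tapced2dfbe-60 (left in the root namespace for OVS) with tapced2dfbe-61 inside the per-network ovnmeta- namespace. The agent does this through pyroute2 behind privsep; an equivalent iproute2 sequence, sketched under the assumption that the namespace does not yet exist:

    import subprocess

    ns = "ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369"
    subprocess.check_call(["ip", "netns", "add", ns])
    subprocess.check_call(["ip", "link", "add", "tapced2dfbe-60",
                           "type", "veth", "peer", "name", "tapced2dfbe-61"])
    subprocess.check_call(["ip", "link", "set", "tapced2dfbe-61", "netns", ns])
    subprocess.check_call(["ip", "-n", ns, "link", "set", "tapced2dfbe-61", "up"])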
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.473 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.475 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapced2dfbe-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.475 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0e078020-bc53-4936-abf3-24ee6a5d5f89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.476 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b137a9e6-fb50-47e1-baf6-ba357da9c9ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.476 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.486 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7f3d32-c825-462c-be65-c98f9166e21b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 systemd-machined[190302]: New machine qemu-23-instance-0000002e.
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.520 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e73ab64c-d8b1-49a0-8642-bc7ab700d002]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 systemd[1]: Started Virtual Machine qemu-23-instance-0000002e.
Dec 06 07:08:37 compute-1 systemd-udevd[243166]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.551 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[531a3df8-0d08-43cf-9429-607d38b5829b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 NetworkManager[49031]: <info>  [1765004917.5595] device (tap41ed45c4-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.560 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba073f8-15ae-4185-8748-089c438be5d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 NetworkManager[49031]: <info>  [1765004917.5633] manager: (tapced2dfbe-60): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Dec 06 07:08:37 compute-1 NetworkManager[49031]: <info>  [1765004917.5641] device (tap41ed45c4-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:08:37 compute-1 systemd-udevd[243180]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:08:37 compute-1 podman[243130]: 2025-12-06 07:08:37.575007952 +0000 UTC m=+0.094792353 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:08:37 compute-1 podman[243132]: 2025-12-06 07:08:37.587942692 +0000 UTC m=+0.100278792 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.591 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8eec88-846d-49a1-a078-15c02e27504e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.595 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[174e2359-f460-4445-8516-e47fe6057ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 NetworkManager[49031]: <info>  [1765004917.6171] device (tapced2dfbe-60): carrier: link connected
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.624 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7e02f782-5409-47a8-9b26-41c029d6612a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.640 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b95afa-339a-42d1-9c69-99afb41ed8f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapced2dfbe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:6f:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519712, 'reachable_time': 19015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243204, 'error': None, 'target': 'ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.654 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9c88b8d6-fc21-4616-91d3-93f927e74630]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:6f65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519712, 'tstamp': 519712}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243205, 'error': None, 'target': 'ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.668 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[414acc5e-cece-4ad5-abdf-d8840a8b0038]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapced2dfbe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:6f:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519712, 'reachable_time': 19015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243206, 'error': None, 'target': 'ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
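
The two RTM_NEWLINK dumps and the RTM_NEWADDR reply above are pyroute2 netlink messages fetched by the privsep daemon inside the namespace named in each header's 'target' field; they confirm the veth peer tapced2dfbe-61 is UP with the expected MAC before the metadata proxy is started. A sketch of the same query with pyroute2, assuming the namespace still exists and the caller has the privileges the privsep daemon normally provides:

    from pyroute2 import NetNS

    # Query the namespace the privsep daemon targeted above.
    ns = NetNS("ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369")
    try:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"),
                  link.get_attr("IFLA_OPERSTATE"),
                  link.get_attr("IFLA_ADDRESS"))
            # e.g. tapced2dfbe-61 UP fa:16:3e:7d:6f:65, as in the dump
    finally:
        ns.close()
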
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.693 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1b844c06-db3f-45ac-b2cf-8d727ba88ee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.745 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[21c13646-3712-491b-b899-84ddfb606e64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.746 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapced2dfbe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.746 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.747 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapced2dfbe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.749 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:37 compute-1 kernel: tapced2dfbe-60: entered promiscuous mode
Dec 06 07:08:37 compute-1 NetworkManager[49031]: <info>  [1765004917.7519] manager: (tapced2dfbe-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.751 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapced2dfbe-60, col_values=(('external_ids', {'iface-id': 'fc634c0d-107f-49bb-9d7c-acad38858186'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
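
Taken together, the three ovsdbapp transactions above re-plug the tap device: drop tapced2dfbe-60 from br-ex if present, add it to br-int, and set external_ids:iface-id to the Neutron port UUID so ovn-controller can (re)bind the logical port; the 'Releasing lport' line just below is the other side of that handover. The equivalent manual sequence with ovs-vsctl, sketched via subprocess:

    import subprocess

    tap = "tapced2dfbe-60"
    iface_id = "fc634c0d-107f-49bb-9d7c-acad38858186"  # Neutron port UUID

    # The same three steps the agent ran as ovsdbapp transactions.
    for cmd in (
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", tap],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", tap],
        ["ovs-vsctl", "set", "Interface", tap,
         f"external_ids:iface-id={iface_id}"],
    ):
        subprocess.run(cmd, check=True)
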
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.753 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:37 compute-1 ovn_controller[130279]: 2025-12-06T07:08:37Z|00156|binding|INFO|Releasing lport fc634c0d-107f-49bb-9d7c-acad38858186 from this chassis (sb_readonly=0)
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.768 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.769 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ced2dfbe-6872-45ca-a2ac-dc0d6b713369.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ced2dfbe-6872-45ca-a2ac-dc0d6b713369.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.770 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[75dfca53-f442-4387-86c6-78ca681fd33a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.771 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-ced2dfbe-6872-45ca-a2ac-dc0d6b713369
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/ced2dfbe-6872-45ca-a2ac-dc0d6b713369.pid.haproxy
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID ced2dfbe-6872-45ca-a2ac-dc0d6b713369
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:37.772 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369', 'env', 'PROCESS_TAG=haproxy-ced2dfbe-6872-45ca-a2ac-dc0d6b713369', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ced2dfbe-6872-45ca-a2ac-dc0d6b713369.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
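
The create_process line above is the actual launch: sudo plus neutron-rootwrap for privilege escalation, ip netns exec to enter the ovnmeta- namespace, an env wrapper setting PROCESS_TAG (which the kill scripts later match on), and haproxy pointed at the config just rendered. Stripped of the rootwrap layer, the same invocation looks like this (sketch; must run as root, which is why the agent goes through rootwrap instead):

    import subprocess

    uuid = "ced2dfbe-6872-45ca-a2ac-dc0d6b713369"  # network ID from the log
    subprocess.run(
        ["ip", "netns", "exec", f"ovnmeta-{uuid}",
         "env", f"PROCESS_TAG=haproxy-{uuid}",
         "haproxy", "-f",
         f"/var/lib/neutron/ovn-metadata-proxy/{uuid}.conf"],
        check=True,  # haproxy backgrounds itself ('daemon' in the config)
    )
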
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.877 226109 DEBUG oslo_concurrency.lockutils [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.878 226109 DEBUG oslo_concurrency.lockutils [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.897 226109 DEBUG nova.objects.instance [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'flavor' on Instance uuid e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.944 226109 DEBUG oslo_concurrency.lockutils [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.952 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004917.9517212, f9624722-30f1-4e59-ae6a-347ad7a8a4fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.952 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] VM Started (Lifecycle Event)
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.972 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.976 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004917.951961, f9624722-30f1-4e59-ae6a-347ad7a8a4fd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:08:37 compute-1 nova_compute[226101]: 2025-12-06 07:08:37.977 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] VM Paused (Lifecycle Event)
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.000 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.003 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.032 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] During sync_power_state the instance has a pending task (spawning). Skip.
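
The Synchronizing/Skip pairs above compare the DB power_state with what libvirt reports; the numeric codes come from nova's power_state module. Here 0 vs 3 and then 0 vs 1 mean NOSTATE in the DB while the guest is briefly PAUSED and then RUNNING during spawn, and the sync is skipped because task_state is still 'spawning'. For reference, the constants (an illustrative copy of nova/compute/power_state.py values):

    # Illustrative copy of nova/compute/power_state.py.
    NOSTATE = 0x00    # DB value before the guest first reports in
    RUNNING = 0x01    # reported after the Resumed lifecycle event
    PAUSED = 0x03     # reported while libvirt pauses the guest mid-spawn
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07
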
Dec 06 07:08:38 compute-1 podman[243279]: 2025-12-06 07:08:38.139188729 +0000 UTC m=+0.043364153 container create 9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.174 226109 DEBUG nova.compute.manager [req-3971e2bc-640a-41d2-be5a-abd6a3599a3c req-d3dffedc-349d-476b-9516-6c661b128a3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Received event network-vif-plugged-41ed45c4-775a-4f28-9048-e796534620db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.175 226109 DEBUG oslo_concurrency.lockutils [req-3971e2bc-640a-41d2-be5a-abd6a3599a3c req-d3dffedc-349d-476b-9516-6c661b128a3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.175 226109 DEBUG oslo_concurrency.lockutils [req-3971e2bc-640a-41d2-be5a-abd6a3599a3c req-d3dffedc-349d-476b-9516-6c661b128a3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.175 226109 DEBUG oslo_concurrency.lockutils [req-3971e2bc-640a-41d2-be5a-abd6a3599a3c req-d3dffedc-349d-476b-9516-6c661b128a3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:38 compute-1 systemd[1]: Started libpod-conmon-9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521.scope.
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.175 226109 DEBUG nova.compute.manager [req-3971e2bc-640a-41d2-be5a-abd6a3599a3c req-d3dffedc-349d-476b-9516-6c661b128a3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Processing event network-vif-plugged-41ed45c4-775a-4f28-9048-e796534620db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.176 226109 DEBUG nova.compute.manager [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.179 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004918.1788738, f9624722-30f1-4e59-ae6a-347ad7a8a4fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.179 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] VM Resumed (Lifecycle Event)
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.182 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.184 226109 INFO nova.virt.libvirt.driver [-] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Instance spawned successfully.
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.185 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:08:38 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:08:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/511b0d75f6a702174f819041c46ba902b9f87c680fd8ae9c077e0efc81b2de6b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.204 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.209 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:08:38 compute-1 podman[243279]: 2025-12-06 07:08:38.115851789 +0000 UTC m=+0.020027243 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.216 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.217 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.217 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:38 compute-1 podman[243279]: 2025-12-06 07:08:38.217953718 +0000 UTC m=+0.122129152 container init 9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.218 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.218 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.219 226109 DEBUG nova.virt.libvirt.driver [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:38 compute-1 podman[243279]: 2025-12-06 07:08:38.223049425 +0000 UTC m=+0.127224839 container start 9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:08:38 compute-1 neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369[243294]: [NOTICE]   (243298) : New worker (243300) forked
Dec 06 07:08:38 compute-1 neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369[243294]: [NOTICE]   (243298) : Loading success.
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.252 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:08:38 compute-1 ceph-mon[81689]: pgmap v1448: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 128 op/s
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.293 226109 DEBUG oslo_concurrency.lockutils [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.293 226109 DEBUG oslo_concurrency.lockutils [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.294 226109 INFO nova.compute.manager [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Attaching volume 86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6 to /dev/vdb
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.312 226109 INFO nova.compute.manager [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Took 8.13 seconds to spawn the instance on the hypervisor.
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.312 226109 DEBUG nova.compute.manager [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.388 226109 INFO nova.compute.manager [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Took 9.09 seconds to build instance.
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.406 226109 DEBUG oslo_concurrency.lockutils [None req-f9a7daf1-56ce-4651-8c8d-0953ac026491 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.504 226109 DEBUG os_brick.utils [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.506 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.515 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.516 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[d22e81d0-9bf0-4a56-9c37-f84a31106ce6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.520 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.527 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.528 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[6401dbef-f3ff-419c-8b65-eb6f807e2501]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.529 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.535 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.535 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[9bbc5668-d1bb-440b-9043-a9e167e6ad7e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.536 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5c18dd-6d8f-4ee8-81d9-8ce8ec6069fa]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.536 226109 DEBUG oslo_concurrency.processutils [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.555 226109 DEBUG oslo_concurrency.processutils [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.557 226109 DEBUG os_brick.initiator.connectors.lightos [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.558 226109 DEBUG os_brick.initiator.connectors.lightos [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.558 226109 DEBUG os_brick.initiator.connectors.lightos [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.558 226109 DEBUG os_brick.utils [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] <== get_connector_properties: return (52ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
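
The ==>/<== pair brackets a single call to os-brick's get_connector_properties: the multipathd, initiatorname.iscsi, findmnt and nvme probes in between are how it assembles the connector dict (initiator IQN, host NQN, multipath capability) that nova sends to Cinder when creating the volume attachment. A minimal sketch of the same call, with the arguments from the trace:

    from os_brick.initiator import connector

    # Arguments taken from the traced call above.
    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.101",
        multipath=True,
        enforce_multipath=True,
        host="compute-1.ctlplane.example.com",
    )
    print(props["initiator"])  # iqn.1994-05.com.redhat:... per the log
    print(props["nqn"])        # host NQN used for NVMe-oF connectors
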
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.558 226109 DEBUG nova.virt.block_device [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Updating existing volume attachment record: bed1f198-b9f2-4672-8da5-e9e95a574eba _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:08:38 compute-1 nova_compute[226101]: 2025-12-06 07:08:38.793 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:39.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:39.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
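
The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, repeating roughly every two seconds throughout this section, are consistent with load-balancer liveness probes against radosgw's beast frontend rather than real client traffic. A sketch of one such probe; the endpoint and port are assumptions, since the log records only the server side:

    import http.client

    # Assumed RGW endpoint on this host; the port is a guess.
    conn = http.client.HTTPConnection("192.168.122.101", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the log shows these probes getting 200
    conn.close()
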
Dec 06 07:08:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4017702487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:39 compute-1 nova_compute[226101]: 2025-12-06 07:08:39.442 226109 DEBUG nova.objects.instance [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'flavor' on Instance uuid e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:39 compute-1 nova_compute[226101]: 2025-12-06 07:08:39.466 226109 DEBUG nova.virt.libvirt.driver [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Attempting to attach volume 86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
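
The warning above is nova noting that discard support was requested (discard="unmap" in the XML below) but the volume is attached with target_bus = virtio, where it says trim commands will not be issued. The usual way to get discard through is to request a virtio-scsi attachment via image properties, e.g. (illustrative; these are standard hw_* properties, but whether they apply here is an assumption):

    # Glance image properties requesting disks on a virtio-scsi controller
    # (bus 'scsi'), where discard/unmap is passed through. Set with e.g.:
    #   openstack image set --property hw_disk_bus=scsi \
    #       --property hw_scsi_model=virtio-scsi <image>
    properties = {
        "hw_scsi_model": "virtio-scsi",
        "hw_disk_bus": "scsi",
    }
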
Dec 06 07:08:39 compute-1 nova_compute[226101]: 2025-12-06 07:08:39.470 226109 DEBUG nova.virt.libvirt.guest [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:08:39 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:08:39 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6">
Dec 06 07:08:39 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:08:39 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:08:39 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:08:39 compute-1 nova_compute[226101]:   </source>
Dec 06 07:08:39 compute-1 nova_compute[226101]:   <auth username="openstack">
Dec 06 07:08:39 compute-1 nova_compute[226101]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:08:39 compute-1 nova_compute[226101]:   </auth>
Dec 06 07:08:39 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:08:39 compute-1 nova_compute[226101]:   <serial>86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6</serial>
Dec 06 07:08:39 compute-1 nova_compute[226101]:   <shareable/>
Dec 06 07:08:39 compute-1 nova_compute[226101]: </disk>
Dec 06 07:08:39 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:08:39 compute-1 nova_compute[226101]: 2025-12-06 07:08:39.611 226109 DEBUG nova.virt.libvirt.driver [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:08:39 compute-1 nova_compute[226101]: 2025-12-06 07:08:39.612 226109 DEBUG nova.virt.libvirt.driver [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:08:39 compute-1 nova_compute[226101]: 2025-12-06 07:08:39.612 226109 DEBUG nova.virt.libvirt.driver [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:08:39 compute-1 nova_compute[226101]: 2025-12-06 07:08:39.612 226109 DEBUG nova.virt.libvirt.driver [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] No VIF found with MAC fa:16:3e:8c:ca:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:08:39 compute-1 nova_compute[226101]: 2025-12-06 07:08:39.838 226109 DEBUG oslo_concurrency.lockutils [None req-82da4ab4-1d9f-453a-8d9b-560ca6a9bc58 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:40 compute-1 nova_compute[226101]: 2025-12-06 07:08:40.331 226109 DEBUG nova.compute.manager [req-de886f95-6263-4381-8eaa-728082b46ebf req-0ce25590-38fd-4076-bba0-1ec76b6526d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Received event network-vif-plugged-41ed45c4-775a-4f28-9048-e796534620db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:08:40 compute-1 nova_compute[226101]: 2025-12-06 07:08:40.331 226109 DEBUG oslo_concurrency.lockutils [req-de886f95-6263-4381-8eaa-728082b46ebf req-0ce25590-38fd-4076-bba0-1ec76b6526d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:40 compute-1 nova_compute[226101]: 2025-12-06 07:08:40.332 226109 DEBUG oslo_concurrency.lockutils [req-de886f95-6263-4381-8eaa-728082b46ebf req-0ce25590-38fd-4076-bba0-1ec76b6526d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:40 compute-1 nova_compute[226101]: 2025-12-06 07:08:40.332 226109 DEBUG oslo_concurrency.lockutils [req-de886f95-6263-4381-8eaa-728082b46ebf req-0ce25590-38fd-4076-bba0-1ec76b6526d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:40 compute-1 nova_compute[226101]: 2025-12-06 07:08:40.332 226109 DEBUG nova.compute.manager [req-de886f95-6263-4381-8eaa-728082b46ebf req-0ce25590-38fd-4076-bba0-1ec76b6526d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] No waiting events found dispatching network-vif-plugged-41ed45c4-775a-4f28-9048-e796534620db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:08:40 compute-1 nova_compute[226101]: 2025-12-06 07:08:40.332 226109 WARNING nova.compute.manager [req-de886f95-6263-4381-8eaa-728082b46ebf req-0ce25590-38fd-4076-bba0-1ec76b6526d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Received unexpected event network-vif-plugged-41ed45c4-775a-4f28-9048-e796534620db for instance with vm_state active and task_state None.
Dec 06 07:08:40 compute-1 ceph-mon[81689]: pgmap v1449: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 128 op/s
Dec 06 07:08:41 compute-1 nova_compute[226101]: 2025-12-06 07:08:41.024 226109 DEBUG nova.compute.manager [None req-289424b9-0708-42d1-90e2-1cd9360a5493 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:08:41 compute-1 nova_compute[226101]: 2025-12-06 07:08:41.075 226109 INFO nova.compute.manager [None req-289424b9-0708-42d1-90e2-1cd9360a5493 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] instance snapshotting
Dec 06 07:08:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:08:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:41.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:08:41 compute-1 nova_compute[226101]: 2025-12-06 07:08:41.141 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:41.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:41 compute-1 nova_compute[226101]: 2025-12-06 07:08:41.360 226109 INFO nova.virt.libvirt.driver [None req-289424b9-0708-42d1-90e2-1cd9360a5493 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Beginning live snapshot process
Dec 06 07:08:41 compute-1 nova_compute[226101]: 2025-12-06 07:08:41.499 226109 DEBUG nova.virt.libvirt.imagebackend [None req-289424b9-0708-42d1-90e2-1cd9360a5493 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:08:41 compute-1 nova_compute[226101]: 2025-12-06 07:08:41.920 226109 DEBUG nova.storage.rbd_utils [None req-289424b9-0708-42d1-90e2-1cd9360a5493 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] creating snapshot(c27895ae44274322b5d74b82372238ed) on rbd image(f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:08:42 compute-1 ceph-mon[81689]: pgmap v1450: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 210 op/s
Dec 06 07:08:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e184 e184: 3 total, 3 up, 3 in
Dec 06 07:08:42 compute-1 nova_compute[226101]: 2025-12-06 07:08:42.385 226109 DEBUG nova.storage.rbd_utils [None req-289424b9-0708-42d1-90e2-1cd9360a5493 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] cloning vms/f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk@c27895ae44274322b5d74b82372238ed to images/a2519fc1-5a48-4adb-858b-30c6d08d1a53 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:08:42 compute-1 nova_compute[226101]: 2025-12-06 07:08:42.751 226109 DEBUG nova.storage.rbd_utils [None req-289424b9-0708-42d1-90e2-1cd9360a5493 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] flattening images/a2519fc1-5a48-4adb-858b-30c6d08d1a53 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:08:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:43.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:43.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:43 compute-1 nova_compute[226101]: 2025-12-06 07:08:43.796 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e185 e185: 3 total, 3 up, 3 in
Dec 06 07:08:44 compute-1 ceph-mon[81689]: osdmap e184: 3 total, 3 up, 3 in
Dec 06 07:08:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2836448693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:44 compute-1 nova_compute[226101]: 2025-12-06 07:08:44.257 226109 DEBUG nova.storage.rbd_utils [None req-289424b9-0708-42d1-90e2-1cd9360a5493 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] removing snapshot(c27895ae44274322b5d74b82372238ed) on rbd image(f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:08:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:45.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e186 e186: 3 total, 3 up, 3 in
Dec 06 07:08:45 compute-1 ceph-mon[81689]: pgmap v1452: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 22 KiB/s wr, 245 op/s
Dec 06 07:08:45 compute-1 ceph-mon[81689]: osdmap e185: 3 total, 3 up, 3 in
Dec 06 07:08:45 compute-1 nova_compute[226101]: 2025-12-06 07:08:45.262 226109 DEBUG nova.storage.rbd_utils [None req-289424b9-0708-42d1-90e2-1cd9360a5493 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] creating snapshot(snap) on rbd image(a2519fc1-5a48-4adb-858b-30c6d08d1a53) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:08:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:45.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:46 compute-1 nova_compute[226101]: 2025-12-06 07:08:46.143 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e187 e187: 3 total, 3 up, 3 in
Dec 06 07:08:46 compute-1 ceph-mon[81689]: pgmap v1454: 305 pgs: 305 active+clean; 312 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.0 MiB/s wr, 233 op/s
Dec 06 07:08:46 compute-1 ceph-mon[81689]: osdmap e186: 3 total, 3 up, 3 in
Dec 06 07:08:46 compute-1 nova_compute[226101]: 2025-12-06 07:08:46.622 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:47.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:47.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:47 compute-1 ceph-mon[81689]: osdmap e187: 3 total, 3 up, 3 in
Dec 06 07:08:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e188 e188: 3 total, 3 up, 3 in
Dec 06 07:08:48 compute-1 nova_compute[226101]: 2025-12-06 07:08:48.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:48 compute-1 nova_compute[226101]: 2025-12-06 07:08:48.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:48 compute-1 nova_compute[226101]: 2025-12-06 07:08:48.612 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:48 compute-1 nova_compute[226101]: 2025-12-06 07:08:48.613 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:48 compute-1 nova_compute[226101]: 2025-12-06 07:08:48.613 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:48 compute-1 nova_compute[226101]: 2025-12-06 07:08:48.613 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:08:48 compute-1 nova_compute[226101]: 2025-12-06 07:08:48.613 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:48 compute-1 nova_compute[226101]: 2025-12-06 07:08:48.690 226109 INFO nova.virt.libvirt.driver [None req-289424b9-0708-42d1-90e2-1cd9360a5493 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Snapshot image upload complete
Dec 06 07:08:48 compute-1 nova_compute[226101]: 2025-12-06 07:08:48.691 226109 INFO nova.compute.manager [None req-289424b9-0708-42d1-90e2-1cd9360a5493 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Took 7.62 seconds to snapshot the instance on the hypervisor.
Dec 06 07:08:48 compute-1 nova_compute[226101]: 2025-12-06 07:08:48.797 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:08:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2194169652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.068 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:49.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.141 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.141 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:08:49 compute-1 ceph-mon[81689]: pgmap v1457: 305 pgs: 305 active+clean; 339 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 158 op/s
Dec 06 07:08:49 compute-1 ceph-mon[81689]: osdmap e188: 3 total, 3 up, 3 in
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.145 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.146 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.146 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.156 226109 DEBUG oslo_concurrency.lockutils [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.156 226109 DEBUG oslo_concurrency.lockutils [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.175 226109 INFO nova.compute.manager [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Detaching volume 86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.341 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.342 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4348MB free_disk=20.85525894165039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.342 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.342 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:49.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.389 226109 INFO nova.virt.block_device [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Attempting to driver detach volume 86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6 from mountpoint /dev/vdb
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.401 226109 DEBUG nova.virt.libvirt.driver [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Attempting to detach device vdb from instance e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.401 226109 DEBUG nova.virt.libvirt.guest [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6">
Dec 06 07:08:49 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   </source>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <serial>86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6</serial>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <shareable/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]: </disk>
Dec 06 07:08:49 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.410 226109 INFO nova.virt.libvirt.driver [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Successfully detached device vdb from instance e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e from the persistent domain config.
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.411 226109 DEBUG nova.virt.libvirt.driver [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.411 226109 DEBUG nova.virt.libvirt.guest [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6">
Dec 06 07:08:49 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   </source>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <serial>86c40c18-0a1f-4cdf-8dd9-d48abd4ceed6</serial>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <shareable/>
Dec 06 07:08:49 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:08:49 compute-1 nova_compute[226101]: </disk>
Dec 06 07:08:49 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.431 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.431 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance f9624722-30f1-4e59-ae6a-347ad7a8a4fd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.432 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.432 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.479 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.780 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765004929.780437, e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.782 226109 DEBUG nova.virt.libvirt.driver [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.785 226109 INFO nova.virt.libvirt.driver [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Successfully detached device vdb from instance e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e from the live domain config.
Dec 06 07:08:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:08:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1698194847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.992 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:49 compute-1 nova_compute[226101]: 2025-12-06 07:08:49.998 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:08:50 compute-1 nova_compute[226101]: 2025-12-06 07:08:50.017 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:08:50 compute-1 nova_compute[226101]: 2025-12-06 07:08:50.038 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:08:50 compute-1 nova_compute[226101]: 2025-12-06 07:08:50.039 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:50 compute-1 nova_compute[226101]: 2025-12-06 07:08:50.104 226109 DEBUG nova.objects.instance [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'flavor' on Instance uuid e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:50 compute-1 nova_compute[226101]: 2025-12-06 07:08:50.142 226109 DEBUG oslo_concurrency.lockutils [None req-2eb4a4c0-f553-43c6-a54a-1f5897934e0c b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2194169652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:50 compute-1 ceph-mon[81689]: pgmap v1459: 305 pgs: 305 active+clean; 339 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Dec 06 07:08:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1698194847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:51 compute-1 nova_compute[226101]: 2025-12-06 07:08:51.033 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:51 compute-1 nova_compute[226101]: 2025-12-06 07:08:51.034 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:51 compute-1 nova_compute[226101]: 2025-12-06 07:08:51.034 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:08:51 compute-1 nova_compute[226101]: 2025-12-06 07:08:51.034 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:08:51 compute-1 ovn_controller[130279]: 2025-12-06T07:08:51Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:6f:da 10.100.0.12
Dec 06 07:08:51 compute-1 ovn_controller[130279]: 2025-12-06T07:08:51Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:6f:da 10.100.0.12
Dec 06 07:08:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:08:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:51.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:08:51 compute-1 nova_compute[226101]: 2025-12-06 07:08:51.145 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:51.192 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:08:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:51.193 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:08:51 compute-1 nova_compute[226101]: 2025-12-06 07:08:51.193 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/212881929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:51 compute-1 nova_compute[226101]: 2025-12-06 07:08:51.263 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.264350) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931264383, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2488, "num_deletes": 254, "total_data_size": 5747271, "memory_usage": 5823520, "flush_reason": "Manual Compaction"}
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Dec 06 07:08:51 compute-1 nova_compute[226101]: 2025-12-06 07:08:51.268 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:08:51 compute-1 nova_compute[226101]: 2025-12-06 07:08:51.269 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:08:51 compute-1 nova_compute[226101]: 2025-12-06 07:08:51.269 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931291591, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3744233, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28673, "largest_seqno": 31156, "table_properties": {"data_size": 3734041, "index_size": 6494, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21767, "raw_average_key_size": 20, "raw_value_size": 3713428, "raw_average_value_size": 3567, "num_data_blocks": 282, "num_entries": 1041, "num_filter_entries": 1041, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004735, "oldest_key_time": 1765004735, "file_creation_time": 1765004931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 27361 microseconds, and 11027 cpu microseconds.
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.291710) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3744233 bytes OK
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.291765) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.296250) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.296293) EVENT_LOG_v1 {"time_micros": 1765004931296284, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.296315) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5736273, prev total WAL file size 5782707, number of live WAL files 2.
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.297713) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3656KB)], [57(7803KB)]
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931297777, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 11734928, "oldest_snapshot_seqno": -1}
Dec 06 07:08:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e189 e189: 3 total, 3 up, 3 in
Dec 06 07:08:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:51.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5974 keys, 9720483 bytes, temperature: kUnknown
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931391315, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 9720483, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9680391, "index_size": 24046, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 153167, "raw_average_key_size": 25, "raw_value_size": 9572766, "raw_average_value_size": 1602, "num_data_blocks": 965, "num_entries": 5974, "num_filter_entries": 5974, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765004931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.391562) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9720483 bytes
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.394092) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.4 rd, 103.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6499, records dropped: 525 output_compression: NoCompression
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.394135) EVENT_LOG_v1 {"time_micros": 1765004931394102, "job": 34, "event": "compaction_finished", "compaction_time_micros": 93601, "compaction_time_cpu_micros": 36768, "output_level": 6, "num_output_files": 1, "total_output_size": 9720483, "num_input_records": 6499, "num_output_records": 5974, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931395290, "job": 34, "event": "table_file_deletion", "file_number": 59}
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931397244, "job": 34, "event": "table_file_deletion", "file_number": 57}
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.297644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.397276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.397280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.397282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.397284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:08:51.397286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e190 e190: 3 total, 3 up, 3 in
Dec 06 07:08:52 compute-1 nova_compute[226101]: 2025-12-06 07:08:52.586 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Updating instance_info_cache with network_info: [{"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:08:52 compute-1 nova_compute[226101]: 2025-12-06 07:08:52.607 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:08:52 compute-1 nova_compute[226101]: 2025-12-06 07:08:52.607 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:08:52 compute-1 nova_compute[226101]: 2025-12-06 07:08:52.607 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:52 compute-1 nova_compute[226101]: 2025-12-06 07:08:52.607 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:52 compute-1 nova_compute[226101]: 2025-12-06 07:08:52.607 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:52 compute-1 nova_compute[226101]: 2025-12-06 07:08:52.607 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:08:52 compute-1 ceph-mon[81689]: pgmap v1460: 305 pgs: 305 active+clean; 360 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.2 MiB/s wr, 234 op/s
Dec 06 07:08:52 compute-1 ceph-mon[81689]: osdmap e189: 3 total, 3 up, 3 in
Dec 06 07:08:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3602284135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2587063817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:53.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:53.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:53 compute-1 nova_compute[226101]: 2025-12-06 07:08:53.798 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:53 compute-1 ceph-mon[81689]: osdmap e190: 3 total, 3 up, 3 in
Dec 06 07:08:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1084889315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4138188771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:54 compute-1 nova_compute[226101]: 2025-12-06 07:08:54.156 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:54 compute-1 ceph-mon[81689]: pgmap v1463: 305 pgs: 305 active+clean; 357 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 895 KiB/s rd, 5.4 MiB/s wr, 242 op/s
Dec 06 07:08:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2408473293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:55.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.221 226109 DEBUG nova.compute.manager [None req-d2b7a8f6-8d4b-449c-9527-93a16cfb6b84 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.265 226109 DEBUG oslo_concurrency.lockutils [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.265 226109 DEBUG oslo_concurrency.lockutils [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.265 226109 DEBUG oslo_concurrency.lockutils [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.266 226109 DEBUG oslo_concurrency.lockutils [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.266 226109 DEBUG oslo_concurrency.lockutils [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.267 226109 INFO nova.compute.manager [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Terminating instance
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.268 226109 DEBUG nova.compute.manager [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.270 226109 INFO nova.compute.manager [None req-d2b7a8f6-8d4b-449c-9527-93a16cfb6b84 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] instance snapshotting
Dec 06 07:08:55 compute-1 kernel: tapbf5bb400-e5 (unregistering): left promiscuous mode
Dec 06 07:08:55 compute-1 NetworkManager[49031]: <info>  [1765004935.3485] device (tapbf5bb400-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.357 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:55 compute-1 ovn_controller[130279]: 2025-12-06T07:08:55Z|00157|binding|INFO|Releasing lport bf5bb400-e5e8-47f6-bb49-2b370d100cd3 from this chassis (sb_readonly=0)
Dec 06 07:08:55 compute-1 ovn_controller[130279]: 2025-12-06T07:08:55Z|00158|binding|INFO|Setting lport bf5bb400-e5e8-47f6-bb49-2b370d100cd3 down in Southbound
Dec 06 07:08:55 compute-1 ovn_controller[130279]: 2025-12-06T07:08:55Z|00159|binding|INFO|Removing iface tapbf5bb400-e5 ovn-installed in OVS
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.359 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:55.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:55.371 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:ca:f4 10.100.0.12'], port_security=['fa:16:3e:8c:ca:f4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c2236fb6443441618c69ad660b0932dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1faad6c4-429b-4e24-95b0-2a5a82e4cdc8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed024f0c-371e-4b3f-951f-85c5c8e0d63d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=bf5bb400-e5e8-47f6-bb49-2b370d100cd3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:08:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:55.372 139580 INFO neutron.agent.ovn.metadata.agent [-] Port bf5bb400-e5e8-47f6-bb49-2b370d100cd3 in datapath 81801c54-c543-4e0f-94c1-cdf24ddb6bce unbound from our chassis
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.372 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:55.373 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81801c54-c543-4e0f-94c1-cdf24ddb6bce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:08:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:55.374 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0abe416d-4a6d-4484-831b-13cf02a3f781]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:55.375 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce namespace which is not needed anymore
Dec 06 07:08:55 compute-1 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Dec 06 07:08:55 compute-1 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002a.scope: Consumed 17.293s CPU time.
Dec 06 07:08:55 compute-1 systemd-machined[190302]: Machine qemu-22-instance-0000002a terminated.
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.488 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.494 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.503 226109 INFO nova.virt.libvirt.driver [-] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Instance destroyed successfully.
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.504 226109 DEBUG nova.objects.instance [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lazy-loading 'resources' on Instance uuid e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.519 226109 DEBUG nova.virt.libvirt.vif [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:06:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1113807590',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1113807590',id=42,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxNM3sYiKzp1+NUb/2De95mJs+SH1m1mxJqrRUBmO5hfmaPmDEbehrvD5V1R8B784do2XvfbohRCGUURA5SWZg3+xgRCk33hYLHpa3KVfm+1jfWf3JbwWLhSjPluoLz2w==',key_name='tempest-keypair-1560414487',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:07:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c2236fb6443441618c69ad660b0932dd',ramdisk_id='',reservation_id='r-0udoz3m9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1086119788-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:07:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7c2f91caf1444c68dcf6bd66966d67e',uuid=e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.519 226109 DEBUG nova.network.os_vif_util [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converting VIF {"id": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "address": "fa:16:3e:8c:ca:f4", "network": {"id": "81801c54-c543-4e0f-94c1-cdf24ddb6bce", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1315951976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2236fb6443441618c69ad660b0932dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5bb400-e5", "ovs_interfaceid": "bf5bb400-e5e8-47f6-bb49-2b370d100cd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.520 226109 DEBUG nova.network.os_vif_util [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:ca:f4,bridge_name='br-int',has_traffic_filtering=True,id=bf5bb400-e5e8-47f6-bb49-2b370d100cd3,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5bb400-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.521 226109 DEBUG os_vif [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:ca:f4,bridge_name='br-int',has_traffic_filtering=True,id=bf5bb400-e5e8-47f6-bb49-2b370d100cd3,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5bb400-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.522 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.523 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5bb400-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.525 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.527 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.529 226109 INFO os_vif [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:ca:f4,bridge_name='br-int',has_traffic_filtering=True,id=bf5bb400-e5e8-47f6-bb49-2b370d100cd3,network=Network(81801c54-c543-4e0f-94c1-cdf24ddb6bce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5bb400-e5')
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.558 226109 INFO nova.virt.libvirt.driver [None req-d2b7a8f6-8d4b-449c-9527-93a16cfb6b84 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Beginning live snapshot process
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.638 226109 DEBUG nova.compute.manager [req-3d84ebce-c19d-499a-98a5-276ba1f223e8 req-c7ff5ce2-bbe9-47f8-a7a0-6a650f98d1a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Received event network-vif-unplugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.638 226109 DEBUG oslo_concurrency.lockutils [req-3d84ebce-c19d-499a-98a5-276ba1f223e8 req-c7ff5ce2-bbe9-47f8-a7a0-6a650f98d1a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.638 226109 DEBUG oslo_concurrency.lockutils [req-3d84ebce-c19d-499a-98a5-276ba1f223e8 req-c7ff5ce2-bbe9-47f8-a7a0-6a650f98d1a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.639 226109 DEBUG oslo_concurrency.lockutils [req-3d84ebce-c19d-499a-98a5-276ba1f223e8 req-c7ff5ce2-bbe9-47f8-a7a0-6a650f98d1a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.639 226109 DEBUG nova.compute.manager [req-3d84ebce-c19d-499a-98a5-276ba1f223e8 req-c7ff5ce2-bbe9-47f8-a7a0-6a650f98d1a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] No waiting events found dispatching network-vif-unplugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.639 226109 DEBUG nova.compute.manager [req-3d84ebce-c19d-499a-98a5-276ba1f223e8 req-c7ff5ce2-bbe9-47f8-a7a0-6a650f98d1a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Received event network-vif-unplugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.694 226109 DEBUG nova.virt.libvirt.imagebackend [None req-d2b7a8f6-8d4b-449c-9527-93a16cfb6b84 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:08:55 compute-1 nova_compute[226101]: 2025-12-06 07:08:55.938 226109 DEBUG nova.storage.rbd_utils [None req-d2b7a8f6-8d4b-449c-9527-93a16cfb6b84 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] creating snapshot(a78019ea249045289a2e0959751ad015) on rbd image(f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:08:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:08:56.194 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:57.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:57 compute-1 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[242114]: [NOTICE]   (242118) : haproxy version is 2.8.14-c23fe91
Dec 06 07:08:57 compute-1 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[242114]: [NOTICE]   (242118) : path to executable is /usr/sbin/haproxy
Dec 06 07:08:57 compute-1 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[242114]: [WARNING]  (242118) : Exiting Master process...
Dec 06 07:08:57 compute-1 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[242114]: [ALERT]    (242118) : Current worker (242120) exited with code 143 (Terminated)
Dec 06 07:08:57 compute-1 neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce[242114]: [WARNING]  (242118) : All workers exited. Exiting... (0)
Dec 06 07:08:57 compute-1 systemd[1]: libpod-3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a.scope: Deactivated successfully.
Dec 06 07:08:57 compute-1 podman[243546]: 2025-12-06 07:08:57.205857393 +0000 UTC m=+1.744258641 container died 3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:08:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:57.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:57 compute-1 nova_compute[226101]: 2025-12-06 07:08:57.719 226109 DEBUG nova.compute.manager [req-ebc9a71c-5a02-482f-b9bc-219dae3155c9 req-d462df7e-abfc-41fb-8ae7-86c809fde53c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Received event network-vif-plugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:08:57 compute-1 nova_compute[226101]: 2025-12-06 07:08:57.720 226109 DEBUG oslo_concurrency.lockutils [req-ebc9a71c-5a02-482f-b9bc-219dae3155c9 req-d462df7e-abfc-41fb-8ae7-86c809fde53c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:57 compute-1 nova_compute[226101]: 2025-12-06 07:08:57.720 226109 DEBUG oslo_concurrency.lockutils [req-ebc9a71c-5a02-482f-b9bc-219dae3155c9 req-d462df7e-abfc-41fb-8ae7-86c809fde53c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:57 compute-1 nova_compute[226101]: 2025-12-06 07:08:57.720 226109 DEBUG oslo_concurrency.lockutils [req-ebc9a71c-5a02-482f-b9bc-219dae3155c9 req-d462df7e-abfc-41fb-8ae7-86c809fde53c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:57 compute-1 nova_compute[226101]: 2025-12-06 07:08:57.720 226109 DEBUG nova.compute.manager [req-ebc9a71c-5a02-482f-b9bc-219dae3155c9 req-d462df7e-abfc-41fb-8ae7-86c809fde53c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] No waiting events found dispatching network-vif-plugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:08:57 compute-1 nova_compute[226101]: 2025-12-06 07:08:57.720 226109 WARNING nova.compute.manager [req-ebc9a71c-5a02-482f-b9bc-219dae3155c9 req-d462df7e-abfc-41fb-8ae7-86c809fde53c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Received unexpected event network-vif-plugged-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 for instance with vm_state active and task_state deleting.
Dec 06 07:08:58 compute-1 nova_compute[226101]: 2025-12-06 07:08:58.800 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:59.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e191 e191: 3 total, 3 up, 3 in
Dec 06 07:08:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1692578340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:59 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a-userdata-shm.mount: Deactivated successfully.
Dec 06 07:08:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-364a49b1693336ba2a3227da2c0cf613ab90c6ac2678b9a21e84a05627f3cd3c-merged.mount: Deactivated successfully.
Dec 06 07:08:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:08:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:59.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:59 compute-1 nova_compute[226101]: 2025-12-06 07:08:59.487 226109 DEBUG nova.storage.rbd_utils [None req-d2b7a8f6-8d4b-449c-9527-93a16cfb6b84 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] cloning vms/f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk@a78019ea249045289a2e0959751ad015 to images/4038f20e-02ea-4a6a-aa3a-eaf0b9450896 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:08:59 compute-1 podman[243546]: 2025-12-06 07:08:59.68381537 +0000 UTC m=+4.222216598 container cleanup 3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:08:59 compute-1 systemd[1]: libpod-conmon-3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a.scope: Deactivated successfully.
Dec 06 07:09:00 compute-1 podman[243688]: 2025-12-06 07:09:00.392235726 +0000 UTC m=+0.685714064 container remove 3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:09:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:00.399 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[601728fe-8d7b-47b1-a107-dae1f324e40f]: (4, ('Sat Dec  6 07:08:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce (3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a)\n3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a\nSat Dec  6 07:08:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce (3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a)\n3b4d95cd7b2d15e6c0417104cac89bab9a25f3997608867c347e55e82295259a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:00.402 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[aec237a7-059d-40c6-a80b-d115cc598d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:00.403 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81801c54-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:00 compute-1 nova_compute[226101]: 2025-12-06 07:09:00.406 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:00 compute-1 kernel: tap81801c54-c0: left promiscuous mode
Dec 06 07:09:00 compute-1 nova_compute[226101]: 2025-12-06 07:09:00.431 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:00.435 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1804da-4a9b-4821-8cdf-69b81c9ad5d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:00.451 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[61a0dbc6-ed1f-4a7a-97d8-8dd7e08f27c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:00.456 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5cd473-7bee-4b24-81e1-73e22106a4ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:00.479 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa94ae0-805e-409c-bbb6-0e3281f5bcfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511015, 'reachable_time': 27727, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243704, 'error': None, 'target': 'ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:00.481 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81801c54-c543-4e0f-94c1-cdf24ddb6bce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:09:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:00.482 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[57fbe222-1dad-42cc-accd-2e7a2eab1fce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:00 compute-1 systemd[1]: run-netns-ovnmeta\x2d81801c54\x2dc543\x2d4e0f\x2d94c1\x2dcdf24ddb6bce.mount: Deactivated successfully.
Dec 06 07:09:00 compute-1 nova_compute[226101]: 2025-12-06 07:09:00.524 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:00 compute-1 ceph-mon[81689]: pgmap v1464: 305 pgs: 305 active+clean; 311 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.7 MiB/s wr, 322 op/s
Dec 06 07:09:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2567383709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4167133663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:00 compute-1 ceph-mon[81689]: pgmap v1465: 305 pgs: 305 active+clean; 313 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 8.0 MiB/s wr, 359 op/s
Dec 06 07:09:00 compute-1 ceph-mon[81689]: pgmap v1467: 305 pgs: 305 active+clean; 313 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 616 KiB/s rd, 5.9 MiB/s wr, 226 op/s
Dec 06 07:09:00 compute-1 ceph-mon[81689]: osdmap e191: 3 total, 3 up, 3 in
Dec 06 07:09:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2855757156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:00 compute-1 nova_compute[226101]: 2025-12-06 07:09:00.598 226109 DEBUG nova.storage.rbd_utils [None req-d2b7a8f6-8d4b-449c-9527-93a16cfb6b84 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] flattening images/4038f20e-02ea-4a6a-aa3a-eaf0b9450896 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:09:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:01.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:01.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:01.626 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:01.627 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:01.627 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:02 compute-1 podman[243731]: 2025-12-06 07:09:02.141093219 +0000 UTC m=+0.120687582 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 06 07:09:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2898849422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:02 compute-1 nova_compute[226101]: 2025-12-06 07:09:02.442 226109 INFO nova.virt.libvirt.driver [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Deleting instance files /var/lib/nova/instances/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_del
Dec 06 07:09:02 compute-1 nova_compute[226101]: 2025-12-06 07:09:02.443 226109 INFO nova.virt.libvirt.driver [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Deletion of /var/lib/nova/instances/e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e_del complete
Dec 06 07:09:02 compute-1 nova_compute[226101]: 2025-12-06 07:09:02.446 226109 DEBUG nova.storage.rbd_utils [None req-d2b7a8f6-8d4b-449c-9527-93a16cfb6b84 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] removing snapshot(a78019ea249045289a2e0959751ad015) on rbd image(f9624722-30f1-4e59-ae6a-347ad7a8a4fd_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:09:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e192 e192: 3 total, 3 up, 3 in
Dec 06 07:09:02 compute-1 nova_compute[226101]: 2025-12-06 07:09:02.514 226109 INFO nova.compute.manager [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Took 7.25 seconds to destroy the instance on the hypervisor.
Dec 06 07:09:02 compute-1 nova_compute[226101]: 2025-12-06 07:09:02.515 226109 DEBUG oslo.service.loopingcall [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:09:02 compute-1 nova_compute[226101]: 2025-12-06 07:09:02.515 226109 DEBUG nova.compute.manager [-] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:09:02 compute-1 nova_compute[226101]: 2025-12-06 07:09:02.516 226109 DEBUG nova.network.neutron [-] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:09:02 compute-1 nova_compute[226101]: 2025-12-06 07:09:02.523 226109 DEBUG nova.storage.rbd_utils [None req-d2b7a8f6-8d4b-449c-9527-93a16cfb6b84 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] creating snapshot(snap) on rbd image(4038f20e-02ea-4a6a-aa3a-eaf0b9450896) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:09:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:03.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:03 compute-1 ceph-mon[81689]: pgmap v1468: 305 pgs: 305 active+clean; 374 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 9.7 MiB/s wr, 289 op/s
Dec 06 07:09:03 compute-1 ceph-mon[81689]: osdmap e192: 3 total, 3 up, 3 in
Dec 06 07:09:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:03.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e193 e193: 3 total, 3 up, 3 in
Dec 06 07:09:03 compute-1 nova_compute[226101]: 2025-12-06 07:09:03.801 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.025 226109 DEBUG nova.network.neutron [-] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.042 226109 INFO nova.compute.manager [-] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Took 1.53 seconds to deallocate network for instance.
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.082 226109 DEBUG oslo_concurrency.lockutils [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.083 226109 DEBUG oslo_concurrency.lockutils [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.120 226109 DEBUG nova.compute.manager [req-762a4247-1730-45cf-9cef-ca53c37b985d req-5168bc64-a49d-4ca0-aa33-dcc0a4a92568 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Received event network-vif-deleted-bf5bb400-e5e8-47f6-bb49-2b370d100cd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:09:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.169 226109 DEBUG oslo_concurrency.processutils [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:09:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1998551748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.606 226109 DEBUG oslo_concurrency.processutils [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.612 226109 DEBUG nova.compute.provider_tree [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.629 226109 DEBUG nova.scheduler.client.report [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.654 226109 DEBUG oslo_concurrency.lockutils [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:04 compute-1 ceph-mon[81689]: pgmap v1470: 305 pgs: 305 active+clean; 375 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 9.0 MiB/s wr, 220 op/s
Dec 06 07:09:04 compute-1 ceph-mon[81689]: osdmap e193: 3 total, 3 up, 3 in
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.680 226109 INFO nova.scheduler.client.report [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Deleted allocations for instance e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e
Dec 06 07:09:04 compute-1 nova_compute[226101]: 2025-12-06 07:09:04.741 226109 DEBUG oslo_concurrency.lockutils [None req-64be7e0d-a310-4fd5-a096-84803b8aacc4 b7c2f91caf1444c68dcf6bd66966d67e c2236fb6443441618c69ad660b0932dd - - default default] Lock "e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:05 compute-1 nova_compute[226101]: 2025-12-06 07:09:05.038 226109 INFO nova.virt.libvirt.driver [None req-d2b7a8f6-8d4b-449c-9527-93a16cfb6b84 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Snapshot image upload complete
Dec 06 07:09:05 compute-1 nova_compute[226101]: 2025-12-06 07:09:05.039 226109 INFO nova.compute.manager [None req-d2b7a8f6-8d4b-449c-9527-93a16cfb6b84 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Took 9.77 seconds to snapshot the instance on the hypervisor.
Dec 06 07:09:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:05.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:05.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:05 compute-1 nova_compute[226101]: 2025-12-06 07:09:05.525 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1998551748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:06 compute-1 ceph-mon[81689]: pgmap v1472: 305 pgs: 305 active+clean; 367 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 410 op/s
Dec 06 07:09:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e194 e194: 3 total, 3 up, 3 in
Dec 06 07:09:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:07.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:07.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:07 compute-1 ceph-mon[81689]: osdmap e194: 3 total, 3 up, 3 in
Dec 06 07:09:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e195 e195: 3 total, 3 up, 3 in
Dec 06 07:09:08 compute-1 podman[243816]: 2025-12-06 07:09:08.110855672 +0000 UTC m=+0.089514420 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:09:08 compute-1 podman[243817]: 2025-12-06 07:09:08.110878793 +0000 UTC m=+0.087473314 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:09:08 compute-1 nova_compute[226101]: 2025-12-06 07:09:08.632 226109 DEBUG oslo_concurrency.lockutils [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Acquiring lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:08 compute-1 nova_compute[226101]: 2025-12-06 07:09:08.632 226109 DEBUG oslo_concurrency.lockutils [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:08 compute-1 nova_compute[226101]: 2025-12-06 07:09:08.632 226109 DEBUG oslo_concurrency.lockutils [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Acquiring lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:08 compute-1 nova_compute[226101]: 2025-12-06 07:09:08.632 226109 DEBUG oslo_concurrency.lockutils [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:08 compute-1 nova_compute[226101]: 2025-12-06 07:09:08.633 226109 DEBUG oslo_concurrency.lockutils [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:08 compute-1 nova_compute[226101]: 2025-12-06 07:09:08.633 226109 INFO nova.compute.manager [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Terminating instance
Dec 06 07:09:08 compute-1 nova_compute[226101]: 2025-12-06 07:09:08.634 226109 DEBUG nova.compute.manager [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:09:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:09:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2023570458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:09:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:09:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2023570458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:09:08 compute-1 kernel: tap41ed45c4-77 (unregistering): left promiscuous mode
Dec 06 07:09:08 compute-1 NetworkManager[49031]: <info>  [1765004948.7757] device (tap41ed45c4-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:09:08 compute-1 nova_compute[226101]: 2025-12-06 07:09:08.777 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:08 compute-1 ovn_controller[130279]: 2025-12-06T07:09:08Z|00160|binding|INFO|Releasing lport 41ed45c4-775a-4f28-9048-e796534620db from this chassis (sb_readonly=0)
Dec 06 07:09:08 compute-1 ovn_controller[130279]: 2025-12-06T07:09:08Z|00161|binding|INFO|Setting lport 41ed45c4-775a-4f28-9048-e796534620db down in Southbound
Dec 06 07:09:08 compute-1 ovn_controller[130279]: 2025-12-06T07:09:08Z|00162|binding|INFO|Removing iface tap41ed45c4-77 ovn-installed in OVS
Dec 06 07:09:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:08.783 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:6f:da 10.100.0.12'], port_security=['fa:16:3e:7f:6f:da 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f9624722-30f1-4e59-ae6a-347ad7a8a4fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ced2dfbe-6872-45ca-a2ac-dc0d6b713369', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87ee3f11e794407692b09a4b77c91610', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0edf0cc8-9296-46d9-b9a9-54c30b2ef4f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=198f7e4a-c850-4807-898b-519c5756f8ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=41ed45c4-775a-4f28-9048-e796534620db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:09:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:08.784 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 41ed45c4-775a-4f28-9048-e796534620db in datapath ced2dfbe-6872-45ca-a2ac-dc0d6b713369 unbound from our chassis
Dec 06 07:09:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:08.786 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ced2dfbe-6872-45ca-a2ac-dc0d6b713369, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:09:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:08.787 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[efc0d98a-1396-42e3-b87b-013d504a20d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:08.788 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369 namespace which is not needed anymore
Dec 06 07:09:08 compute-1 nova_compute[226101]: 2025-12-06 07:09:08.798 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:08 compute-1 nova_compute[226101]: 2025-12-06 07:09:08.802 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:08 compute-1 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Dec 06 07:09:08 compute-1 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002e.scope: Consumed 14.643s CPU time.
Dec 06 07:09:08 compute-1 systemd-machined[190302]: Machine qemu-23-instance-0000002e terminated.
Dec 06 07:09:08 compute-1 ceph-mon[81689]: pgmap v1474: 305 pgs: 305 active+clean; 372 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 4.3 MiB/s wr, 436 op/s
Dec 06 07:09:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3000496672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:09:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3000496672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:09:08 compute-1 ceph-mon[81689]: osdmap e195: 3 total, 3 up, 3 in
Dec 06 07:09:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2023570458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:09:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2023570458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:09:08 compute-1 neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369[243294]: [NOTICE]   (243298) : haproxy version is 2.8.14-c23fe91
Dec 06 07:09:08 compute-1 neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369[243294]: [NOTICE]   (243298) : path to executable is /usr/sbin/haproxy
Dec 06 07:09:08 compute-1 neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369[243294]: [WARNING]  (243298) : Exiting Master process...
Dec 06 07:09:08 compute-1 neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369[243294]: [ALERT]    (243298) : Current worker (243300) exited with code 143 (Terminated)
Dec 06 07:09:08 compute-1 neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369[243294]: [WARNING]  (243298) : All workers exited. Exiting... (0)
Dec 06 07:09:08 compute-1 systemd[1]: libpod-9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521.scope: Deactivated successfully.
Dec 06 07:09:08 compute-1 podman[243877]: 2025-12-06 07:09:08.926835825 +0000 UTC m=+0.044601977 container died 9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:09:08 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521-userdata-shm.mount: Deactivated successfully.
Dec 06 07:09:08 compute-1 systemd[1]: var-lib-containers-storage-overlay-511b0d75f6a702174f819041c46ba902b9f87c680fd8ae9c077e0efc81b2de6b-merged.mount: Deactivated successfully.
Dec 06 07:09:08 compute-1 podman[243877]: 2025-12-06 07:09:08.969589151 +0000 UTC m=+0.087355313 container cleanup 9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 07:09:08 compute-1 systemd[1]: libpod-conmon-9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521.scope: Deactivated successfully.
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.085 226109 INFO nova.virt.libvirt.driver [-] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Instance destroyed successfully.
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.086 226109 DEBUG nova.objects.instance [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lazy-loading 'resources' on Instance uuid f9624722-30f1-4e59-ae6a-347ad7a8a4fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.106 226109 DEBUG nova.virt.libvirt.vif [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:08:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1003383917',display_name='tempest-ImagesOneServerTestJSON-server-1003383917',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1003383917',id=46,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:08:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='87ee3f11e794407692b09a4b77c91610',ramdisk_id='',reservation_id='r-uj0nvdxv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-1888020106',owner_user_name='tempest-ImagesOneServerTestJSON-1888020106-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:09:05Z,user_data=None,user_id='98f77d99921448508eb3ac7bf8c4fa34',uuid=f9624722-30f1-4e59-ae6a-347ad7a8a4fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41ed45c4-775a-4f28-9048-e796534620db", "address": "fa:16:3e:7f:6f:da", "network": {"id": "ced2dfbe-6872-45ca-a2ac-dc0d6b713369", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-298856787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87ee3f11e794407692b09a4b77c91610", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41ed45c4-77", "ovs_interfaceid": "41ed45c4-775a-4f28-9048-e796534620db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.106 226109 DEBUG nova.network.os_vif_util [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Converting VIF {"id": "41ed45c4-775a-4f28-9048-e796534620db", "address": "fa:16:3e:7f:6f:da", "network": {"id": "ced2dfbe-6872-45ca-a2ac-dc0d6b713369", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-298856787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87ee3f11e794407692b09a4b77c91610", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41ed45c4-77", "ovs_interfaceid": "41ed45c4-775a-4f28-9048-e796534620db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.107 226109 DEBUG nova.network.os_vif_util [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:6f:da,bridge_name='br-int',has_traffic_filtering=True,id=41ed45c4-775a-4f28-9048-e796534620db,network=Network(ced2dfbe-6872-45ca-a2ac-dc0d6b713369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41ed45c4-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.107 226109 DEBUG os_vif [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:6f:da,bridge_name='br-int',has_traffic_filtering=True,id=41ed45c4-775a-4f28-9048-e796534620db,network=Network(ced2dfbe-6872-45ca-a2ac-dc0d6b713369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41ed45c4-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.110 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.110 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41ed45c4-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.124 226109 DEBUG nova.compute.manager [req-3f9c6588-a44a-43fd-b5e4-3fb41b6e57fc req-a77cdf00-9465-4b72-9b77-6e6979d6ded9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Received event network-vif-unplugged-41ed45c4-775a-4f28-9048-e796534620db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.124 226109 DEBUG oslo_concurrency.lockutils [req-3f9c6588-a44a-43fd-b5e4-3fb41b6e57fc req-a77cdf00-9465-4b72-9b77-6e6979d6ded9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.124 226109 DEBUG oslo_concurrency.lockutils [req-3f9c6588-a44a-43fd-b5e4-3fb41b6e57fc req-a77cdf00-9465-4b72-9b77-6e6979d6ded9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.124 226109 DEBUG oslo_concurrency.lockutils [req-3f9c6588-a44a-43fd-b5e4-3fb41b6e57fc req-a77cdf00-9465-4b72-9b77-6e6979d6ded9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.125 226109 DEBUG nova.compute.manager [req-3f9c6588-a44a-43fd-b5e4-3fb41b6e57fc req-a77cdf00-9465-4b72-9b77-6e6979d6ded9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] No waiting events found dispatching network-vif-unplugged-41ed45c4-775a-4f28-9048-e796534620db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.125 226109 DEBUG nova.compute.manager [req-3f9c6588-a44a-43fd-b5e4-3fb41b6e57fc req-a77cdf00-9465-4b72-9b77-6e6979d6ded9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Received event network-vif-unplugged-41ed45c4-775a-4f28-9048-e796534620db for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.131 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.133 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.135 226109 INFO os_vif [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:6f:da,bridge_name='br-int',has_traffic_filtering=True,id=41ed45c4-775a-4f28-9048-e796534620db,network=Network(ced2dfbe-6872-45ca-a2ac-dc0d6b713369),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41ed45c4-77')
Dec 06 07:09:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:09.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:09.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e196 e196: 3 total, 3 up, 3 in
Dec 06 07:09:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:09 compute-1 podman[243901]: 2025-12-06 07:09:09.729611351 +0000 UTC m=+0.739753564 container remove 9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:09:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:09.735 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[053ca278-84df-4723-9141-32529735d0f0]: (4, ('Sat Dec  6 07:09:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369 (9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521)\n9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521\nSat Dec  6 07:09:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369 (9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521)\n9314a730df924bbf2266951edfbb3611b71d0d90d8e42e1dfeed1b04ab844521\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:09.737 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[164bd71d-f284-423c-afd9-9fc564158412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:09.738 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapced2dfbe-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.740 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:09 compute-1 kernel: tapced2dfbe-60: left promiscuous mode
Dec 06 07:09:09 compute-1 nova_compute[226101]: 2025-12-06 07:09:09.755 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:09.759 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f66f5a72-890a-45bc-a112-f0c73fe08c30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:09.773 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9701f2b7-8d30-4c67-99eb-de2f6484afb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:09.775 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[012026de-0ed7-4ce7-96dd-479067c2a057]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:09.790 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a1deba10-d387-4cb7-b58d-115488818079]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519705, 'reachable_time': 18622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243948, 'error': None, 'target': 'ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:09.792 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ced2dfbe-6872-45ca-a2ac-dc0d6b713369 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:09:09 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:09.792 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[e0188f4f-61a7-4256-8c70-991d2555bd1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:09 compute-1 systemd[1]: run-netns-ovnmeta\x2dced2dfbe\x2d6872\x2d45ca\x2da2ac\x2ddc0d6b713369.mount: Deactivated successfully.
Dec 06 07:09:10 compute-1 ceph-mon[81689]: osdmap e196: 3 total, 3 up, 3 in
Dec 06 07:09:10 compute-1 ceph-mon[81689]: pgmap v1477: 305 pgs: 305 active+clean; 372 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 3.1 MiB/s wr, 411 op/s
Dec 06 07:09:10 compute-1 nova_compute[226101]: 2025-12-06 07:09:10.503 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004935.502126, e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:09:10 compute-1 nova_compute[226101]: 2025-12-06 07:09:10.503 226109 INFO nova.compute.manager [-] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] VM Stopped (Lifecycle Event)
Dec 06 07:09:10 compute-1 nova_compute[226101]: 2025-12-06 07:09:10.535 226109 INFO nova.virt.libvirt.driver [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Deleting instance files /var/lib/nova/instances/f9624722-30f1-4e59-ae6a-347ad7a8a4fd_del
Dec 06 07:09:10 compute-1 nova_compute[226101]: 2025-12-06 07:09:10.536 226109 INFO nova.virt.libvirt.driver [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Deletion of /var/lib/nova/instances/f9624722-30f1-4e59-ae6a-347ad7a8a4fd_del complete
Dec 06 07:09:10 compute-1 nova_compute[226101]: 2025-12-06 07:09:10.542 226109 DEBUG nova.compute.manager [None req-a9fd4d37-b2bf-437a-bccb-a2a6d59e292c - - - - - -] [instance: e6b21faf-c7e9-4efe-9ce7-6aaeb2a4e96e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:10 compute-1 nova_compute[226101]: 2025-12-06 07:09:10.591 226109 INFO nova.compute.manager [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Took 1.96 seconds to destroy the instance on the hypervisor.
Dec 06 07:09:10 compute-1 nova_compute[226101]: 2025-12-06 07:09:10.592 226109 DEBUG oslo.service.loopingcall [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:09:10 compute-1 nova_compute[226101]: 2025-12-06 07:09:10.592 226109 DEBUG nova.compute.manager [-] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:09:10 compute-1 nova_compute[226101]: 2025-12-06 07:09:10.592 226109 DEBUG nova.network.neutron [-] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:09:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.228 226109 DEBUG nova.compute.manager [req-4a0c62c3-47a8-4487-8fd4-e0d8f26058a4 req-30d29342-0ddd-4042-bbc1-65d514b198cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Received event network-vif-plugged-41ed45c4-775a-4f28-9048-e796534620db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.229 226109 DEBUG oslo_concurrency.lockutils [req-4a0c62c3-47a8-4487-8fd4-e0d8f26058a4 req-30d29342-0ddd-4042-bbc1-65d514b198cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.229 226109 DEBUG oslo_concurrency.lockutils [req-4a0c62c3-47a8-4487-8fd4-e0d8f26058a4 req-30d29342-0ddd-4042-bbc1-65d514b198cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.229 226109 DEBUG oslo_concurrency.lockutils [req-4a0c62c3-47a8-4487-8fd4-e0d8f26058a4 req-30d29342-0ddd-4042-bbc1-65d514b198cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.229 226109 DEBUG nova.compute.manager [req-4a0c62c3-47a8-4487-8fd4-e0d8f26058a4 req-30d29342-0ddd-4042-bbc1-65d514b198cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] No waiting events found dispatching network-vif-plugged-41ed45c4-775a-4f28-9048-e796534620db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.229 226109 WARNING nova.compute.manager [req-4a0c62c3-47a8-4487-8fd4-e0d8f26058a4 req-30d29342-0ddd-4042-bbc1-65d514b198cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Received unexpected event network-vif-plugged-41ed45c4-775a-4f28-9048-e796534620db for instance with vm_state active and task_state deleting.
Dec 06 07:09:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1388176424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:09:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1388176424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.310 226109 DEBUG nova.network.neutron [-] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.339 226109 INFO nova.compute.manager [-] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Took 0.75 seconds to deallocate network for instance.
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.385 226109 DEBUG oslo_concurrency.lockutils [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.386 226109 DEBUG oslo_concurrency.lockutils [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:11.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.434 226109 DEBUG nova.compute.manager [req-01cf5792-348f-47ab-99fe-78503789cbf2 req-dea45568-5cd1-4e96-997f-a5b74c8a25a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Received event network-vif-deleted-41ed45c4-775a-4f28-9048-e796534620db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.439 226109 DEBUG oslo_concurrency.processutils [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:09:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2443906861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.850 226109 DEBUG oslo_concurrency.processutils [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.856 226109 DEBUG nova.compute.provider_tree [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.872 226109 DEBUG nova.scheduler.client.report [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.901 226109 DEBUG oslo_concurrency.lockutils [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:11 compute-1 nova_compute[226101]: 2025-12-06 07:09:11.922 226109 INFO nova.scheduler.client.report [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Deleted allocations for instance f9624722-30f1-4e59-ae6a-347ad7a8a4fd
Dec 06 07:09:12 compute-1 nova_compute[226101]: 2025-12-06 07:09:12.077 226109 DEBUG oslo_concurrency.lockutils [None req-7436ec81-60ce-4dc5-90f3-6dfd38ce7b01 98f77d99921448508eb3ac7bf8c4fa34 87ee3f11e794407692b09a4b77c91610 - - default default] Lock "f9624722-30f1-4e59-ae6a-347ad7a8a4fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e197 e197: 3 total, 3 up, 3 in
Dec 06 07:09:12 compute-1 ceph-mon[81689]: pgmap v1478: 305 pgs: 305 active+clean; 327 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.4 MiB/s wr, 319 op/s
Dec 06 07:09:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2443906861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:12 compute-1 ceph-mon[81689]: osdmap e197: 3 total, 3 up, 3 in
Dec 06 07:09:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:13.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:13.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e198 e198: 3 total, 3 up, 3 in
Dec 06 07:09:13 compute-1 nova_compute[226101]: 2025-12-06 07:09:13.805 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:14 compute-1 nova_compute[226101]: 2025-12-06 07:09:14.131 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:14 compute-1 ceph-mon[81689]: pgmap v1480: 305 pgs: 305 active+clean; 311 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 241 op/s
Dec 06 07:09:14 compute-1 ceph-mon[81689]: osdmap e198: 3 total, 3 up, 3 in
Dec 06 07:09:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e199 e199: 3 total, 3 up, 3 in
Dec 06 07:09:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:15.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:15 compute-1 nova_compute[226101]: 2025-12-06 07:09:15.221 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:15.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:15 compute-1 nova_compute[226101]: 2025-12-06 07:09:15.483 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:15 compute-1 ceph-mon[81689]: osdmap e199: 3 total, 3 up, 3 in
Dec 06 07:09:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2476696267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e200 e200: 3 total, 3 up, 3 in
Dec 06 07:09:16 compute-1 ceph-mon[81689]: pgmap v1483: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 276 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 5.4 MiB/s wr, 297 op/s
Dec 06 07:09:16 compute-1 ceph-mon[81689]: osdmap e200: 3 total, 3 up, 3 in
Dec 06 07:09:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:17.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e201 e201: 3 total, 3 up, 3 in
Dec 06 07:09:18 compute-1 nova_compute[226101]: 2025-12-06 07:09:18.807 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:18 compute-1 ceph-mon[81689]: pgmap v1485: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 329 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 10 MiB/s wr, 315 op/s
Dec 06 07:09:18 compute-1 ceph-mon[81689]: osdmap e201: 3 total, 3 up, 3 in
Dec 06 07:09:19 compute-1 nova_compute[226101]: 2025-12-06 07:09:19.133 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:19.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:19 compute-1 sudo[243973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:19 compute-1 sudo[243973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:19 compute-1 sudo[243973]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:19.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:19 compute-1 sudo[243998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:09:19 compute-1 sudo[243998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:19 compute-1 sudo[243998]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:19 compute-1 sudo[244023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:19 compute-1 sudo[244023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:19 compute-1 sudo[244023]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:19 compute-1 sudo[244048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 07:09:19 compute-1 sudo[244048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:19 compute-1 sudo[244048]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:19 compute-1 sudo[244092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:19 compute-1 sudo[244092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:19 compute-1 sudo[244092]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:19 compute-1 sudo[244117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:09:19 compute-1 sudo[244117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:19 compute-1 sudo[244117]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:20 compute-1 sudo[244142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:20 compute-1 sudo[244142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:20 compute-1 sudo[244142]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:20 compute-1 sudo[244167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:09:20 compute-1 sudo[244167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:20 compute-1 sudo[244167]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:21.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:21.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:21 compute-1 ceph-mon[81689]: pgmap v1487: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 329 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 8.3 MiB/s wr, 263 op/s
Dec 06 07:09:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:09:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:09:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:09:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:09:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:09:22 compute-1 nova_compute[226101]: 2025-12-06 07:09:22.319 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "4fe2e237-029b-49b8-8dcb-380c127f28f6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:22 compute-1 nova_compute[226101]: 2025-12-06 07:09:22.320 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:22 compute-1 nova_compute[226101]: 2025-12-06 07:09:22.356 226109 DEBUG nova.compute.manager [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:09:22 compute-1 nova_compute[226101]: 2025-12-06 07:09:22.473 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:22 compute-1 nova_compute[226101]: 2025-12-06 07:09:22.474 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
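[annotation] The Acquiring/acquired/released triples around `compute_resources` come from oslo.concurrency's lockutils, which logs wait and hold times around each critical section; here ResourceTracker.instance_claim holds the lock for the whole resource claim. A minimal sketch of the same pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Serializes resource bookkeeping the way ResourceTracker.instance_claim does;
    # lockutils emits the "Acquiring"/"acquired"/"released" DEBUG lines seen above.
    with lockutils.lock("compute_resources"):
        pass  # claim vCPU/RAM/disk against the tracker here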
Dec 06 07:09:22 compute-1 nova_compute[226101]: 2025-12-06 07:09:22.481 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:09:22 compute-1 nova_compute[226101]: 2025-12-06 07:09:22.482 226109 INFO nova.compute.claims [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:09:22 compute-1 nova_compute[226101]: 2025-12-06 07:09:22.562 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e202 e202: 3 total, 3 up, 3 in
Dec 06 07:09:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:09:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3664114671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:22 compute-1 nova_compute[226101]: 2025-12-06 07:09:22.989 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
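[annotation] Nova shells out to `ceph df --format=json` (dispatched by the monitor two lines up) to size its RBD-backed DISK_GB inventory. A sketch that runs the same command and summarizes pool usage; the JSON field names are assumptions based on common Ceph releases:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    df = json.loads(out)
    for pool in df.get("pools", []):      # "pools"/"stats" field names assumed
        stats = pool["stats"]
        print(pool["name"], stats.get("bytes_used"), stats.get("max_avail"))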
Dec 06 07:09:22 compute-1 nova_compute[226101]: 2025-12-06 07:09:22.995 226109 DEBUG nova.compute.provider_tree [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:09:23 compute-1 ceph-mon[81689]: pgmap v1488: 305 pgs: 305 active+clean; 367 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 11 MiB/s wr, 371 op/s
Dec 06 07:09:23 compute-1 ceph-mon[81689]: osdmap e202: 3 total, 3 up, 3 in
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.042 226109 DEBUG nova.scheduler.client.report [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.069 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.070 226109 DEBUG nova.compute.manager [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.140 226109 DEBUG nova.compute.manager [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.140 226109 DEBUG nova.network.neutron [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.162 226109 INFO nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:09:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:23.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.186 226109 DEBUG nova.compute.manager [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.283 226109 DEBUG nova.compute.manager [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.285 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.285 226109 INFO nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Creating image(s)
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.309 226109 DEBUG nova.storage.rbd_utils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.333 226109 DEBUG nova.storage.rbd_utils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.355 226109 DEBUG nova.storage.rbd_utils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.358 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:23.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.420 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
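[annotation] The qemu-img probe above is wrapped in `oslo_concurrency.prlimit` so a malformed image cannot make qemu-img consume unbounded resources: `--as=1073741824` caps the address space at 1 GiB and `--cpu=30` caps CPU time at 30 seconds. The same guard can be expressed through processutils directly; a sketch, assuming oslo.concurrency's ProcessLimits API:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3,  # --as=1073741824
                                        cpu_time=30)              # --cpu=30
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef",
        "--force-share", "--output=json",
        prlimit=limits)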
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.421 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.421 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.422 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.442 226109 DEBUG nova.storage.rbd_utils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.445 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.767 226109 DEBUG nova.policy [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33518fed43cc4fdfbdce993ccb4cc360', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c41abd44bbf46f39df642d2a2cd19eb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:09:23 compute-1 nova_compute[226101]: 2025-12-06 07:09:23.808 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.083 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004949.0819662, f9624722-30f1-4e59-ae6a-347ad7a8a4fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.084 226109 INFO nova.compute.manager [-] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] VM Stopped (Lifecycle Event)
Dec 06 07:09:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3664114671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:24 compute-1 ceph-mon[81689]: pgmap v1490: 305 pgs: 305 active+clean; 372 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 8.5 MiB/s wr, 312 op/s
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.105 226109 DEBUG nova.compute.manager [None req-8f803213-0a81-41be-ad62-875b011835d9 - - - - - -] [instance: f9624722-30f1-4e59-ae6a-347ad7a8a4fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.134 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.214 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.769s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.281 226109 DEBUG nova.storage.rbd_utils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] resizing rbd image 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
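[annotation] The import/resize pair above uploads the cached base image into the `vms` pool and then grows it to the flavor's 1 GiB root disk (1073741824 bytes). Nova performs the resize through librbd; an equivalent CLI sketch, where the `rbd resize` step is an assumed stand-in for that library call:

    import subprocess

    BASE = "/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef"
    DISK = "4fe2e237-029b-49b8-8dcb-380c127f28f6_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", BASE, DISK,
                    "--image-format=2", *CEPH], check=True)
    # 1073741824 bytes == 1024 MiB; rbd --size takes MiB by default.
    subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1024",
                    DISK, *CEPH], check=True)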
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.521 226109 DEBUG nova.objects.instance [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lazy-loading 'migration_context' on Instance uuid 4fe2e237-029b-49b8-8dcb-380c127f28f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.536 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.537 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Ensure instance console log exists: /var/lib/nova/instances/4fe2e237-029b-49b8-8dcb-380c127f28f6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.538 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.538 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.538 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:24 compute-1 nova_compute[226101]: 2025-12-06 07:09:24.807 226109 DEBUG nova.network.neutron [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Successfully created port: eae098c6-7562-4d8f-adc3-b7172a1c035d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:09:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e203 e203: 3 total, 3 up, 3 in
Dec 06 07:09:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:25.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:25.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:25 compute-1 nova_compute[226101]: 2025-12-06 07:09:25.607 226109 DEBUG nova.network.neutron [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Successfully updated port: eae098c6-7562-4d8f-adc3-b7172a1c035d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:09:25 compute-1 nova_compute[226101]: 2025-12-06 07:09:25.623 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "refresh_cache-4fe2e237-029b-49b8-8dcb-380c127f28f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:09:25 compute-1 nova_compute[226101]: 2025-12-06 07:09:25.623 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquired lock "refresh_cache-4fe2e237-029b-49b8-8dcb-380c127f28f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:09:25 compute-1 nova_compute[226101]: 2025-12-06 07:09:25.623 226109 DEBUG nova.network.neutron [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:09:25 compute-1 nova_compute[226101]: 2025-12-06 07:09:25.788 226109 DEBUG nova.compute.manager [req-1de34dac-5429-409e-a7e8-f74cf2882ddd req-d11c8aa1-c9e0-4bec-9be7-c316c0c0ba2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Received event network-changed-eae098c6-7562-4d8f-adc3-b7172a1c035d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:09:25 compute-1 nova_compute[226101]: 2025-12-06 07:09:25.788 226109 DEBUG nova.compute.manager [req-1de34dac-5429-409e-a7e8-f74cf2882ddd req-d11c8aa1-c9e0-4bec-9be7-c316c0c0ba2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Refreshing instance network info cache due to event network-changed-eae098c6-7562-4d8f-adc3-b7172a1c035d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:09:25 compute-1 nova_compute[226101]: 2025-12-06 07:09:25.788 226109 DEBUG oslo_concurrency.lockutils [req-1de34dac-5429-409e-a7e8-f74cf2882ddd req-d11c8aa1-c9e0-4bec-9be7-c316c0c0ba2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4fe2e237-029b-49b8-8dcb-380c127f28f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:09:25 compute-1 nova_compute[226101]: 2025-12-06 07:09:25.930 226109 DEBUG nova.network.neutron [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:09:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e204 e204: 3 total, 3 up, 3 in
Dec 06 07:09:26 compute-1 ceph-mon[81689]: osdmap e203: 3 total, 3 up, 3 in
Dec 06 07:09:26 compute-1 ceph-mon[81689]: pgmap v1492: 305 pgs: 305 active+clean; 414 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.9 MiB/s wr, 233 op/s
Dec 06 07:09:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:27.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:27 compute-1 ceph-mon[81689]: osdmap e204: 3 total, 3 up, 3 in
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.285 226109 DEBUG nova.network.neutron [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Updating instance_info_cache with network_info: [{"id": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "address": "fa:16:3e:58:cb:4e", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae098c6-75", "ovs_interfaceid": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.313 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Releasing lock "refresh_cache-4fe2e237-029b-49b8-8dcb-380c127f28f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.313 226109 DEBUG nova.compute.manager [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Instance network_info: |[{"id": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "address": "fa:16:3e:58:cb:4e", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae098c6-75", "ovs_interfaceid": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.314 226109 DEBUG oslo_concurrency.lockutils [req-1de34dac-5429-409e-a7e8-f74cf2882ddd req-d11c8aa1-c9e0-4bec-9be7-c316c0c0ba2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4fe2e237-029b-49b8-8dcb-380c127f28f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.314 226109 DEBUG nova.network.neutron [req-1de34dac-5429-409e-a7e8-f74cf2882ddd req-d11c8aa1-c9e0-4bec-9be7-c316c0c0ba2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Refreshing network info cache for port eae098c6-7562-4d8f-adc3-b7172a1c035d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.317 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Start _get_guest_xml network_info=[{"id": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "address": "fa:16:3e:58:cb:4e", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae098c6-75", "ovs_interfaceid": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.322 226109 WARNING nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.326 226109 DEBUG nova.virt.libvirt.host [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.326 226109 DEBUG nova.virt.libvirt.host [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.329 226109 DEBUG nova.virt.libvirt.host [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.329 226109 DEBUG nova.virt.libvirt.host [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.330 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.331 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.331 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.331 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.332 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.332 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.332 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.332 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.333 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.333 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.333 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.333 226109 DEBUG nova.virt.hardware [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
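[annotation] The topology lines show the selection algorithm at work: with no flavor or image constraints (limits and preferences all 0:0:0, caps at 65536), nova enumerates every sockets x cores x threads factorization of the vCPU count and sorts the candidates. For 1 vCPU the only factorization is 1:1:1, hence exactly one possible topology. A simplified model of that enumeration, not nova's actual code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Toy analogue of nova.virt.hardware._get_possible_cpu_topologies."""
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -> sockets=1, cores=1, threads=1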
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.336 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:27.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:27 compute-1 sudo[244432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:27 compute-1 sudo[244432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:27 compute-1 sudo[244432]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:27 compute-1 sudo[244457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:09:27 compute-1 sudo[244457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:27 compute-1 sudo[244457]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:09:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1455735636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.922 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.947 226109 DEBUG nova.storage.rbd_utils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:09:27 compute-1 nova_compute[226101]: 2025-12-06 07:09:27.951 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:09:28 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3910499721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.421 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.422 226109 DEBUG nova.virt.libvirt.vif [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:09:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1876359925',display_name='tempest-VolumesAdminNegativeTest-server-1876359925',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1876359925',id=49,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c41abd44bbf46f39df642d2a2cd19eb',ramdisk_id='',reservation_id='r-qffxzxax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-201901816',owner_user_name='tempest-VolumesAdminNegativeTest-201901816-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:09:23Z,user_data=None,user_id='33518fed43cc4fdfbdce993ccb4cc360',uuid=4fe2e237-029b-49b8-8dcb-380c127f28f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "address": "fa:16:3e:58:cb:4e", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae098c6-75", "ovs_interfaceid": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.423 226109 DEBUG nova.network.os_vif_util [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converting VIF {"id": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "address": "fa:16:3e:58:cb:4e", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae098c6-75", "ovs_interfaceid": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.424 226109 DEBUG nova.network.os_vif_util [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:cb:4e,bridge_name='br-int',has_traffic_filtering=True,id=eae098c6-7562-4d8f-adc3-b7172a1c035d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae098c6-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.425 226109 DEBUG nova.objects.instance [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fe2e237-029b-49b8-8dcb-380c127f28f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.452 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:09:28 compute-1 nova_compute[226101]:   <uuid>4fe2e237-029b-49b8-8dcb-380c127f28f6</uuid>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   <name>instance-00000031</name>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <nova:name>tempest-VolumesAdminNegativeTest-server-1876359925</nova:name>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:09:27</nova:creationTime>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <nova:user uuid="33518fed43cc4fdfbdce993ccb4cc360">tempest-VolumesAdminNegativeTest-201901816-project-member</nova:user>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <nova:project uuid="4c41abd44bbf46f39df642d2a2cd19eb">tempest-VolumesAdminNegativeTest-201901816</nova:project>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <nova:port uuid="eae098c6-7562-4d8f-adc3-b7172a1c035d">
Dec 06 07:09:28 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <system>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <entry name="serial">4fe2e237-029b-49b8-8dcb-380c127f28f6</entry>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <entry name="uuid">4fe2e237-029b-49b8-8dcb-380c127f28f6</entry>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     </system>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   <os>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   </os>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   <features>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   </features>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/4fe2e237-029b-49b8-8dcb-380c127f28f6_disk">
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       </source>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/4fe2e237-029b-49b8-8dcb-380c127f28f6_disk.config">
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       </source>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:09:28 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:58:cb:4e"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <target dev="tapeae098c6-75"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/4fe2e237-029b-49b8-8dcb-380c127f28f6/console.log" append="off"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <video>
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     </video>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:09:28 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:09:28 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:09:28 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:09:28 compute-1 nova_compute[226101]: </domain>
Dec 06 07:09:28 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.454 226109 DEBUG nova.compute.manager [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Preparing to wait for external event network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.454 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.454 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.455 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.455 226109 DEBUG nova.virt.libvirt.vif [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:09:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1876359925',display_name='tempest-VolumesAdminNegativeTest-server-1876359925',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1876359925',id=49,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c41abd44bbf46f39df642d2a2cd19eb',ramdisk_id='',reservation_id='r-qffxzxax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-201901816',owner_user_name='tempest-VolumesAdminNegativeTest-201901816-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:09:23Z,user_data=None,user_id='33518fed43cc4fdfbdce993ccb4cc360',uuid=4fe2e237-029b-49b8-8dcb-380c127f28f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "address": "fa:16:3e:58:cb:4e", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae098c6-75", "ovs_interfaceid": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.456 226109 DEBUG nova.network.os_vif_util [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converting VIF {"id": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "address": "fa:16:3e:58:cb:4e", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae098c6-75", "ovs_interfaceid": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.456 226109 DEBUG nova.network.os_vif_util [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:cb:4e,bridge_name='br-int',has_traffic_filtering=True,id=eae098c6-7562-4d8f-adc3-b7172a1c035d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae098c6-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.457 226109 DEBUG os_vif [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:cb:4e,bridge_name='br-int',has_traffic_filtering=True,id=eae098c6-7562-4d8f-adc3-b7172a1c035d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae098c6-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.458 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.458 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.459 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.461 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.461 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeae098c6-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.461 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeae098c6-75, col_values=(('external_ids', {'iface-id': 'eae098c6-7562-4d8f-adc3-b7172a1c035d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:cb:4e', 'vm-uuid': '4fe2e237-029b-49b8-8dcb-380c127f28f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.463 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:28 compute-1 NetworkManager[49031]: <info>  [1765004968.4643] manager: (tapeae098c6-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.466 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.469 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.470 226109 INFO os_vif [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:cb:4e,bridge_name='br-int',has_traffic_filtering=True,id=eae098c6-7562-4d8f-adc3-b7172a1c035d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae098c6-75')
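The AddBridgeCommand/AddPortCommand/DbSetCommand transactions logged above are os-vif plugging the tap device into br-int over OVSDB. A sketch of the equivalent operation driven through the ovs-vsctl CLI instead (values copied from the log lines; would need to run as root on the compute host):

    import subprocess

    # Same effect as the logged OVSDB transaction: add the port idempotently
    # and set the external_ids that OVN uses to match the logical port.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tapeae098c6-75",
         "--", "set", "Interface", "tapeae098c6-75",
         "external_ids:iface-id=eae098c6-7562-4d8f-adc3-b7172a1c035d",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:58:cb:4e",
         "external_ids:vm-uuid=4fe2e237-029b-49b8-8dcb-380c127f28f6"],
        check=True)

It is the iface-id external_id set here that lets ovn-controller match the OVS interface to the logical port and claim it a second later in the log.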
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.601 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.601 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.601 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] No VIF found with MAC fa:16:3e:58:cb:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.602 226109 INFO nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Using config drive
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.625 226109 DEBUG nova.storage.rbd_utils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:09:28 compute-1 nova_compute[226101]: 2025-12-06 07:09:28.811 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:28 compute-1 ceph-mon[81689]: pgmap v1494: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 268 op/s
Dec 06 07:09:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1455735636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3910499721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e205 e205: 3 total, 3 up, 3 in
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.112 226109 INFO nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Creating config drive at /var/lib/nova/instances/4fe2e237-029b-49b8-8dcb-380c127f28f6/disk.config
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.117 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fe2e237-029b-49b8-8dcb-380c127f28f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpecoe1ej3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:29.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.183 226109 DEBUG nova.network.neutron [req-1de34dac-5429-409e-a7e8-f74cf2882ddd req-d11c8aa1-c9e0-4bec-9be7-c316c0c0ba2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Updated VIF entry in instance network info cache for port eae098c6-7562-4d8f-adc3-b7172a1c035d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.184 226109 DEBUG nova.network.neutron [req-1de34dac-5429-409e-a7e8-f74cf2882ddd req-d11c8aa1-c9e0-4bec-9be7-c316c0c0ba2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Updating instance_info_cache with network_info: [{"id": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "address": "fa:16:3e:58:cb:4e", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae098c6-75", "ovs_interfaceid": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.200 226109 DEBUG oslo_concurrency.lockutils [req-1de34dac-5429-409e-a7e8-f74cf2882ddd req-d11c8aa1-c9e0-4bec-9be7-c316c0c0ba2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4fe2e237-029b-49b8-8dcb-380c127f28f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.248 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fe2e237-029b-49b8-8dcb-380c127f28f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpecoe1ej3" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.276 226109 DEBUG nova.storage.rbd_utils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.280 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fe2e237-029b-49b8-8dcb-380c127f28f6/disk.config 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.503 226109 DEBUG oslo_concurrency.processutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fe2e237-029b-49b8-8dcb-380c127f28f6/disk.config 4fe2e237-029b-49b8-8dcb-380c127f28f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.504 226109 INFO nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Deleting local config drive /var/lib/nova/instances/4fe2e237-029b-49b8-8dcb-380c127f28f6/disk.config because it was imported into RBD.
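The lines above are the whole config-drive flow: mkisofs builds an ISO9660 image labelled config-2, rbd import copies it into the vms pool as <uuid>_disk.config, and the local file is removed. A sketch replaying the same two commands (the staged metadata directory is a placeholder; the log used a throwaway tempdir under /tmp):

    import os
    import subprocess

    instance = "4fe2e237-029b-49b8-8dcb-380c127f28f6"
    iso = "/var/lib/nova/instances/%s/disk.config" % instance

    # Build the config drive; flags mirror the mkisofs invocation logged above.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/metadata"],  # placeholder for the staged metadata tree
        check=True)

    # Import into Ceph and drop the local copy, as the driver logged.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "%s_disk.config" % instance, "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove(iso)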
Dec 06 07:09:29 compute-1 kernel: tapeae098c6-75: entered promiscuous mode
Dec 06 07:09:29 compute-1 NetworkManager[49031]: <info>  [1765004969.5642] manager: (tapeae098c6-75): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Dec 06 07:09:29 compute-1 ovn_controller[130279]: 2025-12-06T07:09:29Z|00163|binding|INFO|Claiming lport eae098c6-7562-4d8f-adc3-b7172a1c035d for this chassis.
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.564 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-1 ovn_controller[130279]: 2025-12-06T07:09:29Z|00164|binding|INFO|eae098c6-7562-4d8f-adc3-b7172a1c035d: Claiming fa:16:3e:58:cb:4e 10.100.0.14
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.569 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.572 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-1 NetworkManager[49031]: <info>  [1765004969.5765] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.575 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-1 NetworkManager[49031]: <info>  [1765004969.5777] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.579 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:cb:4e 10.100.0.14'], port_security=['fa:16:3e:58:cb:4e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4fe2e237-029b-49b8-8dcb-380c127f28f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c41abd44bbf46f39df642d2a2cd19eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bb6b7416-eb95-4ed4-85b0-4bb22735f53e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=934cdf1d-ea09-45f0-b1f2-a1da72094e2e, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=eae098c6-7562-4d8f-adc3-b7172a1c035d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.580 139580 INFO neutron.agent.ovn.metadata.agent [-] Port eae098c6-7562-4d8f-adc3-b7172a1c035d in datapath d62d33de-d7cc-4103-8a83-88ba86c97b8f bound to our chassis
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.581 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d62d33de-d7cc-4103-8a83-88ba86c97b8f
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.592 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f89c340d-d895-4017-99e4-b56a81a2e6e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.593 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd62d33de-d1 in ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.595 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd62d33de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.595 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5213de84-7b61-49f9-ac46-6aef99337e30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.596 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fea5314a-8cf3-4054-8dcc-ddb6180e445d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 systemd-udevd[244599]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:09:29 compute-1 systemd-machined[190302]: New machine qemu-24-instance-00000031.
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.606 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb34426-d5c8-422a-9c24-358fa94e0124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 NetworkManager[49031]: <info>  [1765004969.6113] device (tapeae098c6-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:09:29 compute-1 NetworkManager[49031]: <info>  [1765004969.6119] device (tapeae098c6-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:09:29 compute-1 systemd[1]: Started Virtual Machine qemu-24-instance-00000031.
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.633 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[93307566-3b6d-49e7-be27-2b20afb7344c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.661 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6aa5fb-ccdc-4d4b-a1a2-19c830c5c10d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 NetworkManager[49031]: <info>  [1765004969.6698] manager: (tapd62d33de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.668 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[818352c1-f030-4329-8b2e-2dc98616c9e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.700 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5e29bd-01f1-4ab9-9034-7e4e634a9a9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.703 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.704 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[37c983f8-12a8-43f4-8232-1a59326a2051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.719 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:29 compute-1 NetworkManager[49031]: <info>  [1765004969.7295] device (tapd62d33de-d0): carrier: link connected
Dec 06 07:09:29 compute-1 ovn_controller[130279]: 2025-12-06T07:09:29Z|00165|binding|INFO|Setting lport eae098c6-7562-4d8f-adc3-b7172a1c035d ovn-installed in OVS
Dec 06 07:09:29 compute-1 ovn_controller[130279]: 2025-12-06T07:09:29Z|00166|binding|INFO|Setting lport eae098c6-7562-4d8f-adc3-b7172a1c035d up in Southbound
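At this point ovn-controller has claimed the lport, stamped ovn-installed on the OVS interface, and marked the binding up in the Southbound database. A sketch for verifying that binding state by hand (assumes ovn-sbctl on this host is pointed at the same SB database; quoting of the value may vary by shell and version):

    import subprocess

    # Query the Port_Binding row the controller just updated.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding",
         "logical_port=eae098c6-7562-4d8f-adc3-b7172a1c035d"],
        capture_output=True, text=True, check=True)
    print(out.stdout)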
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.735 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.735 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba29daa-66e2-40f0-8f72-b99d24c822c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.750 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c449a575-f40f-4cd3-b6dd-54a9ded91a41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62d33de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:c1:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524923, 'reachable_time': 44683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244631, 'error': None, 'target': 'ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.765 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bafb4750-f450-460b-9f94-3ba87ea59a4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:c1e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524923, 'tstamp': 524923}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244632, 'error': None, 'target': 'ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.779 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[123c608f-f9e7-4f6c-85eb-92c18616b77b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62d33de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:c1:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524923, 'reachable_time': 44683, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 244633, 'error': None, 'target': 'ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
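The privsep replies above are pyroute2 netlink messages (RTM_NEWLINK/RTM_NEWADDR dumps of the veth leg tapd62d33de-d1) fetched from inside the ovnmeta namespace on the agent's behalf. A sketch reading the same IFLA_* attributes directly, assuming pyroute2 is available, the namespace still exists, and the caller has root:

    from pyroute2 import NetNS

    # Enumerate links inside the metadata namespace created above; the
    # attribute names match the IFLA_* keys in the logged dumps.
    with NetNS("ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f") as ns:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"),
                  link.get_attr("IFLA_OPERSTATE"),
                  link.get_attr("IFLA_ADDRESS"))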
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.809 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a7665af4-8404-4b1a-963c-e70d09989476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.866 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a5d741-e12f-4384-a86a-f16e85ce2dc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.868 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62d33de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.868 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.868 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd62d33de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.870 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-1 kernel: tapd62d33de-d0: entered promiscuous mode
Dec 06 07:09:29 compute-1 NetworkManager[49031]: <info>  [1765004969.8721] manager: (tapd62d33de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.873 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd62d33de-d0, col_values=(('external_ids', {'iface-id': '0d779b99-8ce3-4716-88ae-2ec54698edd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.874 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-1 ovn_controller[130279]: 2025-12-06T07:09:29Z|00167|binding|INFO|Releasing lport 0d779b99-8ce3-4716-88ae-2ec54698edd7 from this chassis (sb_readonly=0)
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.876 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.876 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d62d33de-d7cc-4103-8a83-88ba86c97b8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d62d33de-d7cc-4103-8a83-88ba86c97b8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.877 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8a7b2b-8a01-4f3b-a322-54fb0d8fc95c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.878 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-d62d33de-d7cc-4103-8a83-88ba86c97b8f
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/d62d33de-d7cc-4103-8a83-88ba86c97b8f.pid.haproxy
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID d62d33de-d7cc-4103-8a83-88ba86c97b8f
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.878 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'env', 'PROCESS_TAG=haproxy-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d62d33de-d7cc-4103-8a83-88ba86c97b8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
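The generated haproxy config above binds 169.254.169.254:80 inside the ovnmeta namespace, proxies to the Unix socket at /var/lib/neutron/metadata_proxy, and tags each request with X-OVN-Network-ID so the metadata agent can resolve the caller's network. Once the proxy is up, the standard Nova metadata document is reachable from inside the guest, roughly as in this sketch:

    import json
    import urllib.request

    # Run from inside the guest; /openstack/latest/meta_data.json is the
    # standard Nova metadata API path served behind the proxy above.
    with urllib.request.urlopen(
            "http://169.254.169.254/openstack/latest/meta_data.json",
            timeout=10) as resp:
        meta = json.load(resp)
    print(meta.get("uuid"), meta.get("hostname"))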
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.890 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:29.989 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:09:29 compute-1 nova_compute[226101]: 2025-12-06 07:09:29.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.006 226109 DEBUG nova.compute.manager [req-f747a80d-c98f-4b4c-864e-fd8788a7d1a2 req-02edbcdd-05be-4643-9775-c74f737952c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Received event network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.006 226109 DEBUG oslo_concurrency.lockutils [req-f747a80d-c98f-4b4c-864e-fd8788a7d1a2 req-02edbcdd-05be-4643-9775-c74f737952c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.007 226109 DEBUG oslo_concurrency.lockutils [req-f747a80d-c98f-4b4c-864e-fd8788a7d1a2 req-02edbcdd-05be-4643-9775-c74f737952c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.007 226109 DEBUG oslo_concurrency.lockutils [req-f747a80d-c98f-4b4c-864e-fd8788a7d1a2 req-02edbcdd-05be-4643-9775-c74f737952c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.007 226109 DEBUG nova.compute.manager [req-f747a80d-c98f-4b4c-864e-fd8788a7d1a2 req-02edbcdd-05be-4643-9775-c74f737952c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Processing event network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:09:30 compute-1 ceph-mon[81689]: osdmap e205: 3 total, 3 up, 3 in
Dec 06 07:09:30 compute-1 podman[244673]: 2025-12-06 07:09:30.282095657 +0000 UTC m=+0.104156316 container create 7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:09:30 compute-1 podman[244673]: 2025-12-06 07:09:30.197585313 +0000 UTC m=+0.019645992 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:09:30 compute-1 systemd[1]: Started libpod-conmon-7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711.scope.
Dec 06 07:09:30 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:09:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d3054a13b19db148096165436fd7d360b3917e3b899b6370dd1ef13f9de37a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:09:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.5 total, 600.0 interval
                                           Cumulative writes: 5834 writes, 31K keys, 5834 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 5834 writes, 5834 syncs, 1.00 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1686 writes, 8061 keys, 1686 commit groups, 1.0 writes per commit group, ingest: 16.63 MB, 0.03 MB/s
                                           Interval WAL: 1686 writes, 1686 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.4      2.26              0.11        17    0.133       0      0       0.0       0.0
                                             L6      1/0    9.27 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.7     40.4     33.4      4.32              0.42        16    0.270     85K   8920       0.0       0.0
                                            Sum      1/0    9.27 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.7     26.5     27.9      6.58              0.53        33    0.199     85K   8920       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7     35.7     36.6      1.25              0.14         8    0.156     25K   2540       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     40.4     33.4      4.32              0.42        16    0.270     85K   8920       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.8      2.21              0.11        16    0.138       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.038, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.18 GB write, 0.08 MB/s write, 0.17 GB read, 0.07 MB/s read, 6.6 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 18.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000137 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1036,17.63 MB,5.79878%) FilterBlock(33,246.48 KB,0.0791801%) IndexBlock(33,434.86 KB,0.139693%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
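[annotation] The block above is a routine RocksDB stats dump from the monitor's backing store: a 600-second interval with zero stalls and roughly 0.03 MB/s of WAL ingest; the file-read-latency histogram section is truncated in this capture. A minimal sketch of pulling the headline counters out of such a dump with a regex (the parsing approach is illustrative; the field names come from the dump itself):

    import re

    dump = ('Cumulative writes: 5834 writes, 31K keys, 5834 commit groups, '
            '1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s')
    m = re.search(r'Cumulative writes: (\d+) writes, (\S+) keys', dump)
    writes, keys = int(m.group(1)), m.group(2)
    print(writes, keys)  # 5834 31K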
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.429 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004970.429112, 4fe2e237-029b-49b8-8dcb-380c127f28f6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.430 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] VM Started (Lifecycle Event)
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.432 226109 DEBUG nova.compute.manager [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.435 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.438 226109 INFO nova.virt.libvirt.driver [-] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Instance spawned successfully.
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.439 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:09:30 compute-1 podman[244673]: 2025-12-06 07:09:30.450458497 +0000 UTC m=+0.272519176 container init 7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:09:30 compute-1 podman[244673]: 2025-12-06 07:09:30.455565775 +0000 UTC m=+0.277626434 container start 7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.466 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.471 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.475 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.475 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.476 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.476 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.477 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.477 226109 DEBUG nova.virt.libvirt.driver [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
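[annotation] The six "Found default for hw_* of ..." lines record the bus/model defaults libvirt chose for this guest; per the instance's system_metadata later in this log (image_hw_cdrom_bus='sata', image_hw_disk_bus='virtio', and so on) they are persisted as image_<property> keys so the instance keeps the same virtual devices across rebuilds. A minimal sketch of that registration step, assuming a plain dict for system_metadata (the helper name is illustrative, not Nova's):

    defaults = {
        'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb', 'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio', 'hw_vif_model': 'virtio',
    }

    def register_undefined_details(system_metadata, defaults):
        # Only fill properties the image did not define, mirroring the
        # "Attempting to register defaults ..." step logged above.
        for prop, value in defaults.items():
            system_metadata.setdefault('image_' + prop, value)
        return system_metadata

    print(register_undefined_details({'image_hw_disk_bus': 'virtio'}, defaults))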
Dec 06 07:09:30 compute-1 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[244714]: [NOTICE]   (244729) : New worker (244731) forked
Dec 06 07:09:30 compute-1 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[244714]: [NOTICE]   (244729) : Loading success.
Dec 06 07:09:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:30.508 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.509 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.509 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004970.4301498, 4fe2e237-029b-49b8-8dcb-380c127f28f6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.510 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] VM Paused (Lifecycle Event)
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.536 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.539 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765004970.4343371, 4fe2e237-029b-49b8-8dcb-380c127f28f6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.540 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] VM Resumed (Lifecycle Event)
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.562 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.565 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.571 226109 INFO nova.compute.manager [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Took 7.29 seconds to spawn the instance on the hypervisor.
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.572 226109 DEBUG nova.compute.manager [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.599 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.641 226109 INFO nova.compute.manager [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Took 8.20 seconds to build instance.
Dec 06 07:09:30 compute-1 nova_compute[226101]: 2025-12-06 07:09:30.661 226109 DEBUG oslo_concurrency.lockutils [None req-4b722127-09ab-4982-bb93-e62eccf1c09b 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:31.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:31.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
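[annotation] These radosgw triplets are load-balancer health probes: anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 arriving every two seconds and answered 200 within about a millisecond. A minimal sketch of a regex that splits the "beast" access-log line into fields (the pattern is illustrative, derived from the lines above):

    import re

    line = ('beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous '
            '[06/Dec/2025:07:09:31.176 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000026s')
    pat = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')
    print(pat.search(line).groupdict())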
Dec 06 07:09:31 compute-1 ceph-mon[81689]: pgmap v1496: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 237 op/s
Dec 06 07:09:32 compute-1 nova_compute[226101]: 2025-12-06 07:09:32.208 226109 DEBUG nova.compute.manager [req-f15c61d8-c242-48ef-aea9-7c074bc568ab req-f78e645a-17c1-4bfd-8511-448904383f22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Received event network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:09:32 compute-1 nova_compute[226101]: 2025-12-06 07:09:32.208 226109 DEBUG oslo_concurrency.lockutils [req-f15c61d8-c242-48ef-aea9-7c074bc568ab req-f78e645a-17c1-4bfd-8511-448904383f22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:32 compute-1 nova_compute[226101]: 2025-12-06 07:09:32.209 226109 DEBUG oslo_concurrency.lockutils [req-f15c61d8-c242-48ef-aea9-7c074bc568ab req-f78e645a-17c1-4bfd-8511-448904383f22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:32 compute-1 nova_compute[226101]: 2025-12-06 07:09:32.209 226109 DEBUG oslo_concurrency.lockutils [req-f15c61d8-c242-48ef-aea9-7c074bc568ab req-f78e645a-17c1-4bfd-8511-448904383f22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:32 compute-1 nova_compute[226101]: 2025-12-06 07:09:32.209 226109 DEBUG nova.compute.manager [req-f15c61d8-c242-48ef-aea9-7c074bc568ab req-f78e645a-17c1-4bfd-8511-448904383f22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] No waiting events found dispatching network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:09:32 compute-1 nova_compute[226101]: 2025-12-06 07:09:32.209 226109 WARNING nova.compute.manager [req-f15c61d8-c242-48ef-aea9-7c074bc568ab req-f78e645a-17c1-4bfd-8511-448904383f22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Received unexpected event network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d for instance with vm_state active and task_state None.
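[annotation] This warning is benign here: Neutron delivered network-vif-plugged twice. The first delivery at 07:09:30 satisfied the spawn's waiter, so when the duplicate arrives at 07:09:32, pop_instance_event finds no waiter ("No waiting events found dispatching") and Nova logs it as unexpected. A minimal sketch of that race, with a plain dict standing in for Nova's event bookkeeping:

    waiters = {'network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d': 'spawn'}

    def deliver(event):
        # Mirrors pop_instance_event: a hit wakes the waiter, a miss is
        # logged as an unexpected event.
        if waiters.pop(event, None) is None:
            print('WARNING: Received unexpected event', event)
        else:
            print('event', event, 'woke a waiter')

    deliver('network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d')  # 07:09:30
    deliver('network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d')  # 07:09:32 duplicate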
Dec 06 07:09:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e206 e206: 3 total, 3 up, 3 in
Dec 06 07:09:32 compute-1 ceph-mon[81689]: pgmap v1497: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 7.0 MiB/s wr, 189 op/s
Dec 06 07:09:32 compute-1 ceph-mon[81689]: osdmap e206: 3 total, 3 up, 3 in
Dec 06 07:09:33 compute-1 podman[244740]: 2025-12-06 07:09:33.111518103 +0000 UTC m=+0.103200481 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 07:09:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:33.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:33.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:33 compute-1 nova_compute[226101]: 2025-12-06 07:09:33.464 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:33 compute-1 nova_compute[226101]: 2025-12-06 07:09:33.818 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:35 compute-1 ceph-mon[81689]: pgmap v1499: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.4 MiB/s wr, 201 op/s
Dec 06 07:09:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:35.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:35.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:36 compute-1 ceph-mon[81689]: pgmap v1500: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 137 op/s
Dec 06 07:09:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:37.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:37.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e207 e207: 3 total, 3 up, 3 in
Dec 06 07:09:38 compute-1 ceph-mon[81689]: pgmap v1501: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 21 KiB/s wr, 158 op/s
Dec 06 07:09:38 compute-1 ceph-mon[81689]: osdmap e207: 3 total, 3 up, 3 in
Dec 06 07:09:38 compute-1 nova_compute[226101]: 2025-12-06 07:09:38.466 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:38 compute-1 nova_compute[226101]: 2025-12-06 07:09:38.820 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:39 compute-1 podman[244768]: 2025-12-06 07:09:39.064547806 +0000 UTC m=+0.047352971 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 06 07:09:39 compute-1 podman[244767]: 2025-12-06 07:09:39.115598645 +0000 UTC m=+0.090901328 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:09:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:39.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:39.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:40.510 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:40 compute-1 nova_compute[226101]: 2025-12-06 07:09:40.916 226109 DEBUG oslo_concurrency.lockutils [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "4fe2e237-029b-49b8-8dcb-380c127f28f6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:40 compute-1 nova_compute[226101]: 2025-12-06 07:09:40.916 226109 DEBUG oslo_concurrency.lockutils [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:40 compute-1 nova_compute[226101]: 2025-12-06 07:09:40.917 226109 DEBUG oslo_concurrency.lockutils [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:40 compute-1 nova_compute[226101]: 2025-12-06 07:09:40.917 226109 DEBUG oslo_concurrency.lockutils [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:40 compute-1 nova_compute[226101]: 2025-12-06 07:09:40.917 226109 DEBUG oslo_concurrency.lockutils [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:40 compute-1 nova_compute[226101]: 2025-12-06 07:09:40.918 226109 INFO nova.compute.manager [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Terminating instance
Dec 06 07:09:40 compute-1 nova_compute[226101]: 2025-12-06 07:09:40.919 226109 DEBUG nova.compute.manager [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:09:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:41.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:09:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2877826368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:09:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:09:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2877826368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:09:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:43.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:43 compute-1 kernel: tapeae098c6-75 (unregistering): left promiscuous mode
Dec 06 07:09:43 compute-1 NetworkManager[49031]: <info>  [1765004983.3941] device (tapeae098c6-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:09:43 compute-1 ovn_controller[130279]: 2025-12-06T07:09:43Z|00168|binding|INFO|Releasing lport eae098c6-7562-4d8f-adc3-b7172a1c035d from this chassis (sb_readonly=0)
Dec 06 07:09:43 compute-1 ovn_controller[130279]: 2025-12-06T07:09:43Z|00169|binding|INFO|Setting lport eae098c6-7562-4d8f-adc3-b7172a1c035d down in Southbound
Dec 06 07:09:43 compute-1 ovn_controller[130279]: 2025-12-06T07:09:43Z|00170|binding|INFO|Removing iface tapeae098c6-75 ovn-installed in OVS
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.402 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.404 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.422 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:43 compute-1 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000031.scope: Deactivated successfully.
Dec 06 07:09:43 compute-1 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000031.scope: Consumed 11.393s CPU time.
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.436 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:cb:4e 10.100.0.14'], port_security=['fa:16:3e:58:cb:4e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4fe2e237-029b-49b8-8dcb-380c127f28f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c41abd44bbf46f39df642d2a2cd19eb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bb6b7416-eb95-4ed4-85b0-4bb22735f53e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=934cdf1d-ea09-45f0-b1f2-a1da72094e2e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=eae098c6-7562-4d8f-adc3-b7172a1c035d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
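[annotation] The "Matched UPDATE: PortBindingUpdatedEvent(...)" line is ovsdbapp's row-event machinery firing as the port's chassis binding is cleared (up goes [True] to [False], chassis empties). A minimal sketch of such an event class, assuming ovsdbapp is available; the class body is illustrative, not Neutron's exact implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedSketch(row_event.RowEvent):
        def __init__(self):
            # Watch 'update' events on Port_Binding, as in the matched
            # event logged above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedSketch'

        def run(self, event, row, old):
            # The agent compares row.up/row.chassis against the old values
            # to decide whether the port was bound to or unbound from this
            # chassis.
            print('lport', row.logical_port, 'up:', row.up)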
Dec 06 07:09:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:43.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.438 139580 INFO neutron.agent.ovn.metadata.agent [-] Port eae098c6-7562-4d8f-adc3-b7172a1c035d in datapath d62d33de-d7cc-4103-8a83-88ba86c97b8f unbound from our chassis
Dec 06 07:09:43 compute-1 systemd-machined[190302]: Machine qemu-24-instance-00000031 terminated.
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.439 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d62d33de-d7cc-4103-8a83-88ba86c97b8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.439 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0e69ed-bf83-4a2d-8270-a99e4b8f3a99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.440 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f namespace which is not needed anymore
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.466 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.536 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.542 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:43 compute-1 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[244714]: [NOTICE]   (244729) : haproxy version is 2.8.14-c23fe91
Dec 06 07:09:43 compute-1 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[244714]: [NOTICE]   (244729) : path to executable is /usr/sbin/haproxy
Dec 06 07:09:43 compute-1 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[244714]: [WARNING]  (244729) : Exiting Master process...
Dec 06 07:09:43 compute-1 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[244714]: [ALERT]    (244729) : Current worker (244731) exited with code 143 (Terminated)
Dec 06 07:09:43 compute-1 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[244714]: [WARNING]  (244729) : All workers exited. Exiting... (0)
Dec 06 07:09:43 compute-1 systemd[1]: libpod-7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711.scope: Deactivated successfully.
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.553 226109 INFO nova.virt.libvirt.driver [-] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Instance destroyed successfully.
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.554 226109 DEBUG nova.objects.instance [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lazy-loading 'resources' on Instance uuid 4fe2e237-029b-49b8-8dcb-380c127f28f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:09:43 compute-1 ceph-mon[81689]: pgmap v1503: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 KiB/s wr, 121 op/s
Dec 06 07:09:43 compute-1 podman[244828]: 2025-12-06 07:09:43.555856236 +0000 UTC m=+0.042938332 container died 7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:09:43 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711-userdata-shm.mount: Deactivated successfully.
Dec 06 07:09:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-52d3054a13b19db148096165436fd7d360b3917e3b899b6370dd1ef13f9de37a-merged.mount: Deactivated successfully.
Dec 06 07:09:43 compute-1 podman[244828]: 2025-12-06 07:09:43.586795842 +0000 UTC m=+0.073877928 container cleanup 7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.586 226109 DEBUG nova.virt.libvirt.vif [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:09:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1876359925',display_name='tempest-VolumesAdminNegativeTest-server-1876359925',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1876359925',id=49,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:09:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c41abd44bbf46f39df642d2a2cd19eb',ramdisk_id='',reservation_id='r-qffxzxax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-201901816',owner_user_name='tempest-VolumesAdminNegativeTest-201901816-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:09:30Z,user_data=None,user_id='33518fed43cc4fdfbdce993ccb4cc360',uuid=4fe2e237-029b-49b8-8dcb-380c127f28f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "address": "fa:16:3e:58:cb:4e", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae098c6-75", "ovs_interfaceid": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.587 226109 DEBUG nova.network.os_vif_util [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converting VIF {"id": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "address": "fa:16:3e:58:cb:4e", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae098c6-75", "ovs_interfaceid": "eae098c6-7562-4d8f-adc3-b7172a1c035d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.588 226109 DEBUG nova.network.os_vif_util [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:cb:4e,bridge_name='br-int',has_traffic_filtering=True,id=eae098c6-7562-4d8f-adc3-b7172a1c035d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae098c6-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.588 226109 DEBUG os_vif [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:cb:4e,bridge_name='br-int',has_traffic_filtering=True,id=eae098c6-7562-4d8f-adc3-b7172a1c035d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae098c6-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.590 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.590 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeae098c6-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.591 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:43 compute-1 systemd[1]: libpod-conmon-7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711.scope: Deactivated successfully.
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.594 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.596 226109 INFO os_vif [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:cb:4e,bridge_name='br-int',has_traffic_filtering=True,id=eae098c6-7562-4d8f-adc3-b7172a1c035d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae098c6-75')
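[annotation] The unplug sequence above (convert the Nova VIF dict to a VIFOpenVSwitch object, then a DelPortCommand against br-int) is driven through the os-vif library. A minimal sketch of the call, assuming os-vif and its ovs plugin are installed and OVS is reachable; the field values are copied from the log, but this is not Nova's exact call site:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin referenced above

    net = network.Network(id='d62d33de-d7cc-4103-8a83-88ba86c97b8f',
                          bridge='br-int')
    v = vif.VIFOpenVSwitch(
        id='eae098c6-7562-4d8f-adc3-b7172a1c035d',
        address='fa:16:3e:58:cb:4e',
        network=net,
        plugin='ovs',
        bridge_name='br-int',
        vif_name='tapeae098c6-75')
    inst = instance_info.InstanceInfo(
        uuid='4fe2e237-029b-49b8-8dcb-380c127f28f6',
        name='instance-00000031')

    # Removes the tap port from br-int, i.e. the DelPortCommand and the
    # "Successfully unplugged vif" line logged above.
    os_vif.unplug(v, inst)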
Dec 06 07:09:43 compute-1 podman[244872]: 2025-12-06 07:09:43.653150834 +0000 UTC m=+0.041366268 container remove 7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.658 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[259ed0ec-6dc4-44df-9bcd-c2f9f4e49fb3]: (4, ('Sat Dec  6 07:09:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f (7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711)\n7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711\nSat Dec  6 07:09:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f (7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711)\n7e552e52acf056bf6d553c5a2b339932c8a2428dc05377dea5347dc2f396a711\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.660 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[24c8e047-c5dd-4fa0-9d81-81bb947e392e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.660 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62d33de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.662 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:43 compute-1 kernel: tapd62d33de-d0: left promiscuous mode
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.675 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.677 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb97515-45c4-4a73-9b25-d2f44ce84ea1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.693 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d66c58e4-6f89-4af0-b731-b2a08eb76b1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.693 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[283b23d6-773d-40d5-8675-d4828ec9c40d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.707 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f0077e3d-2c79-45ed-829f-651e903e659c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524916, 'reachable_time': 37165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244905, 'error': None, 'target': 'ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:43 compute-1 systemd[1]: run-netns-ovnmeta\x2dd62d33de\x2dd7cc\x2d4103\x2d8a83\x2d88ba86c97b8f.mount: Deactivated successfully.
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.711 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:09:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:09:43.712 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[95ec5f8a-2bc9-4056-980d-51f7494fe167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:43 compute-1 nova_compute[226101]: 2025-12-06 07:09:43.821 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:44 compute-1 nova_compute[226101]: 2025-12-06 07:09:44.041 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:44 compute-1 nova_compute[226101]: 2025-12-06 07:09:44.089 226109 DEBUG nova.compute.manager [req-d50223dd-0fe6-4893-9196-9a8300d1e9a4 req-7535e143-6a84-46fa-bf5a-baa528ab8170 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Received event network-vif-unplugged-eae098c6-7562-4d8f-adc3-b7172a1c035d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:09:44 compute-1 nova_compute[226101]: 2025-12-06 07:09:44.090 226109 DEBUG oslo_concurrency.lockutils [req-d50223dd-0fe6-4893-9196-9a8300d1e9a4 req-7535e143-6a84-46fa-bf5a-baa528ab8170 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:44 compute-1 nova_compute[226101]: 2025-12-06 07:09:44.090 226109 DEBUG oslo_concurrency.lockutils [req-d50223dd-0fe6-4893-9196-9a8300d1e9a4 req-7535e143-6a84-46fa-bf5a-baa528ab8170 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:44 compute-1 nova_compute[226101]: 2025-12-06 07:09:44.090 226109 DEBUG oslo_concurrency.lockutils [req-d50223dd-0fe6-4893-9196-9a8300d1e9a4 req-7535e143-6a84-46fa-bf5a-baa528ab8170 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:44 compute-1 nova_compute[226101]: 2025-12-06 07:09:44.091 226109 DEBUG nova.compute.manager [req-d50223dd-0fe6-4893-9196-9a8300d1e9a4 req-7535e143-6a84-46fa-bf5a-baa528ab8170 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] No waiting events found dispatching network-vif-unplugged-eae098c6-7562-4d8f-adc3-b7172a1c035d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:09:44 compute-1 nova_compute[226101]: 2025-12-06 07:09:44.091 226109 DEBUG nova.compute.manager [req-d50223dd-0fe6-4893-9196-9a8300d1e9a4 req-7535e143-6a84-46fa-bf5a-baa528ab8170 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Received event network-vif-unplugged-eae098c6-7562-4d8f-adc3-b7172a1c035d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:09:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:44 compute-1 ceph-mon[81689]: pgmap v1504: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 81 op/s
Dec 06 07:09:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2877826368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:09:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2877826368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:09:44 compute-1 ceph-mon[81689]: pgmap v1505: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 11 KiB/s wr, 70 op/s
Dec 06 07:09:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:45.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:45.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:46 compute-1 nova_compute[226101]: 2025-12-06 07:09:46.251 226109 DEBUG nova.compute.manager [req-de603f7f-e730-4b4e-83df-27c3ba4d808e req-de0c0768-81bd-4480-b0eb-cfab74a6fa4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Received event network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:09:46 compute-1 nova_compute[226101]: 2025-12-06 07:09:46.251 226109 DEBUG oslo_concurrency.lockutils [req-de603f7f-e730-4b4e-83df-27c3ba4d808e req-de0c0768-81bd-4480-b0eb-cfab74a6fa4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:46 compute-1 nova_compute[226101]: 2025-12-06 07:09:46.251 226109 DEBUG oslo_concurrency.lockutils [req-de603f7f-e730-4b4e-83df-27c3ba4d808e req-de0c0768-81bd-4480-b0eb-cfab74a6fa4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:46 compute-1 nova_compute[226101]: 2025-12-06 07:09:46.251 226109 DEBUG oslo_concurrency.lockutils [req-de603f7f-e730-4b4e-83df-27c3ba4d808e req-de0c0768-81bd-4480-b0eb-cfab74a6fa4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:46 compute-1 nova_compute[226101]: 2025-12-06 07:09:46.252 226109 DEBUG nova.compute.manager [req-de603f7f-e730-4b4e-83df-27c3ba4d808e req-de0c0768-81bd-4480-b0eb-cfab74a6fa4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] No waiting events found dispatching network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:09:46 compute-1 nova_compute[226101]: 2025-12-06 07:09:46.252 226109 WARNING nova.compute.manager [req-de603f7f-e730-4b4e-83df-27c3ba4d808e req-de0c0768-81bd-4480-b0eb-cfab74a6fa4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Received unexpected event network-vif-plugged-eae098c6-7562-4d8f-adc3-b7172a1c035d for instance with vm_state active and task_state deleting.
Dec 06 07:09:46 compute-1 ceph-mon[81689]: pgmap v1506: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 644 KiB/s rd, 11 KiB/s wr, 43 op/s
Dec 06 07:09:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:47.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:47 compute-1 nova_compute[226101]: 2025-12-06 07:09:47.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:48 compute-1 nova_compute[226101]: 2025-12-06 07:09:48.489 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:48 compute-1 nova_compute[226101]: 2025-12-06 07:09:48.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:48 compute-1 nova_compute[226101]: 2025-12-06 07:09:48.592 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:48 compute-1 nova_compute[226101]: 2025-12-06 07:09:48.623 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:48 compute-1 nova_compute[226101]: 2025-12-06 07:09:48.623 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:48 compute-1 nova_compute[226101]: 2025-12-06 07:09:48.623 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:48 compute-1 nova_compute[226101]: 2025-12-06 07:09:48.624 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:09:48 compute-1 nova_compute[226101]: 2025-12-06 07:09:48.624 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:48 compute-1 nova_compute[226101]: 2025-12-06 07:09:48.824 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.036 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:09:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/913651263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.072 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.150 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.151 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:09:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:49.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e208 e208: 3 total, 3 up, 3 in
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.277 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.278 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4701MB free_disk=20.83084487915039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.278 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.278 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.442 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 4fe2e237-029b-49b8-8dcb-380c127f28f6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.442 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.443 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:09:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:49.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:49 compute-1 nova_compute[226101]: 2025-12-06 07:09:49.559 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:09:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1104098664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:50 compute-1 nova_compute[226101]: 2025-12-06 07:09:50.000 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:50 compute-1 nova_compute[226101]: 2025-12-06 07:09:50.007 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:09:50 compute-1 nova_compute[226101]: 2025-12-06 07:09:50.032 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:09:50 compute-1 nova_compute[226101]: 2025-12-06 07:09:50.056 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:09:50 compute-1 nova_compute[226101]: 2025-12-06 07:09:50.057 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:50 compute-1 ceph-mon[81689]: pgmap v1507: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 12 KiB/s wr, 32 op/s
Dec 06 07:09:51 compute-1 nova_compute[226101]: 2025-12-06 07:09:51.057 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:51 compute-1 nova_compute[226101]: 2025-12-06 07:09:51.058 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:09:51 compute-1 nova_compute[226101]: 2025-12-06 07:09:51.095 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:09:51 compute-1 nova_compute[226101]: 2025-12-06 07:09:51.096 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:51 compute-1 nova_compute[226101]: 2025-12-06 07:09:51.096 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:51 compute-1 nova_compute[226101]: 2025-12-06 07:09:51.096 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:51 compute-1 nova_compute[226101]: 2025-12-06 07:09:51.096 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:09:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:51.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/913651263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:51 compute-1 ceph-mon[81689]: pgmap v1508: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 10 KiB/s wr, 28 op/s
Dec 06 07:09:51 compute-1 ceph-mon[81689]: osdmap e208: 3 total, 3 up, 3 in
Dec 06 07:09:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1104098664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/114967645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:51.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:51 compute-1 nova_compute[226101]: 2025-12-06 07:09:51.621 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:52 compute-1 nova_compute[226101]: 2025-12-06 07:09:52.013 226109 INFO nova.virt.libvirt.driver [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Deleting instance files /var/lib/nova/instances/4fe2e237-029b-49b8-8dcb-380c127f28f6_del
Dec 06 07:09:52 compute-1 nova_compute[226101]: 2025-12-06 07:09:52.014 226109 INFO nova.virt.libvirt.driver [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Deletion of /var/lib/nova/instances/4fe2e237-029b-49b8-8dcb-380c127f28f6_del complete
Dec 06 07:09:52 compute-1 nova_compute[226101]: 2025-12-06 07:09:52.064 226109 INFO nova.compute.manager [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Took 11.15 seconds to destroy the instance on the hypervisor.
Dec 06 07:09:52 compute-1 nova_compute[226101]: 2025-12-06 07:09:52.065 226109 DEBUG oslo.service.loopingcall [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:09:52 compute-1 nova_compute[226101]: 2025-12-06 07:09:52.066 226109 DEBUG nova.compute.manager [-] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:09:52 compute-1 nova_compute[226101]: 2025-12-06 07:09:52.066 226109 DEBUG nova.network.neutron [-] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:09:52 compute-1 ceph-mon[81689]: pgmap v1510: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 438 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 23 KiB/s wr, 29 op/s
Dec 06 07:09:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2420057684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.069 226109 DEBUG nova.network.neutron [-] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.101 226109 INFO nova.compute.manager [-] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Took 1.03 seconds to deallocate network for instance.
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.209 226109 DEBUG oslo_concurrency.lockutils [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.210 226109 DEBUG oslo_concurrency.lockutils [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:53.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e209 e209: 3 total, 3 up, 3 in
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.255 226109 DEBUG nova.compute.manager [req-3b1c1a18-8984-46bb-b369-3f82f8b2b20c req-b4a3410b-ab2f-4e51-a2af-6b075c8f117c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Received event network-vif-deleted-eae098c6-7562-4d8f-adc3-b7172a1c035d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.311 226109 DEBUG oslo_concurrency.processutils [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:53.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.594 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:09:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2932571782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.741 226109 DEBUG oslo_concurrency.processutils [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.745 226109 DEBUG nova.compute.provider_tree [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:09:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/599168457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:53 compute-1 ceph-mon[81689]: osdmap e209: 3 total, 3 up, 3 in
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.825 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:53 compute-1 nova_compute[226101]: 2025-12-06 07:09:53.992 226109 DEBUG nova.scheduler.client.report [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:09:54 compute-1 nova_compute[226101]: 2025-12-06 07:09:54.055 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:54 compute-1 nova_compute[226101]: 2025-12-06 07:09:54.079 226109 DEBUG oslo_concurrency.lockutils [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:54 compute-1 nova_compute[226101]: 2025-12-06 07:09:54.120 226109 INFO nova.scheduler.client.report [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Deleted allocations for instance 4fe2e237-029b-49b8-8dcb-380c127f28f6
Dec 06 07:09:54 compute-1 nova_compute[226101]: 2025-12-06 07:09:54.234 226109 DEBUG oslo_concurrency.lockutils [None req-29edb317-351f-407e-941e-c6c09095a50c 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "4fe2e237-029b-49b8-8dcb-380c127f28f6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:55.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:55.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:55 compute-1 ceph-mon[81689]: pgmap v1511: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 414 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 25 KiB/s wr, 53 op/s
Dec 06 07:09:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2932571782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2339433088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:56 compute-1 nova_compute[226101]: 2025-12-06 07:09:56.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e210 e210: 3 total, 3 up, 3 in
Dec 06 07:09:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:57.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:57.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:58 compute-1 nova_compute[226101]: 2025-12-06 07:09:58.551 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004983.5501618, 4fe2e237-029b-49b8-8dcb-380c127f28f6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:09:58 compute-1 nova_compute[226101]: 2025-12-06 07:09:58.551 226109 INFO nova.compute.manager [-] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] VM Stopped (Lifecycle Event)
Dec 06 07:09:58 compute-1 nova_compute[226101]: 2025-12-06 07:09:58.581 226109 DEBUG nova.compute.manager [None req-d4f9ecbf-385d-434a-a0ef-9bc40435a0f8 - - - - - -] [instance: 4fe2e237-029b-49b8-8dcb-380c127f28f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:58 compute-1 nova_compute[226101]: 2025-12-06 07:09:58.596 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:58 compute-1 nova_compute[226101]: 2025-12-06 07:09:58.827 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:58 compute-1 ceph-mon[81689]: pgmap v1513: 305 pgs: 305 active+clean; 372 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 31 KiB/s wr, 54 op/s
Dec 06 07:09:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e211 e211: 3 total, 3 up, 3 in
Dec 06 07:09:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:59.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:09:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:59.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:00 compute-1 ceph-mon[81689]: osdmap e210: 3 total, 3 up, 3 in
Dec 06 07:10:00 compute-1 ceph-mon[81689]: pgmap v1515: 305 pgs: 305 active+clean; 341 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 34 KiB/s wr, 114 op/s
Dec 06 07:10:00 compute-1 ceph-mon[81689]: osdmap e211: 3 total, 3 up, 3 in
Dec 06 07:10:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:01.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:01 compute-1 ceph-mon[81689]: pgmap v1517: 305 pgs: 305 active+clean; 341 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 85 op/s
Dec 06 07:10:01 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 07:10:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:01.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:10:01.627 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:10:01.628 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:10:01.628 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:02 compute-1 ceph-mon[81689]: pgmap v1518: 305 pgs: 305 active+clean; 295 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 3.8 KiB/s wr, 81 op/s
Dec 06 07:10:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1088743501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:10:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e212 e212: 3 total, 3 up, 3 in
Dec 06 07:10:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:10:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:03.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:10:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:03.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:03 compute-1 nova_compute[226101]: 2025-12-06 07:10:03.600 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1088743501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:10:03 compute-1 ceph-mon[81689]: osdmap e212: 3 total, 3 up, 3 in
Dec 06 07:10:03 compute-1 nova_compute[226101]: 2025-12-06 07:10:03.875 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:04 compute-1 podman[244974]: 2025-12-06 07:10:04.095116876 +0000 UTC m=+0.080122716 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec 06 07:10:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:04 compute-1 ceph-mon[81689]: pgmap v1519: 305 pgs: 305 active+clean; 279 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 4.7 KiB/s wr, 94 op/s
Dec 06 07:10:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:05.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:05.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:07 compute-1 ceph-mon[81689]: pgmap v1521: 305 pgs: 305 active+clean; 236 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.1 KiB/s wr, 58 op/s
Dec 06 07:10:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:07.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:07.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:08 compute-1 nova_compute[226101]: 2025-12-06 07:10:08.602 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:08 compute-1 nova_compute[226101]: 2025-12-06 07:10:08.877 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e213 e213: 3 total, 3 up, 3 in
Dec 06 07:10:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:09.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:09.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:10 compute-1 podman[245000]: 2025-12-06 07:10:10.066611127 +0000 UTC m=+0.053558458 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 07:10:10 compute-1 podman[245001]: 2025-12-06 07:10:10.073036691 +0000 UTC m=+0.056113498 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:10:10 compute-1 ceph-mon[81689]: pgmap v1522: 305 pgs: 305 active+clean; 144 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 3.2 KiB/s wr, 93 op/s
Dec 06 07:10:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3796797556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:11 compute-1 nova_compute[226101]: 2025-12-06 07:10:11.068 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:11.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:11 compute-1 nova_compute[226101]: 2025-12-06 07:10:11.334 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:10:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:11.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:10:11 compute-1 ceph-mon[81689]: osdmap e213: 3 total, 3 up, 3 in
Dec 06 07:10:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3597249496' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:10:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3597249496' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:10:11 compute-1 ceph-mon[81689]: pgmap v1524: 305 pgs: 305 active+clean; 144 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.9 KiB/s wr, 77 op/s
Dec 06 07:10:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:10:12.353 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:10:12 compute-1 nova_compute[226101]: 2025-12-06 07:10:12.354 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:10:12.354 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:10:12 compute-1 ceph-mon[81689]: pgmap v1525: 305 pgs: 305 active+clean; 121 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 3.1 KiB/s wr, 81 op/s
Dec 06 07:10:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2476187685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:13.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:13.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:13 compute-1 nova_compute[226101]: 2025-12-06 07:10:13.656 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:13 compute-1 nova_compute[226101]: 2025-12-06 07:10:13.880 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:15.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:10:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:15.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:10:15 compute-1 ceph-mon[81689]: pgmap v1526: 305 pgs: 305 active+clean; 121 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 2.9 KiB/s wr, 69 op/s
Dec 06 07:10:17 compute-1 ceph-mon[81689]: pgmap v1527: 305 pgs: 305 active+clean; 91 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.3 KiB/s wr, 61 op/s
Dec 06 07:10:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:17.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:10:17.356 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:10:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:17.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2285027204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:18 compute-1 ceph-mon[81689]: pgmap v1528: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.7 KiB/s wr, 53 op/s
Dec 06 07:10:18 compute-1 nova_compute[226101]: 2025-12-06 07:10:18.657 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:18 compute-1 nova_compute[226101]: 2025-12-06 07:10:18.882 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:19.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:19.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:20 compute-1 ceph-mon[81689]: pgmap v1529: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.6 KiB/s wr, 51 op/s
Dec 06 07:10:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:21.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:21.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e214 e214: 3 total, 3 up, 3 in
Dec 06 07:10:22 compute-1 ceph-mon[81689]: pgmap v1530: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.7 KiB/s wr, 50 op/s
Dec 06 07:10:22 compute-1 ceph-mon[81689]: osdmap e214: 3 total, 3 up, 3 in
Dec 06 07:10:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:10:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:23.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:10:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:23.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:23 compute-1 nova_compute[226101]: 2025-12-06 07:10:23.659 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:23 compute-1 nova_compute[226101]: 2025-12-06 07:10:23.883 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:24 compute-1 ceph-mon[81689]: pgmap v1532: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 KiB/s wr, 38 op/s
Dec 06 07:10:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e215 e215: 3 total, 3 up, 3 in
Dec 06 07:10:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:10:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:25.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:10:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:25.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:26 compute-1 ceph-mon[81689]: osdmap e215: 3 total, 3 up, 3 in
Dec 06 07:10:26 compute-1 ceph-mon[81689]: pgmap v1534: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Dec 06 07:10:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e216 e216: 3 total, 3 up, 3 in
Dec 06 07:10:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:27.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:27.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:28 compute-1 sudo[245038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:28 compute-1 sudo[245038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:28 compute-1 sudo[245038]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:28 compute-1 ceph-mon[81689]: osdmap e216: 3 total, 3 up, 3 in
Dec 06 07:10:28 compute-1 ceph-mon[81689]: pgmap v1536: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 3.8 KiB/s wr, 70 op/s
Dec 06 07:10:28 compute-1 sudo[245063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:10:28 compute-1 sudo[245063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:28 compute-1 sudo[245063]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:28 compute-1 sudo[245088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:28 compute-1 sudo[245088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:28 compute-1 sudo[245088]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:28 compute-1 sudo[245113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:10:28 compute-1 sudo[245113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:28 compute-1 sudo[245113]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:28 compute-1 nova_compute[226101]: 2025-12-06 07:10:28.661 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:28 compute-1 nova_compute[226101]: 2025-12-06 07:10:28.886 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:29.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:29.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:10:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:10:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:10:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:10:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:10:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:10:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:10:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:30 compute-1 ceph-mon[81689]: pgmap v1537: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 3.1 KiB/s wr, 56 op/s
Dec 06 07:10:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:31.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:10:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:31.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:10:32 compute-1 ceph-mon[81689]: pgmap v1538: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 4.2 KiB/s wr, 82 op/s
Dec 06 07:10:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e217 e217: 3 total, 3 up, 3 in
Dec 06 07:10:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:33.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:33.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:33 compute-1 nova_compute[226101]: 2025-12-06 07:10:33.663 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:33 compute-1 nova_compute[226101]: 2025-12-06 07:10:33.886 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:34 compute-1 ceph-mon[81689]: osdmap e217: 3 total, 3 up, 3 in
Dec 06 07:10:34 compute-1 ceph-mon[81689]: pgmap v1540: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 4.0 KiB/s wr, 77 op/s
Dec 06 07:10:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:35 compute-1 sudo[245168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:35 compute-1 sudo[245168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:35 compute-1 sudo[245168]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:35 compute-1 sudo[245199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:10:35 compute-1 sudo[245199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:35 compute-1 sudo[245199]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:35 compute-1 podman[245191]: 2025-12-06 07:10:35.094966147 +0000 UTC m=+0.079024638 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:10:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:35.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:35.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:37.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:37 compute-1 ceph-mon[81689]: pgmap v1541: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 06 07:10:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:37.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 e218: 3 total, 3 up, 3 in
Dec 06 07:10:38 compute-1 nova_compute[226101]: 2025-12-06 07:10:38.664 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:38 compute-1 nova_compute[226101]: 2025-12-06 07:10:38.888 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:39 compute-1 ceph-mon[81689]: pgmap v1542: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 06 07:10:39 compute-1 ceph-mon[81689]: osdmap e218: 3 total, 3 up, 3 in
Dec 06 07:10:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:39.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:10:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:39.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:10:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:40 compute-1 ceph-mon[81689]: pgmap v1544: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 383 B/s wr, 1 op/s
Dec 06 07:10:41 compute-1 podman[245245]: 2025-12-06 07:10:41.067521088 +0000 UTC m=+0.045459860 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:10:41 compute-1 podman[245244]: 2025-12-06 07:10:41.07206242 +0000 UTC m=+0.053739993 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:10:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:41.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:10:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:41.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:10:42 compute-1 ceph-mon[81689]: pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:10:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:43.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:10:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:43.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:43 compute-1 nova_compute[226101]: 2025-12-06 07:10:43.667 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:43 compute-1 nova_compute[226101]: 2025-12-06 07:10:43.891 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:45.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:45 compute-1 ceph-mon[81689]: pgmap v1546: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:45.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:46 compute-1 ceph-mon[81689]: pgmap v1547: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:47.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:47.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:48 compute-1 nova_compute[226101]: 2025-12-06 07:10:48.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:48 compute-1 ceph-mon[81689]: pgmap v1548: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:48 compute-1 nova_compute[226101]: 2025-12-06 07:10:48.618 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:48 compute-1 nova_compute[226101]: 2025-12-06 07:10:48.618 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:48 compute-1 nova_compute[226101]: 2025-12-06 07:10:48.618 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:48 compute-1 nova_compute[226101]: 2025-12-06 07:10:48.618 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:10:48 compute-1 nova_compute[226101]: 2025-12-06 07:10:48.619 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:10:48 compute-1 nova_compute[226101]: 2025-12-06 07:10:48.669 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:48 compute-1 nova_compute[226101]: 2025-12-06 07:10:48.892 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:10:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/964958267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:49 compute-1 nova_compute[226101]: 2025-12-06 07:10:49.037 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:10:49 compute-1 nova_compute[226101]: 2025-12-06 07:10:49.190 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:10:49 compute-1 nova_compute[226101]: 2025-12-06 07:10:49.192 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4778MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:10:49 compute-1 nova_compute[226101]: 2025-12-06 07:10:49.192 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:49 compute-1 nova_compute[226101]: 2025-12-06 07:10:49.192 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:49 compute-1 nova_compute[226101]: 2025-12-06 07:10:49.300 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:10:49 compute-1 nova_compute[226101]: 2025-12-06 07:10:49.300 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:10:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:49.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:49 compute-1 nova_compute[226101]: 2025-12-06 07:10:49.332 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:10:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:10:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:49.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:10:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:10:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:51.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:10:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:51.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:10:51.735 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:10:51 compute-1 nova_compute[226101]: 2025-12-06 07:10:51.735 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:10:51.737 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:10:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1779807841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/964958267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:10:52 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1803060845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:52 compute-1 nova_compute[226101]: 2025-12-06 07:10:52.901 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:10:52 compute-1 nova_compute[226101]: 2025-12-06 07:10:52.907 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:10:52 compute-1 nova_compute[226101]: 2025-12-06 07:10:52.934 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:10:52 compute-1 nova_compute[226101]: 2025-12-06 07:10:52.953 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:10:52 compute-1 nova_compute[226101]: 2025-12-06 07:10:52.953 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:53.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:53 compute-1 ceph-mon[81689]: pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:53 compute-1 ceph-mon[81689]: pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Dec 06 07:10:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1779705932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1803060845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2596050169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:53.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.671 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.893 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.953 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.953 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.954 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.954 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.991 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.991 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.992 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.992 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.993 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:53 compute-1 nova_compute[226101]: 2025-12-06 07:10:53.993 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:10:54 compute-1 nova_compute[226101]: 2025-12-06 07:10:54.621 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:55.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:55.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:55 compute-1 nova_compute[226101]: 2025-12-06 07:10:55.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:55 compute-1 ceph-mon[81689]: pgmap v1551: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Dec 06 07:10:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4067379754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:56 compute-1 ceph-mon[81689]: pgmap v1552: 305 pgs: 305 active+clean; 56 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 841 KiB/s wr, 13 op/s
Dec 06 07:10:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2114790814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:10:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:57.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:10:57 compute-1 ovn_controller[130279]: 2025-12-06T07:10:57Z|00171|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:10:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:57.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:57 compute-1 nova_compute[226101]: 2025-12-06 07:10:57.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1615937358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:10:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1468335041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:10:58 compute-1 nova_compute[226101]: 2025-12-06 07:10:58.673 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:58 compute-1 nova_compute[226101]: 2025-12-06 07:10:58.894 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:59 compute-1 ceph-mon[81689]: pgmap v1553: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:10:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:59.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:10:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:10:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:59.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:10:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:00 compute-1 ceph-mon[81689]: pgmap v1554: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:11:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:00.739 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
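Here the metadata agent checkpoints the southbound config sequence number it has processed by writing external_ids on its own Chassis_Private row. Through ovsdbapp the logged transaction corresponds to roughly the following (sketch: `sb_idl` stands in for a connected OVN southbound API object; the record UUID and value are copied from the log):

    # Hypothetical: 'sb_idl' is a connected ovsdbapp OVN_Southbound backend.
    sb_idl.db_set(
        'Chassis_Private',
        '03fe054d-d727-4af3-9c5e-92e57505f242',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),
    ).execute(check_error=True)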
Dec 06 07:11:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:01.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:01.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:01.628 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:01.628 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:01.628 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:02 compute-1 ceph-mon[81689]: pgmap v1555: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 127 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:11:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:03.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:03.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:03 compute-1 nova_compute[226101]: 2025-12-06 07:11:03.674 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:03 compute-1 nova_compute[226101]: 2025-12-06 07:11:03.896 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:05 compute-1 ceph-mon[81689]: pgmap v1556: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 452 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Dec 06 07:11:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:05.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:05.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:06 compute-1 podman[245328]: 2025-12-06 07:11:06.087120622 +0000 UTC m=+0.071763799 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
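The podman records are periodic container health checks: podman runs the configured /openstack/healthcheck test inside each EDPM-managed container and logs health_status and the failing streak. The same check can be triggered on demand, e.g. (sketch):

    import subprocess

    # 'podman healthcheck run' executes the container's configured test;
    # exit code 0 means healthy, non-zero means the check failed.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_controller'],
        capture_output=True, text=True)
    print('healthy' if result.returncode == 0 else result.stderr.strip())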
Dec 06 07:11:06 compute-1 ceph-mon[81689]: pgmap v1557: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Dec 06 07:11:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:07.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:07.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3367998681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.072 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.073 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.118 226109 DEBUG nova.compute.manager [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.208 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.208 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.214 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.214 226109 INFO nova.compute.claims [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.334 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.727 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:11:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4167333267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.757 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.763 226109 DEBUG nova.compute.provider_tree [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.782 226109 DEBUG nova.scheduler.client.report [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.814 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.814 226109 DEBUG nova.compute.manager [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.861 226109 DEBUG nova.compute.manager [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.862 226109 DEBUG nova.network.neutron [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.880 226109 INFO nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.898 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:08 compute-1 nova_compute[226101]: 2025-12-06 07:11:08.901 226109 DEBUG nova.compute.manager [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:11:08 compute-1 ceph-mon[81689]: pgmap v1558: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 988 KiB/s wr, 88 op/s
Dec 06 07:11:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3960716737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:11:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3960716737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:11:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4167333267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.021 226109 DEBUG nova.compute.manager [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.022 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.022 226109 INFO nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Creating image(s)
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.046 226109 DEBUG nova.storage.rbd_utils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.071 226109 DEBUG nova.storage.rbd_utils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.100 226109 DEBUG nova.storage.rbd_utils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:09.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:09.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.740 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.766 226109 DEBUG nova.policy [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bdd7994b0ebb4035a373b6560aa7dbcf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.817 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
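nova guards qemu-img against malformed images by running it under oslo.concurrency's prlimit shim: the `--as=1073741824 --cpu=30` in the logged command caps the address space at 1 GiB and CPU time at 30 s. The same invocation via the library (paths copied from the log):

    from oslo_concurrency import processutils

    # execute(prlimit=...) produces exactly the
    # 'python3 -m oslo_concurrency.prlimit --as=... --cpu=...' wrapper above.
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/'
        '890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))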
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.818 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.819 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.819 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.843 226109 DEBUG nova.storage.rbd_utils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:09 compute-1 nova_compute[226101]: 2025-12-06 07:11:09.847 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:10 compute-1 nova_compute[226101]: 2025-12-06 07:11:10.138 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:10 compute-1 nova_compute[226101]: 2025-12-06 07:11:10.214 226109 DEBUG nova.storage.rbd_utils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] resizing rbd image ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
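For the RBD image backend, "Creating image(s)" means importing the cached base file into the vms pool and then resizing the imported image up to the flavor's root disk: 1073741824 bytes is 1 GiB, matching root_gb=1 on the m1.nano flavor that appears further down. The resize step via the rbd/rados Python bindings (sketch, same pool and image names as the log):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           'ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk') as img:
                img.resize(1073741824)  # grow to the flavor's 1 GiB root disk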
Dec 06 07:11:10 compute-1 nova_compute[226101]: 2025-12-06 07:11:10.326 226109 DEBUG nova.objects.instance [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'migration_context' on Instance uuid ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:10 compute-1 nova_compute[226101]: 2025-12-06 07:11:10.373 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:11:10 compute-1 nova_compute[226101]: 2025-12-06 07:11:10.373 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Ensure instance console log exists: /var/lib/nova/instances/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:11:10 compute-1 nova_compute[226101]: 2025-12-06 07:11:10.374 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:10 compute-1 nova_compute[226101]: 2025-12-06 07:11:10.374 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:10 compute-1 nova_compute[226101]: 2025-12-06 07:11:10.374 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:10 compute-1 ceph-mon[81689]: pgmap v1559: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 06 07:11:11 compute-1 nova_compute[226101]: 2025-12-06 07:11:11.287 226109 DEBUG nova.network.neutron [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Successfully created port: 91927e49-c8f1-4749-a7b3-ec4cd118677a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:11:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:11.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:11.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:12 compute-1 podman[245543]: 2025-12-06 07:11:12.055110153 +0000 UTC m=+0.043604248 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:11:12 compute-1 podman[245542]: 2025-12-06 07:11:12.105375471 +0000 UTC m=+0.086399675 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:11:12 compute-1 nova_compute[226101]: 2025-12-06 07:11:12.819 226109 DEBUG nova.network.neutron [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Successfully updated port: 91927e49-c8f1-4749-a7b3-ec4cd118677a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:11:12 compute-1 nova_compute[226101]: 2025-12-06 07:11:12.836 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "refresh_cache-ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:11:12 compute-1 nova_compute[226101]: 2025-12-06 07:11:12.836 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquired lock "refresh_cache-ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:11:12 compute-1 nova_compute[226101]: 2025-12-06 07:11:12.836 226109 DEBUG nova.network.neutron [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:11:12 compute-1 ceph-mon[81689]: pgmap v1560: 305 pgs: 305 active+clean; 137 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 135 op/s
Dec 06 07:11:13 compute-1 nova_compute[226101]: 2025-12-06 07:11:13.043 226109 DEBUG nova.compute.manager [req-34810cea-727c-4e44-b8db-ebcf332f6bf1 req-e78c7603-9029-4aa5-b77a-d2f7b64c0a7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Received event network-changed-91927e49-c8f1-4749-a7b3-ec4cd118677a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:13 compute-1 nova_compute[226101]: 2025-12-06 07:11:13.044 226109 DEBUG nova.compute.manager [req-34810cea-727c-4e44-b8db-ebcf332f6bf1 req-e78c7603-9029-4aa5-b77a-d2f7b64c0a7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Refreshing instance network info cache due to event network-changed-91927e49-c8f1-4749-a7b3-ec4cd118677a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:11:13 compute-1 nova_compute[226101]: 2025-12-06 07:11:13.044 226109 DEBUG oslo_concurrency.lockutils [req-34810cea-727c-4e44-b8db-ebcf332f6bf1 req-e78c7603-9029-4aa5-b77a-d2f7b64c0a7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:11:13 compute-1 nova_compute[226101]: 2025-12-06 07:11:13.079 226109 DEBUG nova.network.neutron [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:11:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:13.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:13 compute-1 nova_compute[226101]: 2025-12-06 07:11:13.729 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:13.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:13 compute-1 nova_compute[226101]: 2025-12-06 07:11:13.900 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.498 226109 DEBUG nova.network.neutron [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Updating instance_info_cache with network_info: [{"id": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "address": "fa:16:3e:16:ba:15", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91927e49-c8", "ovs_interfaceid": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
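The cached network_info is plain JSON, so the addressing and binding details are directly readable: one fixed IP on a /28, MTU 1442 (1500 minus Geneve tunnel overhead), bound by the ovn driver. Reading the interesting fields from a trimmed copy of the entry above:

    vif = {
        "address": "fa:16:3e:16:ba:15",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.12"}]}],
                    "meta": {"mtu": 1442}},
        "details": {"bound_drivers": {"0": "ovn"}},
    }
    print(vif["network"]["subnets"][0]["ips"][0]["address"],  # 10.100.0.12
          vif["network"]["meta"]["mtu"],                      # 1442
          vif["details"]["bound_drivers"]["0"])               # ovn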
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.529 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Releasing lock "refresh_cache-ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.529 226109 DEBUG nova.compute.manager [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Instance network_info: |[{"id": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "address": "fa:16:3e:16:ba:15", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91927e49-c8", "ovs_interfaceid": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.530 226109 DEBUG oslo_concurrency.lockutils [req-34810cea-727c-4e44-b8db-ebcf332f6bf1 req-e78c7603-9029-4aa5-b77a-d2f7b64c0a7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.530 226109 DEBUG nova.network.neutron [req-34810cea-727c-4e44-b8db-ebcf332f6bf1 req-e78c7603-9029-4aa5-b77a-d2f7b64c0a7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Refreshing network info cache for port 91927e49-c8f1-4749-a7b3-ec4cd118677a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.533 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Start _get_guest_xml network_info=[{"id": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "address": "fa:16:3e:16:ba:15", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91927e49-c8", "ovs_interfaceid": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.537 226109 WARNING nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.544 226109 DEBUG nova.virt.libvirt.host [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.544 226109 DEBUG nova.virt.libvirt.host [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.549 226109 DEBUG nova.virt.libvirt.host [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.550 226109 DEBUG nova.virt.libvirt.host [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
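These two probes decide how libvirt CPU limits can be enforced: the host exposes no cpu controller on cgroups v1 but does on v2. On a unified-hierarchy host the v2 check amounts to reading the root controllers file (sketch of the idea):

    # cgroup v2: the root cgroup.controllers file lists the
    # controllers available on the unified hierarchy.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        has_cpu = 'cpu' in f.read().split()
    print('CPU controller found' if has_cpu else 'CPU controller missing')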
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.551 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.551 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.551 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.551 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.552 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.552 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.552 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.552 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.552 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.553 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.553 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.553 226109 DEBUG nova.virt.hardware [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
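With all flavor and image limits at 0:0:0 against the 65536 hard caps, the only factorization of 1 vCPU is sockets=1, cores=1, threads=1, so exactly one topology survives. A simplified sketch of the enumeration step (Nova's actual version also applies a preference ordering, which this ignores):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate (sockets, cores, threads) triples whose product is vcpus,
        # mirroring the "Build topologies ... Got N possible topologies" lines.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)]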
Dec 06 07:11:14 compute-1 nova_compute[226101]: 2025-12-06 07:11:14.556 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:15 compute-1 ceph-mon[81689]: pgmap v1561: 305 pgs: 305 active+clean; 159 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 147 op/s
Dec 06 07:11:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:15.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:11:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4056302847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:15 compute-1 nova_compute[226101]: 2025-12-06 07:11:15.491 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.936s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
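That ~0.9 s subprocess is how the RBD backend learns the monitor addresses that later appear as <host> elements in the guest XML. A minimal sketch of the same lookup, assuming the usual monmap JSON layout ("mons" entries with a public_addr of the form ip:port/nonce):

    import json
    import subprocess

    def get_mon_addrs(client="openstack", conf="/etc/ceph/ceph.conf"):
        # Same command the driver runs above; --format=json makes it parseable.
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json", "--id", client, "--conf", conf])
        monmap = json.loads(out)
        # Strip the trailing /nonce, keeping "ip:port" for each monitor.
        return [mon["public_addr"].split("/")[0] for mon in monmap["mons"]]

    print(get_mon_addrs())  # e.g. ['192.168.122.100:6789', ...]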
Dec 06 07:11:15 compute-1 nova_compute[226101]: 2025-12-06 07:11:15.517 226109 DEBUG nova.storage.rbd_utils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:15 compute-1 nova_compute[226101]: 2025-12-06 07:11:15.521 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:15.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.008 226109 DEBUG nova.network.neutron [req-34810cea-727c-4e44-b8db-ebcf332f6bf1 req-e78c7603-9029-4aa5-b77a-d2f7b64c0a7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Updated VIF entry in instance network info cache for port 91927e49-c8f1-4749-a7b3-ec4cd118677a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.009 226109 DEBUG nova.network.neutron [req-34810cea-727c-4e44-b8db-ebcf332f6bf1 req-e78c7603-9029-4aa5-b77a-d2f7b64c0a7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Updating instance_info_cache with network_info: [{"id": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "address": "fa:16:3e:16:ba:15", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91927e49-c8", "ovs_interfaceid": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.024 226109 DEBUG oslo_concurrency.lockutils [req-34810cea-727c-4e44-b8db-ebcf332f6bf1 req-e78c7603-9029-4aa5-b77a-d2f7b64c0a7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:11:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:11:16 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1187363185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.409 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.888s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.411 226109 DEBUG nova.virt.libvirt.vif [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1629415623',display_name='tempest-ImagesTestJSON-server-1629415623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1629415623',id=52,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-0eyami9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:08Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "address": "fa:16:3e:16:ba:15", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91927e49-c8", "ovs_interfaceid": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.411 226109 DEBUG nova.network.os_vif_util [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "address": "fa:16:3e:16:ba:15", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91927e49-c8", "ovs_interfaceid": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.412 226109 DEBUG nova.network.os_vif_util [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:ba:15,bridge_name='br-int',has_traffic_filtering=True,id=91927e49-c8f1-4749-a7b3-ec4cd118677a,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91927e49-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.413 226109 DEBUG nova.objects.instance [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'pci_devices' on Instance uuid ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.431 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:11:16 compute-1 nova_compute[226101]:   <uuid>ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d</uuid>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   <name>instance-00000034</name>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <nova:name>tempest-ImagesTestJSON-server-1629415623</nova:name>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:11:14</nova:creationTime>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <nova:user uuid="bdd7994b0ebb4035a373b6560aa7dbcf">tempest-ImagesTestJSON-134159412-project-member</nova:user>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <nova:project uuid="af7365adc05f4624a08a71cd5a77ada6">tempest-ImagesTestJSON-134159412</nova:project>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <nova:port uuid="91927e49-c8f1-4749-a7b3-ec4cd118677a">
Dec 06 07:11:16 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <system>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <entry name="serial">ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d</entry>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <entry name="uuid">ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d</entry>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     </system>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   <os>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   </os>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   <features>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   </features>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk">
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       </source>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk.config">
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       </source>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:11:16 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:16:ba:15"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <target dev="tap91927e49-c8"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d/console.log" append="off"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <video>
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     </video>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:11:16 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:11:16 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:11:16 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:11:16 compute-1 nova_compute[226101]: </domain>
Dec 06 07:11:16 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
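The finished domain XML attaches both the root disk and the config-drive CD-ROM as RBD network disks against the three monitors found earlier. A small sketch that parses a saved copy of the document and lists each disk's target, protocol, image, and monitor hosts (the domain.xml filename is an assumption):

    import xml.etree.ElementTree as ET

    # Assumes the <domain> document logged above was saved to domain.xml.
    dom = ET.parse("domain.xml").getroot()
    for disk in dom.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        hosts = [h.get("name") for h in src.findall("host")]
        print(tgt.get("dev"), src.get("protocol"), src.get("name"), hosts)
    # vda rbd vms/ea037582-..._disk ['192.168.122.100', '192.168.122.102', '192.168.122.101']
    # sda rbd vms/ea037582-..._disk.config ['192.168.122.100', '192.168.122.102', '192.168.122.101']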
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.433 226109 DEBUG nova.compute.manager [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Preparing to wait for external event network-vif-plugged-91927e49-c8f1-4749-a7b3-ec4cd118677a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.433 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.433 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.434 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
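The acquire/release pair above is oslo.concurrency's in-process lock, keyed "<instance uuid>-events", serializing registration of the network-vif-plugged event the manager is about to wait on. The same primitive in isolation:

    from oslo_concurrency import lockutils

    # Context-manager form, matching the acquired/released lines above:
    with lockutils.lock("ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events"):
        pass  # mutate the shared per-instance event registry here

    # Or as a decorator on the helper itself:
    @lockutils.synchronized("ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events")
    def _create_or_get_event():
        ...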
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.434 226109 DEBUG nova.virt.libvirt.vif [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1629415623',display_name='tempest-ImagesTestJSON-server-1629415623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1629415623',id=52,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-0eyami9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:08Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "address": "fa:16:3e:16:ba:15", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91927e49-c8", "ovs_interfaceid": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.435 226109 DEBUG nova.network.os_vif_util [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "address": "fa:16:3e:16:ba:15", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91927e49-c8", "ovs_interfaceid": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.435 226109 DEBUG nova.network.os_vif_util [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:ba:15,bridge_name='br-int',has_traffic_filtering=True,id=91927e49-c8f1-4749-a7b3-ec4cd118677a,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91927e49-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.435 226109 DEBUG os_vif [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:ba:15,bridge_name='br-int',has_traffic_filtering=True,id=91927e49-c8f1-4749-a7b3-ec4cd118677a,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91927e49-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.436 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.436 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.437 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.440 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.440 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91927e49-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.441 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91927e49-c8, col_values=(('external_ids', {'iface-id': '91927e49-c8f1-4749-a7b3-ec4cd118677a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:ba:15', 'vm-uuid': 'ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.442 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:16 compute-1 NetworkManager[49031]: <info>  [1765005076.4431] manager: (tap91927e49-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.444 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.448 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.449 226109 INFO os_vif [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:ba:15,bridge_name='br-int',has_traffic_filtering=True,id=91927e49-c8f1-4749-a7b3-ec4cd118677a,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91927e49-c8')
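The plug above is two small ovsdbapp transactions: an idempotent add-bridge (a no-op, hence "Transaction caused no change"), then add-port plus a db-set of the Interface external_ids that tie the tap device back to the Neutron port and instance. A rough CLI equivalent via ovs-vsctl, wrapped in Python:

    import subprocess

    def plug_ovs_port(bridge, dev, iface_id, mac, vm_uuid):
        # Equivalent of AddPortCommand + DbSetCommand in the transaction above:
        # add the port idempotently and tag the Interface row with the
        # external_ids OVN uses to match the port to its logical port.
        subprocess.check_call([
            "ovs-vsctl",
            "--", "--may-exist", "add-port", bridge, dev,
            "--", "set", "Interface", dev,
            f"external_ids:iface-id={iface_id}",
            "external_ids:iface-status=active",
            f"external_ids:attached-mac={mac}",
            f"external_ids:vm-uuid={vm_uuid}",
        ])

    plug_ovs_port("br-int", "tap91927e49-c8",
                  "91927e49-c8f1-4749-a7b3-ec4cd118677a",
                  "fa:16:3e:16:ba:15",
                  "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d")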
Dec 06 07:11:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4056302847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.551 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.551 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.552 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No VIF found with MAC fa:16:3e:16:ba:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.552 226109 INFO nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Using config drive
Dec 06 07:11:16 compute-1 nova_compute[226101]: 2025-12-06 07:11:16.580 226109 DEBUG nova.storage.rbd_utils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:17 compute-1 nova_compute[226101]: 2025-12-06 07:11:17.230 226109 INFO nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Creating config drive at /var/lib/nova/instances/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d/disk.config
Dec 06 07:11:17 compute-1 nova_compute[226101]: 2025-12-06 07:11:17.235 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu0jsqwi9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:17.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:17 compute-1 nova_compute[226101]: 2025-12-06 07:11:17.365 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu0jsqwi9" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
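The config drive is a plain ISO 9660 filesystem: Joliet plus Rock Ridge, volume label config-2, which is what cloud-init (and the CirrOS init scripts) probe for. The publisher string is a single argv element; processutils joins the argument list with spaces when logging, which is why it appears unquoted above. A direct wrapper around that invocation:

    import subprocess

    def make_config_drive(iso_path, staging_dir, publisher):
        # Same flags as the mkisofs run logged above.
        subprocess.check_call([
            "/usr/bin/mkisofs", "-o", iso_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", publisher,
            "-quiet", "-J", "-r", "-V", "config-2",
            staging_dir,
        ])

    make_config_drive(
        "/var/lib/nova/instances/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d/disk.config",
        "/tmp/tmpu0jsqwi9",  # staging dir from the log; any populated dir works
        "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9")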
Dec 06 07:11:17 compute-1 nova_compute[226101]: 2025-12-06 07:11:17.392 226109 DEBUG nova.storage.rbd_utils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:17 compute-1 nova_compute[226101]: 2025-12-06 07:11:17.395 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d/disk.config ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:17.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:18 compute-1 ceph-mon[81689]: pgmap v1562: 305 pgs: 305 active+clean; 192 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.1 MiB/s wr, 196 op/s
Dec 06 07:11:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1187363185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:18 compute-1 nova_compute[226101]: 2025-12-06 07:11:18.902 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:19.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:19 compute-1 ceph-mon[81689]: pgmap v1563: 305 pgs: 305 active+clean; 208 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 971 KiB/s rd, 5.6 MiB/s wr, 251 op/s
Dec 06 07:11:19 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.587 226109 DEBUG oslo_concurrency.processutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d/disk.config ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:19 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.588 226109 INFO nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Deleting local config drive /var/lib/nova/instances/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d/disk.config because it was imported into RBD.
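With RBD-backed instances the ISO exists locally only long enough to be imported; afterwards the guest reads it from the vms pool as <uuid>_disk.config, matching the cdrom <source> in the domain XML. A sketch of that import-then-delete step:

    import os
    import subprocess

    def import_config_drive(local_path, image_name, pool="vms",
                            client="openstack", conf="/etc/ceph/ceph.conf"):
        # Mirror of the rbd import logged above: push the ISO into RBD as a
        # format-2 image, then drop the local copy once the import succeeds.
        subprocess.check_call([
            "rbd", "import", "--pool", pool, local_path, image_name,
            "--image-format=2", "--id", client, "--conf", conf,
        ])
        os.unlink(local_path)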
Dec 06 07:11:19 compute-1 kernel: tap91927e49-c8: entered promiscuous mode
Dec 06 07:11:19 compute-1 NetworkManager[49031]: <info>  [1765005079.6438] manager: (tap91927e49-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Dec 06 07:11:19 compute-1 ovn_controller[130279]: 2025-12-06T07:11:19Z|00172|binding|INFO|Claiming lport 91927e49-c8f1-4749-a7b3-ec4cd118677a for this chassis.
Dec 06 07:11:19 compute-1 ovn_controller[130279]: 2025-12-06T07:11:19Z|00173|binding|INFO|91927e49-c8f1-4749-a7b3-ec4cd118677a: Claiming fa:16:3e:16:ba:15 10.100.0.12
Dec 06 07:11:19 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.644 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:19 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.651 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.662 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:ba:15 10.100.0.12'], port_security=['fa:16:3e:16:ba:15 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=91927e49-c8f1-4749-a7b3-ec4cd118677a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.664 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 91927e49-c8f1-4749-a7b3-ec4cd118677a in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad bound to our chassis
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.666 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:11:19 compute-1 systemd-udevd[245713]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.677 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[81b1cbb8-b893-4f87-8475-64a517d0f21d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.678 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b0835d7-81 in ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
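Provisioning metadata here means building the ovnmeta-<network uuid> namespace with a veth pair: tap2b0835d7-80 stays in the root namespace (plugged into br-int) while tap2b0835d7-81 moves inside, where the per-network metadata proxy will listen. The agent drives this through pyroute2 under privsep (the reply[...] lines); a rough iproute2-based sketch of the same wiring:

    import subprocess

    def provision_metadata_veth(namespace, host_dev, ns_dev):
        # Create the per-network namespace and a veth pair with one end inside it.
        subprocess.check_call(["ip", "netns", "add", namespace])
        subprocess.check_call(["ip", "link", "add", host_dev,
                               "type", "veth", "peer", "name", ns_dev])
        subprocess.check_call(["ip", "link", "set", ns_dev, "netns", namespace])
        subprocess.check_call(["ip", "link", "set", host_dev, "up"])
        subprocess.check_call(["ip", "-n", namespace, "link", "set", ns_dev, "up"])

    provision_metadata_veth("ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad",
                            "tap2b0835d7-80", "tap2b0835d7-81")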
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.680 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b0835d7-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.680 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca60a10-6151-435c-ae8b-5f91418170f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.681 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fc648ffa-cfc7-49d4-98dd-da4d2da6b143]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 systemd-machined[190302]: New machine qemu-25-instance-00000034.
Dec 06 07:11:19 compute-1 NetworkManager[49031]: <info>  [1765005079.6865] device (tap91927e49-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:11:19 compute-1 NetworkManager[49031]: <info>  [1765005079.6880] device (tap91927e49-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.694 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[19df9de2-0bdb-4410-955b-649f9b2fde49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 systemd[1]: Started Virtual Machine qemu-25-instance-00000034.
Dec 06 07:11:19 compute-1 ovn_controller[130279]: 2025-12-06T07:11:19Z|00174|binding|INFO|Setting lport 91927e49-c8f1-4749-a7b3-ec4cd118677a ovn-installed in OVS
Dec 06 07:11:19 compute-1 ovn_controller[130279]: 2025-12-06T07:11:19Z|00175|binding|INFO|Setting lport 91927e49-c8f1-4749-a7b3-ec4cd118677a up in Southbound
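ovn-controller's claim/up sequence is what ultimately satisfies Nova's wait: once the Southbound Port_Binding row goes up on this chassis, Neutron's OVN driver sends the network-vif-plugged event prepared earlier. One way to inspect that row by hand (a sketch; assumes ovn-sbctl on this host can reach the Southbound DB):

    import subprocess

    def port_binding_state(logical_port):
        # Read the chassis and up columns of the SB Port_Binding row.
        out = subprocess.check_output(
            ["ovn-sbctl", "--bare", "--columns=chassis,up",
             "find", "Port_Binding", f"logical_port={logical_port}"])
        return out.decode().split()

    print(port_binding_state("91927e49-c8f1-4749-a7b3-ec4cd118677a"))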
Dec 06 07:11:19 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.713 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:19 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.715 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.718 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0f194a8a-aae1-4f88-9faa-e8c33bfe85a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.747 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[83ac2303-a274-408b-b396-7386d317b084]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.752 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f21c809c-1a72-47d9-888f-8d05fdf336a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 NetworkManager[49031]: <info>  [1765005079.7537] manager: (tap2b0835d7-80): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Dec 06 07:11:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:19.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.780 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[1676f356-4668-4bde-86e2-fde542ba13de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.784 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6bcba5-20e7-4ea9-aa87-dad9eff6c22d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 NetworkManager[49031]: <info>  [1765005079.8068] device (tap2b0835d7-80): carrier: link connected
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.812 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[45179ae5-5117-4993-9821-1babfcd5e1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.826 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[632b1a1e-778f-4011-8c9e-bf192ff70e64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535931, 'reachable_time': 36587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245747, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.839 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3b8922-fd79-4453-9be4-8cef2953d460]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:4e19'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535931, 'tstamp': 535931}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245748, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.855 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f74af1d5-ca7a-495f-884e-99630a188179]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535931, 'reachable_time': 36587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 245749, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.882 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9c105e85-67e6-4a4f-a45a-9ce613917344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.946 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd78e35-22bd-45a8-bf19-f9f1f84af3e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.950 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.951 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.951 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b0835d7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:19 compute-1 kernel: tap2b0835d7-80: entered promiscuous mode
Dec 06 07:11:19 compute-1 NetworkManager[49031]: <info>  [1765005079.9535] manager: (tap2b0835d7-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Dec 06 07:11:19 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.952 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:19 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.955 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.957 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b0835d7-80, col_values=(('external_ids', {'iface-id': '87f2c5b0-3684-4269-9fbf-5a4dfd5a8759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:19 compute-1 ovn_controller[130279]: 2025-12-06T07:11:19Z|00176|binding|INFO|Releasing lport 87f2c5b0-3684-4269-9fbf-5a4dfd5a8759 from this chassis (sb_readonly=0)
Dec 06 07:11:19 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.958 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:19 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.958 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.959 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.960 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef9b65c-c2c5-4639-9446-8ffbb6d85382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.960 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:11:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:19.961 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'env', 'PROCESS_TAG=haproxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:11:19 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.970 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:19.999 226109 DEBUG nova.compute.manager [req-156caf91-99d9-4f41-b52c-9abd554a3e2d req-233c2253-8757-48df-a06e-96c85b09915c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Received event network-vif-plugged-91927e49-c8f1-4749-a7b3-ec4cd118677a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.000 226109 DEBUG oslo_concurrency.lockutils [req-156caf91-99d9-4f41-b52c-9abd554a3e2d req-233c2253-8757-48df-a06e-96c85b09915c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.000 226109 DEBUG oslo_concurrency.lockutils [req-156caf91-99d9-4f41-b52c-9abd554a3e2d req-233c2253-8757-48df-a06e-96c85b09915c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.001 226109 DEBUG oslo_concurrency.lockutils [req-156caf91-99d9-4f41-b52c-9abd554a3e2d req-233c2253-8757-48df-a06e-96c85b09915c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.001 226109 DEBUG nova.compute.manager [req-156caf91-99d9-4f41-b52c-9abd554a3e2d req-233c2253-8757-48df-a06e-96c85b09915c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Processing event network-vif-plugged-91927e49-c8f1-4749-a7b3-ec4cd118677a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:11:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:20 compute-1 podman[245817]: 2025-12-06 07:11:20.27432768 +0000 UTC m=+0.022135249 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.603 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005080.6029243, ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.604 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] VM Started (Lifecycle Event)
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.606 226109 DEBUG nova.compute.manager [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.610 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.613 226109 INFO nova.virt.libvirt.driver [-] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Instance spawned successfully.
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.613 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.626 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.632 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.635 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.635 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.635 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.636 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.636 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.636 226109 DEBUG nova.virt.libvirt.driver [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.661 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.662 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005080.6031373, ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.662 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] VM Paused (Lifecycle Event)
Dec 06 07:11:20 compute-1 podman[245817]: 2025-12-06 07:11:20.677644842 +0000 UTC m=+0.425452391 container create 16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:11:20 compute-1 systemd[1]: Started libpod-conmon-16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85.scope.
Dec 06 07:11:20 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:11:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c795c2373b1087a0f481e21f133cb28ba01ddde6d26dfbb5111e3151792926b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.871 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.874 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005080.609336, ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.875 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] VM Resumed (Lifecycle Event)
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.883 226109 INFO nova.compute.manager [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Took 11.86 seconds to spawn the instance on the hypervisor.
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.884 226109 DEBUG nova.compute.manager [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.897 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.902 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.921 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.965 226109 INFO nova.compute.manager [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Took 12.79 seconds to build instance.
Dec 06 07:11:20 compute-1 nova_compute[226101]: 2025-12-06 07:11:20.986 226109 DEBUG oslo_concurrency.lockutils [None req-cf304d76-149b-4f8e-9c4a-48631fd480e0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:21 compute-1 podman[245817]: 2025-12-06 07:11:21.296366651 +0000 UTC m=+1.044174220 container init 16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:11:21 compute-1 ceph-mon[81689]: pgmap v1564: 305 pgs: 305 active+clean; 208 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 411 KiB/s rd, 5.6 MiB/s wr, 232 op/s
Dec 06 07:11:21 compute-1 podman[245817]: 2025-12-06 07:11:21.308208892 +0000 UTC m=+1.056016461 container start 16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:11:21 compute-1 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[245839]: [NOTICE]   (245843) : New worker (245845) forked
Dec 06 07:11:21 compute-1 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[245839]: [NOTICE]   (245843) : Loading success.
Dec 06 07:11:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:21.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:21 compute-1 nova_compute[226101]: 2025-12-06 07:11:21.443 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:21.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:22 compute-1 nova_compute[226101]: 2025-12-06 07:11:22.082 226109 DEBUG nova.compute.manager [req-89edead2-115a-4d77-88cf-92cf7c005cc6 req-32009678-f414-413f-8e15-8e8716aaaba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Received event network-vif-plugged-91927e49-c8f1-4749-a7b3-ec4cd118677a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:22 compute-1 nova_compute[226101]: 2025-12-06 07:11:22.082 226109 DEBUG oslo_concurrency.lockutils [req-89edead2-115a-4d77-88cf-92cf7c005cc6 req-32009678-f414-413f-8e15-8e8716aaaba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:22 compute-1 nova_compute[226101]: 2025-12-06 07:11:22.082 226109 DEBUG oslo_concurrency.lockutils [req-89edead2-115a-4d77-88cf-92cf7c005cc6 req-32009678-f414-413f-8e15-8e8716aaaba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:22 compute-1 nova_compute[226101]: 2025-12-06 07:11:22.082 226109 DEBUG oslo_concurrency.lockutils [req-89edead2-115a-4d77-88cf-92cf7c005cc6 req-32009678-f414-413f-8e15-8e8716aaaba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:22 compute-1 nova_compute[226101]: 2025-12-06 07:11:22.083 226109 DEBUG nova.compute.manager [req-89edead2-115a-4d77-88cf-92cf7c005cc6 req-32009678-f414-413f-8e15-8e8716aaaba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] No waiting events found dispatching network-vif-plugged-91927e49-c8f1-4749-a7b3-ec4cd118677a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:11:22 compute-1 nova_compute[226101]: 2025-12-06 07:11:22.083 226109 WARNING nova.compute.manager [req-89edead2-115a-4d77-88cf-92cf7c005cc6 req-32009678-f414-413f-8e15-8e8716aaaba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Received unexpected event network-vif-plugged-91927e49-c8f1-4749-a7b3-ec4cd118677a for instance with vm_state active and task_state None.
Dec 06 07:11:22 compute-1 ceph-mon[81689]: pgmap v1565: 305 pgs: 305 active+clean; 213 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 460 KiB/s rd, 5.7 MiB/s wr, 280 op/s
Dec 06 07:11:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2418961624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2830245454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:23.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:23 compute-1 nova_compute[226101]: 2025-12-06 07:11:23.874 226109 INFO nova.compute.manager [None req-a6833bd1-b74c-4a26-84ed-37c5eaf5ea41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Pausing
Dec 06 07:11:23 compute-1 nova_compute[226101]: 2025-12-06 07:11:23.875 226109 DEBUG nova.objects.instance [None req-a6833bd1-b74c-4a26-84ed-37c5eaf5ea41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'flavor' on Instance uuid ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:23 compute-1 nova_compute[226101]: 2025-12-06 07:11:23.903 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:23 compute-1 nova_compute[226101]: 2025-12-06 07:11:23.906 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005083.9064896, ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:23 compute-1 nova_compute[226101]: 2025-12-06 07:11:23.906 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] VM Paused (Lifecycle Event)
Dec 06 07:11:23 compute-1 nova_compute[226101]: 2025-12-06 07:11:23.908 226109 DEBUG nova.compute.manager [None req-a6833bd1-b74c-4a26-84ed-37c5eaf5ea41 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:23 compute-1 nova_compute[226101]: 2025-12-06 07:11:23.983 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:23 compute-1 nova_compute[226101]: 2025-12-06 07:11:23.986 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:11:25 compute-1 ceph-mon[81689]: pgmap v1566: 305 pgs: 305 active+clean; 213 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 737 KiB/s rd, 3.7 MiB/s wr, 243 op/s
Dec 06 07:11:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:25.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:25.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:26 compute-1 ceph-mon[81689]: pgmap v1567: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 250 op/s
Dec 06 07:11:26 compute-1 nova_compute[226101]: 2025-12-06 07:11:26.445 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:27.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:27.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:28 compute-1 nova_compute[226101]: 2025-12-06 07:11:28.635 226109 DEBUG nova.compute.manager [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:28 compute-1 nova_compute[226101]: 2025-12-06 07:11:28.672 226109 INFO nova.compute.manager [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] instance snapshotting
Dec 06 07:11:28 compute-1 nova_compute[226101]: 2025-12-06 07:11:28.672 226109 WARNING nova.compute.manager [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] trying to snapshot a non-running instance: (state: 3 expected: 1)
Dec 06 07:11:28 compute-1 nova_compute[226101]: 2025-12-06 07:11:28.907 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:29 compute-1 nova_compute[226101]: 2025-12-06 07:11:29.139 226109 INFO nova.virt.libvirt.driver [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Beginning live snapshot process
Dec 06 07:11:29 compute-1 nova_compute[226101]: 2025-12-06 07:11:29.277 226109 DEBUG nova.virt.libvirt.imagebackend [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:11:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:29.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:29 compute-1 nova_compute[226101]: 2025-12-06 07:11:29.764 226109 DEBUG nova.storage.rbd_utils [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] creating snapshot(d28bd57cc94248188d61a67de8066880) on rbd image(ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:11:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:29.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:29 compute-1 ceph-mon[81689]: pgmap v1568: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 234 op/s
Dec 06 07:11:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:30 compute-1 ceph-mon[81689]: pgmap v1569: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 122 KiB/s wr, 148 op/s
Dec 06 07:11:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:31.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e219 e219: 3 total, 3 up, 3 in
Dec 06 07:11:31 compute-1 nova_compute[226101]: 2025-12-06 07:11:31.461 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-1 nova_compute[226101]: 2025-12-06 07:11:31.669 226109 DEBUG nova.storage.rbd_utils [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] cloning vms/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk@d28bd57cc94248188d61a67de8066880 to images/29145f63-7097-42b2-b998-dd2d2a5d0fda clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:11:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:31.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:31 compute-1 nova_compute[226101]: 2025-12-06 07:11:31.786 226109 DEBUG nova.storage.rbd_utils [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] flattening images/29145f63-7097-42b2-b998-dd2d2a5d0fda flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:11:32 compute-1 nova_compute[226101]: 2025-12-06 07:11:32.067 226109 DEBUG nova.storage.rbd_utils [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] removing snapshot(d28bd57cc94248188d61a67de8066880) on rbd image(ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:11:32 compute-1 ceph-mon[81689]: pgmap v1570: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 123 KiB/s wr, 169 op/s
Dec 06 07:11:32 compute-1 ceph-mon[81689]: osdmap e219: 3 total, 3 up, 3 in
Dec 06 07:11:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e220 e220: 3 total, 3 up, 3 in
Dec 06 07:11:32 compute-1 nova_compute[226101]: 2025-12-06 07:11:32.905 226109 DEBUG nova.storage.rbd_utils [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] creating snapshot(snap) on rbd image(29145f63-7097-42b2-b998-dd2d2a5d0fda) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:11:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:33.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:33 compute-1 nova_compute[226101]: 2025-12-06 07:11:33.462 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:33.463 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:11:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:33.464 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:11:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:33.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:33 compute-1 ceph-mon[81689]: osdmap e220: 3 total, 3 up, 3 in
Dec 06 07:11:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e221 e221: 3 total, 3 up, 3 in
Dec 06 07:11:33 compute-1 nova_compute[226101]: 2025-12-06 07:11:33.909 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:34 compute-1 ceph-mon[81689]: pgmap v1573: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 21 KiB/s wr, 130 op/s
Dec 06 07:11:34 compute-1 ceph-mon[81689]: osdmap e221: 3 total, 3 up, 3 in
Dec 06 07:11:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:35 compute-1 sudo[245996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:35 compute-1 sudo[245996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:35 compute-1 sudo[245996]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:35 compute-1 sudo[246021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:11:35 compute-1 sudo[246021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:35 compute-1 sudo[246021]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:35.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:35 compute-1 sudo[246046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:35 compute-1 sudo[246046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:35 compute-1 sudo[246046]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:35 compute-1 sudo[246071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:11:35 compute-1 sudo[246071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:35 compute-1 nova_compute[226101]: 2025-12-06 07:11:35.718 226109 INFO nova.virt.libvirt.driver [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Snapshot image upload complete
Dec 06 07:11:35 compute-1 nova_compute[226101]: 2025-12-06 07:11:35.718 226109 INFO nova.compute.manager [None req-6f56cfc5-a918-4670-ba82-1f769ca9f6c0 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Took 7.04 seconds to snapshot the instance on the hypervisor.
Dec 06 07:11:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:35.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:35 compute-1 sudo[246071]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:36 compute-1 ceph-mon[81689]: pgmap v1575: 305 pgs: 305 active+clean; 200 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 666 KiB/s wr, 238 op/s
Dec 06 07:11:36 compute-1 nova_compute[226101]: 2025-12-06 07:11:36.463 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:37 compute-1 podman[246127]: 2025-12-06 07:11:37.153581564 +0000 UTC m=+0.122221072 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 07:11:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:37.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:37.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3627254312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:38 compute-1 ceph-mon[81689]: pgmap v1576: 305 pgs: 305 active+clean; 213 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.6 MiB/s wr, 253 op/s
Dec 06 07:11:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:11:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:11:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:11:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:11:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:11:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e222 e222: 3 total, 3 up, 3 in
Dec 06 07:11:38 compute-1 nova_compute[226101]: 2025-12-06 07:11:38.910 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:39 compute-1 nova_compute[226101]: 2025-12-06 07:11:39.204 226109 DEBUG oslo_concurrency.lockutils [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:39 compute-1 nova_compute[226101]: 2025-12-06 07:11:39.205 226109 DEBUG oslo_concurrency.lockutils [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:39 compute-1 nova_compute[226101]: 2025-12-06 07:11:39.206 226109 DEBUG oslo_concurrency.lockutils [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:39 compute-1 nova_compute[226101]: 2025-12-06 07:11:39.206 226109 DEBUG oslo_concurrency.lockutils [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:39 compute-1 nova_compute[226101]: 2025-12-06 07:11:39.207 226109 DEBUG oslo_concurrency.lockutils [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:39 compute-1 nova_compute[226101]: 2025-12-06 07:11:39.208 226109 INFO nova.compute.manager [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Terminating instance
Dec 06 07:11:39 compute-1 nova_compute[226101]: 2025-12-06 07:11:39.210 226109 DEBUG nova.compute.manager [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
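The lockutils lines above are oslo.concurrency's standard acquire/acquired/released tracing around the per-instance lock that serializes terminate_instance. A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed; the lock name and function body are illustrative:

    from oslo_concurrency import lockutils

    # Serialize all work on one instance UUID: a second caller blocks at
    # "Acquiring lock" until the first logs "released", as seen above.
    @lockutils.synchronized("ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d")
    def do_terminate_instance():
        print("terminating under the per-instance lock")

    do_terminate_instance()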
Dec 06 07:11:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:39.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:39.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:41.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e223 e223: 3 total, 3 up, 3 in
Dec 06 07:11:41 compute-1 nova_compute[226101]: 2025-12-06 07:11:41.521 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:41 compute-1 ceph-mon[81689]: osdmap e222: 3 total, 3 up, 3 in
Dec 06 07:11:41 compute-1 kernel: tap91927e49-c8 (unregistering): left promiscuous mode
Dec 06 07:11:41 compute-1 NetworkManager[49031]: <info>  [1765005101.5417] device (tap91927e49-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:11:41 compute-1 nova_compute[226101]: 2025-12-06 07:11:41.551 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:41 compute-1 ovn_controller[130279]: 2025-12-06T07:11:41Z|00177|binding|INFO|Releasing lport 91927e49-c8f1-4749-a7b3-ec4cd118677a from this chassis (sb_readonly=0)
Dec 06 07:11:41 compute-1 ovn_controller[130279]: 2025-12-06T07:11:41Z|00178|binding|INFO|Setting lport 91927e49-c8f1-4749-a7b3-ec4cd118677a down in Southbound
Dec 06 07:11:41 compute-1 ovn_controller[130279]: 2025-12-06T07:11:41Z|00179|binding|INFO|Removing iface tap91927e49-c8 ovn-installed in OVS
Dec 06 07:11:41 compute-1 nova_compute[226101]: 2025-12-06 07:11:41.554 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:41 compute-1 nova_compute[226101]: 2025-12-06 07:11:41.573 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:41 compute-1 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000034.scope: Deactivated successfully.
Dec 06 07:11:41 compute-1 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000034.scope: Consumed 3.926s CPU time.
Dec 06 07:11:41 compute-1 systemd-machined[190302]: Machine qemu-25-instance-00000034 terminated.
Dec 06 07:11:41 compute-1 nova_compute[226101]: 2025-12-06 07:11:41.645 226109 INFO nova.virt.libvirt.driver [-] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Instance destroyed successfully.
Dec 06 07:11:41 compute-1 nova_compute[226101]: 2025-12-06 07:11:41.645 226109 DEBUG nova.objects.instance [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'resources' on Instance uuid ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:41.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:41.997 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:ba:15 10.100.0.12'], port_security=['fa:16:3e:16:ba:15 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=91927e49-c8f1-4749-a7b3-ec4cd118677a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:11:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:41.998 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 91927e49-c8f1-4749-a7b3-ec4cd118677a in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad unbound from our chassis
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:41.999 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:42.000 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f15cdf57-4bb7-4a8b-b5f9-d6b4f86a32ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:42.001 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace which is not needed anymore
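The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's event framework firing on a Port_Binding row change; the agent reacts by declaring the port unbound and tearing the metadata namespace down. A stripped-down sketch of such an event class, assuming ovsdbapp is available; the class body is illustrative rather than neutron's actual implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire when a Port_Binding row is updated."""

        def __init__(self):
            # (events, table, conditions), mirroring the matched line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Called by the IDL notify loop for each matching row change.
            print('port %s changed' % row.logical_port)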
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.023 226109 DEBUG nova.virt.libvirt.vif [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1629415623',display_name='tempest-ImagesTestJSON-server-1629415623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1629415623',id=52,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:11:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-0eyami9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:11:36Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "address": "fa:16:3e:16:ba:15", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91927e49-c8", "ovs_interfaceid": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.023 226109 DEBUG nova.network.os_vif_util [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "address": "fa:16:3e:16:ba:15", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91927e49-c8", "ovs_interfaceid": "91927e49-c8f1-4749-a7b3-ec4cd118677a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.024 226109 DEBUG nova.network.os_vif_util [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:ba:15,bridge_name='br-int',has_traffic_filtering=True,id=91927e49-c8f1-4749-a7b3-ec4cd118677a,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91927e49-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.024 226109 DEBUG os_vif [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:ba:15,bridge_name='br-int',has_traffic_filtering=True,id=91927e49-c8f1-4749-a7b3-ec4cd118677a,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91927e49-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.025 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.026 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91927e49-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.027 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.028 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.030 226109 INFO os_vif [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:ba:15,bridge_name='br-int',has_traffic_filtering=True,id=91927e49-c8f1-4749-a7b3-ec4cd118677a,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91927e49-c8')
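The unplug transaction at 07:11:42.026 is a single DelPortCommand against br-int. A minimal sketch of issuing the same command through ovsdbapp, assuming a local Open vSwitch database socket at /run/openvswitch/db.sock (the socket path and timeout are assumptions):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database and run one transaction that
    # mirrors: DelPortCommand(port=tap91927e49-c8, bridge=br-int, if_exists=True)
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    api.del_port('tap91927e49-c8', bridge='br-int',
                 if_exists=True).execute(check_error=True)

With if_exists=True the transaction is a no-op rather than an error if the tap device is already gone, which is why a racing OVN cleanup does not break the unplug.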
Dec 06 07:11:42 compute-1 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[245839]: [NOTICE]   (245843) : haproxy version is 2.8.14-c23fe91
Dec 06 07:11:42 compute-1 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[245839]: [NOTICE]   (245843) : path to executable is /usr/sbin/haproxy
Dec 06 07:11:42 compute-1 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[245839]: [WARNING]  (245843) : Exiting Master process...
Dec 06 07:11:42 compute-1 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[245839]: [WARNING]  (245843) : Exiting Master process...
Dec 06 07:11:42 compute-1 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[245839]: [ALERT]    (245843) : Current worker (245845) exited with code 143 (Terminated)
Dec 06 07:11:42 compute-1 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[245839]: [WARNING]  (245843) : All workers exited. Exiting... (0)
Dec 06 07:11:42 compute-1 systemd[1]: libpod-16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85.scope: Deactivated successfully.
Dec 06 07:11:42 compute-1 podman[246206]: 2025-12-06 07:11:42.128217764 +0000 UTC m=+0.045297714 container died 16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:11:42 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85-userdata-shm.mount: Deactivated successfully.
Dec 06 07:11:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-7c795c2373b1087a0f481e21f133cb28ba01ddde6d26dfbb5111e3151792926b-merged.mount: Deactivated successfully.
Dec 06 07:11:42 compute-1 podman[246206]: 2025-12-06 07:11:42.169374326 +0000 UTC m=+0.086454276 container cleanup 16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:11:42 compute-1 systemd[1]: libpod-conmon-16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85.scope: Deactivated successfully.
Dec 06 07:11:42 compute-1 podman[246230]: 2025-12-06 07:11:42.215554224 +0000 UTC m=+0.062662294 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:11:42 compute-1 podman[246221]: 2025-12-06 07:11:42.218178025 +0000 UTC m=+0.072119070 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec 06 07:11:42 compute-1 podman[246256]: 2025-12-06 07:11:42.225903993 +0000 UTC m=+0.037577756 container remove 16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:42.230 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bff2c027-aeb4-496d-86ca-2a2c5a16341d]: (4, ('Sat Dec  6 07:11:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85)\n16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85\nSat Dec  6 07:11:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85)\n16c38115d8fdd46ae71a294273e4d89bfe34122c9a94ebe4e6b4dfdbd304df85\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:42.232 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3b304485-4c3b-4928-90e6-752779351098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:42.233 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:42 compute-1 kernel: tap2b0835d7-80: left promiscuous mode
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.234 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.247 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:42.249 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8fecb907-bddb-477a-a3e7-33d4016edf6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:42.260 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[20035f60-d7b8-4482-a39f-0f4cd4c32dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:42.261 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fd539e7b-0070-420a-99b0-ae6b7488a4db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:42.277 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a2647b-e7d8-49f4-a3bf-773cd7d61b78]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535924, 'reachable_time': 40129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246286, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:42.279 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:11:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:42.279 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[13ee7c38-2b91-4f0e-8a5d-fc063ea19df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:42 compute-1 systemd[1]: run-netns-ovnmeta\x2d2b0835d7\x2d87e4\x2d46cc\x2d8a94\x2de4e042bd4bad.mount: Deactivated successfully.
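remove_netns above is neutron's privileged ip_lib helper deleting the ovnmeta- namespace once no VIF on the network needs metadata anymore; under the hood it is a pyroute2 call. A root-only sketch of the same operation, assuming pyroute2 is installed (the namespace name is taken from the log):

    from pyroute2 import netns

    NS = 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad'

    # Deleting a named namespace unpins /run/netns/<name>, which is why
    # systemd logs the matching .mount unit as deactivated right after.
    if NS in netns.listnetns():
        netns.remove(NS)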
Dec 06 07:11:42 compute-1 ceph-mon[81689]: pgmap v1578: 305 pgs: 305 active+clean; 213 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.4 MiB/s wr, 194 op/s
Dec 06 07:11:42 compute-1 ceph-mon[81689]: pgmap v1579: 305 pgs: 305 active+clean; 194 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 201 op/s
Dec 06 07:11:42 compute-1 ceph-mon[81689]: osdmap e223: 3 total, 3 up, 3 in
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.676 226109 DEBUG nova.compute.manager [req-1bc21941-2e6e-4bb2-bacd-19524e43c788 req-3436c418-ecd6-4e9a-9832-4d2056ad7bd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Received event network-vif-unplugged-91927e49-c8f1-4749-a7b3-ec4cd118677a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.676 226109 DEBUG oslo_concurrency.lockutils [req-1bc21941-2e6e-4bb2-bacd-19524e43c788 req-3436c418-ecd6-4e9a-9832-4d2056ad7bd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.676 226109 DEBUG oslo_concurrency.lockutils [req-1bc21941-2e6e-4bb2-bacd-19524e43c788 req-3436c418-ecd6-4e9a-9832-4d2056ad7bd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.677 226109 DEBUG oslo_concurrency.lockutils [req-1bc21941-2e6e-4bb2-bacd-19524e43c788 req-3436c418-ecd6-4e9a-9832-4d2056ad7bd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.677 226109 DEBUG nova.compute.manager [req-1bc21941-2e6e-4bb2-bacd-19524e43c788 req-3436c418-ecd6-4e9a-9832-4d2056ad7bd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] No waiting events found dispatching network-vif-unplugged-91927e49-c8f1-4749-a7b3-ec4cd118677a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:11:42 compute-1 nova_compute[226101]: 2025-12-06 07:11:42.677 226109 DEBUG nova.compute.manager [req-1bc21941-2e6e-4bb2-bacd-19524e43c788 req-3436c418-ecd6-4e9a-9832-4d2056ad7bd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Received event network-vif-unplugged-91927e49-c8f1-4749-a7b3-ec4cd118677a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:11:43 compute-1 nova_compute[226101]: 2025-12-06 07:11:43.210 226109 INFO nova.virt.libvirt.driver [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Deleting instance files /var/lib/nova/instances/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_del
Dec 06 07:11:43 compute-1 nova_compute[226101]: 2025-12-06 07:11:43.211 226109 INFO nova.virt.libvirt.driver [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Deletion of /var/lib/nova/instances/ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d_del complete
Dec 06 07:11:43 compute-1 nova_compute[226101]: 2025-12-06 07:11:43.302 226109 INFO nova.compute.manager [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Took 4.09 seconds to destroy the instance on the hypervisor.
Dec 06 07:11:43 compute-1 nova_compute[226101]: 2025-12-06 07:11:43.302 226109 DEBUG oslo.service.loopingcall [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:11:43 compute-1 nova_compute[226101]: 2025-12-06 07:11:43.303 226109 DEBUG nova.compute.manager [-] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:11:43 compute-1 nova_compute[226101]: 2025-12-06 07:11:43.303 226109 DEBUG nova.network.neutron [-] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:11:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:43.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:11:43.465 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:43.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e224 e224: 3 total, 3 up, 3 in
Dec 06 07:11:43 compute-1 nova_compute[226101]: 2025-12-06 07:11:43.912 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.121 226109 DEBUG nova.network.neutron [-] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.147 226109 INFO nova.compute.manager [-] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Took 0.84 seconds to deallocate network for instance.
Dec 06 07:11:44 compute-1 sudo[246288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:44 compute-1 sudo[246288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:44 compute-1 sudo[246288]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.223 226109 DEBUG oslo_concurrency.lockutils [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.224 226109 DEBUG oslo_concurrency.lockutils [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:44 compute-1 sudo[246313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:11:44 compute-1 sudo[246313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:44 compute-1 sudo[246313]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.290 226109 DEBUG oslo_concurrency.processutils [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:11:44 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3058757587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.700 226109 DEBUG oslo_concurrency.processutils [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
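Nova's RBD image backend sizes its disk inventory from the cluster, which is why the driver shells out to the exact command logged above. A standalone sketch of that call, assuming the same client keyring works from your shell; the JSON keys follow ceph's df output:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print("total %.1f GiB, avail %.1f GiB" % (
        stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib))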
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.707 226109 DEBUG nova.compute.provider_tree [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.732 226109 DEBUG nova.scheduler.client.report [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
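The inventory dict above is what the resource tracker reports to Placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio. A quick worked check against the logged values (the helper name is ours):

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }

    def capacity(inv):
        # Placement hands out at most (total - reserved) * allocation_ratio
        # of each resource class.
        return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

    for rc, inv in inventory.items():
        print(rc, capacity(inv))  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1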
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.758 226109 DEBUG oslo_concurrency.lockutils [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.827 226109 INFO nova.scheduler.client.report [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Deleted allocations for instance ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.832 226109 DEBUG nova.compute.manager [req-8732fbaf-a037-4a58-9cc2-d7c573d098f9 req-959b0760-d01c-40bc-a292-0e64d871b8e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Received event network-vif-plugged-91927e49-c8f1-4749-a7b3-ec4cd118677a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.832 226109 DEBUG oslo_concurrency.lockutils [req-8732fbaf-a037-4a58-9cc2-d7c573d098f9 req-959b0760-d01c-40bc-a292-0e64d871b8e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.833 226109 DEBUG oslo_concurrency.lockutils [req-8732fbaf-a037-4a58-9cc2-d7c573d098f9 req-959b0760-d01c-40bc-a292-0e64d871b8e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.833 226109 DEBUG oslo_concurrency.lockutils [req-8732fbaf-a037-4a58-9cc2-d7c573d098f9 req-959b0760-d01c-40bc-a292-0e64d871b8e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.833 226109 DEBUG nova.compute.manager [req-8732fbaf-a037-4a58-9cc2-d7c573d098f9 req-959b0760-d01c-40bc-a292-0e64d871b8e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] No waiting events found dispatching network-vif-plugged-91927e49-c8f1-4749-a7b3-ec4cd118677a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.833 226109 WARNING nova.compute.manager [req-8732fbaf-a037-4a58-9cc2-d7c573d098f9 req-959b0760-d01c-40bc-a292-0e64d871b8e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Received unexpected event network-vif-plugged-91927e49-c8f1-4749-a7b3-ec4cd118677a for instance with vm_state deleted and task_state None.
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.833 226109 DEBUG nova.compute.manager [req-8732fbaf-a037-4a58-9cc2-d7c573d098f9 req-959b0760-d01c-40bc-a292-0e64d871b8e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Received event network-vif-deleted-91927e49-c8f1-4749-a7b3-ec4cd118677a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:44 compute-1 ceph-mon[81689]: pgmap v1581: 305 pgs: 305 active+clean; 145 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 92 op/s
Dec 06 07:11:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:44 compute-1 ceph-mon[81689]: osdmap e224: 3 total, 3 up, 3 in
Dec 06 07:11:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2485373490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3058757587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/14396442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:44 compute-1 nova_compute[226101]: 2025-12-06 07:11:44.948 226109 DEBUG oslo_concurrency.lockutils [None req-38caed24-b2b2-4e3e-ae89-71f99d86c1a3 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:45.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:45.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:46 compute-1 ceph-mon[81689]: pgmap v1583: 305 pgs: 305 active+clean; 92 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.7 KiB/s wr, 113 op/s
Dec 06 07:11:47 compute-1 nova_compute[226101]: 2025-12-06 07:11:47.029 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:47.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:47.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.615 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.615 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.615 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.615 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.616 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
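update_available_resource at 07:11:48 is an oslo.service periodic task: a decorated method the service loop invokes on a fixed spacing. A compact sketch of the mechanism, assuming oslo.service and oslo.config are installed; the task body and spacing are illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def update_available_resource(self, context):
            # In nova this audits hypervisor resources, as logged above.
            print("auditing locally available compute resources")

    mgr = Manager(CONF)
    # A service timer normally drives this; one manual tick for the demo:
    mgr.run_periodic_tasks(context=None)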
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.646 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.647 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.737 226109 DEBUG nova.compute.manager [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.878 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.878 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.887 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.888 226109 INFO nova.compute.claims [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:11:48 compute-1 nova_compute[226101]: 2025-12-06 07:11:48.913 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.022 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:11:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3351923020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:49 compute-1 ceph-mon[81689]: pgmap v1584: 305 pgs: 305 active+clean; 66 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 1.5 MiB/s wr, 147 op/s
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.144 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.315 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.317 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4740MB free_disk=20.976886749267578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.317 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:49.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:11:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3783602994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.473 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.480 226109 DEBUG nova.compute.provider_tree [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.499 226109 DEBUG nova.scheduler.client.report [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.522 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.523 226109 DEBUG nova.compute.manager [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.530 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.580 226109 DEBUG nova.compute.manager [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.581 226109 DEBUG nova.network.neutron [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.610 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 1445e39c-6c8c-4a8f-9eab-64e13b262e40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.611 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.611 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.616 226109 INFO nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.639 226109 DEBUG nova.compute.manager [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.646 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.730 226109 DEBUG nova.compute.manager [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.732 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.733 226109 INFO nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Creating image(s)
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.780 226109 DEBUG nova.storage.rbd_utils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:49.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.827 226109 DEBUG nova.storage.rbd_utils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.851 226109 DEBUG nova.storage.rbd_utils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.854 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.913 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.914 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.915 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.915 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.942 226109 DEBUG nova.storage.rbd_utils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:49 compute-1 nova_compute[226101]: 2025-12-06 07:11:49.945 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.086 226109 DEBUG nova.policy [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a1ed181a1103481fa4d0b29ce1009dca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c297e84c3a9f48a9a82aebc9e5ade875', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:11:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:11:50 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/815777043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.111 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.117 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.131 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.150 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.150 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3351923020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:50 compute-1 ceph-mon[81689]: pgmap v1585: 305 pgs: 305 active+clean; 66 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 1.5 MiB/s wr, 100 op/s
Dec 06 07:11:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3783602994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/815777043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.588 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.698 226109 DEBUG nova.storage.rbd_utils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] resizing rbd image 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.910 226109 DEBUG nova.objects.instance [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lazy-loading 'migration_context' on Instance uuid 1445e39c-6c8c-4a8f-9eab-64e13b262e40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.928 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.928 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Ensure instance console log exists: /var/lib/nova/instances/1445e39c-6c8c-4a8f-9eab-64e13b262e40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.929 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.929 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:50 compute-1 nova_compute[226101]: 2025-12-06 07:11:50.930 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:51 compute-1 nova_compute[226101]: 2025-12-06 07:11:51.150 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:51.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:51 compute-1 nova_compute[226101]: 2025-12-06 07:11:51.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:51 compute-1 nova_compute[226101]: 2025-12-06 07:11:51.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:11:51 compute-1 nova_compute[226101]: 2025-12-06 07:11:51.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:11:51 compute-1 nova_compute[226101]: 2025-12-06 07:11:51.605 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:11:51 compute-1 nova_compute[226101]: 2025-12-06 07:11:51.606 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:11:51 compute-1 nova_compute[226101]: 2025-12-06 07:11:51.606 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:51 compute-1 nova_compute[226101]: 2025-12-06 07:11:51.606 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:51 compute-1 nova_compute[226101]: 2025-12-06 07:11:51.606 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:11:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/906275142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/413864327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:51.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:52 compute-1 nova_compute[226101]: 2025-12-06 07:11:52.031 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:52 compute-1 nova_compute[226101]: 2025-12-06 07:11:52.493 226109 DEBUG nova.network.neutron [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Successfully created port: f7172ad0-f24b-4d05-81ac-adaec7bac7d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:11:53 compute-1 ceph-mon[81689]: pgmap v1586: 305 pgs: 305 active+clean; 99 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Dec 06 07:11:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3573529648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:53.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:53 compute-1 nova_compute[226101]: 2025-12-06 07:11:53.538 226109 DEBUG nova.network.neutron [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Successfully updated port: f7172ad0-f24b-4d05-81ac-adaec7bac7d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:11:53 compute-1 nova_compute[226101]: 2025-12-06 07:11:53.560 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "refresh_cache-1445e39c-6c8c-4a8f-9eab-64e13b262e40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:11:53 compute-1 nova_compute[226101]: 2025-12-06 07:11:53.560 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquired lock "refresh_cache-1445e39c-6c8c-4a8f-9eab-64e13b262e40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:11:53 compute-1 nova_compute[226101]: 2025-12-06 07:11:53.560 226109 DEBUG nova.network.neutron [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:11:53 compute-1 nova_compute[226101]: 2025-12-06 07:11:53.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:53 compute-1 nova_compute[226101]: 2025-12-06 07:11:53.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:53 compute-1 nova_compute[226101]: 2025-12-06 07:11:53.677 226109 DEBUG nova.compute.manager [req-fa4ff118-b7d4-4522-b7d8-7b3d993a81c7 req-eae0bd01-407c-4b29-a3d5-f456057b9acc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Received event network-changed-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:53 compute-1 nova_compute[226101]: 2025-12-06 07:11:53.677 226109 DEBUG nova.compute.manager [req-fa4ff118-b7d4-4522-b7d8-7b3d993a81c7 req-eae0bd01-407c-4b29-a3d5-f456057b9acc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Refreshing instance network info cache due to event network-changed-f7172ad0-f24b-4d05-81ac-adaec7bac7d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:11:53 compute-1 nova_compute[226101]: 2025-12-06 07:11:53.677 226109 DEBUG oslo_concurrency.lockutils [req-fa4ff118-b7d4-4522-b7d8-7b3d993a81c7 req-eae0bd01-407c-4b29-a3d5-f456057b9acc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1445e39c-6c8c-4a8f-9eab-64e13b262e40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:11:53 compute-1 nova_compute[226101]: 2025-12-06 07:11:53.760 226109 DEBUG nova.network.neutron [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:11:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:53.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:53 compute-1 nova_compute[226101]: 2025-12-06 07:11:53.915 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2494911632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:54 compute-1 ceph-mon[81689]: pgmap v1587: 305 pgs: 305 active+clean; 113 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.2 MiB/s wr, 122 op/s
Dec 06 07:11:54 compute-1 sshd-session[246593]: Invalid user 1 from 91.202.233.33 port 26514
Dec 06 07:11:55 compute-1 sshd-session[246593]: Connection reset by invalid user 1 91.202.233.33 port 26514 [preauth]
Dec 06 07:11:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/159950985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:55.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:55.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.853 226109 DEBUG nova.network.neutron [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Updating instance_info_cache with network_info: [{"id": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "address": "fa:16:3e:1c:61:ee", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7172ad0-f2", "ovs_interfaceid": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.887 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Releasing lock "refresh_cache-1445e39c-6c8c-4a8f-9eab-64e13b262e40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.887 226109 DEBUG nova.compute.manager [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Instance network_info: |[{"id": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "address": "fa:16:3e:1c:61:ee", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7172ad0-f2", "ovs_interfaceid": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.888 226109 DEBUG oslo_concurrency.lockutils [req-fa4ff118-b7d4-4522-b7d8-7b3d993a81c7 req-eae0bd01-407c-4b29-a3d5-f456057b9acc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1445e39c-6c8c-4a8f-9eab-64e13b262e40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.888 226109 DEBUG nova.network.neutron [req-fa4ff118-b7d4-4522-b7d8-7b3d993a81c7 req-eae0bd01-407c-4b29-a3d5-f456057b9acc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Refreshing network info cache for port f7172ad0-f24b-4d05-81ac-adaec7bac7d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.890 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Start _get_guest_xml network_info=[{"id": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "address": "fa:16:3e:1c:61:ee", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7172ad0-f2", "ovs_interfaceid": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.895 226109 WARNING nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.900 226109 DEBUG nova.virt.libvirt.host [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.900 226109 DEBUG nova.virt.libvirt.host [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.903 226109 DEBUG nova.virt.libvirt.host [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.903 226109 DEBUG nova.virt.libvirt.host [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.904 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.905 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.905 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.905 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.905 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.906 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.906 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.906 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.906 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.906 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.906 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.907 226109 DEBUG nova.virt.hardware [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:11:55 compute-1 nova_compute[226101]: 2025-12-06 07:11:55.909 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:11:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2776394950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:56 compute-1 nova_compute[226101]: 2025-12-06 07:11:56.325 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:56 compute-1 nova_compute[226101]: 2025-12-06 07:11:56.352 226109 DEBUG nova.storage.rbd_utils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:56 compute-1 nova_compute[226101]: 2025-12-06 07:11:56.356 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:56 compute-1 ceph-mon[81689]: pgmap v1588: 305 pgs: 305 active+clean; 134 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 717 KiB/s rd, 3.7 MiB/s wr, 133 op/s
Dec 06 07:11:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2847361391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2776394950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1606640908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:56 compute-1 nova_compute[226101]: 2025-12-06 07:11:56.644 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005101.6435266, ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:56 compute-1 nova_compute[226101]: 2025-12-06 07:11:56.645 226109 INFO nova.compute.manager [-] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] VM Stopped (Lifecycle Event)
Dec 06 07:11:56 compute-1 nova_compute[226101]: 2025-12-06 07:11:56.728 226109 DEBUG nova.compute.manager [None req-2012e06e-3f9e-425e-a3d5-70679c9660b1 - - - - - -] [instance: ea037582-b3d9-4ad8-ab2c-0b47adfb9c3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:56 compute-1 nova_compute[226101]: 2025-12-06 07:11:56.992 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:56 compute-1 nova_compute[226101]: 2025-12-06 07:11:56.993 226109 DEBUG nova.virt.libvirt.vif [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-36807915',display_name='tempest-ImagesOneServerNegativeTestJSON-server-36807915',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-36807915',id=54,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c297e84c3a9f48a9a82aebc9e5ade875',ramdisk_id='',reservation_id='r-swxu2zy7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-324135674',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-324135674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:49Z,user_data=None,user_id='a1ed181a1103481fa4d0b29ce1009dca',uuid=1445e39c-6c8c-4a8f-9eab-64e13b262e40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "address": "fa:16:3e:1c:61:ee", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7172ad0-f2", "ovs_interfaceid": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:11:56 compute-1 nova_compute[226101]: 2025-12-06 07:11:56.994 226109 DEBUG nova.network.os_vif_util [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converting VIF {"id": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "address": "fa:16:3e:1c:61:ee", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7172ad0-f2", "ovs_interfaceid": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:56 compute-1 nova_compute[226101]: 2025-12-06 07:11:56.995 226109 DEBUG nova.network.os_vif_util [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:61:ee,bridge_name='br-int',has_traffic_filtering=True,id=f7172ad0-f24b-4d05-81ac-adaec7bac7d3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7172ad0-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:56 compute-1 nova_compute[226101]: 2025-12-06 07:11:56.996 226109 DEBUG nova.objects.instance [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1445e39c-6c8c-4a8f-9eab-64e13b262e40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.024 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:11:57 compute-1 nova_compute[226101]:   <uuid>1445e39c-6c8c-4a8f-9eab-64e13b262e40</uuid>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   <name>instance-00000036</name>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-36807915</nova:name>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:11:55</nova:creationTime>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <nova:user uuid="a1ed181a1103481fa4d0b29ce1009dca">tempest-ImagesOneServerNegativeTestJSON-324135674-project-member</nova:user>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <nova:project uuid="c297e84c3a9f48a9a82aebc9e5ade875">tempest-ImagesOneServerNegativeTestJSON-324135674</nova:project>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <nova:port uuid="f7172ad0-f24b-4d05-81ac-adaec7bac7d3">
Dec 06 07:11:57 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <system>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <entry name="serial">1445e39c-6c8c-4a8f-9eab-64e13b262e40</entry>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <entry name="uuid">1445e39c-6c8c-4a8f-9eab-64e13b262e40</entry>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     </system>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   <os>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   </os>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   <features>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   </features>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk">
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       </source>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk.config">
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       </source>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:11:57 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:1c:61:ee"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <target dev="tapf7172ad0-f2"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/1445e39c-6c8c-4a8f-9eab-64e13b262e40/console.log" append="off"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <video>
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     </video>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:11:57 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:11:57 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:11:57 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:11:57 compute-1 nova_compute[226101]: </domain>
Dec 06 07:11:57 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.025 226109 DEBUG nova.compute.manager [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Preparing to wait for external event network-vif-plugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.025 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.025 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.026 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.026 226109 DEBUG nova.virt.libvirt.vif [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-36807915',display_name='tempest-ImagesOneServerNegativeTestJSON-server-36807915',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-36807915',id=54,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c297e84c3a9f48a9a82aebc9e5ade875',ramdisk_id='',reservation_id='r-swxu2zy7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-324135674',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-324135674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:49Z,user_data=None,user_id='a1ed181a1103481fa4d0b29ce1009dca',uuid=1445e39c-6c8c-4a8f-9eab-64e13b262e40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "address": "fa:16:3e:1c:61:ee", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7172ad0-f2", "ovs_interfaceid": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.027 226109 DEBUG nova.network.os_vif_util [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converting VIF {"id": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "address": "fa:16:3e:1c:61:ee", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7172ad0-f2", "ovs_interfaceid": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.027 226109 DEBUG nova.network.os_vif_util [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:61:ee,bridge_name='br-int',has_traffic_filtering=True,id=f7172ad0-f24b-4d05-81ac-adaec7bac7d3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7172ad0-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.028 226109 DEBUG os_vif [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:61:ee,bridge_name='br-int',has_traffic_filtering=True,id=f7172ad0-f24b-4d05-81ac-adaec7bac7d3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7172ad0-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.028 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.029 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.029 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.031 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.031 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf7172ad0-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.032 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf7172ad0-f2, col_values=(('external_ids', {'iface-id': 'f7172ad0-f24b-4d05-81ac-adaec7bac7d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1c:61:ee', 'vm-uuid': '1445e39c-6c8c-4a8f-9eab-64e13b262e40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.033 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:57 compute-1 NetworkManager[49031]: <info>  [1765005117.0343] manager: (tapf7172ad0-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.035 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.039 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.040 226109 INFO os_vif [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:61:ee,bridge_name='br-int',has_traffic_filtering=True,id=f7172ad0-f24b-4d05-81ac-adaec7bac7d3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7172ad0-f2')
Dec 06 07:11:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:57.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.534 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.535 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.535 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] No VIF found with MAC fa:16:3e:1c:61:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.536 226109 INFO nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Using config drive
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.573 226109 DEBUG nova.storage.rbd_utils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:57 compute-1 sshd-session[246595]: Invalid user admin from 91.202.233.33 port 26546
Dec 06 07:11:57 compute-1 nova_compute[226101]: 2025-12-06 07:11:57.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:11:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:57.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:11:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2715161050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:58 compute-1 sshd-session[246595]: Connection reset by invalid user admin 91.202.233.33 port 26546 [preauth]
Dec 06 07:11:58 compute-1 nova_compute[226101]: 2025-12-06 07:11:58.421 226109 INFO nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Creating config drive at /var/lib/nova/instances/1445e39c-6c8c-4a8f-9eab-64e13b262e40/disk.config
Dec 06 07:11:58 compute-1 nova_compute[226101]: 2025-12-06 07:11:58.426 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1445e39c-6c8c-4a8f-9eab-64e13b262e40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyv70lfx5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:58 compute-1 nova_compute[226101]: 2025-12-06 07:11:58.558 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1445e39c-6c8c-4a8f-9eab-64e13b262e40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyv70lfx5" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:58 compute-1 nova_compute[226101]: 2025-12-06 07:11:58.595 226109 DEBUG nova.storage.rbd_utils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:58 compute-1 nova_compute[226101]: 2025-12-06 07:11:58.599 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1445e39c-6c8c-4a8f-9eab-64e13b262e40/disk.config 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:58 compute-1 nova_compute[226101]: 2025-12-06 07:11:58.657 226109 DEBUG nova.network.neutron [req-fa4ff118-b7d4-4522-b7d8-7b3d993a81c7 req-eae0bd01-407c-4b29-a3d5-f456057b9acc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Updated VIF entry in instance network info cache for port f7172ad0-f24b-4d05-81ac-adaec7bac7d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:11:58 compute-1 nova_compute[226101]: 2025-12-06 07:11:58.658 226109 DEBUG nova.network.neutron [req-fa4ff118-b7d4-4522-b7d8-7b3d993a81c7 req-eae0bd01-407c-4b29-a3d5-f456057b9acc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Updating instance_info_cache with network_info: [{"id": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "address": "fa:16:3e:1c:61:ee", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7172ad0-f2", "ovs_interfaceid": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:58 compute-1 nova_compute[226101]: 2025-12-06 07:11:58.917 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:59 compute-1 nova_compute[226101]: 2025-12-06 07:11:59.003 226109 DEBUG oslo_concurrency.lockutils [req-fa4ff118-b7d4-4522-b7d8-7b3d993a81c7 req-eae0bd01-407c-4b29-a3d5-f456057b9acc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1445e39c-6c8c-4a8f-9eab-64e13b262e40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:11:59 compute-1 ceph-mon[81689]: pgmap v1589: 305 pgs: 305 active+clean; 155 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 144 op/s
Dec 06 07:11:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:59.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:59 compute-1 nova_compute[226101]: 2025-12-06 07:11:59.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:11:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:59.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:00 compute-1 sshd-session[246679]: Invalid user www from 91.202.233.33 port 26560
Dec 06 07:12:00 compute-1 ceph-mon[81689]: pgmap v1590: 305 pgs: 305 active+clean; 155 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.3 MiB/s wr, 113 op/s
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.193 226109 DEBUG oslo_concurrency.processutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1445e39c-6c8c-4a8f-9eab-64e13b262e40/disk.config 1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.194 226109 INFO nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Deleting local config drive /var/lib/nova/instances/1445e39c-6c8c-4a8f-9eab-64e13b262e40/disk.config because it was imported into RBD.
Dec 06 07:12:00 compute-1 kernel: tapf7172ad0-f2: entered promiscuous mode
Dec 06 07:12:00 compute-1 ovn_controller[130279]: 2025-12-06T07:12:00Z|00180|binding|INFO|Claiming lport f7172ad0-f24b-4d05-81ac-adaec7bac7d3 for this chassis.
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.257 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:00 compute-1 ovn_controller[130279]: 2025-12-06T07:12:00Z|00181|binding|INFO|f7172ad0-f24b-4d05-81ac-adaec7bac7d3: Claiming fa:16:3e:1c:61:ee 10.100.0.11
Dec 06 07:12:00 compute-1 NetworkManager[49031]: <info>  [1765005120.2597] manager: (tapf7172ad0-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.263 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:00 compute-1 systemd-udevd[246733]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:12:00 compute-1 systemd-machined[190302]: New machine qemu-26-instance-00000036.
Dec 06 07:12:00 compute-1 NetworkManager[49031]: <info>  [1765005120.3060] device (tapf7172ad0-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:12:00 compute-1 NetworkManager[49031]: <info>  [1765005120.3071] device (tapf7172ad0-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:12:00 compute-1 systemd[1]: Started Virtual Machine qemu-26-instance-00000036.
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.326 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:00 compute-1 ovn_controller[130279]: 2025-12-06T07:12:00Z|00182|binding|INFO|Setting lport f7172ad0-f24b-4d05-81ac-adaec7bac7d3 ovn-installed in OVS
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.336 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:00 compute-1 ovn_controller[130279]: 2025-12-06T07:12:00Z|00183|binding|INFO|Setting lport f7172ad0-f24b-4d05-81ac-adaec7bac7d3 up in Southbound
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.342 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:61:ee 10.100.0.11'], port_security=['fa:16:3e:1c:61:ee 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1445e39c-6c8c-4a8f-9eab-64e13b262e40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49680c77-2db5-4d0f-bd5b-08899440c38e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c297e84c3a9f48a9a82aebc9e5ade875', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44ffcb05-d145-4d07-b800-cc9e3941da49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2602f87-ec0b-4d1c-8f8b-eee8bbcfddb2, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f7172ad0-f24b-4d05-81ac-adaec7bac7d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.343 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f7172ad0-f24b-4d05-81ac-adaec7bac7d3 in datapath 49680c77-2db5-4d0f-bd5b-08899440c38e bound to our chassis
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.345 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49680c77-2db5-4d0f-bd5b-08899440c38e
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.355 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[132650e5-5013-4d62-8d1e-fe5099573906]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.356 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap49680c77-21 in ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.358 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap49680c77-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.358 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[abf6fdb1-468b-4bfa-b3de-164b033eb0d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.359 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[17f7535f-88ce-4ef7-8e62-f01edbd488f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.369 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[49cb486c-127f-4124-85a0-0250e2307565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.392 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[caf5cfb7-d4da-46d0-8768-1cc3a3498d2c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.422 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[570d6871-7e1c-4cfe-8eac-acf4455db2cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 NetworkManager[49031]: <info>  [1765005120.4283] manager: (tap49680c77-20): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.427 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[aba47436-5be1-4618-a954-17b265e41ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.454 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8d89406f-8a8c-44a5-a77c-3926e13b1d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.457 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3c568c-698a-4635-9bb9-6672a39328b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 NetworkManager[49031]: <info>  [1765005120.4803] device (tap49680c77-20): carrier: link connected
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.484 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[fea9b311-1058-4d5f-98c6-88bdb5ef46b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.498 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[833ccfff-78fd-4516-b5ea-4c404535391b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49680c77-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:86:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539998, 'reachable_time': 44048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246767, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.511 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[079f6106-a77d-4336-abf8-dd35f1f3a013]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:86b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539998, 'tstamp': 539998}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246768, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.525 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9af39b7b-9c2a-4852-877c-6f4bdd482603]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49680c77-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:86:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539998, 'reachable_time': 44048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 246769, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.553 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5e414f22-6323-408b-b50f-cc9d30f5efe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.606 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[869c3456-be52-4c56-a59c-b329a2def206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.606 226109 DEBUG nova.compute.manager [req-a3196cb1-d263-414c-82c5-0243d369ef91 req-cf6a2a7b-3bf1-42ba-9e25-1b8c11fefcb2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Received event network-vif-plugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.607 226109 DEBUG oslo_concurrency.lockutils [req-a3196cb1-d263-414c-82c5-0243d369ef91 req-cf6a2a7b-3bf1-42ba-9e25-1b8c11fefcb2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.608 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49680c77-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.608 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.608 226109 DEBUG oslo_concurrency.lockutils [req-a3196cb1-d263-414c-82c5-0243d369ef91 req-cf6a2a7b-3bf1-42ba-9e25-1b8c11fefcb2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.608 226109 DEBUG oslo_concurrency.lockutils [req-a3196cb1-d263-414c-82c5-0243d369ef91 req-cf6a2a7b-3bf1-42ba-9e25-1b8c11fefcb2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.609 226109 DEBUG nova.compute.manager [req-a3196cb1-d263-414c-82c5-0243d369ef91 req-cf6a2a7b-3bf1-42ba-9e25-1b8c11fefcb2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Processing event network-vif-plugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.609 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49680c77-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.611 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:00 compute-1 NetworkManager[49031]: <info>  [1765005120.6117] manager: (tap49680c77-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Dec 06 07:12:00 compute-1 kernel: tap49680c77-20: entered promiscuous mode
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.613 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.614 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49680c77-20, col_values=(('external_ids', {'iface-id': '585005cf-d18f-4bcc-9942-2eb7dec20acd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.615 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:00 compute-1 ovn_controller[130279]: 2025-12-06T07:12:00Z|00184|binding|INFO|Releasing lport 585005cf-d18f-4bcc-9942-2eb7dec20acd from this chassis (sb_readonly=0)
Dec 06 07:12:00 compute-1 nova_compute[226101]: 2025-12-06 07:12:00.629 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.630 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/49680c77-2db5-4d0f-bd5b-08899440c38e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/49680c77-2db5-4d0f-bd5b-08899440c38e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.632 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1a52c5e4-8fbc-4d5b-8cfb-fd82de68d6a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.633 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-49680c77-2db5-4d0f-bd5b-08899440c38e
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/49680c77-2db5-4d0f-bd5b-08899440c38e.pid.haproxy
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 49680c77-2db5-4d0f-bd5b-08899440c38e
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:12:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:00.633 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'env', 'PROCESS_TAG=haproxy-49680c77-2db5-4d0f-bd5b-08899440c38e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/49680c77-2db5-4d0f-bd5b-08899440c38e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:12:00 compute-1 sshd-session[246679]: Connection reset by invalid user www 91.202.233.33 port 26560 [preauth]
Dec 06 07:12:01 compute-1 podman[246817]: 2025-12-06 07:12:00.979871179 +0000 UTC m=+0.026252191 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:12:01 compute-1 podman[246817]: 2025-12-06 07:12:01.174271578 +0000 UTC m=+0.220652570 container create ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.221 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005121.2208204, 1445e39c-6c8c-4a8f-9eab-64e13b262e40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.221 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] VM Started (Lifecycle Event)
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.223 226109 DEBUG nova.compute.manager [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.227 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.230 226109 INFO nova.virt.libvirt.driver [-] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Instance spawned successfully.
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.230 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:12:01 compute-1 systemd[1]: Started libpod-conmon-ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81.scope.
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.276 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.282 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.285 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.286 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.286 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.287 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.287 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.288 226109 DEBUG nova.virt.libvirt.driver [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:01 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:12:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0261c66c5389c9989c22bd735345898f4b1f36e9f0d024a3c1efec22ce2e5c1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.327 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.327 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005121.2210245, 1445e39c-6c8c-4a8f-9eab-64e13b262e40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.327 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] VM Paused (Lifecycle Event)
Dec 06 07:12:01 compute-1 podman[246817]: 2025-12-06 07:12:01.35172365 +0000 UTC m=+0.398104662 container init ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:12:01 compute-1 podman[246817]: 2025-12-06 07:12:01.357028833 +0000 UTC m=+0.403409825 container start ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.379 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:01 compute-1 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[246859]: [NOTICE]   (246864) : New worker (246866) forked
Dec 06 07:12:01 compute-1 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[246859]: [NOTICE]   (246864) : Loading success.
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.382 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005121.2261713, 1445e39c-6c8c-4a8f-9eab-64e13b262e40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.382 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] VM Resumed (Lifecycle Event)
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.407 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.409 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:12:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:01.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.436 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.459 226109 INFO nova.compute.manager [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Took 11.73 seconds to spawn the instance on the hypervisor.
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.459 226109 DEBUG nova.compute.manager [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.535 226109 INFO nova.compute.manager [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Took 12.72 seconds to build instance.
Dec 06 07:12:01 compute-1 nova_compute[226101]: 2025-12-06 07:12:01.557 226109 DEBUG oslo_concurrency.lockutils [None req-a8dbb2c1-d22a-4ca3-9ec9-5fb5a2897706 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:01.629 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:01.630 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:01.630 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:01 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Dec 06 07:12:01 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:01.694759) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:12:01 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Dec 06 07:12:01 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121694789, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2614, "num_deletes": 269, "total_data_size": 6004089, "memory_usage": 6070544, "flush_reason": "Manual Compaction"}
Dec 06 07:12:01 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Dec 06 07:12:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:01.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:02 compute-1 nova_compute[226101]: 2025-12-06 07:12:02.034 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005122219701, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3921227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31161, "largest_seqno": 33770, "table_properties": {"data_size": 3910173, "index_size": 7228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23428, "raw_average_key_size": 21, "raw_value_size": 3888025, "raw_average_value_size": 3566, "num_data_blocks": 310, "num_entries": 1090, "num_filter_entries": 1090, "num_deletions": 269, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004931, "oldest_key_time": 1765004931, "file_creation_time": 1765005121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 524995 microseconds, and 7552 cpu microseconds.
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.219749) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3921227 bytes OK
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.219770) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.228354) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.228384) EVENT_LOG_v1 {"time_micros": 1765005122228377, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.228403) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 5992422, prev total WAL file size 5992422, number of live WAL files 2.
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.229933) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3829KB)], [60(9492KB)]
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005122229996, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 13641710, "oldest_snapshot_seqno": -1}
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6517 keys, 11756480 bytes, temperature: kUnknown
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005122333155, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 11756480, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11710493, "index_size": 28597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 165749, "raw_average_key_size": 25, "raw_value_size": 11590955, "raw_average_value_size": 1778, "num_data_blocks": 1153, "num_entries": 6517, "num_filter_entries": 6517, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765005122, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.333357) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 11756480 bytes
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.334661) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.2 rd, 113.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 9.3 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 7064, records dropped: 547 output_compression: NoCompression
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.334677) EVENT_LOG_v1 {"time_micros": 1765005122334669, "job": 36, "event": "compaction_finished", "compaction_time_micros": 103216, "compaction_time_cpu_micros": 26517, "output_level": 6, "num_output_files": 1, "total_output_size": 11756480, "num_input_records": 7064, "num_output_records": 6517, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005122335314, "job": 36, "event": "table_file_deletion", "file_number": 62}
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005122336804, "job": 36, "event": "table_file_deletion", "file_number": 60}
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.229867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.336886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.336891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.336893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.336895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:12:02.336897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:02 compute-1 ceph-mon[81689]: pgmap v1591: 305 pgs: 305 active+clean; 171 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Dec 06 07:12:02 compute-1 nova_compute[226101]: 2025-12-06 07:12:02.703 226109 DEBUG nova.compute.manager [req-66dbb47f-2200-453f-9def-a1535a759106 req-c17e2454-59ad-48a3-8ced-6c2967a5d3cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Received event network-vif-plugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:02 compute-1 nova_compute[226101]: 2025-12-06 07:12:02.703 226109 DEBUG oslo_concurrency.lockutils [req-66dbb47f-2200-453f-9def-a1535a759106 req-c17e2454-59ad-48a3-8ced-6c2967a5d3cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:02 compute-1 nova_compute[226101]: 2025-12-06 07:12:02.704 226109 DEBUG oslo_concurrency.lockutils [req-66dbb47f-2200-453f-9def-a1535a759106 req-c17e2454-59ad-48a3-8ced-6c2967a5d3cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:02 compute-1 nova_compute[226101]: 2025-12-06 07:12:02.704 226109 DEBUG oslo_concurrency.lockutils [req-66dbb47f-2200-453f-9def-a1535a759106 req-c17e2454-59ad-48a3-8ced-6c2967a5d3cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:02 compute-1 nova_compute[226101]: 2025-12-06 07:12:02.704 226109 DEBUG nova.compute.manager [req-66dbb47f-2200-453f-9def-a1535a759106 req-c17e2454-59ad-48a3-8ced-6c2967a5d3cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] No waiting events found dispatching network-vif-plugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:12:02 compute-1 nova_compute[226101]: 2025-12-06 07:12:02.705 226109 WARNING nova.compute.manager [req-66dbb47f-2200-453f-9def-a1535a759106 req-c17e2454-59ad-48a3-8ced-6c2967a5d3cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Received unexpected event network-vif-plugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 for instance with vm_state active and task_state None.
Dec 06 07:12:03 compute-1 sshd-session[246779]: Invalid user oracle from 91.202.233.33 port 26570
Dec 06 07:12:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:03.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:03.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:03 compute-1 nova_compute[226101]: 2025-12-06 07:12:03.945 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:04 compute-1 sshd-session[246779]: Connection reset by invalid user oracle 91.202.233.33 port 26570 [preauth]
Dec 06 07:12:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:05.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:05 compute-1 ceph-mon[81689]: pgmap v1592: 305 pgs: 305 active+clean; 181 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 135 op/s
Dec 06 07:12:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:05.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:06 compute-1 ceph-mon[81689]: pgmap v1593: 305 pgs: 305 active+clean; 181 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 148 op/s
Dec 06 07:12:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:07 compute-1 nova_compute[226101]: 2025-12-06 07:12:07.037 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:07 compute-1 sshd-session[246875]: Connection reset by authenticating user root 91.202.233.33 port 53824 [preauth]
Dec 06 07:12:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:07.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/885063946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/506353290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:08 compute-1 podman[246877]: 2025-12-06 07:12:08.089318703 +0000 UTC m=+0.073159127 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:12:08 compute-1 ceph-mon[81689]: pgmap v1594: 305 pgs: 305 active+clean; 182 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 160 op/s
Dec 06 07:12:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:12:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3786266720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:12:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:12:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3786266720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:12:08 compute-1 nova_compute[226101]: 2025-12-06 07:12:08.947 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:09 compute-1 nova_compute[226101]: 2025-12-06 07:12:09.103 226109 DEBUG nova.compute.manager [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:09 compute-1 nova_compute[226101]: 2025-12-06 07:12:09.309 226109 INFO nova.compute.manager [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] instance snapshotting
Dec 06 07:12:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:09.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:09 compute-1 nova_compute[226101]: 2025-12-06 07:12:09.795 226109 INFO nova.virt.libvirt.driver [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Beginning live snapshot process
Dec 06 07:12:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:09.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3786266720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:12:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3786266720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:12:09 compute-1 nova_compute[226101]: 2025-12-06 07:12:09.979 226109 DEBUG nova.virt.libvirt.imagebackend [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:12:10 compute-1 nova_compute[226101]: 2025-12-06 07:12:10.247 226109 DEBUG nova.storage.rbd_utils [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] creating snapshot(5736143ad17741db8edeff987f973a08) on rbd image(1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:12:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:12:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:12.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:12 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:12.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:12 compute-1 nova_compute[226101]: 2025-12-06 07:12:12.105 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:13 compute-1 podman[246956]: 2025-12-06 07:12:13.072269091 +0000 UTC m=+0.055553591 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 06 07:12:13 compute-1 podman[246955]: 2025-12-06 07:12:13.124029879 +0000 UTC m=+0.101880902 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:12:13 compute-1 nova_compute[226101]: 2025-12-06 07:12:13.949 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:12:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:14.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:14 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:14.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:14.714 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:12:14 compute-1 nova_compute[226101]: 2025-12-06 07:12:14.715 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:14.717 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:12:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:16.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:16.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:17 compute-1 nova_compute[226101]: 2025-12-06 07:12:17.106 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:17 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:12:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:18 compute-1 nova_compute[226101]: 2025-12-06 07:12:18.682 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Acquiring lock "6538434f-992a-4978-b5ca-284ce0f6a66e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:18 compute-1 nova_compute[226101]: 2025-12-06 07:12:18.682 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:18 compute-1 nova_compute[226101]: 2025-12-06 07:12:18.701 226109 DEBUG nova.compute.manager [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:12:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:18.719 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:18 compute-1 nova_compute[226101]: 2025-12-06 07:12:18.811 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:18 compute-1 nova_compute[226101]: 2025-12-06 07:12:18.812 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:18 compute-1 nova_compute[226101]: 2025-12-06 07:12:18.819 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:12:18 compute-1 nova_compute[226101]: 2025-12-06 07:12:18.819 226109 INFO nova.compute.claims [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Claim successful on node compute-1.ctlplane.example.com
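The Acquiring/acquired pair above (and the matching "released" line at 07:12:25.912) is oslo.concurrency's standard lock tracing. A minimal sketch of the same idiom, assuming only the public lockutils API — the lock name is copied from the log, but instance_claim() and its body are illustrative, not nova's code:

    from oslo_concurrency import lockutils

    # The decorator serializes callers on the named in-process lock and emits
    # the "Acquiring lock ... / Lock ... acquired ... / Lock ... released"
    # DEBUG lines seen in this journal.
    @lockutils.synchronized('compute_resources')
    def instance_claim(instance_uuid):
        # Runs only while the lock is held.
        return 'claimed %s' % instance_uuid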
Dec 06 07:12:18 compute-1 nova_compute[226101]: 2025-12-06 07:12:18.950 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:19 compute-1 nova_compute[226101]: 2025-12-06 07:12:19.193 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:12:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:20.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:20.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:21 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:12:22 compute-1 nova_compute[226101]: 2025-12-06 07:12:22.109 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:22.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:22.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:23 compute-1 nova_compute[226101]: 2025-12-06 07:12:23.952 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:12:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:24.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:12:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:24.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 2762..3274) lease_timeout -- calling new election
Dec 06 07:12:24 compute-1 ceph-mon[81689]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Dec 06 07:12:24 compute-1 ceph-mon[81689]: paxos.2).electionLogic(46) init, last seen epoch 46
Dec 06 07:12:24 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:24 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:12:24 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 13.102766037s
Dec 06 07:12:24 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 13.102766037s
Dec 06 07:12:24 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.102984428s, txc = 0x55b553407800
Dec 06 07:12:24 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:24 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.749808311s, txc = 0x55b552a41800
Dec 06 07:12:24 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:25 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:25 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:25 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:12:25 compute-1 nova_compute[226101]: 2025-12-06 07:12:25.864 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
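This `ceph df` took 6.672 s — it straddles the mon lease timeout and election at 07:12:24; the identical query at 07:12:31 completes in 0.450 s once quorum is back. A sketch of running and parsing the same command by hand, assuming the 'stats'/'pools' JSON layout of recent Ceph releases:

    import json
    import subprocess

    # Same command nova shells out to above; --id/--conf copied from the log.
    cmd = ['ceph', 'df', '--format=json',
           '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
    out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)

    # Top-level 'stats' holds cluster totals; 'pools' holds per-pool usage.
    print('free bytes:', stats['stats']['total_avail_bytes'])
    for pool in stats['pools']:
        print(pool['name'], pool['stats']['bytes_used'])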
Dec 06 07:12:25 compute-1 nova_compute[226101]: 2025-12-06 07:12:25.871 226109 DEBUG nova.compute.provider_tree [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:12:25 compute-1 nova_compute[226101]: 2025-12-06 07:12:25.891 226109 DEBUG nova.scheduler.client.report [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:12:25 compute-1 nova_compute[226101]: 2025-12-06 07:12:25.912 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 7.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:25 compute-1 nova_compute[226101]: 2025-12-06 07:12:25.912 226109 DEBUG nova.compute.manager [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:12:25 compute-1 nova_compute[226101]: 2025-12-06 07:12:25.974 226109 DEBUG nova.compute.manager [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:12:25 compute-1 nova_compute[226101]: 2025-12-06 07:12:25.974 226109 DEBUG nova.network.neutron [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:12:25 compute-1 nova_compute[226101]: 2025-12-06 07:12:25.996 226109 INFO nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.014 226109 DEBUG nova.compute.manager [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.108 226109 DEBUG nova.compute.manager [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.110 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.110 226109 INFO nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Creating image(s)
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:12:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:26.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:26.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.139 226109 DEBUG nova.storage.rbd_utils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] rbd image 6538434f-992a-4978-b5ca-284ce0f6a66e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.165 226109 DEBUG nova.storage.rbd_utils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] rbd image 6538434f-992a-4978-b5ca-284ce0f6a66e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.189 226109 DEBUG nova.storage.rbd_utils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] rbd image 6538434f-992a-4978-b5ca-284ce0f6a66e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.192 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.248 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.249 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.250 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.250 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.280 226109 DEBUG nova.storage.rbd_utils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] rbd image 6538434f-992a-4978-b5ca-284ce0f6a66e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.283 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6538434f-992a-4978-b5ca-284ce0f6a66e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:26 compute-1 nova_compute[226101]: 2025-12-06 07:12:26.393 226109 DEBUG nova.policy [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e090a5aa1bf4e4394b5fbf3418f0e6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84491fd4e10b457ab46523330f88bb07', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:27 compute-1 nova_compute[226101]: 2025-12-06 07:12:27.147 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:27 compute-1 nova_compute[226101]: 2025-12-06 07:12:27.764 226109 DEBUG nova.network.neutron [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Successfully created port: d5dc1a46-0992-4f53-8948-7dd36462be0e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:12:27 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:28.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:28.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.440 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6538434f-992a-4978-b5ca-284ce0f6a66e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:28 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:28 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:28 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:28 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.517 226109 DEBUG nova.storage.rbd_utils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] resizing rbd image 6538434f-992a-4978-b5ca-284ce0f6a66e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
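The import at 07:12:26 plus this resize is the whole root-disk preparation: copy the cached base image into the vms pool, then grow it to the flavor's 1 GiB root disk. Re-run by hand it would look roughly like this (all names copied from the log; 1073741824 bytes = 1024 MiB, and the rbd CLI sizes default to MiB):

    import subprocess

    base = '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef'
    disk = '6538434f-992a-4978-b5ca-284ce0f6a66e_disk'

    # Import the cached base image into the 'vms' pool (same CMD as the log).
    subprocess.run(['rbd', 'import', '--pool', 'vms', base, disk,
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)

    # Grow it to the requested 1073741824 bytes, i.e. --size 1024 (MiB).
    subprocess.run(['rbd', 'resize', '--pool', 'vms', disk, '--size', '1024',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)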
Dec 06 07:12:28 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:28 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:28 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:28 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.921 226109 DEBUG nova.network.neutron [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Successfully updated port: d5dc1a46-0992-4f53-8948-7dd36462be0e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.927 226109 DEBUG nova.objects.instance [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lazy-loading 'migration_context' on Instance uuid 6538434f-992a-4978-b5ca-284ce0f6a66e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.942 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Acquiring lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.942 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Acquired lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.942 226109 DEBUG nova.network.neutron [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.945 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.945 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Ensure instance console log exists: /var/lib/nova/instances/6538434f-992a-4978-b5ca-284ce0f6a66e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.946 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.946 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.946 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:28 compute-1 nova_compute[226101]: 2025-12-06 07:12:28.954 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:29 compute-1 ceph-mon[81689]: paxos.2).electionLogic(47) init, last seen epoch 47, mid-election, bumping
Dec 06 07:12:29 compute-1 nova_compute[226101]: 2025-12-06 07:12:29.308 226109 DEBUG nova.compute.manager [req-83dc9b7a-e476-45fe-8d1c-134c08827f08 req-5a2556a8-39a2-4593-919d-c9f320c4cb39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Received event network-changed-d5dc1a46-0992-4f53-8948-7dd36462be0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:29 compute-1 nova_compute[226101]: 2025-12-06 07:12:29.308 226109 DEBUG nova.compute.manager [req-83dc9b7a-e476-45fe-8d1c-134c08827f08 req-5a2556a8-39a2-4593-919d-c9f320c4cb39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Refreshing instance network info cache due to event network-changed-d5dc1a46-0992-4f53-8948-7dd36462be0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:12:29 compute-1 nova_compute[226101]: 2025-12-06 07:12:29.308 226109 DEBUG oslo_concurrency.lockutils [req-83dc9b7a-e476-45fe-8d1c-134c08827f08 req-5a2556a8-39a2-4593-919d-c9f320c4cb39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:12:29 compute-1 nova_compute[226101]: 2025-12-06 07:12:29.421 226109 DEBUG nova.network.neutron [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:12:29 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:12:29 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:12:29 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:12:29 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:12:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e225 e225: 3 total, 3 up, 3 in
Dec 06 07:12:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e226 e226: 3 total, 3 up, 3 in
Dec 06 07:12:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e227 e227: 3 total, 3 up, 3 in
Dec 06 07:12:29 compute-1 ceph-mon[81689]: pgmap v1596: 305 pgs: 305 active+clean; 202 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 158 op/s
Dec 06 07:12:29 compute-1 ceph-mon[81689]: pgmap v1597: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Dec 06 07:12:29 compute-1 ceph-mon[81689]: pgmap v1598: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 217 op/s
Dec 06 07:12:29 compute-1 ceph-mon[81689]: pgmap v1599: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 188 op/s
Dec 06 07:12:29 compute-1 ceph-mon[81689]: pgmap v1600: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 151 op/s
Dec 06 07:12:29 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 07:12:29 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 07:12:29 compute-1 ceph-mon[81689]: pgmap v1601: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 151 op/s
Dec 06 07:12:29 compute-1 ceph-mon[81689]: pgmap v1602: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 246 KiB/s wr, 101 op/s
Dec 06 07:12:29 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 07:12:29 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:12:29 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:12:29 compute-1 ceph-mon[81689]: osdmap e225: 3 total, 3 up, 3 in
Dec 06 07:12:29 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 46m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:12:29 compute-1 ceph-mon[81689]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 07:12:29 compute-1 ceph-mon[81689]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:12:29 compute-1 ceph-mon[81689]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:12:29 compute-1 ceph-mon[81689]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 07:12:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3405406043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:29 compute-1 ceph-mon[81689]: pgmap v1604: 305 pgs: 305 active+clean; 198 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 17 op/s
Dec 06 07:12:29 compute-1 ceph-mon[81689]: osdmap e226: 3 total, 3 up, 3 in
Dec 06 07:12:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/821615078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:29 compute-1 ceph-mon[81689]: pgmap v1606: 305 pgs: 305 active+clean; 194 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 807 KiB/s rd, 740 KiB/s wr, 64 op/s
Dec 06 07:12:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:12:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:30.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:12:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.334 226109 DEBUG nova.network.neutron [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Updating instance_info_cache with network_info: [{"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Failed to snapshot image: rbd.PermissionError: [errno 1] RBD permission error (error creating snapshot b'5736143ad17741db8edeff987f973a08' from b'1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk')
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3077, in snapshot
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver     metadata['location'] = root_disk.direct_snapshot(
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py", line 1206, in direct_snapshot
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver     self.driver.create_snap(self.rbd_name, snapshot_name, protect=True)
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py", line 465, in create_snap
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver     vol.create_snap(name)
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 193, in doit
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver     result = proxy_call(self._autowrap, f, *args, **kwargs)
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 151, in proxy_call
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver     rv = execute(f, *args, **kwargs)
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 132, in execute
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver     six.reraise(c, e, tb)
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver     raise value
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 86, in tworker
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver     rv = meth(*args, **kwargs)
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver   File "rbd.pyx", line 2790, in rbd.requires_not_closed.wrapper
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver   File "rbd.pyx", line 3549, in rbd.Image.create_snap
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver rbd.PermissionError: [errno 1] RBD permission error (error creating snapshot b'5736143ad17741db8edeff987f973a08' from b'1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk')
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.345 226109 ERROR nova.virt.libvirt.driver 
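errno 1 is EPERM, which the python-rbd binding surfaces as rbd.PermissionError. With the cluster healthy again seconds later, a plausible culprit is the cephx caps of client.openstack not permitting snapshot creation on this pool — an inference from the errno, not something this log confirms. The failing call can be reproduced in isolation (image and snapshot names copied from the traceback; the pool is assumed to be 'vms', as for the other instance disks here):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk') as img:
                try:
                    # nova calls create_snap(..., protect=True): snapshot, then protect.
                    img.create_snap('5736143ad17741db8edeff987f973a08')
                    img.protect_snap('5736143ad17741db8edeff987f973a08')
                except rbd.PermissionError as exc:
                    print('EPERM, as in the traceback:', exc)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()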
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.363 226109 DEBUG nova.compute.utils [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Cleaning up image 1353436e-de17-4a6b-a172-0c01d62ca482 delete_image /usr/lib/python3.9/site-packages/nova/compute/utils.py:1308
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.367 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Releasing lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.367 226109 DEBUG nova.compute.manager [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Instance network_info: |[{"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.370 226109 DEBUG oslo_concurrency.lockutils [req-83dc9b7a-e476-45fe-8d1c-134c08827f08 req-5a2556a8-39a2-4593-919d-c9f320c4cb39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.371 226109 DEBUG nova.network.neutron [req-83dc9b7a-e476-45fe-8d1c-134c08827f08 req-5a2556a8-39a2-4593-919d-c9f320c4cb39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Refreshing network info cache for port d5dc1a46-0992-4f53-8948-7dd36462be0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.375 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Start _get_guest_xml network_info=[{"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.383 226109 WARNING nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.387 226109 DEBUG nova.virt.libvirt.host [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.388 226109 DEBUG nova.virt.libvirt.host [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.391 226109 DEBUG nova.virt.libvirt.host [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.392 226109 DEBUG nova.virt.libvirt.host [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.394 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.394 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.395 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.396 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.396 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.396 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.397 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.397 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.397 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.398 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.398 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.398 226109 DEBUG nova.virt.hardware [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
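The topology trace above boils down to: no flavor or image constraints (all 0:0:0), limits of 65536 per dimension, one vCPU, hence the single 1:1:1 result. A simplified sketch of that enumeration — not nova's actual _get_possible_cpu_topologies, just the idea these lines trace:

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, within the per-dimension maxima logged above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"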
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.402 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e228 e228: 3 total, 3 up, 3 in
Dec 06 07:12:30 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 07:12:30 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 07:12:30 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 07:12:30 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:12:30 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:12:30 compute-1 ceph-mon[81689]: osdmap e227: 3 total, 3 up, 3 in
Dec 06 07:12:30 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 46m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:12:30 compute-1 ceph-mon[81689]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 07:12:30 compute-1 ceph-mon[81689]: Cluster is now healthy
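This closes the election that began with the lease_timeout at 07:12:24: all three mons rejoin and MON_DOWN clears. Quorum can be confirmed from the node with `ceph quorum_status`, assuming the openstack key carries mon read caps (the JSON keys below are standard for this command):

    import json
    import subprocess

    out = subprocess.run(['ceph', 'quorum_status', '--format=json',
                          '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                         capture_output=True, check=True, text=True).stdout
    qs = json.loads(out)
    print('leader:', qs['quorum_leader_name'])   # e.g. compute-0, per the log
    print('quorum:', qs['quorum_names'])         # compute-0, compute-1, compute-2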
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.745 226109 DEBUG oslo_concurrency.lockutils [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.746 226109 DEBUG oslo_concurrency.lockutils [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:12:30 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3363960885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.895 226109 DEBUG oslo_concurrency.processutils [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.933 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.967 226109 DEBUG nova.storage.rbd_utils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] rbd image 6538434f-992a-4978-b5ca-284ce0f6a66e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:30 compute-1 nova_compute[226101]: 2025-12-06 07:12:30.973 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:12:31 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1472456111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.345 226109 DEBUG oslo_concurrency.processutils [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.351 226109 DEBUG nova.compute.provider_tree [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.373 226109 DEBUG nova.scheduler.client.report [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
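The inventory dict above fixes the schedulable capacity of this provider: placement exposes (total - reserved) * allocation_ratio per resource class. A minimal sketch of that arithmetic, using only the values reported for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 (the formula is the placement convention, not something this log states):

    # Capacity implied by the inventory logged above; formula assumed
    # from the placement service: (total - reserved) * allocation_ratio.
    vcpu_cap = (8 - 0) * 4.0        # 32 schedulable VCPUs
    ram_cap = (7680 - 512) * 1.0    # 7168 MB schedulable RAM
    disk_cap = (20 - 1) * 0.9       # ~17 GB schedulable disk
    print(vcpu_cap, ram_cap, disk_cap)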
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.414 226109 DEBUG oslo_concurrency.lockutils [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.414 226109 INFO nova.compute.manager [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Successfully reverted task state from image_uploading on failure for instance.
Dec 06 07:12:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:12:31 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2845239760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server [None req-d125ce1a-a379-416f-92e9-dd554c984c30 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Exception during message handling: rbd.PermissionError: [errno 1] RBD permission error (error creating snapshot b'5736143ad17741db8edeff987f973a08' from b'1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk')
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     raise self.value
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 186, in decorated_function
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     LOG.warning("Failed to revert task state for instance. "
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     raise self.value
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 157, in decorated_function
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1439, in decorated_function
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     raise self.value
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 234, in decorated_function
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     compute_utils.delete_image(
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     raise self.value
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 230, in decorated_function
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     return function(self, context, image_id, instance,
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4338, in snapshot_instance
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     self._snapshot_instance(context, image_id, instance,
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4371, in _snapshot_instance
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     self.driver.snapshot(context, instance, image_id,
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3158, in snapshot
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     root_disk.cleanup_direct_snapshot(
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     raise self.value
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3077, in snapshot
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     metadata['location'] = root_disk.direct_snapshot(
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py", line 1206, in direct_snapshot
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     self.driver.create_snap(self.rbd_name, snapshot_name, protect=True)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py", line 465, in create_snap
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     vol.create_snap(name)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 193, in doit
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     result = proxy_call(self._autowrap, f, *args, **kwargs)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 151, in proxy_call
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     rv = execute(f, *args, **kwargs)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 132, in execute
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     six.reraise(c, e, tb)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     raise value
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 86, in tworker
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server     rv = meth(*args, **kwargs)
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "rbd.pyx", line 2790, in rbd.requires_not_closed.wrapper
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server   File "rbd.pyx", line 3549, in rbd.Image.create_snap
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server rbd.PermissionError: [errno 1] RBD permission error (error creating snapshot b'5736143ad17741db8edeff987f973a08' from b'1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk')
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.416 226109 ERROR oslo_messaging.rpc.server 
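The traceback above is the root failure for instance 1445e39c-6c8c-4a8f-9eab-64e13b262e40: the direct RBD snapshot path (imagebackend.direct_snapshot -> rbd_utils.create_snap) got EPERM from the cluster, which is consistent with the client.openstack cephx key lacking write-class caps on the vms pool rather than a nova bug; `ceph auth get client.openstack` shows the active caps, and snapshotting images in a pool typically needs something like osd 'profile rbd pool=vms'. A minimal reproduction sketch with the python-rados/python-rbd bindings, assuming the same credentials, pool, and image name seen in this log:

    import rados
    import rbd

    # Reproduce the failing call outside nova: open the vms pool as
    # client.openstack and attempt a snapshot on the instance disk.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            image = rbd.Image(ioctx, '1445e39c-6c8c-4a8f-9eab-64e13b262e40_disk')
            try:
                image.create_snap('captest')  # 'captest' is a hypothetical test name;
                image.remove_snap('captest')  # raises rbd.PermissionError on EPERM
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()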
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.431 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
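The repeated `ceph mon dump --format=json` calls above are how nova's rbd_utils discovers monitor endpoints; the addresses it extracts become the three <host> elements on each rbd <source> in the domain XML that follows. A sketch of the same lookup, running the command exactly as logged (the 'mons'/'name'/'addr' JSON keys are assumed from the usual Ceph output schema):

    import json
    import subprocess

    # Run the command nova logs above and list the monitors it would use.
    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    for mon in json.loads(out)['mons']:
        print(mon['name'], mon['addr'])  # e.g. compute-0 192.168.122.100:6789/0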
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.432 226109 DEBUG nova.virt.libvirt.vif [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=56,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEDHs3oEJey8VHUpTQ60S9SUwR4J1otbSj2hYCdlgIn/308O9184J7uLWjclqCGjpxdxNPhKZ1WyKTsvUdEjvkgROyZlA1Ykr5MyPbQDeyTaadVbVEV7g8pBoc6Ajlbiww==',key_name='tempest-keypair-1981076157',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84491fd4e10b457ab46523330f88bb07',ramdisk_id='',reservation_id='r-n0koef70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-1552629590',owner_user_name='tempest-ServersTestFqdnHostnames-1552629590-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:12:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e090a5aa1bf4e4394b5fbf3418f0e6d',uuid=6538434f-992a-4978-b5ca-284ce0f6a66e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.432 226109 DEBUG nova.network.os_vif_util [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Converting VIF {"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.433 226109 DEBUG nova.network.os_vif_util [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=d5dc1a46-0992-4f53-8948-7dd36462be0e,network=Network(2516fad8-9125-435f-927d-f6e26c906c07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5dc1a46-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.434 226109 DEBUG nova.objects.instance [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6538434f-992a-4978-b5ca-284ce0f6a66e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.451 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:12:31 compute-1 nova_compute[226101]:   <uuid>6538434f-992a-4978-b5ca-284ce0f6a66e</uuid>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   <name>instance-00000038</name>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <nova:name>guest-instance-1.domain.com</nova:name>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:12:30</nova:creationTime>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <nova:user uuid="5e090a5aa1bf4e4394b5fbf3418f0e6d">tempest-ServersTestFqdnHostnames-1552629590-project-member</nova:user>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <nova:project uuid="84491fd4e10b457ab46523330f88bb07">tempest-ServersTestFqdnHostnames-1552629590</nova:project>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <nova:port uuid="d5dc1a46-0992-4f53-8948-7dd36462be0e">
Dec 06 07:12:31 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <system>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <entry name="serial">6538434f-992a-4978-b5ca-284ce0f6a66e</entry>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <entry name="uuid">6538434f-992a-4978-b5ca-284ce0f6a66e</entry>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     </system>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   <os>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   </os>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   <features>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   </features>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/6538434f-992a-4978-b5ca-284ce0f6a66e_disk">
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       </source>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/6538434f-992a-4978-b5ca-284ce0f6a66e_disk.config">
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       </source>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:12:31 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:8b:d1:d1"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <target dev="tapd5dc1a46-09"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/6538434f-992a-4978-b5ca-284ce0f6a66e/console.log" append="off"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <video>
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     </video>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:12:31 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:12:31 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:12:31 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:12:31 compute-1 nova_compute[226101]: </domain>
Dec 06 07:12:31 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
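Once the guest is defined, the rendered XML above can be compared against libvirt's live view of instance-00000038 (the equivalent of `virsh dumpxml instance-00000038`); a minimal sketch with the libvirt Python bindings:

    import libvirt

    # Fetch the live definition of the domain rendered above.
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-00000038')
        print(dom.XMLDesc())  # flags default to 0: the current live XML
    finally:
        conn.close()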
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.452 226109 DEBUG nova.compute.manager [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Preparing to wait for external event network-vif-plugged-d5dc1a46-0992-4f53-8948-7dd36462be0e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.453 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Acquiring lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.453 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.454 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.455 226109 DEBUG nova.virt.libvirt.vif [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=56,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEDHs3oEJey8VHUpTQ60S9SUwR4J1otbSj2hYCdlgIn/308O9184J7uLWjclqCGjpxdxNPhKZ1WyKTsvUdEjvkgROyZlA1Ykr5MyPbQDeyTaadVbVEV7g8pBoc6Ajlbiww==',key_name='tempest-keypair-1981076157',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84491fd4e10b457ab46523330f88bb07',ramdisk_id='',reservation_id='r-n0koef70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-1552629590',owner_user_name='tempest-ServersTestFqdnHostnames-1552629590-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:12:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e090a5aa1bf4e4394b5fbf3418f0e6d',uuid=6538434f-992a-4978-b5ca-284ce0f6a66e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.455 226109 DEBUG nova.network.os_vif_util [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Converting VIF {"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.456 226109 DEBUG nova.network.os_vif_util [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=d5dc1a46-0992-4f53-8948-7dd36462be0e,network=Network(2516fad8-9125-435f-927d-f6e26c906c07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5dc1a46-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.456 226109 DEBUG os_vif [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=d5dc1a46-0992-4f53-8948-7dd36462be0e,network=Network(2516fad8-9125-435f-927d-f6e26c906c07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5dc1a46-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.457 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.457 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.458 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.461 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.461 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5dc1a46-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.462 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5dc1a46-09, col_values=(('external_ids', {'iface-id': 'd5dc1a46-0992-4f53-8948-7dd36462be0e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:d1:d1', 'vm-uuid': '6538434f-992a-4978-b5ca-284ce0f6a66e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:31 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:12:31 compute-1 NetworkManager[49031]: <info>  [1765005151.4644] manager: (tapd5dc1a46-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Dec 06 07:12:31 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.463 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.466 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.472 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.472 226109 INFO os_vif [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=d5dc1a46-0992-4f53-8948-7dd36462be0e,network=Network(2516fad8-9125-435f-927d-f6e26c906c07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5dc1a46-09')
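"Successfully plugged vif" summarizes the two ovsdbapp transactions above: AddPortCommand attaches tapd5dc1a46-09 to br-int, and DbSetCommand writes the external_ids that let ovn-controller bind the port to its logical switch port. An approximate CLI equivalent (every value copied from this log; a sketch of the effect, not os-vif's actual code path):

    import subprocess

    # ovs-vsctl equivalent of the AddPortCommand/DbSetCommand pair above.
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tapd5dc1a46-09',
         '--', 'set', 'Interface', 'tapd5dc1a46-09',
         'external_ids:iface-id=d5dc1a46-0992-4f53-8948-7dd36462be0e',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:8b:d1:d1',
         'external_ids:vm-uuid=6538434f-992a-4978-b5ca-284ce0f6a66e'],
        check=True)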
Dec 06 07:12:31 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:12:31 compute-1 ceph-mon[81689]: mon.compute-1 calling monitor election
Dec 06 07:12:31 compute-1 ceph-mon[81689]: pgmap v1608: 305 pgs: 305 active+clean; 203 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.7 MiB/s wr, 114 op/s
Dec 06 07:12:31 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 07:12:31 compute-1 ceph-mon[81689]: osdmap e228: 3 total, 3 up, 3 in
Dec 06 07:12:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3363960885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1472456111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2845239760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.654 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.654 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.654 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] No VIF found with MAC fa:16:3e:8b:d1:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.655 226109 INFO nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Using config drive
Dec 06 07:12:31 compute-1 nova_compute[226101]: 2025-12-06 07:12:31.678 226109 DEBUG nova.storage.rbd_utils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] rbd image 6538434f-992a-4978-b5ca-284ce0f6a66e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:31 compute-1 ovn_controller[130279]: 2025-12-06T07:12:31Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1c:61:ee 10.100.0.11
Dec 06 07:12:31 compute-1 ovn_controller[130279]: 2025-12-06T07:12:31Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1c:61:ee 10.100.0.11
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.079 226109 INFO nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Creating config drive at /var/lib/nova/instances/6538434f-992a-4978-b5ca-284ce0f6a66e/disk.config
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.091 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6538434f-992a-4978-b5ca-284ce0f6a66e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa2mg3a_p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:32.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:12:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:32.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.178 226109 DEBUG nova.network.neutron [req-83dc9b7a-e476-45fe-8d1c-134c08827f08 req-5a2556a8-39a2-4593-919d-c9f320c4cb39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Updated VIF entry in instance network info cache for port d5dc1a46-0992-4f53-8948-7dd36462be0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.179 226109 DEBUG nova.network.neutron [req-83dc9b7a-e476-45fe-8d1c-134c08827f08 req-5a2556a8-39a2-4593-919d-c9f320c4cb39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Updating instance_info_cache with network_info: [{"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.206 226109 DEBUG oslo_concurrency.lockutils [req-83dc9b7a-e476-45fe-8d1c-134c08827f08 req-5a2556a8-39a2-4593-919d-c9f320c4cb39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.229 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6538434f-992a-4978-b5ca-284ce0f6a66e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa2mg3a_p" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.262 226109 DEBUG nova.storage.rbd_utils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] rbd image 6538434f-992a-4978-b5ca-284ce0f6a66e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.267 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6538434f-992a-4978-b5ca-284ce0f6a66e/disk.config 6538434f-992a-4978-b5ca-284ce0f6a66e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
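The config drive is assembled locally with mkisofs (volume label config-2, which cloud-init probes for) and then imported into the vms pool as 6538434f-992a-4978-b5ca-284ce0f6a66e_disk.config, matching the cdrom <source> element in the domain XML above and resolving the earlier "rbd image ... does not exist" probes. A hedged sketch to confirm the import landed, reusing the credentials from this log:

    import subprocess

    # Inspect the imported config-drive image in the vms pool.
    subprocess.run(
        ['rbd', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
         '-p', 'vms', 'info',
         '6538434f-992a-4978-b5ca-284ce0f6a66e_disk.config'],
        check=True)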
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.337 226109 DEBUG oslo_concurrency.lockutils [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.338 226109 DEBUG oslo_concurrency.lockutils [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.338 226109 DEBUG oslo_concurrency.lockutils [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.338 226109 DEBUG oslo_concurrency.lockutils [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.338 226109 DEBUG oslo_concurrency.lockutils [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
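The acquire/release pairs above are oslo.concurrency's lockutils at work: terminate_instance serializes on the instance UUID, and clear_events_for_instance takes a short-lived "<uuid>-events" lock nested inside it. A minimal sketch of the same pattern, assuming placeholder function bodies rather than Nova's actual code:

    # Sketch of the locking pattern in the log: an outer per-instance
    # lock with a nested "<uuid>-events" lock guarding the event queue.
    from oslo_concurrency import lockutils

    instance_uuid = "1445e39c-6c8c-4a8f-9eab-64e13b262e40"

    with lockutils.lock(instance_uuid):                   # do_terminate_instance
        with lockutils.lock(instance_uuid + "-events"):   # _clear_events
            pass  # pop/clear any queued external events here

    # Equivalent decorator form for a named critical section:
    @lockutils.synchronized(instance_uuid)
    def do_terminate_instance():
        pass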
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.340 226109 INFO nova.compute.manager [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Terminating instance
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.342 226109 DEBUG nova.compute.manager [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:12:32 compute-1 kernel: tapf7172ad0-f2 (unregistering): left promiscuous mode
Dec 06 07:12:32 compute-1 NetworkManager[49031]: <info>  [1765005152.6448] device (tapf7172ad0-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.647 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.655 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:32 compute-1 ovn_controller[130279]: 2025-12-06T07:12:32Z|00185|binding|INFO|Releasing lport f7172ad0-f24b-4d05-81ac-adaec7bac7d3 from this chassis (sb_readonly=0)
Dec 06 07:12:32 compute-1 ovn_controller[130279]: 2025-12-06T07:12:32Z|00186|binding|INFO|Setting lport f7172ad0-f24b-4d05-81ac-adaec7bac7d3 down in Southbound
Dec 06 07:12:32 compute-1 ovn_controller[130279]: 2025-12-06T07:12:32Z|00187|binding|INFO|Removing iface tapf7172ad0-f2 ovn-installed in OVS
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.658 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:32.664 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:61:ee 10.100.0.11'], port_security=['fa:16:3e:1c:61:ee 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1445e39c-6c8c-4a8f-9eab-64e13b262e40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49680c77-2db5-4d0f-bd5b-08899440c38e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c297e84c3a9f48a9a82aebc9e5ade875', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44ffcb05-d145-4d07-b800-cc9e3941da49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2602f87-ec0b-4d1c-8f8b-eee8bbcfddb2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f7172ad0-f24b-4d05-81ac-adaec7bac7d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:12:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:32.665 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f7172ad0-f24b-4d05-81ac-adaec7bac7d3 in datapath 49680c77-2db5-4d0f-bd5b-08899440c38e unbound from our chassis
Dec 06 07:12:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:32.666 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 49680c77-2db5-4d0f-bd5b-08899440c38e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:12:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:32.667 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9799c56e-8af2-4e9c-9bb7-b340c7543aca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:32.667 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e namespace which is not needed anymore
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.676 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:32 compute-1 ceph-mon[81689]: pgmap v1610: 305 pgs: 305 active+clean; 257 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 8.2 MiB/s wr, 151 op/s
Dec 06 07:12:32 compute-1 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000036.scope: Deactivated successfully.
Dec 06 07:12:32 compute-1 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000036.scope: Consumed 14.374s CPU time.
Dec 06 07:12:32 compute-1 systemd-machined[190302]: Machine qemu-26-instance-00000036 terminated.
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.779 226109 INFO nova.virt.libvirt.driver [-] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Instance destroyed successfully.
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.780 226109 DEBUG nova.objects.instance [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lazy-loading 'resources' on Instance uuid 1445e39c-6c8c-4a8f-9eab-64e13b262e40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.801 226109 DEBUG nova.virt.libvirt.vif [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-36807915',display_name='tempest-ImagesOneServerNegativeTestJSON-server-36807915',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-36807915',id=54,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:12:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c297e84c3a9f48a9a82aebc9e5ade875',ramdisk_id='',reservation_id='r-swxu2zy7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-324135674',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-324135674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:12:30Z,user_data=None,user_id='a1ed181a1103481fa4d0b29ce1009dca',uuid=1445e39c-6c8c-4a8f-9eab-64e13b262e40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "address": "fa:16:3e:1c:61:ee", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7172ad0-f2", "ovs_interfaceid": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.801 226109 DEBUG nova.network.os_vif_util [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converting VIF {"id": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "address": "fa:16:3e:1c:61:ee", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7172ad0-f2", "ovs_interfaceid": "f7172ad0-f24b-4d05-81ac-adaec7bac7d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.802 226109 DEBUG nova.network.os_vif_util [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:61:ee,bridge_name='br-int',has_traffic_filtering=True,id=f7172ad0-f24b-4d05-81ac-adaec7bac7d3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7172ad0-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.802 226109 DEBUG os_vif [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:61:ee,bridge_name='br-int',has_traffic_filtering=True,id=f7172ad0-f24b-4d05-81ac-adaec7bac7d3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7172ad0-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.804 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.804 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf7172ad0-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.816 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.819 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.823 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.826 226109 INFO os_vif [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:61:ee,bridge_name='br-int',has_traffic_filtering=True,id=f7172ad0-f24b-4d05-81ac-adaec7bac7d3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7172ad0-f2')
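The unplug ends with os-vif removing the tap interface from br-int via an ovsdbapp DelPortCommand (fd 27 above is the agent's ovsdb connection). A sketch of the equivalent call through ovsdbapp's Open_vSwitch schema API; the local socket path is an assumption for a stock install:

    # Sketch: delete a tap port from br-int the way os-vif's ovs plugin
    # does. DelPortCommand with if_exists=True is a no-op if the port
    # is already gone.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")  # assumed socket
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    ovs.del_port("tapf7172ad0-f2", bridge="br-int",
                 if_exists=True).execute(check_error=True)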
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.924 226109 DEBUG nova.compute.manager [req-d4212ba7-e975-4f39-946c-82edd42dc275 req-827d15a5-366e-4ca9-bc58-bf28266175c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Received event network-vif-unplugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.924 226109 DEBUG oslo_concurrency.lockutils [req-d4212ba7-e975-4f39-946c-82edd42dc275 req-827d15a5-366e-4ca9-bc58-bf28266175c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.925 226109 DEBUG oslo_concurrency.lockutils [req-d4212ba7-e975-4f39-946c-82edd42dc275 req-827d15a5-366e-4ca9-bc58-bf28266175c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.926 226109 DEBUG oslo_concurrency.lockutils [req-d4212ba7-e975-4f39-946c-82edd42dc275 req-827d15a5-366e-4ca9-bc58-bf28266175c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.926 226109 DEBUG nova.compute.manager [req-d4212ba7-e975-4f39-946c-82edd42dc275 req-827d15a5-366e-4ca9-bc58-bf28266175c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] No waiting events found dispatching network-vif-unplugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:12:32 compute-1 nova_compute[226101]: 2025-12-06 07:12:32.926 226109 DEBUG nova.compute.manager [req-d4212ba7-e975-4f39-946c-82edd42dc275 req-827d15a5-366e-4ca9-bc58-bf28266175c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Received event network-vif-unplugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:12:33 compute-1 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[246859]: [NOTICE]   (246864) : haproxy version is 2.8.14-c23fe91
Dec 06 07:12:33 compute-1 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[246859]: [NOTICE]   (246864) : path to executable is /usr/sbin/haproxy
Dec 06 07:12:33 compute-1 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[246859]: [WARNING]  (246864) : Exiting Master process...
Dec 06 07:12:33 compute-1 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[246859]: [WARNING]  (246864) : Exiting Master process...
Dec 06 07:12:33 compute-1 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[246859]: [ALERT]    (246864) : Current worker (246866) exited with code 143 (Terminated)
Dec 06 07:12:33 compute-1 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[246859]: [WARNING]  (246864) : All workers exited. Exiting... (0)
Dec 06 07:12:33 compute-1 systemd[1]: libpod-ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81.scope: Deactivated successfully.
Dec 06 07:12:33 compute-1 podman[247354]: 2025-12-06 07:12:33.038578337 +0000 UTC m=+0.266083547 container died ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:12:33 compute-1 nova_compute[226101]: 2025-12-06 07:12:33.070 226109 DEBUG oslo_concurrency.processutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6538434f-992a-4978-b5ca-284ce0f6a66e/disk.config 6538434f-992a-4978-b5ca-284ce0f6a66e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.804s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:33 compute-1 nova_compute[226101]: 2025-12-06 07:12:33.072 226109 INFO nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Deleting local config drive /var/lib/nova/instances/6538434f-992a-4978-b5ca-284ce0f6a66e/disk.config because it was imported into RBD.
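That closes the config-drive flow for instance 6538434f: mkisofs builds the ISO locally, rbd import copies it into the vms pool as <uuid>_disk.config, and the local file is deleted once the RBD copy exists. A sketch of the same three steps using oslo.concurrency's processutils, with flags and paths taken from the logged commands (the /tmp source directory is the tempdir from the log):

    # Sketch of the logged config-drive flow: ISO -> RBD -> delete local.
    import os
    from oslo_concurrency import processutils

    inst = "6538434f-992a-4978-b5ca-284ce0f6a66e"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # 1. Build the config-drive ISO (same flags as the logged mkisofs).
    processutils.execute(
        "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
        "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmpa2mg3a_p")

    # 2. Import it into the Ceph 'vms' pool as <uuid>_disk.config.
    processutils.execute(
        "rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf")

    # 3. Drop the local copy now that it lives in RBD.
    os.remove(iso)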
Dec 06 07:12:33 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81-userdata-shm.mount: Deactivated successfully.
Dec 06 07:12:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-f0261c66c5389c9989c22bd735345898f4b1f36e9f0d024a3c1efec22ce2e5c1-merged.mount: Deactivated successfully.
Dec 06 07:12:33 compute-1 podman[247354]: 2025-12-06 07:12:33.08648353 +0000 UTC m=+0.313988750 container cleanup ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:12:33 compute-1 systemd[1]: libpod-conmon-ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81.scope: Deactivated successfully.
Dec 06 07:12:33 compute-1 kernel: tapd5dc1a46-09: entered promiscuous mode
Dec 06 07:12:33 compute-1 systemd-udevd[247329]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:12:33 compute-1 NetworkManager[49031]: <info>  [1765005153.1455] manager: (tapd5dc1a46-09): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Dec 06 07:12:33 compute-1 ovn_controller[130279]: 2025-12-06T07:12:33Z|00188|binding|INFO|Claiming lport d5dc1a46-0992-4f53-8948-7dd36462be0e for this chassis.
Dec 06 07:12:33 compute-1 ovn_controller[130279]: 2025-12-06T07:12:33Z|00189|binding|INFO|d5dc1a46-0992-4f53-8948-7dd36462be0e: Claiming fa:16:3e:8b:d1:d1 10.100.0.13
Dec 06 07:12:33 compute-1 nova_compute[226101]: 2025-12-06 07:12:33.146 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:33 compute-1 nova_compute[226101]: 2025-12-06 07:12:33.153 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.155 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d1:d1 10.100.0.13'], port_security=['fa:16:3e:8b:d1:d1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6538434f-992a-4978-b5ca-284ce0f6a66e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2516fad8-9125-435f-927d-f6e26c906c07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84491fd4e10b457ab46523330f88bb07', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e0ef9ba-7bb6-432f-bfc8-b185a67387e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b4b25ba-d8c7-4eb1-bcbd-b5597e2987ee, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=d5dc1a46-0992-4f53-8948-7dd36462be0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:12:33 compute-1 NetworkManager[49031]: <info>  [1765005153.1610] device (tapd5dc1a46-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:12:33 compute-1 NetworkManager[49031]: <info>  [1765005153.1620] device (tapd5dc1a46-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:12:33 compute-1 systemd-machined[190302]: New machine qemu-27-instance-00000038.
Dec 06 07:12:33 compute-1 nova_compute[226101]: 2025-12-06 07:12:33.208 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:33 compute-1 systemd[1]: Started Virtual Machine qemu-27-instance-00000038.
Dec 06 07:12:33 compute-1 ovn_controller[130279]: 2025-12-06T07:12:33Z|00190|binding|INFO|Setting lport d5dc1a46-0992-4f53-8948-7dd36462be0e ovn-installed in OVS
Dec 06 07:12:33 compute-1 ovn_controller[130279]: 2025-12-06T07:12:33Z|00191|binding|INFO|Setting lport d5dc1a46-0992-4f53-8948-7dd36462be0e up in Southbound
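ovn-controller has now claimed the new lport, marked it ovn-installed in OVS, and set it up in the Southbound DB. The same state can be inspected directly from the Southbound database; a sketch using the standard ovn-sbctl CLI:

    # Sketch: query the Southbound Port_Binding row that ovn-controller
    # just claimed for this chassis.
    import subprocess

    lport = "d5dc1a46-0992-4f53-8948-7dd36462be0e"
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # shows chassis, up, mac/IP, external_ids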
Dec 06 07:12:33 compute-1 nova_compute[226101]: 2025-12-06 07:12:33.217 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:33 compute-1 podman[247408]: 2025-12-06 07:12:33.860796541 +0000 UTC m=+0.750163759 container remove ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.867 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[390297d7-2406-4b7b-8aa5-a555db832797]: (4, ('Sat Dec  6 07:12:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e (ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81)\nca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81\nSat Dec  6 07:12:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e (ca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81)\nca7b2e056f03415f9b411310f7586d127dfb9ad54f1ad7a98018683added3d81\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.869 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[39460ece-efe7-4a4c-a808-d3145a228735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.870 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49680c77-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e229 e229: 3 total, 3 up, 3 in
Dec 06 07:12:33 compute-1 nova_compute[226101]: 2025-12-06 07:12:33.909 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:33 compute-1 kernel: tap49680c77-20: left promiscuous mode
Dec 06 07:12:33 compute-1 nova_compute[226101]: 2025-12-06 07:12:33.911 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.913 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8f0b0d-1ed8-448a-a1db-457877a7ca46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:33 compute-1 nova_compute[226101]: 2025-12-06 07:12:33.922 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.928 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[778f5bde-e3a9-4718-844d-69beccf15001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.930 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[db1b2819-bb60-4972-9749-4a8c5779751f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.946 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2b9940f7-e946-4f4b-a515-50fed6b0a8b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539992, 'reachable_time': 28208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247476, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:33 compute-1 systemd[1]: run-netns-ovnmeta\x2d49680c77\x2d2db5\x2d4d0f\x2dbd5b\x2d08899440c38e.mount: Deactivated successfully.
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.949 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
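With the last VIF gone from network 49680c77, the agent tears down its metadata plumbing: the haproxy container is stopped and removed, the tap49680c77-20 veth is pulled from OVS, and the ovnmeta namespace itself is deleted through neutron's privileged ip_lib. A minimal sketch of that final step via pyroute2, which ip_lib uses underneath (error handling simplified; requires privileges):

    # Sketch: remove an ovnmeta-<network> namespace the way neutron's
    # privileged remove_netns helper does, via pyroute2.
    from pyroute2 import netns

    ns = "ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e"
    if ns in netns.listnetns():
        netns.remove(ns)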
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.950 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a612a5cd-60e0-4432-8b0a-64497ab1de42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.951 139580 INFO neutron.agent.ovn.metadata.agent [-] Port d5dc1a46-0992-4f53-8948-7dd36462be0e in datapath 2516fad8-9125-435f-927d-f6e26c906c07 unbound from our chassis
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.953 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2516fad8-9125-435f-927d-f6e26c906c07
Dec 06 07:12:33 compute-1 nova_compute[226101]: 2025-12-06 07:12:33.955 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.962 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[182417d5-0cbc-4829-8966-ae2779b46633]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.963 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2516fad8-91 in ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.966 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2516fad8-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.966 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1bac8b-414e-4a24-a103-85c820e777f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.967 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[83ba57ba-57b0-4e0c-bbd2-0712378c2b65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:33.978 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[df38824d-2090-4a40-8bce-6bc86d2b0523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.001 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2b943fd9-2cd9-45f0-b5f1-f8e8b2706313]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.027 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5b42b65e-cfbf-4c93-bcd7-19bb194be9da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.032 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[05079154-1102-4d8a-b225-c4e3d4aac576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 NetworkManager[49031]: <info>  [1765005154.0331] manager: (tap2516fad8-90): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.051 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005154.0512235, 6538434f-992a-4978-b5ca-284ce0f6a66e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.052 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] VM Started (Lifecycle Event)
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.059 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b610bfff-9cd6-4265-9e05-86b9d9cb6448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.062 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[247b760d-1f7f-4499-9454-753559c281e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 NetworkManager[49031]: <info>  [1765005154.0793] device (tap2516fad8-90): carrier: link connected
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.084 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4d3f82-5dbd-47b9-a39a-0f21e2feb48a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.098 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[27766caf-b455-4113-a131-a3fd1cbdb256]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2516fad8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:19:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543358, 'reachable_time': 15838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247510, 'error': None, 'target': 'ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.112 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ecb328-5a23-421b-b23b-b7f7b516e297]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:19d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543358, 'tstamp': 543358}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247511, 'error': None, 'target': 'ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.125 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7f8145-d80f-4ea7-b655-61c2423353a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2516fad8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:19:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543358, 'reachable_time': 15838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247512, 'error': None, 'target': 'ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:34.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:34.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.159 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbe611d-4ceb-4f58-8d3b-d7ad3a3674d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.171 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.175 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005154.0513697, 6538434f-992a-4978-b5ca-284ce0f6a66e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.175 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] VM Paused (Lifecycle Event)
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.191 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.195 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.213 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.233 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2bba475b-1095-4b50-a582-a36cd3c7893b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.234 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2516fad8-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.234 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.235 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2516fad8-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.236 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:34 compute-1 kernel: tap2516fad8-90: entered promiscuous mode
Dec 06 07:12:34 compute-1 NetworkManager[49031]: <info>  [1765005154.2369] manager: (tap2516fad8-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.238 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2516fad8-90, col_values=(('external_ids', {'iface-id': 'db0d80e7-075c-4b65-aebe-df56a261a397'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:34 compute-1 ovn_controller[130279]: 2025-12-06T07:12:34Z|00192|binding|INFO|Releasing lport db0d80e7-075c-4b65-aebe-df56a261a397 from this chassis (sb_readonly=0)
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.241 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2516fad8-9125-435f-927d-f6e26c906c07.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2516fad8-9125-435f-927d-f6e26c906c07.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.241 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[69420fad-36b0-4f66-9ae3-88359dcf03ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.242 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-2516fad8-9125-435f-927d-f6e26c906c07
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/2516fad8-9125-435f-927d-f6e26c906c07.pid.haproxy
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 2516fad8-9125-435f-927d-f6e26c906c07
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:12:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:34.243 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07', 'env', 'PROCESS_TAG=haproxy-2516fad8-9125-435f-927d-f6e26c906c07', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2516fad8-9125-435f-927d-f6e26c906c07.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.253 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:34 compute-1 podman[247544]: 2025-12-06 07:12:34.549214882 +0000 UTC m=+0.021312116 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:12:34 compute-1 podman[247544]: 2025-12-06 07:12:34.742614895 +0000 UTC m=+0.214712109 container create ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.759 226109 INFO nova.virt.libvirt.driver [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Deleting instance files /var/lib/nova/instances/1445e39c-6c8c-4a8f-9eab-64e13b262e40_del
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.759 226109 INFO nova.virt.libvirt.driver [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Deletion of /var/lib/nova/instances/1445e39c-6c8c-4a8f-9eab-64e13b262e40_del complete
Dec 06 07:12:34 compute-1 systemd[1]: Started libpod-conmon-ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b.scope.
Dec 06 07:12:34 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:12:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e307407e48075dd8257e2f82b3fea50b4905dd7eaf07993d36d623fa7c1b2580/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.816 226109 INFO nova.compute.manager [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Took 2.47 seconds to destroy the instance on the hypervisor.
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.816 226109 DEBUG oslo.service.loopingcall [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.817 226109 DEBUG nova.compute.manager [-] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:12:34 compute-1 nova_compute[226101]: 2025-12-06 07:12:34.817 226109 DEBUG nova.network.neutron [-] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:12:34 compute-1 podman[247544]: 2025-12-06 07:12:34.828660629 +0000 UTC m=+0.300757873 container init ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:12:34 compute-1 podman[247544]: 2025-12-06 07:12:34.83831179 +0000 UTC m=+0.310409004 container start ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:12:34 compute-1 neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07[247559]: [NOTICE]   (247563) : New worker (247565) forked
Dec 06 07:12:34 compute-1 neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07[247559]: [NOTICE]   (247563) : Loading success.
Dec 06 07:12:34 compute-1 ceph-mon[81689]: pgmap v1611: 305 pgs: 305 active+clean; 303 MiB data, 675 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 11 MiB/s wr, 249 op/s
Dec 06 07:12:34 compute-1 ceph-mon[81689]: osdmap e229: 3 total, 3 up, 3 in
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.089 226109 DEBUG nova.compute.manager [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Received event network-vif-plugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.089 226109 DEBUG oslo_concurrency.lockutils [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.090 226109 DEBUG oslo_concurrency.lockutils [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.090 226109 DEBUG oslo_concurrency.lockutils [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.090 226109 DEBUG nova.compute.manager [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] No waiting events found dispatching network-vif-plugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.090 226109 WARNING nova.compute.manager [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Received unexpected event network-vif-plugged-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 for instance with vm_state active and task_state deleting.
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.090 226109 DEBUG nova.compute.manager [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Received event network-vif-plugged-d5dc1a46-0992-4f53-8948-7dd36462be0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.091 226109 DEBUG oslo_concurrency.lockutils [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.091 226109 DEBUG oslo_concurrency.lockutils [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.091 226109 DEBUG oslo_concurrency.lockutils [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.091 226109 DEBUG nova.compute.manager [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Processing event network-vif-plugged-d5dc1a46-0992-4f53-8948-7dd36462be0e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.092 226109 DEBUG nova.compute.manager [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Received event network-vif-plugged-d5dc1a46-0992-4f53-8948-7dd36462be0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.092 226109 DEBUG oslo_concurrency.lockutils [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.092 226109 DEBUG oslo_concurrency.lockutils [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.092 226109 DEBUG oslo_concurrency.lockutils [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.092 226109 DEBUG nova.compute.manager [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] No waiting events found dispatching network-vif-plugged-d5dc1a46-0992-4f53-8948-7dd36462be0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.093 226109 WARNING nova.compute.manager [req-2bc87dee-5dde-42c8-bb95-2c9661bd7a50 req-84d66d35-2fb8-4a0c-b706-a9d420a4188b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Received unexpected event network-vif-plugged-d5dc1a46-0992-4f53-8948-7dd36462be0e for instance with vm_state building and task_state spawning.
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.093 226109 DEBUG nova.compute.manager [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.097 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005155.097514, 6538434f-992a-4978-b5ca-284ce0f6a66e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.098 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] VM Resumed (Lifecycle Event)
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.099 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.102 226109 INFO nova.virt.libvirt.driver [-] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Instance spawned successfully.
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.102 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.117 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.121 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.126 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.127 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.127 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.127 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.128 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.128 226109 DEBUG nova.virt.libvirt.driver [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.143 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.189 226109 INFO nova.compute.manager [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Took 9.08 seconds to spawn the instance on the hypervisor.
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.190 226109 DEBUG nova.compute.manager [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.272 226109 INFO nova.compute.manager [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Took 16.50 seconds to build instance.
Dec 06 07:12:35 compute-1 nova_compute[226101]: 2025-12-06 07:12:35.298 226109 DEBUG oslo_concurrency.lockutils [None req-5917c16f-e26a-4430-9fa3-3a6072328df1 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.040 226109 DEBUG nova.network.neutron [-] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.071 226109 INFO nova.compute.manager [-] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Took 1.25 seconds to deallocate network for instance.
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.119 226109 DEBUG oslo_concurrency.lockutils [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.119 226109 DEBUG oslo_concurrency.lockutils [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:36.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:36.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.153 226109 DEBUG nova.compute.manager [req-d715a89e-48bf-4f56-93c3-322fde08d465 req-b41a2147-4365-4929-add1-f03f0b365311 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Received event network-vif-deleted-f7172ad0-f24b-4d05-81ac-adaec7bac7d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.218 226109 DEBUG oslo_concurrency.processutils [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:12:36 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1748162983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.633 226109 DEBUG oslo_concurrency.processutils [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.640 226109 DEBUG nova.compute.provider_tree [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.662 226109 DEBUG nova.scheduler.client.report [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.681 226109 DEBUG oslo_concurrency.lockutils [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.702 226109 INFO nova.scheduler.client.report [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Deleted allocations for instance 1445e39c-6c8c-4a8f-9eab-64e13b262e40
Dec 06 07:12:36 compute-1 nova_compute[226101]: 2025-12-06 07:12:36.772 226109 DEBUG oslo_concurrency.lockutils [None req-d36190a1-6f80-4c8c-aa50-71cb9aef8132 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "1445e39c-6c8c-4a8f-9eab-64e13b262e40" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:37 compute-1 ceph-mon[81689]: pgmap v1613: 305 pgs: 305 active+clean; 250 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 11 MiB/s wr, 242 op/s
Dec 06 07:12:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1748162983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:37 compute-1 NetworkManager[49031]: <info>  [1765005157.6658] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Dec 06 07:12:37 compute-1 NetworkManager[49031]: <info>  [1765005157.6664] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Dec 06 07:12:37 compute-1 ovn_controller[130279]: 2025-12-06T07:12:37Z|00193|binding|INFO|Releasing lport db0d80e7-075c-4b65-aebe-df56a261a397 from this chassis (sb_readonly=0)
Dec 06 07:12:37 compute-1 nova_compute[226101]: 2025-12-06 07:12:37.676 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:37 compute-1 ovn_controller[130279]: 2025-12-06T07:12:37Z|00194|binding|INFO|Releasing lport db0d80e7-075c-4b65-aebe-df56a261a397 from this chassis (sb_readonly=0)
Dec 06 07:12:37 compute-1 nova_compute[226101]: 2025-12-06 07:12:37.709 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:37 compute-1 nova_compute[226101]: 2025-12-06 07:12:37.716 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:37 compute-1 nova_compute[226101]: 2025-12-06 07:12:37.816 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:38.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:38.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:38 compute-1 nova_compute[226101]: 2025-12-06 07:12:38.499 226109 DEBUG nova.compute.manager [req-b3668f71-77e1-428f-9b7f-819cb3df4777 req-2fda2c1b-3238-4cfa-9060-706f93dec088 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Received event network-changed-d5dc1a46-0992-4f53-8948-7dd36462be0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:38 compute-1 nova_compute[226101]: 2025-12-06 07:12:38.500 226109 DEBUG nova.compute.manager [req-b3668f71-77e1-428f-9b7f-819cb3df4777 req-2fda2c1b-3238-4cfa-9060-706f93dec088 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Refreshing instance network info cache due to event network-changed-d5dc1a46-0992-4f53-8948-7dd36462be0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:12:38 compute-1 nova_compute[226101]: 2025-12-06 07:12:38.500 226109 DEBUG oslo_concurrency.lockutils [req-b3668f71-77e1-428f-9b7f-819cb3df4777 req-2fda2c1b-3238-4cfa-9060-706f93dec088 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:12:38 compute-1 nova_compute[226101]: 2025-12-06 07:12:38.500 226109 DEBUG oslo_concurrency.lockutils [req-b3668f71-77e1-428f-9b7f-819cb3df4777 req-2fda2c1b-3238-4cfa-9060-706f93dec088 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:12:38 compute-1 nova_compute[226101]: 2025-12-06 07:12:38.501 226109 DEBUG nova.network.neutron [req-b3668f71-77e1-428f-9b7f-819cb3df4777 req-2fda2c1b-3238-4cfa-9060-706f93dec088 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Refreshing network info cache for port d5dc1a46-0992-4f53-8948-7dd36462be0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:12:38 compute-1 ceph-mon[81689]: pgmap v1614: 305 pgs: 305 active+clean; 205 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 8.8 MiB/s wr, 257 op/s
Dec 06 07:12:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e230 e230: 3 total, 3 up, 3 in
Dec 06 07:12:38 compute-1 nova_compute[226101]: 2025-12-06 07:12:38.958 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:39 compute-1 podman[247597]: 2025-12-06 07:12:39.105374595 +0000 UTC m=+0.093330531 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec 06 07:12:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:40 compute-1 ceph-mon[81689]: osdmap e230: 3 total, 3 up, 3 in
Dec 06 07:12:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2194120696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1662024483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:40.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:40.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:41 compute-1 nova_compute[226101]: 2025-12-06 07:12:41.135 226109 DEBUG nova.network.neutron [req-b3668f71-77e1-428f-9b7f-819cb3df4777 req-2fda2c1b-3238-4cfa-9060-706f93dec088 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Updated VIF entry in instance network info cache for port d5dc1a46-0992-4f53-8948-7dd36462be0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:12:41 compute-1 nova_compute[226101]: 2025-12-06 07:12:41.135 226109 DEBUG nova.network.neutron [req-b3668f71-77e1-428f-9b7f-819cb3df4777 req-2fda2c1b-3238-4cfa-9060-706f93dec088 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Updating instance_info_cache with network_info: [{"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:41 compute-1 nova_compute[226101]: 2025-12-06 07:12:41.159 226109 DEBUG oslo_concurrency.lockutils [req-b3668f71-77e1-428f-9b7f-819cb3df4777 req-2fda2c1b-3238-4cfa-9060-706f93dec088 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:12:41 compute-1 ceph-mon[81689]: pgmap v1616: 305 pgs: 305 active+clean; 129 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 314 op/s
Dec 06 07:12:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:42.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:42.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:42 compute-1 ceph-mon[81689]: pgmap v1617: 305 pgs: 305 active+clean; 88 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.7 MiB/s wr, 247 op/s
Dec 06 07:12:42 compute-1 nova_compute[226101]: 2025-12-06 07:12:42.818 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:43 compute-1 nova_compute[226101]: 2025-12-06 07:12:43.723 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e231 e231: 3 total, 3 up, 3 in
Dec 06 07:12:43 compute-1 nova_compute[226101]: 2025-12-06 07:12:43.959 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:44 compute-1 podman[247623]: 2025-12-06 07:12:44.070172362 +0000 UTC m=+0.053669610 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:12:44 compute-1 podman[247622]: 2025-12-06 07:12:44.077571992 +0000 UTC m=+0.063220468 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:12:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:44.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:44.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:44 compute-1 sudo[247661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:44 compute-1 sudo[247661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:44 compute-1 sudo[247661]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:44 compute-1 sudo[247686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:12:44 compute-1 sudo[247686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:44 compute-1 sudo[247686]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:44 compute-1 sudo[247711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:44 compute-1 sudo[247711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:44 compute-1 sudo[247711]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:44 compute-1 sudo[247736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:12:44 compute-1 sudo[247736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:44 compute-1 ceph-mon[81689]: pgmap v1618: 305 pgs: 305 active+clean; 104 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 240 op/s
Dec 06 07:12:44 compute-1 ceph-mon[81689]: osdmap e231: 3 total, 3 up, 3 in
Dec 06 07:12:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/478934728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1382731662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:45 compute-1 sudo[247736]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1816668834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:12:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:12:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:12:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:12:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:12:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:12:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:46.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:46.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:46 compute-1 ceph-mon[81689]: pgmap v1620: 305 pgs: 305 active+clean; 134 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Dec 06 07:12:47 compute-1 nova_compute[226101]: 2025-12-06 07:12:47.779 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005152.7778528, 1445e39c-6c8c-4a8f-9eab-64e13b262e40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:12:47 compute-1 nova_compute[226101]: 2025-12-06 07:12:47.780 226109 INFO nova.compute.manager [-] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] VM Stopped (Lifecycle Event)
Dec 06 07:12:47 compute-1 nova_compute[226101]: 2025-12-06 07:12:47.798 226109 DEBUG nova.compute.manager [None req-fd61e212-e0bb-4204-88a6-77b3eafab858 - - - - - -] [instance: 1445e39c-6c8c-4a8f-9eab-64e13b262e40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:47 compute-1 nova_compute[226101]: 2025-12-06 07:12:47.819 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:47 compute-1 nova_compute[226101]: 2025-12-06 07:12:47.891 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:12:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:48.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:12:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:48.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:48 compute-1 ceph-mon[81689]: pgmap v1621: 305 pgs: 305 active+clean; 143 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 84 op/s
Dec 06 07:12:48 compute-1 nova_compute[226101]: 2025-12-06 07:12:48.962 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:49 compute-1 nova_compute[226101]: 2025-12-06 07:12:49.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:49 compute-1 nova_compute[226101]: 2025-12-06 07:12:49.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:49 compute-1 nova_compute[226101]: 2025-12-06 07:12:49.686 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:49 compute-1 nova_compute[226101]: 2025-12-06 07:12:49.686 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:49 compute-1 nova_compute[226101]: 2025-12-06 07:12:49.686 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:49 compute-1 nova_compute[226101]: 2025-12-06 07:12:49.687 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:12:49 compute-1 nova_compute[226101]: 2025-12-06 07:12:49.687 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:12:50 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3794954356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.104 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:50.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:50.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.176 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.177 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:12:50 compute-1 ovn_controller[130279]: 2025-12-06T07:12:50Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:d1:d1 10.100.0.13
Dec 06 07:12:50 compute-1 ovn_controller[130279]: 2025-12-06T07:12:50Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:d1:d1 10.100.0.13
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.314 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.315 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4553MB free_disk=20.920528411865234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.315 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.315 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.741 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 6538434f-992a-4978-b5ca-284ce0f6a66e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.741 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.742 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.868 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.931 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.931 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.945 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:12:50 compute-1 nova_compute[226101]: 2025-12-06 07:12:50.975 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:12:51 compute-1 nova_compute[226101]: 2025-12-06 07:12:51.006 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:51 compute-1 ceph-mon[81689]: pgmap v1622: 305 pgs: 305 active+clean; 173 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 4.8 MiB/s wr, 115 op/s
Dec 06 07:12:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3794954356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:12:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3431653592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:51 compute-1 nova_compute[226101]: 2025-12-06 07:12:51.462 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:51 compute-1 nova_compute[226101]: 2025-12-06 07:12:51.469 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:12:51 compute-1 nova_compute[226101]: 2025-12-06 07:12:51.485 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:12:51 compute-1 nova_compute[226101]: 2025-12-06 07:12:51.511 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:12:51 compute-1 nova_compute[226101]: 2025-12-06 07:12:51.512 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:52.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:52.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3431653592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3104088393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1153011379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:52 compute-1 nova_compute[226101]: 2025-12-06 07:12:52.395 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:52.396 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:12:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:12:52.397 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:12:52 compute-1 nova_compute[226101]: 2025-12-06 07:12:52.821 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:52 compute-1 nova_compute[226101]: 2025-12-06 07:12:52.878 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:52 compute-1 nova_compute[226101]: 2025-12-06 07:12:52.878 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:52 compute-1 nova_compute[226101]: 2025-12-06 07:12:52.909 226109 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:12:52 compute-1 nova_compute[226101]: 2025-12-06 07:12:52.981 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:52 compute-1 nova_compute[226101]: 2025-12-06 07:12:52.981 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:52 compute-1 nova_compute[226101]: 2025-12-06 07:12:52.986 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:12:52 compute-1 nova_compute[226101]: 2025-12-06 07:12:52.987 226109 INFO nova.compute.claims [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.156 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:53 compute-1 ceph-mon[81689]: pgmap v1623: 305 pgs: 305 active+clean; 200 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.5 MiB/s wr, 158 op/s
Dec 06 07:12:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2174284925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1535144048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.513 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.514 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.540 226109 WARNING nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.540 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Triggering sync for uuid 6538434f-992a-4978-b5ca-284ce0f6a66e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.541 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Triggering sync for uuid 8b57f004-f745-47c8-87d9-9d7ba1138ed8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.541 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "6538434f-992a-4978-b5ca-284ce0f6a66e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.541 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.542 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.542 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.542 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:12:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:12:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1198184614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.565 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.573 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.579 226109 DEBUG nova.compute.provider_tree [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.611 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.611 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.611 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.612 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.674 226109 DEBUG nova.scheduler.client.report [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.722 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.739 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.739 226109 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.816 226109 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.816 226109 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.838 226109 INFO nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.862 226109 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.934 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.934 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.934 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.935 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6538434f-992a-4978-b5ca-284ce0f6a66e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.938 226109 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.939 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.939 226109 INFO nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Creating image(s)
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.968 226109 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:53 compute-1 nova_compute[226101]: 2025-12-06 07:12:53.997 226109 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.025 226109 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.029 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.049 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.070 226109 DEBUG nova.policy [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03fb2817729e4b71932023a7637c6244', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.088 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.089 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.089 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.090 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.115 226109 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.118 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:54.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:12:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:54.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.406 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.471 226109 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] resizing rbd image 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:12:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e232 e232: 3 total, 3 up, 3 in
Dec 06 07:12:54 compute-1 ceph-mon[81689]: pgmap v1624: 305 pgs: 305 active+clean; 209 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.7 MiB/s wr, 191 op/s
Dec 06 07:12:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1198184614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3750448777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.592 226109 DEBUG nova.objects.instance [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'migration_context' on Instance uuid 8b57f004-f745-47c8-87d9-9d7ba1138ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.612 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.613 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Ensure instance console log exists: /var/lib/nova/instances/8b57f004-f745-47c8-87d9-9d7ba1138ed8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.613 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.614 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:54 compute-1 nova_compute[226101]: 2025-12-06 07:12:54.614 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:54 compute-1 sudo[248025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:54 compute-1 sudo[248025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:54 compute-1 sudo[248025]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:54 compute-1 sudo[248050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:12:54 compute-1 sudo[248050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:55 compute-1 sudo[248050]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:55 compute-1 nova_compute[226101]: 2025-12-06 07:12:55.247 226109 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Successfully created port: 757384d6-3566-4a74-b1b3-d658468b4e13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:12:55 compute-1 ceph-mon[81689]: osdmap e232: 3 total, 3 up, 3 in
Dec 06 07:12:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:12:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:12:55 compute-1 nova_compute[226101]: 2025-12-06 07:12:55.693 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Updating instance_info_cache with network_info: [{"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:55 compute-1 nova_compute[226101]: 2025-12-06 07:12:55.713 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-6538434f-992a-4978-b5ca-284ce0f6a66e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:12:55 compute-1 nova_compute[226101]: 2025-12-06 07:12:55.714 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:12:55 compute-1 nova_compute[226101]: 2025-12-06 07:12:55.714 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:55 compute-1 nova_compute[226101]: 2025-12-06 07:12:55.714 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:55 compute-1 nova_compute[226101]: 2025-12-06 07:12:55.714 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:12:55 compute-1 nova_compute[226101]: 2025-12-06 07:12:55.731 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:12:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e233 e233: 3 total, 3 up, 3 in
Dec 06 07:12:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:56.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:56.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:56 compute-1 nova_compute[226101]: 2025-12-06 07:12:56.212 226109 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Successfully updated port: 757384d6-3566-4a74-b1b3-d658468b4e13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:12:56 compute-1 nova_compute[226101]: 2025-12-06 07:12:56.226 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "refresh_cache-8b57f004-f745-47c8-87d9-9d7ba1138ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:12:56 compute-1 nova_compute[226101]: 2025-12-06 07:12:56.227 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquired lock "refresh_cache-8b57f004-f745-47c8-87d9-9d7ba1138ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:12:56 compute-1 nova_compute[226101]: 2025-12-06 07:12:56.227 226109 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:12:56 compute-1 nova_compute[226101]: 2025-12-06 07:12:56.417 226109 DEBUG nova.compute.manager [req-4469cdd7-4d37-4ee9-8bb4-97bed76d4c96 req-2660bd34-32db-4555-9a32-7822c3298642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Received event network-changed-757384d6-3566-4a74-b1b3-d658468b4e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:56 compute-1 nova_compute[226101]: 2025-12-06 07:12:56.418 226109 DEBUG nova.compute.manager [req-4469cdd7-4d37-4ee9-8bb4-97bed76d4c96 req-2660bd34-32db-4555-9a32-7822c3298642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Refreshing instance network info cache due to event network-changed-757384d6-3566-4a74-b1b3-d658468b4e13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:12:56 compute-1 nova_compute[226101]: 2025-12-06 07:12:56.418 226109 DEBUG oslo_concurrency.lockutils [req-4469cdd7-4d37-4ee9-8bb4-97bed76d4c96 req-2660bd34-32db-4555-9a32-7822c3298642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-8b57f004-f745-47c8-87d9-9d7ba1138ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:12:56 compute-1 nova_compute[226101]: 2025-12-06 07:12:56.495 226109 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:12:56 compute-1 ceph-mon[81689]: pgmap v1626: 305 pgs: 305 active+clean; 264 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.2 MiB/s wr, 219 op/s
Dec 06 07:12:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2861895599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:56 compute-1 ceph-mon[81689]: osdmap e233: 3 total, 3 up, 3 in
Dec 06 07:12:56 compute-1 nova_compute[226101]: 2025-12-06 07:12:56.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:56 compute-1 nova_compute[226101]: 2025-12-06 07:12:56.621 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:56 compute-1 nova_compute[226101]: 2025-12-06 07:12:56.622 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:12:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e234 e234: 3 total, 3 up, 3 in
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.681 226109 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Updating instance_info_cache with network_info: [{"id": "757384d6-3566-4a74-b1b3-d658468b4e13", "address": "fa:16:3e:59:52:6d", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap757384d6-35", "ovs_interfaceid": "757384d6-3566-4a74-b1b3-d658468b4e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2384847691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:57 compute-1 ceph-mon[81689]: osdmap e234: 3 total, 3 up, 3 in
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.725 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Releasing lock "refresh_cache-8b57f004-f745-47c8-87d9-9d7ba1138ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.726 226109 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Instance network_info: |[{"id": "757384d6-3566-4a74-b1b3-d658468b4e13", "address": "fa:16:3e:59:52:6d", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap757384d6-35", "ovs_interfaceid": "757384d6-3566-4a74-b1b3-d658468b4e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.726 226109 DEBUG oslo_concurrency.lockutils [req-4469cdd7-4d37-4ee9-8bb4-97bed76d4c96 req-2660bd34-32db-4555-9a32-7822c3298642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-8b57f004-f745-47c8-87d9-9d7ba1138ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.726 226109 DEBUG nova.network.neutron [req-4469cdd7-4d37-4ee9-8bb4-97bed76d4c96 req-2660bd34-32db-4555-9a32-7822c3298642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Refreshing network info cache for port 757384d6-3566-4a74-b1b3-d658468b4e13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.730 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Start _get_guest_xml network_info=[{"id": "757384d6-3566-4a74-b1b3-d658468b4e13", "address": "fa:16:3e:59:52:6d", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap757384d6-35", "ovs_interfaceid": "757384d6-3566-4a74-b1b3-d658468b4e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.734 226109 WARNING nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.739 226109 DEBUG nova.virt.libvirt.host [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.740 226109 DEBUG nova.virt.libvirt.host [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.743 226109 DEBUG nova.virt.libvirt.host [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.743 226109 DEBUG nova.virt.libvirt.host [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.744 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.744 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.745 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.745 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.745 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.746 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.746 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.746 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.746 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.747 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.747 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.747 226109 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.750 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:57 compute-1 nova_compute[226101]: 2025-12-06 07:12:57.823 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:58.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:12:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:58.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:12:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1024249341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.401 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.429 226109 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.434 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.605 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:12:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2506934581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.846 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.847 226109 DEBUG nova.virt.libvirt.vif [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:12:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-481293432',display_name='tempest-tempest.common.compute-instance-481293432-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-481293432-1',id=59,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-tgs77gw2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCreateTestJSON-1199242675-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:12:53Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=8b57f004-f745-47c8-87d9-9d7ba1138ed8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "757384d6-3566-4a74-b1b3-d658468b4e13", "address": "fa:16:3e:59:52:6d", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap757384d6-35", "ovs_interfaceid": "757384d6-3566-4a74-b1b3-d658468b4e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.848 226109 DEBUG nova.network.os_vif_util [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "757384d6-3566-4a74-b1b3-d658468b4e13", "address": "fa:16:3e:59:52:6d", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap757384d6-35", "ovs_interfaceid": "757384d6-3566-4a74-b1b3-d658468b4e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.848 226109 DEBUG nova.network.os_vif_util [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:52:6d,bridge_name='br-int',has_traffic_filtering=True,id=757384d6-3566-4a74-b1b3-d658468b4e13,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap757384d6-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.849 226109 DEBUG nova.objects.instance [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b57f004-f745-47c8-87d9-9d7ba1138ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.868 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:12:58 compute-1 nova_compute[226101]:   <uuid>8b57f004-f745-47c8-87d9-9d7ba1138ed8</uuid>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   <name>instance-0000003b</name>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <nova:name>tempest-tempest.common.compute-instance-481293432-1</nova:name>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:12:57</nova:creationTime>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <nova:user uuid="03fb2817729e4b71932023a7637c6244">tempest-MultipleCreateTestJSON-1199242675-project-member</nova:user>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <nova:project uuid="de09de98b3b1445f88b6094b6aac4a30">tempest-MultipleCreateTestJSON-1199242675</nova:project>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <nova:port uuid="757384d6-3566-4a74-b1b3-d658468b4e13">
Dec 06 07:12:58 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <system>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <entry name="serial">8b57f004-f745-47c8-87d9-9d7ba1138ed8</entry>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <entry name="uuid">8b57f004-f745-47c8-87d9-9d7ba1138ed8</entry>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     </system>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   <os>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   </os>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   <features>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   </features>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk">
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       </source>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk.config">
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       </source>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:12:58 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:59:52:6d"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <target dev="tap757384d6-35"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/8b57f004-f745-47c8-87d9-9d7ba1138ed8/console.log" append="off"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <video>
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     </video>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:12:58 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:12:58 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:12:58 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:12:58 compute-1 nova_compute[226101]: </domain>
Dec 06 07:12:58 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.869 226109 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Preparing to wait for external event network-vif-plugged-757384d6-3566-4a74-b1b3-d658468b4e13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.869 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.869 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.870 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.870 226109 DEBUG nova.virt.libvirt.vif [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:12:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-481293432',display_name='tempest-tempest.common.compute-instance-481293432-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-481293432-1',id=59,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-tgs77gw2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCreateTestJSON-1199242675-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:12:53Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=8b57f004-f745-47c8-87d9-9d7ba1138ed8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "757384d6-3566-4a74-b1b3-d658468b4e13", "address": "fa:16:3e:59:52:6d", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap757384d6-35", "ovs_interfaceid": "757384d6-3566-4a74-b1b3-d658468b4e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.870 226109 DEBUG nova.network.os_vif_util [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "757384d6-3566-4a74-b1b3-d658468b4e13", "address": "fa:16:3e:59:52:6d", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap757384d6-35", "ovs_interfaceid": "757384d6-3566-4a74-b1b3-d658468b4e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.871 226109 DEBUG nova.network.os_vif_util [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:52:6d,bridge_name='br-int',has_traffic_filtering=True,id=757384d6-3566-4a74-b1b3-d658468b4e13,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap757384d6-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.871 226109 DEBUG os_vif [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:52:6d,bridge_name='br-int',has_traffic_filtering=True,id=757384d6-3566-4a74-b1b3-d658468b4e13,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap757384d6-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.872 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.872 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.873 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.879 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.880 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap757384d6-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.881 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap757384d6-35, col_values=(('external_ids', {'iface-id': '757384d6-3566-4a74-b1b3-d658468b4e13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:52:6d', 'vm-uuid': '8b57f004-f745-47c8-87d9-9d7ba1138ed8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.884 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:58 compute-1 NetworkManager[49031]: <info>  [1765005178.8848] manager: (tap757384d6-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.886 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.890 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.891 226109 INFO os_vif [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:52:6d,bridge_name='br-int',has_traffic_filtering=True,id=757384d6-3566-4a74-b1b3-d658468b4e13,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap757384d6-35')
Dec 06 07:12:58 compute-1 nova_compute[226101]: 2025-12-06 07:12:58.965 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.016 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.017 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.017 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No VIF found with MAC fa:16:3e:59:52:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.017 226109 INFO nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Using config drive
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.045 226109 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.304 226109 DEBUG nova.network.neutron [req-4469cdd7-4d37-4ee9-8bb4-97bed76d4c96 req-2660bd34-32db-4555-9a32-7822c3298642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Updated VIF entry in instance network info cache for port 757384d6-3566-4a74-b1b3-d658468b4e13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.305 226109 DEBUG nova.network.neutron [req-4469cdd7-4d37-4ee9-8bb4-97bed76d4c96 req-2660bd34-32db-4555-9a32-7822c3298642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Updating instance_info_cache with network_info: [{"id": "757384d6-3566-4a74-b1b3-d658468b4e13", "address": "fa:16:3e:59:52:6d", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap757384d6-35", "ovs_interfaceid": "757384d6-3566-4a74-b1b3-d658468b4e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.327 226109 DEBUG oslo_concurrency.lockutils [req-4469cdd7-4d37-4ee9-8bb4-97bed76d4c96 req-2660bd34-32db-4555-9a32-7822c3298642 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-8b57f004-f745-47c8-87d9-9d7ba1138ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:12:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e235 e235: 3 total, 3 up, 3 in
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.523 226109 INFO nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Creating config drive at /var/lib/nova/instances/8b57f004-f745-47c8-87d9-9d7ba1138ed8/disk.config
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.529 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b57f004-f745-47c8-87d9-9d7ba1138ed8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpki6_icwc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.663 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b57f004-f745-47c8-87d9-9d7ba1138ed8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpki6_icwc" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.695 226109 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:59 compute-1 nova_compute[226101]: 2025-12-06 07:12:59.700 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b57f004-f745-47c8-87d9-9d7ba1138ed8/disk.config 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:00.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:00.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:00 compute-1 ceph-mon[81689]: pgmap v1629: 305 pgs: 305 active+clean; 292 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.4 MiB/s wr, 271 op/s
Dec 06 07:13:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3324865310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1024249341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2405761541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.399 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:00 compute-1 nova_compute[226101]: 2025-12-06 07:13:00.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:00 compute-1 nova_compute[226101]: 2025-12-06 07:13:00.702 226109 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b57f004-f745-47c8-87d9-9d7ba1138ed8/disk.config 8b57f004-f745-47c8-87d9-9d7ba1138ed8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.002s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:00 compute-1 nova_compute[226101]: 2025-12-06 07:13:00.703 226109 INFO nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Deleting local config drive /var/lib/nova/instances/8b57f004-f745-47c8-87d9-9d7ba1138ed8/disk.config because it was imported into RBD.
Dec 06 07:13:00 compute-1 kernel: tap757384d6-35: entered promiscuous mode
Dec 06 07:13:00 compute-1 NetworkManager[49031]: <info>  [1765005180.7770] manager: (tap757384d6-35): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Dec 06 07:13:00 compute-1 ovn_controller[130279]: 2025-12-06T07:13:00Z|00195|binding|INFO|Claiming lport 757384d6-3566-4a74-b1b3-d658468b4e13 for this chassis.
Dec 06 07:13:00 compute-1 ovn_controller[130279]: 2025-12-06T07:13:00Z|00196|binding|INFO|757384d6-3566-4a74-b1b3-d658468b4e13: Claiming fa:16:3e:59:52:6d 10.100.0.6
Dec 06 07:13:00 compute-1 nova_compute[226101]: 2025-12-06 07:13:00.778 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.792 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:52:6d 10.100.0.6'], port_security=['fa:16:3e:59:52:6d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8b57f004-f745-47c8-87d9-9d7ba1138ed8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c3e308f8-65fc-402f-96ed-2039201b95ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5d88f71-5dee-4f84-b30d-05c6ecea010a, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=757384d6-3566-4a74-b1b3-d658468b4e13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.798 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 757384d6-3566-4a74-b1b3-d658468b4e13 in datapath c0aeacae-e53d-425f-88e7-942ba0ab660c bound to our chassis
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.801 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:00 compute-1 ovn_controller[130279]: 2025-12-06T07:13:00Z|00197|binding|INFO|Setting lport 757384d6-3566-4a74-b1b3-d658468b4e13 ovn-installed in OVS
Dec 06 07:13:00 compute-1 ovn_controller[130279]: 2025-12-06T07:13:00Z|00198|binding|INFO|Setting lport 757384d6-3566-4a74-b1b3-d658468b4e13 up in Southbound
Dec 06 07:13:00 compute-1 nova_compute[226101]: 2025-12-06 07:13:00.805 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:00 compute-1 nova_compute[226101]: 2025-12-06 07:13:00.807 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:00 compute-1 systemd-udevd[248210]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.815 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2e427bc9-beee-4886-b606-f230df85b6bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.816 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc0aeacae-e1 in ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.818 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc0aeacae-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.818 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d27d4a36-b825-424d-93c1-a2c58b28130b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.819 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c26e674b-7faf-46bb-ae3a-dd37eb140043]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:00 compute-1 systemd-machined[190302]: New machine qemu-28-instance-0000003b.
Dec 06 07:13:00 compute-1 NetworkManager[49031]: <info>  [1765005180.8262] device (tap757384d6-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:13:00 compute-1 NetworkManager[49031]: <info>  [1765005180.8280] device (tap757384d6-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.830 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0b1b7b-73ca-4d99-acb0-70c083506886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:00 compute-1 systemd[1]: Started Virtual Machine qemu-28-instance-0000003b.
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.843 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a06e1a-36e2-44ec-bd2c-c63a823089a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.872 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b64c96-81fe-413a-a9c2-762ec1449846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:00 compute-1 NetworkManager[49031]: <info>  [1765005180.8793] manager: (tapc0aeacae-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.878 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[484e8b22-70f2-4c1f-8533-a95b9123637b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:00 compute-1 systemd-udevd[248214]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.921 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a2a2b6-5741-4515-8d8e-b12db364cbaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.924 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[3f25c8a8-b7ff-48a4-b298-24a9f6ea48fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:00 compute-1 NetworkManager[49031]: <info>  [1765005180.9538] device (tapc0aeacae-e0): carrier: link connected
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.958 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2f9060-62cf-4fa1-97df-9d07a43a7e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.976 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[32d33951-b6a5-4db0-9ea2-abfb6c481955]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0aeacae-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:b3:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546045, 'reachable_time': 18125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248243, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:00.990 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8fbfbe06-a0e4-4703-9f7c-c6fafda6c525]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:b3f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546045, 'tstamp': 546045}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248244, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.007 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[07275cef-e593-4cb8-9782-ae65210f6361]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0aeacae-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:b3:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546045, 'reachable_time': 18125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248245, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.033 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[257bd97d-8028-499f-8a16-3b32c8c10efb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.088 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4967b3af-f991-4604-832d-447bb23f611b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.090 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0aeacae-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.090 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.090 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0aeacae-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.092 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:01 compute-1 NetworkManager[49031]: <info>  [1765005181.0929] manager: (tapc0aeacae-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Dec 06 07:13:01 compute-1 kernel: tapc0aeacae-e0: entered promiscuous mode
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.094 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.097 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc0aeacae-e0, col_values=(('external_ids', {'iface-id': 'f88eb5d2-a701-4b34-8226-834c175af63c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:01 compute-1 ovn_controller[130279]: 2025-12-06T07:13:01Z|00199|binding|INFO|Releasing lport f88eb5d2-a701-4b34-8226-834c175af63c from this chassis (sb_readonly=0)
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.099 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.100 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.101 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e5f8a8-09b0-4cc4-963d-9de46635ecbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.102 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.104 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'env', 'PROCESS_TAG=haproxy-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c0aeacae-e53d-425f-88e7-942ba0ab660c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.115 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.241 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005181.2408123, 8b57f004-f745-47c8-87d9-9d7ba1138ed8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.242 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] VM Started (Lifecycle Event)
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.270 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.273 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005181.242034, 8b57f004-f745-47c8-87d9-9d7ba1138ed8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.274 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] VM Paused (Lifecycle Event)
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.291 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.294 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.322 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:01 compute-1 podman[248319]: 2025-12-06 07:13:01.50814581 +0000 UTC m=+0.049796747 container create 82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 07:13:01 compute-1 systemd[1]: Started libpod-conmon-82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756.scope.
Dec 06 07:13:01 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:13:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/852976e2f4b661efd11bb29b72423abeb0f99f378edb5c8340b4c1995d306c0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:13:01 compute-1 podman[248319]: 2025-12-06 07:13:01.481287194 +0000 UTC m=+0.022938161 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:01 compute-1 podman[248319]: 2025-12-06 07:13:01.59629848 +0000 UTC m=+0.137949447 container init 82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.601 226109 DEBUG nova.compute.manager [req-7ef65ecd-f651-40a2-9168-bec899e35dcd req-b8247c0b-65f7-4275-b2d4-09af015bd1bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Received event network-vif-plugged-757384d6-3566-4a74-b1b3-d658468b4e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.602 226109 DEBUG oslo_concurrency.lockutils [req-7ef65ecd-f651-40a2-9168-bec899e35dcd req-b8247c0b-65f7-4275-b2d4-09af015bd1bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.602 226109 DEBUG oslo_concurrency.lockutils [req-7ef65ecd-f651-40a2-9168-bec899e35dcd req-b8247c0b-65f7-4275-b2d4-09af015bd1bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.602 226109 DEBUG oslo_concurrency.lockutils [req-7ef65ecd-f651-40a2-9168-bec899e35dcd req-b8247c0b-65f7-4275-b2d4-09af015bd1bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.602 226109 DEBUG nova.compute.manager [req-7ef65ecd-f651-40a2-9168-bec899e35dcd req-b8247c0b-65f7-4275-b2d4-09af015bd1bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Processing event network-vif-plugged-757384d6-3566-4a74-b1b3-d658468b4e13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:13:01 compute-1 podman[248319]: 2025-12-06 07:13:01.603544706 +0000 UTC m=+0.145195643 container start 82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.603 226109 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.606 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005181.6065626, 8b57f004-f745-47c8-87d9-9d7ba1138ed8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.607 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] VM Resumed (Lifecycle Event)
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.609 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.612 226109 INFO nova.virt.libvirt.driver [-] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Instance spawned successfully.
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.613 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.630 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.630 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.631 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:01.633 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.635 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:13:01 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[248334]: [NOTICE]   (248338) : New worker (248340) forked
Dec 06 07:13:01 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[248334]: [NOTICE]   (248338) : Loading success.
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.640 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.641 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.641 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.642 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.642 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.643 226109 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.672 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2506934581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:01 compute-1 ceph-mon[81689]: pgmap v1630: 305 pgs: 305 active+clean; 352 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 329 op/s
Dec 06 07:13:01 compute-1 ceph-mon[81689]: osdmap e235: 3 total, 3 up, 3 in
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.814 226109 INFO nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Took 7.88 seconds to spawn the instance on the hypervisor.
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.814 226109 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.883 226109 INFO nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Took 8.92 seconds to build instance.
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.901 226109 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.901 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 8.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:01 compute-1 nova_compute[226101]: 2025-12-06 07:13:01.922 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:02.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:02.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:03 compute-1 ceph-mon[81689]: pgmap v1632: 305 pgs: 305 active+clean; 352 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.5 MiB/s wr, 324 op/s
Dec 06 07:13:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e236 e236: 3 total, 3 up, 3 in
Dec 06 07:13:03 compute-1 nova_compute[226101]: 2025-12-06 07:13:03.713 226109 DEBUG nova.compute.manager [req-c9fca171-5d79-4469-8f22-11bf7a507828 req-cce5bdc2-74da-4576-8a88-4f5320a24a69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Received event network-vif-plugged-757384d6-3566-4a74-b1b3-d658468b4e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:03 compute-1 nova_compute[226101]: 2025-12-06 07:13:03.713 226109 DEBUG oslo_concurrency.lockutils [req-c9fca171-5d79-4469-8f22-11bf7a507828 req-cce5bdc2-74da-4576-8a88-4f5320a24a69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:03 compute-1 nova_compute[226101]: 2025-12-06 07:13:03.713 226109 DEBUG oslo_concurrency.lockutils [req-c9fca171-5d79-4469-8f22-11bf7a507828 req-cce5bdc2-74da-4576-8a88-4f5320a24a69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:03 compute-1 nova_compute[226101]: 2025-12-06 07:13:03.714 226109 DEBUG oslo_concurrency.lockutils [req-c9fca171-5d79-4469-8f22-11bf7a507828 req-cce5bdc2-74da-4576-8a88-4f5320a24a69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:03 compute-1 nova_compute[226101]: 2025-12-06 07:13:03.714 226109 DEBUG nova.compute.manager [req-c9fca171-5d79-4469-8f22-11bf7a507828 req-cce5bdc2-74da-4576-8a88-4f5320a24a69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] No waiting events found dispatching network-vif-plugged-757384d6-3566-4a74-b1b3-d658468b4e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:03 compute-1 nova_compute[226101]: 2025-12-06 07:13:03.714 226109 WARNING nova.compute.manager [req-c9fca171-5d79-4469-8f22-11bf7a507828 req-cce5bdc2-74da-4576-8a88-4f5320a24a69 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Received unexpected event network-vif-plugged-757384d6-3566-4a74-b1b3-d658468b4e13 for instance with vm_state active and task_state None.
Dec 06 07:13:03 compute-1 nova_compute[226101]: 2025-12-06 07:13:03.885 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:03 compute-1 nova_compute[226101]: 2025-12-06 07:13:03.968 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:04.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:04.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:13:04 compute-1 ceph-mon[81689]: pgmap v1633: 305 pgs: 305 active+clean; 387 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 7.9 MiB/s wr, 385 op/s
Dec 06 07:13:04 compute-1 ceph-mon[81689]: osdmap e236: 3 total, 3 up, 3 in
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.685 226109 DEBUG oslo_concurrency.lockutils [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.685 226109 DEBUG oslo_concurrency.lockutils [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.686 226109 DEBUG oslo_concurrency.lockutils [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.686 226109 DEBUG oslo_concurrency.lockutils [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.686 226109 DEBUG oslo_concurrency.lockutils [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.687 226109 INFO nova.compute.manager [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Terminating instance
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.688 226109 DEBUG nova.compute.manager [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:13:04 compute-1 kernel: tap757384d6-35 (unregistering): left promiscuous mode
Dec 06 07:13:04 compute-1 NetworkManager[49031]: <info>  [1765005184.7703] device (tap757384d6-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.781 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:04 compute-1 ovn_controller[130279]: 2025-12-06T07:13:04Z|00200|binding|INFO|Releasing lport 757384d6-3566-4a74-b1b3-d658468b4e13 from this chassis (sb_readonly=0)
Dec 06 07:13:04 compute-1 ovn_controller[130279]: 2025-12-06T07:13:04Z|00201|binding|INFO|Setting lport 757384d6-3566-4a74-b1b3-d658468b4e13 down in Southbound
Dec 06 07:13:04 compute-1 ovn_controller[130279]: 2025-12-06T07:13:04Z|00202|binding|INFO|Removing iface tap757384d6-35 ovn-installed in OVS
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.783 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e237 e237: 3 total, 3 up, 3 in
Dec 06 07:13:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:04.790 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:52:6d 10.100.0.6'], port_security=['fa:16:3e:59:52:6d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8b57f004-f745-47c8-87d9-9d7ba1138ed8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c3e308f8-65fc-402f-96ed-2039201b95ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5d88f71-5dee-4f84-b30d-05c6ecea010a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=757384d6-3566-4a74-b1b3-d658468b4e13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:04.792 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 757384d6-3566-4a74-b1b3-d658468b4e13 in datapath c0aeacae-e53d-425f-88e7-942ba0ab660c unbound from our chassis
Dec 06 07:13:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:04.795 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c0aeacae-e53d-425f-88e7-942ba0ab660c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:13:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:04.796 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7de36b9f-48a3-4847-91b0-58b979d68b33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:04.797 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c namespace which is not needed anymore
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.797 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:04 compute-1 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Dec 06 07:13:04 compute-1 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000003b.scope: Consumed 3.599s CPU time.
Dec 06 07:13:04 compute-1 systemd-machined[190302]: Machine qemu-28-instance-0000003b terminated.
Dec 06 07:13:04 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[248334]: [NOTICE]   (248338) : haproxy version is 2.8.14-c23fe91
Dec 06 07:13:04 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[248334]: [NOTICE]   (248338) : path to executable is /usr/sbin/haproxy
Dec 06 07:13:04 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[248334]: [WARNING]  (248338) : Exiting Master process...
Dec 06 07:13:04 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[248334]: [ALERT]    (248338) : Current worker (248340) exited with code 143 (Terminated)
Dec 06 07:13:04 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[248334]: [WARNING]  (248338) : All workers exited. Exiting... (0)
Dec 06 07:13:04 compute-1 systemd[1]: libpod-82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756.scope: Deactivated successfully.
Dec 06 07:13:04 compute-1 podman[248372]: 2025-12-06 07:13:04.954537442 +0000 UTC m=+0.060352511 container died 82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.955 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.962 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.971 226109 INFO nova.virt.libvirt.driver [-] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Instance destroyed successfully.
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.972 226109 DEBUG nova.objects.instance [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'resources' on Instance uuid 8b57f004-f745-47c8-87d9-9d7ba1138ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:04 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756-userdata-shm.mount: Deactivated successfully.
Dec 06 07:13:04 compute-1 systemd[1]: var-lib-containers-storage-overlay-852976e2f4b661efd11bb29b72423abeb0f99f378edb5c8340b4c1995d306c0b-merged.mount: Deactivated successfully.
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.994 226109 DEBUG nova.virt.libvirt.vif [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:12:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-481293432',display_name='tempest-tempest.common.compute-instance-481293432-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-481293432-1',id=59,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:13:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-tgs77gw2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCreateTestJSON-1199242675-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:13:01Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=8b57f004-f745-47c8-87d9-9d7ba1138ed8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "757384d6-3566-4a74-b1b3-d658468b4e13", "address": "fa:16:3e:59:52:6d", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap757384d6-35", "ovs_interfaceid": "757384d6-3566-4a74-b1b3-d658468b4e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.994 226109 DEBUG nova.network.os_vif_util [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "757384d6-3566-4a74-b1b3-d658468b4e13", "address": "fa:16:3e:59:52:6d", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap757384d6-35", "ovs_interfaceid": "757384d6-3566-4a74-b1b3-d658468b4e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.995 226109 DEBUG nova.network.os_vif_util [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:52:6d,bridge_name='br-int',has_traffic_filtering=True,id=757384d6-3566-4a74-b1b3-d658468b4e13,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap757384d6-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.995 226109 DEBUG os_vif [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:52:6d,bridge_name='br-int',has_traffic_filtering=True,id=757384d6-3566-4a74-b1b3-d658468b4e13,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap757384d6-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:04 compute-1 nova_compute[226101]: 2025-12-06 07:13:04.998 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap757384d6-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:05 compute-1 nova_compute[226101]: 2025-12-06 07:13:05.001 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:05 compute-1 nova_compute[226101]: 2025-12-06 07:13:05.003 226109 INFO os_vif [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:52:6d,bridge_name='br-int',has_traffic_filtering=True,id=757384d6-3566-4a74-b1b3-d658468b4e13,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap757384d6-35')
Dec 06 07:13:05 compute-1 podman[248372]: 2025-12-06 07:13:05.022313982 +0000 UTC m=+0.128129031 container cleanup 82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:13:05 compute-1 systemd[1]: libpod-conmon-82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756.scope: Deactivated successfully.
Dec 06 07:13:05 compute-1 podman[248424]: 2025-12-06 07:13:05.087844931 +0000 UTC m=+0.044154822 container remove 82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:13:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:05.095 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a027e434-9d0e-43fa-9711-5ec3bf9c04af]: (4, ('Sat Dec  6 07:13:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c (82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756)\n82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756\nSat Dec  6 07:13:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c (82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756)\n82f60ab650b6da5640a12e3a4fac4cd78fe12e285d869badf24443a4b6aed756\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:05.097 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[aa59a1bb-32fc-42a4-a68f-65e3ccb59237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:05.098 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0aeacae-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:05 compute-1 nova_compute[226101]: 2025-12-06 07:13:05.100 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:05 compute-1 kernel: tapc0aeacae-e0: left promiscuous mode
Dec 06 07:13:05 compute-1 nova_compute[226101]: 2025-12-06 07:13:05.114 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:05.118 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[744d1a86-6edd-4279-8ca3-4ab3b6202d98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:05.131 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[357e2373-400f-4856-972d-e3cd36b74c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:05.133 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8b884d84-dea7-49cb-bad6-fb2295537606]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:05.146 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7b120483-c930-4786-ad24-5ec93a7e3c19]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546037, 'reachable_time': 19054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248442, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-1 systemd[1]: run-netns-ovnmeta\x2dc0aeacae\x2de53d\x2d425f\x2d88e7\x2d942ba0ab660c.mount: Deactivated successfully.
Dec 06 07:13:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:05.148 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:13:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:05.148 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[70e126ae-2073-4d03-836e-51ac6b2109ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-1 ceph-mon[81689]: osdmap e237: 3 total, 3 up, 3 in
Dec 06 07:13:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:06.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:13:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:06.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.378 226109 DEBUG nova.compute.manager [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Received event network-vif-unplugged-757384d6-3566-4a74-b1b3-d658468b4e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.378 226109 DEBUG oslo_concurrency.lockutils [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.378 226109 DEBUG oslo_concurrency.lockutils [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.378 226109 DEBUG oslo_concurrency.lockutils [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.378 226109 DEBUG nova.compute.manager [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] No waiting events found dispatching network-vif-unplugged-757384d6-3566-4a74-b1b3-d658468b4e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.379 226109 DEBUG nova.compute.manager [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Received event network-vif-unplugged-757384d6-3566-4a74-b1b3-d658468b4e13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.379 226109 DEBUG nova.compute.manager [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Received event network-vif-plugged-757384d6-3566-4a74-b1b3-d658468b4e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.379 226109 DEBUG oslo_concurrency.lockutils [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.379 226109 DEBUG oslo_concurrency.lockutils [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.379 226109 DEBUG oslo_concurrency.lockutils [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.379 226109 DEBUG nova.compute.manager [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] No waiting events found dispatching network-vif-plugged-757384d6-3566-4a74-b1b3-d658468b4e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:06 compute-1 nova_compute[226101]: 2025-12-06 07:13:06.380 226109 WARNING nova.compute.manager [req-1b6af63b-867b-4d87-94c7-f676f7bd622c req-653b2688-e94e-486a-bf8a-f6b115f2ad7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Received unexpected event network-vif-plugged-757384d6-3566-4a74-b1b3-d658468b4e13 for instance with vm_state active and task_state deleting.
Dec 06 07:13:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:08.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:08.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:09 compute-1 nova_compute[226101]: 2025-12-06 07:13:09.956 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:10 compute-1 nova_compute[226101]: 2025-12-06 07:13:09.999 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:10 compute-1 podman[248444]: 2025-12-06 07:13:10.115150138 +0000 UTC m=+0.087819533 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 06 07:13:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:10.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:13:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:10.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.002 226109 DEBUG oslo_concurrency.lockutils [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Acquiring lock "6538434f-992a-4978-b5ca-284ce0f6a66e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.002 226109 DEBUG oslo_concurrency.lockutils [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.003 226109 DEBUG oslo_concurrency.lockutils [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Acquiring lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.003 226109 DEBUG oslo_concurrency.lockutils [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.003 226109 DEBUG oslo_concurrency.lockutils [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.004 226109 INFO nova.compute.manager [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Terminating instance
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.005 226109 DEBUG nova.compute.manager [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:13:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e238 e238: 3 total, 3 up, 3 in
Dec 06 07:13:11 compute-1 ceph-mon[81689]: pgmap v1636: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 393 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 3.6 MiB/s wr, 364 op/s
Dec 06 07:13:11 compute-1 kernel: tapd5dc1a46-09 (unregistering): left promiscuous mode
Dec 06 07:13:11 compute-1 NetworkManager[49031]: <info>  [1765005191.4611] device (tapd5dc1a46-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:13:11 compute-1 ovn_controller[130279]: 2025-12-06T07:13:11Z|00203|binding|INFO|Releasing lport d5dc1a46-0992-4f53-8948-7dd36462be0e from this chassis (sb_readonly=0)
Dec 06 07:13:11 compute-1 ovn_controller[130279]: 2025-12-06T07:13:11Z|00204|binding|INFO|Setting lport d5dc1a46-0992-4f53-8948-7dd36462be0e down in Southbound
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.472 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:11 compute-1 ovn_controller[130279]: 2025-12-06T07:13:11Z|00205|binding|INFO|Removing iface tapd5dc1a46-09 ovn-installed in OVS
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.476 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.501 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:11 compute-1 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000038.scope: Deactivated successfully.
Dec 06 07:13:11 compute-1 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000038.scope: Consumed 14.349s CPU time.
Dec 06 07:13:11 compute-1 systemd-machined[190302]: Machine qemu-27-instance-00000038 terminated.
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.647 226109 INFO nova.virt.libvirt.driver [-] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Instance destroyed successfully.
Dec 06 07:13:11 compute-1 nova_compute[226101]: 2025-12-06 07:13:11.647 226109 DEBUG nova.objects.instance [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lazy-loading 'resources' on Instance uuid 6538434f-992a-4978-b5ca-284ce0f6a66e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.024 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:d1:d1 10.100.0.13'], port_security=['fa:16:3e:8b:d1:d1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6538434f-992a-4978-b5ca-284ce0f6a66e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2516fad8-9125-435f-927d-f6e26c906c07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84491fd4e10b457ab46523330f88bb07', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e0ef9ba-7bb6-432f-bfc8-b185a67387e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b4b25ba-d8c7-4eb1-bcbd-b5597e2987ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=d5dc1a46-0992-4f53-8948-7dd36462be0e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.026 139580 INFO neutron.agent.ovn.metadata.agent [-] Port d5dc1a46-0992-4f53-8948-7dd36462be0e in datapath 2516fad8-9125-435f-927d-f6e26c906c07 unbound from our chassis
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.028 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2516fad8-9125-435f-927d-f6e26c906c07, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.030 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4474e4-7ce0-4c3a-925b-78548b3692dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.031 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07 namespace which is not needed anymore
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.047 226109 DEBUG nova.virt.libvirt.vif [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:12:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=56,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEDHs3oEJey8VHUpTQ60S9SUwR4J1otbSj2hYCdlgIn/308O9184J7uLWjclqCGjpxdxNPhKZ1WyKTsvUdEjvkgROyZlA1Ykr5MyPbQDeyTaadVbVEV7g8pBoc6Ajlbiww==',key_name='tempest-keypair-1981076157',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:12:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84491fd4e10b457ab46523330f88bb07',ramdisk_id='',reservation_id='r-n0koef70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestFqdnHostnames-1552629590',owner_user_name='tempest-ServersTestFqdnHostnames-1552629590-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:12:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e090a5aa1bf4e4394b5fbf3418f0e6d',uuid=6538434f-992a-4978-b5ca-284ce0f6a66e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.048 226109 DEBUG nova.network.os_vif_util [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Converting VIF {"id": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "address": "fa:16:3e:8b:d1:d1", "network": {"id": "2516fad8-9125-435f-927d-f6e26c906c07", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-2021702000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84491fd4e10b457ab46523330f88bb07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5dc1a46-09", "ovs_interfaceid": "d5dc1a46-0992-4f53-8948-7dd36462be0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.049 226109 DEBUG nova.network.os_vif_util [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=d5dc1a46-0992-4f53-8948-7dd36462be0e,network=Network(2516fad8-9125-435f-927d-f6e26c906c07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5dc1a46-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.049 226109 DEBUG os_vif [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=d5dc1a46-0992-4f53-8948-7dd36462be0e,network=Network(2516fad8-9125-435f-927d-f6e26c906c07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5dc1a46-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.051 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.051 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5dc1a46-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.052 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.054 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.057 226109 INFO os_vif [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=d5dc1a46-0992-4f53-8948-7dd36462be0e,network=Network(2516fad8-9125-435f-927d-f6e26c906c07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5dc1a46-09')
Dec 06 07:13:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:12.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:12.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:12 compute-1 neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07[247559]: [NOTICE]   (247563) : haproxy version is 2.8.14-c23fe91
Dec 06 07:13:12 compute-1 neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07[247559]: [NOTICE]   (247563) : path to executable is /usr/sbin/haproxy
Dec 06 07:13:12 compute-1 neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07[247559]: [WARNING]  (247563) : Exiting Master process...
Dec 06 07:13:12 compute-1 neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07[247559]: [WARNING]  (247563) : Exiting Master process...
Dec 06 07:13:12 compute-1 neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07[247559]: [ALERT]    (247563) : Current worker (247565) exited with code 143 (Terminated)
Dec 06 07:13:12 compute-1 neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07[247559]: [WARNING]  (247563) : All workers exited. Exiting... (0)
Dec 06 07:13:12 compute-1 systemd[1]: libpod-ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b.scope: Deactivated successfully.
Dec 06 07:13:12 compute-1 podman[248524]: 2025-12-06 07:13:12.223245947 +0000 UTC m=+0.074339368 container died ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:13:12 compute-1 ceph-mon[81689]: pgmap v1637: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 359 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 2.7 MiB/s wr, 350 op/s
Dec 06 07:13:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/247600855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2172624807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:13:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2172624807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:13:12 compute-1 ceph-mon[81689]: pgmap v1638: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 305 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 2.9 MiB/s wr, 400 op/s
Dec 06 07:13:12 compute-1 ceph-mon[81689]: osdmap e238: 3 total, 3 up, 3 in
Dec 06 07:13:12 compute-1 ceph-mon[81689]: pgmap v1640: 305 pgs: 305 active+clean; 266 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.7 MiB/s wr, 306 op/s
Dec 06 07:13:12 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b-userdata-shm.mount: Deactivated successfully.
Dec 06 07:13:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-e307407e48075dd8257e2f82b3fea50b4905dd7eaf07993d36d623fa7c1b2580-merged.mount: Deactivated successfully.
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.561 226109 DEBUG nova.compute.manager [req-bbe0e852-6afa-4b9f-b484-ab7b2a8ef6c8 req-cd3bfae7-d8f3-4322-8116-92d42241b541 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Received event network-vif-unplugged-d5dc1a46-0992-4f53-8948-7dd36462be0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.561 226109 DEBUG oslo_concurrency.lockutils [req-bbe0e852-6afa-4b9f-b484-ab7b2a8ef6c8 req-cd3bfae7-d8f3-4322-8116-92d42241b541 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.562 226109 DEBUG oslo_concurrency.lockutils [req-bbe0e852-6afa-4b9f-b484-ab7b2a8ef6c8 req-cd3bfae7-d8f3-4322-8116-92d42241b541 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.562 226109 DEBUG oslo_concurrency.lockutils [req-bbe0e852-6afa-4b9f-b484-ab7b2a8ef6c8 req-cd3bfae7-d8f3-4322-8116-92d42241b541 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.562 226109 DEBUG nova.compute.manager [req-bbe0e852-6afa-4b9f-b484-ab7b2a8ef6c8 req-cd3bfae7-d8f3-4322-8116-92d42241b541 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] No waiting events found dispatching network-vif-unplugged-d5dc1a46-0992-4f53-8948-7dd36462be0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.562 226109 DEBUG nova.compute.manager [req-bbe0e852-6afa-4b9f-b484-ab7b2a8ef6c8 req-cd3bfae7-d8f3-4322-8116-92d42241b541 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Received event network-vif-unplugged-d5dc1a46-0992-4f53-8948-7dd36462be0e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:13:12 compute-1 podman[248524]: 2025-12-06 07:13:12.830149498 +0000 UTC m=+0.681242889 container cleanup ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:13:12 compute-1 systemd[1]: libpod-conmon-ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b.scope: Deactivated successfully.
Dec 06 07:13:12 compute-1 podman[248555]: 2025-12-06 07:13:12.92319243 +0000 UTC m=+0.068432709 container remove ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.931 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5a763f-b228-43e9-9afa-a997b7f87ebd]: (4, ('Sat Dec  6 07:13:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07 (ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b)\nba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b\nSat Dec  6 07:13:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07 (ba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b)\nba7bed80f37273b01d03b9a1ab55527fb8e7481f1fc9be5071cb01c88b11f44b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.932 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c517e428-5ae0-4c0f-9d16-fdcf0d886e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.933 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2516fad8-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.935 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:12 compute-1 kernel: tap2516fad8-90: left promiscuous mode
Dec 06 07:13:12 compute-1 nova_compute[226101]: 2025-12-06 07:13:12.963 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.966 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ca68d507-4e7d-4195-8eab-d072f61d2782]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.979 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2412aff8-7378-4dc8-bc14-e57262f4fffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.980 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0e926d8b-d149-485e-8f60-2f7339c70938]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.994 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a727a395-1b7c-4a7a-9859-dbf0080ee163]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543352, 'reachable_time': 16991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248571, 'error': None, 'target': 'ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:12 compute-1 systemd[1]: run-netns-ovnmeta\x2d2516fad8\x2d9125\x2d435f\x2d927d\x2df6e26c906c07.mount: Deactivated successfully.
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.997 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2516fad8-9125-435f-927d-f6e26c906c07 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:13:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:12.998 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a596294d-ca37-4966-b99b-08bf7d39bd65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:13 compute-1 nova_compute[226101]: 2025-12-06 07:13:13.138 226109 INFO nova.virt.libvirt.driver [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Deleting instance files /var/lib/nova/instances/8b57f004-f745-47c8-87d9-9d7ba1138ed8_del
Dec 06 07:13:13 compute-1 nova_compute[226101]: 2025-12-06 07:13:13.139 226109 INFO nova.virt.libvirt.driver [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Deletion of /var/lib/nova/instances/8b57f004-f745-47c8-87d9-9d7ba1138ed8_del complete
Dec 06 07:13:13 compute-1 nova_compute[226101]: 2025-12-06 07:13:13.217 226109 INFO nova.compute.manager [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Took 8.53 seconds to destroy the instance on the hypervisor.
Dec 06 07:13:13 compute-1 nova_compute[226101]: 2025-12-06 07:13:13.218 226109 DEBUG oslo.service.loopingcall [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:13:13 compute-1 nova_compute[226101]: 2025-12-06 07:13:13.218 226109 DEBUG nova.compute.manager [-] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:13:13 compute-1 nova_compute[226101]: 2025-12-06 07:13:13.218 226109 DEBUG nova.network.neutron [-] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:13:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e239 e239: 3 total, 3 up, 3 in
Dec 06 07:13:13 compute-1 nova_compute[226101]: 2025-12-06 07:13:13.972 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:14.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:14.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:14 compute-1 ceph-mon[81689]: pgmap v1641: 305 pgs: 305 active+clean; 271 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.0 MiB/s wr, 329 op/s
Dec 06 07:13:14 compute-1 ceph-mon[81689]: osdmap e239: 3 total, 3 up, 3 in
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.480 226109 DEBUG nova.network.neutron [-] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.530 226109 INFO nova.compute.manager [-] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Took 1.31 seconds to deallocate network for instance.
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.662 226109 DEBUG oslo_concurrency.lockutils [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.662 226109 DEBUG oslo_concurrency.lockutils [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e240 e240: 3 total, 3 up, 3 in
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.727 226109 DEBUG oslo_concurrency.processutils [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.761 226109 DEBUG nova.compute.manager [req-22ef9806-b0fb-434f-b59d-44379e67885d req-56b4f6b9-5778-43f4-a88c-c329a75c0c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Received event network-vif-plugged-d5dc1a46-0992-4f53-8948-7dd36462be0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.762 226109 DEBUG oslo_concurrency.lockutils [req-22ef9806-b0fb-434f-b59d-44379e67885d req-56b4f6b9-5778-43f4-a88c-c329a75c0c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.762 226109 DEBUG oslo_concurrency.lockutils [req-22ef9806-b0fb-434f-b59d-44379e67885d req-56b4f6b9-5778-43f4-a88c-c329a75c0c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.762 226109 DEBUG oslo_concurrency.lockutils [req-22ef9806-b0fb-434f-b59d-44379e67885d req-56b4f6b9-5778-43f4-a88c-c329a75c0c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.762 226109 DEBUG nova.compute.manager [req-22ef9806-b0fb-434f-b59d-44379e67885d req-56b4f6b9-5778-43f4-a88c-c329a75c0c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] No waiting events found dispatching network-vif-plugged-d5dc1a46-0992-4f53-8948-7dd36462be0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.763 226109 WARNING nova.compute.manager [req-22ef9806-b0fb-434f-b59d-44379e67885d req-56b4f6b9-5778-43f4-a88c-c329a75c0c28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Received unexpected event network-vif-plugged-d5dc1a46-0992-4f53-8948-7dd36462be0e for instance with vm_state active and task_state deleting.
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.793 226109 INFO nova.virt.libvirt.driver [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Deleting instance files /var/lib/nova/instances/6538434f-992a-4978-b5ca-284ce0f6a66e_del
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.794 226109 INFO nova.virt.libvirt.driver [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Deletion of /var/lib/nova/instances/6538434f-992a-4978-b5ca-284ce0f6a66e_del complete
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.893 226109 INFO nova.compute.manager [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Took 3.89 seconds to destroy the instance on the hypervisor.
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.894 226109 DEBUG oslo.service.loopingcall [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.894 226109 DEBUG nova.compute.manager [-] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:13:14 compute-1 nova_compute[226101]: 2025-12-06 07:13:14.894 226109 DEBUG nova.network.neutron [-] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:13:15 compute-1 podman[248594]: 2025-12-06 07:13:15.069585265 +0000 UTC m=+0.053814134 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible)
Dec 06 07:13:15 compute-1 podman[248593]: 2025-12-06 07:13:15.095168167 +0000 UTC m=+0.077830973 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 07:13:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1549486840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:15 compute-1 nova_compute[226101]: 2025-12-06 07:13:15.258 226109 DEBUG oslo_concurrency.processutils [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:15 compute-1 nova_compute[226101]: 2025-12-06 07:13:15.264 226109 DEBUG nova.compute.provider_tree [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:15 compute-1 nova_compute[226101]: 2025-12-06 07:13:15.312 226109 DEBUG nova.scheduler.client.report [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:15 compute-1 nova_compute[226101]: 2025-12-06 07:13:15.337 226109 DEBUG nova.compute.manager [req-95520246-401d-410f-b598-ddacbb0a8745 req-6eb40bfd-4b5b-4dd2-a794-6e9fd314ee54 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Received event network-vif-deleted-757384d6-3566-4a74-b1b3-d658468b4e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:15 compute-1 nova_compute[226101]: 2025-12-06 07:13:15.377 226109 DEBUG oslo_concurrency.lockutils [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:15 compute-1 nova_compute[226101]: 2025-12-06 07:13:15.449 226109 INFO nova.scheduler.client.report [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Deleted allocations for instance 8b57f004-f745-47c8-87d9-9d7ba1138ed8
Dec 06 07:13:15 compute-1 nova_compute[226101]: 2025-12-06 07:13:15.796 226109 DEBUG oslo_concurrency.lockutils [None req-e87c26d6-cd8a-48db-9ac6-6c8a188fbc21 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "8b57f004-f745-47c8-87d9-9d7ba1138ed8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:15 compute-1 ceph-mon[81689]: osdmap e240: 3 total, 3 up, 3 in
Dec 06 07:13:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1671056831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1549486840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2367303753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:16.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:16.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:16 compute-1 nova_compute[226101]: 2025-12-06 07:13:16.773 226109 DEBUG nova.network.neutron [-] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:16 compute-1 ceph-mon[81689]: pgmap v1644: 305 pgs: 305 active+clean; 236 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 6.5 MiB/s wr, 208 op/s
Dec 06 07:13:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2437772098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3239620160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:16 compute-1 nova_compute[226101]: 2025-12-06 07:13:16.924 226109 INFO nova.compute.manager [-] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Took 2.03 seconds to deallocate network for instance.
Dec 06 07:13:17 compute-1 nova_compute[226101]: 2025-12-06 07:13:17.055 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:17 compute-1 nova_compute[226101]: 2025-12-06 07:13:17.094 226109 DEBUG oslo_concurrency.lockutils [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:17 compute-1 nova_compute[226101]: 2025-12-06 07:13:17.094 226109 DEBUG oslo_concurrency.lockutils [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:17 compute-1 nova_compute[226101]: 2025-12-06 07:13:17.143 226109 DEBUG oslo_concurrency.processutils [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:17 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1931217294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:17 compute-1 nova_compute[226101]: 2025-12-06 07:13:17.541 226109 DEBUG nova.compute.manager [req-ddf5c289-2c31-4bd7-a284-bbcd036970fb req-edbca169-7898-4fc7-bc7b-5e49793b6e5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Received event network-vif-deleted-d5dc1a46-0992-4f53-8948-7dd36462be0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:17 compute-1 nova_compute[226101]: 2025-12-06 07:13:17.544 226109 DEBUG oslo_concurrency.processutils [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:17 compute-1 nova_compute[226101]: 2025-12-06 07:13:17.552 226109 DEBUG nova.compute.provider_tree [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:17 compute-1 nova_compute[226101]: 2025-12-06 07:13:17.594 226109 DEBUG nova.scheduler.client.report [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:17 compute-1 nova_compute[226101]: 2025-12-06 07:13:17.708 226109 DEBUG oslo_concurrency.lockutils [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:17 compute-1 nova_compute[226101]: 2025-12-06 07:13:17.761 226109 INFO nova.scheduler.client.report [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Deleted allocations for instance 6538434f-992a-4978-b5ca-284ce0f6a66e
Dec 06 07:13:17 compute-1 nova_compute[226101]: 2025-12-06 07:13:17.900 226109 DEBUG oslo_concurrency.lockutils [None req-582f019f-0cab-4c21-aa88-1ba6c60071f3 5e090a5aa1bf4e4394b5fbf3418f0e6d 84491fd4e10b457ab46523330f88bb07 - - default default] Lock "6538434f-992a-4978-b5ca-284ce0f6a66e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1931217294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:18.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:13:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:18.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:18 compute-1 nova_compute[226101]: 2025-12-06 07:13:18.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:19 compute-1 ceph-mon[81689]: pgmap v1645: 305 pgs: 305 active+clean; 237 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 4.9 MiB/s wr, 212 op/s
Dec 06 07:13:19 compute-1 nova_compute[226101]: 2025-12-06 07:13:19.967 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005184.9661877, 8b57f004-f745-47c8-87d9-9d7ba1138ed8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:19 compute-1 nova_compute[226101]: 2025-12-06 07:13:19.968 226109 INFO nova.compute.manager [-] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] VM Stopped (Lifecycle Event)
Dec 06 07:13:20 compute-1 nova_compute[226101]: 2025-12-06 07:13:20.023 226109 DEBUG nova.compute.manager [None req-b735ec29-fad4-49b2-ae3d-54322819987c - - - - - -] [instance: 8b57f004-f745-47c8-87d9-9d7ba1138ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:20.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:20 compute-1 ceph-mon[81689]: pgmap v1646: 305 pgs: 305 active+clean; 151 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 4.4 MiB/s wr, 276 op/s
Dec 06 07:13:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1737763718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:22 compute-1 nova_compute[226101]: 2025-12-06 07:13:22.059 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:22.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:22 compute-1 nova_compute[226101]: 2025-12-06 07:13:22.682 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:22 compute-1 nova_compute[226101]: 2025-12-06 07:13:22.682 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:22 compute-1 nova_compute[226101]: 2025-12-06 07:13:22.699 226109 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:13:22 compute-1 nova_compute[226101]: 2025-12-06 07:13:22.790 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:22 compute-1 nova_compute[226101]: 2025-12-06 07:13:22.791 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:22 compute-1 ceph-mon[81689]: pgmap v1647: 305 pgs: 305 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 282 op/s
Dec 06 07:13:22 compute-1 nova_compute[226101]: 2025-12-06 07:13:22.800 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:13:22 compute-1 nova_compute[226101]: 2025-12-06 07:13:22.800 226109 INFO nova.compute.claims [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:13:22 compute-1 nova_compute[226101]: 2025-12-06 07:13:22.963 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3627339594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.372 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.377 226109 DEBUG nova.compute.provider_tree [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.405 226109 DEBUG nova.scheduler.client.report [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.431 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.433 226109 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.485 226109 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.485 226109 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.509 226109 INFO nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.534 226109 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.658 226109 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.660 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.660 226109 INFO nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Creating image(s)
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.683 226109 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.705 226109 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.729 226109 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.732 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.755 226109 DEBUG nova.policy [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03fb2817729e4b71932023a7637c6244', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.794 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.795 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.795 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.795 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e241 e241: 3 total, 3 up, 3 in
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.821 226109 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.824 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3627339594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4004435533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:23 compute-1 nova_compute[226101]: 2025-12-06 07:13:23.995 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:24.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:24.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:24 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 06 07:13:24 compute-1 nova_compute[226101]: 2025-12-06 07:13:24.412 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:24 compute-1 nova_compute[226101]: 2025-12-06 07:13:24.456 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:24 compute-1 nova_compute[226101]: 2025-12-06 07:13:24.506 226109 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] resizing rbd image 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:13:24 compute-1 nova_compute[226101]: 2025-12-06 07:13:24.586 226109 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Successfully created port: d598fee1-6118-456e-92af-e68a995ba96d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:13:24 compute-1 nova_compute[226101]: 2025-12-06 07:13:24.677 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:24 compute-1 nova_compute[226101]: 2025-12-06 07:13:24.682 226109 DEBUG nova.objects.instance [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'migration_context' on Instance uuid 25e88e3a-cb03-4a91-8695-d1c9c589b8c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e242 e242: 3 total, 3 up, 3 in
Dec 06 07:13:24 compute-1 nova_compute[226101]: 2025-12-06 07:13:24.730 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:13:24 compute-1 nova_compute[226101]: 2025-12-06 07:13:24.730 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Ensure instance console log exists: /var/lib/nova/instances/25e88e3a-cb03-4a91-8695-d1c9c589b8c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:13:24 compute-1 nova_compute[226101]: 2025-12-06 07:13:24.731 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:24 compute-1 nova_compute[226101]: 2025-12-06 07:13:24.731 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:24 compute-1 nova_compute[226101]: 2025-12-06 07:13:24.731 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
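
The three lockutils lines above show nova serializing mediated-device allocation under a named in-process lock; with no vGPUs requested, the critical section is entered and left immediately (waited/held 0.000s). A sketch of the same pattern (the function name is illustrative, not nova's code):

    from oslo_concurrency import lockutils

    # Run the decorated function with the named "vgpu_resources" lock
    # held; lockutils emits acquire/wait/hold lines like those above.
    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs():            # illustrative name, not nova's
        pass                         # critical section
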
Dec 06 07:13:24 compute-1 ceph-mon[81689]: pgmap v1648: 305 pgs: 305 active+clean; 88 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 241 op/s
Dec 06 07:13:24 compute-1 ceph-mon[81689]: osdmap e241: 3 total, 3 up, 3 in
Dec 06 07:13:24 compute-1 ceph-mon[81689]: osdmap e242: 3 total, 3 up, 3 in
Dec 06 07:13:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e243 e243: 3 total, 3 up, 3 in
Dec 06 07:13:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/895580667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:25 compute-1 ceph-mon[81689]: osdmap e243: 3 total, 3 up, 3 in
Dec 06 07:13:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:13:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:26.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
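
The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and .102, repeating roughly every two seconds, look like load-balancer liveness probes against the radosgw beast frontend. A sketch of an equivalent probe; the address and port are assumptions, since the log does not record which endpoint beast is bound to:

    import http.client

    # Hypothetical liveness probe matching the anonymous "HEAD /"
    # entries above; host and port are assumed.
    conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)   # 200, as in the beast lines above
    conn.close()
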
Dec 06 07:13:26 compute-1 nova_compute[226101]: 2025-12-06 07:13:26.645 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005191.6436162, 6538434f-992a-4978-b5ca-284ce0f6a66e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:26 compute-1 nova_compute[226101]: 2025-12-06 07:13:26.645 226109 INFO nova.compute.manager [-] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] VM Stopped (Lifecycle Event)
Dec 06 07:13:26 compute-1 nova_compute[226101]: 2025-12-06 07:13:26.672 226109 DEBUG nova.compute.manager [None req-85e0b977-0b25-409a-865f-38ecd2b2372d - - - - - -] [instance: 6538434f-992a-4978-b5ca-284ce0f6a66e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:26 compute-1 nova_compute[226101]: 2025-12-06 07:13:26.783 226109 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Successfully updated port: d598fee1-6118-456e-92af-e68a995ba96d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:13:26 compute-1 nova_compute[226101]: 2025-12-06 07:13:26.812 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "refresh_cache-25e88e3a-cb03-4a91-8695-d1c9c589b8c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:13:26 compute-1 nova_compute[226101]: 2025-12-06 07:13:26.813 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquired lock "refresh_cache-25e88e3a-cb03-4a91-8695-d1c9c589b8c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:13:26 compute-1 nova_compute[226101]: 2025-12-06 07:13:26.813 226109 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:13:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e244 e244: 3 total, 3 up, 3 in
Dec 06 07:13:26 compute-1 ceph-mon[81689]: pgmap v1651: 305 pgs: 305 active+clean; 136 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.0 MiB/s wr, 302 op/s
Dec 06 07:13:26 compute-1 nova_compute[226101]: 2025-12-06 07:13:26.997 226109 DEBUG nova.compute.manager [req-3f1c6e8a-378a-43a9-ba5c-6183d17cf2cd req-26f230ad-6b5b-4a58-a1ec-e9ff515129cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Received event network-changed-d598fee1-6118-456e-92af-e68a995ba96d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:26 compute-1 nova_compute[226101]: 2025-12-06 07:13:26.998 226109 DEBUG nova.compute.manager [req-3f1c6e8a-378a-43a9-ba5c-6183d17cf2cd req-26f230ad-6b5b-4a58-a1ec-e9ff515129cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Refreshing instance network info cache due to event network-changed-d598fee1-6118-456e-92af-e68a995ba96d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:13:26 compute-1 nova_compute[226101]: 2025-12-06 07:13:26.998 226109 DEBUG oslo_concurrency.lockutils [req-3f1c6e8a-378a-43a9-ba5c-6183d17cf2cd req-26f230ad-6b5b-4a58-a1ec-e9ff515129cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-25e88e3a-cb03-4a91-8695-d1c9c589b8c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:13:27 compute-1 nova_compute[226101]: 2025-12-06 07:13:27.063 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:27 compute-1 nova_compute[226101]: 2025-12-06 07:13:27.318 226109 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:13:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:28.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:28.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.562 226109 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Updating instance_info_cache with network_info: [{"id": "d598fee1-6118-456e-92af-e68a995ba96d", "address": "fa:16:3e:66:eb:f9", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd598fee1-61", "ovs_interfaceid": "d598fee1-6118-456e-92af-e68a995ba96d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
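
The network_info blob above is plain JSON, so the interesting fields (port id, MAC, fixed IPs, MTU) are easy to pull out. A sketch against a trimmed copy of the logged structure:

    import json

    # Trimmed copy of the network_info structure logged above.
    raw = '''[{"id": "d598fee1-6118-456e-92af-e68a995ba96d",
               "address": "fa:16:3e:66:eb:f9",
               "network": {"meta": {"mtu": 1442},
                           "subnets": [{"ips": [{"address": "10.100.0.9"}]}]}}]'''
    vif = json.loads(raw)[0]
    ips = [ip['address']
           for subnet in vif['network']['subnets']
           for ip in subnet['ips']]
    print(vif['id'], vif['address'], ips, vif['network']['meta']['mtu'])
    # -> d598fee1-... fa:16:3e:66:eb:f9 ['10.100.0.9'] 1442
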
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.612 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Releasing lock "refresh_cache-25e88e3a-cb03-4a91-8695-d1c9c589b8c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.612 226109 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Instance network_info: |[{"id": "d598fee1-6118-456e-92af-e68a995ba96d", "address": "fa:16:3e:66:eb:f9", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd598fee1-61", "ovs_interfaceid": "d598fee1-6118-456e-92af-e68a995ba96d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.613 226109 DEBUG oslo_concurrency.lockutils [req-3f1c6e8a-378a-43a9-ba5c-6183d17cf2cd req-26f230ad-6b5b-4a58-a1ec-e9ff515129cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-25e88e3a-cb03-4a91-8695-d1c9c589b8c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.613 226109 DEBUG nova.network.neutron [req-3f1c6e8a-378a-43a9-ba5c-6183d17cf2cd req-26f230ad-6b5b-4a58-a1ec-e9ff515129cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Refreshing network info cache for port d598fee1-6118-456e-92af-e68a995ba96d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.617 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Start _get_guest_xml network_info=[{"id": "d598fee1-6118-456e-92af-e68a995ba96d", "address": "fa:16:3e:66:eb:f9", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd598fee1-61", "ovs_interfaceid": "d598fee1-6118-456e-92af-e68a995ba96d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.621 226109 WARNING nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.625 226109 DEBUG nova.virt.libvirt.host [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.626 226109 DEBUG nova.virt.libvirt.host [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.628 226109 DEBUG nova.virt.libvirt.host [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.628 226109 DEBUG nova.virt.libvirt.host [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
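
The four host.py lines above probe for a CPU controller, first via cgroups v1 (missing) and then v2 (found). On a v2 host the controllers usable in the root hierarchy are listed in a single file, so a simplified stand-in for the check is:

    from pathlib import Path

    # On cgroups v2 the available controllers are space-separated in one
    # file; nova's actual check lives in nova.virt.libvirt.host, this is
    # only a simplified stand-in.
    controllers = Path('/sys/fs/cgroup/cgroup.controllers')
    has_cpu = controllers.exists() and 'cpu' in controllers.read_text().split()
    print('cgroups v2 cpu controller found:', has_cpu)
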
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.629 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.630 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.630 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.630 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.630 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.631 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.631 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.631 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.631 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.631 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.632 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.632 226109 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
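
The hardware.py lines above enumerate (sockets, cores, threads) triples whose product equals the flavor's 1 vCPU, capped by the 65536 default limits, then sort them by preference; with nothing requested the only candidate is 1:1:1. A simplified reconstruction (nova's real code in nova.virt.hardware also honors preferred values and NUMA constraints, which this sketch ignores):

    # Simplified reconstruction of the topology search logged above.
    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]
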
Dec 06 07:13:28 compute-1 nova_compute[226101]: 2025-12-06 07:13:28.634 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.000 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:13:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2051350914' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:29 compute-1 ceph-mon[81689]: osdmap e244: 3 total, 3 up, 3 in
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.447 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.813s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
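
The "ceph mon dump --format=json" call above is how the driver discovers the monitor addresses that later show up as <host> entries in the guest XML. A sketch of the same discovery; the JSON keys ("mons", "addr") are an assumption about the mon dump output format:

    import json
    import subprocess

    # Same command as logged; strip the "/<nonce>" suffix from each
    # monitor address ("addr" key is an assumption about the format).
    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    mons = [m['addr'].split('/')[0] for m in json.loads(out)['mons']]
    print(mons)   # e.g. ['192.168.122.100:6789', '192.168.122.101:6789', ...]
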
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.472 226109 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.477 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:13:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2163268125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.930 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.932 226109 DEBUG nova.virt.libvirt.vif [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1838229485',display_name='tempest-MultipleCreateTestJSON-server-1838229485-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1838229485-1',id=62,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-kbd1i45d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCreateTestJSON-1199242675-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:23Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=25e88e3a-cb03-4a91-8695-d1c9c589b8c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d598fee1-6118-456e-92af-e68a995ba96d", "address": "fa:16:3e:66:eb:f9", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd598fee1-61", "ovs_interfaceid": "d598fee1-6118-456e-92af-e68a995ba96d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.932 226109 DEBUG nova.network.os_vif_util [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "d598fee1-6118-456e-92af-e68a995ba96d", "address": "fa:16:3e:66:eb:f9", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd598fee1-61", "ovs_interfaceid": "d598fee1-6118-456e-92af-e68a995ba96d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.933 226109 DEBUG nova.network.os_vif_util [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:eb:f9,bridge_name='br-int',has_traffic_filtering=True,id=d598fee1-6118-456e-92af-e68a995ba96d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd598fee1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
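
The two os_vif_util lines above convert nova's dict-shaped VIF into a typed VIFOpenVSwitch object for the os-vif library to plug. A minimal sketch of driving os-vif directly, with field values copied from the converted object printed above (network details trimmed; this is a sketch of the library's documented usage, not nova's code):

    import os_vif
    from os_vif import objects

    os_vif.initialize()

    # Field values copied from the converted VIFOpenVSwitch above;
    # network/subnet details trimmed to what the plug call needs.
    network = objects.network.Network(
        id='c0aeacae-e53d-425f-88e7-942ba0ab660c', bridge='br-int')
    profile = objects.vif.VIFPortProfileOpenVSwitch(
        interface_id='d598fee1-6118-456e-92af-e68a995ba96d')
    vif = objects.vif.VIFOpenVSwitch(
        id='d598fee1-6118-456e-92af-e68a995ba96d',
        address='fa:16:3e:66:eb:f9',
        vif_name='tapd598fee1-61', bridge_name='br-int',
        network=network, port_profile=profile)
    instance = objects.instance_info.InstanceInfo(
        uuid='25e88e3a-cb03-4a91-8695-d1c9c589b8c5',
        name='instance-0000003e')
    os_vif.plug(vif, instance)
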
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.935 226109 DEBUG nova.objects.instance [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'pci_devices' on Instance uuid 25e88e3a-cb03-4a91-8695-d1c9c589b8c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.952 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:13:29 compute-1 nova_compute[226101]:   <uuid>25e88e3a-cb03-4a91-8695-d1c9c589b8c5</uuid>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   <name>instance-0000003e</name>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <nova:name>tempest-MultipleCreateTestJSON-server-1838229485-1</nova:name>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:13:28</nova:creationTime>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <nova:user uuid="03fb2817729e4b71932023a7637c6244">tempest-MultipleCreateTestJSON-1199242675-project-member</nova:user>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <nova:project uuid="de09de98b3b1445f88b6094b6aac4a30">tempest-MultipleCreateTestJSON-1199242675</nova:project>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <nova:port uuid="d598fee1-6118-456e-92af-e68a995ba96d">
Dec 06 07:13:29 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <system>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <entry name="serial">25e88e3a-cb03-4a91-8695-d1c9c589b8c5</entry>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <entry name="uuid">25e88e3a-cb03-4a91-8695-d1c9c589b8c5</entry>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     </system>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   <os>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   </os>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   <features>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   </features>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk">
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       </source>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk.config">
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       </source>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:13:29 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:66:eb:f9"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <target dev="tapd598fee1-61"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/25e88e3a-cb03-4a91-8695-d1c9c589b8c5/console.log" append="off"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <video>
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     </video>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:13:29 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:13:29 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:13:29 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:13:29 compute-1 nova_compute[226101]: </domain>
Dec 06 07:13:29 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
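
The domain XML above is ordinary libvirt XML, so its key facts can be checked with the standard library; note that <memory> without a unit attribute is in KiB, so 131072 matches the flavor's 128 MiB. A sketch against a trimmed copy of the logged document:

    import xml.etree.ElementTree as ET

    # Trimmed copy of the domain XML printed above.
    domain_xml = '''<domain type="kvm">
      <uuid>25e88e3a-cb03-4a91-8695-d1c9c589b8c5</uuid>
      <memory>131072</memory>
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk">
            <host name="192.168.122.100" port="6789"/>
          </source>
        </disk>
      </devices>
    </domain>'''

    root = ET.fromstring(domain_xml)
    print(root.findtext('uuid'))     # instance UUID
    print(root.findtext('memory'))   # 131072 KiB == the flavor's 128 MiB
    for disk in root.findall('./devices/disk'):
        src = disk.find('source')
        print(src.get('protocol'), src.get('name'),
              [h.get('name') for h in src.findall('host')])
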
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.954 226109 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Preparing to wait for external event network-vif-plugged-d598fee1-6118-456e-92af-e68a995ba96d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.954 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.955 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.955 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.956 226109 DEBUG nova.virt.libvirt.vif [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1838229485',display_name='tempest-MultipleCreateTestJSON-server-1838229485-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1838229485-1',id=62,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-kbd1i45d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCreateTestJSON-1199242675-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:23Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=25e88e3a-cb03-4a91-8695-d1c9c589b8c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d598fee1-6118-456e-92af-e68a995ba96d", "address": "fa:16:3e:66:eb:f9", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd598fee1-61", "ovs_interfaceid": "d598fee1-6118-456e-92af-e68a995ba96d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.956 226109 DEBUG nova.network.os_vif_util [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "d598fee1-6118-456e-92af-e68a995ba96d", "address": "fa:16:3e:66:eb:f9", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd598fee1-61", "ovs_interfaceid": "d598fee1-6118-456e-92af-e68a995ba96d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.957 226109 DEBUG nova.network.os_vif_util [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:eb:f9,bridge_name='br-int',has_traffic_filtering=True,id=d598fee1-6118-456e-92af-e68a995ba96d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd598fee1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.958 226109 DEBUG os_vif [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:eb:f9,bridge_name='br-int',has_traffic_filtering=True,id=d598fee1-6118-456e-92af-e68a995ba96d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd598fee1-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.958 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.959 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.959 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.962 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.962 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd598fee1-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.963 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd598fee1-61, col_values=(('external_ids', {'iface-id': 'd598fee1-6118-456e-92af-e68a995ba96d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:eb:f9', 'vm-uuid': '25e88e3a-cb03-4a91-8695-d1c9c589b8c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
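
The two ovsdbapp commands above add the tap device to br-int (idempotently, may_exist=True) and stamp the Interface row's external_ids so OVN can bind the logical port to this chassis. A sketch of the same transaction through ovsdbapp's Open_vSwitch schema API; the database socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch; the db.sock path is an assumption.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same two commands as the logged transaction: idempotently add the
    # tap port, then set external_ids for OVN port binding.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapd598fee1-61', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd598fee1-61',
            ('external_ids', {
                'iface-id': 'd598fee1-6118-456e-92af-e68a995ba96d',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:66:eb:f9',
                'vm-uuid': '25e88e3a-cb03-4a91-8695-d1c9c589b8c5'})))
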
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.964 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:29 compute-1 NetworkManager[49031]: <info>  [1765005209.9651] manager: (tapd598fee1-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.966 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.970 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:29 compute-1 nova_compute[226101]: 2025-12-06 07:13:29.971 226109 INFO os_vif [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:eb:f9,bridge_name='br-int',has_traffic_filtering=True,id=d598fee1-6118-456e-92af-e68a995ba96d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd598fee1-61')
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.030 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.031 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.031 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No VIF found with MAC fa:16:3e:66:eb:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.031 226109 INFO nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Using config drive
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.053 226109 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:30.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:30.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.547 226109 INFO nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Creating config drive at /var/lib/nova/instances/25e88e3a-cb03-4a91-8695-d1c9c589b8c5/disk.config
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.552 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/25e88e3a-cb03-4a91-8695-d1c9c589b8c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmvvdobjz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.675 226109 DEBUG nova.network.neutron [req-3f1c6e8a-378a-43a9-ba5c-6183d17cf2cd req-26f230ad-6b5b-4a58-a1ec-e9ff515129cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Updated VIF entry in instance network info cache for port d598fee1-6118-456e-92af-e68a995ba96d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.675 226109 DEBUG nova.network.neutron [req-3f1c6e8a-378a-43a9-ba5c-6183d17cf2cd req-26f230ad-6b5b-4a58-a1ec-e9ff515129cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Updating instance_info_cache with network_info: [{"id": "d598fee1-6118-456e-92af-e68a995ba96d", "address": "fa:16:3e:66:eb:f9", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd598fee1-61", "ovs_interfaceid": "d598fee1-6118-456e-92af-e68a995ba96d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
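
The instance_info_cache update above carries the whole network model as one JSON blob. A short sketch pulling out the fields that matter for the rest of this trace, using a trimmed copy of the logged data:

    import json

    # Trimmed copy of the cached blob logged above.
    network_info = json.loads('''[{"id": "d598fee1-6118-456e-92af-e68a995ba96d",
      "address": "fa:16:3e:66:eb:f9",
      "devname": "tapd598fee1-61", "active": false,
      "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c",
                  "bridge": "br-int",
                  "subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.9"}]}]}}]''')

    for vif in network_info:
        ip = vif['network']['subnets'][0]['ips'][0]['address']
        print(vif['id'], vif['address'], ip, vif['devname'], vif['active'])
    # d598fee1-... fa:16:3e:66:eb:f9 10.100.0.9 tapd598fee1-61 False
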
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.680 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/25e88e3a-cb03-4a91-8695-d1c9c589b8c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmvvdobjz" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.706 226109 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.710 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/25e88e3a-cb03-4a91-8695-d1c9c589b8c5/disk.config 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:30 compute-1 nova_compute[226101]: 2025-12-06 07:13:30.737 226109 DEBUG oslo_concurrency.lockutils [req-3f1c6e8a-378a-43a9-ba5c-6183d17cf2cd req-26f230ad-6b5b-4a58-a1ec-e9ff515129cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-25e88e3a-cb03-4a91-8695-d1c9c589b8c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:13:30 compute-1 ceph-mon[81689]: pgmap v1654: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 192 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 12 MiB/s wr, 346 op/s
Dec 06 07:13:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2051350914' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2768478065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:30 compute-1 ceph-mon[81689]: pgmap v1655: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 251 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 14 MiB/s wr, 354 op/s
Dec 06 07:13:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/996662023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2163268125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.054 226109 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/25e88e3a-cb03-4a91-8695-d1c9c589b8c5/disk.config 25e88e3a-cb03-4a91-8695-d1c9c589b8c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.054 226109 INFO nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Deleting local config drive /var/lib/nova/instances/25e88e3a-cb03-4a91-8695-d1c9c589b8c5/disk.config because it was imported into RBD.
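
The lines above show the complete config-drive round trip: mkisofs builds the ISO locally, "rbd import" pushes it into the vms pool, and the now-redundant local copy is deleted. The same three steps condensed, with commands and paths copied from the log (error handling and nova's surrounding machinery omitted):

    import os
    import subprocess

    inst = '25e88e3a-cb03-4a91-8695-d1c9c589b8c5'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # 1. Build the ISO9660 config drive (the -publisher value is a single
    #    argv element even though the log prints it unquoted).
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpmvvdobjz'],
        check=True)

    # 2. Import it into the Ceph "vms" pool as <uuid>_disk.config.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)

    # 3. The local copy is redundant once it lives in RBD.
    os.unlink(iso)
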
Dec 06 07:13:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1271285825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:32 compute-1 kernel: tapd598fee1-61: entered promiscuous mode
Dec 06 07:13:32 compute-1 NetworkManager[49031]: <info>  [1765005212.1047] manager: (tapd598fee1-61): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Dec 06 07:13:32 compute-1 ovn_controller[130279]: 2025-12-06T07:13:32Z|00206|binding|INFO|Claiming lport d598fee1-6118-456e-92af-e68a995ba96d for this chassis.
Dec 06 07:13:32 compute-1 ovn_controller[130279]: 2025-12-06T07:13:32Z|00207|binding|INFO|d598fee1-6118-456e-92af-e68a995ba96d: Claiming fa:16:3e:66:eb:f9 10.100.0.9
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.106 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.111 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.124 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:eb:f9 10.100.0.9'], port_security=['fa:16:3e:66:eb:f9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '25e88e3a-cb03-4a91-8695-d1c9c589b8c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c3e308f8-65fc-402f-96ed-2039201b95ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5d88f71-5dee-4f84-b30d-05c6ecea010a, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=d598fee1-6118-456e-92af-e68a995ba96d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.125 139580 INFO neutron.agent.ovn.metadata.agent [-] Port d598fee1-6118-456e-92af-e68a995ba96d in datapath c0aeacae-e53d-425f-88e7-942ba0ab660c bound to our chassis
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.126 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:32 compute-1 systemd-udevd[248983]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:13:32 compute-1 systemd-machined[190302]: New machine qemu-29-instance-0000003e.
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.137 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1aef0230-b0ca-4b89-bbe3-94b7f70d32b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.139 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc0aeacae-e1 in ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.140 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc0aeacae-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.140 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0eb496-2bee-4e4c-9192-150674355126]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.141 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3b52f322-e19d-4d05-863e-87dc1755e546]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
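
The metadata agent is building an ovnmeta-<network-id> namespace with a veth pair, tapc0aeacae-e0 on the host side and tapc0aeacae-e1 inside, so a metadata proxy can listen on 169.254.169.254 for this datapath. A rough sketch of that plumbing as plain iproute2 commands; the agent actually drives pyroute2 through privsep (the replies above), and the address prefix below is an assumption:

    import subprocess

    ns = 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c'
    outer, inner = 'tapc0aeacae-e0', 'tapc0aeacae-e1'

    def sh(*args):
        subprocess.run(args, check=True)

    sh('ip', 'netns', 'add', ns)
    sh('ip', 'link', 'add', outer, 'type', 'veth', 'peer', 'name', inner)
    sh('ip', 'link', 'set', inner, 'netns', ns)   # inner end into the namespace
    sh('ip', 'link', 'set', outer, 'up')
    sh('ip', '-n', ns, 'link', 'set', inner, 'up')
    # Metadata address for the proxy to bind; the /16 prefix is an
    # assumption, not taken from this log.
    sh('ip', '-n', ns, 'addr', 'add', '169.254.169.254/16', 'dev', inner)
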
Dec 06 07:13:32 compute-1 NetworkManager[49031]: <info>  [1765005212.1456] device (tapd598fee1-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:13:32 compute-1 NetworkManager[49031]: <info>  [1765005212.1467] device (tapd598fee1-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:13:32 compute-1 systemd[1]: Started Virtual Machine qemu-29-instance-0000003e.
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.152 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[f28bfa7a-49e9-4ff0-9d73-f002a3da200e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.175 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[49e4b356-6e31-43ad-b924-f25f602e3c15]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.179 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-1 ovn_controller[130279]: 2025-12-06T07:13:32Z|00208|binding|INFO|Setting lport d598fee1-6118-456e-92af-e68a995ba96d ovn-installed in OVS
Dec 06 07:13:32 compute-1 ovn_controller[130279]: 2025-12-06T07:13:32Z|00209|binding|INFO|Setting lport d598fee1-6118-456e-92af-e68a995ba96d up in Southbound
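
ovn-controller claims the logical port because the Port_Binding's requested-chassis option names this host, then marks it up in the Southbound DB. The same row can be inspected with the stock ovn-sbctl CLI; the wrapper below is just for illustration:

    import subprocess

    def port_binding(logical_port):
        # "ovn-sbctl find Port_Binding logical_port=..." reads the row
        # ovn-controller just claimed and set up.
        out = subprocess.run(
            ['ovn-sbctl', 'find', 'Port_Binding',
             f'logical_port={logical_port}'],
            check=True, capture_output=True, text=True)
        return out.stdout

    print(port_binding('d598fee1-6118-456e-92af-e68a995ba96d'))
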
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.183 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.204 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[45c7c072-0f65-40fc-b30a-4d31bb0b304f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.209 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b6770d15-9b44-441e-976d-0f14dca0a160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 NetworkManager[49031]: <info>  [1765005212.2102] manager: (tapc0aeacae-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Dec 06 07:13:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:32.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.237 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[622279e7-7631-4bb4-95f0-44a8a745be08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.240 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[eddbec68-ddbb-44c4-af24-da338b7396c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:32.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:32 compute-1 NetworkManager[49031]: <info>  [1765005212.2621] device (tapc0aeacae-e0): carrier: link connected
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.270 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ec6256-9a1e-47b9-adb6-8e2212077b03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.284 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a0494a-fa51-4b67-86f8-23a1ae107875]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0aeacae-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:b3:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549176, 'reachable_time': 23234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249017, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.297 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4fea89c4-bb48-49b2-899f-2ba87d58ccad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:b3f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549176, 'tstamp': 549176}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249018, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.313 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[779594c0-6aaf-4acf-9e29-e3b95bcfda79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0aeacae-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:b3:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549176, 'reachable_time': 23234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249019, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.349 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3f7331-4a6c-4319-a330-ae906fb899ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.409 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8027b41a-7b84-4c4d-b88b-c40e370b5aac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.410 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0aeacae-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.410 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.411 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0aeacae-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:32 compute-1 kernel: tapc0aeacae-e0: entered promiscuous mode
Dec 06 07:13:32 compute-1 NetworkManager[49031]: <info>  [1765005212.4132] manager: (tapc0aeacae-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.412 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.415 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc0aeacae-e0, col_values=(('external_ids', {'iface-id': 'f88eb5d2-a701-4b34-8226-834c175af63c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:32 compute-1 ovn_controller[130279]: 2025-12-06T07:13:32Z|00210|binding|INFO|Releasing lport f88eb5d2-a701-4b34-8226-834c175af63c from this chassis (sb_readonly=0)
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.431 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.432 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.434 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3e2c66-7d75-406f-bf65-bdd8426d24ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.435 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:13:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:32.436 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'env', 'PROCESS_TAG=haproxy-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c0aeacae-e53d-425f-88e7-942ba0ab660c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
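
The rendered config above proxies 169.254.169.254:80 onto the /var/lib/neutron/metadata_proxy unix socket, and the rootwrap command launches haproxy inside the ovnmeta namespace. The launch reduced to its essentials (the sudo/neutron-rootwrap privilege handling is elided):

    import subprocess

    net = 'c0aeacae-e53d-425f-88e7-942ba0ab660c'
    # Run haproxy inside the metadata namespace, tagged so the agent can
    # find and manage the process later; arguments mirror the log line.
    subprocess.run(
        ['ip', 'netns', 'exec', f'ovnmeta-{net}',
         'env', f'PROCESS_TAG=haproxy-{net}',
         'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf'],
        check=True)
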
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.615 226109 DEBUG nova.compute.manager [req-989ddc40-3240-452b-9419-da7770882665 req-11aa4fd8-43df-4f47-9909-c89d0bff6dbd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Received event network-vif-plugged-d598fee1-6118-456e-92af-e68a995ba96d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.616 226109 DEBUG oslo_concurrency.lockutils [req-989ddc40-3240-452b-9419-da7770882665 req-11aa4fd8-43df-4f47-9909-c89d0bff6dbd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.617 226109 DEBUG oslo_concurrency.lockutils [req-989ddc40-3240-452b-9419-da7770882665 req-11aa4fd8-43df-4f47-9909-c89d0bff6dbd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.617 226109 DEBUG oslo_concurrency.lockutils [req-989ddc40-3240-452b-9419-da7770882665 req-11aa4fd8-43df-4f47-9909-c89d0bff6dbd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:32 compute-1 nova_compute[226101]: 2025-12-06 07:13:32.618 226109 DEBUG nova.compute.manager [req-989ddc40-3240-452b-9419-da7770882665 req-11aa4fd8-43df-4f47-9909-c89d0bff6dbd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Processing event network-vif-plugged-d598fee1-6118-456e-92af-e68a995ba96d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:13:32 compute-1 podman[249051]: 2025-12-06 07:13:32.828884638 +0000 UTC m=+0.057755630 container create 934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 07:13:32 compute-1 systemd[1]: Started libpod-conmon-934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa.scope.
Dec 06 07:13:32 compute-1 podman[249051]: 2025-12-06 07:13:32.794324175 +0000 UTC m=+0.023195147 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:13:32 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:13:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47737e9ce221fb2d5f793f358232754bca063f74db90d82103e3984887b3861f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:13:32 compute-1 podman[249051]: 2025-12-06 07:13:32.917220684 +0000 UTC m=+0.146091666 container init 934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:13:32 compute-1 podman[249051]: 2025-12-06 07:13:32.926192786 +0000 UTC m=+0.155063728 container start 934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:13:32 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[249085]: [NOTICE]   (249112) : New worker (249114) forked
Dec 06 07:13:32 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[249085]: [NOTICE]   (249112) : Loading success.
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.008 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005213.007797, 25e88e3a-cb03-4a91-8695-d1c9c589b8c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.008 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] VM Started (Lifecycle Event)
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.010 226109 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.014 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.017 226109 INFO nova.virt.libvirt.driver [-] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Instance spawned successfully.
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.018 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.048 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.052 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
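
The sync message compares the database's power_state (0) with what libvirt reports (1). Those integers are nova's power-state enum; the mapping below (believed to match nova.compute.power_state) is handy for reading these lines:

    # Reference: nova.compute.power_state values, for decoding
    # "DB power_state: 0, VM power_state: 1" above.
    POWER_STATES = {
        0: 'NOSTATE',    # DB row not yet updated for the new guest
        1: 'RUNNING',    # what libvirt reports once the domain starts
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATES[0], '->', POWER_STATES[1])   # NOSTATE -> RUNNING
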
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.063 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.064 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.064 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.065 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.065 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.066 226109 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
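
The six "Found default for ..." lines record the hardware defaults nova persists into the instance's system metadata so later rebuilds and migrations keep the same virtual hardware. Collected in one place:

    # Defaults nova registered for this instance, from the six lines above.
    registered_defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }
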
Dec 06 07:13:33 compute-1 ceph-mon[81689]: pgmap v1656: 305 pgs: 305 active+clean; 273 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 13 MiB/s wr, 339 op/s
Dec 06 07:13:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3851881568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.127 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.128 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005213.0086565, 25e88e3a-cb03-4a91-8695-d1c9c589b8c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.128 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] VM Paused (Lifecycle Event)
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.154 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.163 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005213.0128865, 25e88e3a-cb03-4a91-8695-d1c9c589b8c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.163 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] VM Resumed (Lifecycle Event)
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.171 226109 INFO nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Took 9.51 seconds to spawn the instance on the hypervisor.
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.172 226109 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.192 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.195 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.222 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.262 226109 INFO nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Took 10.50 seconds to build instance.
Dec 06 07:13:33 compute-1 nova_compute[226101]: 2025-12-06 07:13:33.284 226109 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:34 compute-1 nova_compute[226101]: 2025-12-06 07:13:34.001 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:34.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:13:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:34 compute-1 nova_compute[226101]: 2025-12-06 07:13:34.940 226109 DEBUG nova.compute.manager [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Received event network-vif-plugged-d598fee1-6118-456e-92af-e68a995ba96d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:34 compute-1 nova_compute[226101]: 2025-12-06 07:13:34.941 226109 DEBUG oslo_concurrency.lockutils [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:34 compute-1 nova_compute[226101]: 2025-12-06 07:13:34.941 226109 DEBUG oslo_concurrency.lockutils [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:34 compute-1 nova_compute[226101]: 2025-12-06 07:13:34.941 226109 DEBUG oslo_concurrency.lockutils [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:34 compute-1 nova_compute[226101]: 2025-12-06 07:13:34.942 226109 DEBUG nova.compute.manager [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] No waiting events found dispatching network-vif-plugged-d598fee1-6118-456e-92af-e68a995ba96d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:34 compute-1 nova_compute[226101]: 2025-12-06 07:13:34.942 226109 WARNING nova.compute.manager [req-3e4820f5-b865-48a6-8db6-60127f8e451e req-cb4c07cc-28ce-428d-884a-ab585be004e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Received unexpected event network-vif-plugged-d598fee1-6118-456e-92af-e68a995ba96d for instance with vm_state active and task_state None.
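
The first network-vif-plugged event (07:13:32) was consumed by the spawn's wait_for_instance_event; this second one arrives after the instance is already active, finds no registered waiter, and is logged as unexpected and dropped. A toy version of that prepare/dispatch pattern (a generic threading sketch, not nova's code):

    import threading

    waiters = {}   # event name -> threading.Event

    def prepare(name):
        # Registered before the operation that will produce the event.
        waiters[name] = threading.Event()

    def dispatch(name):
        ev = waiters.pop(name, None)
        if ev is None:
            print(f'unexpected event {name}')   # the WARNING above
        else:
            ev.set()                            # wakes the waiting spawn

    prepare('network-vif-plugged-d598fee1')
    dispatch('network-vif-plugged-d598fee1')    # 07:13:32: consumed by spawn
    dispatch('network-vif-plugged-d598fee1')    # 07:13:34: no waiter left
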
Dec 06 07:13:34 compute-1 nova_compute[226101]: 2025-12-06 07:13:34.965 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e245 e245: 3 total, 3 up, 3 in
Dec 06 07:13:35 compute-1 ceph-mon[81689]: pgmap v1657: 305 pgs: 305 active+clean; 280 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 8.9 MiB/s wr, 178 op/s
Dec 06 07:13:35 compute-1 nova_compute[226101]: 2025-12-06 07:13:35.463 226109 DEBUG oslo_concurrency.lockutils [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:35 compute-1 nova_compute[226101]: 2025-12-06 07:13:35.464 226109 DEBUG oslo_concurrency.lockutils [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:35 compute-1 nova_compute[226101]: 2025-12-06 07:13:35.464 226109 DEBUG oslo_concurrency.lockutils [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:35 compute-1 nova_compute[226101]: 2025-12-06 07:13:35.464 226109 DEBUG oslo_concurrency.lockutils [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:35 compute-1 nova_compute[226101]: 2025-12-06 07:13:35.464 226109 DEBUG oslo_concurrency.lockutils [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:35 compute-1 nova_compute[226101]: 2025-12-06 07:13:35.465 226109 INFO nova.compute.manager [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Terminating instance
Dec 06 07:13:35 compute-1 nova_compute[226101]: 2025-12-06 07:13:35.466 226109 DEBUG nova.compute.manager [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
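
The teardown starting here was triggered by an API-side delete; given the tempest-MultipleCreateTestJSON network label earlier, this is most likely the test cleaning up after itself. The client call is a one-liner with openstacksdk (cloud name and credentials are assumptions; the instance UUID is the one in the log):

    import openstack

    # "mycloud" is a placeholder clouds.yaml entry, not from this log.
    conn = openstack.connect(cloud='mycloud')
    conn.compute.delete_server('25e88e3a-cb03-4a91-8695-d1c9c589b8c5')
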
Dec 06 07:13:35 compute-1 kernel: tapd598fee1-61 (unregistering): left promiscuous mode
Dec 06 07:13:35 compute-1 NetworkManager[49031]: <info>  [1765005215.8013] device (tapd598fee1-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:13:35 compute-1 nova_compute[226101]: 2025-12-06 07:13:35.857 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:35 compute-1 ovn_controller[130279]: 2025-12-06T07:13:35Z|00211|binding|INFO|Releasing lport d598fee1-6118-456e-92af-e68a995ba96d from this chassis (sb_readonly=0)
Dec 06 07:13:35 compute-1 ovn_controller[130279]: 2025-12-06T07:13:35Z|00212|binding|INFO|Setting lport d598fee1-6118-456e-92af-e68a995ba96d down in Southbound
Dec 06 07:13:35 compute-1 ovn_controller[130279]: 2025-12-06T07:13:35Z|00213|binding|INFO|Removing iface tapd598fee1-61 ovn-installed in OVS
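[editor's note] ovn-controller has now cleared the chassis claim on the port. One way to confirm the Southbound state matches, sketched with subprocess against the ovn-sbctl CLI (assumes it is installed on this host and can reach the SB database):

```python
import json
import subprocess

# Look up the Port_Binding row for the logical port released above; after
# the release its "chassis" column should be empty and "up" false.
out = subprocess.run(
    ["ovn-sbctl", "--format=json", "find", "Port_Binding",
     "logical_port=d598fee1-6118-456e-92af-e68a995ba96d"],
    capture_output=True, text=True, check=True,
).stdout
print(json.loads(out))
```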
Dec 06 07:13:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:35.868 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:eb:f9 10.100.0.9'], port_security=['fa:16:3e:66:eb:f9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '25e88e3a-cb03-4a91-8695-d1c9c589b8c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c3e308f8-65fc-402f-96ed-2039201b95ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5d88f71-5dee-4f84-b30d-05c6ecea010a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=d598fee1-6118-456e-92af-e68a995ba96d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:35.869 139580 INFO neutron.agent.ovn.metadata.agent [-] Port d598fee1-6118-456e-92af-e68a995ba96d in datapath c0aeacae-e53d-425f-88e7-942ba0ab660c unbound from our chassis
Dec 06 07:13:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:35.871 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c0aeacae-e53d-425f-88e7-942ba0ab660c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:13:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:35.872 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d9126166-0abb-44de-891c-80c8c24d93ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:35.872 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c namespace which is not needed anymore
Dec 06 07:13:35 compute-1 nova_compute[226101]: 2025-12-06 07:13:35.873 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:35 compute-1 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Dec 06 07:13:35 compute-1 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003e.scope: Consumed 3.323s CPU time.
Dec 06 07:13:35 compute-1 systemd-machined[190302]: Machine qemu-29-instance-0000003e terminated.
Dec 06 07:13:35 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[249085]: [NOTICE]   (249112) : haproxy version is 2.8.14-c23fe91
Dec 06 07:13:35 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[249085]: [NOTICE]   (249112) : path to executable is /usr/sbin/haproxy
Dec 06 07:13:35 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[249085]: [WARNING]  (249112) : Exiting Master process...
Dec 06 07:13:35 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[249085]: [ALERT]    (249112) : Current worker (249114) exited with code 143 (Terminated)
Dec 06 07:13:35 compute-1 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[249085]: [WARNING]  (249112) : All workers exited. Exiting... (0)
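[editor's note] The ALERT's "exited with code 143" is not a crash: it is the conventional 128 + signal-number encoding for a process killed by SIGTERM (15), which is how the metadata agent stops the haproxy side-car. A one-line check:

```python
import signal

# 128 + SIGTERM(15) == 143, i.e. the worker exited on SIGTERM as intended.
assert 128 + signal.SIGTERM == 143
```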
Dec 06 07:13:36 compute-1 systemd[1]: libpod-934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa.scope: Deactivated successfully.
Dec 06 07:13:36 compute-1 podman[249148]: 2025-12-06 07:13:36.007639824 +0000 UTC m=+0.043396673 container died 934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:13:36 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa-userdata-shm.mount: Deactivated successfully.
Dec 06 07:13:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-47737e9ce221fb2d5f793f358232754bca063f74db90d82103e3984887b3861f-merged.mount: Deactivated successfully.
Dec 06 07:13:36 compute-1 podman[249148]: 2025-12-06 07:13:36.04380739 +0000 UTC m=+0.079564239 container cleanup 934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:13:36 compute-1 systemd[1]: libpod-conmon-934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa.scope: Deactivated successfully.
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.085 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.089 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.098 226109 INFO nova.virt.libvirt.driver [-] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Instance destroyed successfully.
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.098 226109 DEBUG nova.objects.instance [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'resources' on Instance uuid 25e88e3a-cb03-4a91-8695-d1c9c589b8c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:36 compute-1 podman[249179]: 2025-12-06 07:13:36.104012966 +0000 UTC m=+0.040695130 container remove 934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:13:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:36.108 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[274f9ea2-b2d6-415d-b7d5-fd7cf46a0501]: (4, ('Sat Dec  6 07:13:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c (934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa)\n934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa\nSat Dec  6 07:13:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c (934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa)\n934c8c6c5bcfad5b9a051ebfd48a1d9c161b8b55bec1364644ea14c2a9d356fa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:36.110 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[adcc510b-042c-4324-87fb-3820e6f38169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
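[editor's note] The privsep reply two lines up quotes the wrapper script's stdout as it stops and deletes the haproxy container. Outside the agent the same cleanup is, in effect, a podman stop followed by rm; a sketch assuming podman is on PATH and using the container name from the log:

```python
import subprocess

name = "neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c"
# SIGTERM the container (SIGKILL after podman's stop timeout), then delete
# it, mirroring the "Stopping container ... Deleting container" output.
subprocess.run(["podman", "stop", name], check=True)
subprocess.run(["podman", "rm", name], check=True)
```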
Dec 06 07:13:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:36.111 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0aeacae-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.114 226109 DEBUG nova.virt.libvirt.vif [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:13:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1838229485',display_name='tempest-MultipleCreateTestJSON-server-1838229485-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1838229485-1',id=62,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:13:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-kbd1i45d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCreateTestJSON-1199242675-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:13:33Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=25e88e3a-cb03-4a91-8695-d1c9c589b8c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d598fee1-6118-456e-92af-e68a995ba96d", "address": "fa:16:3e:66:eb:f9", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd598fee1-61", "ovs_interfaceid": "d598fee1-6118-456e-92af-e68a995ba96d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:13:36 compute-1 kernel: tapc0aeacae-e0: left promiscuous mode
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.115 226109 DEBUG nova.network.os_vif_util [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "d598fee1-6118-456e-92af-e68a995ba96d", "address": "fa:16:3e:66:eb:f9", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd598fee1-61", "ovs_interfaceid": "d598fee1-6118-456e-92af-e68a995ba96d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.116 226109 DEBUG nova.network.os_vif_util [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:eb:f9,bridge_name='br-int',has_traffic_filtering=True,id=d598fee1-6118-456e-92af-e68a995ba96d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd598fee1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.116 226109 DEBUG os_vif [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:eb:f9,bridge_name='br-int',has_traffic_filtering=True,id=d598fee1-6118-456e-92af-e68a995ba96d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd598fee1-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.118 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.119 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd598fee1-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.120 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.120 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.130 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:36.133 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[65d54c77-066c-4442-bdc4-5ef883b21ab5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-1 nova_compute[226101]: 2025-12-06 07:13:36.134 226109 INFO os_vif [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:eb:f9,bridge_name='br-int',has_traffic_filtering=True,id=d598fee1-6118-456e-92af-e68a995ba96d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd598fee1-61')
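[editor's note] The DelPortCommand transaction that just completed is ovsdbapp's programmatic equivalent of an ovs-vsctl port removal. The same operation by hand, with --if-exists standing in for the if_exists=True argument in the log:

```python
import subprocess

# Remove the instance's tap device from br-int; harmless if already gone.
subprocess.run(
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapd598fee1-61"],
    check=True,
)
```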
Dec 06 07:13:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:36.144 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d81f7e-7c74-4f24-b5cf-b10447ab3c57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:36.145 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a3585489-f744-47da-91d8-a849c41ba735]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:36.163 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d1eece09-2139-4132-bee1-6af2f469732b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549170, 'reachable_time': 25991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249213, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-1 systemd[1]: run-netns-ovnmeta\x2dc0aeacae\x2de53d\x2d425f\x2d88e7\x2d942ba0ab660c.mount: Deactivated successfully.
Dec 06 07:13:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:36.166 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:13:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:36.166 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[669c84ad-705d-4d6e-b48c-306f93cbafdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
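[editor's note] The oversized privsep reply above is a netlink RTM_NEWLINK dump showing only `lo` left in the ovnmeta namespace, after which the agent removes the namespace itself. Both steps can be reproduced with pyroute2, the library neutron's privileged ip_lib wraps; this is a sketch, with the namespace name taken from the log:

```python
from pyroute2 import NetNS, netns

ns_name = "ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c"

# Enumerate interfaces inside the namespace; ['lo'] alone means no VIF
# ports remain and the namespace is safe to tear down.
ns = NetNS(ns_name)
try:
    print([link.get_attr("IFLA_IFNAME") for link in ns.get_links()])
finally:
    ns.close()

# Equivalent of neutron.privileged.agent.linux.ip_lib.remove_netns().
netns.remove(ns_name)
```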
Dec 06 07:13:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:36.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:13:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:36.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2415543248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:36 compute-1 ceph-mon[81689]: pgmap v1658: 305 pgs: 305 active+clean; 295 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 9.0 MiB/s wr, 259 op/s
Dec 06 07:13:36 compute-1 ceph-mon[81689]: osdmap e245: 3 total, 3 up, 3 in
Dec 06 07:13:37 compute-1 nova_compute[226101]: 2025-12-06 07:13:37.104 226109 DEBUG nova.compute.manager [req-b8ce098f-0f81-4694-8dd8-f0e3cb080b38 req-b9574571-2e79-47d6-94f6-436751d8ae9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Received event network-vif-unplugged-d598fee1-6118-456e-92af-e68a995ba96d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:37 compute-1 nova_compute[226101]: 2025-12-06 07:13:37.104 226109 DEBUG oslo_concurrency.lockutils [req-b8ce098f-0f81-4694-8dd8-f0e3cb080b38 req-b9574571-2e79-47d6-94f6-436751d8ae9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:37 compute-1 nova_compute[226101]: 2025-12-06 07:13:37.104 226109 DEBUG oslo_concurrency.lockutils [req-b8ce098f-0f81-4694-8dd8-f0e3cb080b38 req-b9574571-2e79-47d6-94f6-436751d8ae9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:37 compute-1 nova_compute[226101]: 2025-12-06 07:13:37.104 226109 DEBUG oslo_concurrency.lockutils [req-b8ce098f-0f81-4694-8dd8-f0e3cb080b38 req-b9574571-2e79-47d6-94f6-436751d8ae9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:37 compute-1 nova_compute[226101]: 2025-12-06 07:13:37.105 226109 DEBUG nova.compute.manager [req-b8ce098f-0f81-4694-8dd8-f0e3cb080b38 req-b9574571-2e79-47d6-94f6-436751d8ae9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] No waiting events found dispatching network-vif-unplugged-d598fee1-6118-456e-92af-e68a995ba96d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:37 compute-1 nova_compute[226101]: 2025-12-06 07:13:37.105 226109 DEBUG nova.compute.manager [req-b8ce098f-0f81-4694-8dd8-f0e3cb080b38 req-b9574571-2e79-47d6-94f6-436751d8ae9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Received event network-vif-unplugged-d598fee1-6118-456e-92af-e68a995ba96d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
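[editor's note] Neutron delivers these notifications as external instance events keyed "<event-name>-<port-uuid>"; nova pops a waiter registered under that exact key, so "No waiting events found" only means nothing was blocked waiting on the unplug. The key format, for reference (per nova.objects.external_event):

```python
# nova matches external events on "<name>-<tag>", where the tag here is
# the Neutron port UUID from the log.
port_id = "d598fee1-6118-456e-92af-e68a995ba96d"
print(f"network-vif-unplugged-{port_id}")
```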
Dec 06 07:13:37 compute-1 nova_compute[226101]: 2025-12-06 07:13:37.113 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:37.112 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:37.114 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:13:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:38.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:38.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:39 compute-1 nova_compute[226101]: 2025-12-06 07:13:39.003 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:39 compute-1 nova_compute[226101]: 2025-12-06 07:13:39.246 226109 DEBUG nova.compute.manager [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Received event network-vif-plugged-d598fee1-6118-456e-92af-e68a995ba96d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:39 compute-1 nova_compute[226101]: 2025-12-06 07:13:39.246 226109 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:39 compute-1 nova_compute[226101]: 2025-12-06 07:13:39.247 226109 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:39 compute-1 nova_compute[226101]: 2025-12-06 07:13:39.247 226109 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:39 compute-1 nova_compute[226101]: 2025-12-06 07:13:39.247 226109 DEBUG nova.compute.manager [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] No waiting events found dispatching network-vif-plugged-d598fee1-6118-456e-92af-e68a995ba96d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:39 compute-1 nova_compute[226101]: 2025-12-06 07:13:39.247 226109 WARNING nova.compute.manager [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Received unexpected event network-vif-plugged-d598fee1-6118-456e-92af-e68a995ba96d for instance with vm_state active and task_state deleting.
Dec 06 07:13:39 compute-1 ceph-mon[81689]: pgmap v1660: 305 pgs: 305 active+clean; 304 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.4 MiB/s wr, 285 op/s
Dec 06 07:13:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:40.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:40.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:41 compute-1 podman[249223]: 2025-12-06 07:13:41.097038057 +0000 UTC m=+0.081484391 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
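[editor's note] This health_status=healthy event is podman's healthcheck timer running the configured test command (the /openstack/healthcheck mount in config_data). The same check can be invoked on demand:

```python
import subprocess

# Exit status 0 means healthy, mirroring health_status=healthy above.
subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=True)
```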
Dec 06 07:13:41 compute-1 nova_compute[226101]: 2025-12-06 07:13:41.120 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:41 compute-1 nova_compute[226101]: 2025-12-06 07:13:41.661 226109 INFO nova.virt.libvirt.driver [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Deleting instance files /var/lib/nova/instances/25e88e3a-cb03-4a91-8695-d1c9c589b8c5_del
Dec 06 07:13:41 compute-1 nova_compute[226101]: 2025-12-06 07:13:41.662 226109 INFO nova.virt.libvirt.driver [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Deletion of /var/lib/nova/instances/25e88e3a-cb03-4a91-8695-d1c9c589b8c5_del complete
Dec 06 07:13:41 compute-1 nova_compute[226101]: 2025-12-06 07:13:41.708 226109 INFO nova.compute.manager [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Took 6.24 seconds to destroy the instance on the hypervisor.
Dec 06 07:13:41 compute-1 nova_compute[226101]: 2025-12-06 07:13:41.708 226109 DEBUG oslo.service.loopingcall [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:13:41 compute-1 nova_compute[226101]: 2025-12-06 07:13:41.709 226109 DEBUG nova.compute.manager [-] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:13:41 compute-1 nova_compute[226101]: 2025-12-06 07:13:41.709 226109 DEBUG nova.network.neutron [-] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:13:41 compute-1 ceph-mon[81689]: pgmap v1661: 305 pgs: 305 active+clean; 279 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 315 op/s
Dec 06 07:13:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:42.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:42.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:43 compute-1 ceph-mon[81689]: pgmap v1662: 305 pgs: 305 active+clean; 214 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 2.6 MiB/s wr, 440 op/s
Dec 06 07:13:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1231508313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3896354293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:43 compute-1 nova_compute[226101]: 2025-12-06 07:13:43.953 226109 DEBUG nova.network.neutron [-] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:43 compute-1 nova_compute[226101]: 2025-12-06 07:13:43.980 226109 INFO nova.compute.manager [-] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Took 2.27 seconds to deallocate network for instance.
Dec 06 07:13:44 compute-1 nova_compute[226101]: 2025-12-06 07:13:44.004 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:44 compute-1 nova_compute[226101]: 2025-12-06 07:13:44.025 226109 DEBUG nova.compute.manager [req-8ca3d8d0-f978-44d7-b517-d35d078239f9 req-868ce09a-fca9-4af3-bf3b-41c03aa42b4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Received event network-vif-deleted-d598fee1-6118-456e-92af-e68a995ba96d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:44 compute-1 nova_compute[226101]: 2025-12-06 07:13:44.027 226109 DEBUG oslo_concurrency.lockutils [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:44 compute-1 nova_compute[226101]: 2025-12-06 07:13:44.027 226109 DEBUG oslo_concurrency.lockutils [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:44 compute-1 nova_compute[226101]: 2025-12-06 07:13:44.071 226109 DEBUG oslo_concurrency.processutils [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:44.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:44.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:44 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1368209956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:44 compute-1 nova_compute[226101]: 2025-12-06 07:13:44.523 226109 DEBUG oslo_concurrency.processutils [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
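[editor's note] nova's RBD image backend sizes its DISK_GB inventory from this `ceph df` call. Re-running the same command by hand and reading the cluster totals, assuming the same client id and conf path are usable:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout
stats = json.loads(out)["stats"]
# Cluster-wide byte totals; nova converts these into DISK_GB inventory.
print(stats["total_bytes"], stats["total_avail_bytes"])
```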
Dec 06 07:13:44 compute-1 nova_compute[226101]: 2025-12-06 07:13:44.529 226109 DEBUG nova.compute.provider_tree [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:44 compute-1 nova_compute[226101]: 2025-12-06 07:13:44.544 226109 DEBUG nova.scheduler.client.report [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
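[editor's note] Placement evaluates this inventory as schedulable capacity = (total - reserved) * allocation_ratio per resource class. Applying that to the values just logged:

```python
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, round(cap, 2))  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
```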
Dec 06 07:13:44 compute-1 nova_compute[226101]: 2025-12-06 07:13:44.566 226109 DEBUG oslo_concurrency.lockutils [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:44 compute-1 nova_compute[226101]: 2025-12-06 07:13:44.600 226109 INFO nova.scheduler.client.report [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Deleted allocations for instance 25e88e3a-cb03-4a91-8695-d1c9c589b8c5
Dec 06 07:13:44 compute-1 nova_compute[226101]: 2025-12-06 07:13:44.659 226109 DEBUG oslo_concurrency.lockutils [None req-f89ec613-f088-4d6f-96ad-34a8be1f94b2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "25e88e3a-cb03-4a91-8695-d1c9c589b8c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
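[editor's note] The 9.196 s hold time on the instance lock ties out against the two durations nova reported during the teardown (6.24 s to destroy on the hypervisor, 2.27 s to deallocate networking), leaving well under a second of bookkeeping:

```python
held = 9.196    # lock "held" time from the release line above
destroy = 6.24  # "Took 6.24 seconds to destroy the instance..."
dealloc = 2.27  # "Took 2.27 seconds to deallocate network..."
print(f"other work: {held - destroy - dealloc:.3f}s")  # other work: 0.686s
```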
Dec 06 07:13:45 compute-1 ceph-mon[81689]: pgmap v1663: 305 pgs: 305 active+clean; 213 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 1.9 MiB/s wr, 417 op/s
Dec 06 07:13:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3150186985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:45.116 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
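[editor's note] This DbSetCommand is the delayed write announced at 07:13:37: the agent acknowledges nb_cfg 23 by stamping it into its own Chassis_Private row. The result can be read back with ovn-sbctl (record UUID from the log), sketched here via subprocess:

```python
import subprocess

# Show the external_ids map on the agent's Chassis_Private record; it
# should now contain neutron:ovn-metadata-sb-cfg="23".
print(subprocess.run(
    ["ovn-sbctl", "get", "Chassis_Private",
     "03fe054d-d727-4af3-9c5e-92e57505f242", "external_ids"],
    capture_output=True, text=True, check=True,
).stdout)
```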
Dec 06 07:13:46 compute-1 podman[249273]: 2025-12-06 07:13:46.056061719 +0000 UTC m=+0.043852115 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:13:46 compute-1 podman[249272]: 2025-12-06 07:13:46.068111965 +0000 UTC m=+0.056543749 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Dec 06 07:13:46 compute-1 nova_compute[226101]: 2025-12-06 07:13:46.121 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:46.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:46.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1368209956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:46 compute-1 ceph-mon[81689]: pgmap v1664: 305 pgs: 305 active+clean; 192 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 497 KiB/s wr, 315 op/s
Dec 06 07:13:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:48.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:48.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:48 compute-1 nova_compute[226101]: 2025-12-06 07:13:48.978 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Acquiring lock "1593534a-7024-4f80-9d34-0d75101dc370" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:48 compute-1 nova_compute[226101]: 2025-12-06 07:13:48.979 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.007 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.015 226109 DEBUG nova.compute.manager [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.077 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.078 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.084 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.085 226109 INFO nova.compute.claims [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:13:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.183 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.436 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.609 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/951077284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.676 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.682 226109 DEBUG nova.compute.provider_tree [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.700 226109 DEBUG nova.scheduler.client.report [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.724 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.725 226109 DEBUG nova.compute.manager [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.774 226109 DEBUG nova.compute.manager [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.775 226109 DEBUG nova.network.neutron [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.791 226109 INFO nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.810 226109 DEBUG nova.compute.manager [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:13:49 compute-1 ceph-mon[81689]: pgmap v1665: 305 pgs: 305 active+clean; 175 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 416 KiB/s wr, 279 op/s
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.910 226109 DEBUG nova.compute.manager [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.911 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.911 226109 INFO nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Creating image(s)
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.938 226109 DEBUG nova.storage.rbd_utils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] rbd image 1593534a-7024-4f80-9d34-0d75101dc370_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:49 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.969 226109 DEBUG nova.storage.rbd_utils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] rbd image 1593534a-7024-4f80-9d34-0d75101dc370_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:49.999 226109 DEBUG nova.storage.rbd_utils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] rbd image 1593534a-7024-4f80-9d34-0d75101dc370_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.003 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.075 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.076 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.076 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.076 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.105 226109 DEBUG nova.storage.rbd_utils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] rbd image 1593534a-7024-4f80-9d34-0d75101dc370_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.108 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1593534a-7024-4f80-9d34-0d75101dc370_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:50.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:50.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.418 226109 DEBUG nova.policy [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30cfe1ade59e4735affe15c43e86fec2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9b693354f7db4a0eb127bf2915dd342e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.616 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.616 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.616 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.616 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:13:50 compute-1 nova_compute[226101]: 2025-12-06 07:13:50.617 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3576366386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.025 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.097 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005216.0957065, 25e88e3a-cb03-4a91-8695-d1c9c589b8c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.097 226109 INFO nova.compute.manager [-] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] VM Stopped (Lifecycle Event)
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.116 226109 DEBUG nova.compute.manager [None req-f6a0691d-6041-4d52-992c-74e74f9804f7 - - - - - -] [instance: 25e88e3a-cb03-4a91-8695-d1c9c589b8c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.122 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.203 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.204 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4694MB free_disk=20.942638397216797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.204 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.205 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.282 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 1593534a-7024-4f80-9d34-0d75101dc370 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.282 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.282 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:13:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1022856760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/951077284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:51 compute-1 ceph-mon[81689]: pgmap v1666: 305 pgs: 305 active+clean; 167 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 44 KiB/s wr, 217 op/s
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.335 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.509936) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231509965, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1739, "num_deletes": 523, "total_data_size": 2884162, "memory_usage": 2937496, "flush_reason": "Manual Compaction"}
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231519595, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 1350608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33775, "largest_seqno": 35509, "table_properties": {"data_size": 1344222, "index_size": 2885, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19524, "raw_average_key_size": 20, "raw_value_size": 1328323, "raw_average_value_size": 1390, "num_data_blocks": 126, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 523, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005122, "oldest_key_time": 1765005122, "file_creation_time": 1765005231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 9686 microseconds, and 4282 cpu microseconds.
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.519623) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 1350608 bytes OK
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.519638) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.521158) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.521169) EVENT_LOG_v1 {"time_micros": 1765005231521165, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.521184) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2874850, prev total WAL file size 2874850, number of live WAL files 2.
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.521945) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303130' seq:72057594037927935, type:22 .. '6D6772737461740031323732' seq:0, type:0; will stop at (end)
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1318KB)], [63(11MB)]
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231522020, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 13107088, "oldest_snapshot_seqno": -1}
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6443 keys, 9571376 bytes, temperature: kUnknown
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231587313, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 9571376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9528409, "index_size": 25769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 166238, "raw_average_key_size": 25, "raw_value_size": 9412777, "raw_average_value_size": 1460, "num_data_blocks": 1029, "num_entries": 6443, "num_filter_entries": 6443, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765005231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.587657) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9571376 bytes
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.597748) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.5 rd, 146.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 11.2 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(16.8) write-amplify(7.1) OK, records in: 7472, records dropped: 1029 output_compression: NoCompression
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.597792) EVENT_LOG_v1 {"time_micros": 1765005231597775, "job": 38, "event": "compaction_finished", "compaction_time_micros": 65375, "compaction_time_cpu_micros": 22328, "output_level": 6, "num_output_files": 1, "total_output_size": 9571376, "num_input_records": 7472, "num_output_records": 6443, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231598220, "job": 38, "event": "table_file_deletion", "file_number": 65}
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231600750, "job": 38, "event": "table_file_deletion", "file_number": 63}
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.521805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.600808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.600813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.600815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.600816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:13:51.600818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/603883367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.767 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.773 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.799 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.825 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.826 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.936 226109 DEBUG nova.network.neutron [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Successfully created port: 9879530b-b141-43b3-a86c-e7136c43c4d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:13:51 compute-1 nova_compute[226101]: 2025-12-06 07:13:51.957 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1593534a-7024-4f80-9d34-0d75101dc370_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.849s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.017 226109 DEBUG nova.storage.rbd_utils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] resizing rbd image 1593534a-7024-4f80-9d34-0d75101dc370_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:13:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:52.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.280 226109 DEBUG nova.objects.instance [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lazy-loading 'migration_context' on Instance uuid 1593534a-7024-4f80-9d34-0d75101dc370 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:52.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.300 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.300 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Ensure instance console log exists: /var/lib/nova/instances/1593534a-7024-4f80-9d34-0d75101dc370/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.301 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.301 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.302 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3576366386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/603883367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.628 226109 DEBUG nova.network.neutron [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Successfully updated port: 9879530b-b141-43b3-a86c-e7136c43c4d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.643 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Acquiring lock "refresh_cache-1593534a-7024-4f80-9d34-0d75101dc370" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.643 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Acquired lock "refresh_cache-1593534a-7024-4f80-9d34-0d75101dc370" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.643 226109 DEBUG nova.network.neutron [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.779 226109 DEBUG nova.compute.manager [req-3f31cc38-372e-4a17-a924-2362dd428eab req-17164c9d-ebbe-407b-8585-06f7b0b2fc4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Received event network-changed-9879530b-b141-43b3-a86c-e7136c43c4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.780 226109 DEBUG nova.compute.manager [req-3f31cc38-372e-4a17-a924-2362dd428eab req-17164c9d-ebbe-407b-8585-06f7b0b2fc4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Refreshing instance network info cache due to event network-changed-9879530b-b141-43b3-a86c-e7136c43c4d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.780 226109 DEBUG oslo_concurrency.lockutils [req-3f31cc38-372e-4a17-a924-2362dd428eab req-17164c9d-ebbe-407b-8585-06f7b0b2fc4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1593534a-7024-4f80-9d34-0d75101dc370" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.826 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:52 compute-1 nova_compute[226101]: 2025-12-06 07:13:52.827 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:13:53 compute-1 nova_compute[226101]: 2025-12-06 07:13:53.338 226109 DEBUG nova.network.neutron [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:13:53 compute-1 nova_compute[226101]: 2025-12-06 07:13:53.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:53 compute-1 ceph-mon[81689]: pgmap v1667: 305 pgs: 305 active+clean; 167 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 27 KiB/s wr, 193 op/s
Dec 06 07:13:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/205864655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.008 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:54.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:54.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.576 226109 DEBUG nova.network.neutron [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Updating instance_info_cache with network_info: [{"id": "9879530b-b141-43b3-a86c-e7136c43c4d4", "address": "fa:16:3e:b2:2a:50", "network": {"id": "029c3c2b-c7ec-4776-9c3c-b0ccd42caa15", "bridge": "br-int", "label": "tempest-ServersTestJSON-515571753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b693354f7db4a0eb127bf2915dd342e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9879530b-b1", "ovs_interfaceid": "9879530b-b141-43b3-a86c-e7136c43c4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.602 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Releasing lock "refresh_cache-1593534a-7024-4f80-9d34-0d75101dc370" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.603 226109 DEBUG nova.compute.manager [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Instance network_info: |[{"id": "9879530b-b141-43b3-a86c-e7136c43c4d4", "address": "fa:16:3e:b2:2a:50", "network": {"id": "029c3c2b-c7ec-4776-9c3c-b0ccd42caa15", "bridge": "br-int", "label": "tempest-ServersTestJSON-515571753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b693354f7db4a0eb127bf2915dd342e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9879530b-b1", "ovs_interfaceid": "9879530b-b141-43b3-a86c-e7136c43c4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.603 226109 DEBUG oslo_concurrency.lockutils [req-3f31cc38-372e-4a17-a924-2362dd428eab req-17164c9d-ebbe-407b-8585-06f7b0b2fc4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1593534a-7024-4f80-9d34-0d75101dc370" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.603 226109 DEBUG nova.network.neutron [req-3f31cc38-372e-4a17-a924-2362dd428eab req-17164c9d-ebbe-407b-8585-06f7b0b2fc4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Refreshing network info cache for port 9879530b-b141-43b3-a86c-e7136c43c4d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.605 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Start _get_guest_xml network_info=[{"id": "9879530b-b141-43b3-a86c-e7136c43c4d4", "address": "fa:16:3e:b2:2a:50", "network": {"id": "029c3c2b-c7ec-4776-9c3c-b0ccd42caa15", "bridge": "br-int", "label": "tempest-ServersTestJSON-515571753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b693354f7db4a0eb127bf2915dd342e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9879530b-b1", "ovs_interfaceid": "9879530b-b141-43b3-a86c-e7136c43c4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.611 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.611 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.612 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.613 226109 WARNING nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.618 226109 DEBUG nova.virt.libvirt.host [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.619 226109 DEBUG nova.virt.libvirt.host [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.628 226109 DEBUG nova.virt.libvirt.host [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.628 226109 DEBUG nova.virt.libvirt.host [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.629 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.629 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.630 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.630 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.630 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.630 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.631 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.631 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.631 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.631 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.631 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.632 226109 DEBUG nova.virt.hardware [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:13:54 compute-1 nova_compute[226101]: 2025-12-06 07:13:54.634 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1262571811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:13:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3357547717' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.132 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.159 226109 DEBUG nova.storage.rbd_utils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] rbd image 1593534a-7024-4f80-9d34-0d75101dc370_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.164 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:55 compute-1 sudo[249587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:13:55 compute-1 sudo[249587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:55 compute-1 sudo[249587]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:55 compute-1 sudo[249612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:13:55 compute-1 sudo[249612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:55 compute-1 sudo[249612]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:55 compute-1 sudo[249656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:13:55 compute-1 sudo[249656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:55 compute-1 sudo[249656]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:55 compute-1 sudo[249681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:13:55 compute-1 sudo[249681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.605 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:13:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2222722586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.662 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.664 226109 DEBUG nova.virt.libvirt.vif [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1447106479',display_name='tempest-ServersTestJSON-server-1447106479',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1447106479',id=66,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB4LWpEx8TK4J8iUIdEoL+UGkMmaboPUbt0fSaRfu7BLndPZ6NyAQi2AuHQ0TUsU00lHX/vI+qkROsTetEv6jInEgEAa2ZZdW5mPiDbhGcSSekiGMBRFQecx5T18LbOpZA==',key_name='tempest-keypair-1585904709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b693354f7db4a0eb127bf2915dd342e',ramdisk_id='',reservation_id='r-qj832q6n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-256186984',owner_user_name='tempest-ServersTestJSON-256186984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='30cfe1ade59e4735affe15c43e86fec2',uuid=1593534a-7024-4f80-9d34-0d75101dc370,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9879530b-b141-43b3-a86c-e7136c43c4d4", "address": "fa:16:3e:b2:2a:50", "network": {"id": "029c3c2b-c7ec-4776-9c3c-b0ccd42caa15", "bridge": "br-int", "label": "tempest-ServersTestJSON-515571753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b693354f7db4a0eb127bf2915dd342e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9879530b-b1", "ovs_interfaceid": "9879530b-b141-43b3-a86c-e7136c43c4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.665 226109 DEBUG nova.network.os_vif_util [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Converting VIF {"id": "9879530b-b141-43b3-a86c-e7136c43c4d4", "address": "fa:16:3e:b2:2a:50", "network": {"id": "029c3c2b-c7ec-4776-9c3c-b0ccd42caa15", "bridge": "br-int", "label": "tempest-ServersTestJSON-515571753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b693354f7db4a0eb127bf2915dd342e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9879530b-b1", "ovs_interfaceid": "9879530b-b141-43b3-a86c-e7136c43c4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.666 226109 DEBUG nova.network.os_vif_util [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2a:50,bridge_name='br-int',has_traffic_filtering=True,id=9879530b-b141-43b3-a86c-e7136c43c4d4,network=Network(029c3c2b-c7ec-4776-9c3c-b0ccd42caa15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9879530b-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.667 226109 DEBUG nova.objects.instance [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1593534a-7024-4f80-9d34-0d75101dc370 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.687 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:13:55 compute-1 nova_compute[226101]:   <uuid>1593534a-7024-4f80-9d34-0d75101dc370</uuid>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   <name>instance-00000042</name>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <nova:name>tempest-ServersTestJSON-server-1447106479</nova:name>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:13:54</nova:creationTime>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <nova:user uuid="30cfe1ade59e4735affe15c43e86fec2">tempest-ServersTestJSON-256186984-project-member</nova:user>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <nova:project uuid="9b693354f7db4a0eb127bf2915dd342e">tempest-ServersTestJSON-256186984</nova:project>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <nova:port uuid="9879530b-b141-43b3-a86c-e7136c43c4d4">
Dec 06 07:13:55 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <system>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <entry name="serial">1593534a-7024-4f80-9d34-0d75101dc370</entry>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <entry name="uuid">1593534a-7024-4f80-9d34-0d75101dc370</entry>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     </system>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   <os>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   </os>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   <features>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   </features>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1593534a-7024-4f80-9d34-0d75101dc370_disk">
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       </source>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1593534a-7024-4f80-9d34-0d75101dc370_disk.config">
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       </source>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:13:55 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:b2:2a:50"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <target dev="tap9879530b-b1"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/1593534a-7024-4f80-9d34-0d75101dc370/console.log" append="off"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <video>
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     </video>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:13:55 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:13:55 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:13:55 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:13:55 compute-1 nova_compute[226101]: </domain>
Dec 06 07:13:55 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.688 226109 DEBUG nova.compute.manager [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Preparing to wait for external event network-vif-plugged-9879530b-b141-43b3-a86c-e7136c43c4d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.688 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Acquiring lock "1593534a-7024-4f80-9d34-0d75101dc370-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.688 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.689 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.689 226109 DEBUG nova.virt.libvirt.vif [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1447106479',display_name='tempest-ServersTestJSON-server-1447106479',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1447106479',id=66,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB4LWpEx8TK4J8iUIdEoL+UGkMmaboPUbt0fSaRfu7BLndPZ6NyAQi2AuHQ0TUsU00lHX/vI+qkROsTetEv6jInEgEAa2ZZdW5mPiDbhGcSSekiGMBRFQecx5T18LbOpZA==',key_name='tempest-keypair-1585904709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b693354f7db4a0eb127bf2915dd342e',ramdisk_id='',reservation_id='r-qj832q6n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-256186984',owner_user_name='tempest-ServersTestJSON-256186984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='30cfe1ade59e4735affe15c43e86fec2',uuid=1593534a-7024-4f80-9d34-0d75101dc370,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9879530b-b141-43b3-a86c-e7136c43c4d4", "address": "fa:16:3e:b2:2a:50", "network": {"id": "029c3c2b-c7ec-4776-9c3c-b0ccd42caa15", "bridge": "br-int", "label": "tempest-ServersTestJSON-515571753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b693354f7db4a0eb127bf2915dd342e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9879530b-b1", "ovs_interfaceid": "9879530b-b141-43b3-a86c-e7136c43c4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.690 226109 DEBUG nova.network.os_vif_util [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Converting VIF {"id": "9879530b-b141-43b3-a86c-e7136c43c4d4", "address": "fa:16:3e:b2:2a:50", "network": {"id": "029c3c2b-c7ec-4776-9c3c-b0ccd42caa15", "bridge": "br-int", "label": "tempest-ServersTestJSON-515571753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b693354f7db4a0eb127bf2915dd342e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9879530b-b1", "ovs_interfaceid": "9879530b-b141-43b3-a86c-e7136c43c4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.690 226109 DEBUG nova.network.os_vif_util [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2a:50,bridge_name='br-int',has_traffic_filtering=True,id=9879530b-b141-43b3-a86c-e7136c43c4d4,network=Network(029c3c2b-c7ec-4776-9c3c-b0ccd42caa15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9879530b-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.690 226109 DEBUG os_vif [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2a:50,bridge_name='br-int',has_traffic_filtering=True,id=9879530b-b141-43b3-a86c-e7136c43c4d4,network=Network(029c3c2b-c7ec-4776-9c3c-b0ccd42caa15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9879530b-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.691 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.691 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.692 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.695 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.695 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9879530b-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.696 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9879530b-b1, col_values=(('external_ids', {'iface-id': '9879530b-b141-43b3-a86c-e7136c43c4d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:2a:50', 'vm-uuid': '1593534a-7024-4f80-9d34-0d75101dc370'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.697 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:55 compute-1 NetworkManager[49031]: <info>  [1765005235.6983] manager: (tap9879530b-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.701 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.703 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.704 226109 INFO os_vif [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2a:50,bridge_name='br-int',has_traffic_filtering=True,id=9879530b-b141-43b3-a86c-e7136c43c4d4,network=Network(029c3c2b-c7ec-4776-9c3c-b0ccd42caa15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9879530b-b1')
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.759 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.759 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.760 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] No VIF found with MAC fa:16:3e:b2:2a:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.761 226109 INFO nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Using config drive
Dec 06 07:13:55 compute-1 nova_compute[226101]: 2025-12-06 07:13:55.791 226109 DEBUG nova.storage.rbd_utils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] rbd image 1593534a-7024-4f80-9d34-0d75101dc370_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:55 compute-1 sudo[249681]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:56 compute-1 ceph-mon[81689]: pgmap v1668: 305 pgs: 305 active+clean; 176 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 314 KiB/s wr, 115 op/s
Dec 06 07:13:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3357547717' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1297350945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2222722586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:56 compute-1 nova_compute[226101]: 2025-12-06 07:13:56.150 226109 INFO nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Creating config drive at /var/lib/nova/instances/1593534a-7024-4f80-9d34-0d75101dc370/disk.config
Dec 06 07:13:56 compute-1 nova_compute[226101]: 2025-12-06 07:13:56.157 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1593534a-7024-4f80-9d34-0d75101dc370/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4uu61vwy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:56 compute-1 nova_compute[226101]: 2025-12-06 07:13:56.182 226109 DEBUG nova.network.neutron [req-3f31cc38-372e-4a17-a924-2362dd428eab req-17164c9d-ebbe-407b-8585-06f7b0b2fc4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Updated VIF entry in instance network info cache for port 9879530b-b141-43b3-a86c-e7136c43c4d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:13:56 compute-1 nova_compute[226101]: 2025-12-06 07:13:56.183 226109 DEBUG nova.network.neutron [req-3f31cc38-372e-4a17-a924-2362dd428eab req-17164c9d-ebbe-407b-8585-06f7b0b2fc4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Updating instance_info_cache with network_info: [{"id": "9879530b-b141-43b3-a86c-e7136c43c4d4", "address": "fa:16:3e:b2:2a:50", "network": {"id": "029c3c2b-c7ec-4776-9c3c-b0ccd42caa15", "bridge": "br-int", "label": "tempest-ServersTestJSON-515571753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b693354f7db4a0eb127bf2915dd342e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9879530b-b1", "ovs_interfaceid": "9879530b-b141-43b3-a86c-e7136c43c4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:56 compute-1 nova_compute[226101]: 2025-12-06 07:13:56.199 226109 DEBUG oslo_concurrency.lockutils [req-3f31cc38-372e-4a17-a924-2362dd428eab req-17164c9d-ebbe-407b-8585-06f7b0b2fc4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1593534a-7024-4f80-9d34-0d75101dc370" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:13:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:56.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:13:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:56.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:13:56 compute-1 nova_compute[226101]: 2025-12-06 07:13:56.286 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1593534a-7024-4f80-9d34-0d75101dc370/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4uu61vwy" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:56 compute-1 nova_compute[226101]: 2025-12-06 07:13:56.315 226109 DEBUG nova.storage.rbd_utils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] rbd image 1593534a-7024-4f80-9d34-0d75101dc370_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:56 compute-1 nova_compute[226101]: 2025-12-06 07:13:56.320 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1593534a-7024-4f80-9d34-0d75101dc370/disk.config 1593534a-7024-4f80-9d34-0d75101dc370_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:57 compute-1 ceph-mon[81689]: pgmap v1669: 305 pgs: 305 active+clean; 199 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 145 op/s
Dec 06 07:13:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:13:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:13:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:13:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:13:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:13:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:13:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2607273512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.143 226109 DEBUG oslo_concurrency.processutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1593534a-7024-4f80-9d34-0d75101dc370/disk.config 1593534a-7024-4f80-9d34-0d75101dc370_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.823s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.143 226109 INFO nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Deleting local config drive /var/lib/nova/instances/1593534a-7024-4f80-9d34-0d75101dc370/disk.config because it was imported into RBD.
Dec 06 07:13:57 compute-1 kernel: tap9879530b-b1: entered promiscuous mode
Dec 06 07:13:57 compute-1 NetworkManager[49031]: <info>  [1765005237.1955] manager: (tap9879530b-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Dec 06 07:13:57 compute-1 ovn_controller[130279]: 2025-12-06T07:13:57Z|00214|binding|INFO|Claiming lport 9879530b-b141-43b3-a86c-e7136c43c4d4 for this chassis.
Dec 06 07:13:57 compute-1 ovn_controller[130279]: 2025-12-06T07:13:57Z|00215|binding|INFO|9879530b-b141-43b3-a86c-e7136c43c4d4: Claiming fa:16:3e:b2:2a:50 10.100.0.9
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.196 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.210 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:2a:50 10.100.0.9'], port_security=['fa:16:3e:b2:2a:50 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1593534a-7024-4f80-9d34-0d75101dc370', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b693354f7db4a0eb127bf2915dd342e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16f66fbc-314e-46a4-a9b8-7348347e57a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=383d018e-0936-4290-addc-3609348cc799, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9879530b-b141-43b3-a86c-e7136c43c4d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.211 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9879530b-b141-43b3-a86c-e7136c43c4d4 in datapath 029c3c2b-c7ec-4776-9c3c-b0ccd42caa15 bound to our chassis
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.212 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 029c3c2b-c7ec-4776-9c3c-b0ccd42caa15
Dec 06 07:13:57 compute-1 systemd-udevd[249812]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.224 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4469982e-ce6a-444c-ae4a-9b89f495ab49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.225 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap029c3c2b-c1 in ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.227 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap029c3c2b-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.227 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa98e6b-34f6-4bf3-8e70-b55f01ddba56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 systemd-machined[190302]: New machine qemu-30-instance-00000042.
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.229 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f72f60df-9e21-495a-83f8-8877389d5246]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 systemd[1]: Started Virtual Machine qemu-30-instance-00000042.
Dec 06 07:13:57 compute-1 NetworkManager[49031]: <info>  [1765005237.2408] device (tap9879530b-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:13:57 compute-1 NetworkManager[49031]: <info>  [1765005237.2415] device (tap9879530b-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.240 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f87851-f553-405e-a5dd-f9f3e8071392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.266 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[57f834cd-1677-423d-a124-071d1d0ab784]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.274 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:57 compute-1 ovn_controller[130279]: 2025-12-06T07:13:57Z|00216|binding|INFO|Setting lport 9879530b-b141-43b3-a86c-e7136c43c4d4 ovn-installed in OVS
Dec 06 07:13:57 compute-1 ovn_controller[130279]: 2025-12-06T07:13:57Z|00217|binding|INFO|Setting lport 9879530b-b141-43b3-a86c-e7136c43c4d4 up in Southbound
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.282 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.293 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff687d4-c92c-45a5-8995-12fd33488021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 systemd-udevd[249816]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:13:57 compute-1 NetworkManager[49031]: <info>  [1765005237.2994] manager: (tap029c3c2b-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/98)
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.300 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[15a3f8af-6bbc-4bd5-806e-7bb6f26d695b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.326 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8e93fd2c-8bf0-4442-8858-ada7af97f311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.329 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c6275e-eb1d-4b3e-8ca9-f5bd88d6414c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 NetworkManager[49031]: <info>  [1765005237.3520] device (tap029c3c2b-c0): carrier: link connected
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.357 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[14fb72f1-74e3-44cf-abf8-8835c67246b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.373 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3e4580-9077-453d-8449-bd1200ec8743]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap029c3c2b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:a6:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551685, 'reachable_time': 44011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249845, 'error': None, 'target': 'ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.389 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f303d8b4-9555-445c-b27d-f178ae0c6dd9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:a6a4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 551685, 'tstamp': 551685}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249846, 'error': None, 'target': 'ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.406 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e06c346e-b174-4f9d-90ce-ef6784b096db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap029c3c2b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:a6:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551685, 'reachable_time': 44011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249847, 'error': None, 'target': 'ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.432 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0802203c-73e3-4d75-a6c1-56100592c2f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.487 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8386176d-97ee-4100-b766-bcfda3283e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.488 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap029c3c2b-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.488 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.489 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap029c3c2b-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.491 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:57 compute-1 NetworkManager[49031]: <info>  [1765005237.4917] manager: (tap029c3c2b-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Dec 06 07:13:57 compute-1 kernel: tap029c3c2b-c0: entered promiscuous mode
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.493 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.494 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap029c3c2b-c0, col_values=(('external_ids', {'iface-id': 'b8de4a3f-9741-430b-9a24-416bb86a915a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.495 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:57 compute-1 ovn_controller[130279]: 2025-12-06T07:13:57Z|00218|binding|INFO|Releasing lport b8de4a3f-9741-430b-9a24-416bb86a915a from this chassis (sb_readonly=0)
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.496 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.497 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/029c3c2b-c7ec-4776-9c3c-b0ccd42caa15.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/029c3c2b-c7ec-4776-9c3c-b0ccd42caa15.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.498 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3c41904b-ff3b-4f11-b3e8-9f4b039d8a3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.499 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/029c3c2b-c7ec-4776-9c3c-b0ccd42caa15.pid.haproxy
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 029c3c2b-c7ec-4776-9c3c-b0ccd42caa15
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:13:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:13:57.501 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15', 'env', 'PROCESS_TAG=haproxy-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/029c3c2b-c7ec-4776-9c3c-b0ccd42caa15.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.510 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.541 226109 DEBUG nova.compute.manager [req-4e6f3007-5de0-4d6b-b277-1b72c3c6b314 req-679e5fdd-8583-4e26-9b02-6cada18ffa2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Received event network-vif-plugged-9879530b-b141-43b3-a86c-e7136c43c4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.542 226109 DEBUG oslo_concurrency.lockutils [req-4e6f3007-5de0-4d6b-b277-1b72c3c6b314 req-679e5fdd-8583-4e26-9b02-6cada18ffa2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1593534a-7024-4f80-9d34-0d75101dc370-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.542 226109 DEBUG oslo_concurrency.lockutils [req-4e6f3007-5de0-4d6b-b277-1b72c3c6b314 req-679e5fdd-8583-4e26-9b02-6cada18ffa2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.542 226109 DEBUG oslo_concurrency.lockutils [req-4e6f3007-5de0-4d6b-b277-1b72c3c6b314 req-679e5fdd-8583-4e26-9b02-6cada18ffa2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.542 226109 DEBUG nova.compute.manager [req-4e6f3007-5de0-4d6b-b277-1b72c3c6b314 req-679e5fdd-8583-4e26-9b02-6cada18ffa2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Processing event network-vif-plugged-9879530b-b141-43b3-a86c-e7136c43c4d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.633 226109 DEBUG nova.compute.manager [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.634 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005237.6327012, 1593534a-7024-4f80-9d34-0d75101dc370 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.634 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] VM Started (Lifecycle Event)
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.649 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.653 226109 INFO nova.virt.libvirt.driver [-] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Instance spawned successfully.
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.653 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.665 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.668 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.697 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.698 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.698 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.699 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.699 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.700 226109 DEBUG nova.virt.libvirt.driver [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.704 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.705 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005237.6366494, 1593534a-7024-4f80-9d34-0d75101dc370 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.705 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] VM Paused (Lifecycle Event)
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.745 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.749 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005237.641005, 1593534a-7024-4f80-9d34-0d75101dc370 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.749 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] VM Resumed (Lifecycle Event)
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.778 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.781 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.787 226109 INFO nova.compute.manager [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Took 7.88 seconds to spawn the instance on the hypervisor.
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.788 226109 DEBUG nova.compute.manager [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.801 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.843 226109 INFO nova.compute.manager [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Took 8.78 seconds to build instance.
Dec 06 07:13:57 compute-1 nova_compute[226101]: 2025-12-06 07:13:57.868 226109 DEBUG oslo_concurrency.lockutils [None req-e30ed5c8-bb78-4473-9491-512816a28eb0 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:57 compute-1 podman[249921]: 2025-12-06 07:13:57.874196366 +0000 UTC m=+0.047556016 container create 1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:13:57 compute-1 systemd[1]: Started libpod-conmon-1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947.scope.
Dec 06 07:13:57 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:13:57 compute-1 podman[249921]: 2025-12-06 07:13:57.851969075 +0000 UTC m=+0.025328745 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:13:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11f1523f69a51c579d187319dff84e04cb2416b37f2ca899be50d9240237218/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:13:57 compute-1 podman[249921]: 2025-12-06 07:13:57.964682379 +0000 UTC m=+0.138042049 container init 1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:13:57 compute-1 podman[249921]: 2025-12-06 07:13:57.969620612 +0000 UTC m=+0.142980262 container start 1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:13:57 compute-1 neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15[249934]: [NOTICE]   (249938) : New worker (249940) forked
Dec 06 07:13:57 compute-1 neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15[249934]: [NOTICE]   (249938) : Loading success.
Dec 06 07:13:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e246 e246: 3 total, 3 up, 3 in
Dec 06 07:13:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:58.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:13:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:58.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/567437665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:59 compute-1 nova_compute[226101]: 2025-12-06 07:13:59.009 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:59 compute-1 nova_compute[226101]: 2025-12-06 07:13:59.722 226109 DEBUG nova.compute.manager [req-7041a6c9-5657-4a3d-b42c-ee846b4d9554 req-f03571cd-54e3-4984-acd1-3a9da8697bcb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Received event network-vif-plugged-9879530b-b141-43b3-a86c-e7136c43c4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:59 compute-1 nova_compute[226101]: 2025-12-06 07:13:59.723 226109 DEBUG oslo_concurrency.lockutils [req-7041a6c9-5657-4a3d-b42c-ee846b4d9554 req-f03571cd-54e3-4984-acd1-3a9da8697bcb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1593534a-7024-4f80-9d34-0d75101dc370-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:59 compute-1 nova_compute[226101]: 2025-12-06 07:13:59.723 226109 DEBUG oslo_concurrency.lockutils [req-7041a6c9-5657-4a3d-b42c-ee846b4d9554 req-f03571cd-54e3-4984-acd1-3a9da8697bcb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:59 compute-1 nova_compute[226101]: 2025-12-06 07:13:59.724 226109 DEBUG oslo_concurrency.lockutils [req-7041a6c9-5657-4a3d-b42c-ee846b4d9554 req-f03571cd-54e3-4984-acd1-3a9da8697bcb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:59 compute-1 nova_compute[226101]: 2025-12-06 07:13:59.724 226109 DEBUG nova.compute.manager [req-7041a6c9-5657-4a3d-b42c-ee846b4d9554 req-f03571cd-54e3-4984-acd1-3a9da8697bcb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] No waiting events found dispatching network-vif-plugged-9879530b-b141-43b3-a86c-e7136c43c4d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:59 compute-1 nova_compute[226101]: 2025-12-06 07:13:59.724 226109 WARNING nova.compute.manager [req-7041a6c9-5657-4a3d-b42c-ee846b4d9554 req-f03571cd-54e3-4984-acd1-3a9da8697bcb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Received unexpected event network-vif-plugged-9879530b-b141-43b3-a86c-e7136c43c4d4 for instance with vm_state active and task_state None.
Dec 06 07:13:59 compute-1 NetworkManager[49031]: <info>  [1765005239.8411] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Dec 06 07:13:59 compute-1 NetworkManager[49031]: <info>  [1765005239.8419] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Dec 06 07:13:59 compute-1 nova_compute[226101]: 2025-12-06 07:13:59.840 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:59 compute-1 ceph-mon[81689]: pgmap v1670: 305 pgs: 305 active+clean; 213 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 144 op/s
Dec 06 07:13:59 compute-1 ceph-mon[81689]: osdmap e246: 3 total, 3 up, 3 in
Dec 06 07:13:59 compute-1 nova_compute[226101]: 2025-12-06 07:13:59.953 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:59 compute-1 ovn_controller[130279]: 2025-12-06T07:13:59Z|00219|binding|INFO|Releasing lport b8de4a3f-9741-430b-9a24-416bb86a915a from this chassis (sb_readonly=0)
Dec 06 07:13:59 compute-1 nova_compute[226101]: 2025-12-06 07:13:59.964 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:00.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:00.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:00 compute-1 nova_compute[226101]: 2025-12-06 07:14:00.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:00 compute-1 nova_compute[226101]: 2025-12-06 07:14:00.697 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:01 compute-1 nova_compute[226101]: 2025-12-06 07:14:01.186 226109 DEBUG nova.compute.manager [req-a0504231-f694-499a-bf10-6e5cd1ee7ff8 req-4f6ab623-5ffd-4c76-b521-66278f15db4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Received event network-changed-9879530b-b141-43b3-a86c-e7136c43c4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:14:01 compute-1 nova_compute[226101]: 2025-12-06 07:14:01.186 226109 DEBUG nova.compute.manager [req-a0504231-f694-499a-bf10-6e5cd1ee7ff8 req-4f6ab623-5ffd-4c76-b521-66278f15db4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Refreshing instance network info cache due to event network-changed-9879530b-b141-43b3-a86c-e7136c43c4d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:14:01 compute-1 nova_compute[226101]: 2025-12-06 07:14:01.187 226109 DEBUG oslo_concurrency.lockutils [req-a0504231-f694-499a-bf10-6e5cd1ee7ff8 req-4f6ab623-5ffd-4c76-b521-66278f15db4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1593534a-7024-4f80-9d34-0d75101dc370" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:14:01 compute-1 nova_compute[226101]: 2025-12-06 07:14:01.187 226109 DEBUG oslo_concurrency.lockutils [req-a0504231-f694-499a-bf10-6e5cd1ee7ff8 req-4f6ab623-5ffd-4c76-b521-66278f15db4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1593534a-7024-4f80-9d34-0d75101dc370" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:14:01 compute-1 nova_compute[226101]: 2025-12-06 07:14:01.187 226109 DEBUG nova.network.neutron [req-a0504231-f694-499a-bf10-6e5cd1ee7ff8 req-4f6ab623-5ffd-4c76-b521-66278f15db4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Refreshing network info cache for port 9879530b-b141-43b3-a86c-e7136c43c4d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:14:01 compute-1 ceph-mon[81689]: pgmap v1672: 305 pgs: 305 active+clean; 213 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Dec 06 07:14:01 compute-1 nova_compute[226101]: 2025-12-06 07:14:01.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:01.631 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:01.632 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:01.632 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:02.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:02.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:02 compute-1 ceph-mon[81689]: pgmap v1673: 305 pgs: 305 active+clean; 195 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 200 op/s
Dec 06 07:14:02 compute-1 nova_compute[226101]: 2025-12-06 07:14:02.844 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:03 compute-1 nova_compute[226101]: 2025-12-06 07:14:03.305 226109 DEBUG nova.network.neutron [req-a0504231-f694-499a-bf10-6e5cd1ee7ff8 req-4f6ab623-5ffd-4c76-b521-66278f15db4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Updated VIF entry in instance network info cache for port 9879530b-b141-43b3-a86c-e7136c43c4d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:14:03 compute-1 nova_compute[226101]: 2025-12-06 07:14:03.306 226109 DEBUG nova.network.neutron [req-a0504231-f694-499a-bf10-6e5cd1ee7ff8 req-4f6ab623-5ffd-4c76-b521-66278f15db4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Updating instance_info_cache with network_info: [{"id": "9879530b-b141-43b3-a86c-e7136c43c4d4", "address": "fa:16:3e:b2:2a:50", "network": {"id": "029c3c2b-c7ec-4776-9c3c-b0ccd42caa15", "bridge": "br-int", "label": "tempest-ServersTestJSON-515571753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b693354f7db4a0eb127bf2915dd342e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9879530b-b1", "ovs_interfaceid": "9879530b-b141-43b3-a86c-e7136c43c4d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:14:03 compute-1 nova_compute[226101]: 2025-12-06 07:14:03.324 226109 DEBUG oslo_concurrency.lockutils [req-a0504231-f694-499a-bf10-6e5cd1ee7ff8 req-4f6ab623-5ffd-4c76-b521-66278f15db4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1593534a-7024-4f80-9d34-0d75101dc370" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:14:04 compute-1 nova_compute[226101]: 2025-12-06 07:14:04.012 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:04.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:04.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:05 compute-1 ceph-mon[81689]: pgmap v1674: 305 pgs: 305 active+clean; 153 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Dec 06 07:14:05 compute-1 nova_compute[226101]: 2025-12-06 07:14:05.699 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:06.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:06.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:06 compute-1 sudo[249950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:06 compute-1 sudo[249950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:06 compute-1 sudo[249950]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:06 compute-1 sudo[249975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:14:06 compute-1 sudo[249975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:06 compute-1 sudo[249975]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2158123452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1148274591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:14:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e247 e247: 3 total, 3 up, 3 in
Dec 06 07:14:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:08.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:08.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:08 compute-1 ceph-mon[81689]: pgmap v1675: 305 pgs: 305 active+clean; 107 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 887 KiB/s wr, 139 op/s
Dec 06 07:14:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:14:08 compute-1 ceph-mon[81689]: osdmap e247: 3 total, 3 up, 3 in
Dec 06 07:14:09 compute-1 nova_compute[226101]: 2025-12-06 07:14:09.014 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:10.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:14:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:10.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:14:10 compute-1 nova_compute[226101]: 2025-12-06 07:14:10.702 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:12 compute-1 podman[250000]: 2025-12-06 07:14:12.097266639 +0000 UTC m=+0.077801472 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 06 07:14:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:12.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:12.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:13 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:14:14 compute-1 nova_compute[226101]: 2025-12-06 07:14:14.015 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:14.128 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:14:14 compute-1 nova_compute[226101]: 2025-12-06 07:14:14.128 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:14.129 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:14:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:14.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:14.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:14 compute-1 nova_compute[226101]: 2025-12-06 07:14:14.807 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:15 compute-1 nova_compute[226101]: 2025-12-06 07:14:15.705 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 6.673435688s
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 6.673436165s
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.673705578s, txc = 0x55b554622f00
Dec 06 07:14:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:16.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:14:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:16.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 6.698106766s
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 6.698054790s
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 6.808459282s
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 6.807635307s
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 6.810213566s
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 6.808817387s
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.207939148s, txc = 0x55b553510600
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.207381725s, txc = 0x55b5535c5500
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.207115173s, txc = 0x55b552de0900
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.203171253s, txc = 0x55b552c94c00
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.037069798s, txc = 0x55b5535d0000
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.036703110s, txc = 0x55b552ce6600
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.036248684s, txc = 0x55b5535c4900
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.035786629s, txc = 0x55b5538f8f00
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.023149967s, txc = 0x55b5539c2300
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.022781849s, txc = 0x55b553a65b00
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.022284985s, txc = 0x55b552de0f00
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.022302628s, txc = 0x55b553a64300
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.021331787s, txc = 0x55b5526f4c00
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.023184299s, txc = 0x55b552b92900
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.962030888s, txc = 0x55b552d0e300
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.072136879s, txc = 0x55b5538f8600
Dec 06 07:14:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.073521137s, txc = 0x55b553511b00
Dec 06 07:14:17 compute-1 podman[250028]: 2025-12-06 07:14:17.068327737 +0000 UTC m=+0.047855864 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 06 07:14:17 compute-1 podman[250027]: 2025-12-06 07:14:17.070209208 +0000 UTC m=+0.055634854 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:14:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:17.131 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:17 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:14:17 compute-1 ceph-mon[81689]: pgmap v1677: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 151 op/s
Dec 06 07:14:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1032650525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:14:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1032650525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:14:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:14:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:18.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:14:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:18.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:19 compute-1 nova_compute[226101]: 2025-12-06 07:14:19.051 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:19 compute-1 nova_compute[226101]: 2025-12-06 07:14:19.930 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:20.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:20.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:20 compute-1 ceph-mon[81689]: pgmap v1678: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 144 op/s
Dec 06 07:14:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1544661985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:20 compute-1 ceph-mon[81689]: pgmap v1679: 305 pgs: 305 active+clean; 115 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 1.1 MiB/s wr, 68 op/s
Dec 06 07:14:20 compute-1 ceph-mon[81689]: pgmap v1680: 305 pgs: 305 active+clean; 115 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.1 MiB/s wr, 47 op/s
Dec 06 07:14:20 compute-1 ceph-mon[81689]: pgmap v1681: 305 pgs: 305 active+clean; 115 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.1 MiB/s wr, 31 op/s
Dec 06 07:14:20 compute-1 nova_compute[226101]: 2025-12-06 07:14:20.706 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:21 compute-1 ceph-mon[81689]: pgmap v1682: 305 pgs: 305 active+clean; 134 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.0 MiB/s wr, 27 op/s
Dec 06 07:14:21 compute-1 ceph-mon[81689]: pgmap v1683: 305 pgs: 305 active+clean; 134 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Dec 06 07:14:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:22.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:22.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:23 compute-1 ovn_controller[130279]: 2025-12-06T07:14:23Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b2:2a:50 10.100.0.9
Dec 06 07:14:23 compute-1 ovn_controller[130279]: 2025-12-06T07:14:23Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:2a:50 10.100.0.9
Dec 06 07:14:24 compute-1 nova_compute[226101]: 2025-12-06 07:14:24.054 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:24.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:24.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:24 compute-1 ceph-mon[81689]: pgmap v1684: 305 pgs: 305 active+clean; 161 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 4.1 MiB/s wr, 61 op/s
Dec 06 07:14:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1136458103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:25 compute-1 ceph-mon[81689]: pgmap v1685: 305 pgs: 305 active+clean; 161 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.2 MiB/s wr, 44 op/s
Dec 06 07:14:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2713055160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:25 compute-1 nova_compute[226101]: 2025-12-06 07:14:25.708 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:25 compute-1 sshd-session[250064]: Connection reset by authenticating user sshd 45.140.17.124 port 31124 [preauth]
Dec 06 07:14:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:26.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:14:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:26.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:14:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/838833154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:28 compute-1 sshd-session[250066]: Connection reset by authenticating user root 45.140.17.124 port 39986 [preauth]
Dec 06 07:14:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:28.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:28.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:28 compute-1 ceph-mon[81689]: pgmap v1686: 305 pgs: 305 active+clean; 190 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 4.3 MiB/s wr, 64 op/s
Dec 06 07:14:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2970115831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:29 compute-1 nova_compute[226101]: 2025-12-06 07:14:29.056 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:30.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:30.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:30 compute-1 ceph-mon[81689]: pgmap v1687: 305 pgs: 305 active+clean; 206 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 4.7 MiB/s wr, 93 op/s
Dec 06 07:14:30 compute-1 nova_compute[226101]: 2025-12-06 07:14:30.711 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:31 compute-1 ceph-mon[81689]: pgmap v1688: 305 pgs: 305 active+clean; 206 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Dec 06 07:14:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:32.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:14:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:32.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:14:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e248 e248: 3 total, 3 up, 3 in
Dec 06 07:14:33 compute-1 sshd-session[250068]: Invalid user tomcat from 45.140.17.124 port 39990
Dec 06 07:14:33 compute-1 sshd-session[250068]: Connection reset by invalid user tomcat 45.140.17.124 port 39990 [preauth]
Dec 06 07:14:34 compute-1 nova_compute[226101]: 2025-12-06 07:14:34.058 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:34 compute-1 ceph-mon[81689]: pgmap v1689: 305 pgs: 305 active+clean; 213 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 184 op/s
Dec 06 07:14:34 compute-1 ceph-mon[81689]: osdmap e248: 3 total, 3 up, 3 in
Dec 06 07:14:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:34.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:14:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:34.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:14:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e249 e249: 3 total, 3 up, 3 in
Dec 06 07:14:35 compute-1 ceph-mon[81689]: pgmap v1691: 305 pgs: 305 active+clean; 213 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.9 MiB/s wr, 175 op/s
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.411 226109 DEBUG oslo_concurrency.lockutils [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Acquiring lock "1593534a-7024-4f80-9d34-0d75101dc370" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.412 226109 DEBUG oslo_concurrency.lockutils [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.412 226109 DEBUG oslo_concurrency.lockutils [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Acquiring lock "1593534a-7024-4f80-9d34-0d75101dc370-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.412 226109 DEBUG oslo_concurrency.lockutils [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.412 226109 DEBUG oslo_concurrency.lockutils [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.413 226109 INFO nova.compute.manager [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Terminating instance
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.414 226109 DEBUG nova.compute.manager [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:14:35 compute-1 kernel: tap9879530b-b1 (unregistering): left promiscuous mode
Dec 06 07:14:35 compute-1 NetworkManager[49031]: <info>  [1765005275.4619] device (tap9879530b-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:14:35 compute-1 ovn_controller[130279]: 2025-12-06T07:14:35Z|00220|binding|INFO|Releasing lport 9879530b-b141-43b3-a86c-e7136c43c4d4 from this chassis (sb_readonly=0)
Dec 06 07:14:35 compute-1 ovn_controller[130279]: 2025-12-06T07:14:35Z|00221|binding|INFO|Setting lport 9879530b-b141-43b3-a86c-e7136c43c4d4 down in Southbound
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.472 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:35 compute-1 ovn_controller[130279]: 2025-12-06T07:14:35Z|00222|binding|INFO|Removing iface tap9879530b-b1 ovn-installed in OVS
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.475 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.479 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:2a:50 10.100.0.9'], port_security=['fa:16:3e:b2:2a:50 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1593534a-7024-4f80-9d34-0d75101dc370', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b693354f7db4a0eb127bf2915dd342e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '16f66fbc-314e-46a4-a9b8-7348347e57a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=383d018e-0936-4290-addc-3609348cc799, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9879530b-b141-43b3-a86c-e7136c43c4d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.480 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9879530b-b141-43b3-a86c-e7136c43c4d4 in datapath 029c3c2b-c7ec-4776-9c3c-b0ccd42caa15 unbound from our chassis
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.481 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 029c3c2b-c7ec-4776-9c3c-b0ccd42caa15, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.483 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2881cd9c-0fe8-4f6d-b03c-0ce310009008]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.483 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15 namespace which is not needed anymore
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.497 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:35 compute-1 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000042.scope: Deactivated successfully.
Dec 06 07:14:35 compute-1 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000042.scope: Consumed 14.631s CPU time.
Dec 06 07:14:35 compute-1 systemd-machined[190302]: Machine qemu-30-instance-00000042 terminated.
Dec 06 07:14:35 compute-1 neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15[249934]: [NOTICE]   (249938) : haproxy version is 2.8.14-c23fe91
Dec 06 07:14:35 compute-1 neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15[249934]: [NOTICE]   (249938) : path to executable is /usr/sbin/haproxy
Dec 06 07:14:35 compute-1 neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15[249934]: [WARNING]  (249938) : Exiting Master process...
Dec 06 07:14:35 compute-1 neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15[249934]: [ALERT]    (249938) : Current worker (249940) exited with code 143 (Terminated)
Dec 06 07:14:35 compute-1 neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15[249934]: [WARNING]  (249938) : All workers exited. Exiting... (0)
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.637 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:35 compute-1 systemd[1]: libpod-1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947.scope: Deactivated successfully.
Dec 06 07:14:35 compute-1 conmon[249934]: conmon 1681028a79b194b4234c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947.scope/container/memory.events
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.642 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:35 compute-1 podman[250099]: 2025-12-06 07:14:35.646692841 +0000 UTC m=+0.061291385 container died 1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.652 226109 INFO nova.virt.libvirt.driver [-] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Instance destroyed successfully.
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.653 226109 DEBUG nova.objects.instance [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lazy-loading 'resources' on Instance uuid 1593534a-7024-4f80-9d34-0d75101dc370 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.674 226109 DEBUG nova.virt.libvirt.vif [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:13:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1447106479',display_name='tempest-ServersTestJSON-server-1447106479',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1447106479',id=66,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB4LWpEx8TK4J8iUIdEoL+UGkMmaboPUbt0fSaRfu7BLndPZ6NyAQi2AuHQ0TUsU00lHX/vI+qkROsTetEv6jInEgEAa2ZZdW5mPiDbhGcSSekiGMBRFQecx5T18LbOpZA==',key_name='tempest-keypair-1585904709',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:13:57Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b693354f7db4a0eb127bf2915dd342e',ramdisk_id='',reservation_id='r-qj832q6n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-256186984',owner_user_name='tempest-ServersTestJSON-256186984-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:13:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='30cfe1ade59e4735affe15c43e86fec2',uuid=1593534a-7024-4f80-9d34-0d75101dc370,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9879530b-b141-43b3-a86c-e7136c43c4d4", "address": "fa:16:3e:b2:2a:50", "network": {"id": "029c3c2b-c7ec-4776-9c3c-b0ccd42caa15", "bridge": "br-int", "label": "tempest-ServersTestJSON-515571753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b693354f7db4a0eb127bf2915dd342e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9879530b-b1", "ovs_interfaceid": "9879530b-b141-43b3-a86c-e7136c43c4d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.675 226109 DEBUG nova.network.os_vif_util [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Converting VIF {"id": "9879530b-b141-43b3-a86c-e7136c43c4d4", "address": "fa:16:3e:b2:2a:50", "network": {"id": "029c3c2b-c7ec-4776-9c3c-b0ccd42caa15", "bridge": "br-int", "label": "tempest-ServersTestJSON-515571753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b693354f7db4a0eb127bf2915dd342e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9879530b-b1", "ovs_interfaceid": "9879530b-b141-43b3-a86c-e7136c43c4d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.676 226109 DEBUG nova.network.os_vif_util [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b2:2a:50,bridge_name='br-int',has_traffic_filtering=True,id=9879530b-b141-43b3-a86c-e7136c43c4d4,network=Network(029c3c2b-c7ec-4776-9c3c-b0ccd42caa15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9879530b-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.677 226109 DEBUG os_vif [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:2a:50,bridge_name='br-int',has_traffic_filtering=True,id=9879530b-b141-43b3-a86c-e7136c43c4d4,network=Network(029c3c2b-c7ec-4776-9c3c-b0ccd42caa15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9879530b-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.679 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.679 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9879530b-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.681 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-d11f1523f69a51c579d187319dff84e04cb2416b37f2ca899be50d9240237218-merged.mount: Deactivated successfully.
Dec 06 07:14:35 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947-userdata-shm.mount: Deactivated successfully.
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.684 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.687 226109 INFO os_vif [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:2a:50,bridge_name='br-int',has_traffic_filtering=True,id=9879530b-b141-43b3-a86c-e7136c43c4d4,network=Network(029c3c2b-c7ec-4776-9c3c-b0ccd42caa15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9879530b-b1')
Dec 06 07:14:35 compute-1 podman[250099]: 2025-12-06 07:14:35.69294176 +0000 UTC m=+0.107540304 container cleanup 1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:14:35 compute-1 systemd[1]: libpod-conmon-1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947.scope: Deactivated successfully.
Dec 06 07:14:35 compute-1 podman[250155]: 2025-12-06 07:14:35.758274855 +0000 UTC m=+0.043465105 container remove 1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.763 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0ef7cb-53ea-46a8-bc43-cc24f1dc02cf]: (4, ('Sat Dec  6 07:14:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15 (1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947)\n1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947\nSat Dec  6 07:14:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15 (1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947)\n1681028a79b194b4234c084407b590cdd3f3b9ef5fc19b7be35e7fef7235d947\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.765 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[13107915-e410-4fa0-a512-c8ce4965d32c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.767 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap029c3c2b-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:35 compute-1 kernel: tap029c3c2b-c0: left promiscuous mode
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.769 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.785 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c3ebe6-ae21-48af-a2f3-8f534e6f6871]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.806 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfb9eaa-9ebc-4084-a5eb-2e7252496211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.807 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[eb674d28-3dcf-46c1-989d-3b3286ee4fce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.822 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[342182e2-66f9-4e6b-b11f-35e1d9c229e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551679, 'reachable_time': 22363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250173, 'error': None, 'target': 'ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.825 226109 DEBUG nova.compute.manager [req-0fc072ba-1247-4d24-b5ba-fc40b375ba9d req-96747906-2394-45bd-8255-381d7a90f27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Received event network-vif-unplugged-9879530b-b141-43b3-a86c-e7136c43c4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.824 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-029c3c2b-c7ec-4776-9c3c-b0ccd42caa15 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.825 226109 DEBUG oslo_concurrency.lockutils [req-0fc072ba-1247-4d24-b5ba-fc40b375ba9d req-96747906-2394-45bd-8255-381d7a90f27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1593534a-7024-4f80-9d34-0d75101dc370-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:35 compute-1 systemd[1]: run-netns-ovnmeta\x2d029c3c2b\x2dc7ec\x2d4776\x2d9c3c\x2db0ccd42caa15.mount: Deactivated successfully.
Dec 06 07:14:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:35.825 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad4a709-99b6-48dc-92e7-1c7ff8a81b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.825 226109 DEBUG oslo_concurrency.lockutils [req-0fc072ba-1247-4d24-b5ba-fc40b375ba9d req-96747906-2394-45bd-8255-381d7a90f27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.826 226109 DEBUG oslo_concurrency.lockutils [req-0fc072ba-1247-4d24-b5ba-fc40b375ba9d req-96747906-2394-45bd-8255-381d7a90f27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.826 226109 DEBUG nova.compute.manager [req-0fc072ba-1247-4d24-b5ba-fc40b375ba9d req-96747906-2394-45bd-8255-381d7a90f27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] No waiting events found dispatching network-vif-unplugged-9879530b-b141-43b3-a86c-e7136c43c4d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:14:35 compute-1 nova_compute[226101]: 2025-12-06 07:14:35.826 226109 DEBUG nova.compute.manager [req-0fc072ba-1247-4d24-b5ba-fc40b375ba9d req-96747906-2394-45bd-8255-381d7a90f27e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Received event network-vif-unplugged-9879530b-b141-43b3-a86c-e7136c43c4d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:14:36 compute-1 nova_compute[226101]: 2025-12-06 07:14:36.054 226109 INFO nova.virt.libvirt.driver [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Deleting instance files /var/lib/nova/instances/1593534a-7024-4f80-9d34-0d75101dc370_del
Dec 06 07:14:36 compute-1 nova_compute[226101]: 2025-12-06 07:14:36.055 226109 INFO nova.virt.libvirt.driver [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Deletion of /var/lib/nova/instances/1593534a-7024-4f80-9d34-0d75101dc370_del complete
Dec 06 07:14:36 compute-1 sshd-session[250071]: Connection reset by authenticating user root 45.140.17.124 port 44346 [preauth]
Dec 06 07:14:36 compute-1 nova_compute[226101]: 2025-12-06 07:14:36.184 226109 INFO nova.compute.manager [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Took 0.77 seconds to destroy the instance on the hypervisor.
Dec 06 07:14:36 compute-1 nova_compute[226101]: 2025-12-06 07:14:36.184 226109 DEBUG oslo.service.loopingcall [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:14:36 compute-1 nova_compute[226101]: 2025-12-06 07:14:36.184 226109 DEBUG nova.compute.manager [-] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:14:36 compute-1 nova_compute[226101]: 2025-12-06 07:14:36.185 226109 DEBUG nova.network.neutron [-] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:14:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:36.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:36 compute-1 ceph-mon[81689]: osdmap e249: 3 total, 3 up, 3 in
Dec 06 07:14:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:36.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e250 e250: 3 total, 3 up, 3 in
Dec 06 07:14:37 compute-1 ceph-mon[81689]: pgmap v1693: 305 pgs: 305 active+clean; 231 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 973 KiB/s wr, 209 op/s
Dec 06 07:14:37 compute-1 ceph-mon[81689]: osdmap e250: 3 total, 3 up, 3 in
Dec 06 07:14:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e251 e251: 3 total, 3 up, 3 in
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.564 226109 DEBUG nova.network.neutron [-] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.581 226109 INFO nova.compute.manager [-] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Took 1.40 seconds to deallocate network for instance.
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.624 226109 DEBUG oslo_concurrency.lockutils [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.624 226109 DEBUG oslo_concurrency.lockutils [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.664 226109 DEBUG nova.compute.manager [req-04149e8e-5356-47bf-ba6b-8fe1020c9c0c req-fb86bc41-4355-4ccd-8581-4f51cc1e26f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Received event network-vif-deleted-9879530b-b141-43b3-a86c-e7136c43c4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.680 226109 DEBUG oslo_concurrency.processutils [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.926 226109 DEBUG nova.compute.manager [req-74ba6683-d8f4-4a6f-8bfd-4c06366f5e8a req-78977113-6e65-4d27-92c0-818eff0b4fab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Received event network-vif-plugged-9879530b-b141-43b3-a86c-e7136c43c4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.926 226109 DEBUG oslo_concurrency.lockutils [req-74ba6683-d8f4-4a6f-8bfd-4c06366f5e8a req-78977113-6e65-4d27-92c0-818eff0b4fab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1593534a-7024-4f80-9d34-0d75101dc370-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.927 226109 DEBUG oslo_concurrency.lockutils [req-74ba6683-d8f4-4a6f-8bfd-4c06366f5e8a req-78977113-6e65-4d27-92c0-818eff0b4fab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.927 226109 DEBUG oslo_concurrency.lockutils [req-74ba6683-d8f4-4a6f-8bfd-4c06366f5e8a req-78977113-6e65-4d27-92c0-818eff0b4fab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.927 226109 DEBUG nova.compute.manager [req-74ba6683-d8f4-4a6f-8bfd-4c06366f5e8a req-78977113-6e65-4d27-92c0-818eff0b4fab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] No waiting events found dispatching network-vif-plugged-9879530b-b141-43b3-a86c-e7136c43c4d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:14:37 compute-1 nova_compute[226101]: 2025-12-06 07:14:37.927 226109 WARNING nova.compute.manager [req-74ba6683-d8f4-4a6f-8bfd-4c06366f5e8a req-78977113-6e65-4d27-92c0-818eff0b4fab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Received unexpected event network-vif-plugged-9879530b-b141-43b3-a86c-e7136c43c4d4 for instance with vm_state deleted and task_state None.
Dec 06 07:14:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:14:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3488248931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:38 compute-1 nova_compute[226101]: 2025-12-06 07:14:38.119 226109 DEBUG oslo_concurrency.processutils [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:38 compute-1 nova_compute[226101]: 2025-12-06 07:14:38.125 226109 DEBUG nova.compute.provider_tree [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:14:38 compute-1 nova_compute[226101]: 2025-12-06 07:14:38.142 226109 DEBUG nova.scheduler.client.report [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:14:38 compute-1 nova_compute[226101]: 2025-12-06 07:14:38.171 226109 DEBUG oslo_concurrency.lockutils [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:38 compute-1 nova_compute[226101]: 2025-12-06 07:14:38.211 226109 INFO nova.scheduler.client.report [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Deleted allocations for instance 1593534a-7024-4f80-9d34-0d75101dc370
Dec 06 07:14:38 compute-1 nova_compute[226101]: 2025-12-06 07:14:38.293 226109 DEBUG oslo_concurrency.lockutils [None req-322f8dbc-3660-4793-b36d-5b52e8466fbd 30cfe1ade59e4735affe15c43e86fec2 9b693354f7db4a0eb127bf2915dd342e - - default default] Lock "1593534a-7024-4f80-9d34-0d75101dc370" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:38.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:38.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:38 compute-1 ceph-mon[81689]: osdmap e251: 3 total, 3 up, 3 in
Dec 06 07:14:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3488248931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:39 compute-1 nova_compute[226101]: 2025-12-06 07:14:39.059 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:39 compute-1 ceph-mon[81689]: pgmap v1696: 305 pgs: 305 active+clean; 216 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 4.3 MiB/s wr, 308 op/s
Dec 06 07:14:40 compute-1 sshd-session[250175]: Connection reset by authenticating user root 45.140.17.124 port 44364 [preauth]
Dec 06 07:14:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:40.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:40.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:40 compute-1 nova_compute[226101]: 2025-12-06 07:14:40.682 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:42.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:42.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:42 compute-1 ceph-mon[81689]: pgmap v1697: 305 pgs: 305 active+clean; 216 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 257 op/s
Dec 06 07:14:43 compute-1 nova_compute[226101]: 2025-12-06 07:14:43.060 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:43 compute-1 nova_compute[226101]: 2025-12-06 07:14:43.286 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:43 compute-1 podman[250199]: 2025-12-06 07:14:43.308198226 +0000 UTC m=+0.291881443 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:14:44 compute-1 nova_compute[226101]: 2025-12-06 07:14:44.060 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:44.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:44.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:44 compute-1 ceph-mon[81689]: pgmap v1698: 305 pgs: 305 active+clean; 115 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.3 MiB/s wr, 250 op/s
Dec 06 07:14:45 compute-1 nova_compute[226101]: 2025-12-06 07:14:45.684 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:45 compute-1 ceph-mon[81689]: pgmap v1699: 305 pgs: 305 active+clean; 115 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Dec 06 07:14:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:46.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:46.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 e252: 3 total, 3 up, 3 in
Dec 06 07:14:46 compute-1 ceph-mon[81689]: pgmap v1700: 305 pgs: 305 active+clean; 120 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 193 op/s
Dec 06 07:14:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2417296555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:46 compute-1 ceph-mon[81689]: osdmap e252: 3 total, 3 up, 3 in
Dec 06 07:14:48 compute-1 podman[250227]: 2025-12-06 07:14:48.07433455 +0000 UTC m=+0.049152159 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:14:48 compute-1 podman[250226]: 2025-12-06 07:14:48.075160502 +0000 UTC m=+0.054668917 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:14:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:48.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:48.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:49 compute-1 ceph-mon[81689]: pgmap v1702: 305 pgs: 305 active+clean; 118 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 2.5 MiB/s wr, 152 op/s
Dec 06 07:14:49 compute-1 nova_compute[226101]: 2025-12-06 07:14:49.064 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:50.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:50.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:50 compute-1 nova_compute[226101]: 2025-12-06 07:14:50.650 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005275.6487138, 1593534a-7024-4f80-9d34-0d75101dc370 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:14:50 compute-1 nova_compute[226101]: 2025-12-06 07:14:50.650 226109 INFO nova.compute.manager [-] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] VM Stopped (Lifecycle Event)
Dec 06 07:14:50 compute-1 nova_compute[226101]: 2025-12-06 07:14:50.669 226109 DEBUG nova.compute.manager [None req-7dc7c5ff-0346-4fe6-b9e4-062cb8796ebe - - - - - -] [instance: 1593534a-7024-4f80-9d34-0d75101dc370] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:14:50 compute-1 nova_compute[226101]: 2025-12-06 07:14:50.686 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:51 compute-1 ceph-mon[81689]: pgmap v1703: 305 pgs: 305 active+clean; 118 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 2.5 MiB/s wr, 152 op/s
Dec 06 07:14:51 compute-1 nova_compute[226101]: 2025-12-06 07:14:51.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:52.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:52.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:52 compute-1 nova_compute[226101]: 2025-12-06 07:14:52.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:52 compute-1 nova_compute[226101]: 2025-12-06 07:14:52.612 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:52 compute-1 nova_compute[226101]: 2025-12-06 07:14:52.612 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:52 compute-1 nova_compute[226101]: 2025-12-06 07:14:52.613 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:52 compute-1 nova_compute[226101]: 2025-12-06 07:14:52.613 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:14:52 compute-1 nova_compute[226101]: 2025-12-06 07:14:52.613 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:14:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3053445577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.080 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.239 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.240 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4716MB free_disk=20.942859649658203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.241 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.241 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.335 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.335 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.362 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:14:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2002282999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.943 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.949 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.966 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.992 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:14:53 compute-1 nova_compute[226101]: 2025-12-06 07:14:53.993 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:53 compute-1 ceph-mon[81689]: pgmap v1704: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Dec 06 07:14:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3053445577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1137790135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:54 compute-1 nova_compute[226101]: 2025-12-06 07:14:54.064 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:54.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:54.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:54 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:54.473 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:14:54 compute-1 nova_compute[226101]: 2025-12-06 07:14:54.474 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:54 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:54.474 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:14:54 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:14:54.475 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:54 compute-1 nova_compute[226101]: 2025-12-06 07:14:54.993 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:54 compute-1 nova_compute[226101]: 2025-12-06 07:14:54.994 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:54 compute-1 nova_compute[226101]: 2025-12-06 07:14:54.995 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:54 compute-1 nova_compute[226101]: 2025-12-06 07:14:54.995 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:14:55 compute-1 ceph-mon[81689]: pgmap v1705: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Dec 06 07:14:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3605191494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2002282999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:55 compute-1 nova_compute[226101]: 2025-12-06 07:14:55.584 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:55 compute-1 nova_compute[226101]: 2025-12-06 07:14:55.585 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:55 compute-1 nova_compute[226101]: 2025-12-06 07:14:55.629 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:55 compute-1 nova_compute[226101]: 2025-12-06 07:14:55.629 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:14:55 compute-1 nova_compute[226101]: 2025-12-06 07:14:55.629 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:14:55 compute-1 nova_compute[226101]: 2025-12-06 07:14:55.688 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:55 compute-1 nova_compute[226101]: 2025-12-06 07:14:55.705 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:14:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:56.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:56.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:56 compute-1 ceph-mon[81689]: pgmap v1706: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 209 KiB/s rd, 1.0 MiB/s wr, 50 op/s
Dec 06 07:14:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:14:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:58.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:14:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:14:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:58.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2536976784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2670706817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:59 compute-1 nova_compute[226101]: 2025-12-06 07:14:59.066 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:59 compute-1 ceph-mon[81689]: pgmap v1707: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 199 KiB/s wr, 26 op/s
Dec 06 07:14:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2433529301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2565456730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:00.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:00.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:00 compute-1 nova_compute[226101]: 2025-12-06 07:15:00.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:00 compute-1 nova_compute[226101]: 2025-12-06 07:15:00.690 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:01.632 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:01.632 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:01.633 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:01 compute-1 ceph-mon[81689]: pgmap v1708: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 54 KiB/s wr, 5 op/s
Dec 06 07:15:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:02.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:02.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/430836661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/305077607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:03 compute-1 nova_compute[226101]: 2025-12-06 07:15:03.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:04 compute-1 nova_compute[226101]: 2025-12-06 07:15:04.068 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:04.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:04.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:04 compute-1 ceph-mon[81689]: pgmap v1709: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 991 KiB/s rd, 54 KiB/s wr, 44 op/s
Dec 06 07:15:05 compute-1 nova_compute[226101]: 2025-12-06 07:15:05.693 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:05 compute-1 ceph-mgr[82049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 07:15:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:06.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:06.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:06 compute-1 sudo[250310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:06 compute-1 sudo[250310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:06 compute-1 sudo[250310]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:06 compute-1 sudo[250335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:15:06 compute-1 sudo[250335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:06 compute-1 sudo[250335]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:06 compute-1 sudo[250360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:06 compute-1 sudo[250360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:06 compute-1 sudo[250360]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:06 compute-1 ceph-mon[81689]: pgmap v1710: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 952 KiB/s rd, 13 KiB/s wr, 39 op/s
Dec 06 07:15:06 compute-1 sudo[250385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:15:06 compute-1 sudo[250385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:07 compute-1 ceph-mon[81689]: pgmap v1711: 305 pgs: 305 active+clean; 171 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 86 op/s
Dec 06 07:15:07 compute-1 sudo[250385]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:08.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:08.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:15:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:15:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:15:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:15:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:15:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:15:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:15:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1892722272' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:09 compute-1 nova_compute[226101]: 2025-12-06 07:15:09.070 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:10 compute-1 ceph-mon[81689]: pgmap v1712: 305 pgs: 305 active+clean; 213 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Dec 06 07:15:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4028038919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1935116714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2184190138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:15:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2184190138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:15:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1753630511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:10.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:10.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:10 compute-1 nova_compute[226101]: 2025-12-06 07:15:10.695 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:12 compute-1 ceph-mon[81689]: pgmap v1713: 305 pgs: 305 active+clean; 213 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 114 op/s
Dec 06 07:15:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:12.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:12.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:13 compute-1 ceph-mon[81689]: pgmap v1714: 305 pgs: 305 active+clean; 213 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 132 op/s
Dec 06 07:15:14 compute-1 nova_compute[226101]: 2025-12-06 07:15:14.071 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:14 compute-1 podman[250440]: 2025-12-06 07:15:14.102351575 +0000 UTC m=+0.087120683 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 06 07:15:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:14.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:14.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:15 compute-1 nova_compute[226101]: 2025-12-06 07:15:15.698 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:16 compute-1 sudo[250466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:16 compute-1 sudo[250466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:16 compute-1 sudo[250466]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:16 compute-1 sudo[250491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:15:16 compute-1 sudo[250491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:16 compute-1 sudo[250491]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:16.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:16.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:16 compute-1 ceph-mon[81689]: pgmap v1715: 305 pgs: 305 active+clean; 213 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.5 MiB/s wr, 92 op/s
Dec 06 07:15:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:15:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:18.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:18.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:19 compute-1 podman[250516]: 2025-12-06 07:15:19.070686267 +0000 UTC m=+0.055299634 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:15:19 compute-1 nova_compute[226101]: 2025-12-06 07:15:19.073 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:19 compute-1 podman[250517]: 2025-12-06 07:15:19.090184894 +0000 UTC m=+0.059414646 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:15:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:20.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:20.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:20 compute-1 ceph-mon[81689]: pgmap v1716: 305 pgs: 305 active+clean; 213 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Dec 06 07:15:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:15:20 compute-1 nova_compute[226101]: 2025-12-06 07:15:20.700 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:22 compute-1 ceph-mon[81689]: pgmap v1717: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.3 MiB/s wr, 231 op/s
Dec 06 07:15:22 compute-1 ceph-mon[81689]: pgmap v1718: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 39 KiB/s wr, 203 op/s
Dec 06 07:15:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:22.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:22.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:24 compute-1 nova_compute[226101]: 2025-12-06 07:15:24.074 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:24 compute-1 ceph-mon[81689]: pgmap v1719: 305 pgs: 305 active+clean; 216 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 65 KiB/s wr, 206 op/s
Dec 06 07:15:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:24.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:24.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:25 compute-1 nova_compute[226101]: 2025-12-06 07:15:25.701 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:26.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:27 compute-1 ceph-mon[81689]: pgmap v1720: 305 pgs: 305 active+clean; 216 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 65 KiB/s wr, 189 op/s
Dec 06 07:15:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:28.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:28.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:29 compute-1 nova_compute[226101]: 2025-12-06 07:15:29.076 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:30.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:30.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:30 compute-1 nova_compute[226101]: 2025-12-06 07:15:30.703 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:31 compute-1 ceph-mon[81689]: pgmap v1721: 305 pgs: 305 active+clean; 216 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 65 KiB/s wr, 193 op/s
Dec 06 07:15:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2013365150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:31 compute-1 ceph-mon[81689]: pgmap v1722: 305 pgs: 305 active+clean; 218 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 193 KiB/s wr, 84 op/s
Dec 06 07:15:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4084160538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:32.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:32.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:34 compute-1 nova_compute[226101]: 2025-12-06 07:15:34.078 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:34 compute-1 ovn_controller[130279]: 2025-12-06T07:15:34Z|00223|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:15:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:34.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:34.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:35 compute-1 ceph-mon[81689]: pgmap v1723: 305 pgs: 305 active+clean; 218 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 178 KiB/s wr, 16 op/s
Dec 06 07:15:35 compute-1 nova_compute[226101]: 2025-12-06 07:15:35.706 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:36.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:36.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:36 compute-1 ceph-mon[81689]: pgmap v1724: 305 pgs: 305 active+clean; 242 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 155 KiB/s rd, 2.9 MiB/s wr, 54 op/s
Dec 06 07:15:36 compute-1 ceph-mon[81689]: pgmap v1725: 305 pgs: 305 active+clean; 242 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 152 KiB/s rd, 2.9 MiB/s wr, 51 op/s
Dec 06 07:15:37 compute-1 nova_compute[226101]: 2025-12-06 07:15:37.902 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:37 compute-1 nova_compute[226101]: 2025-12-06 07:15:37.902 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:37 compute-1 nova_compute[226101]: 2025-12-06 07:15:37.917 226109 DEBUG nova.compute.manager [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:15:37 compute-1 nova_compute[226101]: 2025-12-06 07:15:37.994 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:37 compute-1 nova_compute[226101]: 2025-12-06 07:15:37.994 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:38 compute-1 nova_compute[226101]: 2025-12-06 07:15:38.001 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:15:38 compute-1 nova_compute[226101]: 2025-12-06 07:15:38.002 226109 INFO nova.compute.claims [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:15:38 compute-1 nova_compute[226101]: 2025-12-06 07:15:38.137 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:38 compute-1 nova_compute[226101]: 2025-12-06 07:15:38.403 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:38 compute-1 nova_compute[226101]: 2025-12-06 07:15:38.404 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:38.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:38 compute-1 nova_compute[226101]: 2025-12-06 07:15:38.420 226109 DEBUG nova.compute.manager [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:15:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:38.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:38 compute-1 nova_compute[226101]: 2025-12-06 07:15:38.491 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:38 compute-1 ceph-mon[81689]: pgmap v1726: 305 pgs: 305 active+clean; 252 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 90 op/s
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.079 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:15:39 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1564804685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.324 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.331 226109 DEBUG nova.compute.provider_tree [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.353 226109 DEBUG nova.scheduler.client.report [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.376 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.376 226109 DEBUG nova.compute.manager [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.379 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.384 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.385 226109 INFO nova.compute.claims [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.453 226109 DEBUG nova.compute.manager [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.453 226109 DEBUG nova.network.neutron [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.472 226109 INFO nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.496 226109 DEBUG nova.compute.manager [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.573 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.598 226109 DEBUG nova.compute.manager [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.600 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.600 226109 INFO nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Creating image(s)
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.625 226109 DEBUG nova.storage.rbd_utils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.667 226109 DEBUG nova.storage.rbd_utils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.698 226109 DEBUG nova.storage.rbd_utils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.704 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.730 226109 DEBUG nova.policy [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '67604a2c995248f8931119287d416e1c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ba80f0b33d04d6d9508bc18e9b1914b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.765 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.766 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.766 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.767 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.880 226109 DEBUG nova.storage.rbd_utils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:39 compute-1 nova_compute[226101]: 2025-12-06 07:15:39.887 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:40 compute-1 ceph-mon[81689]: pgmap v1727: 305 pgs: 305 active+clean; 258 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 116 op/s
Dec 06 07:15:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:15:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3250951825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.075 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.082 226109 DEBUG nova.compute.provider_tree [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.098 226109 DEBUG nova.scheduler.client.report [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.119 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.121 226109 DEBUG nova.compute.manager [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.161 226109 DEBUG nova.compute.manager [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.162 226109 DEBUG nova.network.neutron [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.179 226109 INFO nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.200 226109 DEBUG nova.compute.manager [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.321 226109 DEBUG nova.compute.manager [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.323 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.323 226109 INFO nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Creating image(s)
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.346 226109 DEBUG nova.storage.rbd_utils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 12903f3d-051e-4858-aee4-dea9692252ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.370 226109 DEBUG nova.storage.rbd_utils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 12903f3d-051e-4858-aee4-dea9692252ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.400 226109 DEBUG nova.storage.rbd_utils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 12903f3d-051e-4858-aee4-dea9692252ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.404 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:40.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:40.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.471 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.472 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.473 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.474 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.503 226109 DEBUG nova.storage.rbd_utils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 12903f3d-051e-4858-aee4-dea9692252ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.506 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 12903f3d-051e-4858-aee4-dea9692252ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.545 226109 DEBUG nova.policy [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '06f5b46553b24b39a1493d96ec4e503e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35df5125c2cf4d29a6b975951af14910', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.708 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:40 compute-1 nova_compute[226101]: 2025-12-06 07:15:40.921 226109 DEBUG nova.network.neutron [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Successfully created port: f05b6f8b-6a82-4d64-a5c6-057b8f9368ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:15:41 compute-1 nova_compute[226101]: 2025-12-06 07:15:41.257 226109 DEBUG nova.network.neutron [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Successfully created port: 9edcc777-b052-41d1-a28c-dbfe02a70e3b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:15:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1564804685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:41 compute-1 ceph-mon[81689]: pgmap v1728: 305 pgs: 305 active+clean; 258 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 107 op/s
Dec 06 07:15:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3250951825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:41 compute-1 nova_compute[226101]: 2025-12-06 07:15:41.834 226109 DEBUG nova.network.neutron [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Successfully updated port: f05b6f8b-6a82-4d64-a5c6-057b8f9368ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:15:41 compute-1 nova_compute[226101]: 2025-12-06 07:15:41.855 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:41 compute-1 nova_compute[226101]: 2025-12-06 07:15:41.855 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquired lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:41 compute-1 nova_compute[226101]: 2025-12-06 07:15:41.855 226109 DEBUG nova.network.neutron [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:15:41 compute-1 nova_compute[226101]: 2025-12-06 07:15:41.952 226109 DEBUG nova.compute.manager [req-f66433ff-25be-4a35-a477-a3c02657d2c3 req-eb73ba87-6040-403a-822d-614cd8bccf2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-changed-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:41 compute-1 nova_compute[226101]: 2025-12-06 07:15:41.953 226109 DEBUG nova.compute.manager [req-f66433ff-25be-4a35-a477-a3c02657d2c3 req-eb73ba87-6040-403a-822d-614cd8bccf2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Refreshing instance network info cache due to event network-changed-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:15:41 compute-1 nova_compute[226101]: 2025-12-06 07:15:41.953 226109 DEBUG oslo_concurrency.lockutils [req-f66433ff-25be-4a35-a477-a3c02657d2c3 req-eb73ba87-6040-403a-822d-614cd8bccf2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:41 compute-1 nova_compute[226101]: 2025-12-06 07:15:41.955 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 12903f3d-051e-4858-aee4-dea9692252ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.040 226109 DEBUG nova.storage.rbd_utils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] resizing rbd image 12903f3d-051e-4858-aee4-dea9692252ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
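
[note] The import logged at 07:15:40 copied the flat base file into the vms pool as a format-2 RBD image, and the resize above grows it to the flavor's 1 GiB root disk; nova performs the resize through the librbd Python bindings rather than the CLI. An equivalent CLI sketch, assuming the client.openstack cephx key and /etc/ceph/ceph.conf from the log:

    import subprocess

    POOL = "vms"
    IMG = "12903f3d-051e-4858-aee4-dea9692252ae_disk"  # name from the log
    BASE = "/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef"
    AUTH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Verbatim import from the CMD logged above, then grow the image to
    # 1 GiB (1073741824 bytes), matching the rbd_utils resize target.
    subprocess.check_call(["rbd", "import", "--pool", POOL, BASE, IMG,
                           "--image-format=2", *AUTH])
    subprocess.check_call(["rbd", "resize", "--pool", POOL, "--size", "1G",
                           IMG, *AUTH])
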
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.156 226109 DEBUG nova.objects.instance [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'migration_context' on Instance uuid 12903f3d-051e-4858-aee4-dea9692252ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.168 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.169 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Ensure instance console log exists: /var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.170 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.170 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.170 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:42.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:42.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
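
[note] The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and .102 recurring every two seconds are load-balancer-style health probes against the RGW beast frontend, not user traffic. A sketch of the same probe; the endpoint host and port here are assumptions, since the log records only the probing clients:

    import http.client

    # Assumed RGW endpoint; substitute the deployment's actual beast host:port.
    conn = http.client.HTTPConnection("192.168.122.101", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the probes above all receive 200
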
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.531 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.567 226109 DEBUG nova.network.neutron [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.572 226109 DEBUG nova.network.neutron [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Successfully updated port: 9edcc777-b052-41d1-a28c-dbfe02a70e3b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.619 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.620 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.620 226109 DEBUG nova.network.neutron [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.627 226109 DEBUG nova.storage.rbd_utils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] resizing rbd image 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.675 226109 DEBUG nova.compute.manager [req-bec0b737-510d-40c1-bc2f-be61e4ed862a req-a57b515a-f2bf-497b-b4e7-8b4b967a45d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-changed-9edcc777-b052-41d1-a28c-dbfe02a70e3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.676 226109 DEBUG nova.compute.manager [req-bec0b737-510d-40c1-bc2f-be61e4ed862a req-a57b515a-f2bf-497b-b4e7-8b4b967a45d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Refreshing instance network info cache due to event network-changed-9edcc777-b052-41d1-a28c-dbfe02a70e3b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.676 226109 DEBUG oslo_concurrency.lockutils [req-bec0b737-510d-40c1-bc2f-be61e4ed862a req-a57b515a-f2bf-497b-b4e7-8b4b967a45d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:42 compute-1 nova_compute[226101]: 2025-12-06 07:15:42.822 226109 DEBUG nova.network.neutron [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:15:43 compute-1 nova_compute[226101]: 2025-12-06 07:15:43.815 226109 DEBUG nova.network.neutron [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Updating instance_info_cache with network_info: [{"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:43 compute-1 nova_compute[226101]: 2025-12-06 07:15:43.839 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Releasing lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:43 compute-1 nova_compute[226101]: 2025-12-06 07:15:43.840 226109 DEBUG nova.compute.manager [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Instance network_info: |[{"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:15:43 compute-1 nova_compute[226101]: 2025-12-06 07:15:43.841 226109 DEBUG oslo_concurrency.lockutils [req-f66433ff-25be-4a35-a477-a3c02657d2c3 req-eb73ba87-6040-403a-822d-614cd8bccf2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:43 compute-1 nova_compute[226101]: 2025-12-06 07:15:43.841 226109 DEBUG nova.network.neutron [req-f66433ff-25be-4a35-a477-a3c02657d2c3 req-eb73ba87-6040-403a-822d-614cd8bccf2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Refreshing network info cache for port f05b6f8b-6a82-4d64-a5c6-057b8f9368ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.082 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:44.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:44.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.528 226109 DEBUG nova.network.neutron [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Updating instance_info_cache with network_info: [{"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.563 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.564 226109 DEBUG nova.compute.manager [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Instance network_info: |[{"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.564 226109 DEBUG oslo_concurrency.lockutils [req-bec0b737-510d-40c1-bc2f-be61e4ed862a req-a57b515a-f2bf-497b-b4e7-8b4b967a45d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.565 226109 DEBUG nova.network.neutron [req-bec0b737-510d-40c1-bc2f-be61e4ed862a req-a57b515a-f2bf-497b-b4e7-8b4b967a45d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Refreshing network info cache for port 9edcc777-b052-41d1-a28c-dbfe02a70e3b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.568 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Start _get_guest_xml network_info=[{"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.572 226109 WARNING nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.577 226109 DEBUG nova.virt.libvirt.host [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.578 226109 DEBUG nova.virt.libvirt.host [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.580 226109 DEBUG nova.virt.libvirt.host [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.580 226109 DEBUG nova.virt.libvirt.host [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.582 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.582 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.582 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.583 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.583 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.583 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.583 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.583 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.584 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.584 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.584 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.584 226109 DEBUG nova.virt.hardware [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
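
[note] The topology walk above (limits of 65536 each, 1 vCPU, no flavor or image preferences) reduces to enumerating the (sockets, cores, threads) factorizations of the vCPU count. A simplified sketch of that selection, not nova's literal code:

    from itertools import product

    def possible_topologies(vcpus, limit=65536):
        # Every triple whose product is exactly vcpus and whose members stay
        # within the limit is a candidate guest CPU topology.
        r = range(1, min(vcpus, limit) + 1)
        return [(s, c, t) for s, c, t in product(r, r, r) if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single topology logged above
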
Dec 06 07:15:44 compute-1 nova_compute[226101]: 2025-12-06 07:15:44.587 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
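
[note] nova shells out to ceph mon dump here to learn the monitor addresses it will embed in the guest's RBD disk definition. The same call, parsed, assuming the cephx identity from the log; the exact key layout of the JSON varies a little across Ceph releases:

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    for mon in json.loads(out)["mons"]:
        # public_addr may instead appear as public_addrs on newer releases
        print(mon["name"], mon.get("public_addr") or mon.get("public_addrs"))
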
Dec 06 07:15:45 compute-1 podman[250924]: 2025-12-06 07:15:45.099197776 +0000 UTC m=+0.084185465 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
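
[note] The health_status=healthy event above comes from podman's periodic healthcheck timer running the container's configured /openstack/healthcheck test. The same probe can be fired by hand; the container name is taken from the event:

    import subprocess

    # Exit status 0 reproduces the health_status=healthy result logged above.
    rc = subprocess.call(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
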
Dec 06 07:15:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:15:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2853557948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.669 226109 DEBUG nova.objects.instance [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lazy-loading 'migration_context' on Instance uuid 0490f82b-373a-4948-9734-3b8cbe80d4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.683 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.683 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Ensure instance console log exists: /var/lib/nova/instances/0490f82b-373a-4948-9734-3b8cbe80d4d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.684 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.684 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.684 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.686 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Start _get_guest_xml network_info=[{"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.692 226109 WARNING nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.698 226109 DEBUG nova.virt.libvirt.host [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.699 226109 DEBUG nova.virt.libvirt.host [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.706 226109 DEBUG nova.virt.libvirt.host [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.707 226109 DEBUG nova.virt.libvirt.host [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.708 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.708 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.708 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.709 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.709 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.709 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.709 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.710 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.710 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.710 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.710 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.710 226109 DEBUG nova.virt.hardware [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.713 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.736 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.792 226109 DEBUG nova.network.neutron [req-f66433ff-25be-4a35-a477-a3c02657d2c3 req-eb73ba87-6040-403a-822d-614cd8bccf2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Updated VIF entry in instance network info cache for port f05b6f8b-6a82-4d64-a5c6-057b8f9368ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.792 226109 DEBUG nova.network.neutron [req-f66433ff-25be-4a35-a477-a3c02657d2c3 req-eb73ba87-6040-403a-822d-614cd8bccf2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Updating instance_info_cache with network_info: [{"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:45 compute-1 nova_compute[226101]: 2025-12-06 07:15:45.807 226109 DEBUG oslo_concurrency.lockutils [req-f66433ff-25be-4a35-a477-a3c02657d2c3 req-eb73ba87-6040-403a-822d-614cd8bccf2b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:45 compute-1 ceph-mon[81689]: pgmap v1729: 305 pgs: 305 active+clean; 322 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 177 op/s
Dec 06 07:15:46 compute-1 nova_compute[226101]: 2025-12-06 07:15:46.146 226109 DEBUG nova.network.neutron [req-bec0b737-510d-40c1-bc2f-be61e4ed862a req-a57b515a-f2bf-497b-b4e7-8b4b967a45d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Updated VIF entry in instance network info cache for port 9edcc777-b052-41d1-a28c-dbfe02a70e3b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:15:46 compute-1 nova_compute[226101]: 2025-12-06 07:15:46.146 226109 DEBUG nova.network.neutron [req-bec0b737-510d-40c1-bc2f-be61e4ed862a req-a57b515a-f2bf-497b-b4e7-8b4b967a45d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Updating instance_info_cache with network_info: [{"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:46 compute-1 nova_compute[226101]: 2025-12-06 07:15:46.163 226109 DEBUG oslo_concurrency.lockutils [req-bec0b737-510d-40c1-bc2f-be61e4ed862a req-a57b515a-f2bf-497b-b4e7-8b4b967a45d7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:15:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/840482458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:46.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:46.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:48 compute-1 nova_compute[226101]: 2025-12-06 07:15:48.039 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:48 compute-1 nova_compute[226101]: 2025-12-06 07:15:48.040 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
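
[note] The two mon-dump round trips above took 3.452s and 2.327s, which is where most of this spawn's wall time is going. A small sketch for mining such slow external commands out of the journal; the unit name in the usage line is deployment-specific and assumed:

    import re, sys

    # Usage (assumed unit): journalctl -u edpm_nova_compute | python3 slow_cmds.py
    # Parses oslo.processutils' 'CMD "..." returned: <rc> in <secs>s' lines.
    PAT = re.compile(r'CMD "(?P<cmd>[^"]+)" returned: \d+ in (?P<secs>[\d.]+)s')

    times = [(float(m["secs"]), m["cmd"])
             for line in sys.stdin if (m := PAT.search(line))]
    for secs, cmd in sorted(times, reverse=True)[:10]:
        print(f"{secs:8.3f}s  {cmd[:100]}")
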
Dec 06 07:15:48 compute-1 nova_compute[226101]: 2025-12-06 07:15:48.067 226109 DEBUG nova.storage.rbd_utils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 12903f3d-051e-4858-aee4-dea9692252ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:48 compute-1 nova_compute[226101]: 2025-12-06 07:15:48.071 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:48 compute-1 nova_compute[226101]: 2025-12-06 07:15:48.119 226109 DEBUG nova.storage.rbd_utils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:48 compute-1 nova_compute[226101]: 2025-12-06 07:15:48.123 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:48.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:15:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:48.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:15:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:15:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3152543257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:15:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4167606791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.083 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.592 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.593 226109 DEBUG nova.virt.libvirt.vif [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1665481838',display_name='tempest-AttachInterfacesTestJSON-server-1665481838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1665481838',id=72,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbDvBsJaocJrE+yuUPfUUsKWsbS3aDuuo5DAKFle2C94ehrnWbtFdLZ9ffClE1Td4CiC7kKu0EsLjsZ/P0vQXLjL1TvhGndU0ZR+NRZJEm1h6FBgxmX4KC48abyjY74CQ==',key_name='tempest-keypair-1654448957',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-s5i2p452',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:15:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=12903f3d-051e-4858-aee4-dea9692252ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.594 226109 DEBUG nova.network.os_vif_util [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.595 226109 DEBUG nova.network.os_vif_util [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:63:76,bridge_name='br-int',has_traffic_filtering=True,id=9edcc777-b052-41d1-a28c-dbfe02a70e3b,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9edcc777-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.596 226109 DEBUG nova.objects.instance [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12903f3d-051e-4858-aee4-dea9692252ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.598 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.599 226109 DEBUG nova.virt.libvirt.vif [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-786816242',display_name='tempest-SecurityGroupsTestJSON-server-786816242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-786816242',id=71,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ba80f0b33d04d6d9508bc18e9b1914b',ramdisk_id='',reservation_id='r-0ut37c0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-409098844',owner_user_name='tempest-SecurityGroupsTestJSON-409098844-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:15:39Z,user_data=None,user_id='67604a2c995248f8931119287d416e1c',uuid=0490f82b-373a-4948-9734-3b8cbe80d4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.600 226109 DEBUG nova.network.os_vif_util [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converting VIF {"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.600 226109 DEBUG nova.network.os_vif_util [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.601 226109 DEBUG nova.objects.instance [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lazy-loading 'pci_devices' on Instance uuid 0490f82b-373a-4948-9734-3b8cbe80d4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:15:49 compute-1 ceph-mon[81689]: pgmap v1730: 305 pgs: 305 active+clean; 322 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 139 op/s
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.622 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <uuid>12903f3d-051e-4858-aee4-dea9692252ae</uuid>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <name>instance-00000048</name>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1665481838</nova:name>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:15:44</nova:creationTime>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:port uuid="9edcc777-b052-41d1-a28c-dbfe02a70e3b">
Dec 06 07:15:49 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <system>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="serial">12903f3d-051e-4858-aee4-dea9692252ae</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="uuid">12903f3d-051e-4858-aee4-dea9692252ae</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </system>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <os>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </os>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <features>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </features>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/12903f3d-051e-4858-aee4-dea9692252ae_disk">
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </source>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/12903f3d-051e-4858-aee4-dea9692252ae_disk.config">
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </source>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:2c:63:76"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <target dev="tap9edcc777-b0"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/console.log" append="off"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <video>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </video>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:15:49 compute-1 nova_compute[226101]: </domain>
Dec 06 07:15:49 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.623 226109 DEBUG nova.compute.manager [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Preparing to wait for external event network-vif-plugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.623 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.624 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.624 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.625 226109 DEBUG nova.virt.libvirt.vif [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1665481838',display_name='tempest-AttachInterfacesTestJSON-server-1665481838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1665481838',id=72,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbDvBsJaocJrE+yuUPfUUsKWsbS3aDuuo5DAKFle2C94ehrnWbtFdLZ9ffClE1Td4CiC7kKu0EsLjsZ/P0vQXLjL1TvhGndU0ZR+NRZJEm1h6FBgxmX4KC48abyjY74CQ==',key_name='tempest-keypair-1654448957',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-s5i2p452',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:15:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=12903f3d-051e-4858-aee4-dea9692252ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.625 226109 DEBUG nova.network.os_vif_util [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.625 226109 DEBUG nova.network.os_vif_util [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:63:76,bridge_name='br-int',has_traffic_filtering=True,id=9edcc777-b052-41d1-a28c-dbfe02a70e3b,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9edcc777-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.626 226109 DEBUG os_vif [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:63:76,bridge_name='br-int',has_traffic_filtering=True,id=9edcc777-b052-41d1-a28c-dbfe02a70e3b,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9edcc777-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.627 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <uuid>0490f82b-373a-4948-9734-3b8cbe80d4d4</uuid>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <name>instance-00000047</name>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:name>tempest-SecurityGroupsTestJSON-server-786816242</nova:name>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:15:45</nova:creationTime>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:user uuid="67604a2c995248f8931119287d416e1c">tempest-SecurityGroupsTestJSON-409098844-project-member</nova:user>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:project uuid="4ba80f0b33d04d6d9508bc18e9b1914b">tempest-SecurityGroupsTestJSON-409098844</nova:project>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <nova:port uuid="f05b6f8b-6a82-4d64-a5c6-057b8f9368ea">
Dec 06 07:15:49 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <system>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="serial">0490f82b-373a-4948-9734-3b8cbe80d4d4</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="uuid">0490f82b-373a-4948-9734-3b8cbe80d4d4</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </system>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <os>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </os>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <features>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </features>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0490f82b-373a-4948-9734-3b8cbe80d4d4_disk">
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </source>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0490f82b-373a-4948-9734-3b8cbe80d4d4_disk.config">
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </source>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:15:49 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:6e:9e:8a"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <target dev="tapf05b6f8b-6a"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/0490f82b-373a-4948-9734-3b8cbe80d4d4/console.log" append="off"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <video>
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </video>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:15:49 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:15:49 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:15:49 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:15:49 compute-1 nova_compute[226101]: </domain>
Dec 06 07:15:49 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.629 226109 DEBUG nova.compute.manager [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Preparing to wait for external event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.629 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.629 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.630 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.630 226109 DEBUG nova.virt.libvirt.vif [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-786816242',display_name='tempest-SecurityGroupsTestJSON-server-786816242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-786816242',id=71,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ba80f0b33d04d6d9508bc18e9b1914b',ramdisk_id='',reservation_id='r-0ut37c0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-409098844',owner_user_name='tempest-SecurityGroupsTestJSON-409098844-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:15:39Z,user_data=None,user_id='67604a2c995248f8931119287d416e1c',uuid=0490f82b-373a-4948-9734-3b8cbe80d4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.630 226109 DEBUG nova.network.os_vif_util [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converting VIF {"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.631 226109 DEBUG nova.network.os_vif_util [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.631 226109 DEBUG os_vif [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.632 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.632 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.633 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.634 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.635 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.635 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
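The back-to-back AddBridgeCommand transactions above are idempotent: may_exist=True turns the add into a no-op when br-int already exists, which is why both commits report "Transaction caused no change". A minimal ovsdbapp sketch of the same transaction follows; the local socket path and timeout are assumptions, since nova/os-vif use their own configured OVSDB connection:

```python
#!/usr/bin/env python3
"""Sketch: replay the idempotent AddBridgeCommand seen in the log via
ovsdbapp. Assumes a local ovsdb-server at /run/openvswitch/db.sock."""
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# may_exist=True makes a second commit a no-op, matching the repeated
# "Transaction caused no change" lines above.
api.add_br("br-int", may_exist=True,
           datapath_type="system").execute(check_error=True)
```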
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.636 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.637 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9edcc777-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.637 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9edcc777-b0, col_values=(('external_ids', {'iface-id': '9edcc777-b052-41d1-a28c-dbfe02a70e3b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:63:76', 'vm-uuid': '12903f3d-051e-4858-aee4-dea9692252ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.638 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:49 compute-1 NetworkManager[49031]: <info>  [1765005349.6400] manager: (tap9edcc777-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.641 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.645 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.646 226109 INFO os_vif [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:63:76,bridge_name='br-int',has_traffic_filtering=True,id=9edcc777-b052-41d1-a28c-dbfe02a70e3b,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9edcc777-b0')
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.648 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.648 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf05b6f8b-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.648 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf05b6f8b-6a, col_values=(('external_ids', {'iface-id': 'f05b6f8b-6a82-4d64-a5c6-057b8f9368ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:9e:8a', 'vm-uuid': '0490f82b-373a-4948-9734-3b8cbe80d4d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.650 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:49 compute-1 NetworkManager[49031]: <info>  [1765005349.6513] manager: (tapf05b6f8b-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.652 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.657 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.658 226109 INFO os_vif [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a')
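Each plug above is the same two-step OVSDB transaction: AddPortCommand(bridge=br-int, port=tapNNN, may_exist=True) followed by a DbSetCommand that writes iface-id, iface-status, attached-mac and vm-uuid into the Interface's external_ids; those keys are what ovn-controller matches when it claims the lport a few seconds later. A rough ovs-vsctl equivalent for the second VIF, using the values from the log (a sketch for manual reproduction only; os-vif commits this through ovsdbapp, not ovs-vsctl):

```python
#!/usr/bin/env python3
"""Sketch: ovs-vsctl equivalent of the AddPortCommand + DbSetCommand
pair logged above. Requires ovs-vsctl and root on the compute node."""
import subprocess

def plug_vif(bridge, devname, port_id, mac, vm_uuid):
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-port", bridge, devname,   # AddPortCommand
         "--", "set", "Interface", devname,            # DbSetCommand
         f"external_ids:iface-id={port_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True)

# values copied from the tapf05b6f8b-6a transaction above
plug_vif("br-int", "tapf05b6f8b-6a",
         "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea",
         "fa:16:3e:6e:9e:8a",
         "0490f82b-373a-4948-9734-3b8cbe80d4d4")
```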
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.703 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.704 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.704 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:2c:63:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.704 226109 INFO nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Using config drive
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.727 226109 DEBUG nova.storage.rbd_utils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 12903f3d-051e-4858-aee4-dea9692252ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.737 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.738 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.738 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] No VIF found with MAC fa:16:3e:6e:9e:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.738 226109 INFO nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Using config drive
Dec 06 07:15:49 compute-1 nova_compute[226101]: 2025-12-06 07:15:49.758 226109 DEBUG nova.storage.rbd_utils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:50 compute-1 podman[251122]: 2025-12-06 07:15:50.057975422 +0000 UTC m=+0.046011564 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:15:50 compute-1 podman[251121]: 2025-12-06 07:15:50.064988991 +0000 UTC m=+0.055018956 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd)
Dec 06 07:15:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:50.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:50.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2853557948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:51 compute-1 ceph-mon[81689]: pgmap v1731: 305 pgs: 305 active+clean; 367 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.7 MiB/s wr, 181 op/s
Dec 06 07:15:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/840482458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:51 compute-1 ceph-mon[81689]: pgmap v1732: 305 pgs: 305 active+clean; 374 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.4 MiB/s wr, 170 op/s
Dec 06 07:15:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3152543257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4167606791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:51 compute-1 nova_compute[226101]: 2025-12-06 07:15:51.556 226109 INFO nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Creating config drive at /var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/disk.config
Dec 06 07:15:51 compute-1 nova_compute[226101]: 2025-12-06 07:15:51.562 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9sgmwccz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:51 compute-1 nova_compute[226101]: 2025-12-06 07:15:51.699 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9sgmwccz" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:51 compute-1 nova_compute[226101]: 2025-12-06 07:15:51.730 226109 DEBUG nova.storage.rbd_utils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 12903f3d-051e-4858-aee4-dea9692252ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:51 compute-1 nova_compute[226101]: 2025-12-06 07:15:51.733 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/disk.config 12903f3d-051e-4858-aee4-dea9692252ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:51 compute-1 nova_compute[226101]: 2025-12-06 07:15:51.769 226109 INFO nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Creating config drive at /var/lib/nova/instances/0490f82b-373a-4948-9734-3b8cbe80d4d4/disk.config
Dec 06 07:15:51 compute-1 nova_compute[226101]: 2025-12-06 07:15:51.773 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0490f82b-373a-4948-9734-3b8cbe80d4d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj5watyv5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:51 compute-1 nova_compute[226101]: 2025-12-06 07:15:51.906 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0490f82b-373a-4948-9734-3b8cbe80d4d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj5watyv5" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:51 compute-1 nova_compute[226101]: 2025-12-06 07:15:51.939 226109 DEBUG nova.storage.rbd_utils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:51 compute-1 nova_compute[226101]: 2025-12-06 07:15:51.943 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0490f82b-373a-4948-9734-3b8cbe80d4d4/disk.config 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.035 226109 DEBUG oslo_concurrency.processutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/disk.config 12903f3d-051e-4858-aee4-dea9692252ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.036 226109 INFO nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Deleting local config drive /var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/disk.config because it was imported into RBD.
Dec 06 07:15:52 compute-1 kernel: tap9edcc777-b0: entered promiscuous mode
Dec 06 07:15:52 compute-1 NetworkManager[49031]: <info>  [1765005352.0969] manager: (tap9edcc777-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.102 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-1 ovn_controller[130279]: 2025-12-06T07:15:52Z|00224|binding|INFO|Claiming lport 9edcc777-b052-41d1-a28c-dbfe02a70e3b for this chassis.
Dec 06 07:15:52 compute-1 ovn_controller[130279]: 2025-12-06T07:15:52Z|00225|binding|INFO|9edcc777-b052-41d1-a28c-dbfe02a70e3b: Claiming fa:16:3e:2c:63:76 10.100.0.4
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.107 226109 DEBUG oslo_concurrency.processutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0490f82b-373a-4948-9734-3b8cbe80d4d4/disk.config 0490f82b-373a-4948-9734-3b8cbe80d4d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.108 226109 INFO nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Deleting local config drive /var/lib/nova/instances/0490f82b-373a-4948-9734-3b8cbe80d4d4/disk.config because it was imported into RBD.
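The config-drive flow for each instance is logged verbatim above: mkisofs packs the metadata staging tempdir into an ISO9660 volume labelled config-2, rbd import pushes it into the vms pool as <uuid>_disk.config, and the local file is deleted once the import succeeds. Condensed into a sketch for the second instance (the staging path is the throwaway tempdir from the log, so this is illustrative rather than re-runnable as-is):

```python
#!/usr/bin/env python3
"""Sketch of the config-drive build/import/cleanup sequence above.
Mirrors nova's behaviour; it is not nova's actual code path."""
import os
import subprocess

instance = "0490f82b-373a-4948-9734-3b8cbe80d4d4"
iso = f"/var/lib/nova/instances/{instance}/disk.config"
staging = "/tmp/tmpj5watyv5"  # tempdir with the rendered metadata files

subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher",
     "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
     "-quiet", "-J", "-r", "-V", "config-2", staging], check=True)
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"], check=True)
os.unlink(iso)  # "Deleting local config drive ... imported into RBD"
```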
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.117 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:63:76 10.100.0.4'], port_security=['fa:16:3e:2c:63:76 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '12903f3d-051e-4858-aee4-dea9692252ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66c297e9-7226-4696-b93d-dd5b2f1d8a89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9edcc777-b052-41d1-a28c-dbfe02a70e3b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.119 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9edcc777-b052-41d1-a28c-dbfe02a70e3b in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 bound to our chassis
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.121 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:15:52 compute-1 systemd-udevd[251253]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.132 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b9fa63d3-667a-4e28-8392-10114bb1d0bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.133 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61a21643-71 in ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.135 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61a21643-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.135 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[062a731f-441e-409f-8961-2234db6fc5b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.137 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2a66176a-4685-410c-af27-fc10fe7a16fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 NetworkManager[49031]: <info>  [1765005352.1462] device (tap9edcc777-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:15:52 compute-1 NetworkManager[49031]: <info>  [1765005352.1469] device (tap9edcc777-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:15:52 compute-1 systemd-machined[190302]: New machine qemu-31-instance-00000048.
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.150 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[273bb811-c9d6-4a8a-b8ee-6b8b6eef6490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 systemd[1]: Started Virtual Machine qemu-31-instance-00000048.
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.177 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.177 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[26672c35-1bd6-4356-aaa8-7658bd747e98]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 NetworkManager[49031]: <info>  [1765005352.1813] manager: (tapf05b6f8b-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Dec 06 07:15:52 compute-1 kernel: tapf05b6f8b-6a: entered promiscuous mode
Dec 06 07:15:52 compute-1 systemd-udevd[251258]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.184 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-1 ovn_controller[130279]: 2025-12-06T07:15:52Z|00226|binding|INFO|Claiming lport f05b6f8b-6a82-4d64-a5c6-057b8f9368ea for this chassis.
Dec 06 07:15:52 compute-1 ovn_controller[130279]: 2025-12-06T07:15:52Z|00227|binding|INFO|f05b6f8b-6a82-4d64-a5c6-057b8f9368ea: Claiming fa:16:3e:6e:9e:8a 10.100.0.10
Dec 06 07:15:52 compute-1 NetworkManager[49031]: <info>  [1765005352.1947] device (tapf05b6f8b-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:15:52 compute-1 NetworkManager[49031]: <info>  [1765005352.1953] device (tapf05b6f8b-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:15:52 compute-1 ovn_controller[130279]: 2025-12-06T07:15:52Z|00228|binding|INFO|Setting lport 9edcc777-b052-41d1-a28c-dbfe02a70e3b ovn-installed in OVS
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.195 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:9e:8a 10.100.0.10'], port_security=['fa:16:3e:6e:9e:8a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0490f82b-373a-4948-9734-3b8cbe80d4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-facf815c-af05-4eae-8215-596b89b048ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ba80f0b33d04d6d9508bc18e9b1914b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06e30c9c-0168-4e04-b100-fe33575ec890', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e725b401-7789-49e6-93b9-c5c8c58adad1, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:15:52 compute-1 ovn_controller[130279]: 2025-12-06T07:15:52Z|00229|binding|INFO|Setting lport 9edcc777-b052-41d1-a28c-dbfe02a70e3b up in Southbound
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.198 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.211 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[90a1b26a-c98f-4b27-ba96-2b1126208a00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.221 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[590f3a67-e3d5-4503-b60f-a2e223645ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 NetworkManager[49031]: <info>  [1765005352.2220] manager: (tap61a21643-70): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Dec 06 07:15:52 compute-1 systemd-machined[190302]: New machine qemu-32-instance-00000047.
Dec 06 07:15:52 compute-1 systemd[1]: Started Virtual Machine qemu-32-instance-00000047.
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.252 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c712eeca-3cc4-407f-8e4c-307898bc49ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.254 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[36bebf18-9a35-47d8-9c07-2fedac2189b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.257 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-1 ovn_controller[130279]: 2025-12-06T07:15:52Z|00230|binding|INFO|Setting lport f05b6f8b-6a82-4d64-a5c6-057b8f9368ea ovn-installed in OVS
Dec 06 07:15:52 compute-1 ovn_controller[130279]: 2025-12-06T07:15:52Z|00231|binding|INFO|Setting lport f05b6f8b-6a82-4d64-a5c6-057b8f9368ea up in Southbound
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.262 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-1 NetworkManager[49031]: <info>  [1765005352.2752] device (tap61a21643-70): carrier: link connected
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.280 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[304e9bb4-a600-4fc1-917e-0e60ad256f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.294 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dc61c04b-3d09-4fb9-972a-b91d5fec6df1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563177, 'reachable_time': 20086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251300, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.316 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cb65fc-3efd-4c89-b87f-706cbded6502]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:67b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563177, 'tstamp': 563177}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251306, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.331 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c088c57f-b321-4eba-9470-6d382d4202c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563177, 'reachable_time': 20086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251307, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.363 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[065c33d7-099b-4369-95cd-5700a2a2e228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.414 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5685653d-b4fb-4f7f-a877-60d6285da9bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.415 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.416 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.416 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.418 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-1 NetworkManager[49031]: <info>  [1765005352.4187] manager: (tap61a21643-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Dec 06 07:15:52 compute-1 kernel: tap61a21643-70: entered promiscuous mode
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.421 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.422 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-1 ovn_controller[130279]: 2025-12-06T07:15:52Z|00232|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:15:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:52.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.438 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.440 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.440 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab7f6ef-0f70-4884-ae86-0ebf02664760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.441 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.443 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'env', 'PROCESS_TAG=haproxy-61a21643-77ba-4a09-8184-10dc4bd52b26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61a21643-77ba-4a09-8184-10dc4bd52b26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
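The agent has now rendered the haproxy configuration printed above (pidfile under /var/lib/neutron/external/pids, a bind on 169.254.169.254:80, and an X-OVN-Network-ID header injected per network) and launches haproxy inside the ovnmeta- namespace via rootwrap. A quick sanity-check sketch, assuming root on compute-1, that curl is installed, and that haproxy has already daemonized and written its pidfile:

```python
#!/usr/bin/env python3
"""Sketch: verify the metadata proxy spawned above answers inside its
ovnmeta- namespace. Namespace and pidfile names come from the log."""
import subprocess

ns = "ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26"
pidfile = ("/var/lib/neutron/external/pids/"
           "61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy")

with open(pidfile) as f:                      # written once haproxy forks
    print("haproxy pid:", f.read().strip())

# Hit the 169.254.169.254:80 bind from the config; print the HTTP code.
subprocess.run(["ip", "netns", "exec", ns, "curl", "-s", "-o",
                "/dev/null", "-w", "%{http_code}\n",
                "http://169.254.169.254/"], check=True)
```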
Dec 06 07:15:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:52.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:52 compute-1 ceph-mon[81689]: pgmap v1733: 305 pgs: 305 active+clean; 374 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 650 KiB/s rd, 3.8 MiB/s wr, 140 op/s
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.529863) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352529915, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1370, "num_deletes": 254, "total_data_size": 2976892, "memory_usage": 3019152, "flush_reason": "Manual Compaction"}
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352543590, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 1951469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35514, "largest_seqno": 36879, "table_properties": {"data_size": 1945462, "index_size": 3274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13654, "raw_average_key_size": 20, "raw_value_size": 1933142, "raw_average_value_size": 2924, "num_data_blocks": 144, "num_entries": 661, "num_filter_entries": 661, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005232, "oldest_key_time": 1765005232, "file_creation_time": 1765005352, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 13784 microseconds, and 5460 cpu microseconds.
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.543651) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 1951469 bytes OK
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.543671) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.545594) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.545610) EVENT_LOG_v1 {"time_micros": 1765005352545606, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.545625) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2970348, prev total WAL file size 2970348, number of live WAL files 2.
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.551681) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(1905KB)], [66(9347KB)]
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352551785, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 11522845, "oldest_snapshot_seqno": -1}
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6580 keys, 9589942 bytes, temperature: kUnknown
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352623636, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 9589942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9546081, "index_size": 26277, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 170087, "raw_average_key_size": 25, "raw_value_size": 9428042, "raw_average_value_size": 1432, "num_data_blocks": 1044, "num_entries": 6580, "num_filter_entries": 6580, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765005352, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.624022) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9589942 bytes
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.625855) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.1 rd, 133.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(10.8) write-amplify(4.9) OK, records in: 7104, records dropped: 524 output_compression: NoCompression
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.625877) EVENT_LOG_v1 {"time_micros": 1765005352625864, "job": 40, "event": "compaction_finished", "compaction_time_micros": 71961, "compaction_time_cpu_micros": 23910, "output_level": 6, "num_output_files": 1, "total_output_size": 9589942, "num_input_records": 7104, "num_output_records": 6580, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352626310, "job": 40, "event": "table_file_deletion", "file_number": 68}
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352627932, "job": 40, "event": "table_file_deletion", "file_number": 66}
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.551535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.628054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.628061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.628063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.628065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:15:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:15:52.628067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
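[annotation] The ceph-mon RocksDB lines above carry machine-readable EVENT_LOG_v1 JSON payloads (flush job 39, then manual compaction job 40). A minimal sketch for extracting those payloads from journal output on stdin and summarizing finished compactions; the field names (job, num_input_records, compaction_time_cpu_micros, ...) come from the events above, everything else is an assumption:

#!/usr/bin/env python3
# Sketch: summarize RocksDB EVENT_LOG_v1 events found in journal lines on stdin.
import json
import re
import sys

EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

def iter_events(lines):
    """Yield parsed EVENT_LOG_v1 JSON payloads, skipping wrapped/truncated lines."""
    for line in lines:
        m = EVENT_RE.search(line)
        if not m:
            continue
        try:
            yield json.loads(m.group(1))
        except json.JSONDecodeError:
            pass  # payload split across lines by the log collector

for ev in iter_events(sys.stdin):
    if ev.get("event") == "compaction_finished":
        print("job %s: %d records in, %d out, %.1f ms CPU" % (
            ev["job"], ev["num_input_records"], ev["num_output_records"],
            ev["compaction_time_cpu_micros"] / 1000.0))

Fed the lines above, job 40 would print as: job 40: 7104 records in, 6580 out, 23.9 ms CPU.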
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.672 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005352.6720822, 12903f3d-051e-4858-aee4-dea9692252ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.673 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] VM Started (Lifecycle Event)
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.708 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.712 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005352.6722786, 12903f3d-051e-4858-aee4-dea9692252ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.712 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] VM Paused (Lifecycle Event)
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.733 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.736 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.765 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:15:52 compute-1 podman[251382]: 2025-12-06 07:15:52.800462276 +0000 UTC m=+0.051528533 container create f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.818 226109 DEBUG nova.compute.manager [req-f4324aff-41a3-4208-9dc6-794ccdc5b253 req-1eb75b43-b901-4e43-9b81-6c5f98915fcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-vif-plugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.819 226109 DEBUG oslo_concurrency.lockutils [req-f4324aff-41a3-4208-9dc6-794ccdc5b253 req-1eb75b43-b901-4e43-9b81-6c5f98915fcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.820 226109 DEBUG oslo_concurrency.lockutils [req-f4324aff-41a3-4208-9dc6-794ccdc5b253 req-1eb75b43-b901-4e43-9b81-6c5f98915fcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.820 226109 DEBUG oslo_concurrency.lockutils [req-f4324aff-41a3-4208-9dc6-794ccdc5b253 req-1eb75b43-b901-4e43-9b81-6c5f98915fcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.820 226109 DEBUG nova.compute.manager [req-f4324aff-41a3-4208-9dc6-794ccdc5b253 req-1eb75b43-b901-4e43-9b81-6c5f98915fcd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Processing event network-vif-plugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.821 226109 DEBUG nova.compute.manager [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.830 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005352.8289056, 12903f3d-051e-4858-aee4-dea9692252ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.830 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] VM Resumed (Lifecycle Event)
Dec 06 07:15:52 compute-1 systemd[1]: Started libpod-conmon-f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1.scope.
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.832 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.835 226109 INFO nova.virt.libvirt.driver [-] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Instance spawned successfully.
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.835 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:15:52 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.856 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2dfb8c8903cd392b4ed747137ebf953cae2840076dada065cae0ec8fd8e77df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.863 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:15:52 compute-1 podman[251382]: 2025-12-06 07:15:52.769979012 +0000 UTC m=+0.021045289 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.867 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.868 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.868 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.868 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.869 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.869 226109 DEBUG nova.virt.libvirt.driver [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:52 compute-1 podman[251382]: 2025-12-06 07:15:52.875889802 +0000 UTC m=+0.126956079 container init f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:15:52 compute-1 podman[251382]: 2025-12-06 07:15:52.881510374 +0000 UTC m=+0.132576631 container start f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 07:15:52 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[251422]: [NOTICE]   (251442) : New worker (251444) forked
Dec 06 07:15:52 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[251422]: [NOTICE]   (251442) : Loading success.
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.914 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.945 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f05b6f8b-6a82-4d64-a5c6-057b8f9368ea in datapath facf815c-af05-4eae-8215-596b89b048ab unbound from our chassis
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.947 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network facf815c-af05-4eae-8215-596b89b048ab
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.955 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cb9e30-aee6-4a6a-b11e-46e324e1aa19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.956 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfacf815c-a1 in ovnmeta-facf815c-af05-4eae-8215-596b89b048ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.957 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfacf815c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.957 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[75a8fc3b-236b-4057-be2c-d7a928349d35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.958 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[de9646b6-9ec8-40d7-89c5-e23e2d8e894a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.968 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005352.9676783, 0490f82b-373a-4948-9734-3b8cbe80d4d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.968 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] VM Started (Lifecycle Event)
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.971 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[62f2e184-ea98-4c65-b64f-a357be178a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.985 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.988 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005352.9677548, 0490f82b-373a-4948-9734-3b8cbe80d4d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.988 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] VM Paused (Lifecycle Event)
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.991 226109 INFO nova.compute.manager [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Took 12.67 seconds to spawn the instance on the hypervisor.
Dec 06 07:15:52 compute-1 nova_compute[226101]: 2025-12-06 07:15:52.992 226109 DEBUG nova.compute.manager [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:52.993 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[08dcedce-0d86-4fcc-8a37-59df378f41d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.006 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.009 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.019 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6a8964-ad4f-4143-a5be-322b55591249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 systemd-udevd[251295]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:15:53 compute-1 NetworkManager[49031]: <info>  [1765005353.0260] manager: (tapfacf815c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.024 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9eeb1533-2cc5-4fd7-9d98-1338d83d9847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.043 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.054 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[4d62846c-be97-40bf-bb87-7587b31364a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.057 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7048c0da-3d8b-40bb-aef5-c796d48a8987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 NetworkManager[49031]: <info>  [1765005353.0808] device (tapfacf815c-a0): carrier: link connected
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.085 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2d163dec-4afb-4b69-b107-541d54c6dff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.094 226109 INFO nova.compute.manager [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Took 14.62 seconds to build instance.
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.101 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[473121df-0e87-403f-abf2-6e21ff5192aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfacf815c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:6f:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563258, 'reachable_time': 36240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251464, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.114 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[161d6db2-76de-4596-831f-5c6c2939830e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:6f88'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563258, 'tstamp': 563258}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251465, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.130 226109 DEBUG oslo_concurrency.lockutils [None req-9fe57763-e486-4d7e-b86d-2407b5921321 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
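[annotation] Instance 12903f3d-051e-4858-aee4-dea9692252ae finishes its build here: nova-compute reports "Took 12.67 seconds to spawn" and "Took 14.62 seconds to build instance", then releases the per-instance build lock after 14.726s. A small sketch, fitted only to the INFO line format above (anything beyond that pattern is an assumption), that collects those durations per instance UUID from journal lines on stdin:

#!/usr/bin/env python3
# Sketch: collect per-instance spawn/build durations from nova-compute journal lines.
import re
import sys
from collections import defaultdict

TOOK_RE = re.compile(
    r"\[instance: (?P<uuid>[0-9a-f-]{36})\] Took (?P<secs>[\d.]+) seconds to "
    r"(?P<phase>spawn the instance on the hypervisor|build instance)")

durations = defaultdict(dict)
for line in sys.stdin:
    m = TOOK_RE.search(line)
    if m:
        durations[m.group("uuid")][m.group("phase")] = float(m.group("secs"))

for uuid, phases in sorted(durations.items()):
    print(uuid, phases)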
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.130 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ac847978-6875-4116-bbab-db2b72a43178]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfacf815c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:6f:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563258, 'reachable_time': 36240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251466, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.160 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2c730d8d-85f5-44f6-840e-a374d13f8adb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.214 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fbeeecf9-ca81-4472-af3e-3faf8c2c1a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.216 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfacf815c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.216 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.217 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfacf815c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.218 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:53 compute-1 kernel: tapfacf815c-a0: entered promiscuous mode
Dec 06 07:15:53 compute-1 NetworkManager[49031]: <info>  [1765005353.2196] manager: (tapfacf815c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.225 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfacf815c-a0, col_values=(('external_ids', {'iface-id': 'ad9e5490-4d4e-46f0-9eb3-53b36e933dda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.226 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:53 compute-1 ovn_controller[130279]: 2025-12-06T07:15:53Z|00233|binding|INFO|Releasing lport ad9e5490-4d4e-46f0-9eb3-53b36e933dda from this chassis (sb_readonly=0)
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.227 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.229 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/facf815c-af05-4eae-8215-596b89b048ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/facf815c-af05-4eae-8215-596b89b048ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.230 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[673897c1-9865-4246-ad88-c5bd333db015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.231 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-facf815c-af05-4eae-8215-596b89b048ab
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/facf815c-af05-4eae-8215-596b89b048ab.pid.haproxy
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID facf815c-af05-4eae-8215-596b89b048ab
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:15:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:53.232 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'env', 'PROCESS_TAG=haproxy-facf815c-af05-4eae-8215-596b89b048ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/facf815c-af05-4eae-8215-596b89b048ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
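[annotation] create_config_file has just written the haproxy configuration dumped above, and the agent then launches haproxy inside the ovnmeta- namespace via neutron-rootwrap. An illustrative sketch of how such a config can be rendered from a template; render_metadata_proxy_cfg is a hypothetical helper (not neutron's actual API), and the template text is abridged from the logged config above:

#!/usr/bin/env python3
# Illustrative only: render a metadata-proxy haproxy config like the one logged above.
# render_metadata_proxy_cfg is a hypothetical helper, not neutron's real code.
import string

_TEMPLATE = string.Template("""\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-$network_id
    user        root
    group       root
    maxconn     1024
    pidfile     $pidfile
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata $socket_path
    http-request add-header X-OVN-Network-ID $network_id
""")

def render_metadata_proxy_cfg(network_id, state_path="/var/lib/neutron"):
    # Paths mirror those seen in the log; nothing beyond that is assumed.
    return _TEMPLATE.substitute(
        network_id=network_id,
        pidfile="%s/external/pids/%s.pid.haproxy" % (state_path, network_id),
        socket_path="%s/metadata_proxy" % state_path)

print(render_metadata_proxy_cfg("facf815c-af05-4eae-8215-596b89b048ab"))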
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:53 compute-1 ceph-mon[81689]: pgmap v1734: 305 pgs: 305 active+clean; 374 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 851 KiB/s rd, 3.8 MiB/s wr, 159 op/s
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.631 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.631 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.631 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.632 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:15:53 compute-1 nova_compute[226101]: 2025-12-06 07:15:53.632 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:53 compute-1 podman[251499]: 2025-12-06 07:15:53.556863463 +0000 UTC m=+0.023757303 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:15:53 compute-1 podman[251499]: 2025-12-06 07:15:53.683094451 +0000 UTC m=+0.149988261 container create 2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:15:53 compute-1 systemd[1]: Started libpod-conmon-2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94.scope.
Dec 06 07:15:53 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:15:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf04cefd5abadad38c86e72a45b58f8551fe61e3cfec0a85819a5edacaf555c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:53 compute-1 podman[251499]: 2025-12-06 07:15:53.867328687 +0000 UTC m=+0.334222527 container init 2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:15:53 compute-1 podman[251499]: 2025-12-06 07:15:53.874265264 +0000 UTC m=+0.341159074 container start 2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 07:15:53 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251534]: [NOTICE]   (251538) : New worker (251540) forked
Dec 06 07:15:53 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251534]: [NOTICE]   (251538) : Loading success.
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.087 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:15:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2156316598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.130 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
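[annotation] The resource audit shells out to `ceph df --format=json` with the openstack keyring (the command dispatched to mon.compute-1 above, returning in 0.499s) to size the RBD pool backing instance disks. A minimal sketch of the same query, assuming the standard `ceph df` JSON layout with a top-level "stats" object:

#!/usr/bin/env python3
# Sketch: issue the same `ceph df` query nova-compute runs above and report capacity.
# Assumes the usual JSON shape: {"stats": {"total_bytes": ..., "total_avail_bytes": ...}, ...}
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(out)["stats"]
gib = 1024 ** 3
print("avail %.1f GiB of %.1f GiB" % (
    stats["total_avail_bytes"] / gib, stats["total_bytes"] / gib))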
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.213 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.214 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.218 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.218 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.374 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.376 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4466MB free_disk=20.81011962890625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.377 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.378 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:54.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:54.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
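[annotation] The two radosgw requests above are anonymous HEAD / probes (health checks from 192.168.122.100 and .102) logged by the beast frontend. A quick sketch, fitted only to this beast access-line format, that extracts client, request, HTTP status, and latency from journal lines on stdin:

#!/usr/bin/env python3
# Sketch: parse radosgw beast access lines like the ones above.
import re
import sys

BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
    r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
    r'latency=(?P<latency>[\d.]+)s')

for line in sys.stdin:
    m = BEAST_RE.search(line)
    if m:
        print(m.group("client"), m.group("request"),
              m.group("status"), m.group("latency") + "s")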
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.462 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 0490f82b-373a-4948-9734-3b8cbe80d4d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.462 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 12903f3d-051e-4858-aee4-dea9692252ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.462 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.462 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.514 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:54 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:54.579 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:15:54 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:54.581 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:15:54 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:15:54.582 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.583 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:54 compute-1 nova_compute[226101]: 2025-12-06 07:15:54.652 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2156316598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2420440032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:15:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3849921206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.061 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
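The Running cmd / CMD returned pair above brackets nova's periodic Ceph pool-usage probe: oslo.concurrency shells out to the ceph CLI and times it (0.547 s here). A standalone sketch of the same probe, using only the command string visible in the log (not nova's actual code path):

    import json
    import subprocess

    # Exactly the command logged by oslo_concurrency.processutils above.
    cmd = [
        "ceph", "df", "--format=json",
        "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    df = json.loads(out.stdout)
    # Top-level keys vary by Ceph release; "stats" holds cluster-wide totals.
    print(df.get("stats", {}))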
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.066 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.097 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
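The inventory dict above is what the resource tracker pushes to placement, which treats schedulable capacity as roughly (total - reserved) * allocation_ratio. Worked out directly from the logged values (a sketch, not placement code):

    # Values copied from the inventory line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1 with these numbers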
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.145 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.146 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
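The Acquiring/acquired/"released" triple around the update (waited 0.000 s, held 0.768 s) comes from oslo.concurrency's synchronized decorator; the "inner" named in each line is its wrapper. A minimal sketch of the pattern, assuming only the public lockutils API:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Body runs with the named semaphore held, so concurrent periodic
        # tasks and RPC handlers serialize on the same resource view.
        pass

    update_available_resource()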
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.152 226109 DEBUG nova.compute.manager [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-vif-plugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.153 226109 DEBUG oslo_concurrency.lockutils [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.153 226109 DEBUG oslo_concurrency.lockutils [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.153 226109 DEBUG oslo_concurrency.lockutils [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.154 226109 DEBUG nova.compute.manager [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] No waiting events found dispatching network-vif-plugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.154 226109 WARNING nova.compute.manager [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received unexpected event network-vif-plugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b for instance with vm_state active and task_state None.
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.154 226109 DEBUG nova.compute.manager [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.154 226109 DEBUG oslo_concurrency.lockutils [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.154 226109 DEBUG oslo_concurrency.lockutils [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.154 226109 DEBUG oslo_concurrency.lockutils [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.154 226109 DEBUG nova.compute.manager [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Processing event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.155 226109 DEBUG nova.compute.manager [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.155 226109 DEBUG oslo_concurrency.lockutils [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.155 226109 DEBUG oslo_concurrency.lockutils [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.155 226109 DEBUG oslo_concurrency.lockutils [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.155 226109 DEBUG nova.compute.manager [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] No waiting events found dispatching network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.155 226109 WARNING nova.compute.manager [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received unexpected event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea for instance with vm_state building and task_state spawning.
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.156 226109 DEBUG nova.compute.manager [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
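The network-vif-plugged events handled above arrive from neutron through nova's os-server-external-events API; the spawn thread was blocked on exactly this event and resumes here after 2 seconds. A hedged sketch of the request body for that event (endpoint, service token, and error handling omitted; UUIDs are the ones from the log lines above):

    import json

    # Body for POST /v2.1/os-server-external-events.
    payload = {
        "events": [{
            "name": "network-vif-plugged",
            "server_uuid": "0490f82b-373a-4948-9734-3b8cbe80d4d4",  # instance
            "tag": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea",          # neutron port
            "status": "completed",
        }]
    }
    print(json.dumps(payload, indent=2))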
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.160 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005355.1598692, 0490f82b-373a-4948-9734-3b8cbe80d4d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.160 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] VM Resumed (Lifecycle Event)
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.183 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.191 226109 INFO nova.virt.libvirt.driver [-] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Instance spawned successfully.
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.191 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.195 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.197 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.228 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.232 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.233 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.233 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.234 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.234 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.235 226109 DEBUG nova.virt.libvirt.driver [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.343 226109 INFO nova.compute.manager [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Took 15.74 seconds to spawn the instance on the hypervisor.
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.343 226109 DEBUG nova.compute.manager [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.442 226109 INFO nova.compute.manager [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Took 17.48 seconds to build instance.
Dec 06 07:15:55 compute-1 nova_compute[226101]: 2025-12-06 07:15:55.460 226109 DEBUG oslo_concurrency.lockutils [None req-ca5fa26a-735e-4662-9da4-f5bdeb12722f 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.149 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.150 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.150 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.150 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.396 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.397 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.397 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.397 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0490f82b-373a-4948-9734-3b8cbe80d4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:15:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:56.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:56.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:56 compute-1 NetworkManager[49031]: <info>  [1765005356.5627] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Dec 06 07:15:56 compute-1 NetworkManager[49031]: <info>  [1765005356.5635] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.563 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:56 compute-1 ceph-mon[81689]: pgmap v1735: 305 pgs: 305 active+clean; 374 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 1.6 MiB/s wr, 89 op/s
Dec 06 07:15:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1643282998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3849921206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1702482958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:56 compute-1 ovn_controller[130279]: 2025-12-06T07:15:56Z|00234|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:15:56 compute-1 ovn_controller[130279]: 2025-12-06T07:15:56Z|00235|binding|INFO|Releasing lport ad9e5490-4d4e-46f0-9eb3-53b36e933dda from this chassis (sb_readonly=0)
Dec 06 07:15:56 compute-1 ovn_controller[130279]: 2025-12-06T07:15:56Z|00236|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:15:56 compute-1 ovn_controller[130279]: 2025-12-06T07:15:56Z|00237|binding|INFO|Releasing lport ad9e5490-4d4e-46f0-9eb3-53b36e933dda from this chassis (sb_readonly=0)
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.751 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.916 226109 DEBUG nova.compute.manager [req-60bc0682-ce92-4406-812e-1d5f3f82ac34 req-3b93e080-450a-4984-8958-d20cb0becfdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-changed-9edcc777-b052-41d1-a28c-dbfe02a70e3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.916 226109 DEBUG nova.compute.manager [req-60bc0682-ce92-4406-812e-1d5f3f82ac34 req-3b93e080-450a-4984-8958-d20cb0becfdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Refreshing instance network info cache due to event network-changed-9edcc777-b052-41d1-a28c-dbfe02a70e3b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.916 226109 DEBUG oslo_concurrency.lockutils [req-60bc0682-ce92-4406-812e-1d5f3f82ac34 req-3b93e080-450a-4984-8958-d20cb0becfdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.917 226109 DEBUG oslo_concurrency.lockutils [req-60bc0682-ce92-4406-812e-1d5f3f82ac34 req-3b93e080-450a-4984-8958-d20cb0becfdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:56 compute-1 nova_compute[226101]: 2025-12-06 07:15:56.917 226109 DEBUG nova.network.neutron [req-60bc0682-ce92-4406-812e-1d5f3f82ac34 req-3b93e080-450a-4984-8958-d20cb0becfdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Refreshing network info cache for port 9edcc777-b052-41d1-a28c-dbfe02a70e3b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:15:57 compute-1 nova_compute[226101]: 2025-12-06 07:15:57.999 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Updating instance_info_cache with network_info: [{"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.018 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.019 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.019 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.020 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.020 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.020 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.375 226109 DEBUG nova.network.neutron [req-60bc0682-ce92-4406-812e-1d5f3f82ac34 req-3b93e080-450a-4984-8958-d20cb0becfdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Updated VIF entry in instance network info cache for port 9edcc777-b052-41d1-a28c-dbfe02a70e3b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.376 226109 DEBUG nova.network.neutron [req-60bc0682-ce92-4406-812e-1d5f3f82ac34 req-3b93e080-450a-4984-8958-d20cb0becfdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Updating instance_info_cache with network_info: [{"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.401 226109 DEBUG oslo_concurrency.lockutils [req-60bc0682-ce92-4406-812e-1d5f3f82ac34 req-3b93e080-450a-4984-8958-d20cb0becfdf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
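The instance_info_cache entries logged above are plain JSON, so the addresses in them (fixed 10.100.0.4 plus floating 192.168.122.242 for instance 12903f3d-...) can be pulled out with a small walker. A sketch assuming the structure exactly as logged:

    def collect_ips(network_info):
        """Walk a nova network_info list (shape as logged above) and
        return (fixed_ips, floating_ips)."""
        fixed, floating = [], []
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    fixed.append(ip["address"])
                    floating += [f["address"] for f in ip.get("floating_ips", [])]
        return fixed, floating

    # Trimmed-down version of the 12903f3d-... cache entry above:
    sample = [{"network": {"subnets": [{"ips": [
        {"address": "10.100.0.4",
         "floating_ips": [{"address": "192.168.122.242"}]}]}]}}]
    print(collect_ips(sample))  # (['10.100.0.4'], ['192.168.122.242'])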
Dec 06 07:15:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:58.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:15:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:58 compute-1 ceph-mon[81689]: pgmap v1736: 305 pgs: 305 active+clean; 346 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.6 MiB/s wr, 162 op/s
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.813 226109 DEBUG oslo_concurrency.lockutils [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.814 226109 DEBUG oslo_concurrency.lockutils [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.814 226109 INFO nova.compute.manager [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Rebooting instance
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.832 226109 DEBUG oslo_concurrency.lockutils [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.832 226109 DEBUG oslo_concurrency.lockutils [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquired lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:58 compute-1 nova_compute[226101]: 2025-12-06 07:15:58.833 226109 DEBUG nova.network.neutron [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:15:59 compute-1 nova_compute[226101]: 2025-12-06 07:15:59.007 226109 DEBUG nova.compute.manager [req-8059e48c-f3e7-4897-805a-70b245a67eb6 req-0ab52d22-dc12-4b4c-91fa-16c0513db01b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-changed-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:59 compute-1 nova_compute[226101]: 2025-12-06 07:15:59.008 226109 DEBUG nova.compute.manager [req-8059e48c-f3e7-4897-805a-70b245a67eb6 req-0ab52d22-dc12-4b4c-91fa-16c0513db01b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Refreshing instance network info cache due to event network-changed-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:15:59 compute-1 nova_compute[226101]: 2025-12-06 07:15:59.009 226109 DEBUG oslo_concurrency.lockutils [req-8059e48c-f3e7-4897-805a-70b245a67eb6 req-0ab52d22-dc12-4b4c-91fa-16c0513db01b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:59 compute-1 nova_compute[226101]: 2025-12-06 07:15:59.089 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:59 compute-1 ceph-mon[81689]: pgmap v1737: 305 pgs: 305 active+clean; 295 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 248 KiB/s wr, 213 op/s
Dec 06 07:15:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/404538913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:59 compute-1 nova_compute[226101]: 2025-12-06 07:15:59.656 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:59 compute-1 ovn_controller[130279]: 2025-12-06T07:15:59Z|00238|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:15:59 compute-1 ovn_controller[130279]: 2025-12-06T07:15:59Z|00239|binding|INFO|Releasing lport ad9e5490-4d4e-46f0-9eb3-53b36e933dda from this chassis (sb_readonly=0)
Dec 06 07:15:59 compute-1 nova_compute[226101]: 2025-12-06 07:15:59.753 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.017 226109 DEBUG nova.network.neutron [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Updating instance_info_cache with network_info: [{"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.059 226109 DEBUG oslo_concurrency.lockutils [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Releasing lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.060 226109 DEBUG nova.compute.manager [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.060 226109 DEBUG oslo_concurrency.lockutils [req-8059e48c-f3e7-4897-805a-70b245a67eb6 req-0ab52d22-dc12-4b4c-91fa-16c0513db01b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.061 226109 DEBUG nova.network.neutron [req-8059e48c-f3e7-4897-805a-70b245a67eb6 req-0ab52d22-dc12-4b4c-91fa-16c0513db01b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Refreshing network info cache for port f05b6f8b-6a82-4d64-a5c6-057b8f9368ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:16:00 compute-1 kernel: tapf05b6f8b-6a (unregistering): left promiscuous mode
Dec 06 07:16:00 compute-1 NetworkManager[49031]: <info>  [1765005360.2187] device (tapf05b6f8b-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.229 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:00 compute-1 ovn_controller[130279]: 2025-12-06T07:16:00Z|00240|binding|INFO|Releasing lport f05b6f8b-6a82-4d64-a5c6-057b8f9368ea from this chassis (sb_readonly=0)
Dec 06 07:16:00 compute-1 ovn_controller[130279]: 2025-12-06T07:16:00Z|00241|binding|INFO|Setting lport f05b6f8b-6a82-4d64-a5c6-057b8f9368ea down in Southbound
Dec 06 07:16:00 compute-1 ovn_controller[130279]: 2025-12-06T07:16:00Z|00242|binding|INFO|Removing iface tapf05b6f8b-6a ovn-installed in OVS
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.235 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:9e:8a 10.100.0.10'], port_security=['fa:16:3e:6e:9e:8a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0490f82b-373a-4948-9734-3b8cbe80d4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-facf815c-af05-4eae-8215-596b89b048ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ba80f0b33d04d6d9508bc18e9b1914b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '06e30c9c-0168-4e04-b100-fe33575ec890 d8324645-6328-45b1-9190-d3e6f3794514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e725b401-7789-49e6-93b9-c5c8c58adad1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.237 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f05b6f8b-6a82-4d64-a5c6-057b8f9368ea in datapath facf815c-af05-4eae-8215-596b89b048ab unbound from our chassis
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.238 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network facf815c-af05-4eae-8215-596b89b048ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.239 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3b317ad4-c953-40fb-abe2-974ff59fb631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.240 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-facf815c-af05-4eae-8215-596b89b048ab namespace which is not needed anymore
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.251 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:00 compute-1 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000047.scope: Deactivated successfully.
Dec 06 07:16:00 compute-1 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000047.scope: Consumed 5.797s CPU time.
Dec 06 07:16:00 compute-1 systemd-machined[190302]: Machine qemu-32-instance-00000047 terminated.
Dec 06 07:16:00 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251534]: [NOTICE]   (251538) : haproxy version is 2.8.14-c23fe91
Dec 06 07:16:00 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251534]: [NOTICE]   (251538) : path to executable is /usr/sbin/haproxy
Dec 06 07:16:00 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251534]: [WARNING]  (251538) : Exiting Master process...
Dec 06 07:16:00 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251534]: [WARNING]  (251538) : Exiting Master process...
Dec 06 07:16:00 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251534]: [ALERT]    (251538) : Current worker (251540) exited with code 143 (Terminated)
Dec 06 07:16:00 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251534]: [WARNING]  (251538) : All workers exited. Exiting... (0)
Dec 06 07:16:00 compute-1 systemd[1]: libpod-2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94.scope: Deactivated successfully.
Dec 06 07:16:00 compute-1 podman[251599]: 2025-12-06 07:16:00.368756703 +0000 UTC m=+0.043569498 container died 2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 07:16:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.392 226109 INFO nova.virt.libvirt.driver [-] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Instance destroyed successfully.
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.393 226109 DEBUG nova.objects.instance [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lazy-loading 'resources' on Instance uuid 0490f82b-373a-4948-9734-3b8cbe80d4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:00 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94-userdata-shm.mount: Deactivated successfully.
Dec 06 07:16:00 compute-1 systemd[1]: var-lib-containers-storage-overlay-4bf04cefd5abadad38c86e72a45b58f8551fe61e3cfec0a85819a5edacaf555c-merged.mount: Deactivated successfully.
Dec 06 07:16:00 compute-1 podman[251599]: 2025-12-06 07:16:00.413586233 +0000 UTC m=+0.088399028 container cleanup 2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.419 226109 DEBUG nova.virt.libvirt.vif [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-786816242',display_name='tempest-SecurityGroupsTestJSON-server-786816242',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-786816242',id=71,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ba80f0b33d04d6d9508bc18e9b1914b',ramdisk_id='',reservation_id='r-0ut37c0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-409098844',owner_user_name='tempest-SecurityGroupsTestJSON-409098844-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:16:00Z,user_data=None,user_id='67604a2c995248f8931119287d416e1c',uuid=0490f82b-373a-4948-9734-3b8cbe80d4d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.420 226109 DEBUG nova.network.os_vif_util [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converting VIF {"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:00 compute-1 systemd[1]: libpod-conmon-2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94.scope: Deactivated successfully.
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.421 226109 DEBUG nova.network.os_vif_util [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
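Note: the two entries above show nova translating its legacy VIF dict into an os-vif object before handing it to the ovs plugin. A minimal sketch of building the same VIFOpenVSwitch by hand, with field values copied from the "Converted object" line; the nested Network object is omitted, so this is a partial replica for illustration only:

    from os_vif.objects import vif as vif_obj

    # Field values taken from the log line above; network= is left
    # unset, so this object is illustrative rather than a drop-in copy.
    converted = vif_obj.VIFOpenVSwitch(
        id='f05b6f8b-6a82-4d64-a5c6-057b8f9368ea',
        address='fa:16:3e:6e:9e:8a',
        bridge_name='br-int',
        has_traffic_filtering=True,
        plugin='ovs',
        preserve_on_delete=False,
        active=True,
        vif_name='tapf05b6f8b-6a')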
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.422 226109 DEBUG os_vif [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.423 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.424 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf05b6f8b-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.425 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.431 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.434 226109 INFO os_vif [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a')
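Note: unplugging the OVS VIF boils down to the single DelPortCommand logged above. A stand-alone sketch of the same deletion via ovsdbapp, assuming the default local OVSDB socket path (the connection string and timeout are illustrative):

    from ovs.db import idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local socket

    # Build an IDL connection against the Open_vSwitch schema and wrap
    # it in the ovsdbapp API that nova/os-vif use under the hood.
    helper = idlutils.get_schema_helper(OVSDB, 'Open_vSwitch')
    helper.register_all()
    api = impl_idl.OvsdbIdl(
        connection.Connection(idl.Idl(OVSDB, helper), timeout=10))

    # Same operation as DelPortCommand(port=tapf05b6f8b-6a,
    # bridge=br-int, if_exists=True) in the transaction above.
    api.del_port('tapf05b6f8b-6a', bridge='br-int',
                 if_exists=True).execute(check_error=True)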
Dec 06 07:16:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:00.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.443 226109 DEBUG nova.virt.libvirt.driver [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Start _get_guest_xml network_info=[{"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.447 226109 WARNING nova.virt.libvirt.driver [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.452 226109 DEBUG nova.virt.libvirt.host [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.453 226109 DEBUG nova.virt.libvirt.host [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.456 226109 DEBUG nova.virt.libvirt.host [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.457 226109 DEBUG nova.virt.libvirt.host [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
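Note: the two probes above look for a CPU controller first under cgroups v1 and, failing that, under the cgroups v2 unified hierarchy, where this host finds it. A minimal stand-alone version of the v2 check, assuming the standard /sys/fs/cgroup mount point (a sketch of the idea, not nova's exact code path):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # Under cgroups v2 the root of the unified hierarchy lists its
        # enabled controllers in one space-separated file.
        try:
            controllers = Path(root, 'cgroup.controllers').read_text()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted here
        return 'cpu' in controllers.split()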
Dec 06 07:16:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.458 226109 DEBUG nova.virt.libvirt.driver [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.458 226109 DEBUG nova.virt.hardware [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.459 226109 DEBUG nova.virt.hardware [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.459 226109 DEBUG nova.virt.hardware [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.460 226109 DEBUG nova.virt.hardware [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.460 226109 DEBUG nova.virt.hardware [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.460 226109 DEBUG nova.virt.hardware [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.460 226109 DEBUG nova.virt.hardware [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.461 226109 DEBUG nova.virt.hardware [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.461 226109 DEBUG nova.virt.hardware [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.461 226109 DEBUG nova.virt.hardware [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.461 226109 DEBUG nova.virt.hardware [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
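Note: with the flavor and image expressing no topology constraints (preferences all 0:0:0, limits 65536 per axis), nova enumerates the (sockets, cores, threads) factorizations of the vCPU count; for a single vCPU the search collapses to 1:1:1 as logged. An illustrative enumeration of that search space (a sketch of the idea, not nova.virt.hardware's exact algorithm):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) triple whose product is
        # exactly the vCPU count and which respects the per-axis limits.
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            remaining = vcpus // sockets
            for cores in range(1, min(remaining, max_cores) + 1):
                if remaining % cores:
                    continue
                threads = remaining // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log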
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.462 226109 DEBUG nova.objects.instance [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0490f82b-373a-4948-9734-3b8cbe80d4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:00 compute-1 podman[251642]: 2025-12-06 07:16:00.481397114 +0000 UTC m=+0.044308087 container remove 2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.484 226109 DEBUG oslo_concurrency.processutils [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.487 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e092dc44-c075-48ee-9fd0-85fd90aaf3f6]: (4, ('Sat Dec  6 07:16:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab (2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94)\n2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94\nSat Dec  6 07:16:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab (2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94)\n2c4c42e43c909aec77b07947b1ae2ed607ad637d1cd8cc9e3e88ea287c38aa94\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.488 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[605b6172-7b93-45ff-a5a6-4713d33c4b6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.489 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfacf815c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:00 compute-1 kernel: tapfacf815c-a0: left promiscuous mode
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.496 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[eab095fc-016b-426a-b05a-733d12449a3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.508 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[09fae4e2-768f-4bd2-9625-2f5f376fb7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.510 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[767ebb5f-2941-4aa9-9ae1-76d16e9f1b28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.515 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.518 226109 DEBUG nova.compute.manager [req-444afa49-73c5-4a6b-85e7-83cc91d2daf5 req-4650b46d-3b05-4977-b13f-c4223ac47a31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-vif-unplugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.519 226109 DEBUG oslo_concurrency.lockutils [req-444afa49-73c5-4a6b-85e7-83cc91d2daf5 req-4650b46d-3b05-4977-b13f-c4223ac47a31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.519 226109 DEBUG oslo_concurrency.lockutils [req-444afa49-73c5-4a6b-85e7-83cc91d2daf5 req-4650b46d-3b05-4977-b13f-c4223ac47a31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.519 226109 DEBUG oslo_concurrency.lockutils [req-444afa49-73c5-4a6b-85e7-83cc91d2daf5 req-4650b46d-3b05-4977-b13f-c4223ac47a31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.520 226109 DEBUG nova.compute.manager [req-444afa49-73c5-4a6b-85e7-83cc91d2daf5 req-4650b46d-3b05-4977-b13f-c4223ac47a31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] No waiting events found dispatching network-vif-unplugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.520 226109 WARNING nova.compute.manager [req-444afa49-73c5-4a6b-85e7-83cc91d2daf5 req-4650b46d-3b05-4977-b13f-c4223ac47a31 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received unexpected event network-vif-unplugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea for instance with vm_state active and task_state reboot_started_hard.
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.524 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[99a083ed-d06a-4d6f-b8ec-78fef1674294]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563251, 'reachable_time': 42161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251655, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:00 compute-1 systemd[1]: run-netns-ovnmeta\x2dfacf815c\x2daf05\x2d4eae\x2d8215\x2d596b89b048ab.mount: Deactivated successfully.
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.529 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-facf815c-af05-4eae-8215-596b89b048ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:16:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:00.530 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8bc115-a5e6-4744-8448-74843cbe765f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
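Note: the entries above are the metadata agent tearing down the per-network plumbing: stop and remove the neutron-haproxy-ovnmeta container, delete its tap port, then remove the ovnmeta- namespace itself. Neutron's privileged ip_lib does that last step through pyroute2; a minimal equivalent, with the namespace name copied from the log purely for illustration:

    from pyroute2 import netns  # the library neutron's privsep helper wraps

    NS = 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab'  # from the log

    # Remove the namespace if it still exists; mirrors remove_netns()
    # in neutron/privileged/agent/linux/ip_lib.py referenced above.
    if NS in netns.listnetns():
        netns.remove(NS)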
Dec 06 07:16:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2563791832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:16:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2145519798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.938 226109 DEBUG oslo_concurrency.processutils [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:00 compute-1 nova_compute[226101]: 2025-12-06 07:16:00.978 226109 DEBUG oslo_concurrency.processutils [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.338 226109 DEBUG nova.network.neutron [req-8059e48c-f3e7-4897-805a-70b245a67eb6 req-0ab52d22-dc12-4b4c-91fa-16c0513db01b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Updated VIF entry in instance network info cache for port f05b6f8b-6a82-4d64-a5c6-057b8f9368ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.339 226109 DEBUG nova.network.neutron [req-8059e48c-f3e7-4897-805a-70b245a67eb6 req-0ab52d22-dc12-4b4c-91fa-16c0513db01b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Updating instance_info_cache with network_info: [{"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.370 226109 DEBUG oslo_concurrency.lockutils [req-8059e48c-f3e7-4897-805a-70b245a67eb6 req-0ab52d22-dc12-4b4c-91fa-16c0513db01b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:16:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:16:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/638429195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.427 226109 DEBUG oslo_concurrency.processutils [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
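Note: before regenerating the guest XML, nova shells out to `ceph mon dump` (twice in this trace) to learn the monitor addresses that end up in the <host> elements of the <source protocol="rbd"> stanzas below. A hedged sketch of that discovery step, assuming the monmap JSON carries a top-level 'mons' list whose 'public_addr' entries look like addr:port/nonce:

    import json
    import subprocess

    def rbd_mon_hosts(client_id='openstack', conf='/etc/ceph/ceph.conf'):
        # Same command the log shows oslo.concurrency running.
        out = subprocess.run(
            ['ceph', 'mon', 'dump', '--format=json',
             '--id', client_id, '--conf', conf],
            check=True, capture_output=True, text=True).stdout
        monmap = json.loads(out)
        # 'public_addr' looks like '192.168.122.100:6789/0'; keep addr:port.
        return [mon['public_addr'].split('/')[0] for mon in monmap['mons']]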
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.428 226109 DEBUG nova.virt.libvirt.vif [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-786816242',display_name='tempest-SecurityGroupsTestJSON-server-786816242',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-786816242',id=71,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ba80f0b33d04d6d9508bc18e9b1914b',ramdisk_id='',reservation_id='r-0ut37c0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-409098844',owner_user_name='tempest-SecurityGroupsTestJSON-409098844-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:16:00Z,user_data=None,user_id='67604a2c995248f8931119287d416e1c',uuid=0490f82b-373a-4948-9734-3b8cbe80d4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.429 226109 DEBUG nova.network.os_vif_util [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converting VIF {"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.430 226109 DEBUG nova.network.os_vif_util [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.431 226109 DEBUG nova.objects.instance [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lazy-loading 'pci_devices' on Instance uuid 0490f82b-373a-4948-9734-3b8cbe80d4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.448 226109 DEBUG nova.virt.libvirt.driver [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:16:01 compute-1 nova_compute[226101]:   <uuid>0490f82b-373a-4948-9734-3b8cbe80d4d4</uuid>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   <name>instance-00000047</name>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <nova:name>tempest-SecurityGroupsTestJSON-server-786816242</nova:name>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:16:00</nova:creationTime>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <nova:user uuid="67604a2c995248f8931119287d416e1c">tempest-SecurityGroupsTestJSON-409098844-project-member</nova:user>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <nova:project uuid="4ba80f0b33d04d6d9508bc18e9b1914b">tempest-SecurityGroupsTestJSON-409098844</nova:project>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <nova:port uuid="f05b6f8b-6a82-4d64-a5c6-057b8f9368ea">
Dec 06 07:16:01 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <system>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <entry name="serial">0490f82b-373a-4948-9734-3b8cbe80d4d4</entry>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <entry name="uuid">0490f82b-373a-4948-9734-3b8cbe80d4d4</entry>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     </system>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   <os>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   </os>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   <features>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   </features>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0490f82b-373a-4948-9734-3b8cbe80d4d4_disk">
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       </source>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0490f82b-373a-4948-9734-3b8cbe80d4d4_disk.config">
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       </source>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:16:01 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:6e:9e:8a"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <target dev="tapf05b6f8b-6a"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/0490f82b-373a-4948-9734-3b8cbe80d4d4/console.log" append="off"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <video>
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     </video>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <input type="keyboard" bus="usb"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:16:01 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:16:01 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:16:01 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:16:01 compute-1 nova_compute[226101]: </domain>
Dec 06 07:16:01 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
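Note: the regenerated <domain> XML above is what libvirt will boot for the hard reboot: an RBD-backed root disk and config-drive CDROM pointing at the three Ceph monitors, plus a virtio interface targeting the tap device that is about to be re-plugged. A small sketch that pulls those pieces back out of a saved copy of the XML (the domain.xml filename is hypothetical):

    import xml.etree.ElementTree as ET

    tree = ET.parse('domain.xml')  # hypothetical dump of the XML above

    # RBD sources: image name plus the monitor <host> endpoints.
    for src in tree.findall('./devices/disk/source[@protocol="rbd"]'):
        hosts = [(h.get('name'), h.get('port')) for h in src.findall('host')]
        print(src.get('name'), hosts)

    # The tap device the <interface type="ethernet"> attaches to.
    print(tree.find('./devices/interface/target').get('dev'))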
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.455 226109 DEBUG nova.virt.libvirt.driver [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.455 226109 DEBUG nova.virt.libvirt.driver [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.456 226109 DEBUG nova.virt.libvirt.vif [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-786816242',display_name='tempest-SecurityGroupsTestJSON-server-786816242',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-786816242',id=71,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='4ba80f0b33d04d6d9508bc18e9b1914b',ramdisk_id='',reservation_id='r-0ut37c0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-409098844',owner_user_name='tempest-SecurityGroupsTestJSON-409098844-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:16:00Z,user_data=None,user_id='67604a2c995248f8931119287d416e1c',uuid=0490f82b-373a-4948-9734-3b8cbe80d4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.457 226109 DEBUG nova.network.os_vif_util [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converting VIF {"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.457 226109 DEBUG nova.network.os_vif_util [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.458 226109 DEBUG os_vif [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.458 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.459 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.459 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.462 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.462 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf05b6f8b-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.462 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf05b6f8b-6a, col_values=(('external_ids', {'iface-id': 'f05b6f8b-6a82-4d64-a5c6-057b8f9368ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:9e:8a', 'vm-uuid': '0490f82b-373a-4948-9734-3b8cbe80d4d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:01 compute-1 NetworkManager[49031]: <info>  [1765005361.4647] manager: (tapf05b6f8b-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.463 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.468 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.469 226109 INFO os_vif [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a')
Dec 06 07:16:01 compute-1 kernel: tapf05b6f8b-6a: entered promiscuous mode
Dec 06 07:16:01 compute-1 systemd-udevd[251578]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:16:01 compute-1 NetworkManager[49031]: <info>  [1765005361.5367] manager: (tapf05b6f8b-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Dec 06 07:16:01 compute-1 ovn_controller[130279]: 2025-12-06T07:16:01Z|00243|binding|INFO|Claiming lport f05b6f8b-6a82-4d64-a5c6-057b8f9368ea for this chassis.
Dec 06 07:16:01 compute-1 ovn_controller[130279]: 2025-12-06T07:16:01Z|00244|binding|INFO|f05b6f8b-6a82-4d64-a5c6-057b8f9368ea: Claiming fa:16:3e:6e:9e:8a 10.100.0.10
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.544 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:01 compute-1 NetworkManager[49031]: <info>  [1765005361.5468] device (tapf05b6f8b-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.547 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:9e:8a 10.100.0.10'], port_security=['fa:16:3e:6e:9e:8a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0490f82b-373a-4948-9734-3b8cbe80d4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-facf815c-af05-4eae-8215-596b89b048ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ba80f0b33d04d6d9508bc18e9b1914b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '06e30c9c-0168-4e04-b100-fe33575ec890 d8324645-6328-45b1-9190-d3e6f3794514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e725b401-7789-49e6-93b9-c5c8c58adad1, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:16:01 compute-1 NetworkManager[49031]: <info>  [1765005361.5479] device (tapf05b6f8b-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.548 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f05b6f8b-6a82-4d64-a5c6-057b8f9368ea in datapath facf815c-af05-4eae-8215-596b89b048ab bound to our chassis
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.549 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network facf815c-af05-4eae-8215-596b89b048ab
Dec 06 07:16:01 compute-1 ovn_controller[130279]: 2025-12-06T07:16:01Z|00245|binding|INFO|Setting lport f05b6f8b-6a82-4d64-a5c6-057b8f9368ea ovn-installed in OVS
Dec 06 07:16:01 compute-1 ovn_controller[130279]: 2025-12-06T07:16:01Z|00246|binding|INFO|Setting lport f05b6f8b-6a82-4d64-a5c6-057b8f9368ea up in Southbound
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.556 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.558 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca5107c-8823-4d59-a5f2-175f5d2d077b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.559 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfacf815c-a1 in ovnmeta-facf815c-af05-4eae-8215-596b89b048ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.560 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.562 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfacf815c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.562 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[76f0eaa5-466b-4e33-87aa-224d35ddd7a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.565 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1754b8-4315-43f6-be84-615bbc1d52af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.574 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb6bbac-7481-44db-a7e3-31912927f58e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 systemd-machined[190302]: New machine qemu-33-instance-00000047.
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.588 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8702fc-eab2-4751-9dfd-5d47ad53ee30]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 systemd[1]: Started Virtual Machine qemu-33-instance-00000047.
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.618 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ce54ea42-eca1-48be-a832-02f0b3029b8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.623 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6c32eb-37b3-4216-a952-c0bf430b8e13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 NetworkManager[49031]: <info>  [1765005361.6274] manager: (tapfacf815c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.633 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.634 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.634 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.652 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb739fe-773a-43c3-9d7b-f7f33209c77c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.654 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6e528833-16e7-403a-83f4-06c1744f7175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 NetworkManager[49031]: <info>  [1765005361.6717] device (tapfacf815c-a0): carrier: link connected
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.675 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebec9be-2e83-4b3c-8226-0f51158f1938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.688 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[af21f3d3-a121-47b1-b51a-d98874aa5550]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfacf815c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:6f:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564117, 'reachable_time': 40321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251761, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.708 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[04c0a390-9d18-4257-b8aa-d69a833d380d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:6f88'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564117, 'tstamp': 564117}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251762, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.723 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9cffb6-c76c-41ba-ba2f-2b7027413aef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfacf815c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:6f:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564117, 'reachable_time': 40321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251763, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.750 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dac8d143-c6b2-415e-9c1f-8d20143adb42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.793 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9871fd-8bb7-4f62-a72c-1f50226400df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.794 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfacf815c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.794 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.795 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfacf815c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.796 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:01 compute-1 kernel: tapfacf815c-a0: entered promiscuous mode
Dec 06 07:16:01 compute-1 NetworkManager[49031]: <info>  [1765005361.7973] manager: (tapfacf815c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.799 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.803 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfacf815c-a0, col_values=(('external_ids', {'iface-id': 'ad9e5490-4d4e-46f0-9eb3-53b36e933dda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:01 compute-1 ovn_controller[130279]: 2025-12-06T07:16:01Z|00247|binding|INFO|Releasing lport ad9e5490-4d4e-46f0-9eb3-53b36e933dda from this chassis (sb_readonly=0)
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.804 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.806 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/facf815c-af05-4eae-8215-596b89b048ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/facf815c-af05-4eae-8215-596b89b048ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.807 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4844a25f-a9dd-41a7-a05e-6f112a243bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.808 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-facf815c-af05-4eae-8215-596b89b048ab
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/facf815c-af05-4eae-8215-596b89b048ab.pid.haproxy
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID facf815c-af05-4eae-8215-596b89b048ab
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:01.808 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'env', 'PROCESS_TAG=haproxy-facf815c-af05-4eae-8215-596b89b048ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/facf815c-af05-4eae-8215-596b89b048ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:16:01 compute-1 nova_compute[226101]: 2025-12-06 07:16:01.816 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:02 compute-1 podman[251794]: 2025-12-06 07:16:02.133778039 +0000 UTC m=+0.047913486 container create 190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:16:02 compute-1 systemd[1]: Started libpod-conmon-190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005.scope.
Dec 06 07:16:02 compute-1 podman[251794]: 2025-12-06 07:16:02.111120516 +0000 UTC m=+0.025255973 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:16:02 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:16:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeddd36b5c706815035d4378f5963448aa8cacfad8b6050538fab46fb47c1f7d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:02 compute-1 podman[251794]: 2025-12-06 07:16:02.252032612 +0000 UTC m=+0.166168089 container init 190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:16:02 compute-1 podman[251794]: 2025-12-06 07:16:02.264251522 +0000 UTC m=+0.178386969 container start 190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:16:02 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251809]: [NOTICE]   (251829) : New worker (251840) forked
Dec 06 07:16:02 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251809]: [NOTICE]   (251829) : Loading success.
Dec 06 07:16:02 compute-1 ceph-mon[81689]: pgmap v1738: 305 pgs: 305 active+clean; 295 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 64 KiB/s wr, 185 op/s
Dec 06 07:16:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2145519798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/638429195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:02.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:02.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.721 226109 DEBUG nova.compute.manager [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.721 226109 DEBUG oslo_concurrency.lockutils [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.722 226109 DEBUG oslo_concurrency.lockutils [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.722 226109 DEBUG oslo_concurrency.lockutils [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.722 226109 DEBUG nova.compute.manager [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] No waiting events found dispatching network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.722 226109 WARNING nova.compute.manager [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received unexpected event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea for instance with vm_state active and task_state reboot_started_hard.
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.723 226109 DEBUG nova.compute.manager [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.723 226109 DEBUG oslo_concurrency.lockutils [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.723 226109 DEBUG oslo_concurrency.lockutils [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.723 226109 DEBUG oslo_concurrency.lockutils [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.724 226109 DEBUG nova.compute.manager [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] No waiting events found dispatching network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.724 226109 WARNING nova.compute.manager [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received unexpected event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea for instance with vm_state active and task_state reboot_started_hard.
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.724 226109 DEBUG nova.compute.manager [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.724 226109 DEBUG oslo_concurrency.lockutils [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.725 226109 DEBUG oslo_concurrency.lockutils [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.725 226109 DEBUG oslo_concurrency.lockutils [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.725 226109 DEBUG nova.compute.manager [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] No waiting events found dispatching network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.725 226109 WARNING nova.compute.manager [req-a09d2f26-d946-4f2b-8779-193cbbb9e5d5 req-5ab8cfcb-d1c4-4205-be87-11c579f685db 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received unexpected event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea for instance with vm_state active and task_state reboot_started_hard.
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.881 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 0490f82b-373a-4948-9734-3b8cbe80d4d4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.881 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005362.88029, 0490f82b-373a-4948-9734-3b8cbe80d4d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.881 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] VM Resumed (Lifecycle Event)
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.885 226109 DEBUG nova.compute.manager [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.887 226109 INFO nova.virt.libvirt.driver [-] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Instance rebooted successfully.
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.888 226109 DEBUG nova.compute.manager [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.960 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.963 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.996 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.996 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005362.8844411, 0490f82b-373a-4948-9734-3b8cbe80d4d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:16:02 compute-1 nova_compute[226101]: 2025-12-06 07:16:02.997 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] VM Started (Lifecycle Event)
Dec 06 07:16:03 compute-1 nova_compute[226101]: 2025-12-06 07:16:03.001 226109 DEBUG oslo_concurrency.lockutils [None req-edd2742c-2304-4ef6-8b34-37e0d7dfb693 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:03 compute-1 nova_compute[226101]: 2025-12-06 07:16:03.016 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:16:03 compute-1 nova_compute[226101]: 2025-12-06 07:16:03.018 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:16:03 compute-1 ceph-mon[81689]: pgmap v1739: 305 pgs: 305 active+clean; 295 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 75 KiB/s wr, 218 op/s
Dec 06 07:16:04 compute-1 nova_compute[226101]: 2025-12-06 07:16:04.091 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:04.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:04.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:04 compute-1 nova_compute[226101]: 2025-12-06 07:16:04.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:05 compute-1 ceph-mon[81689]: pgmap v1740: 305 pgs: 305 active+clean; 295 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 41 KiB/s wr, 199 op/s
Dec 06 07:16:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:06.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:06.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:06 compute-1 nova_compute[226101]: 2025-12-06 07:16:06.465 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3858770152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2759368417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:07 compute-1 ovn_controller[130279]: 2025-12-06T07:16:07Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:63:76 10.100.0.4
Dec 06 07:16:07 compute-1 ovn_controller[130279]: 2025-12-06T07:16:07Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:63:76 10.100.0.4
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.175 226109 DEBUG nova.compute.manager [req-73d7520c-c7e8-424b-a0fd-33558c4935f4 req-54799f73-8a79-402d-9a01-e32fe6c35e9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-changed-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.175 226109 DEBUG nova.compute.manager [req-73d7520c-c7e8-424b-a0fd-33558c4935f4 req-54799f73-8a79-402d-9a01-e32fe6c35e9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Refreshing instance network info cache due to event network-changed-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.176 226109 DEBUG oslo_concurrency.lockutils [req-73d7520c-c7e8-424b-a0fd-33558c4935f4 req-54799f73-8a79-402d-9a01-e32fe6c35e9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.176 226109 DEBUG oslo_concurrency.lockutils [req-73d7520c-c7e8-424b-a0fd-33558c4935f4 req-54799f73-8a79-402d-9a01-e32fe6c35e9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.177 226109 DEBUG nova.network.neutron [req-73d7520c-c7e8-424b-a0fd-33558c4935f4 req-54799f73-8a79-402d-9a01-e32fe6c35e9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Refreshing network info cache for port f05b6f8b-6a82-4d64-a5c6-057b8f9368ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.699 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.756 226109 DEBUG oslo_concurrency.lockutils [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.757 226109 DEBUG oslo_concurrency.lockutils [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.758 226109 DEBUG oslo_concurrency.lockutils [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.758 226109 DEBUG oslo_concurrency.lockutils [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.759 226109 DEBUG oslo_concurrency.lockutils [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.760 226109 INFO nova.compute.manager [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Terminating instance
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.761 226109 DEBUG nova.compute.manager [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:16:07 compute-1 kernel: tapf05b6f8b-6a (unregistering): left promiscuous mode
Dec 06 07:16:07 compute-1 NetworkManager[49031]: <info>  [1765005367.7998] device (tapf05b6f8b-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:16:07 compute-1 ovn_controller[130279]: 2025-12-06T07:16:07Z|00248|binding|INFO|Releasing lport f05b6f8b-6a82-4d64-a5c6-057b8f9368ea from this chassis (sb_readonly=0)
Dec 06 07:16:07 compute-1 ovn_controller[130279]: 2025-12-06T07:16:07Z|00249|binding|INFO|Setting lport f05b6f8b-6a82-4d64-a5c6-057b8f9368ea down in Southbound
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.803 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:07 compute-1 ovn_controller[130279]: 2025-12-06T07:16:07Z|00250|binding|INFO|Removing iface tapf05b6f8b-6a ovn-installed in OVS
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.806 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:07.810 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:9e:8a 10.100.0.10'], port_security=['fa:16:3e:6e:9e:8a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0490f82b-373a-4948-9734-3b8cbe80d4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-facf815c-af05-4eae-8215-596b89b048ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ba80f0b33d04d6d9508bc18e9b1914b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '06e30c9c-0168-4e04-b100-fe33575ec890 bc66da4a-ae17-444a-958c-3d427692adb5 d8324645-6328-45b1-9190-d3e6f3794514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e725b401-7789-49e6-93b9-c5c8c58adad1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:16:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:07.812 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f05b6f8b-6a82-4d64-a5c6-057b8f9368ea in datapath facf815c-af05-4eae-8215-596b89b048ab unbound from our chassis
Dec 06 07:16:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:07.813 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network facf815c-af05-4eae-8215-596b89b048ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:16:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:07.814 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ae120b-3195-4b13-a37f-c37c94437e17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:07.815 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-facf815c-af05-4eae-8215-596b89b048ab namespace which is not needed anymore
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.873 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:07 compute-1 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000047.scope: Deactivated successfully.
Dec 06 07:16:07 compute-1 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000047.scope: Consumed 5.981s CPU time.
Dec 06 07:16:07 compute-1 systemd-machined[190302]: Machine qemu-33-instance-00000047 terminated.
Dec 06 07:16:07 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251809]: [NOTICE]   (251829) : haproxy version is 2.8.14-c23fe91
Dec 06 07:16:07 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251809]: [NOTICE]   (251829) : path to executable is /usr/sbin/haproxy
Dec 06 07:16:07 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251809]: [WARNING]  (251829) : Exiting Master process...
Dec 06 07:16:07 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251809]: [ALERT]    (251829) : Current worker (251840) exited with code 143 (Terminated)
Dec 06 07:16:07 compute-1 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[251809]: [WARNING]  (251829) : All workers exited. Exiting... (0)
Dec 06 07:16:07 compute-1 systemd[1]: libpod-190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005.scope: Deactivated successfully.
Dec 06 07:16:07 compute-1 podman[251891]: 2025-12-06 07:16:07.98334089 +0000 UTC m=+0.045242892 container died 190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.998 226109 INFO nova.virt.libvirt.driver [-] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Instance destroyed successfully.
Dec 06 07:16:07 compute-1 nova_compute[226101]: 2025-12-06 07:16:07.999 226109 DEBUG nova.objects.instance [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lazy-loading 'resources' on Instance uuid 0490f82b-373a-4948-9734-3b8cbe80d4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.014 226109 DEBUG nova.virt.libvirt.vif [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:15:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-786816242',display_name='tempest-SecurityGroupsTestJSON-server-786816242',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-786816242',id=71,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ba80f0b33d04d6d9508bc18e9b1914b',ramdisk_id='',reservation_id='r-0ut37c0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-409098844',owner_user_name='tempest-SecurityGroupsTestJSON-409098844-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:16:02Z,user_data=None,user_id='67604a2c995248f8931119287d416e1c',uuid=0490f82b-373a-4948-9734-3b8cbe80d4d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.014 226109 DEBUG nova.network.os_vif_util [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converting VIF {"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.015 226109 DEBUG nova.network.os_vif_util [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.015 226109 DEBUG os_vif [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.017 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.018 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf05b6f8b-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.020 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.022 226109 INFO os_vif [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:9e:8a,bridge_name='br-int',has_traffic_filtering=True,id=f05b6f8b-6a82-4d64-a5c6-057b8f9368ea,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05b6f8b-6a')
Dec 06 07:16:08 compute-1 systemd[1]: var-lib-containers-storage-overlay-aeddd36b5c706815035d4378f5963448aa8cacfad8b6050538fab46fb47c1f7d-merged.mount: Deactivated successfully.
Dec 06 07:16:08 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005-userdata-shm.mount: Deactivated successfully.
Dec 06 07:16:08 compute-1 podman[251891]: 2025-12-06 07:16:08.133703561 +0000 UTC m=+0.195605563 container cleanup 190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:16:08 compute-1 systemd[1]: libpod-conmon-190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005.scope: Deactivated successfully.
Dec 06 07:16:08 compute-1 ceph-mon[81689]: pgmap v1741: 305 pgs: 305 active+clean; 295 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 49 KiB/s wr, 221 op/s
Dec 06 07:16:08 compute-1 podman[251952]: 2025-12-06 07:16:08.250489625 +0000 UTC m=+0.096006413 container remove 190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:16:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:08.257 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebf3bcf-1a8c-47f2-b144-0179bb08b947]: (4, ('Sat Dec  6 07:16:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab (190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005)\n190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005\nSat Dec  6 07:16:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab (190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005)\n190623a2f4760384588d860102c2518599a7b71af2d8f79cc595c0ff991dc005\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:08.258 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[18f0df84-48f5-4da1-ae43-2428e8a3800f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:08.259 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfacf815c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.261 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:08 compute-1 kernel: tapfacf815c-a0: left promiscuous mode
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.288 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:08.291 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b82055ea-58d2-47f1-b390-726f614b5208]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:08.302 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[afb706ac-48d9-42dd-b404-c973afbf311f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:08.303 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[640ebc0a-657a-48fe-95c7-b72fc956ae30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:08.318 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd867f3-6818-4ca2-abdb-7d9aa5052718]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564111, 'reachable_time': 41759, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251968, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:08 compute-1 systemd[1]: run-netns-ovnmeta\x2dfacf815c\x2daf05\x2d4eae\x2d8215\x2d596b89b048ab.mount: Deactivated successfully.
Dec 06 07:16:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:08.321 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-facf815c-af05-4eae-8215-596b89b048ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:16:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:08.322 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[602e4521-31fb-45ab-8418-dc103e96f069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:16:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:08.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:16:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:08.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.530 226109 DEBUG nova.network.neutron [req-73d7520c-c7e8-424b-a0fd-33558c4935f4 req-54799f73-8a79-402d-9a01-e32fe6c35e9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Updated VIF entry in instance network info cache for port f05b6f8b-6a82-4d64-a5c6-057b8f9368ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.530 226109 DEBUG nova.network.neutron [req-73d7520c-c7e8-424b-a0fd-33558c4935f4 req-54799f73-8a79-402d-9a01-e32fe6c35e9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Updating instance_info_cache with network_info: [{"id": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "address": "fa:16:3e:6e:9e:8a", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05b6f8b-6a", "ovs_interfaceid": "f05b6f8b-6a82-4d64-a5c6-057b8f9368ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:08 compute-1 nova_compute[226101]: 2025-12-06 07:16:08.555 226109 DEBUG oslo_concurrency.lockutils [req-73d7520c-c7e8-424b-a0fd-33558c4935f4 req-54799f73-8a79-402d-9a01-e32fe6c35e9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0490f82b-373a-4948-9734-3b8cbe80d4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:16:09 compute-1 nova_compute[226101]: 2025-12-06 07:16:09.092 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:09 compute-1 ceph-mon[81689]: pgmap v1742: 305 pgs: 305 active+clean; 304 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.1 MiB/s wr, 228 op/s
Dec 06 07:16:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2814864671' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:16:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:10.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:10 compute-1 nova_compute[226101]: 2025-12-06 07:16:10.748 226109 DEBUG nova.compute.manager [req-9edf5553-e7e3-40ff-b6f6-15af30956fdd req-c6add896-ace3-41de-ad2d-9017d1ec09fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-vif-unplugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:10 compute-1 nova_compute[226101]: 2025-12-06 07:16:10.749 226109 DEBUG oslo_concurrency.lockutils [req-9edf5553-e7e3-40ff-b6f6-15af30956fdd req-c6add896-ace3-41de-ad2d-9017d1ec09fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:10 compute-1 nova_compute[226101]: 2025-12-06 07:16:10.749 226109 DEBUG oslo_concurrency.lockutils [req-9edf5553-e7e3-40ff-b6f6-15af30956fdd req-c6add896-ace3-41de-ad2d-9017d1ec09fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:10 compute-1 nova_compute[226101]: 2025-12-06 07:16:10.750 226109 DEBUG oslo_concurrency.lockutils [req-9edf5553-e7e3-40ff-b6f6-15af30956fdd req-c6add896-ace3-41de-ad2d-9017d1ec09fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:10 compute-1 nova_compute[226101]: 2025-12-06 07:16:10.750 226109 DEBUG nova.compute.manager [req-9edf5553-e7e3-40ff-b6f6-15af30956fdd req-c6add896-ace3-41de-ad2d-9017d1ec09fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] No waiting events found dispatching network-vif-unplugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:10 compute-1 nova_compute[226101]: 2025-12-06 07:16:10.751 226109 DEBUG nova.compute.manager [req-9edf5553-e7e3-40ff-b6f6-15af30956fdd req-c6add896-ace3-41de-ad2d-9017d1ec09fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-vif-unplugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:16:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2814864671' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:16:11 compute-1 nova_compute[226101]: 2025-12-06 07:16:11.445 226109 INFO nova.virt.libvirt.driver [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Deleting instance files /var/lib/nova/instances/0490f82b-373a-4948-9734-3b8cbe80d4d4_del
Dec 06 07:16:11 compute-1 nova_compute[226101]: 2025-12-06 07:16:11.445 226109 INFO nova.virt.libvirt.driver [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Deletion of /var/lib/nova/instances/0490f82b-373a-4948-9734-3b8cbe80d4d4_del complete
Dec 06 07:16:11 compute-1 nova_compute[226101]: 2025-12-06 07:16:11.500 226109 INFO nova.compute.manager [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Took 3.74 seconds to destroy the instance on the hypervisor.
Dec 06 07:16:11 compute-1 nova_compute[226101]: 2025-12-06 07:16:11.500 226109 DEBUG oslo.service.loopingcall [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:16:11 compute-1 nova_compute[226101]: 2025-12-06 07:16:11.501 226109 DEBUG nova.compute.manager [-] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:16:11 compute-1 nova_compute[226101]: 2025-12-06 07:16:11.501 226109 DEBUG nova.network.neutron [-] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:16:12 compute-1 nova_compute[226101]: 2025-12-06 07:16:12.025 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:12 compute-1 ceph-mon[81689]: pgmap v1743: 305 pgs: 305 active+clean; 304 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 135 op/s
Dec 06 07:16:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:12.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:12.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:12 compute-1 nova_compute[226101]: 2025-12-06 07:16:12.796 226109 DEBUG nova.network.neutron [-] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:12 compute-1 nova_compute[226101]: 2025-12-06 07:16:12.810 226109 INFO nova.compute.manager [-] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Took 1.31 seconds to deallocate network for instance.
Dec 06 07:16:12 compute-1 nova_compute[226101]: 2025-12-06 07:16:12.851 226109 DEBUG oslo_concurrency.lockutils [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:12 compute-1 nova_compute[226101]: 2025-12-06 07:16:12.851 226109 DEBUG oslo_concurrency.lockutils [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:12 compute-1 nova_compute[226101]: 2025-12-06 07:16:12.854 226109 DEBUG nova.compute.manager [req-36f32a5d-6116-472e-b9e7-7a9edc0246f1 req-e813fb0e-02aa-4ef5-a840-c59af8f6b9bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:12 compute-1 nova_compute[226101]: 2025-12-06 07:16:12.855 226109 DEBUG oslo_concurrency.lockutils [req-36f32a5d-6116-472e-b9e7-7a9edc0246f1 req-e813fb0e-02aa-4ef5-a840-c59af8f6b9bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:12 compute-1 nova_compute[226101]: 2025-12-06 07:16:12.855 226109 DEBUG oslo_concurrency.lockutils [req-36f32a5d-6116-472e-b9e7-7a9edc0246f1 req-e813fb0e-02aa-4ef5-a840-c59af8f6b9bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:12 compute-1 nova_compute[226101]: 2025-12-06 07:16:12.855 226109 DEBUG oslo_concurrency.lockutils [req-36f32a5d-6116-472e-b9e7-7a9edc0246f1 req-e813fb0e-02aa-4ef5-a840-c59af8f6b9bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:12 compute-1 nova_compute[226101]: 2025-12-06 07:16:12.856 226109 DEBUG nova.compute.manager [req-36f32a5d-6116-472e-b9e7-7a9edc0246f1 req-e813fb0e-02aa-4ef5-a840-c59af8f6b9bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] No waiting events found dispatching network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:12 compute-1 nova_compute[226101]: 2025-12-06 07:16:12.856 226109 WARNING nova.compute.manager [req-36f32a5d-6116-472e-b9e7-7a9edc0246f1 req-e813fb0e-02aa-4ef5-a840-c59af8f6b9bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received unexpected event network-vif-plugged-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea for instance with vm_state active and task_state deleting.
Dec 06 07:16:13 compute-1 nova_compute[226101]: 2025-12-06 07:16:13.019 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:13 compute-1 nova_compute[226101]: 2025-12-06 07:16:13.192 226109 DEBUG oslo_concurrency.processutils [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:16:13 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3465677993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:13 compute-1 nova_compute[226101]: 2025-12-06 07:16:13.607 226109 DEBUG oslo_concurrency.processutils [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:13 compute-1 nova_compute[226101]: 2025-12-06 07:16:13.613 226109 DEBUG nova.compute.provider_tree [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:16:13 compute-1 nova_compute[226101]: 2025-12-06 07:16:13.637 226109 DEBUG nova.scheduler.client.report [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:16:13 compute-1 nova_compute[226101]: 2025-12-06 07:16:13.667 226109 DEBUG oslo_concurrency.lockutils [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:13 compute-1 nova_compute[226101]: 2025-12-06 07:16:13.691 226109 INFO nova.scheduler.client.report [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Deleted allocations for instance 0490f82b-373a-4948-9734-3b8cbe80d4d4
Dec 06 07:16:13 compute-1 nova_compute[226101]: 2025-12-06 07:16:13.753 226109 DEBUG oslo_concurrency.lockutils [None req-68fc8d5c-4e30-48f8-a2e2-267c97261d31 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "0490f82b-373a-4948-9734-3b8cbe80d4d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:14 compute-1 nova_compute[226101]: 2025-12-06 07:16:14.094 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:14 compute-1 ceph-mon[81689]: pgmap v1744: 305 pgs: 305 active+clean; 281 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.2 MiB/s wr, 259 op/s
Dec 06 07:16:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:14.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:14.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:14 compute-1 nova_compute[226101]: 2025-12-06 07:16:14.955 226109 DEBUG nova.compute.manager [req-611538c7-15f5-4171-b61b-6bba66b46ffd req-6093fe3c-565e-4647-a499-bca18c27c9e1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Received event network-vif-deleted-f05b6f8b-6a82-4d64-a5c6-057b8f9368ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3465677993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:15 compute-1 ceph-mon[81689]: pgmap v1745: 305 pgs: 305 active+clean; 281 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Dec 06 07:16:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:16 compute-1 podman[251993]: 2025-12-06 07:16:16.134876517 +0000 UTC m=+0.116427845 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Dec 06 07:16:16 compute-1 sudo[252017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:16:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:16.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:16:16 compute-1 sudo[252017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:16 compute-1 sudo[252017]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:16.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:16 compute-1 sudo[252042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:16:16 compute-1 sudo[252042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:16 compute-1 sudo[252042]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:16 compute-1 sudo[252067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:16 compute-1 sudo[252067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:16 compute-1 sudo[252067]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:16 compute-1 sudo[252092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:16:16 compute-1 sudo[252092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:17 compute-1 sudo[252092]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:17 compute-1 nova_compute[226101]: 2025-12-06 07:16:17.687 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:17 compute-1 ceph-mon[81689]: pgmap v1746: 305 pgs: 305 active+clean; 281 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 232 op/s
Dec 06 07:16:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:16:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:16:18 compute-1 nova_compute[226101]: 2025-12-06 07:16:18.020 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:18.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:18.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:19 compute-1 nova_compute[226101]: 2025-12-06 07:16:19.096 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:19 compute-1 ceph-mon[81689]: pgmap v1747: 305 pgs: 305 active+clean; 281 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Dec 06 07:16:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:16:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:16:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:16:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:16:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:16:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:20.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:16:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:20.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:21 compute-1 podman[252149]: 2025-12-06 07:16:21.072915932 +0000 UTC m=+0.057669148 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:16:21 compute-1 podman[252148]: 2025-12-06 07:16:21.076214721 +0000 UTC m=+0.061655637 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 06 07:16:21 compute-1 ceph-mon[81689]: pgmap v1748: 305 pgs: 305 active+clean; 281 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.0 MiB/s wr, 131 op/s
Dec 06 07:16:21 compute-1 nova_compute[226101]: 2025-12-06 07:16:21.421 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:22.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:22.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:22 compute-1 nova_compute[226101]: 2025-12-06 07:16:22.997 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005367.9965909, 0490f82b-373a-4948-9734-3b8cbe80d4d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:16:22 compute-1 nova_compute[226101]: 2025-12-06 07:16:22.997 226109 INFO nova.compute.manager [-] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] VM Stopped (Lifecycle Event)
Dec 06 07:16:23 compute-1 nova_compute[226101]: 2025-12-06 07:16:23.021 226109 DEBUG nova.compute.manager [None req-2eacecdb-9c3b-4f5c-a02b-ec5dc7451f02 - - - - - -] [instance: 0490f82b-373a-4948-9734-3b8cbe80d4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:16:23 compute-1 nova_compute[226101]: 2025-12-06 07:16:23.054 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:23 compute-1 ceph-mon[81689]: pgmap v1749: 305 pgs: 305 active+clean; 258 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.1 MiB/s wr, 182 op/s
Dec 06 07:16:24 compute-1 nova_compute[226101]: 2025-12-06 07:16:24.134 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:24.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:24.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:24 compute-1 ceph-mon[81689]: pgmap v1750: 305 pgs: 305 active+clean; 258 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 26 KiB/s wr, 58 op/s
Dec 06 07:16:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/789104311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:26.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:26.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:27 compute-1 ceph-mon[81689]: pgmap v1751: 305 pgs: 305 active+clean; 209 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 574 KiB/s rd, 27 KiB/s wr, 72 op/s
Dec 06 07:16:28 compute-1 nova_compute[226101]: 2025-12-06 07:16:28.056 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:28.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:28.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:29 compute-1 nova_compute[226101]: 2025-12-06 07:16:29.136 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:29 compute-1 ceph-mon[81689]: pgmap v1752: 305 pgs: 305 active+clean; 202 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 35 KiB/s wr, 81 op/s
Dec 06 07:16:29 compute-1 sudo[252186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:29 compute-1 sudo[252186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:29 compute-1 sudo[252186]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:29 compute-1 sudo[252211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:16:29 compute-1 sudo[252211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:29 compute-1 sudo[252211]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:16:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:16:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:30.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:30.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:31 compute-1 ceph-mon[81689]: pgmap v1753: 305 pgs: 305 active+clean; 202 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 34 KiB/s wr, 79 op/s
Dec 06 07:16:31 compute-1 nova_compute[226101]: 2025-12-06 07:16:31.465 226109 DEBUG oslo_concurrency.lockutils [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "interface-12903f3d-051e-4858-aee4-dea9692252ae-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:31 compute-1 nova_compute[226101]: 2025-12-06 07:16:31.466 226109 DEBUG oslo_concurrency.lockutils [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-12903f3d-051e-4858-aee4-dea9692252ae-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:31 compute-1 nova_compute[226101]: 2025-12-06 07:16:31.466 226109 DEBUG nova.objects.instance [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'flavor' on Instance uuid 12903f3d-051e-4858-aee4-dea9692252ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:32.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:32.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:32 compute-1 nova_compute[226101]: 2025-12-06 07:16:32.576 226109 DEBUG nova.objects.instance [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'pci_requests' on Instance uuid 12903f3d-051e-4858-aee4-dea9692252ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:32 compute-1 nova_compute[226101]: 2025-12-06 07:16:32.600 226109 DEBUG nova.network.neutron [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:16:33 compute-1 nova_compute[226101]: 2025-12-06 07:16:33.057 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:33 compute-1 nova_compute[226101]: 2025-12-06 07:16:33.186 226109 DEBUG nova.policy [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '06f5b46553b24b39a1493d96ec4e503e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35df5125c2cf4d29a6b975951af14910', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:16:33 compute-1 ceph-mon[81689]: pgmap v1754: 305 pgs: 305 active+clean; 239 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Dec 06 07:16:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:33.488 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:16:33 compute-1 nova_compute[226101]: 2025-12-06 07:16:33.489 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:33.489 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:16:33 compute-1 ovn_controller[130279]: 2025-12-06T07:16:33Z|00251|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:16:33 compute-1 nova_compute[226101]: 2025-12-06 07:16:33.699 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:33 compute-1 nova_compute[226101]: 2025-12-06 07:16:33.894 226109 DEBUG nova.network.neutron [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Successfully created port: fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:16:34 compute-1 nova_compute[226101]: 2025-12-06 07:16:34.138 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:16:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:16:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:34.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:35 compute-1 nova_compute[226101]: 2025-12-06 07:16:35.215 226109 DEBUG nova.network.neutron [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Successfully updated port: fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:16:35 compute-1 nova_compute[226101]: 2025-12-06 07:16:35.265 226109 DEBUG oslo_concurrency.lockutils [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:16:35 compute-1 nova_compute[226101]: 2025-12-06 07:16:35.265 226109 DEBUG oslo_concurrency.lockutils [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:16:35 compute-1 nova_compute[226101]: 2025-12-06 07:16:35.266 226109 DEBUG nova.network.neutron [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:16:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:35 compute-1 nova_compute[226101]: 2025-12-06 07:16:35.428 226109 DEBUG nova.compute.manager [req-6c62ec60-c970-4e4a-8693-0ba917769ae8 req-d91b4e00-9cdf-4bce-b527-ccd33be2cad7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-changed-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:35 compute-1 nova_compute[226101]: 2025-12-06 07:16:35.428 226109 DEBUG nova.compute.manager [req-6c62ec60-c970-4e4a-8693-0ba917769ae8 req-d91b4e00-9cdf-4bce-b527-ccd33be2cad7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Refreshing instance network info cache due to event network-changed-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:16:35 compute-1 nova_compute[226101]: 2025-12-06 07:16:35.428 226109 DEBUG oslo_concurrency.lockutils [req-6c62ec60-c970-4e4a-8693-0ba917769ae8 req-d91b4e00-9cdf-4bce-b527-ccd33be2cad7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:16:35 compute-1 nova_compute[226101]: 2025-12-06 07:16:35.539 226109 WARNING nova.network.neutron [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] 61a21643-77ba-4a09-8184-10dc4bd52b26 already exists in list: networks containing: ['61a21643-77ba-4a09-8184-10dc4bd52b26']. ignoring it
Dec 06 07:16:35 compute-1 ceph-mon[81689]: pgmap v1755: 305 pgs: 305 active+clean; 239 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 64 op/s
Dec 06 07:16:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:36.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:36.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:36 compute-1 ceph-mon[81689]: pgmap v1756: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.091 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:38.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:38.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.527 226109 DEBUG nova.network.neutron [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Updating instance_info_cache with network_info: [{"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.552 226109 DEBUG oslo_concurrency.lockutils [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.552 226109 DEBUG oslo_concurrency.lockutils [req-6c62ec60-c970-4e4a-8693-0ba917769ae8 req-d91b4e00-9cdf-4bce-b527-ccd33be2cad7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.553 226109 DEBUG nova.network.neutron [req-6c62ec60-c970-4e4a-8693-0ba917769ae8 req-d91b4e00-9cdf-4bce-b527-ccd33be2cad7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Refreshing network info cache for port fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.555 226109 DEBUG nova.virt.libvirt.vif [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1665481838',display_name='tempest-AttachInterfacesTestJSON-server-1665481838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1665481838',id=72,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbDvBsJaocJrE+yuUPfUUsKWsbS3aDuuo5DAKFle2C94ehrnWbtFdLZ9ffClE1Td4CiC7kKu0EsLjsZ/P0vQXLjL1TvhGndU0ZR+NRZJEm1h6FBgxmX4KC48abyjY74CQ==',key_name='tempest-keypair-1654448957',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-s5i2p452',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=12903f3d-051e-4858-aee4-dea9692252ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", 
"datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.556 226109 DEBUG nova.network.os_vif_util [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.556 226109 DEBUG nova.network.os_vif_util [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.557 226109 DEBUG os_vif [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.557 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.558 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.558 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.561 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.561 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfcd7bd8f-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.561 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfcd7bd8f-ac, col_values=(('external_ids', {'iface-id': 'fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:28:7b', 'vm-uuid': '12903f3d-051e-4858-aee4-dea9692252ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.563 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.564 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:16:38 compute-1 NetworkManager[49031]: <info>  [1765005398.5642] manager: (tapfcd7bd8f-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.569 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.570 226109 INFO os_vif [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac')
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.571 226109 DEBUG nova.virt.libvirt.vif [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1665481838',display_name='tempest-AttachInterfacesTestJSON-server-1665481838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1665481838',id=72,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbDvBsJaocJrE+yuUPfUUsKWsbS3aDuuo5DAKFle2C94ehrnWbtFdLZ9ffClE1Td4CiC7kKu0EsLjsZ/P0vQXLjL1TvhGndU0ZR+NRZJEm1h6FBgxmX4KC48abyjY74CQ==',key_name='tempest-keypair-1654448957',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-s5i2p452',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=12903f3d-051e-4858-aee4-dea9692252ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", 
"datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.571 226109 DEBUG nova.network.os_vif_util [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.571 226109 DEBUG nova.network.os_vif_util [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.574 226109 DEBUG nova.virt.libvirt.guest [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] attach device xml: <interface type="ethernet">
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <mac address="fa:16:3e:88:28:7b"/>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <model type="virtio"/>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <mtu size="1442"/>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <target dev="tapfcd7bd8f-ac"/>
Dec 06 07:16:38 compute-1 nova_compute[226101]: </interface>
Dec 06 07:16:38 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:16:38 compute-1 kernel: tapfcd7bd8f-ac: entered promiscuous mode
Dec 06 07:16:38 compute-1 NetworkManager[49031]: <info>  [1765005398.5877] manager: (tapfcd7bd8f-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Dec 06 07:16:38 compute-1 ovn_controller[130279]: 2025-12-06T07:16:38Z|00252|binding|INFO|Claiming lport fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 for this chassis.
Dec 06 07:16:38 compute-1 ovn_controller[130279]: 2025-12-06T07:16:38Z|00253|binding|INFO|fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412: Claiming fa:16:3e:88:28:7b 10.100.0.14
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.589 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.595 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:28:7b 10.100.0.14'], port_security=['fa:16:3e:88:28:7b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '12903f3d-051e-4858-aee4-dea9692252ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.596 139580 INFO neutron.agent.ovn.metadata.agent [-] Port fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 bound to our chassis
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.598 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:16:38 compute-1 ovn_controller[130279]: 2025-12-06T07:16:38Z|00254|binding|INFO|Setting lport fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 ovn-installed in OVS
Dec 06 07:16:38 compute-1 ovn_controller[130279]: 2025-12-06T07:16:38Z|00255|binding|INFO|Setting lport fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 up in Southbound
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.613 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.616 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.623 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dc92be8c-9065-4f4c-89e9-e72f2480e5e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:38 compute-1 systemd-udevd[252244]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:16:38 compute-1 NetworkManager[49031]: <info>  [1765005398.6389] device (tapfcd7bd8f-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:16:38 compute-1 NetworkManager[49031]: <info>  [1765005398.6396] device (tapfcd7bd8f-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.654 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[24fa7916-0832-4045-b892-477bf6cab2ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.657 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6cfb7a-88a0-4119-8bf5-e75d7a8d05dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.685 226109 DEBUG nova.virt.libvirt.driver [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.685 226109 DEBUG nova.virt.libvirt.driver [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.686 226109 DEBUG nova.virt.libvirt.driver [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:2c:63:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.686 226109 DEBUG nova.virt.libvirt.driver [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:88:28:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.689 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ec72face-c60b-444d-a943-6f23a0bbfc12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.707 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[af420614-18e8-4925-b688-b4bb56c4a90d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563177, 'reachable_time': 20086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252251, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.713 226109 DEBUG nova.virt.libvirt.guest [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1665481838</nova:name>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:16:38</nova:creationTime>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:16:38 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:16:38 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:16:38 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:16:38 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:16:38 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:16:38 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:16:38 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:16:38 compute-1 nova_compute[226101]:     <nova:port uuid="9edcc777-b052-41d1-a28c-dbfe02a70e3b">
Dec 06 07:16:38 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:16:38 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:16:38 compute-1 nova_compute[226101]:     <nova:port uuid="fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412">
Dec 06 07:16:38 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:16:38 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:16:38 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:16:38 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:16:38 compute-1 nova_compute[226101]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.724 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6c695660-20bb-42aa-bf28-3bf09a530915]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563188, 'tstamp': 563188}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252252, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563191, 'tstamp': 563191}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252252, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.726 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.728 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.729 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.729 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.730 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.730 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:38.730 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.745 226109 DEBUG oslo_concurrency.lockutils [None req-00edaf9d-e690-4aeb-bf27-ef81153ccb5a 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-12903f3d-051e-4858-aee4-dea9692252ae-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.845 226109 DEBUG nova.compute.manager [req-34b962b6-cbbb-4975-94f5-28bb4c0c921c req-6b8cca7c-8bf4-4895-bd95-a174a7dd98ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-vif-plugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.846 226109 DEBUG oslo_concurrency.lockutils [req-34b962b6-cbbb-4975-94f5-28bb4c0c921c req-6b8cca7c-8bf4-4895-bd95-a174a7dd98ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.846 226109 DEBUG oslo_concurrency.lockutils [req-34b962b6-cbbb-4975-94f5-28bb4c0c921c req-6b8cca7c-8bf4-4895-bd95-a174a7dd98ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.847 226109 DEBUG oslo_concurrency.lockutils [req-34b962b6-cbbb-4975-94f5-28bb4c0c921c req-6b8cca7c-8bf4-4895-bd95-a174a7dd98ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.847 226109 DEBUG nova.compute.manager [req-34b962b6-cbbb-4975-94f5-28bb4c0c921c req-6b8cca7c-8bf4-4895-bd95-a174a7dd98ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] No waiting events found dispatching network-vif-plugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:38 compute-1 nova_compute[226101]: 2025-12-06 07:16:38.847 226109 WARNING nova.compute.manager [req-34b962b6-cbbb-4975-94f5-28bb4c0c921c req-6b8cca7c-8bf4-4895-bd95-a174a7dd98ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received unexpected event network-vif-plugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 for instance with vm_state active and task_state None.
Dec 06 07:16:39 compute-1 nova_compute[226101]: 2025-12-06 07:16:39.171 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:39 compute-1 ceph-mon[81689]: pgmap v1757: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.192 226109 DEBUG nova.network.neutron [req-6c62ec60-c970-4e4a-8693-0ba917769ae8 req-d91b4e00-9cdf-4bce-b527-ccd33be2cad7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Updated VIF entry in instance network info cache for port fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.193 226109 DEBUG nova.network.neutron [req-6c62ec60-c970-4e4a-8693-0ba917769ae8 req-d91b4e00-9cdf-4bce-b527-ccd33be2cad7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Updating instance_info_cache with network_info: [{"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.213 226109 DEBUG oslo_concurrency.lockutils [req-6c62ec60-c970-4e4a-8693-0ba917769ae8 req-d91b4e00-9cdf-4bce-b527-ccd33be2cad7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:16:40 compute-1 ovn_controller[130279]: 2025-12-06T07:16:40Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:28:7b 10.100.0.14
Dec 06 07:16:40 compute-1 ovn_controller[130279]: 2025-12-06T07:16:40Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:28:7b 10.100.0.14
Dec 06 07:16:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:16:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:40.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:16:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:40.491 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:16:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:40.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.901 226109 DEBUG oslo_concurrency.lockutils [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "interface-12903f3d-051e-4858-aee4-dea9692252ae-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.902 226109 DEBUG oslo_concurrency.lockutils [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-12903f3d-051e-4858-aee4-dea9692252ae-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.921 226109 DEBUG nova.objects.instance [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'flavor' on Instance uuid 12903f3d-051e-4858-aee4-dea9692252ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.944 226109 DEBUG nova.virt.libvirt.vif [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1665481838',display_name='tempest-AttachInterfacesTestJSON-server-1665481838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1665481838',id=72,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbDvBsJaocJrE+yuUPfUUsKWsbS3aDuuo5DAKFle2C94ehrnWbtFdLZ9ffClE1Td4CiC7kKu0EsLjsZ/P0vQXLjL1TvhGndU0ZR+NRZJEm1h6FBgxmX4KC48abyjY74CQ==',key_name='tempest-keypair-1654448957',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-s5i2p452',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=12903f3d-051e-4858-aee4-dea9692252ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.944 226109 DEBUG nova.network.os_vif_util [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.945 226109 DEBUG nova.network.os_vif_util [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.948 226109 DEBUG nova.compute.manager [req-03acd8d1-b2b8-473e-b5e4-762ebc87bc64 req-a64ee213-f634-46c7-86b5-928080dfe7f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-vif-plugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.948 226109 DEBUG oslo_concurrency.lockutils [req-03acd8d1-b2b8-473e-b5e4-762ebc87bc64 req-a64ee213-f634-46c7-86b5-928080dfe7f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.948 226109 DEBUG oslo_concurrency.lockutils [req-03acd8d1-b2b8-473e-b5e4-762ebc87bc64 req-a64ee213-f634-46c7-86b5-928080dfe7f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.949 226109 DEBUG oslo_concurrency.lockutils [req-03acd8d1-b2b8-473e-b5e4-762ebc87bc64 req-a64ee213-f634-46c7-86b5-928080dfe7f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.949 226109 DEBUG nova.compute.manager [req-03acd8d1-b2b8-473e-b5e4-762ebc87bc64 req-a64ee213-f634-46c7-86b5-928080dfe7f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] No waiting events found dispatching network-vif-plugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.949 226109 WARNING nova.compute.manager [req-03acd8d1-b2b8-473e-b5e4-762ebc87bc64 req-a64ee213-f634-46c7-86b5-928080dfe7f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received unexpected event network-vif-plugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 for instance with vm_state active and task_state None.
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.953 226109 DEBUG nova.virt.libvirt.guest [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:88:28:7b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfcd7bd8f-ac"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.955 226109 DEBUG nova.virt.libvirt.guest [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:88:28:7b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfcd7bd8f-ac"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.958 226109 DEBUG nova.virt.libvirt.driver [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Attempting to detach device tapfcd7bd8f-ac from instance 12903f3d-051e-4858-aee4-dea9692252ae from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.959 226109 DEBUG nova.virt.libvirt.guest [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] detach device xml: <interface type="ethernet">
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <mac address="fa:16:3e:88:28:7b"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <model type="virtio"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <mtu size="1442"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <target dev="tapfcd7bd8f-ac"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]: </interface>
Dec 06 07:16:40 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.965 226109 DEBUG nova.virt.libvirt.guest [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:88:28:7b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfcd7bd8f-ac"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.968 226109 DEBUG nova.virt.libvirt.guest [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:88:28:7b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfcd7bd8f-ac"/></interface>not found in domain: <domain type='kvm' id='31'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <name>instance-00000048</name>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <uuid>12903f3d-051e-4858-aee4-dea9692252ae</uuid>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1665481838</nova:name>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:16:38</nova:creationTime>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <nova:port uuid="9edcc777-b052-41d1-a28c-dbfe02a70e3b">
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <nova:port uuid="fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412">
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:16:40 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <memory unit='KiB'>131072</memory>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <vcpu placement='static'>1</vcpu>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <resource>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <partition>/machine</partition>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </resource>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <sysinfo type='smbios'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <system>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <entry name='manufacturer'>RDO</entry>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <entry name='serial'>12903f3d-051e-4858-aee4-dea9692252ae</entry>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <entry name='uuid'>12903f3d-051e-4858-aee4-dea9692252ae</entry>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <entry name='family'>Virtual Machine</entry>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </system>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <os>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <boot dev='hd'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <smbios mode='sysinfo'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </os>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <features>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <vmcoreinfo state='on'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </features>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <model fallback='forbid'>Nehalem</model>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <feature policy='require' name='x2apic'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <feature policy='require' name='hypervisor'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <feature policy='require' name='vme'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <clock offset='utc'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <timer name='hpet' present='no'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <on_poweroff>destroy</on_poweroff>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <on_reboot>restart</on_reboot>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <on_crash>destroy</on_crash>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <disk type='network' device='disk'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/12903f3d-051e-4858-aee4-dea9692252ae_disk' index='2'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       </source>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target dev='vda' bus='virtio'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='virtio-disk0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <disk type='network' device='cdrom'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/12903f3d-051e-4858-aee4-dea9692252ae_disk.config' index='1'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       </source>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target dev='sda' bus='sata'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <readonly/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='sata0-0-0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pcie.0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='1' port='0x10'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.1'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='2' port='0x11'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.2'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='3' port='0x12'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.3'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='4' port='0x13'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.4'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='5' port='0x14'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.5'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='6' port='0x15'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.6'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='7' port='0x16'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.7'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='8' port='0x17'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.8'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='9' port='0x18'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.9'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='10' port='0x19'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.10'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='11' port='0x1a'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.11'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='12' port='0x1b'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.12'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='13' port='0x1c'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.13'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='14' port='0x1d'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.14'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='15' port='0x1e'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.15'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='16' port='0x1f'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.16'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='17' port='0x20'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.17'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='18' port='0x21'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.18'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='19' port='0x22'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.19'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='20' port='0x23'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.20'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='21' port='0x24'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.21'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='22' port='0x25'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.22'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='23' port='0x26'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.23'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='24' port='0x27'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.24'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target chassis='25' port='0x28'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.25'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model name='pcie-pci-bridge'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='pci.26'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='usb'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <controller type='sata' index='0'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='ide'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:2c:63:76'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target dev='tap9edcc777-b0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='net0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:88:28:7b'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target dev='tapfcd7bd8f-ac'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='net1'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <serial type='pty'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/console.log' append='off'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target type='isa-serial' port='0'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:         <model name='isa-serial'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       </target>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/console.log' append='off'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <target type='serial' port='0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </console>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <input type='tablet' bus='usb'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='input0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='usb' bus='0' port='1'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <input type='mouse' bus='ps2'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='input1'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <input type='keyboard' bus='ps2'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='input2'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <listen type='address' address='::0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </graphics>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <audio id='1' type='none'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <video>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='video0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </video>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <watchdog model='itco' action='reset'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='watchdog0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </watchdog>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <memballoon model='virtio'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <stats period='10'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='balloon0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <rng model='virtio'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <backend model='random'>/dev/urandom</backend>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <alias name='rng0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <label>system_u:system_r:svirt_t:s0:c423,c994</label>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c423,c994</imagelabel>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <label>+107:+107</label>
Dec 06 07:16:40 compute-1 nova_compute[226101]:     <imagelabel>+107:+107</imagelabel>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:16:40 compute-1 nova_compute[226101]: </domain>
Dec 06 07:16:40 compute-1 nova_compute[226101]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.968 226109 INFO nova.virt.libvirt.driver [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully detached device tapfcd7bd8f-ac from instance 12903f3d-051e-4858-aee4-dea9692252ae from the persistent domain config.
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.968 226109 DEBUG nova.virt.libvirt.driver [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] (1/8): Attempting to detach device tapfcd7bd8f-ac with device alias net1 from instance 12903f3d-051e-4858-aee4-dea9692252ae from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:16:40 compute-1 nova_compute[226101]: 2025-12-06 07:16:40.968 226109 DEBUG nova.virt.libvirt.guest [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] detach device xml: <interface type="ethernet">
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <mac address="fa:16:3e:88:28:7b"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <model type="virtio"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <mtu size="1442"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]:   <target dev="tapfcd7bd8f-ac"/>
Dec 06 07:16:40 compute-1 nova_compute[226101]: </interface>
Dec 06 07:16:40 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:16:41 compute-1 kernel: tapfcd7bd8f-ac (unregistering): left promiscuous mode
Dec 06 07:16:41 compute-1 NetworkManager[49031]: <info>  [1765005401.0090] device (tapfcd7bd8f-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:16:41 compute-1 ovn_controller[130279]: 2025-12-06T07:16:41Z|00256|binding|INFO|Releasing lport fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 from this chassis (sb_readonly=0)
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.019 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765005401.0190089, 12903f3d-051e-4858-aee4-dea9692252ae => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.019 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:41 compute-1 ovn_controller[130279]: 2025-12-06T07:16:41Z|00257|binding|INFO|Setting lport fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 down in Southbound
Dec 06 07:16:41 compute-1 ovn_controller[130279]: 2025-12-06T07:16:41Z|00258|binding|INFO|Removing iface tapfcd7bd8f-ac ovn-installed in OVS
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.021 226109 DEBUG nova.virt.libvirt.driver [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Start waiting for the detach event from libvirt for device tapfcd7bd8f-ac with device alias net1 for instance 12903f3d-051e-4858-aee4-dea9692252ae _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.021 226109 DEBUG nova.virt.libvirt.guest [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:88:28:7b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfcd7bd8f-ac"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.022 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.025 226109 DEBUG nova.virt.libvirt.guest [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:88:28:7b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfcd7bd8f-ac"/></interface> not found in domain: <domain type='kvm' id='31'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <name>instance-00000048</name>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <uuid>12903f3d-051e-4858-aee4-dea9692252ae</uuid>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1665481838</nova:name>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:16:38</nova:creationTime>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:port uuid="9edcc777-b052-41d1-a28c-dbfe02a70e3b">
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:port uuid="fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412">
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:16:41 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <memory unit='KiB'>131072</memory>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <vcpu placement='static'>1</vcpu>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <resource>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <partition>/machine</partition>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </resource>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <sysinfo type='smbios'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <system>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <entry name='manufacturer'>RDO</entry>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <entry name='serial'>12903f3d-051e-4858-aee4-dea9692252ae</entry>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <entry name='uuid'>12903f3d-051e-4858-aee4-dea9692252ae</entry>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <entry name='family'>Virtual Machine</entry>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </system>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <os>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <boot dev='hd'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <smbios mode='sysinfo'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </os>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <features>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <vmcoreinfo state='on'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </features>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <model fallback='forbid'>Nehalem</model>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <feature policy='require' name='x2apic'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <feature policy='require' name='hypervisor'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <feature policy='require' name='vme'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <clock offset='utc'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <timer name='hpet' present='no'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <on_poweroff>destroy</on_poweroff>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <on_reboot>restart</on_reboot>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <on_crash>destroy</on_crash>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <disk type='network' device='disk'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/12903f3d-051e-4858-aee4-dea9692252ae_disk' index='2'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       </source>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target dev='vda' bus='virtio'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='virtio-disk0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <disk type='network' device='cdrom'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/12903f3d-051e-4858-aee4-dea9692252ae_disk.config' index='1'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       </source>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target dev='sda' bus='sata'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <readonly/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='sata0-0-0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pcie.0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='1' port='0x10'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.1'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='2' port='0x11'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.2'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='3' port='0x12'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.3'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='4' port='0x13'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.4'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='5' port='0x14'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.5'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='6' port='0x15'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.6'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='7' port='0x16'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.7'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='8' port='0x17'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.8'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='9' port='0x18'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.9'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='10' port='0x19'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.10'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='11' port='0x1a'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.11'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='12' port='0x1b'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.12'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='13' port='0x1c'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.13'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='14' port='0x1d'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.14'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='15' port='0x1e'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.15'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='16' port='0x1f'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.16'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='17' port='0x20'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.17'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='18' port='0x21'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.18'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='19' port='0x22'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.19'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='20' port='0x23'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.20'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='21' port='0x24'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.21'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='22' port='0x25'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.22'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='23' port='0x26'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.23'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='24' port='0x27'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.24'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target chassis='25' port='0x28'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.25'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model name='pcie-pci-bridge'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='pci.26'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='usb'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <controller type='sata' index='0'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='ide'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:2c:63:76'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target dev='tap9edcc777-b0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='net0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <serial type='pty'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/console.log' append='off'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target type='isa-serial' port='0'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:         <model name='isa-serial'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       </target>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/console.log' append='off'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <target type='serial' port='0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </console>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <input type='tablet' bus='usb'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='input0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='usb' bus='0' port='1'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <input type='mouse' bus='ps2'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='input1'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <input type='keyboard' bus='ps2'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='input2'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <listen type='address' address='::0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </graphics>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <audio id='1' type='none'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <video>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='video0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </video>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <watchdog model='itco' action='reset'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='watchdog0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </watchdog>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <memballoon model='virtio'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <stats period='10'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='balloon0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <rng model='virtio'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <backend model='random'>/dev/urandom</backend>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <alias name='rng0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <label>system_u:system_r:svirt_t:s0:c423,c994</label>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c423,c994</imagelabel>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <label>+107:+107</label>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <imagelabel>+107:+107</imagelabel>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:16:41 compute-1 nova_compute[226101]: </domain>
Dec 06 07:16:41 compute-1 nova_compute[226101]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.025 226109 INFO nova.virt.libvirt.driver [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully detached device tapfcd7bd8f-ac from instance 12903f3d-051e-4858-aee4-dea9692252ae from the live domain config.
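The "(1/8)" counter and the "Start waiting for the detach event" message earlier reflect a detach-then-wait-for-DEVICE_REMOVED retry loop. An illustrative sketch of that pattern with libvirt-python — not Nova's implementation; the alias and retry budget come from the log, the per-attempt timeout is an assumption:

    import threading
    import libvirt

    libvirt.virEventRegisterDefaultImpl()       # event loop must exist before open()
    conn = libvirt.open('qemu:///system')

    def _run_event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()    # dispatches domain event callbacks

    threading.Thread(target=_run_event_loop, daemon=True).start()

    removed = threading.Event()

    def on_device_removed(conn, dom, dev, opaque):
        if dev == 'net1':                       # device alias from the log
            removed.set()

    dom = conn.lookupByUUIDString('12903f3d-051e-4858-aee4-dea9692252ae')
    conn.domainEventRegisterAny(dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
                                on_device_removed, None)

    # libvirt matches interface devices by MAC, so a minimal XML suffices here.
    detach_xml = '<interface type="ethernet"><mac address="fa:16:3e:88:28:7b"/></interface>'
    for attempt in range(8):                    # retry budget matching "(1/8)"
        # A real caller would treat a "device not found" error on a retry
        # as success, since the removal may have landed between attempts.
        dom.detachDeviceFlags(detach_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        if removed.wait(timeout=20):            # assumed per-attempt timeout
            break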
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.026 226109 DEBUG nova.virt.libvirt.vif [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1665481838',display_name='tempest-AttachInterfacesTestJSON-server-1665481838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1665481838',id=72,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbDvBsJaocJrE+yuUPfUUsKWsbS3aDuuo5DAKFle2C94ehrnWbtFdLZ9ffClE1Td4CiC7kKu0EsLjsZ/P0vQXLjL1TvhGndU0ZR+NRZJEm1h6FBgxmX4KC48abyjY74CQ==',key_name='tempest-keypair-1654448957',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-s5i2p452',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=12903f3d-051e-4858-aee4-dea9692252ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", 
"bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.027 226109 DEBUG nova.network.os_vif_util [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.027 226109 DEBUG nova.network.os_vif_util [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.028 226109 DEBUG os_vif [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.029 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.029 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcd7bd8f-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.031 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.033 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.035 226109 INFO os_vif [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac')
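The DelPortCommand transaction logged at 07:16:41.029 is os-vif's ovsdbapp call removing the tap from br-int. Roughly equivalent as a standalone snippet — the OVSDB socket path and timeout are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Equivalent of: DelPortCommand(port=tapfcd7bd8f-ac, bridge=br-int, if_exists=True)
    api.del_port('tapfcd7bd8f-ac', bridge='br-int',
                 if_exists=True).execute(check_error=True)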
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.036 226109 DEBUG nova.virt.libvirt.guest [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1665481838</nova:name>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:16:41</nova:creationTime>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     <nova:port uuid="9edcc777-b052-41d1-a28c-dbfe02a70e3b">
Dec 06 07:16:41 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:16:41 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:16:41 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:16:41 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:16:41 compute-1 nova_compute[226101]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
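The set_metadata step above rewrites the <nova:instance> element in the domain XML now that the detached port is gone from the port list. A sketch of the underlying libvirt call, with the metadata trimmed to a single child element for brevity and the "instance" prefix assumed:

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    META_XML = ('<instance xmlns="%s">'
                '<name>tempest-AttachInterfacesTestJSON-server-1665481838</name>'
                '</instance>' % NOVA_NS)

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('12903f3d-051e-4858-aee4-dea9692252ae')
    # Replaces the element registered under the given prefix/namespace pair
    # in both the live and persistent definitions.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, META_XML, 'instance',
                    NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)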
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.091 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:28:7b 10.100.0.14'], port_security=['fa:16:3e:88:28:7b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '12903f3d-051e-4858-aee4-dea9692252ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.092 139580 INFO neutron.agent.ovn.metadata.agent [-] Port fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 unbound from our chassis
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.093 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.108 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[318eb033-5965-471e-a91c-246b489bc97a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.139 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b62921fc-ceb0-40ef-acb9-19a9f28bd0a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.142 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[49a3c71d-7364-4968-9ac4-2efddbb2bd66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.168 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7a4d62-d645-4881-b922-2dd3a3089812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.188 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[22f6f4af-73b7-4ed4-8c7f-9c27f4a7b403]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563177, 'reachable_time': 20086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252262, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.206 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ebfa96d4-9753-4501-9f65-33ee9974096b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563188, 'tstamp': 563188}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252263, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563191, 'tstamp': 563191}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252263, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.208 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.210 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:41 compute-1 nova_compute[226101]: 2025-12-06 07:16:41.210 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.211 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.211 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.212 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:41.212 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
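The three single-command transactions above re-provision the metadata datapath: drop the namespace-side tap from br-ex, ensure it sits on br-int, and bind it to the OVN localport through external_ids:iface-id (the latter two are no-ops here, hence "Transaction caused no change"). Batched into one ovsdbapp transaction for brevity — socket path and timeout assumed — the equivalent is:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap61a21643-70', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap61a21643-70', may_exist=True))
        txn.add(api.db_set('Interface', 'tap61a21643-70',
                           ('external_ids',
                            {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'})))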
Dec 06 07:16:41 compute-1 ceph-mon[81689]: pgmap v1758: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.301 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:16:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:42.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:16:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:42.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.537 226109 DEBUG oslo_concurrency.lockutils [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.537 226109 DEBUG oslo_concurrency.lockutils [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.538 226109 DEBUG nova.network.neutron [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
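The Acquiring/Acquired pair above is oslo.concurrency's named internal lock serializing cache refreshes per instance. A minimal sketch of the pattern — the helper is a hypothetical stand-in, not Nova's code:

    from oslo_concurrency import lockutils

    def rebuild_network_info_cache(uuid):
        """Hypothetical stand-in for Nova's network info cache refresh."""
        pass

    instance_uuid = '12903f3d-051e-4858-aee4-dea9692252ae'
    with lockutils.lock('refresh_cache-%s' % instance_uuid):
        # Only one worker at a time may rebuild this instance's network
        # info cache; concurrent callers block on the same named lock.
        rebuild_network_info_cache(instance_uuid)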
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.619 226109 DEBUG nova.compute.manager [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-vif-deleted-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.620 226109 INFO nova.compute.manager [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Neutron deleted interface fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412; detaching it from the instance and deleting it from the info cache
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.620 226109 DEBUG nova.network.neutron [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Updating instance_info_cache with network_info: [{"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.642 226109 DEBUG nova.objects.instance [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lazy-loading 'system_metadata' on Instance uuid 12903f3d-051e-4858-aee4-dea9692252ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.698 226109 DEBUG nova.objects.instance [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lazy-loading 'flavor' on Instance uuid 12903f3d-051e-4858-aee4-dea9692252ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.719 226109 DEBUG nova.virt.libvirt.vif [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1665481838',display_name='tempest-AttachInterfacesTestJSON-server-1665481838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1665481838',id=72,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbDvBsJaocJrE+yuUPfUUsKWsbS3aDuuo5DAKFle2C94ehrnWbtFdLZ9ffClE1Td4CiC7kKu0EsLjsZ/P0vQXLjL1TvhGndU0ZR+NRZJEm1h6FBgxmX4KC48abyjY74CQ==',key_name='tempest-keypair-1654448957',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-s5i2p452',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=12903f3d-051e-4858-aee4-dea9692252ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.720 226109 DEBUG nova.network.os_vif_util [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converting VIF {"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.721 226109 DEBUG nova.network.os_vif_util [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.723 226109 DEBUG nova.virt.libvirt.guest [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:88:28:7b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfcd7bd8f-ac"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.727 226109 DEBUG nova.virt.libvirt.guest [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:88:28:7b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfcd7bd8f-ac"/></interface> not found in domain: <domain type='kvm' id='31'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <name>instance-00000048</name>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <uuid>12903f3d-051e-4858-aee4-dea9692252ae</uuid>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1665481838</nova:name>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:16:41</nova:creationTime>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:port uuid="9edcc777-b052-41d1-a28c-dbfe02a70e3b">
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:16:42 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <memory unit='KiB'>131072</memory>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <vcpu placement='static'>1</vcpu>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <resource>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <partition>/machine</partition>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </resource>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <sysinfo type='smbios'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <system>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <entry name='manufacturer'>RDO</entry>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <entry name='serial'>12903f3d-051e-4858-aee4-dea9692252ae</entry>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <entry name='uuid'>12903f3d-051e-4858-aee4-dea9692252ae</entry>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <entry name='family'>Virtual Machine</entry>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </system>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <os>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <boot dev='hd'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <smbios mode='sysinfo'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </os>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <features>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <vmcoreinfo state='on'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </features>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <model fallback='forbid'>Nehalem</model>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <feature policy='require' name='x2apic'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <feature policy='require' name='hypervisor'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <feature policy='require' name='vme'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <clock offset='utc'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <timer name='hpet' present='no'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <on_poweroff>destroy</on_poweroff>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <on_reboot>restart</on_reboot>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <on_crash>destroy</on_crash>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <disk type='network' device='disk'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/12903f3d-051e-4858-aee4-dea9692252ae_disk' index='2'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       </source>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target dev='vda' bus='virtio'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='virtio-disk0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <disk type='network' device='cdrom'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/12903f3d-051e-4858-aee4-dea9692252ae_disk.config' index='1'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       </source>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target dev='sda' bus='sata'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <readonly/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='sata0-0-0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pcie.0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='1' port='0x10'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.1'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='2' port='0x11'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.2'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='3' port='0x12'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.3'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='4' port='0x13'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.4'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='5' port='0x14'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.5'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='6' port='0x15'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.6'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='7' port='0x16'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.7'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='8' port='0x17'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.8'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='9' port='0x18'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.9'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='10' port='0x19'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.10'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='11' port='0x1a'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.11'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='12' port='0x1b'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.12'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='13' port='0x1c'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.13'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='14' port='0x1d'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.14'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='15' port='0x1e'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.15'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='16' port='0x1f'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.16'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='17' port='0x20'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.17'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='18' port='0x21'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.18'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='19' port='0x22'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.19'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='20' port='0x23'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.20'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='21' port='0x24'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.21'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='22' port='0x25'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.22'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='23' port='0x26'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.23'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='24' port='0x27'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.24'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='25' port='0x28'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.25'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-pci-bridge'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.26'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='usb'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='sata' index='0'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='ide'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:2c:63:76'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target dev='tap9edcc777-b0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='net0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <serial type='pty'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/console.log' append='off'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target type='isa-serial' port='0'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:         <model name='isa-serial'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       </target>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/console.log' append='off'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target type='serial' port='0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </console>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <input type='tablet' bus='usb'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='input0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='usb' bus='0' port='1'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <input type='mouse' bus='ps2'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='input1'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <input type='keyboard' bus='ps2'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='input2'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <listen type='address' address='::0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </graphics>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <audio id='1' type='none'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <video>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='video0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </video>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <watchdog model='itco' action='reset'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='watchdog0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </watchdog>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <memballoon model='virtio'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <stats period='10'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='balloon0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <rng model='virtio'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <backend model='random'>/dev/urandom</backend>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='rng0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <label>system_u:system_r:svirt_t:s0:c423,c994</label>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c423,c994</imagelabel>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <label>+107:+107</label>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <imagelabel>+107:+107</imagelabel>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:16:42 compute-1 nova_compute[226101]: </domain>
Dec 06 07:16:42 compute-1 nova_compute[226101]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.728 226109 DEBUG nova.virt.libvirt.guest [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:88:28:7b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfcd7bd8f-ac"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.731 226109 DEBUG nova.virt.libvirt.guest [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:88:28:7b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfcd7bd8f-ac"/></interface> not found in domain: <domain type='kvm' id='31'>
[... domain XML elided: verbatim repeat of the <domain> dump logged at 07:16:42.727 above ...]
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='20' port='0x23'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.20'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='21' port='0x24'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.21'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='22' port='0x25'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.22'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='23' port='0x26'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.23'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='24' port='0x27'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.24'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target chassis='25' port='0x28'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.25'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model name='pcie-pci-bridge'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='pci.26'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='usb'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <controller type='sata' index='0'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='ide'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:2c:63:76'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target dev='tap9edcc777-b0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='net0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <serial type='pty'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/console.log' append='off'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target type='isa-serial' port='0'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:         <model name='isa-serial'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       </target>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae/console.log' append='off'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <target type='serial' port='0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </console>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <input type='tablet' bus='usb'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='input0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='usb' bus='0' port='1'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <input type='mouse' bus='ps2'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='input1'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <input type='keyboard' bus='ps2'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='input2'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </input>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <listen type='address' address='::0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </graphics>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <audio id='1' type='none'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <video>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='video0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </video>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <watchdog model='itco' action='reset'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='watchdog0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </watchdog>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <memballoon model='virtio'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <stats period='10'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='balloon0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <rng model='virtio'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <backend model='random'>/dev/urandom</backend>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <alias name='rng0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <label>system_u:system_r:svirt_t:s0:c423,c994</label>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c423,c994</imagelabel>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <label>+107:+107</label>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <imagelabel>+107:+107</imagelabel>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:16:42 compute-1 nova_compute[226101]: </domain>
Dec 06 07:16:42 compute-1 nova_compute[226101]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
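The domain XML dumped above describes a q35 guest in which the pcie-root-port controllers are packed eight functions per slot (slots 0x02 through 0x05, with multifunction='on' on function 0 of each group), followed by a pcie-to-pci bridge (pci.26) on bus 0x01 that hosts the piix3-uhci USB controller. A minimal sketch, assuming the dump has been saved to a local file (the name domain.xml is hypothetical), of enumerating those controllers and their PCI addresses:

    # Parse a saved copy of the domain XML and list each controller's
    # PCI address; multifunction='on' marks the first function of a slot.
    import xml.etree.ElementTree as ET

    tree = ET.parse('domain.xml')
    for ctrl in tree.findall('./devices/controller'):
        addr = ctrl.find('address')
        if addr is None or addr.get('type') != 'pci':
            continue
        print(ctrl.get('type'), ctrl.get('index'), ctrl.get('model'),
              addr.get('bus'), addr.get('slot'), addr.get('function'),
              addr.get('multifunction', 'off'))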
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.731 226109 WARNING nova.virt.libvirt.driver [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Detaching interface fa:16:3e:88:28:7b failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapfcd7bd8f-ac' not found.
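The DeviceNotFound warning above is benign here: the tap device tapfcd7bd8f-ac is already gone from the guest, so nova logs the miss and proceeds to unplug the VIF anyway. A minimal sketch of the same idempotent idea (not nova's actual code path) using the libvirt-python bindings:

    # Detach a device from a running domain, treating "already missing"
    # as success so cleanup can continue; dom is a libvirt.virDomain.
    import libvirt

    def detach_if_present(dom, device_xml):
        try:
            dom.detachDeviceFlags(device_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        except libvirt.libvirtError as exc:
            if exc.get_error_code() == libvirt.VIR_ERR_DEVICE_MISSING:
                return False  # device was not attached; nothing to do
            raise
        return True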
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.732 226109 DEBUG nova.virt.libvirt.vif [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1665481838',display_name='tempest-AttachInterfacesTestJSON-server-1665481838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1665481838',id=72,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbDvBsJaocJrE+yuUPfUUsKWsbS3aDuuo5DAKFle2C94ehrnWbtFdLZ9ffClE1Td4CiC7kKu0EsLjsZ/P0vQXLjL1TvhGndU0ZR+NRZJEm1h6FBgxmX4KC48abyjY74CQ==',key_name='tempest-keypair-1654448957',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-s5i2p452',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=12903f3d-051e-4858-aee4-dea9692252ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.732 226109 DEBUG nova.network.os_vif_util [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converting VIF {"id": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "address": "fa:16:3e:88:28:7b", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcd7bd8f-ac", "ovs_interfaceid": "fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.733 226109 DEBUG nova.network.os_vif_util [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.733 226109 DEBUG os_vif [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.734 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.734 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcd7bd8f-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.734 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.736 226109 INFO os_vif [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:28:7b,bridge_name='br-int',has_traffic_filtering=True,id=fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcd7bd8f-ac')
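os-vif removes the tap port from br-int through an ovsdbapp transaction; because DelPortCommand is issued with if_exists=True, deleting a port that is already absent simply results in "Transaction caused no change" rather than an error. A rough CLI equivalent (illustrative, shelling out rather than using the OVSDB IDL):

    # Idempotent port removal, matching the DelPortCommand semantics above.
    import subprocess

    subprocess.run(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tapfcd7bd8f-ac'],
        check=True)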
Dec 06 07:16:42 compute-1 nova_compute[226101]: 2025-12-06 07:16:42.736 226109 DEBUG nova.virt.libvirt.guest [req-3bde1daf-f37b-4576-aec2-d74a1ce01068 req-7a5865c6-cb57-4822-b5d7-db3a5879f74e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1665481838</nova:name>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:16:42</nova:creationTime>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     <nova:port uuid="9edcc777-b052-41d1-a28c-dbfe02a70e3b">
Dec 06 07:16:42 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:16:42 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:16:42 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:16:42 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:16:42 compute-1 nova_compute[226101]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
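After the detach, nova rewrites the per-instance <nova:instance> metadata element shown above, which libvirt stores alongside the domain definition. A hedged sketch of that call using libvirt-python (the namespace URI comes from the log; the key name 'instance' and the flags are assumptions):

    # Replace the domain's metadata element in the nova namespace;
    # dom is a libvirt.virDomain, metadata_xml the <nova:instance> document.
    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'

    def write_nova_metadata(dom, metadata_xml):
        dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml,
                        'instance', NOVA_NS,
                        libvirt.VIR_DOMAIN_AFFECT_LIVE |
                        libvirt.VIR_DOMAIN_AFFECT_CONFIG)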
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.070 226109 DEBUG nova.compute.manager [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-vif-unplugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.070 226109 DEBUG oslo_concurrency.lockutils [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.071 226109 DEBUG oslo_concurrency.lockutils [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.071 226109 DEBUG oslo_concurrency.lockutils [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.071 226109 DEBUG nova.compute.manager [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] No waiting events found dispatching network-vif-unplugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.071 226109 WARNING nova.compute.manager [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received unexpected event network-vif-unplugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 for instance with vm_state active and task_state None.
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.071 226109 DEBUG nova.compute.manager [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-vif-plugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.071 226109 DEBUG oslo_concurrency.lockutils [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.072 226109 DEBUG oslo_concurrency.lockutils [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.072 226109 DEBUG oslo_concurrency.lockutils [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.072 226109 DEBUG nova.compute.manager [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] No waiting events found dispatching network-vif-plugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.072 226109 WARNING nova.compute.manager [req-42dda6b9-19e3-4513-b3f9-041bfc06aaa4 req-64ae320f-7f0a-4c42-92d0-863e4f3022b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received unexpected event network-vif-plugged-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 for instance with vm_state active and task_state None.
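The two "unexpected event" warnings above are the tail end of the detach: Neutron emits network-vif-unplugged and network-vif-plugged for the port, but nothing on the compute side registered a waiter for them (the instance is active with task_state None), so pop_instance_event finds no match. A schematic of that waiter pattern (not nova's implementation):

    # Events are only "expected" if a waiter was registered beforehand;
    # otherwise they are dropped with a warning, as in the log above.
    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_event(instance_uuid, event_name):
        _waiters[(instance_uuid, event_name)] = threading.Event()

    def pop_event(instance_uuid, event_name):
        waiter = _waiters.pop((instance_uuid, event_name), None)
        if waiter is None:
            print('Received unexpected event %s' % event_name)
        else:
            waiter.set()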
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.330 226109 DEBUG oslo_concurrency.lockutils [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.331 226109 DEBUG oslo_concurrency.lockutils [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.331 226109 DEBUG oslo_concurrency.lockutils [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.331 226109 DEBUG oslo_concurrency.lockutils [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.331 226109 DEBUG oslo_concurrency.lockutils [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.332 226109 INFO nova.compute.manager [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Terminating instance
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.333 226109 DEBUG nova.compute.manager [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
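At the libvirt level, "destroying the instance on the hypervisor" amounts to a hard power-off of the domain (instance-00000048, per the systemd scope below). A hedged sketch with libvirt-python:

    # Hard-stop a running domain; nova performs additional cleanup
    # (VIF unplug, metadata, volumes) around this call.
    import libvirt

    def destroy_domain(name='instance-00000048'):
        conn = libvirt.open('qemu:///system')
        try:
            conn.lookupByName(name).destroy()
        finally:
            conn.close()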
Dec 06 07:16:43 compute-1 kernel: tap9edcc777-b0 (unregistering): left promiscuous mode
Dec 06 07:16:43 compute-1 NetworkManager[49031]: <info>  [1765005403.4024] device (tap9edcc777-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:16:43 compute-1 ovn_controller[130279]: 2025-12-06T07:16:43Z|00259|binding|INFO|Releasing lport 9edcc777-b052-41d1-a28c-dbfe02a70e3b from this chassis (sb_readonly=0)
Dec 06 07:16:43 compute-1 ovn_controller[130279]: 2025-12-06T07:16:43Z|00260|binding|INFO|Setting lport 9edcc777-b052-41d1-a28c-dbfe02a70e3b down in Southbound
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.409 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:43 compute-1 ovn_controller[130279]: 2025-12-06T07:16:43Z|00261|binding|INFO|Removing iface tap9edcc777-b0 ovn-installed in OVS
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.411 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.426 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:63:76 10.100.0.4'], port_security=['fa:16:3e:2c:63:76 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '12903f3d-051e-4858-aee4-dea9692252ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66c297e9-7226-4696-b93d-dd5b2f1d8a89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9edcc777-b052-41d1-a28c-dbfe02a70e3b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.428 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9edcc777-b052-41d1-a28c-dbfe02a70e3b in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 unbound from our chassis
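ovn-controller releases the logical port from this chassis and marks it down in the southbound database, which is what the metadata agent's PortBindingUpdatedEvent match above reacts to (up goes from [True] to [False] and the chassis column empties). The same binding row can be inspected by hand (illustrative, using ovn-sbctl's generic find command):

    # Show the Port_Binding row for the logical port being released.
    import subprocess

    subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=9edcc777-b052-41d1-a28c-dbfe02a70e3b'],
        check=True)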
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.430 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.430 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61a21643-77ba-4a09-8184-10dc4bd52b26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.431 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e5cf6e80-2dcf-4882-982c-de402274bf44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.432 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 namespace which is not needed anymore
Dec 06 07:16:43 compute-1 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000048.scope: Deactivated successfully.
Dec 06 07:16:43 compute-1 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000048.scope: Consumed 16.523s CPU time.
Dec 06 07:16:43 compute-1 systemd-machined[190302]: Machine qemu-31-instance-00000048 terminated.
Dec 06 07:16:43 compute-1 ceph-mon[81689]: pgmap v1759: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 07:16:43 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[251422]: [NOTICE]   (251442) : haproxy version is 2.8.14-c23fe91
Dec 06 07:16:43 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[251422]: [NOTICE]   (251442) : path to executable is /usr/sbin/haproxy
Dec 06 07:16:43 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[251422]: [WARNING]  (251442) : Exiting Master process...
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.569 226109 INFO nova.virt.libvirt.driver [-] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Instance destroyed successfully.
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.570 226109 DEBUG nova.objects.instance [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'resources' on Instance uuid 12903f3d-051e-4858-aee4-dea9692252ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:43 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[251422]: [ALERT]    (251442) : Current worker (251444) exited with code 143 (Terminated)
Dec 06 07:16:43 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[251422]: [WARNING]  (251442) : All workers exited. Exiting... (0)
Dec 06 07:16:43 compute-1 systemd[1]: libpod-f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1.scope: Deactivated successfully.
Dec 06 07:16:43 compute-1 podman[252288]: 2025-12-06 07:16:43.579263944 +0000 UTC m=+0.052605991 container died f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.583 226109 DEBUG nova.virt.libvirt.vif [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1665481838',display_name='tempest-AttachInterfacesTestJSON-server-1665481838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1665481838',id=72,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbDvBsJaocJrE+yuUPfUUsKWsbS3aDuuo5DAKFle2C94ehrnWbtFdLZ9ffClE1Td4CiC7kKu0EsLjsZ/P0vQXLjL1TvhGndU0ZR+NRZJEm1h6FBgxmX4KC48abyjY74CQ==',key_name='tempest-keypair-1654448957',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-s5i2p452',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=12903f3d-051e-4858-aee4-dea9692252ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.584 226109 DEBUG nova.network.os_vif_util [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.584 226109 DEBUG nova.network.os_vif_util [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:63:76,bridge_name='br-int',has_traffic_filtering=True,id=9edcc777-b052-41d1-a28c-dbfe02a70e3b,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9edcc777-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.585 226109 DEBUG os_vif [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:63:76,bridge_name='br-int',has_traffic_filtering=True,id=9edcc777-b052-41d1-a28c-dbfe02a70e3b,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9edcc777-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.586 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.586 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9edcc777-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.588 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.590 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.592 226109 INFO os_vif [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:63:76,bridge_name='br-int',has_traffic_filtering=True,id=9edcc777-b052-41d1-a28c-dbfe02a70e3b,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9edcc777-b0')
Dec 06 07:16:43 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1-userdata-shm.mount: Deactivated successfully.
Dec 06 07:16:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-f2dfb8c8903cd392b4ed747137ebf953cae2840076dada065cae0ec8fd8e77df-merged.mount: Deactivated successfully.
Dec 06 07:16:43 compute-1 podman[252288]: 2025-12-06 07:16:43.619961613 +0000 UTC m=+0.093303660 container cleanup f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:16:43 compute-1 systemd[1]: libpod-conmon-f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1.scope: Deactivated successfully.
Dec 06 07:16:43 compute-1 podman[252341]: 2025-12-06 07:16:43.679373229 +0000 UTC m=+0.040477945 container remove f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.684 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[08b976d1-e49e-4511-b4ca-caa0474d9ba3]: (4, ('Sat Dec  6 07:16:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 (f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1)\nf5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1\nSat Dec  6 07:16:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 (f5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1)\nf5a5ac323604492dd5f3c23f59bd82ad5ab8cd94af37805910b147229f7f7bc1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.686 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[246fc59d-5707-4405-86c3-23b4810cd0c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
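The privsep reply above captures the metadata agent's teardown script stopping and then deleting the per-network haproxy container. Roughly equivalent, as plain podman calls:

    # Stop, then remove, the network's metadata haproxy container.
    import subprocess

    name = 'neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26'
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)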
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.687 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:43 compute-1 kernel: tap61a21643-70: left promiscuous mode
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.690 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:43 compute-1 nova_compute[226101]: 2025-12-06 07:16:43.702 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.704 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a11d3afe-e46f-4bd4-ae0c-2a3f6f056d65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.721 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[82a93b9e-9e90-4aae-b0f6-3219c9ce2a27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.723 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fbdc556e-e717-4181-b80e-9de024661b0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.739 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2cec32c7-5daa-4cba-ad3a-811b33c852a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563170, 'reachable_time': 26850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252359, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.741 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:16:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:16:43.741 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[eda0e663-2dfc-4ed4-bab9-6e15de8f655d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:43 compute-1 systemd[1]: run-netns-ovnmeta\x2d61a21643\x2d77ba\x2d4a09\x2d8184\x2d10dc4bd52b26.mount: Deactivated successfully.
Dec 06 07:16:44 compute-1 nova_compute[226101]: 2025-12-06 07:16:44.174 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:44 compute-1 nova_compute[226101]: 2025-12-06 07:16:44.227 226109 INFO nova.network.neutron [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Port fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 06 07:16:44 compute-1 nova_compute[226101]: 2025-12-06 07:16:44.227 226109 DEBUG nova.network.neutron [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Updating instance_info_cache with network_info: [{"id": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "address": "fa:16:3e:2c:63:76", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9edcc777-b0", "ovs_interfaceid": "9edcc777-b052-41d1-a28c-dbfe02a70e3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:44 compute-1 nova_compute[226101]: 2025-12-06 07:16:44.256 226109 DEBUG oslo_concurrency.lockutils [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-12903f3d-051e-4858-aee4-dea9692252ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:16:44 compute-1 nova_compute[226101]: 2025-12-06 07:16:44.290 226109 DEBUG oslo_concurrency.lockutils [None req-ebc792c3-1449-48da-ae29-6b82becda70c 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-12903f3d-051e-4858-aee4-dea9692252ae-fcd7bd8f-ac37-4f8a-a63b-53f2a4b98412" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:44 compute-1 nova_compute[226101]: 2025-12-06 07:16:44.471 226109 INFO nova.virt.libvirt.driver [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Deleting instance files /var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae_del
Dec 06 07:16:44 compute-1 nova_compute[226101]: 2025-12-06 07:16:44.472 226109 INFO nova.virt.libvirt.driver [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Deletion of /var/lib/nova/instances/12903f3d-051e-4858-aee4-dea9692252ae_del complete
Dec 06 07:16:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:16:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:44.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:16:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:16:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:44.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:16:44 compute-1 nova_compute[226101]: 2025-12-06 07:16:44.516 226109 INFO nova.compute.manager [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Took 1.18 seconds to destroy the instance on the hypervisor.
Dec 06 07:16:44 compute-1 nova_compute[226101]: 2025-12-06 07:16:44.517 226109 DEBUG oslo.service.loopingcall [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:16:44 compute-1 nova_compute[226101]: 2025-12-06 07:16:44.517 226109 DEBUG nova.compute.manager [-] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:16:44 compute-1 nova_compute[226101]: 2025-12-06 07:16:44.517 226109 DEBUG nova.network.neutron [-] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.092 226109 DEBUG nova.compute.manager [req-844ec2b5-56e4-43ba-9756-c20982b2ab47 req-fd7b0d4e-801d-4a77-915f-317a0c746d88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-vif-unplugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.093 226109 DEBUG oslo_concurrency.lockutils [req-844ec2b5-56e4-43ba-9756-c20982b2ab47 req-fd7b0d4e-801d-4a77-915f-317a0c746d88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.093 226109 DEBUG oslo_concurrency.lockutils [req-844ec2b5-56e4-43ba-9756-c20982b2ab47 req-fd7b0d4e-801d-4a77-915f-317a0c746d88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.093 226109 DEBUG oslo_concurrency.lockutils [req-844ec2b5-56e4-43ba-9756-c20982b2ab47 req-fd7b0d4e-801d-4a77-915f-317a0c746d88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.094 226109 DEBUG nova.compute.manager [req-844ec2b5-56e4-43ba-9756-c20982b2ab47 req-fd7b0d4e-801d-4a77-915f-317a0c746d88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] No waiting events found dispatching network-vif-unplugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.094 226109 DEBUG nova.compute.manager [req-844ec2b5-56e4-43ba-9756-c20982b2ab47 req-fd7b0d4e-801d-4a77-915f-317a0c746d88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-vif-unplugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:16:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.572 226109 DEBUG nova.network.neutron [-] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.599 226109 INFO nova.compute.manager [-] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Took 1.08 seconds to deallocate network for instance.
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.645 226109 DEBUG oslo_concurrency.lockutils [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.645 226109 DEBUG oslo_concurrency.lockutils [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.653 226109 DEBUG nova.compute.manager [req-069802af-0927-4a43-8081-9ede775f832d req-3da4ffdb-e014-4a0b-b6ff-f195e28d144b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-vif-deleted-9edcc777-b052-41d1-a28c-dbfe02a70e3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:45 compute-1 nova_compute[226101]: 2025-12-06 07:16:45.706 226109 DEBUG oslo_concurrency.processutils [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:46 compute-1 nova_compute[226101]: 2025-12-06 07:16:46.033 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:16:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3643432773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:46 compute-1 nova_compute[226101]: 2025-12-06 07:16:46.170 226109 DEBUG oslo_concurrency.processutils [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:46 compute-1 nova_compute[226101]: 2025-12-06 07:16:46.176 226109 DEBUG nova.compute.provider_tree [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:16:46 compute-1 nova_compute[226101]: 2025-12-06 07:16:46.196 226109 DEBUG nova.scheduler.client.report [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:16:46 compute-1 nova_compute[226101]: 2025-12-06 07:16:46.223 226109 DEBUG oslo_concurrency.lockutils [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:46 compute-1 nova_compute[226101]: 2025-12-06 07:16:46.268 226109 INFO nova.scheduler.client.report [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Deleted allocations for instance 12903f3d-051e-4858-aee4-dea9692252ae
Dec 06 07:16:46 compute-1 ceph-mon[81689]: pgmap v1760: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 548 KiB/s wr, 3 op/s
Dec 06 07:16:46 compute-1 nova_compute[226101]: 2025-12-06 07:16:46.351 226109 DEBUG oslo_concurrency.lockutils [None req-9a7e4f20-e88d-4230-987d-42cd1d82c4ff 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:16:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:46.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:16:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:46.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:47 compute-1 podman[252383]: 2025-12-06 07:16:47.086215012 +0000 UTC m=+0.074321048 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 07:16:48 compute-1 nova_compute[226101]: 2025-12-06 07:16:48.228 226109 DEBUG nova.compute.manager [req-f7cbcbe7-9b12-4f6f-aa4f-8c4fb8db5b30 req-8a6b2256-a317-40fb-9a51-91284f201d16 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received event network-vif-plugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:48 compute-1 nova_compute[226101]: 2025-12-06 07:16:48.229 226109 DEBUG oslo_concurrency.lockutils [req-f7cbcbe7-9b12-4f6f-aa4f-8c4fb8db5b30 req-8a6b2256-a317-40fb-9a51-91284f201d16 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "12903f3d-051e-4858-aee4-dea9692252ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:48 compute-1 nova_compute[226101]: 2025-12-06 07:16:48.229 226109 DEBUG oslo_concurrency.lockutils [req-f7cbcbe7-9b12-4f6f-aa4f-8c4fb8db5b30 req-8a6b2256-a317-40fb-9a51-91284f201d16 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:48 compute-1 nova_compute[226101]: 2025-12-06 07:16:48.230 226109 DEBUG oslo_concurrency.lockutils [req-f7cbcbe7-9b12-4f6f-aa4f-8c4fb8db5b30 req-8a6b2256-a317-40fb-9a51-91284f201d16 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "12903f3d-051e-4858-aee4-dea9692252ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:48 compute-1 nova_compute[226101]: 2025-12-06 07:16:48.230 226109 DEBUG nova.compute.manager [req-f7cbcbe7-9b12-4f6f-aa4f-8c4fb8db5b30 req-8a6b2256-a317-40fb-9a51-91284f201d16 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] No waiting events found dispatching network-vif-plugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:48 compute-1 nova_compute[226101]: 2025-12-06 07:16:48.230 226109 WARNING nova.compute.manager [req-f7cbcbe7-9b12-4f6f-aa4f-8c4fb8db5b30 req-8a6b2256-a317-40fb-9a51-91284f201d16 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Received unexpected event network-vif-plugged-9edcc777-b052-41d1-a28c-dbfe02a70e3b for instance with vm_state deleted and task_state None.
Dec 06 07:16:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:48.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:16:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:48.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:16:48 compute-1 nova_compute[226101]: 2025-12-06 07:16:48.589 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:49 compute-1 nova_compute[226101]: 2025-12-06 07:16:49.176 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:16:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:50.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:16:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:50.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:50 compute-1 ceph-mon[81689]: pgmap v1761: 305 pgs: 305 active+clean; 209 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 549 KiB/s wr, 27 op/s
Dec 06 07:16:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3643432773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2134509086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:52 compute-1 ceph-mon[81689]: pgmap v1762: 305 pgs: 305 active+clean; 169 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 6.1 KiB/s wr, 29 op/s
Dec 06 07:16:52 compute-1 ceph-mon[81689]: pgmap v1763: 305 pgs: 305 active+clean; 169 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Dec 06 07:16:52 compute-1 podman[252412]: 2025-12-06 07:16:52.067402303 +0000 UTC m=+0.051159913 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:16:52 compute-1 podman[252413]: 2025-12-06 07:16:52.068619856 +0000 UTC m=+0.050569597 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:16:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:16:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:52.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:16:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:52.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:53 compute-1 nova_compute[226101]: 2025-12-06 07:16:53.591 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:54 compute-1 ceph-mon[81689]: pgmap v1764: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Dec 06 07:16:54 compute-1 nova_compute[226101]: 2025-12-06 07:16:54.178 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:16:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:54.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:16:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:54.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:54 compute-1 nova_compute[226101]: 2025-12-06 07:16:54.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:54 compute-1 nova_compute[226101]: 2025-12-06 07:16:54.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:54 compute-1 nova_compute[226101]: 2025-12-06 07:16:54.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:16:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:55 compute-1 nova_compute[226101]: 2025-12-06 07:16:55.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:55 compute-1 nova_compute[226101]: 2025-12-06 07:16:55.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:16:55 compute-1 nova_compute[226101]: 2025-12-06 07:16:55.618 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:16:55 compute-1 nova_compute[226101]: 2025-12-06 07:16:55.618 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:55 compute-1 nova_compute[226101]: 2025-12-06 07:16:55.677 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:55 compute-1 nova_compute[226101]: 2025-12-06 07:16:55.677 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:55 compute-1 nova_compute[226101]: 2025-12-06 07:16:55.678 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:55 compute-1 nova_compute[226101]: 2025-12-06 07:16:55.678 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:16:55 compute-1 nova_compute[226101]: 2025-12-06 07:16:55.678 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:16:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2263186031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:56 compute-1 nova_compute[226101]: 2025-12-06 07:16:56.106 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:56 compute-1 nova_compute[226101]: 2025-12-06 07:16:56.267 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:16:56 compute-1 nova_compute[226101]: 2025-12-06 07:16:56.269 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4692MB free_disk=20.921871185302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:16:56 compute-1 nova_compute[226101]: 2025-12-06 07:16:56.269 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:56 compute-1 nova_compute[226101]: 2025-12-06 07:16:56.269 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2054429614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:56 compute-1 ceph-mon[81689]: pgmap v1765: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec 06 07:16:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3089742917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3693536985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:56.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:56.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.021 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.282 226109 INFO nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 has allocations against this compute host but is not found in the database.
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.282 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.283 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.328 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.332 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:57 compute-1 ceph-mon[81689]: pgmap v1766: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec 06 07:16:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2263186031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3299157153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.506 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.506 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.523 226109 DEBUG nova.compute.manager [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.610 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:16:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2033545108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.777 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.782 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.801 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.828 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.829 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.829 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.837 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:16:57 compute-1 nova_compute[226101]: 2025-12-06 07:16:57.837 226109 INFO nova.compute.claims [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:16:58 compute-1 nova_compute[226101]: 2025-12-06 07:16:58.015 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:16:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1288267147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:58 compute-1 nova_compute[226101]: 2025-12-06 07:16:58.459 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:58 compute-1 nova_compute[226101]: 2025-12-06 07:16:58.464 226109 DEBUG nova.compute.provider_tree [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:16:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2033545108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1288267147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:58.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:16:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:58 compute-1 nova_compute[226101]: 2025-12-06 07:16:58.568 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005403.5667856, 12903f3d-051e-4858-aee4-dea9692252ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:16:58 compute-1 nova_compute[226101]: 2025-12-06 07:16:58.568 226109 INFO nova.compute.manager [-] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] VM Stopped (Lifecycle Event)
Dec 06 07:16:58 compute-1 nova_compute[226101]: 2025-12-06 07:16:58.593 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:58 compute-1 nova_compute[226101]: 2025-12-06 07:16:58.800 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:58 compute-1 nova_compute[226101]: 2025-12-06 07:16:58.801 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:59 compute-1 nova_compute[226101]: 2025-12-06 07:16:59.179 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:59 compute-1 nova_compute[226101]: 2025-12-06 07:16:59.401 226109 DEBUG nova.scheduler.client.report [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:16:59 compute-1 nova_compute[226101]: 2025-12-06 07:16:59.475 226109 DEBUG nova.compute.manager [None req-48f98d8e-f41f-4082-8fe9-a4c752c1fd34 - - - - - -] [instance: 12903f3d-051e-4858-aee4-dea9692252ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:16:59 compute-1 nova_compute[226101]: 2025-12-06 07:16:59.477 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:59 compute-1 nova_compute[226101]: 2025-12-06 07:16:59.478 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:59 compute-1 ceph-mon[81689]: pgmap v1767: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 07:16:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/857765210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:59 compute-1 nova_compute[226101]: 2025-12-06 07:16:59.510 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:59 compute-1 nova_compute[226101]: 2025-12-06 07:16:59.510 226109 DEBUG nova.compute.manager [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:16:59 compute-1 nova_compute[226101]: 2025-12-06 07:16:59.650 226109 DEBUG nova.compute.manager [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:16:59 compute-1 nova_compute[226101]: 2025-12-06 07:16:59.651 226109 DEBUG nova.network.neutron [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:16:59 compute-1 nova_compute[226101]: 2025-12-06 07:16:59.709 226109 INFO nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:16:59 compute-1 nova_compute[226101]: 2025-12-06 07:16:59.730 226109 DEBUG nova.compute.manager [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:17:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1613157721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:00.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:17:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:17:00 compute-1 nova_compute[226101]: 2025-12-06 07:17:00.592 226109 DEBUG nova.policy [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '06f5b46553b24b39a1493d96ec4e503e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35df5125c2cf4d29a6b975951af14910', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.505 226109 DEBUG nova.compute.manager [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.506 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.507 226109 INFO nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Creating image(s)
Dec 06 07:17:01 compute-1 ceph-mon[81689]: pgmap v1768: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:17:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:01.634 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:01.635 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:01.635 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.648 226109 DEBUG nova.storage.rbd_utils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.681 226109 DEBUG nova.storage.rbd_utils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.717 226109 DEBUG nova.storage.rbd_utils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.722 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.791 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.792 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.792 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.793 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.821 226109 DEBUG nova.storage.rbd_utils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:01 compute-1 nova_compute[226101]: 2025-12-06 07:17:01.825 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:02 compute-1 nova_compute[226101]: 2025-12-06 07:17:02.107 226109 DEBUG nova.network.neutron [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Successfully created port: e7c11aa4-dbf4-473c-b956-2063ef2a13a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:17:02 compute-1 nova_compute[226101]: 2025-12-06 07:17:02.167 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:02 compute-1 nova_compute[226101]: 2025-12-06 07:17:02.246 226109 DEBUG nova.storage.rbd_utils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] resizing rbd image d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:17:02 compute-1 nova_compute[226101]: 2025-12-06 07:17:02.355 226109 DEBUG nova.objects.instance [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'migration_context' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:02 compute-1 nova_compute[226101]: 2025-12-06 07:17:02.373 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:17:02 compute-1 nova_compute[226101]: 2025-12-06 07:17:02.374 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Ensure instance console log exists: /var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:17:02 compute-1 nova_compute[226101]: 2025-12-06 07:17:02.374 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:02 compute-1 nova_compute[226101]: 2025-12-06 07:17:02.375 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:02 compute-1 nova_compute[226101]: 2025-12-06 07:17:02.375 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:02.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:02.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:02 compute-1 nova_compute[226101]: 2025-12-06 07:17:02.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:17:03 compute-1 nova_compute[226101]: 2025-12-06 07:17:03.595 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:04 compute-1 ceph-mon[81689]: pgmap v1769: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 07:17:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3941074034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:04 compute-1 nova_compute[226101]: 2025-12-06 07:17:04.230 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:17:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:04.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:17:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:04.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:04 compute-1 nova_compute[226101]: 2025-12-06 07:17:04.635 226109 DEBUG nova.network.neutron [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Successfully updated port: e7c11aa4-dbf4-473c-b956-2063ef2a13a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:17:05 compute-1 nova_compute[226101]: 2025-12-06 07:17:05.095 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:17:05 compute-1 nova_compute[226101]: 2025-12-06 07:17:05.095 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:17:05 compute-1 nova_compute[226101]: 2025-12-06 07:17:05.095 226109 DEBUG nova.network.neutron [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:17:05 compute-1 nova_compute[226101]: 2025-12-06 07:17:05.104 226109 DEBUG nova.compute.manager [req-e44a4550-7975-4482-9aa9-ff9abead9342 req-2a4aa00b-dd50-499e-90be-2c6243309ec4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-changed-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:05 compute-1 nova_compute[226101]: 2025-12-06 07:17:05.105 226109 DEBUG nova.compute.manager [req-e44a4550-7975-4482-9aa9-ff9abead9342 req-2a4aa00b-dd50-499e-90be-2c6243309ec4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Refreshing instance network info cache due to event network-changed-e7c11aa4-dbf4-473c-b956-2063ef2a13a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:17:05 compute-1 nova_compute[226101]: 2025-12-06 07:17:05.105 226109 DEBUG oslo_concurrency.lockutils [req-e44a4550-7975-4482-9aa9-ff9abead9342 req-2a4aa00b-dd50-499e-90be-2c6243309ec4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:17:05 compute-1 ceph-mon[81689]: pgmap v1770: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 13 KiB/s wr, 20 op/s
Dec 06 07:17:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:05 compute-1 nova_compute[226101]: 2025-12-06 07:17:05.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:17:05 compute-1 nova_compute[226101]: 2025-12-06 07:17:05.825 226109 DEBUG nova.network.neutron [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:17:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3670335113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:06.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:06.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:07 compute-1 ceph-mon[81689]: pgmap v1771: 305 pgs: 305 active+clean; 231 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 466 KiB/s wr, 77 op/s
Dec 06 07:17:07 compute-1 nova_compute[226101]: 2025-12-06 07:17:07.397 226109 DEBUG nova.network.neutron [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:17:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2059182180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:08.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:08.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:08 compute-1 nova_compute[226101]: 2025-12-06 07:17:08.598 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:08 compute-1 nova_compute[226101]: 2025-12-06 07:17:08.998 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:17:08 compute-1 nova_compute[226101]: 2025-12-06 07:17:08.998 226109 DEBUG nova.compute.manager [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Instance network_info: |[{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:17:08 compute-1 nova_compute[226101]: 2025-12-06 07:17:08.998 226109 DEBUG oslo_concurrency.lockutils [req-e44a4550-7975-4482-9aa9-ff9abead9342 req-2a4aa00b-dd50-499e-90be-2c6243309ec4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:17:08 compute-1 nova_compute[226101]: 2025-12-06 07:17:08.999 226109 DEBUG nova.network.neutron [req-e44a4550-7975-4482-9aa9-ff9abead9342 req-2a4aa00b-dd50-499e-90be-2c6243309ec4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Refreshing network info cache for port e7c11aa4-dbf4-473c-b956-2063ef2a13a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.001 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Start _get_guest_xml network_info=[{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.122 226109 WARNING nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.128 226109 DEBUG nova.virt.libvirt.host [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.129 226109 DEBUG nova.virt.libvirt.host [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.132 226109 DEBUG nova.virt.libvirt.host [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.132 226109 DEBUG nova.virt.libvirt.host [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.133 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.134 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.134 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.134 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.135 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.135 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.135 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.135 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.135 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.136 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.136 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.136 226109 DEBUG nova.virt.hardware [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.139 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.233 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:09 compute-1 ceph-mon[81689]: pgmap v1772: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 07:17:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/134053312' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:17:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/134053312' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:17:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:17:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/412767804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.743 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.766 226109 DEBUG nova.storage.rbd_utils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:09 compute-1 nova_compute[226101]: 2025-12-06 07:17:09.770 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:17:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3995752661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.196 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.197 226109 DEBUG nova.virt.libvirt.vif [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:16:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.198 226109 DEBUG nova.network.os_vif_util [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.198 226109 DEBUG nova.network.os_vif_util [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:ba:6e,bridge_name='br-int',has_traffic_filtering=True,id=e7c11aa4-dbf4-473c-b956-2063ef2a13a2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c11aa4-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.199 226109 DEBUG nova.objects.instance [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'pci_devices' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.225 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:17:10 compute-1 nova_compute[226101]:   <uuid>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</uuid>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   <name>instance-0000004a</name>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1375778431</nova:name>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:17:09</nova:creationTime>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <nova:port uuid="e7c11aa4-dbf4-473c-b956-2063ef2a13a2">
Dec 06 07:17:10 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <system>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <entry name="serial">d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</entry>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <entry name="uuid">d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</entry>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     </system>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   <os>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   </os>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   <features>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   </features>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk">
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       </source>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk.config">
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       </source>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:17:10 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:17:ba:6e"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <target dev="tape7c11aa4-db"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/console.log" append="off"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <video>
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     </video>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:17:10 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:17:10 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:17:10 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:17:10 compute-1 nova_compute[226101]: </domain>
Dec 06 07:17:10 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
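The XML ending above is the complete guest definition Nova hands to libvirt, logged by _get_guest_xml before the domain is created. A minimal sketch of reading that definition back with the libvirt-python bindings, assuming a local qemu:///system connection; the domain name instance-0000004a is taken from the systemd-machined line later in this log:

    import libvirt  # libvirt-python bindings

    # A read-only connection to the local system hypervisor is enough for inspection.
    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000004a")  # name from the machined/systemd lines below
    print(dom.XMLDesc())  # same XML Nova logged, equivalent to `virsh dumpxml`
    conn.close()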
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.226 226109 DEBUG nova.compute.manager [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Preparing to wait for external event network-vif-plugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.226 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.227 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.227 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
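The three lockutils records above are the named-lock pattern Nova uses to serialize access to its per-instance event table. A rough sketch of the same oslo.concurrency idiom; the function body is illustrative, not Nova's:

    from oslo_concurrency import lockutils

    # The lock name mirrors the "<uuid>-events" lock in the log; lockutils
    # emits the same acquired/released DEBUG lines with waited/held times.
    @lockutils.synchronized("d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events")
    def _create_or_get_event(events, name):
        # Runs with the named lock held, so concurrent waiters and event
        # deliveries cannot race on the shared dict.
        return events.setdefault(name, [])

    events = {}
    _create_or_get_event(events, "network-vif-plugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2")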
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.228 226109 DEBUG nova.virt.libvirt.vif [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:16:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.228 226109 DEBUG nova.network.os_vif_util [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.229 226109 DEBUG nova.network.os_vif_util [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:ba:6e,bridge_name='br-int',has_traffic_filtering=True,id=e7c11aa4-dbf4-473c-b956-2063ef2a13a2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c11aa4-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.229 226109 DEBUG os_vif [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:ba:6e,bridge_name='br-int',has_traffic_filtering=True,id=e7c11aa4-dbf4-473c-b956-2063ef2a13a2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c11aa4-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
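os_vif.plug receives the VIFOpenVSwitch object from the conversion above and dispatches to the 'ovs' plugin, which issues the OVSDB transactions that follow. A hedged standalone sketch of that call path, with field values copied from the logged repr; the InstanceInfo is trimmed to the fields the plugin needs, and object fields should be verified against the installed os-vif:

    import os_vif
    from os_vif.objects import instance_info, network, vif as vif_obj

    os_vif.initialize()  # loads the plugin entry points (ovs, linux_bridge, ...)

    # Values copied from the VIFOpenVSwitch repr logged above.
    my_vif = vif_obj.VIFOpenVSwitch(
        id="e7c11aa4-dbf4-473c-b956-2063ef2a13a2",
        address="fa:16:3e:17:ba:6e",
        bridge_name="br-int",
        vif_name="tape7c11aa4-db",
        plugin="ovs",
        network=network.Network(id="61a21643-77ba-4a09-8184-10dc4bd52b26"))
    info = instance_info.InstanceInfo(
        uuid="d9e82d9c-ae9d-45de-b1bc-1410d9df84a9",
        name="instance-0000004a")

    os_vif.plug(my_vif, info)  # dispatches to the plugin named in my_vif.plugin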
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.230 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.230 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.231 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.232 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.233 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7c11aa4-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.233 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7c11aa4-db, col_values=(('external_ids', {'iface-id': 'e7c11aa4-dbf4-473c-b956-2063ef2a13a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:ba:6e', 'vm-uuid': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
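The AddPortCommand/DbSetCommand pair in one transaction is ovsdbapp's idempotent way of wiring the tap device into br-int and tagging its Interface row so OVN can bind it. A minimal sketch of issuing the same transaction directly with ovsdbapp; the OVSDB socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open_vSwitch database, as os-vif does.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same two commands as the logged transaction: add the port (may_exist
    # makes it idempotent), then set the Neutron ids on its Interface row.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tape7c11aa4-db", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tape7c11aa4-db",
            ("external_ids", {
                "iface-id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2",
                "attached-mac": "fa:16:3e:17:ba:6e"})))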
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.235 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:10 compute-1 NetworkManager[49031]: <info>  [1765005430.2358] manager: (tape7c11aa4-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.237 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.241 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.244 226109 INFO os_vif [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:ba:6e,bridge_name='br-int',has_traffic_filtering=True,id=e7c11aa4-dbf4-473c-b956-2063ef2a13a2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c11aa4-db')
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.303 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.304 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.304 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:17:ba:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.305 226109 INFO nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Using config drive
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.325 226109 DEBUG nova.storage.rbd_utils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/412767804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3995752661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:10.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:10.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.899 226109 INFO nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Creating config drive at /var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/disk.config
Dec 06 07:17:10 compute-1 nova_compute[226101]: 2025-12-06 07:17:10.905 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt4l6k5_g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.053 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt4l6k5_g" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
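The config drive is a plain ISO9660 image with volume label config-2, built from a temporary staging tree. A sketch of the same invocation through oslo.concurrency's processutils; the output path and staging directory here are placeholders:

    from oslo_concurrency import processutils

    # Same flags Nova logged; the staging directory holds the openstack/
    # metadata tree that ends up on the config drive.
    out, err = processutils.execute(
        "/usr/bin/mkisofs", "-o", "/var/tmp/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute", "-quiet", "-J", "-r",
        "-V", "config-2", "/var/tmp/config-drive-staging")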
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.085 226109 DEBUG nova.storage.rbd_utils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.088 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/disk.config d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.587 226109 DEBUG oslo_concurrency.processutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/disk.config d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.588 226109 INFO nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Deleting local config drive /var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/disk.config because it was imported into RBD.
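Nova probes the vms pool for the image, shells out to rbd import, then deletes the local file. A hedged sketch of the same existence probe with the python-rados/python-rbd bindings, with pool, image name, and client id copied from the log:

    import rados
    import rbd

    # Mirrors the probe behind the "rbd image ... does not exist" lines:
    # try to open the image and treat ImageNotFound as absence.
    with rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            try:
                with rbd.Image(ioctx, "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk.config"):
                    print("config drive image exists in pool vms")
            except rbd.ImageNotFound:
                print("config drive image does not exist")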
Dec 06 07:17:11 compute-1 ceph-mon[81689]: pgmap v1773: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 07:17:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2290600617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:11 compute-1 kernel: tape7c11aa4-db: entered promiscuous mode
Dec 06 07:17:11 compute-1 ovn_controller[130279]: 2025-12-06T07:17:11Z|00262|binding|INFO|Claiming lport e7c11aa4-dbf4-473c-b956-2063ef2a13a2 for this chassis.
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.637 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:11 compute-1 ovn_controller[130279]: 2025-12-06T07:17:11Z|00263|binding|INFO|e7c11aa4-dbf4-473c-b956-2063ef2a13a2: Claiming fa:16:3e:17:ba:6e 10.100.0.13
Dec 06 07:17:11 compute-1 NetworkManager[49031]: <info>  [1765005431.6395] manager: (tape7c11aa4-db): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.642 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.644 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.659 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:ba:6e 10.100.0.13'], port_security=['fa:16:3e:17:ba:6e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd78eff0-3454-4e87-a2e0-e9ca1c59d1a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=e7c11aa4-dbf4-473c-b956-2063ef2a13a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.660 139580 INFO neutron.agent.ovn.metadata.agent [-] Port e7c11aa4-dbf4-473c-b956-2063ef2a13a2 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 bound to our chassis
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.662 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:17:11 compute-1 systemd-udevd[252816]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:17:11 compute-1 systemd-machined[190302]: New machine qemu-34-instance-0000004a.
Dec 06 07:17:11 compute-1 NetworkManager[49031]: <info>  [1765005431.6760] device (tape7c11aa4-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:17:11 compute-1 NetworkManager[49031]: <info>  [1765005431.6772] device (tape7c11aa4-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.678 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f438eeca-9ecf-4153-bc09-037c1efa1679]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.680 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61a21643-71 in ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.681 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61a21643-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.682 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[eb96bf97-a53c-4089-8856-fc46e6a54428]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.683 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc3fc35-e5e0-4f82-84ba-5b88dae09536]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.696 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[c18f7abd-ffa9-491f-a901-cfadda24de50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 systemd[1]: Started Virtual Machine qemu-34-instance-0000004a.
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.706 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:11 compute-1 ovn_controller[130279]: 2025-12-06T07:17:11Z|00264|binding|INFO|Setting lport e7c11aa4-dbf4-473c-b956-2063ef2a13a2 ovn-installed in OVS
Dec 06 07:17:11 compute-1 ovn_controller[130279]: 2025-12-06T07:17:11Z|00265|binding|INFO|Setting lport e7c11aa4-dbf4-473c-b956-2063ef2a13a2 up in Southbound
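The claim/up sequence above is ovn-controller writing this chassis into the Port_Binding row and flipping its up flag in the Southbound database. A sketch of inspecting that row with ovsdbapp's OVN_Southbound helper; the socket path is an assumption, and the class name should be checked against the installed ovsdbapp:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/ovn/ovnsb_db.sock", "OVN_Southbound")
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # Look up the Port_Binding row ovn-controller just claimed and report
    # the same 'up' flag it set in the Southbound DB.
    rows = sb.db_find(
        "Port_Binding",
        ("logical_port", "=", "e7c11aa4-dbf4-473c-b956-2063ef2a13a2"),
    ).execute(check_error=True)
    print(rows[0]["up"], rows[0]["chassis"])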
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.711 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.711 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ad03d8f0-d429-4dc5-abbd-46da7a5d890e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.738 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[62845771-daaf-4e65-9d96-0d15f2a5e988]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 systemd-udevd[252820]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.742 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[464fe9dd-d527-4fb9-9bab-35651793d54d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 NetworkManager[49031]: <info>  [1765005431.7445] manager: (tap61a21643-70): new Veth device (/org/freedesktop/NetworkManager/Devices/120)
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.775 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[98d75c1a-16aa-4d1e-9357-fd7ccced92ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.777 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6f3973-305f-46c6-bc25-ff9fd0b2461f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 NetworkManager[49031]: <info>  [1765005431.7979] device (tap61a21643-70): carrier: link connected
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.802 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba1f43e-300d-4a51-92ad-8c9d26d2a057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.816 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2550e426-5513-4ea5-bfa2-9339b9df7f25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571130, 'reachable_time': 32862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252850, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.830 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[467b4b20-83c0-4ebb-b257-bef53c4fdf59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:67b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571130, 'tstamp': 571130}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252851, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.845 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc1e9b0-fbbe-425c-a1c1-cc1b1a364d65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571130, 'reachable_time': 32862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252852, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
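The oversized privsep replies ending above are pyroute2 netlink dumps (RTM_NEWLINK, plus an RTM_NEWADDR at .830) for the veth leg tap61a21643-71 inside the metadata namespace, serialized back across the privsep channel. The same data can be read directly, assuming pyroute2 and root access to the namespace:

    from pyroute2 import NetNS

    # Enter the ovnmeta-<network> namespace and dump its links; each entry
    # is the same ifinfmsg structure seen in the privsep replies.
    with NetNS("ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26") as ns:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"),
                  link.get_attr("IFLA_ADDRESS"),
                  link["state"])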
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.873 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[898507dc-25a4-4a21-9a07-4b35024e4bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.931 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa5d413-ba03-46cc-bc34-a28d58a42006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.933 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.933 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.934 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.935 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:11 compute-1 NetworkManager[49031]: <info>  [1765005431.9364] manager: (tap61a21643-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Dec 06 07:17:11 compute-1 kernel: tap61a21643-70: entered promiscuous mode
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.938 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.939 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:11 compute-1 ovn_controller[130279]: 2025-12-06T07:17:11Z|00266|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.940 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.952 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.952 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.953 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[06fb2e78-3369-4975-b375-e9b830a11dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.954 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:17:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:11.955 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'env', 'PROCESS_TAG=haproxy-61a21643-77ba-4a09-8184-10dc4bd52b26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61a21643-77ba-4a09-8184-10dc4bd52b26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
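The generated haproxy config proxies 169.254.169.254:80 inside the ovnmeta namespace to the UNIX socket named on the 'server metadata' line, adding X-Forwarded-For and the X-OVN-Network-ID header. A hedged probe of that socket with raw HTTP, supplying both headers by hand; instance IP and network ID are copied from the log, and whether the agent answers this exact request depends on its header checks:

    import socket

    # Talking to the socket directly avoids entering the namespace; the
    # X-Forwarded-For value stands in for the header haproxy would add.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/var/lib/neutron/metadata_proxy")
    s.sendall(b"GET /openstack HTTP/1.0\r\n"
              b"X-OVN-Network-ID: 61a21643-77ba-4a09-8184-10dc4bd52b26\r\n"
              b"X-Forwarded-For: 10.100.0.13\r\n\r\n")
    print(s.recv(65536).decode())
    s.close()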
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.983 226109 DEBUG nova.compute.manager [req-9a7bbb9c-baed-4657-9af7-10287e1b6372 req-9acb75f4-9df5-41f6-bf87-864edc8d3b87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-plugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.985 226109 DEBUG oslo_concurrency.lockutils [req-9a7bbb9c-baed-4657-9af7-10287e1b6372 req-9acb75f4-9df5-41f6-bf87-864edc8d3b87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.985 226109 DEBUG oslo_concurrency.lockutils [req-9a7bbb9c-baed-4657-9af7-10287e1b6372 req-9acb75f4-9df5-41f6-bf87-864edc8d3b87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.986 226109 DEBUG oslo_concurrency.lockutils [req-9a7bbb9c-baed-4657-9af7-10287e1b6372 req-9acb75f4-9df5-41f6-bf87-864edc8d3b87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:11 compute-1 nova_compute[226101]: 2025-12-06 07:17:11.986 226109 DEBUG nova.compute.manager [req-9a7bbb9c-baed-4657-9af7-10287e1b6372 req-9acb75f4-9df5-41f6-bf87-864edc8d3b87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Processing event network-vif-plugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
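The req-9a7bbb9c records show Nova receiving the network-vif-plugged notification that Neutron posts once OVN reports the port up, matching the waiter prepared at 07:17:10.226. A sketch of the underlying call to Nova's os-server-external-events API; endpoint and token are placeholders:

    import requests

    # Admin-only API that Neutron calls to deliver the event logged above.
    resp = requests.post(
        "http://nova-api.example.com:8774/v2.1/os-server-external-events",
        headers={"X-Auth-Token": "<admin-token>"},
        json={"events": [{
            "name": "network-vif-plugged",
            "server_uuid": "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9",
            "tag": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2",
            "status": "completed"}]})
    print(resp.status_code, resp.json())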
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.234 226109 DEBUG nova.network.neutron [req-e44a4550-7975-4482-9aa9-ff9abead9342 req-2a4aa00b-dd50-499e-90be-2c6243309ec4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updated VIF entry in instance network info cache for port e7c11aa4-dbf4-473c-b956-2063ef2a13a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.234 226109 DEBUG nova.network.neutron [req-e44a4550-7975-4482-9aa9-ff9abead9342 req-2a4aa00b-dd50-499e-90be-2c6243309ec4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.277 226109 DEBUG oslo_concurrency.lockutils [req-e44a4550-7975-4482-9aa9-ff9abead9342 req-2a4aa00b-dd50-499e-90be-2c6243309ec4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:17:12 compute-1 podman[252891]: 2025-12-06 07:17:12.261109558 +0000 UTC m=+0.018518351 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.462 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005432.4616551, d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.462 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] VM Started (Lifecycle Event)
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.464 226109 DEBUG nova.compute.manager [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.466 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.469 226109 INFO nova.virt.libvirt.driver [-] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Instance spawned successfully.
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.469 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:17:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:17:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:12.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:17:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:12.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.608 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.614 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.614 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.614 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.615 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.615 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.616 226109 DEBUG nova.virt.libvirt.driver [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.620 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.661 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.662 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005432.4639447, d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.662 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] VM Paused (Lifecycle Event)
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.691 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.694 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005432.46603, d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.694 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] VM Resumed (Lifecycle Event)
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.700 226109 INFO nova.compute.manager [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Took 11.19 seconds to spawn the instance on the hypervisor.
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.701 226109 DEBUG nova.compute.manager [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.727 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.731 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.770 226109 INFO nova.compute.manager [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Took 15.18 seconds to build instance.
Dec 06 07:17:12 compute-1 nova_compute[226101]: 2025-12-06 07:17:12.786 226109 DEBUG oslo_concurrency.lockutils [None req-293b293f-8587-4afd-acd0-51c4418a6cc6 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:13 compute-1 ceph-mon[81689]: pgmap v1774: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 07:17:13 compute-1 podman[252891]: 2025-12-06 07:17:13.50312803 +0000 UTC m=+1.260536813 container create a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:17:13 compute-1 systemd[1]: Started libpod-conmon-a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb.scope.
Dec 06 07:17:13 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:17:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3a2b8da106c8a10aa0ead6cf9a7045fcee6c9e2fb3feb04a1668b6774b7307/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:13 compute-1 podman[252891]: 2025-12-06 07:17:13.905596549 +0000 UTC m=+1.663005352 container init a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:17:13 compute-1 podman[252891]: 2025-12-06 07:17:13.911327674 +0000 UTC m=+1.668736447 container start a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:17:13 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[252941]: [NOTICE]   (252945) : New worker (252947) forked
Dec 06 07:17:13 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[252941]: [NOTICE]   (252945) : Loading success.
Dec 06 07:17:14 compute-1 nova_compute[226101]: 2025-12-06 07:17:14.071 226109 DEBUG nova.compute.manager [req-53804ed7-542b-4358-b103-b9c873f27de7 req-fe26421f-4fb9-40de-8315-3761989763a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-plugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:14 compute-1 nova_compute[226101]: 2025-12-06 07:17:14.072 226109 DEBUG oslo_concurrency.lockutils [req-53804ed7-542b-4358-b103-b9c873f27de7 req-fe26421f-4fb9-40de-8315-3761989763a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:14 compute-1 nova_compute[226101]: 2025-12-06 07:17:14.072 226109 DEBUG oslo_concurrency.lockutils [req-53804ed7-542b-4358-b103-b9c873f27de7 req-fe26421f-4fb9-40de-8315-3761989763a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:14 compute-1 nova_compute[226101]: 2025-12-06 07:17:14.073 226109 DEBUG oslo_concurrency.lockutils [req-53804ed7-542b-4358-b103-b9c873f27de7 req-fe26421f-4fb9-40de-8315-3761989763a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:14 compute-1 nova_compute[226101]: 2025-12-06 07:17:14.073 226109 DEBUG nova.compute.manager [req-53804ed7-542b-4358-b103-b9c873f27de7 req-fe26421f-4fb9-40de-8315-3761989763a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-plugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:14 compute-1 nova_compute[226101]: 2025-12-06 07:17:14.073 226109 WARNING nova.compute.manager [req-53804ed7-542b-4358-b103-b9c873f27de7 req-fe26421f-4fb9-40de-8315-3761989763a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received unexpected event network-vif-plugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 for instance with vm_state active and task_state None.
Dec 06 07:17:14 compute-1 nova_compute[226101]: 2025-12-06 07:17:14.235 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:17:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:14.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:17:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:14.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:15 compute-1 ceph-mon[81689]: pgmap v1775: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec 06 07:17:15 compute-1 nova_compute[226101]: 2025-12-06 07:17:15.236 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:15 compute-1 nova_compute[226101]: 2025-12-06 07:17:15.415 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:15 compute-1 NetworkManager[49031]: <info>  [1765005435.4159] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Dec 06 07:17:15 compute-1 NetworkManager[49031]: <info>  [1765005435.4170] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Dec 06 07:17:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:15.449 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:15.450 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:17:15 compute-1 nova_compute[226101]: 2025-12-06 07:17:15.605 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:15 compute-1 ovn_controller[130279]: 2025-12-06T07:17:15Z|00267|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:17:15 compute-1 nova_compute[226101]: 2025-12-06 07:17:15.627 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:15 compute-1 nova_compute[226101]: 2025-12-06 07:17:15.988 226109 DEBUG nova.compute.manager [req-42131dfe-5381-4ea9-a677-4a1bed7325a2 req-a61368c2-2761-4381-941c-709220f5c451 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-changed-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:15 compute-1 nova_compute[226101]: 2025-12-06 07:17:15.988 226109 DEBUG nova.compute.manager [req-42131dfe-5381-4ea9-a677-4a1bed7325a2 req-a61368c2-2761-4381-941c-709220f5c451 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Refreshing instance network info cache due to event network-changed-e7c11aa4-dbf4-473c-b956-2063ef2a13a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:17:15 compute-1 nova_compute[226101]: 2025-12-06 07:17:15.989 226109 DEBUG oslo_concurrency.lockutils [req-42131dfe-5381-4ea9-a677-4a1bed7325a2 req-a61368c2-2761-4381-941c-709220f5c451 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:17:15 compute-1 nova_compute[226101]: 2025-12-06 07:17:15.989 226109 DEBUG oslo_concurrency.lockutils [req-42131dfe-5381-4ea9-a677-4a1bed7325a2 req-a61368c2-2761-4381-941c-709220f5c451 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:17:15 compute-1 nova_compute[226101]: 2025-12-06 07:17:15.989 226109 DEBUG nova.network.neutron [req-42131dfe-5381-4ea9-a677-4a1bed7325a2 req-a61368c2-2761-4381-941c-709220f5c451 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Refreshing network info cache for port e7c11aa4-dbf4-473c-b956-2063ef2a13a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:17:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 07:17:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:16.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 07:17:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:16.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:17 compute-1 ceph-mon[81689]: pgmap v1776: 305 pgs: 305 active+clean; 277 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.1 MiB/s wr, 148 op/s
Dec 06 07:17:18 compute-1 podman[252957]: 2025-12-06 07:17:18.09548347 +0000 UTC m=+0.079280332 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:17:18 compute-1 nova_compute[226101]: 2025-12-06 07:17:18.508 226109 DEBUG nova.network.neutron [req-42131dfe-5381-4ea9-a677-4a1bed7325a2 req-a61368c2-2761-4381-941c-709220f5c451 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updated VIF entry in instance network info cache for port e7c11aa4-dbf4-473c-b956-2063ef2a13a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:17:18 compute-1 nova_compute[226101]: 2025-12-06 07:17:18.509 226109 DEBUG nova.network.neutron [req-42131dfe-5381-4ea9-a677-4a1bed7325a2 req-a61368c2-2761-4381-941c-709220f5c451 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:17:18 compute-1 nova_compute[226101]: 2025-12-06 07:17:18.529 226109 DEBUG oslo_concurrency.lockutils [req-42131dfe-5381-4ea9-a677-4a1bed7325a2 req-a61368c2-2761-4381-941c-709220f5c451 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:17:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:17:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:18.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:17:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:17:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:18.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:17:18 compute-1 nova_compute[226101]: 2025-12-06 07:17:18.905 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:19 compute-1 ceph-mon[81689]: pgmap v1777: 305 pgs: 305 active+clean; 295 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 170 op/s
Dec 06 07:17:19 compute-1 nova_compute[226101]: 2025-12-06 07:17:19.236 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:20 compute-1 nova_compute[226101]: 2025-12-06 07:17:20.237 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:17:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:20.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:17:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:17:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:20.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:17:21 compute-1 ceph-mon[81689]: pgmap v1778: 305 pgs: 305 active+clean; 295 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 06 07:17:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:22.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:22.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:23 compute-1 podman[252984]: 2025-12-06 07:17:23.070483124 +0000 UTC m=+0.051411659 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 07:17:23 compute-1 podman[252983]: 2025-12-06 07:17:23.070843354 +0000 UTC m=+0.054014430 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 07:17:24 compute-1 nova_compute[226101]: 2025-12-06 07:17:24.239 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:24.451 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:24 compute-1 ceph-mon[81689]: pgmap v1779: 305 pgs: 305 active+clean; 295 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 144 op/s
Dec 06 07:17:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:24.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:24.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:25 compute-1 nova_compute[226101]: 2025-12-06 07:17:25.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:26.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:26.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3281632848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:26 compute-1 ceph-mon[81689]: pgmap v1780: 305 pgs: 305 active+clean; 295 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Dec 06 07:17:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:17:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:28.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:17:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:28.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:28 compute-1 ceph-mon[81689]: pgmap v1781: 305 pgs: 305 active+clean; 301 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 157 op/s
Dec 06 07:17:29 compute-1 nova_compute[226101]: 2025-12-06 07:17:29.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:29 compute-1 sudo[253020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:29 compute-1 sudo[253020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:29 compute-1 sudo[253020]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:29 compute-1 sudo[253045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:17:29 compute-1 sudo[253045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:29 compute-1 sudo[253045]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:29 compute-1 sudo[253071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:29 compute-1 sudo[253071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:29 compute-1 sudo[253071]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:29 compute-1 sudo[253096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:17:29 compute-1 sudo[253096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:30 compute-1 nova_compute[226101]: 2025-12-06 07:17:30.242 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:30 compute-1 sudo[253096]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:30 compute-1 ceph-mon[81689]: pgmap v1782: 305 pgs: 305 active+clean; 311 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 06 07:17:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:17:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:30.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:17:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:30.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:31 compute-1 ceph-mon[81689]: pgmap v1783: 305 pgs: 305 active+clean; 311 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Dec 06 07:17:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:32 compute-1 ovn_controller[130279]: 2025-12-06T07:17:32Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:ba:6e 10.100.0.13
Dec 06 07:17:32 compute-1 ovn_controller[130279]: 2025-12-06T07:17:32Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:ba:6e 10.100.0.13
Dec 06 07:17:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:17:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:17:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:17:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:17:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:17:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/234714428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:32.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:32.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:33 compute-1 ceph-mon[81689]: pgmap v1784: 305 pgs: 305 active+clean; 334 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 132 KiB/s rd, 3.5 MiB/s wr, 82 op/s
Dec 06 07:17:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3848123894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/297393884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:34 compute-1 nova_compute[226101]: 2025-12-06 07:17:34.242 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:34.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:17:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:34.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:17:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2369414319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:35 compute-1 nova_compute[226101]: 2025-12-06 07:17:35.251 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:36.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:17:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:36.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:17:37 compute-1 ceph-mon[81689]: pgmap v1785: 305 pgs: 305 active+clean; 334 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Dec 06 07:17:37 compute-1 ceph-mon[81689]: pgmap v1786: 305 pgs: 305 active+clean; 312 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 4.2 MiB/s wr, 137 op/s
Dec 06 07:17:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:38.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:38.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:38 compute-1 ceph-mon[81689]: pgmap v1787: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 5.1 MiB/s wr, 133 op/s
Dec 06 07:17:39 compute-1 nova_compute[226101]: 2025-12-06 07:17:39.244 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:40 compute-1 nova_compute[226101]: 2025-12-06 07:17:40.253 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:40.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:17:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:40.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:17:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2337843887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:41 compute-1 ceph-mon[81689]: pgmap v1788: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 4.3 MiB/s wr, 124 op/s
Dec 06 07:17:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:17:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:42.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:17:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:42.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:43 compute-1 ceph-mon[81689]: pgmap v1789: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.3 MiB/s wr, 267 op/s
Dec 06 07:17:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:43 compute-1 sudo[253152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:43 compute-1 sudo[253152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:43 compute-1 sudo[253152]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:43 compute-1 sudo[253177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:17:43 compute-1 sudo[253177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:43 compute-1 sudo[253177]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:44 compute-1 nova_compute[226101]: 2025-12-06 07:17:44.246 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:44.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:44.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:45 compute-1 nova_compute[226101]: 2025-12-06 07:17:45.256 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:46.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:46.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:47 compute-1 ceph-mon[81689]: pgmap v1790: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 215 op/s
Dec 06 07:17:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:48.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:48.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:48 compute-1 ceph-mon[81689]: pgmap v1791: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.3 MiB/s wr, 231 op/s
Dec 06 07:17:49 compute-1 podman[253202]: 2025-12-06 07:17:49.115302132 +0000 UTC m=+0.088821899 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 06 07:17:49 compute-1 nova_compute[226101]: 2025-12-06 07:17:49.339 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-1 ceph-mon[81689]: pgmap v1792: 305 pgs: 305 active+clean; 309 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 174 op/s
Dec 06 07:17:50 compute-1 nova_compute[226101]: 2025-12-06 07:17:50.259 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:50.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:50.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:51 compute-1 ceph-mon[81689]: pgmap v1793: 305 pgs: 305 active+clean; 309 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 44 KiB/s wr, 163 op/s
Dec 06 07:17:51 compute-1 nova_compute[226101]: 2025-12-06 07:17:51.949 226109 DEBUG oslo_concurrency.lockutils [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:51 compute-1 nova_compute[226101]: 2025-12-06 07:17:51.949 226109 DEBUG oslo_concurrency.lockutils [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:51 compute-1 nova_compute[226101]: 2025-12-06 07:17:51.950 226109 DEBUG nova.objects.instance [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'flavor' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:51 compute-1 nova_compute[226101]: 2025-12-06 07:17:51.974 226109 DEBUG nova.objects.instance [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'pci_requests' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:51 compute-1 nova_compute[226101]: 2025-12-06 07:17:51.985 226109 DEBUG nova.network.neutron [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:17:52 compute-1 nova_compute[226101]: 2025-12-06 07:17:52.530 226109 DEBUG nova.policy [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '06f5b46553b24b39a1493d96ec4e503e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35df5125c2cf4d29a6b975951af14910', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:17:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:52.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:52.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:53 compute-1 ceph-mon[81689]: pgmap v1794: 305 pgs: 305 active+clean; 295 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 47 KiB/s wr, 249 op/s
Dec 06 07:17:53 compute-1 nova_compute[226101]: 2025-12-06 07:17:53.490 226109 DEBUG nova.network.neutron [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Successfully created port: a49d82f2-78d3-4252-867e-1f2512bab05a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:17:54 compute-1 podman[253229]: 2025-12-06 07:17:54.09038257 +0000 UTC m=+0.074804550 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Dec 06 07:17:54 compute-1 podman[253228]: 2025-12-06 07:17:54.091545562 +0000 UTC m=+0.078885091 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:17:54 compute-1 nova_compute[226101]: 2025-12-06 07:17:54.385 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:54.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:54.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.182 226109 DEBUG nova.network.neutron [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Successfully updated port: a49d82f2-78d3-4252-867e-1f2512bab05a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.260 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.310 226109 DEBUG oslo_concurrency.lockutils [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.310 226109 DEBUG oslo_concurrency.lockutils [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.310 226109 DEBUG nova.network.neutron [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:17:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:55 compute-1 ceph-mon[81689]: pgmap v1795: 305 pgs: 305 active+clean; 295 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 22 KiB/s wr, 106 op/s
Dec 06 07:17:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2968549645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.568 226109 DEBUG nova.compute.manager [req-a27372c9-cf75-46e8-ad56-e1146684973e req-6cea7ba8-90bc-4ea5-b1c6-47d4920e5c5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-changed-a49d82f2-78d3-4252-867e-1f2512bab05a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.569 226109 DEBUG nova.compute.manager [req-a27372c9-cf75-46e8-ad56-e1146684973e req-6cea7ba8-90bc-4ea5-b1c6-47d4920e5c5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Refreshing instance network info cache due to event network-changed-a49d82f2-78d3-4252-867e-1f2512bab05a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.569 226109 DEBUG oslo_concurrency.lockutils [req-a27372c9-cf75-46e8-ad56-e1146684973e req-6cea7ba8-90bc-4ea5-b1c6-47d4920e5c5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.718 226109 WARNING nova.network.neutron [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] 61a21643-77ba-4a09-8184-10dc4bd52b26 already exists in list: networks containing: ['61a21643-77ba-4a09-8184-10dc4bd52b26']. ignoring it
Dec 06 07:17:55 compute-1 nova_compute[226101]: 2025-12-06 07:17:55.896 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:17:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:56.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:17:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:56.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:17:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3831876106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1251802086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
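Each of these mon dispatches is a client running the equivalent of 'ceph df --format json' as client.openstack, the periodic pool-capacity poll issued by the OpenStack Ceph drivers. The same query from Python, assuming the ceph CLI and a usable keyring are present on the host:

    import json
    import subprocess

    # Mirrors the dispatched mon command {"prefix": "df", "format": "json"}.
    out = subprocess.run(["ceph", "df", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])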
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.164 226109 DEBUG nova.network.neutron [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.209 226109 DEBUG oslo_concurrency.lockutils [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.211 226109 DEBUG oslo_concurrency.lockutils [req-a27372c9-cf75-46e8-ad56-e1146684973e req-6cea7ba8-90bc-4ea5-b1c6-47d4920e5c5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.211 226109 DEBUG nova.network.neutron [req-a27372c9-cf75-46e8-ad56-e1146684973e req-6cea7ba8-90bc-4ea5-b1c6-47d4920e5c5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Refreshing network info cache for port a49d82f2-78d3-4252-867e-1f2512bab05a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:17:58 compute-1 ceph-mon[81689]: pgmap v1796: 305 pgs: 305 active+clean; 273 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 113 op/s
Dec 06 07:17:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3589256854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.213 226109 DEBUG nova.virt.libvirt.vif [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.214 226109 DEBUG nova.network.os_vif_util [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.214 226109 DEBUG nova.network.os_vif_util [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.215 226109 DEBUG os_vif [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.215 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.216 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.216 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.218 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.219 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49d82f2-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.220 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa49d82f2-78, col_values=(('external_ids', {'iface-id': 'a49d82f2-78d3-4252-867e-1f2512bab05a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:44:56', 'vm-uuid': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.221 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
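The AddBridgeCommand/AddPortCommand/DbSetCommand sequence above is one ovsdbapp transaction: idempotently ensure br-int exists, add the tap interface to it, and stamp the Interface row's external_ids so ovn-controller can match the lport to the VIF. Roughly the same transaction issued directly; the ovsdb-server endpoint below is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # 'tcp:127.0.0.1:6640' is an assumed ovsdb-server endpoint.
    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapa49d82f2-78", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapa49d82f2-78",
            ("external_ids", {
                "iface-id": "a49d82f2-78d3-4252-867e-1f2512bab05a",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:90:44:56",
                "vm-uuid": "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9"})))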
Dec 06 07:17:58 compute-1 NetworkManager[49031]: <info>  [1765005478.2226] manager: (tapa49d82f2-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.224 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.228 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.229 226109 INFO os_vif [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78')
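The conversion-then-plug pair above (nova_to_osvif_vif followed by os_vif.plug) is the public os-vif API. A trimmed-down sketch of the same call, reusing identifiers from the log; actually plugging requires root and a local Open vSwitch, and this is an illustrative stand-in rather than Nova's wiring:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the registered plugins, 'ovs' among them

    net = network.Network(id="61a21643-77ba-4a09-8184-10dc4bd52b26",
                          bridge="br-int")
    v = vif.VIFOpenVSwitch(id="a49d82f2-78d3-4252-867e-1f2512bab05a",
                           address="fa:16:3e:90:44:56",
                           vif_name="tapa49d82f2-78",
                           bridge_name="br-int",
                           plugin="ovs",
                           network=net)
    inst = instance_info.InstanceInfo(
        uuid="d9e82d9c-ae9d-45de-b1bc-1410d9df84a9",
        name="tempest-AttachInterfacesTestJSON-server-1375778431")
    os_vif.plug(v, inst)  # dispatches to the 'ovs' plugin, as logged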
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.229 226109 DEBUG nova.virt.libvirt.vif [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.230 226109 DEBUG nova.network.os_vif_util [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.230 226109 DEBUG nova.network.os_vif_util [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.233 226109 DEBUG nova.virt.libvirt.guest [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] attach device xml: <interface type="ethernet">
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <mac address="fa:16:3e:90:44:56"/>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <model type="virtio"/>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <mtu size="1442"/>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <target dev="tapa49d82f2-78"/>
Dec 06 07:17:58 compute-1 nova_compute[226101]: </interface>
Dec 06 07:17:58 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
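The interface XML above is what Nova hands to libvirt for the hot-plug. The direct equivalent with libvirt-python is a single attachDeviceFlags() call; the domain UUID and XML are copied from the log, and live-only flags are an assumption for illustration:

    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:90:44:56"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapa49d82f2-78"/>
    </interface>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("d9e82d9c-ae9d-45de-b1bc-1410d9df84a9")
    # Hot-plug into the running guest; OR in VIR_DOMAIN_AFFECT_CONFIG to persist.
    dom.attachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    conn.close()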
Dec 06 07:17:58 compute-1 kernel: tapa49d82f2-78: entered promiscuous mode
Dec 06 07:17:58 compute-1 NetworkManager[49031]: <info>  [1765005478.2461] manager: (tapa49d82f2-78): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.247 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:58 compute-1 ovn_controller[130279]: 2025-12-06T07:17:58Z|00268|binding|INFO|Claiming lport a49d82f2-78d3-4252-867e-1f2512bab05a for this chassis.
Dec 06 07:17:58 compute-1 ovn_controller[130279]: 2025-12-06T07:17:58Z|00269|binding|INFO|a49d82f2-78d3-4252-867e-1f2512bab05a: Claiming fa:16:3e:90:44:56 10.100.0.8
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.255 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:44:56 10.100.0.8'], port_security=['fa:16:3e:90:44:56 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=a49d82f2-78d3-4252-867e-1f2512bab05a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.256 139580 INFO neutron.agent.ovn.metadata.agent [-] Port a49d82f2-78d3-4252-867e-1f2512bab05a in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 bound to our chassis
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.258 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
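The metadata agent reacts here through ovsdbapp's event framework: the "Matched UPDATE" line above is a RowEvent subclass whose filter (table Port_Binding, event 'update') selected the row, and whose run() provisions the metadata namespace. A skeletal version of such an event class; the print body is a stand-in for the agent's provisioning logic, and registering it against a live southbound IDL connection is out of scope here:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fires when a Port_Binding row is updated, as in the log above."""

        def __init__(self):
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # Stand-in for the agent's "bound to our chassis" handling.
            print("port %s bound" % row.logical_port)

    evt = PortBindingUpdatedEvent()
    print(evt)  # same repr format as the "Matched UPDATE: ..." line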
Dec 06 07:17:58 compute-1 ovn_controller[130279]: 2025-12-06T07:17:58Z|00270|binding|INFO|Setting lport a49d82f2-78d3-4252-867e-1f2512bab05a ovn-installed in OVS
Dec 06 07:17:58 compute-1 ovn_controller[130279]: 2025-12-06T07:17:58Z|00271|binding|INFO|Setting lport a49d82f2-78d3-4252-867e-1f2512bab05a up in Southbound
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.263 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.267 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.280 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f5e2c930-3263-4274-b18c-202610f15b20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:58 compute-1 systemd-udevd[253271]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:17:58 compute-1 NetworkManager[49031]: <info>  [1765005478.3002] device (tapa49d82f2-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:17:58 compute-1 NetworkManager[49031]: <info>  [1765005478.3010] device (tapa49d82f2-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.310 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[be9f8146-a5f4-4b18-896e-e6a6134e1aa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.315 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[cb40dc12-1923-4345-866a-6f51f4b9db92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.345 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[fa222c11-485b-4b90-bccb-956452e4b7f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.361 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f458c340-63b0-4df0-9024-a86f8101b344]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571130, 'reachable_time': 32862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253278, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.378 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5e49b92b-2a21-4a84-90cd-4ef19206fba8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571140, 'tstamp': 571140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253279, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571143, 'tstamp': 571143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253279, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.380 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.381 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.382 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.383 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.383 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.384 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:58.384 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.516 226109 DEBUG nova.virt.libvirt.driver [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.517 226109 DEBUG nova.virt.libvirt.driver [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.517 226109 DEBUG nova.virt.libvirt.driver [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:17:ba:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.517 226109 DEBUG nova.virt.libvirt.driver [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:90:44:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.547 226109 DEBUG nova.virt.libvirt.guest [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1375778431</nova:name>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:17:58</nova:creationTime>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:17:58 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:17:58 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:17:58 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:17:58 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:17:58 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:17:58 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:17:58 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:17:58 compute-1 nova_compute[226101]:     <nova:port uuid="e7c11aa4-dbf4-473c-b956-2063ef2a13a2">
Dec 06 07:17:58 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:17:58 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:17:58 compute-1 nova_compute[226101]:     <nova:port uuid="a49d82f2-78d3-4252-867e-1f2512bab05a">
Dec 06 07:17:58 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 07:17:58 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:17:58 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:17:58 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:17:58 compute-1 nova_compute[226101]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
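To close out the attach, set_metadata rewrites the instance's descriptive block in the domain XML under the nova namespace shown above. The corresponding libvirt-python call is setMetadata(); the metadata payload, key, and flags below are simplified assumptions for illustration:

    import libvirt

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"
    META = '<instance xmlns="%s"><name>example</name></instance>' % NOVA_NS

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("d9e82d9c-ae9d-45de-b1bc-1410d9df84a9")
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, META,
                    "instance", NOVA_NS, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    conn.close()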
Dec 06 07:17:58 compute-1 nova_compute[226101]: 2025-12-06 07:17:58.574 226109 DEBUG oslo_concurrency.lockutils [None req-9dd9b3ba-5be3-4c6e-8b83-d508ea22ea00 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:17:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:58.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:17:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:17:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:58.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:59 compute-1 nova_compute[226101]: 2025-12-06 07:17:59.054 226109 DEBUG nova.compute.manager [req-aff1e92a-b734-4ba3-b187-f68935ace1fd req-918a8492-b12f-41cd-835e-a20570dd317e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-plugged-a49d82f2-78d3-4252-867e-1f2512bab05a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:59 compute-1 nova_compute[226101]: 2025-12-06 07:17:59.054 226109 DEBUG oslo_concurrency.lockutils [req-aff1e92a-b734-4ba3-b187-f68935ace1fd req-918a8492-b12f-41cd-835e-a20570dd317e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:59 compute-1 nova_compute[226101]: 2025-12-06 07:17:59.054 226109 DEBUG oslo_concurrency.lockutils [req-aff1e92a-b734-4ba3-b187-f68935ace1fd req-918a8492-b12f-41cd-835e-a20570dd317e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:59 compute-1 nova_compute[226101]: 2025-12-06 07:17:59.054 226109 DEBUG oslo_concurrency.lockutils [req-aff1e92a-b734-4ba3-b187-f68935ace1fd req-918a8492-b12f-41cd-835e-a20570dd317e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:59 compute-1 nova_compute[226101]: 2025-12-06 07:17:59.054 226109 DEBUG nova.compute.manager [req-aff1e92a-b734-4ba3-b187-f68935ace1fd req-918a8492-b12f-41cd-835e-a20570dd317e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-plugged-a49d82f2-78d3-4252-867e-1f2512bab05a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:59 compute-1 nova_compute[226101]: 2025-12-06 07:17:59.055 226109 WARNING nova.compute.manager [req-aff1e92a-b734-4ba3-b187-f68935ace1fd req-918a8492-b12f-41cd-835e-a20570dd317e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received unexpected event network-vif-plugged-a49d82f2-78d3-4252-867e-1f2512bab05a for instance with vm_state active and task_state None.
Dec 06 07:17:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:59.195 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:59 compute-1 nova_compute[226101]: 2025-12-06 07:17:59.195 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:17:59.196 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:17:59 compute-1 nova_compute[226101]: 2025-12-06 07:17:59.387 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:59 compute-1 ceph-mon[81689]: pgmap v1797: 305 pgs: 305 active+clean; 249 MiB data, 714 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.7 KiB/s wr, 116 op/s
Dec 06 07:17:59 compute-1 nova_compute[226101]: 2025-12-06 07:17:59.899 226109 DEBUG oslo_concurrency.lockutils [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:59 compute-1 nova_compute[226101]: 2025-12-06 07:17:59.899 226109 DEBUG oslo_concurrency.lockutils [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:59 compute-1 nova_compute[226101]: 2025-12-06 07:17:59.900 226109 DEBUG nova.objects.instance [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'flavor' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:00.197 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:00 compute-1 ovn_controller[130279]: 2025-12-06T07:18:00Z|00272|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:18:00 compute-1 nova_compute[226101]: 2025-12-06 07:18:00.388 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:00.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:00.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:00 compute-1 nova_compute[226101]: 2025-12-06 07:18:00.689 226109 DEBUG nova.network.neutron [req-a27372c9-cf75-46e8-ad56-e1146684973e req-6cea7ba8-90bc-4ea5-b1c6-47d4920e5c5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updated VIF entry in instance network info cache for port a49d82f2-78d3-4252-867e-1f2512bab05a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:18:00 compute-1 nova_compute[226101]: 2025-12-06 07:18:00.690 226109 DEBUG nova.network.neutron [req-a27372c9-cf75-46e8-ad56-e1146684973e req-6cea7ba8-90bc-4ea5-b1c6-47d4920e5c5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:00 compute-1 nova_compute[226101]: 2025-12-06 07:18:00.697 226109 DEBUG nova.objects.instance [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'pci_requests' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:00 compute-1 nova_compute[226101]: 2025-12-06 07:18:00.756 226109 DEBUG oslo_concurrency.lockutils [req-a27372c9-cf75-46e8-ad56-e1146684973e req-6cea7ba8-90bc-4ea5-b1c6-47d4920e5c5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:18:00 compute-1 nova_compute[226101]: 2025-12-06 07:18:00.756 226109 DEBUG nova.network.neutron [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:18:00 compute-1 nova_compute[226101]: 2025-12-06 07:18:00.758 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:00 compute-1 nova_compute[226101]: 2025-12-06 07:18:00.758 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:18:00 compute-1 nova_compute[226101]: 2025-12-06 07:18:00.758 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:01 compute-1 nova_compute[226101]: 2025-12-06 07:18:01.017 226109 DEBUG nova.policy [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '06f5b46553b24b39a1493d96ec4e503e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35df5125c2cf4d29a6b975951af14910', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:18:01 compute-1 ovn_controller[130279]: 2025-12-06T07:18:01Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:44:56 10.100.0.8
Dec 06 07:18:01 compute-1 ovn_controller[130279]: 2025-12-06T07:18:01Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:44:56 10.100.0.8
Dec 06 07:18:01 compute-1 nova_compute[226101]: 2025-12-06 07:18:01.221 226109 DEBUG nova.compute.manager [req-3ace5ffb-23a1-440e-8761-f92e427a6b61 req-6b58c0d9-b12d-496e-8563-e3a99a80ebe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-plugged-a49d82f2-78d3-4252-867e-1f2512bab05a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:01 compute-1 nova_compute[226101]: 2025-12-06 07:18:01.221 226109 DEBUG oslo_concurrency.lockutils [req-3ace5ffb-23a1-440e-8761-f92e427a6b61 req-6b58c0d9-b12d-496e-8563-e3a99a80ebe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:01 compute-1 nova_compute[226101]: 2025-12-06 07:18:01.221 226109 DEBUG oslo_concurrency.lockutils [req-3ace5ffb-23a1-440e-8761-f92e427a6b61 req-6b58c0d9-b12d-496e-8563-e3a99a80ebe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:01 compute-1 nova_compute[226101]: 2025-12-06 07:18:01.222 226109 DEBUG oslo_concurrency.lockutils [req-3ace5ffb-23a1-440e-8761-f92e427a6b61 req-6b58c0d9-b12d-496e-8563-e3a99a80ebe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:01 compute-1 nova_compute[226101]: 2025-12-06 07:18:01.222 226109 DEBUG nova.compute.manager [req-3ace5ffb-23a1-440e-8761-f92e427a6b61 req-6b58c0d9-b12d-496e-8563-e3a99a80ebe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-plugged-a49d82f2-78d3-4252-867e-1f2512bab05a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:01 compute-1 nova_compute[226101]: 2025-12-06 07:18:01.222 226109 WARNING nova.compute.manager [req-3ace5ffb-23a1-440e-8761-f92e427a6b61 req-6b58c0d9-b12d-496e-8563-e3a99a80ebe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received unexpected event network-vif-plugged-a49d82f2-78d3-4252-867e-1f2512bab05a for instance with vm_state active and task_state None.
Dec 06 07:18:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:01.635 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:01.635 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:01.636 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:01 compute-1 ceph-mon[81689]: pgmap v1798: 305 pgs: 305 active+clean; 249 MiB data, 714 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.7 KiB/s wr, 112 op/s
Dec 06 07:18:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3472577622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:02 compute-1 nova_compute[226101]: 2025-12-06 07:18:02.090 226109 DEBUG nova.network.neutron [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Successfully created port: 34bdfa57-1694-46e3-8ea7-7493d4453761 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:18:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:02.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:02.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/13716958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:03 compute-1 nova_compute[226101]: 2025-12-06 07:18:03.001 226109 DEBUG nova.network.neutron [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Successfully updated port: 34bdfa57-1694-46e3-8ea7-7493d4453761 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:18:03 compute-1 nova_compute[226101]: 2025-12-06 07:18:03.021 226109 DEBUG oslo_concurrency.lockutils [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:18:03 compute-1 nova_compute[226101]: 2025-12-06 07:18:03.224 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:03 compute-1 nova_compute[226101]: 2025-12-06 07:18:03.537 226109 DEBUG nova.compute.manager [req-7e6ae3d9-eb1f-4b1e-bf2d-44457269b33a req-ecc0d577-94d2-4ec9-8dc4-1316b55909c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-changed-34bdfa57-1694-46e3-8ea7-7493d4453761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:03 compute-1 nova_compute[226101]: 2025-12-06 07:18:03.537 226109 DEBUG nova.compute.manager [req-7e6ae3d9-eb1f-4b1e-bf2d-44457269b33a req-ecc0d577-94d2-4ec9-8dc4-1316b55909c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Refreshing instance network info cache due to event network-changed-34bdfa57-1694-46e3-8ea7-7493d4453761. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:18:03 compute-1 nova_compute[226101]: 2025-12-06 07:18:03.537 226109 DEBUG oslo_concurrency.lockutils [req-7e6ae3d9-eb1f-4b1e-bf2d-44457269b33a req-ecc0d577-94d2-4ec9-8dc4-1316b55909c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:18:04 compute-1 ceph-mon[81689]: pgmap v1799: 305 pgs: 305 active+clean; 266 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.390 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.494 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.523 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.524 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.524 226109 DEBUG oslo_concurrency.lockutils [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.524 226109 DEBUG nova.network.neutron [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.525 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.526 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.526 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.526 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.526 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.527 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.527 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.574 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.575 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.575 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.575 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.576 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:04.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:04.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.815 226109 WARNING nova.network.neutron [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] 61a21643-77ba-4a09-8184-10dc4bd52b26 already exists in list: networks containing: ['61a21643-77ba-4a09-8184-10dc4bd52b26']. ignoring it
Dec 06 07:18:04 compute-1 nova_compute[226101]: 2025-12-06 07:18:04.816 226109 WARNING nova.network.neutron [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] 61a21643-77ba-4a09-8184-10dc4bd52b26 already exists in list: networks containing: ['61a21643-77ba-4a09-8184-10dc4bd52b26']. ignoring it
Dec 06 07:18:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:18:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2412116112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:05 compute-1 nova_compute[226101]: 2025-12-06 07:18:05.132 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:05 compute-1 nova_compute[226101]: 2025-12-06 07:18:05.214 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:18:05 compute-1 nova_compute[226101]: 2025-12-06 07:18:05.214 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:18:05 compute-1 nova_compute[226101]: 2025-12-06 07:18:05.396 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:18:05 compute-1 nova_compute[226101]: 2025-12-06 07:18:05.397 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4506MB free_disk=20.89682388305664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:18:05 compute-1 nova_compute[226101]: 2025-12-06 07:18:05.397 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:05 compute-1 nova_compute[226101]: 2025-12-06 07:18:05.398 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:05 compute-1 ceph-mon[81689]: pgmap v1800: 305 pgs: 305 active+clean; 266 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Dec 06 07:18:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1036209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:05 compute-1 nova_compute[226101]: 2025-12-06 07:18:05.806 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:18:05 compute-1 nova_compute[226101]: 2025-12-06 07:18:05.807 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:18:05 compute-1 nova_compute[226101]: 2025-12-06 07:18:05.807 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:18:05 compute-1 nova_compute[226101]: 2025-12-06 07:18:05.912 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.087 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.088 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.146 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.187 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.324 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:06.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:06.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:18:06 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3335321634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.793 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.799 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.860 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.891 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.892 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.892 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.893 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:18:06 compute-1 nova_compute[226101]: 2025-12-06 07:18:06.937 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:18:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2412116112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:07 compute-1 nova_compute[226101]: 2025-12-06 07:18:07.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:07 compute-1 nova_compute[226101]: 2025-12-06 07:18:07.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:07 compute-1 nova_compute[226101]: 2025-12-06 07:18:07.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:07 compute-1 nova_compute[226101]: 2025-12-06 07:18:07.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:18:08 compute-1 nova_compute[226101]: 2025-12-06 07:18:08.227 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:08.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:08.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:08 compute-1 ceph-mon[81689]: pgmap v1801: 305 pgs: 305 active+clean; 271 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec 06 07:18:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3335321634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:09 compute-1 nova_compute[226101]: 2025-12-06 07:18:09.392 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:09 compute-1 ceph-mon[81689]: pgmap v1802: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 458 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec 06 07:18:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3749883810' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:18:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3749883810' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:18:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:10.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:10.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:11 compute-1 ceph-mon[81689]: pgmap v1803: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 444 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Dec 06 07:18:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:12.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:12.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.727 226109 DEBUG nova.network.neutron [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.780 226109 DEBUG oslo_concurrency.lockutils [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.781 226109 DEBUG oslo_concurrency.lockutils [req-7e6ae3d9-eb1f-4b1e-bf2d-44457269b33a req-ecc0d577-94d2-4ec9-8dc4-1316b55909c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.781 226109 DEBUG nova.network.neutron [req-7e6ae3d9-eb1f-4b1e-bf2d-44457269b33a req-ecc0d577-94d2-4ec9-8dc4-1316b55909c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Refreshing network info cache for port 34bdfa57-1694-46e3-8ea7-7493d4453761 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.785 226109 DEBUG nova.virt.libvirt.vif [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.785 226109 DEBUG nova.network.os_vif_util [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.786 226109 DEBUG nova.network.os_vif_util [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:aa:1f,bridge_name='br-int',has_traffic_filtering=True,id=34bdfa57-1694-46e3-8ea7-7493d4453761,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34bdfa57-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.787 226109 DEBUG os_vif [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:aa:1f,bridge_name='br-int',has_traffic_filtering=True,id=34bdfa57-1694-46e3-8ea7-7493d4453761,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34bdfa57-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
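
The three DEBUG entries above show Nova converting its VIF dict into an os-vif VIFOpenVSwitch object and handing it to the os-vif library's module-level plug entry point (the frame cited is os_vif/__init__.py:76). A minimal sketch of that call chain, assuming the os-vif Python API; the InstanceInfo field set is abbreviated and 'vif' stands for the VIFOpenVSwitch object printed above:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()  # loads the 'ovs' plugin via stevedore entry points
    info = InstanceInfo(
        uuid='d9e82d9c-ae9d-45de-b1bc-1410d9df84a9',
        name='tempest-AttachInterfacesTestJSON-server-1375778431')
    os_vif.plug(vif, info)  # dispatches to the 'ovs' plugin named in the object
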
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.787 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.788 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.788 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.791 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.791 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34bdfa57-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.792 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34bdfa57-16, col_values=(('external_ids', {'iface-id': '34bdfa57-1694-46e3-8ea7-7493d4453761', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:aa:1f', 'vm-uuid': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
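
The AddPortCommand/DbSetCommand pair above is the entire OVS side of the plug: create the port on br-int, then tag its Interface row so OVN can identify it. A standalone sketch of the same transaction through ovsdbapp, using the values from these entries; the ovsdb-server socket path is an assumption for illustration:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # one transaction, two commands, mirroring txn n=1 idx=0 and idx=1 above
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap34bdfa57-16', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap34bdfa57-16',
            ('external_ids', {
                'iface-id': '34bdfa57-1694-46e3-8ea7-7493d4453761',
                'attached-mac': 'fa:16:3e:2d:aa:1f'})))
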
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.793 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:12 compute-1 NetworkManager[49031]: <info>  [1765005492.7944] manager: (tap34bdfa57-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.794 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.799 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.800 226109 INFO os_vif [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:aa:1f,bridge_name='br-int',has_traffic_filtering=True,id=34bdfa57-1694-46e3-8ea7-7493d4453761,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34bdfa57-16')
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.800 226109 DEBUG nova.virt.libvirt.vif [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", 
"datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.801 226109 DEBUG nova.network.os_vif_util [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.801 226109 DEBUG nova.network.os_vif_util [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:aa:1f,bridge_name='br-int',has_traffic_filtering=True,id=34bdfa57-1694-46e3-8ea7-7493d4453761,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34bdfa57-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.804 226109 DEBUG nova.virt.libvirt.guest [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] attach device xml: <interface type="ethernet">
Dec 06 07:18:12 compute-1 nova_compute[226101]:   <mac address="fa:16:3e:2d:aa:1f"/>
Dec 06 07:18:12 compute-1 nova_compute[226101]:   <model type="virtio"/>
Dec 06 07:18:12 compute-1 nova_compute[226101]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:18:12 compute-1 nova_compute[226101]:   <mtu size="1442"/>
Dec 06 07:18:12 compute-1 nova_compute[226101]:   <target dev="tap34bdfa57-16"/>
Dec 06 07:18:12 compute-1 nova_compute[226101]: </interface>
Dec 06 07:18:12 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
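
The <interface> document above is then passed to libvirt's device-attach API, which is what triggers the kernel's "entered promiscuous mode" line that follows. A sketch of the equivalent call with libvirt-python; the XML literal simply repeats the entry above:

    import libvirt

    IFACE_XML = '''<interface type="ethernet">
      <mac address="fa:16:3e:2d:aa:1f"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap34bdfa57-16"/>
    </interface>'''

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('d9e82d9c-ae9d-45de-b1bc-1410d9df84a9')
    # apply to the running guest and persist in the inactive definition
    dom.attachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE |
                                     libvirt.VIR_DOMAIN_AFFECT_CONFIG)
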
Dec 06 07:18:12 compute-1 kernel: tap34bdfa57-16: entered promiscuous mode
Dec 06 07:18:12 compute-1 NetworkManager[49031]: <info>  [1765005492.8167] manager: (tap34bdfa57-16): new Tun device (/org/freedesktop/NetworkManager/Devices/127)
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.818 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:12 compute-1 ovn_controller[130279]: 2025-12-06T07:18:12Z|00273|binding|INFO|Claiming lport 34bdfa57-1694-46e3-8ea7-7493d4453761 for this chassis.
Dec 06 07:18:12 compute-1 ovn_controller[130279]: 2025-12-06T07:18:12Z|00274|binding|INFO|34bdfa57-1694-46e3-8ea7-7493d4453761: Claiming fa:16:3e:2d:aa:1f 10.100.0.3
Dec 06 07:18:12 compute-1 ovn_controller[130279]: 2025-12-06T07:18:12Z|00275|binding|INFO|Setting lport 34bdfa57-1694-46e3-8ea7-7493d4453761 ovn-installed in OVS
Dec 06 07:18:12 compute-1 ovn_controller[130279]: 2025-12-06T07:18:12Z|00276|binding|INFO|Setting lport 34bdfa57-1694-46e3-8ea7-7493d4453761 up in Southbound
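
ovn-controller claims the logical port by matching the OVS Interface's external_ids:iface-id (written in the transaction at 07:18:12.792) against Port_Binding.logical_port in the OVN Southbound database, then marks the binding up. A read-back sketch of that key via ovsdbapp, reusing the 'api' handle from the earlier snippet (an assumption, not part of this log):

    ext_ids = api.db_get('Interface', 'tap34bdfa57-16',
                         'external_ids').execute(check_error=True)
    # the value ovn-controller matched against Port_Binding.logical_port
    assert ext_ids['iface-id'] == '34bdfa57-1694-46e3-8ea7-7493d4453761'
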
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.834 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:aa:1f 10.100.0.3'], port_security=['fa:16:3e:2d:aa:1f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=34bdfa57-1694-46e3-8ea7-7493d4453761) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.835 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 34bdfa57-1694-46e3-8ea7-7493d4453761 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 bound to our chassis
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.837 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.839 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:12 compute-1 systemd-udevd[253332]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.854 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[68e87bd7-8157-4b95-943b-efdabc825438]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:12 compute-1 NetworkManager[49031]: <info>  [1765005492.8628] device (tap34bdfa57-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:18:12 compute-1 NetworkManager[49031]: <info>  [1765005492.8639] device (tap34bdfa57-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.884 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[49e8a0c7-4e94-4458-9d67-6c190485b221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.889 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[55b62c29-bd28-4121-9717-bc9c5cccc735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.924 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe3cf88-346e-4892-9cf1-11fc8acde535]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.944 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fc65bf-b227-420a-9a2d-630743c50799]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571130, 'reachable_time': 32862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253339, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.965 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8346244c-0f69-488b-8bb5-64c302da9d4b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571140, 'tstamp': 571140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253340, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571143, 'tstamp': 571143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253340, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
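
The two RTM_NEWADDR records in the reply above confirm what "Provisioning metadata for network ..." set up: inside the ovnmeta-<network-uuid> namespace, the tap interface carries both a subnet address (10.100.0.2/28) and the metadata address 169.254.169.254/32. The attribute layout of these privsep replies matches pyroute2's netlink messages; a read-only sketch of the same inspection, assuming pyroute2 and the namespace name from the log:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26') as ns:
        for addr in ns.get_addr():
            attrs = dict(addr['attrs'])
            print(attrs.get('IFA_LABEL'), attrs.get('IFA_ADDRESS'))
    # expected, per the reply above:
    #   tap61a21643-71 10.100.0.2
    #   tap61a21643-71 169.254.169.254
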
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.966 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.967 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:12 compute-1 nova_compute[226101]: 2025-12-06 07:18:12.968 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.969 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.969 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.969 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:12.970 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:13 compute-1 nova_compute[226101]: 2025-12-06 07:18:13.255 226109 DEBUG nova.virt.libvirt.driver [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:18:13 compute-1 nova_compute[226101]: 2025-12-06 07:18:13.256 226109 DEBUG nova.virt.libvirt.driver [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:18:13 compute-1 nova_compute[226101]: 2025-12-06 07:18:13.256 226109 DEBUG nova.virt.libvirt.driver [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:17:ba:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:18:13 compute-1 nova_compute[226101]: 2025-12-06 07:18:13.257 226109 DEBUG nova.virt.libvirt.driver [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:90:44:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:18:13 compute-1 nova_compute[226101]: 2025-12-06 07:18:13.257 226109 DEBUG nova.virt.libvirt.driver [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:2d:aa:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:18:13 compute-1 nova_compute[226101]: 2025-12-06 07:18:13.299 226109 DEBUG nova.virt.libvirt.guest [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:18:13 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:18:13 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1375778431</nova:name>
Dec 06 07:18:13 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:18:13</nova:creationTime>
Dec 06 07:18:13 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:18:13 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:18:13 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:18:13 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:18:13 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:18:13 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:18:13 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     <nova:port uuid="e7c11aa4-dbf4-473c-b956-2063ef2a13a2">
Dec 06 07:18:13 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     <nova:port uuid="a49d82f2-78d3-4252-867e-1f2512bab05a">
Dec 06 07:18:13 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     <nova:port uuid="34bdfa57-1694-46e3-8ea7-7493d4453761">
Dec 06 07:18:13 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:18:13 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:13 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:18:13 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:18:13 compute-1 nova_compute[226101]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
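
The <nova:instance> block above is stored through libvirt's domain-metadata API under the Nova namespace URI shown in the XML, which is how the new port 34bdfa57 ends up listed alongside the two existing ports. A sketch assuming libvirt-python, with nova_md_xml standing for the document above:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('d9e82d9c-ae9d-45de-b1bc-1410d9df84a9')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, nova_md_xml,
                    'nova', 'http://openstack.org/xmlns/libvirt/nova/1.1',
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)
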
Dec 06 07:18:13 compute-1 nova_compute[226101]: 2025-12-06 07:18:13.334 226109 DEBUG oslo_concurrency.lockutils [None req-5084481b-7a1b-4cc6-a9ab-1e2baf4929cb 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 13.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:14 compute-1 ceph-mon[81689]: pgmap v1804: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Dec 06 07:18:14 compute-1 nova_compute[226101]: 2025-12-06 07:18:14.395 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:14.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:14.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:15 compute-1 ovn_controller[130279]: 2025-12-06T07:18:15Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:aa:1f 10.100.0.3
Dec 06 07:18:15 compute-1 ovn_controller[130279]: 2025-12-06T07:18:15Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:aa:1f 10.100.0.3
Dec 06 07:18:15 compute-1 nova_compute[226101]: 2025-12-06 07:18:15.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:16.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:16.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.046 226109 DEBUG nova.network.neutron [req-7e6ae3d9-eb1f-4b1e-bf2d-44457269b33a req-ecc0d577-94d2-4ec9-8dc4-1316b55909c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updated VIF entry in instance network info cache for port 34bdfa57-1694-46e3-8ea7-7493d4453761. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.046 226109 DEBUG nova.network.neutron [req-7e6ae3d9-eb1f-4b1e-bf2d-44457269b33a req-ecc0d577-94d2-4ec9-8dc4-1316b55909c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.104 226109 DEBUG oslo_concurrency.lockutils [req-7e6ae3d9-eb1f-4b1e-bf2d-44457269b33a req-ecc0d577-94d2-4ec9-8dc4-1316b55909c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:18:17 compute-1 ceph-mon[81689]: pgmap v1805: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 363 KiB/s wr, 42 op/s
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.737 226109 DEBUG nova.compute.manager [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-plugged-34bdfa57-1694-46e3-8ea7-7493d4453761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.737 226109 DEBUG oslo_concurrency.lockutils [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.737 226109 DEBUG oslo_concurrency.lockutils [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.738 226109 DEBUG oslo_concurrency.lockutils [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.738 226109 DEBUG nova.compute.manager [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-plugged-34bdfa57-1694-46e3-8ea7-7493d4453761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.738 226109 WARNING nova.compute.manager [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received unexpected event network-vif-plugged-34bdfa57-1694-46e3-8ea7-7493d4453761 for instance with vm_state active and task_state None.
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.738 226109 DEBUG nova.compute.manager [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-plugged-34bdfa57-1694-46e3-8ea7-7493d4453761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.739 226109 DEBUG oslo_concurrency.lockutils [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.739 226109 DEBUG oslo_concurrency.lockutils [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.739 226109 DEBUG oslo_concurrency.lockutils [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.739 226109 DEBUG nova.compute.manager [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-plugged-34bdfa57-1694-46e3-8ea7-7493d4453761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.740 226109 WARNING nova.compute.manager [req-e60be751-1df4-4d2f-8f37-898a958ac704 req-28bb5d74-72e9-435b-816a-3d564d7d9b06 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received unexpected event network-vif-plugged-34bdfa57-1694-46e3-8ea7-7493d4453761 for instance with vm_state active and task_state None.
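
These two WARNINGs are benign in this flow: attach_interface completed and released its lock at 07:18:13.334, so when Neutron's network-vif-plugged notifications arrive at 07:18:17 no thread is registered to wait for them, and the instance is already ACTIVE with no task_state. A condensed sketch of the dispatch logic producing the warning (a paraphrase of nova.compute.manager.InstanceEvents, not the verbatim source):

    import logging

    LOG = logging.getLogger(__name__)

    def pop_instance_event(waiters, instance, event_name):
        # waiters: {instance_uuid: {event_name: threading-style event}}
        event = waiters.get(instance.uuid, {}).pop(event_name, None)
        if event is None:
            # nobody called wait_for_instance_event -> the WARNING above
            LOG.warning('Received unexpected event %s for instance with '
                        'vm_state %s and task_state %s.', event_name,
                        instance.vm_state, instance.task_state)
            return None
        event.send(event_name)  # wake the thread blocked on this event
        return event
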
Dec 06 07:18:17 compute-1 nova_compute[226101]: 2025-12-06 07:18:17.795 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:18 compute-1 nova_compute[226101]: 2025-12-06 07:18:18.155 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:18 compute-1 ceph-mon[81689]: pgmap v1806: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 363 KiB/s wr, 42 op/s
Dec 06 07:18:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:18.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:18.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:19 compute-1 nova_compute[226101]: 2025-12-06 07:18:19.398 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:20 compute-1 podman[253342]: 2025-12-06 07:18:20.140942927 +0000 UTC m=+0.120424883 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:18:20 compute-1 nova_compute[226101]: 2025-12-06 07:18:20.326 226109 DEBUG oslo_concurrency.lockutils [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-50940ab5-b0bd-40c1-a79f-2e5633b08667" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:20 compute-1 nova_compute[226101]: 2025-12-06 07:18:20.326 226109 DEBUG oslo_concurrency.lockutils [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-50940ab5-b0bd-40c1-a79f-2e5633b08667" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:20 compute-1 nova_compute[226101]: 2025-12-06 07:18:20.326 226109 DEBUG nova.objects.instance [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'flavor' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:20 compute-1 ceph-mon[81689]: pgmap v1807: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 111 KiB/s wr, 21 op/s
Dec 06 07:18:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:20.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:20.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e253 e253: 3 total, 3 up, 3 in
Dec 06 07:18:21 compute-1 ceph-mon[81689]: pgmap v1808: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 25 KiB/s wr, 3 op/s
Dec 06 07:18:21 compute-1 ceph-mon[81689]: osdmap e253: 3 total, 3 up, 3 in
Dec 06 07:18:21 compute-1 nova_compute[226101]: 2025-12-06 07:18:21.756 226109 DEBUG nova.objects.instance [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'pci_requests' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:21 compute-1 nova_compute[226101]: 2025-12-06 07:18:21.787 226109 DEBUG nova.network.neutron [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:18:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:18:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3005.4 total, 600.0 interval
                                           Cumulative writes: 25K writes, 101K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 25K writes, 8708 syncs, 2.98 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8319 writes, 30K keys, 8319 commit groups, 1.0 writes per commit group, ingest: 27.85 MB, 0.05 MB/s
                                           Interval WAL: 8319 writes, 3263 syncs, 2.55 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:18:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1013867470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1826628988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:22.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:22.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:22 compute-1 nova_compute[226101]: 2025-12-06 07:18:22.724 226109 DEBUG nova.policy [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '06f5b46553b24b39a1493d96ec4e503e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35df5125c2cf4d29a6b975951af14910', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
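
This DEBUG entry records a non-fatal policy probe: the failure only means this member/reader token may not attach ports to external networks, and since the tempest network here is tenant-owned, allocation proceeds (the port update succeeds at 07:18:23.921 below). A sketch of a non-raising check like this with oslo.policy, assuming a configured Enforcer with registered rules; the credential dict is trimmed from the log line:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)  # assumes rules registered/loaded
    credentials = {'user_id': '06f5b46553b24b39a1493d96ec4e503e',
                   'project_id': '35df5125c2cf4d29a6b975951af14910',
                   'roles': ['reader', 'member']}
    # do_raise=False returns a bool instead of raising PolicyNotAuthorized
    allowed = enforcer.enforce('network:attach_external_network',
                               {}, credentials, do_raise=False)
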
Dec 06 07:18:22 compute-1 nova_compute[226101]: 2025-12-06 07:18:22.797 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:23 compute-1 ceph-mon[81689]: pgmap v1810: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 17 KiB/s wr, 29 op/s
Dec 06 07:18:23 compute-1 nova_compute[226101]: 2025-12-06 07:18:23.773 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:23 compute-1 nova_compute[226101]: 2025-12-06 07:18:23.921 226109 DEBUG nova.network.neutron [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Successfully updated port: 50940ab5-b0bd-40c1-a79f-2e5633b08667 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:18:23 compute-1 nova_compute[226101]: 2025-12-06 07:18:23.936 226109 DEBUG oslo_concurrency.lockutils [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:18:23 compute-1 nova_compute[226101]: 2025-12-06 07:18:23.936 226109 DEBUG oslo_concurrency.lockutils [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:23 compute-1 nova_compute[226101]: 2025-12-06 07:18:23.937 226109 DEBUG nova.network.neutron [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:18:24 compute-1 nova_compute[226101]: 2025-12-06 07:18:24.399 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:24 compute-1 nova_compute[226101]: 2025-12-06 07:18:24.569 226109 WARNING nova.network.neutron [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] 61a21643-77ba-4a09-8184-10dc4bd52b26 already exists in list: networks containing: ['61a21643-77ba-4a09-8184-10dc4bd52b26']. ignoring it
Dec 06 07:18:24 compute-1 nova_compute[226101]: 2025-12-06 07:18:24.570 226109 WARNING nova.network.neutron [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] 61a21643-77ba-4a09-8184-10dc4bd52b26 already exists in list: networks containing: ['61a21643-77ba-4a09-8184-10dc4bd52b26']. ignoring it
Dec 06 07:18:24 compute-1 nova_compute[226101]: 2025-12-06 07:18:24.570 226109 WARNING nova.network.neutron [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] 61a21643-77ba-4a09-8184-10dc4bd52b26 already exists in list: networks containing: ['61a21643-77ba-4a09-8184-10dc4bd52b26']. ignoring it
Dec 06 07:18:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:24.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:24.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:25 compute-1 podman[253369]: 2025-12-06 07:18:25.067099802 +0000 UTC m=+0.055329655 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:18:25 compute-1 podman[253370]: 2025-12-06 07:18:25.087205566 +0000 UTC m=+0.074207265 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 06 07:18:25 compute-1 ceph-mon[81689]: pgmap v1811: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 17 KiB/s wr, 29 op/s
Dec 06 07:18:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:25 compute-1 nova_compute[226101]: 2025-12-06 07:18:25.767 226109 DEBUG nova.compute.manager [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-changed-50940ab5-b0bd-40c1-a79f-2e5633b08667 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:25 compute-1 nova_compute[226101]: 2025-12-06 07:18:25.767 226109 DEBUG nova.compute.manager [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Refreshing instance network info cache due to event network-changed-50940ab5-b0bd-40c1-a79f-2e5633b08667. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:18:25 compute-1 nova_compute[226101]: 2025-12-06 07:18:25.767 226109 DEBUG oslo_concurrency.lockutils [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:18:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:26.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:26.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:27 compute-1 ceph-mon[81689]: pgmap v1812: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 17 KiB/s wr, 73 op/s
Dec 06 07:18:27 compute-1 nova_compute[226101]: 2025-12-06 07:18:27.799 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e254 e254: 3 total, 3 up, 3 in
Dec 06 07:18:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:28.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:28.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.669 226109 DEBUG nova.network.neutron [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "address": "fa:16:3e:e9:d5:ef", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50940ab5-b0", "ovs_interfaceid": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.870 226109 DEBUG oslo_concurrency.lockutils [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.871 226109 DEBUG oslo_concurrency.lockutils [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.871 226109 DEBUG nova.network.neutron [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Refreshing network info cache for port 50940ab5-b0bd-40c1-a79f-2e5633b08667 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.874 226109 DEBUG nova.virt.libvirt.vif [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "address": "fa:16:3e:e9:d5:ef", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50940ab5-b0", "ovs_interfaceid": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.874 226109 DEBUG nova.network.os_vif_util [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "address": "fa:16:3e:e9:d5:ef", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50940ab5-b0", "ovs_interfaceid": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.875 226109 DEBUG nova.network.os_vif_util [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d5:ef,bridge_name='br-int',has_traffic_filtering=True,id=50940ab5-b0bd-40c1-a79f-2e5633b08667,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50940ab5-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.875 226109 DEBUG os_vif [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d5:ef,bridge_name='br-int',has_traffic_filtering=True,id=50940ab5-b0bd-40c1-a79f-2e5633b08667,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50940ab5-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.875 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.876 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.876 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.878 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.879 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50940ab5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.879 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50940ab5-b0, col_values=(('external_ids', {'iface-id': '50940ab5-b0bd-40c1-a79f-2e5633b08667', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:d5:ef', 'vm-uuid': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.880 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:28 compute-1 NetworkManager[49031]: <info>  [1765005508.8814] manager: (tap50940ab5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.882 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.886 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.887 226109 INFO os_vif [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d5:ef,bridge_name='br-int',has_traffic_filtering=True,id=50940ab5-b0bd-40c1-a79f-2e5633b08667,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50940ab5-b0')
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.888 226109 DEBUG nova.virt.libvirt.vif [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "address": "fa:16:3e:e9:d5:ef", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50940ab5-b0", "ovs_interfaceid": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.888 226109 DEBUG nova.network.os_vif_util [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "address": "fa:16:3e:e9:d5:ef", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50940ab5-b0", "ovs_interfaceid": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.889 226109 DEBUG nova.network.os_vif_util [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d5:ef,bridge_name='br-int',has_traffic_filtering=True,id=50940ab5-b0bd-40c1-a79f-2e5633b08667,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50940ab5-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.890 226109 DEBUG nova.virt.libvirt.guest [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] attach device xml: <interface type="ethernet">
Dec 06 07:18:28 compute-1 nova_compute[226101]:   <mac address="fa:16:3e:e9:d5:ef"/>
Dec 06 07:18:28 compute-1 nova_compute[226101]:   <model type="virtio"/>
Dec 06 07:18:28 compute-1 nova_compute[226101]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:18:28 compute-1 nova_compute[226101]:   <mtu size="1442"/>
Dec 06 07:18:28 compute-1 nova_compute[226101]:   <target dev="tap50940ab5-b0"/>
Dec 06 07:18:28 compute-1 nova_compute[226101]: </interface>
Dec 06 07:18:28 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:18:28 compute-1 kernel: tap50940ab5-b0: entered promiscuous mode
Dec 06 07:18:28 compute-1 NetworkManager[49031]: <info>  [1765005508.8989] manager: (tap50940ab5-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Dec 06 07:18:28 compute-1 ovn_controller[130279]: 2025-12-06T07:18:28Z|00277|binding|INFO|Claiming lport 50940ab5-b0bd-40c1-a79f-2e5633b08667 for this chassis.
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.900 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:28 compute-1 ovn_controller[130279]: 2025-12-06T07:18:28Z|00278|binding|INFO|50940ab5-b0bd-40c1-a79f-2e5633b08667: Claiming fa:16:3e:e9:d5:ef 10.100.0.6
Dec 06 07:18:28 compute-1 ovn_controller[130279]: 2025-12-06T07:18:28Z|00279|binding|INFO|Setting lport 50940ab5-b0bd-40c1-a79f-2e5633b08667 ovn-installed in OVS
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.922 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:28 compute-1 nova_compute[226101]: 2025-12-06 07:18:28.926 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:28 compute-1 systemd-udevd[253411]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:18:28 compute-1 ovn_controller[130279]: 2025-12-06T07:18:28Z|00280|binding|INFO|Setting lport 50940ab5-b0bd-40c1-a79f-2e5633b08667 up in Southbound
Dec 06 07:18:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:28.930 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:d5:ef 10.100.0.6'], port_security=['fa:16:3e:e9:d5:ef 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1010844274', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1010844274', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=50940ab5-b0bd-40c1-a79f-2e5633b08667) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:18:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:28.932 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 50940ab5-b0bd-40c1-a79f-2e5633b08667 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 bound to our chassis
Dec 06 07:18:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:28.934 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:18:28 compute-1 NetworkManager[49031]: <info>  [1765005508.9384] device (tap50940ab5-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:18:28 compute-1 NetworkManager[49031]: <info>  [1765005508.9393] device (tap50940ab5-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:18:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:28.949 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b87908f2-e217-40ef-983d-6d121395bc5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:28.974 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[fde802fa-7215-43f6-95b3-d44339920109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:28.977 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f2fd87cf-ae64-4f9c-9d77-cd5e05b6cba8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:29.004 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[02654abe-9180-4b5e-a41e-73ee24330a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:29.020 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2e586e-50a7-4665-b91e-0377d794187d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 784, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 784, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571130, 'reachable_time': 32862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253419, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.021 226109 DEBUG nova.virt.libvirt.driver [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.022 226109 DEBUG nova.virt.libvirt.driver [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.022 226109 DEBUG nova.virt.libvirt.driver [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:17:ba:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.022 226109 DEBUG nova.virt.libvirt.driver [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:90:44:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.022 226109 DEBUG nova.virt.libvirt.driver [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:2d:aa:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.023 226109 DEBUG nova.virt.libvirt.driver [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:e9:d5:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:18:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:29.039 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ab86c976-86ee-4f84-982f-9a717c17586f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571140, 'tstamp': 571140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253420, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571143, 'tstamp': 571143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253420, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:29.040 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.042 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:29.043 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:29.043 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:29.043 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:29.043 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.053 226109 DEBUG nova.virt.libvirt.guest [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:18:29 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:18:29 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1375778431</nova:name>
Dec 06 07:18:29 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:18:29</nova:creationTime>
Dec 06 07:18:29 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:18:29 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:18:29 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:18:29 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:18:29 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:18:29 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:18:29 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     <nova:port uuid="e7c11aa4-dbf4-473c-b956-2063ef2a13a2">
Dec 06 07:18:29 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     <nova:port uuid="a49d82f2-78d3-4252-867e-1f2512bab05a">
Dec 06 07:18:29 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     <nova:port uuid="34bdfa57-1694-46e3-8ea7-7493d4453761">
Dec 06 07:18:29 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     <nova:port uuid="50940ab5-b0bd-40c1-a79f-2e5633b08667">
Dec 06 07:18:29 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:18:29 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:29 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:18:29 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:18:29 compute-1 nova_compute[226101]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.117 226109 DEBUG oslo_concurrency.lockutils [None req-3c073a90-58a9-4c2e-9356-b26c9d37ec8f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-50940ab5-b0bd-40c1-a79f-2e5633b08667" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.334 226109 DEBUG nova.compute.manager [req-945229b2-3484-4ffd-bed7-0379637aa1ad req-dceb9e46-2426-4774-a742-a51f96d72d6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-plugged-50940ab5-b0bd-40c1-a79f-2e5633b08667 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.335 226109 DEBUG oslo_concurrency.lockutils [req-945229b2-3484-4ffd-bed7-0379637aa1ad req-dceb9e46-2426-4774-a742-a51f96d72d6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.335 226109 DEBUG oslo_concurrency.lockutils [req-945229b2-3484-4ffd-bed7-0379637aa1ad req-dceb9e46-2426-4774-a742-a51f96d72d6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.335 226109 DEBUG oslo_concurrency.lockutils [req-945229b2-3484-4ffd-bed7-0379637aa1ad req-dceb9e46-2426-4774-a742-a51f96d72d6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.335 226109 DEBUG nova.compute.manager [req-945229b2-3484-4ffd-bed7-0379637aa1ad req-dceb9e46-2426-4774-a742-a51f96d72d6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-plugged-50940ab5-b0bd-40c1-a79f-2e5633b08667 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.335 226109 WARNING nova.compute.manager [req-945229b2-3484-4ffd-bed7-0379637aa1ad req-dceb9e46-2426-4774-a742-a51f96d72d6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received unexpected event network-vif-plugged-50940ab5-b0bd-40c1-a79f-2e5633b08667 for instance with vm_state active and task_state None.
Dec 06 07:18:29 compute-1 nova_compute[226101]: 2025-12-06 07:18:29.428 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:29 compute-1 ceph-mon[81689]: pgmap v1813: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 125 op/s
Dec 06 07:18:29 compute-1 ceph-mon[81689]: osdmap e254: 3 total, 3 up, 3 in
Dec 06 07:18:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1035860075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:30 compute-1 ovn_controller[130279]: 2025-12-06T07:18:30Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e9:d5:ef 10.100.0.6
Dec 06 07:18:30 compute-1 ovn_controller[130279]: 2025-12-06T07:18:30Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e9:d5:ef 10.100.0.6
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.332 226109 DEBUG nova.network.neutron [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updated VIF entry in instance network info cache for port 50940ab5-b0bd-40c1-a79f-2e5633b08667. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.333 226109 DEBUG nova.network.neutron [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "address": "fa:16:3e:e9:d5:ef", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50940ab5-b0", "ovs_interfaceid": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.348 226109 DEBUG oslo_concurrency.lockutils [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
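
The network_info blob cached above is plain JSON, so its addressing can be pulled apart with nothing but the standard library. A minimal illustrative sketch in Python, run against a trimmed copy of the first VIF entry from the log (this is not nova code, and only the fields needed for the walk are kept):

    import json

    # Trimmed copy of the first VIF entry logged above; all other keys dropped.
    network_info = json.loads('''
    [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.13",
        "floating_ips": [{"address": "192.168.122.242"}]}]}]}}]
    ''')

    # Each VIF carries its subnets, each subnet its fixed IPs, and each fixed
    # IP the floating IPs NATed onto it.
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floating)
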
Dec 06 07:18:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2140050300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.584 226109 DEBUG oslo_concurrency.lockutils [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-a49d82f2-78d3-4252-867e-1f2512bab05a" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.585 226109 DEBUG oslo_concurrency.lockutils [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-a49d82f2-78d3-4252-867e-1f2512bab05a" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.603 226109 DEBUG nova.objects.instance [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'flavor' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.621 226109 DEBUG nova.virt.libvirt.vif [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.621 226109 DEBUG nova.network.os_vif_util [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.622 226109 DEBUG nova.network.os_vif_util [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.626 226109 DEBUG nova.virt.libvirt.guest [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:90:44:56"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa49d82f2-78"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:18:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:30.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.629 226109 DEBUG nova.virt.libvirt.guest [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:90:44:56"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa49d82f2-78"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:18:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:30.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
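
The radosgw beast access-log lines above have a fixed layout: request pointer, client address, user, timestamp, request line, HTTP status, body bytes, and latency. A rough Python sketch for splitting one apart (the regex and the field names are mine, not part of radosgw):

    import re

    # Copy of the beast line logged above.
    line = ('beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous '
            '[06/Dec/2025:07:18:30.628 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')

    pattern = (r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
               r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
               r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s')

    m = re.search(pattern, line)
    if m:
        # e.g. 192.168.122.102 200 0.000000000
        print(m.group('client'), m.group('status'), m.group('latency'))
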
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.633 226109 DEBUG nova.virt.libvirt.driver [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Attempting to detach device tapa49d82f2-78 from instance d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.634 226109 DEBUG nova.virt.libvirt.guest [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] detach device xml: <interface type="ethernet">
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <mac address="fa:16:3e:90:44:56"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <model type="virtio"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <mtu size="1442"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <target dev="tapa49d82f2-78"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]: </interface>
Dec 06 07:18:30 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
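
What the driver is doing at this point maps onto a single libvirt call against the persistent domain definition. A minimal sketch with the python-libvirt bindings, reusing the interface XML and instance UUID from the log above (the connection URI and missing error handling are assumptions; this is an illustration, not nova's actual code path):

    import libvirt

    conn = libvirt.open('qemu:///system')   # URI assumed
    dom = conn.lookupByUUIDString('d9e82d9c-ae9d-45de-b1bc-1410d9df84a9')

    # Interface XML as emitted in the detach message above.
    iface_xml = """<interface type="ethernet">
      <mac address="fa:16:3e:90:44:56"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapa49d82f2-78"/>
    </interface>"""

    # VIR_DOMAIN_AFFECT_CONFIG edits only the persistent definition; the
    # running guest keeps the NIC until a separate live detach succeeds.
    dom.detachDeviceFlags(iface_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
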
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.641 226109 DEBUG nova.virt.libvirt.guest [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:90:44:56"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa49d82f2-78"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.645 226109 DEBUG nova.virt.libvirt.guest [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:90:44:56"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa49d82f2-78"/></interface> not found in domain: <domain type='kvm' id='34'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <name>instance-0000004a</name>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <uuid>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</uuid>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1375778431</nova:name>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:18:29</nova:creationTime>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:port uuid="e7c11aa4-dbf4-473c-b956-2063ef2a13a2">
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:port uuid="a49d82f2-78d3-4252-867e-1f2512bab05a">
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:port uuid="34bdfa57-1694-46e3-8ea7-7493d4453761">
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:port uuid="50940ab5-b0bd-40c1-a79f-2e5633b08667">
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:18:30 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <memory unit='KiB'>131072</memory>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <vcpu placement='static'>1</vcpu>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <resource>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <partition>/machine</partition>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </resource>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <sysinfo type='smbios'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <system>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='manufacturer'>RDO</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='serial'>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='uuid'>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='family'>Virtual Machine</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </system>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <os>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <boot dev='hd'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <smbios mode='sysinfo'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </os>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <features>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <vmcoreinfo state='on'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </features>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <model fallback='forbid'>Nehalem</model>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <feature policy='require' name='x2apic'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <feature policy='require' name='hypervisor'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <feature policy='require' name='vme'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <clock offset='utc'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <timer name='hpet' present='no'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <on_poweroff>destroy</on_poweroff>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <on_reboot>restart</on_reboot>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <on_crash>destroy</on_crash>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <disk type='network' device='disk'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk' index='2'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target dev='vda' bus='virtio'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='virtio-disk0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <disk type='network' device='cdrom'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk.config' index='1'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target dev='sda' bus='sata'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <readonly/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='sata0-0-0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pcie.0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='1' port='0x10'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='2' port='0x11'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='3' port='0x12'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.3'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='4' port='0x13'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.4'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='5' port='0x14'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.5'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='6' port='0x15'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.6'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='7' port='0x16'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.7'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='8' port='0x17'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.8'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='9' port='0x18'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.9'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='10' port='0x19'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.10'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='11' port='0x1a'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.11'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='12' port='0x1b'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.12'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='13' port='0x1c'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.13'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='14' port='0x1d'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.14'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='15' port='0x1e'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.15'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='16' port='0x1f'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.16'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='17' port='0x20'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.17'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='18' port='0x21'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.18'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='19' port='0x22'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.19'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='20' port='0x23'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.20'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='21' port='0x24'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.21'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='22' port='0x25'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.22'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='23' port='0x26'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.23'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='24' port='0x27'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.24'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='25' port='0x28'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.25'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-pci-bridge'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.26'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='usb'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='sata' index='0'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='ide'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:17:ba:6e'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target dev='tape7c11aa4-db'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='net0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:90:44:56'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target dev='tapa49d82f2-78'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='net1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:2d:aa:1f'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target dev='tap34bdfa57-16'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='net2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:e9:d5:ef'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target dev='tap50940ab5-b0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='net3'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <serial type='pty'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/console.log' append='off'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target type='isa-serial' port='0'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <model name='isa-serial'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       </target>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/console.log' append='off'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target type='serial' port='0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </console>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <input type='tablet' bus='usb'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='input0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='usb' bus='0' port='1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <input type='mouse' bus='ps2'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='input1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <input type='keyboard' bus='ps2'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='input2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <listen type='address' address='::0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </graphics>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <audio id='1' type='none'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <video>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='video0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </video>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <watchdog model='itco' action='reset'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='watchdog0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </watchdog>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <memballoon model='virtio'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <stats period='10'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='balloon0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <rng model='virtio'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <backend model='random'>/dev/urandom</backend>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='rng0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <label>system_u:system_r:svirt_t:s0:c324,c832</label>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c324,c832</imagelabel>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <label>+107:+107</label>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <imagelabel>+107:+107</imagelabel>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:18:30 compute-1 nova_compute[226101]: </domain>
Dec 06 07:18:30 compute-1 nova_compute[226101]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.645 226109 INFO nova.virt.libvirt.driver [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully detached device tapa49d82f2-78 from instance d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 from the persistent domain config.
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.646 226109 DEBUG nova.virt.libvirt.driver [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] (1/8): Attempting to detach device tapa49d82f2-78 with device alias net1 from instance d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.646 226109 DEBUG nova.virt.libvirt.guest [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] detach device xml: <interface type="ethernet">
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <mac address="fa:16:3e:90:44:56"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <model type="virtio"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <mtu size="1442"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <target dev="tapa49d82f2-78"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]: </interface>
Dec 06 07:18:30 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
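
The live-side detach just issued is asynchronous: the call returns once QEMU accepts the request, and nova then waits for libvirt's DEVICE_REMOVED event (it arrives below as "DeviceRemovedEvent ... => net1") before trusting that the NIC is gone, retrying the whole step up to eight times as the "(1/8)" counter above indicates. A simplified sketch of that wait with python-libvirt (no timeout or retry handling; the URI is an assumption):

    import libvirt

    libvirt.virEventRegisterDefaultImpl()   # event loop must exist before connecting
    conn = libvirt.open('qemu:///system')   # URI assumed
    dom = conn.lookupByUUIDString('d9e82d9c-ae9d-45de-b1bc-1410d9df84a9')

    removed = []

    def on_removed(conn, dom, dev_alias, opaque):
        removed.append(dev_alias)           # alias arrives as e.g. 'net1', as logged below

    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, on_removed, None)

    iface_xml = ('<interface type="ethernet">'
                 '<mac address="fa:16:3e:90:44:56"/>'
                 '<target dev="tapa49d82f2-78"/></interface>')

    # Live detach only; the persistent config was already edited above.
    dom.detachDeviceFlags(iface_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    while not removed:
        libvirt.virEventRunDefaultImpl()    # pump events until DEVICE_REMOVED arrives
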
Dec 06 07:18:30 compute-1 kernel: tapa49d82f2-78 (unregistering): left promiscuous mode
Dec 06 07:18:30 compute-1 NetworkManager[49031]: <info>  [1765005510.7447] device (tapa49d82f2-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:18:30 compute-1 ovn_controller[130279]: 2025-12-06T07:18:30Z|00281|binding|INFO|Releasing lport a49d82f2-78d3-4252-867e-1f2512bab05a from this chassis (sb_readonly=0)
Dec 06 07:18:30 compute-1 ovn_controller[130279]: 2025-12-06T07:18:30Z|00282|binding|INFO|Setting lport a49d82f2-78d3-4252-867e-1f2512bab05a down in Southbound
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.750 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:30 compute-1 ovn_controller[130279]: 2025-12-06T07:18:30Z|00283|binding|INFO|Removing iface tapa49d82f2-78 ovn-installed in OVS
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.755 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765005510.7551367, d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.756 226109 DEBUG nova.virt.libvirt.driver [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Start waiting for the detach event from libvirt for device tapa49d82f2-78 with device alias net1 for instance d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.757 226109 DEBUG nova.virt.libvirt.guest [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:90:44:56"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa49d82f2-78"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.760 226109 DEBUG nova.virt.libvirt.guest [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:90:44:56"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa49d82f2-78"/></interface> not found in domain: <domain type='kvm' id='34'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <name>instance-0000004a</name>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <uuid>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</uuid>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1375778431</nova:name>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:18:29</nova:creationTime>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:port uuid="e7c11aa4-dbf4-473c-b956-2063ef2a13a2">
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:port uuid="a49d82f2-78d3-4252-867e-1f2512bab05a">
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:port uuid="34bdfa57-1694-46e3-8ea7-7493d4453761">
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:port uuid="50940ab5-b0bd-40c1-a79f-2e5633b08667">
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:18:30 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <memory unit='KiB'>131072</memory>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <vcpu placement='static'>1</vcpu>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <resource>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <partition>/machine</partition>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </resource>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <sysinfo type='smbios'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <system>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='manufacturer'>RDO</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='serial'>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='uuid'>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <entry name='family'>Virtual Machine</entry>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </system>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <os>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <boot dev='hd'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <smbios mode='sysinfo'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </os>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <features>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <vmcoreinfo state='on'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </features>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <model fallback='forbid'>Nehalem</model>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <feature policy='require' name='x2apic'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <feature policy='require' name='hypervisor'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <feature policy='require' name='vme'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <clock offset='utc'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <timer name='hpet' present='no'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <on_poweroff>destroy</on_poweroff>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <on_reboot>restart</on_reboot>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <on_crash>destroy</on_crash>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <disk type='network' device='disk'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk' index='2'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target dev='vda' bus='virtio'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='virtio-disk0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <disk type='network' device='cdrom'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk.config' index='1'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target dev='sda' bus='sata'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <readonly/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='sata0-0-0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pcie.0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='1' port='0x10'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='2' port='0x11'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='3' port='0x12'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.3'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.760 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:44:56 10.100.0.8'], port_security=['fa:16:3e:90:44:56 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=a49d82f2-78d3-4252-867e-1f2512bab05a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='4' port='0x13'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.4'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='5' port='0x14'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.5'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='6' port='0x15'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.6'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='7' port='0x16'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.7'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='8' port='0x17'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.8'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='9' port='0x18'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.9'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='10' port='0x19'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.10'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='11' port='0x1a'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.11'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='12' port='0x1b'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.12'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='13' port='0x1c'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.13'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='14' port='0x1d'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.14'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='15' port='0x1e'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.15'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='16' port='0x1f'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.16'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='17' port='0x20'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.17'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='18' port='0x21'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.18'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='19' port='0x22'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.19'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='20' port='0x23'/>
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.762 139580 INFO neutron.agent.ovn.metadata.agent [-] Port a49d82f2-78d3-4252-867e-1f2512bab05a in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 unbound from our chassis
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.20'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='21' port='0x24'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.21'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='22' port='0x25'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.22'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='23' port='0x26'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.23'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='24' port='0x27'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.24'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target chassis='25' port='0x28'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.25'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model name='pcie-pci-bridge'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='pci.26'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='usb'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <controller type='sata' index='0'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='ide'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:17:ba:6e'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target dev='tape7c11aa4-db'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='net0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:2d:aa:1f'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target dev='tap34bdfa57-16'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='net2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:e9:d5:ef'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target dev='tap50940ab5-b0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='net3'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <serial type='pty'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/console.log' append='off'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target type='isa-serial' port='0'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:         <model name='isa-serial'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       </target>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/console.log' append='off'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <target type='serial' port='0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </console>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <input type='tablet' bus='usb'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='input0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='usb' bus='0' port='1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <input type='mouse' bus='ps2'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='input1'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <input type='keyboard' bus='ps2'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='input2'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <listen type='address' address='::0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </graphics>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <audio id='1' type='none'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <video>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='video0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </video>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <watchdog model='itco' action='reset'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='watchdog0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </watchdog>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <memballoon model='virtio'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <stats period='10'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='balloon0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <rng model='virtio'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <backend model='random'>/dev/urandom</backend>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <alias name='rng0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <label>system_u:system_r:svirt_t:s0:c324,c832</label>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c324,c832</imagelabel>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <label>+107:+107</label>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <imagelabel>+107:+107</imagelabel>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:18:30 compute-1 nova_compute[226101]: </domain>
Dec 06 07:18:30 compute-1 nova_compute[226101]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.760 226109 INFO nova.virt.libvirt.driver [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully detached device tapa49d82f2-78 from instance d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 from the live domain config.
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.761 226109 DEBUG nova.virt.libvirt.vif [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.761 226109 DEBUG nova.network.os_vif_util [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.762 226109 DEBUG nova.network.os_vif_util [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.762 226109 DEBUG os_vif [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.764 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.764 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.764 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49d82f2-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.765 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.767 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.771 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.774 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.778 226109 INFO os_vif [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78')
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.779 226109 DEBUG nova.virt.libvirt.guest [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1375778431</nova:name>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:18:30</nova:creationTime>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:port uuid="e7c11aa4-dbf4-473c-b956-2063ef2a13a2">
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:port uuid="34bdfa57-1694-46e3-8ea7-7493d4453761">
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     <nova:port uuid="50940ab5-b0bd-40c1-a79f-2e5633b08667">
Dec 06 07:18:30 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:18:30 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:30 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:18:30 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:18:30 compute-1 nova_compute[226101]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.783 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2a75876d-52e6-402c-a47d-be31028c39e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.820 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4b34ee-1d5e-49be-91a4-c36c808e52fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.823 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5ce6f8-b560-4e8c-8a71-ddb730eeb3da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.851 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[470f5312-5566-4957-a31d-c6a87cb6d47c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.868 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9bac306f-746d-476c-9f41-febb6af3f231]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571130, 'reachable_time': 32862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253431, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.882 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4cd019-4130-41c2-a9fb-26a657b6f6ae]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571140, 'tstamp': 571140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253432, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571143, 'tstamp': 571143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253432, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.884 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.885 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:30 compute-1 nova_compute[226101]: 2025-12-06 07:18:30.887 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.887 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.888 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.889 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:30.890 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.462 226109 DEBUG nova.compute.manager [req-e30d6dea-8e8d-42e5-bdb3-f2e079b32f9d req-0ccd53a6-e75c-497a-807b-8111c661fc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-plugged-50940ab5-b0bd-40c1-a79f-2e5633b08667 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.462 226109 DEBUG oslo_concurrency.lockutils [req-e30d6dea-8e8d-42e5-bdb3-f2e079b32f9d req-0ccd53a6-e75c-497a-807b-8111c661fc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.462 226109 DEBUG oslo_concurrency.lockutils [req-e30d6dea-8e8d-42e5-bdb3-f2e079b32f9d req-0ccd53a6-e75c-497a-807b-8111c661fc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.463 226109 DEBUG oslo_concurrency.lockutils [req-e30d6dea-8e8d-42e5-bdb3-f2e079b32f9d req-0ccd53a6-e75c-497a-807b-8111c661fc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.463 226109 DEBUG nova.compute.manager [req-e30d6dea-8e8d-42e5-bdb3-f2e079b32f9d req-0ccd53a6-e75c-497a-807b-8111c661fc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-plugged-50940ab5-b0bd-40c1-a79f-2e5633b08667 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.463 226109 WARNING nova.compute.manager [req-e30d6dea-8e8d-42e5-bdb3-f2e079b32f9d req-0ccd53a6-e75c-497a-807b-8111c661fc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received unexpected event network-vif-plugged-50940ab5-b0bd-40c1-a79f-2e5633b08667 for instance with vm_state active and task_state None.
Dec 06 07:18:31 compute-1 ceph-mon[81689]: pgmap v1815: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 9.2 KiB/s wr, 139 op/s
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.487 226109 DEBUG nova.compute.manager [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-deleted-a49d82f2-78d3-4252-867e-1f2512bab05a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.487 226109 INFO nova.compute.manager [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Neutron deleted interface a49d82f2-78d3-4252-867e-1f2512bab05a; detaching it from the instance and deleting it from the info cache
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.487 226109 DEBUG nova.network.neutron [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "address": "fa:16:3e:e9:d5:ef", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50940ab5-b0", "ovs_interfaceid": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.515 226109 DEBUG nova.objects.instance [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lazy-loading 'system_metadata' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.539 226109 DEBUG nova.objects.instance [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lazy-loading 'flavor' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.550 226109 DEBUG oslo_concurrency.lockutils [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.551 226109 DEBUG oslo_concurrency.lockutils [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.551 226109 DEBUG nova.network.neutron [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.560 226109 DEBUG nova.virt.libvirt.vif [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.561 226109 DEBUG nova.network.os_vif_util [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converting VIF {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.561 226109 DEBUG nova.network.os_vif_util [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.563 226109 DEBUG nova.virt.libvirt.guest [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:90:44:56"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa49d82f2-78"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.566 226109 DEBUG nova.virt.libvirt.guest [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:90:44:56"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa49d82f2-78"/></interface> not found in domain: <domain type='kvm' id='34'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <name>instance-0000004a</name>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <uuid>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</uuid>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1375778431</nova:name>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:18:30</nova:creationTime>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:port uuid="e7c11aa4-dbf4-473c-b956-2063ef2a13a2">
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:port uuid="34bdfa57-1694-46e3-8ea7-7493d4453761">
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:port uuid="50940ab5-b0bd-40c1-a79f-2e5633b08667">
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:18:31 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <memory unit='KiB'>131072</memory>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <vcpu placement='static'>1</vcpu>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <resource>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <partition>/machine</partition>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </resource>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <sysinfo type='smbios'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <system>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='manufacturer'>RDO</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='serial'>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='uuid'>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='family'>Virtual Machine</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </system>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <os>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <boot dev='hd'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <smbios mode='sysinfo'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </os>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <features>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <vmcoreinfo state='on'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </features>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <model fallback='forbid'>Nehalem</model>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <feature policy='require' name='x2apic'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <feature policy='require' name='hypervisor'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <feature policy='require' name='vme'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <clock offset='utc'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <timer name='hpet' present='no'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <on_poweroff>destroy</on_poweroff>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <on_reboot>restart</on_reboot>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <on_crash>destroy</on_crash>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <disk type='network' device='disk'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk' index='2'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target dev='vda' bus='virtio'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='virtio-disk0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <disk type='network' device='cdrom'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk.config' index='1'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target dev='sda' bus='sata'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <readonly/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='sata0-0-0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pcie.0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='1' port='0x10'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='2' port='0x11'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='3' port='0x12'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.3'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='4' port='0x13'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.4'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='5' port='0x14'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.5'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='6' port='0x15'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.6'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='7' port='0x16'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.7'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='8' port='0x17'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.8'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='9' port='0x18'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.9'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='10' port='0x19'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.10'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='11' port='0x1a'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.11'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='12' port='0x1b'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.12'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='13' port='0x1c'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.13'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='14' port='0x1d'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.14'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='15' port='0x1e'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.15'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='16' port='0x1f'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.16'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='17' port='0x20'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.17'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='18' port='0x21'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.18'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='19' port='0x22'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.19'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='20' port='0x23'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.20'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='21' port='0x24'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.21'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='22' port='0x25'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.22'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='23' port='0x26'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.23'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='24' port='0x27'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.24'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='25' port='0x28'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.25'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-pci-bridge'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.26'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='usb'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='sata' index='0'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='ide'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:17:ba:6e'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target dev='tape7c11aa4-db'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='net0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:2d:aa:1f'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target dev='tap34bdfa57-16'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='net2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:e9:d5:ef'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target dev='tap50940ab5-b0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='net3'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <serial type='pty'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/console.log' append='off'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target type='isa-serial' port='0'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <model name='isa-serial'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       </target>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/console.log' append='off'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target type='serial' port='0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </console>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <input type='tablet' bus='usb'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='input0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='usb' bus='0' port='1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <input type='mouse' bus='ps2'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='input1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <input type='keyboard' bus='ps2'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='input2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <listen type='address' address='::0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </graphics>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <audio id='1' type='none'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <video>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='video0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </video>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <watchdog model='itco' action='reset'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='watchdog0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </watchdog>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <memballoon model='virtio'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <stats period='10'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='balloon0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <rng model='virtio'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <backend model='random'>/dev/urandom</backend>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='rng0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <label>system_u:system_r:svirt_t:s0:c324,c832</label>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c324,c832</imagelabel>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <label>+107:+107</label>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <imagelabel>+107:+107</imagelabel>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:18:31 compute-1 nova_compute[226101]: </domain>
Dec 06 07:18:31 compute-1 nova_compute[226101]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.566 226109 DEBUG nova.virt.libvirt.guest [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:90:44:56"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa49d82f2-78"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.573 226109 DEBUG nova.virt.libvirt.guest [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:90:44:56"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa49d82f2-78"/></interface> not found in domain: <domain type='kvm' id='34'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <name>instance-0000004a</name>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <uuid>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</uuid>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1375778431</nova:name>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:18:30</nova:creationTime>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:port uuid="e7c11aa4-dbf4-473c-b956-2063ef2a13a2">
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:port uuid="34bdfa57-1694-46e3-8ea7-7493d4453761">
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:port uuid="50940ab5-b0bd-40c1-a79f-2e5633b08667">
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:18:31 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <memory unit='KiB'>131072</memory>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <vcpu placement='static'>1</vcpu>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <resource>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <partition>/machine</partition>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </resource>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <sysinfo type='smbios'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <system>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='manufacturer'>RDO</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='serial'>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='uuid'>d9e82d9c-ae9d-45de-b1bc-1410d9df84a9</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <entry name='family'>Virtual Machine</entry>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </system>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <os>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <boot dev='hd'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <smbios mode='sysinfo'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </os>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <features>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <vmcoreinfo state='on'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </features>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <model fallback='forbid'>Nehalem</model>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <feature policy='require' name='x2apic'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <feature policy='require' name='hypervisor'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <feature policy='require' name='vme'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <clock offset='utc'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <timer name='hpet' present='no'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <on_poweroff>destroy</on_poweroff>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <on_reboot>restart</on_reboot>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <on_crash>destroy</on_crash>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <disk type='network' device='disk'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk' index='2'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target dev='vda' bus='virtio'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='virtio-disk0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <disk type='network' device='cdrom'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <auth username='openstack'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <source protocol='rbd' name='vms/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_disk.config' index='1'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target dev='sda' bus='sata'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <readonly/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='sata0-0-0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pcie.0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='1' port='0x10'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='2' port='0x11'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='3' port='0x12'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.3'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='4' port='0x13'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.4'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='5' port='0x14'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.5'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='6' port='0x15'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.6'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='7' port='0x16'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.7'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='8' port='0x17'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.8'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='9' port='0x18'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.9'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='10' port='0x19'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.10'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='11' port='0x1a'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.11'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='12' port='0x1b'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.12'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='13' port='0x1c'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.13'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='14' port='0x1d'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.14'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='15' port='0x1e'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.15'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='16' port='0x1f'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.16'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='17' port='0x20'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.17'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='18' port='0x21'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.18'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='19' port='0x22'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.19'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='20' port='0x23'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.20'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='21' port='0x24'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.21'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='22' port='0x25'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.22'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='23' port='0x26'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.23'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='24' port='0x27'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.24'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-root-port'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target chassis='25' port='0x28'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.25'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model name='pcie-pci-bridge'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='pci.26'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='usb'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <controller type='sata' index='0'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='ide'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </controller>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:17:ba:6e'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target dev='tape7c11aa4-db'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='net0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:2d:aa:1f'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target dev='tap34bdfa57-16'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='net2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <interface type='ethernet'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mac address='fa:16:3e:e9:d5:ef'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target dev='tap50940ab5-b0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model type='virtio'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <mtu size='1442'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='net3'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <serial type='pty'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/console.log' append='off'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target type='isa-serial' port='0'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:         <model name='isa-serial'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       </target>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <source path='/dev/pts/0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <log file='/var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9/console.log' append='off'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <target type='serial' port='0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='serial0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </console>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <input type='tablet' bus='usb'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='input0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='usb' bus='0' port='1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <input type='mouse' bus='ps2'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='input1'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <input type='keyboard' bus='ps2'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='input2'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </input>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <listen type='address' address='::0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </graphics>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <audio id='1' type='none'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <video>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='video0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </video>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <watchdog model='itco' action='reset'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='watchdog0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </watchdog>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <memballoon model='virtio'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <stats period='10'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='balloon0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <rng model='virtio'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <backend model='random'>/dev/urandom</backend>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <alias name='rng0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <label>system_u:system_r:svirt_t:s0:c324,c832</label>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c324,c832</imagelabel>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <label>+107:+107</label>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <imagelabel>+107:+107</imagelabel>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </seclabel>
Dec 06 07:18:31 compute-1 nova_compute[226101]: </domain>
Dec 06 07:18:31 compute-1 nova_compute[226101]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.574 226109 WARNING nova.virt.libvirt.driver [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Detaching interface fa:16:3e:90:44:56 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapa49d82f2-78' not found.
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.575 226109 DEBUG nova.virt.libvirt.vif [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.575 226109 DEBUG nova.network.os_vif_util [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converting VIF {"id": "a49d82f2-78d3-4252-867e-1f2512bab05a", "address": "fa:16:3e:90:44:56", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49d82f2-78", "ovs_interfaceid": "a49d82f2-78d3-4252-867e-1f2512bab05a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.575 226109 DEBUG nova.network.os_vif_util [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.576 226109 DEBUG os_vif [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.576 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.577 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49d82f2-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.577 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.578 226109 INFO os_vif [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:44:56,bridge_name='br-int',has_traffic_filtering=True,id=a49d82f2-78d3-4252-867e-1f2512bab05a,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49d82f2-78')
Dec 06 07:18:31 compute-1 nova_compute[226101]: 2025-12-06 07:18:31.579 226109 DEBUG nova.virt.libvirt.guest [req-948c26bb-5619-43e1-8466-49652105dc6b req-b3d1e73e-5109-4786-8fc0-d3466a6168af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1375778431</nova:name>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:creationTime>2025-12-06 07:18:31</nova:creationTime>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:flavor name="m1.nano">
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:memory>128</nova:memory>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:disk>1</nova:disk>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:swap>0</nova:swap>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </nova:flavor>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:owner>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </nova:owner>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   <nova:ports>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:port uuid="e7c11aa4-dbf4-473c-b956-2063ef2a13a2">
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:port uuid="34bdfa57-1694-46e3-8ea7-7493d4453761">
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     <nova:port uuid="50940ab5-b0bd-40c1-a79f-2e5633b08667">
Dec 06 07:18:31 compute-1 nova_compute[226101]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:18:31 compute-1 nova_compute[226101]:     </nova:port>
Dec 06 07:18:31 compute-1 nova_compute[226101]:   </nova:ports>
Dec 06 07:18:31 compute-1 nova_compute[226101]: </nova:instance>
Dec 06 07:18:31 compute-1 nova_compute[226101]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.529 226109 DEBUG nova.compute.manager [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-unplugged-a49d82f2-78d3-4252-867e-1f2512bab05a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.529 226109 DEBUG oslo_concurrency.lockutils [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.530 226109 DEBUG oslo_concurrency.lockutils [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.530 226109 DEBUG oslo_concurrency.lockutils [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.530 226109 DEBUG nova.compute.manager [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-unplugged-a49d82f2-78d3-4252-867e-1f2512bab05a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.531 226109 WARNING nova.compute.manager [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received unexpected event network-vif-unplugged-a49d82f2-78d3-4252-867e-1f2512bab05a for instance with vm_state active and task_state None.
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.531 226109 DEBUG nova.compute.manager [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-plugged-a49d82f2-78d3-4252-867e-1f2512bab05a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.531 226109 DEBUG oslo_concurrency.lockutils [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.531 226109 DEBUG oslo_concurrency.lockutils [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.532 226109 DEBUG oslo_concurrency.lockutils [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.532 226109 DEBUG nova.compute.manager [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-plugged-a49d82f2-78d3-4252-867e-1f2512bab05a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.532 226109 WARNING nova.compute.manager [req-71fc8642-fc13-43bd-b2aa-51429e7e4b0c req-1e046fbe-6f44-4fff-be24-5451d755a3dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received unexpected event network-vif-plugged-a49d82f2-78d3-4252-867e-1f2512bab05a for instance with vm_state active and task_state None.
Dec 06 07:18:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:32.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:32.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.787 139580 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b41ff1ff-9205-48c9-86de-7050da84b091 with type ""
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.788 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:d5:ef 10.100.0.6'], port_security=['fa:16:3e:e9:d5:ef 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1010844274', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1010844274', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=50940ab5-b0bd-40c1-a79f-2e5633b08667) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.789 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 50940ab5-b0bd-40c1-a79f-2e5633b08667 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 unbound from our chassis
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.790 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:18:32 compute-1 ovn_controller[130279]: 2025-12-06T07:18:32Z|00284|binding|INFO|Removing iface tap50940ab5-b0 ovn-installed in OVS
Dec 06 07:18:32 compute-1 ovn_controller[130279]: 2025-12-06T07:18:32Z|00285|binding|INFO|Removing lport 50940ab5-b0bd-40c1-a79f-2e5633b08667 ovn-installed in OVS
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.805 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[819f0951-4b35-4f72-a95e-cf726c03d1a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.806 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.884 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.898 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5d737b-923b-4531-8f0c-4586afadd862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.901 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[0381ad39-b64f-439f-b6b9-949a08afb7d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.930 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[75e75c7b-c059-43b6-a608-0ef7d50eabf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.948 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[84f6da5b-1898-41da-996f-86cf23e7f43d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 784, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 784, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571130, 'reachable_time': 32862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253438, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.962 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[47f74926-7fc7-4bb8-b585-81fb6ee30677]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571140, 'tstamp': 571140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253439, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571143, 'tstamp': 571143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253439, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.963 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.964 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.965 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.966 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.966 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.966 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:32.966 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.986 226109 DEBUG oslo_concurrency.lockutils [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.987 226109 DEBUG oslo_concurrency.lockutils [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.987 226109 DEBUG oslo_concurrency.lockutils [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.987 226109 DEBUG oslo_concurrency.lockutils [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.988 226109 DEBUG oslo_concurrency.lockutils [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.989 226109 INFO nova.compute.manager [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Terminating instance
Dec 06 07:18:32 compute-1 nova_compute[226101]: 2025-12-06 07:18:32.990 226109 DEBUG nova.compute.manager [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:18:33 compute-1 kernel: tape7c11aa4-db (unregistering): left promiscuous mode
Dec 06 07:18:33 compute-1 NetworkManager[49031]: <info>  [1765005513.0451] device (tape7c11aa4-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.051 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 ovn_controller[130279]: 2025-12-06T07:18:33Z|00286|binding|INFO|Releasing lport e7c11aa4-dbf4-473c-b956-2063ef2a13a2 from this chassis (sb_readonly=0)
Dec 06 07:18:33 compute-1 ovn_controller[130279]: 2025-12-06T07:18:33Z|00287|binding|INFO|Setting lport e7c11aa4-dbf4-473c-b956-2063ef2a13a2 down in Southbound
Dec 06 07:18:33 compute-1 ovn_controller[130279]: 2025-12-06T07:18:33Z|00288|binding|INFO|Removing iface tape7c11aa4-db ovn-installed in OVS
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.053 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.067 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.068 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:ba:6e 10.100.0.13'], port_security=['fa:16:3e:17:ba:6e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd78eff0-3454-4e87-a2e0-e9ca1c59d1a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=e7c11aa4-dbf4-473c-b956-2063ef2a13a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:18:33 compute-1 kernel: tap34bdfa57-16 (unregistering): left promiscuous mode
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.070 139580 INFO neutron.agent.ovn.metadata.agent [-] Port e7c11aa4-dbf4-473c-b956-2063ef2a13a2 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 unbound from our chassis
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.071 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:18:33 compute-1 NetworkManager[49031]: <info>  [1765005513.0739] device (tap34bdfa57-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.086 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 ovn_controller[130279]: 2025-12-06T07:18:33Z|00289|binding|INFO|Releasing lport 34bdfa57-1694-46e3-8ea7-7493d4453761 from this chassis (sb_readonly=0)
Dec 06 07:18:33 compute-1 ovn_controller[130279]: 2025-12-06T07:18:33Z|00290|binding|INFO|Setting lport 34bdfa57-1694-46e3-8ea7-7493d4453761 down in Southbound
Dec 06 07:18:33 compute-1 ovn_controller[130279]: 2025-12-06T07:18:33Z|00291|binding|INFO|Removing iface tap34bdfa57-16 ovn-installed in OVS
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.088 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.087 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[123869b4-c738-4bc1-a874-9e3a9da4dcfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-1 kernel: tap50940ab5-b0 (unregistering): left promiscuous mode
Dec 06 07:18:33 compute-1 NetworkManager[49031]: <info>  [1765005513.1020] device (tap50940ab5-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.103 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.105 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:aa:1f 10.100.0.3'], port_security=['fa:16:3e:2d:aa:1f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd9e82d9c-ae9d-45de-b1bc-1410d9df84a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=34bdfa57-1694-46e3-8ea7-7493d4453761) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.110 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.119 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ae20acd4-42ed-4f32-81de-ef8b12210b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.122 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7523eff5-a089-4096-9a2d-a94171884c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.150 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc2f78e-688b-4870-a8e6-ed4bcbdea1ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-1 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Dec 06 07:18:33 compute-1 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004a.scope: Consumed 17.283s CPU time.
Dec 06 07:18:33 compute-1 systemd-machined[190302]: Machine qemu-34-instance-0000004a terminated.
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.167 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7341eaa8-ac10-4bd1-83aa-16ae744955a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 784, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 784, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571130, 'reachable_time': 32862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253464, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.182 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f992e4-4954-47d7-8a97-144dfcdc36d3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571140, 'tstamp': 571140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253465, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571143, 'tstamp': 571143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253465, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.184 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.186 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.195 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.195 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.196 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.196 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.197 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.198 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 34bdfa57-1694-46e3-8ea7-7493d4453761 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 unbound from our chassis
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.200 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61a21643-77ba-4a09-8184-10dc4bd52b26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.200 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f1d448-71b1-4fa5-9d20-e91b9ce38617]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.201 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 namespace which is not needed anymore
Dec 06 07:18:33 compute-1 NetworkManager[49031]: <info>  [1765005513.2069] manager: (tape7c11aa4-db): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Dec 06 07:18:33 compute-1 NetworkManager[49031]: <info>  [1765005513.2219] manager: (tap34bdfa57-16): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.252 226109 INFO nova.virt.libvirt.driver [-] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Instance destroyed successfully.
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.252 226109 DEBUG nova.objects.instance [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'resources' on Instance uuid d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.270 226109 DEBUG nova.virt.libvirt.vif [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.270 226109 DEBUG nova.network.os_vif_util [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.271 226109 DEBUG nova.network.os_vif_util [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:ba:6e,bridge_name='br-int',has_traffic_filtering=True,id=e7c11aa4-dbf4-473c-b956-2063ef2a13a2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c11aa4-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.271 226109 DEBUG os_vif [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:ba:6e,bridge_name='br-int',has_traffic_filtering=True,id=e7c11aa4-dbf4-473c-b956-2063ef2a13a2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c11aa4-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.273 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.273 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7c11aa4-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.275 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.277 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.283 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.284 226109 INFO os_vif [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:ba:6e,bridge_name='br-int',has_traffic_filtering=True,id=e7c11aa4-dbf4-473c-b956-2063ef2a13a2,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c11aa4-db')
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.285 226109 DEBUG nova.virt.libvirt.vif [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.286 226109 DEBUG nova.network.os_vif_util [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.286 226109 DEBUG nova.network.os_vif_util [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:aa:1f,bridge_name='br-int',has_traffic_filtering=True,id=34bdfa57-1694-46e3-8ea7-7493d4453761,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34bdfa57-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.286 226109 DEBUG os_vif [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:aa:1f,bridge_name='br-int',has_traffic_filtering=True,id=34bdfa57-1694-46e3-8ea7-7493d4453761,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34bdfa57-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.288 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.288 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34bdfa57-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.290 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.291 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.295 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.301 226109 INFO os_vif [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:aa:1f,bridge_name='br-int',has_traffic_filtering=True,id=34bdfa57-1694-46e3-8ea7-7493d4453761,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34bdfa57-16')
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.302 226109 DEBUG nova.virt.libvirt.vif [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1375778431',display_name='tempest-AttachInterfacesTestJSON-server-1375778431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1375778431',id=74,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7bbbR+qeTZhCTqnJ3dn41UWDL8wXBahu7vOb2VTDDloA6T0od70bcBr1VvfMsbygXVYaCnGqPd5XjmZBPUOJCRm5rRGzfbeLnp4nESot4CURb98CHS86koNZpo2djuGQ==',key_name='tempest-keypair-1250284815',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-eabb62ze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=d9e82d9c-ae9d-45de-b1bc-1410d9df84a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "address": "fa:16:3e:e9:d5:ef", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50940ab5-b0", "ovs_interfaceid": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.303 226109 DEBUG nova.network.os_vif_util [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "address": "fa:16:3e:e9:d5:ef", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50940ab5-b0", "ovs_interfaceid": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.303 226109 DEBUG nova.network.os_vif_util [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d5:ef,bridge_name='br-int',has_traffic_filtering=True,id=50940ab5-b0bd-40c1-a79f-2e5633b08667,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50940ab5-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.304 226109 DEBUG os_vif [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d5:ef,bridge_name='br-int',has_traffic_filtering=True,id=50940ab5-b0bd-40c1-a79f-2e5633b08667,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50940ab5-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.305 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.305 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50940ab5-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.307 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.308 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.311 226109 INFO os_vif [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d5:ef,bridge_name='br-int',has_traffic_filtering=True,id=50940ab5-b0bd-40c1-a79f-2e5633b08667,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50940ab5-b0')
Dec 06 07:18:33 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[252941]: [NOTICE]   (252945) : haproxy version is 2.8.14-c23fe91
Dec 06 07:18:33 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[252941]: [NOTICE]   (252945) : path to executable is /usr/sbin/haproxy
Dec 06 07:18:33 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[252941]: [WARNING]  (252945) : Exiting Master process...
Dec 06 07:18:33 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[252941]: [WARNING]  (252945) : Exiting Master process...
Dec 06 07:18:33 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[252941]: [ALERT]    (252945) : Current worker (252947) exited with code 143 (Terminated)
Dec 06 07:18:33 compute-1 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[252941]: [WARNING]  (252945) : All workers exited. Exiting... (0)
Dec 06 07:18:33 compute-1 systemd[1]: libpod-a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb.scope: Deactivated successfully.
Dec 06 07:18:33 compute-1 podman[253525]: 2025-12-06 07:18:33.604728789 +0000 UTC m=+0.308488982 container died a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:18:33 compute-1 ceph-mon[81689]: pgmap v1816: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.0 KiB/s wr, 115 op/s
Dec 06 07:18:33 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb-userdata-shm.mount: Deactivated successfully.
Dec 06 07:18:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-0a3a2b8da106c8a10aa0ead6cf9a7045fcee6c9e2fb3feb04a1668b6774b7307-merged.mount: Deactivated successfully.
Dec 06 07:18:33 compute-1 podman[253525]: 2025-12-06 07:18:33.872378257 +0000 UTC m=+0.576138430 container cleanup a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:18:33 compute-1 systemd[1]: libpod-conmon-a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb.scope: Deactivated successfully.
Dec 06 07:18:33 compute-1 podman[253575]: 2025-12-06 07:18:33.94545471 +0000 UTC m=+0.051400528 container remove a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.950 226109 DEBUG nova.compute.manager [req-f58e8dfc-6a25-46eb-820c-67d75ffc95c8 req-effbf928-cec0-4808-948b-b6bac852bb81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-unplugged-34bdfa57-1694-46e3-8ea7-7493d4453761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.951 226109 DEBUG oslo_concurrency.lockutils [req-f58e8dfc-6a25-46eb-820c-67d75ffc95c8 req-effbf928-cec0-4808-948b-b6bac852bb81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.951 226109 DEBUG oslo_concurrency.lockutils [req-f58e8dfc-6a25-46eb-820c-67d75ffc95c8 req-effbf928-cec0-4808-948b-b6bac852bb81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.952 226109 DEBUG oslo_concurrency.lockutils [req-f58e8dfc-6a25-46eb-820c-67d75ffc95c8 req-effbf928-cec0-4808-948b-b6bac852bb81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.952 226109 DEBUG nova.compute.manager [req-f58e8dfc-6a25-46eb-820c-67d75ffc95c8 req-effbf928-cec0-4808-948b-b6bac852bb81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-unplugged-34bdfa57-1694-46e3-8ea7-7493d4453761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.952 226109 DEBUG nova.compute.manager [req-f58e8dfc-6a25-46eb-820c-67d75ffc95c8 req-effbf928-cec0-4808-948b-b6bac852bb81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-unplugged-34bdfa57-1694-46e3-8ea7-7493d4453761 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.951 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[11e43f95-9d42-4a5c-b0ec-40d9a2737f8c]: (4, ('Sat Dec  6 07:18:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 (a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb)\na056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb\nSat Dec  6 07:18:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 (a056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb)\na056ec39fd0b73c10a6457f4db7132b4bd4f21e8a9e0e427f89158fea9ad23eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.954 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b825ccee-c847-4610-9cc4-751a986a4112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:33.955 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:33 compute-1 nova_compute[226101]: 2025-12-06 07:18:33.990 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-1 kernel: tap61a21643-70: left promiscuous mode
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.002 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:34.006 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[03685a53-8590-4b05-9f63-136318701c48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:34.021 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6b3c0d-c64a-4514-9128-9b3ef563d44b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:34.023 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[49383705-9f81-49c3-84f0-2009e3a1da28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:34.038 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4e4cc488-89a0-46ed-9319-d5292eba12aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571123, 'reachable_time': 37588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253591, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:34.040 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:18:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:34.040 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a4dbee-eaea-45ec-bcf6-f8e74bf30aa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:34 compute-1 systemd[1]: run-netns-ovnmeta\x2d61a21643\x2d77ba\x2d4a09\x2d8184\x2d10dc4bd52b26.mount: Deactivated successfully.
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.195 226109 INFO nova.virt.libvirt.driver [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Deleting instance files /var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_del
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.196 226109 INFO nova.virt.libvirt.driver [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Deletion of /var/lib/nova/instances/d9e82d9c-ae9d-45de-b1bc-1410d9df84a9_del complete
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.286 226109 INFO nova.compute.manager [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Took 1.30 seconds to destroy the instance on the hypervisor.
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.287 226109 DEBUG oslo.service.loopingcall [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.287 226109 DEBUG nova.compute.manager [-] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.287 226109 DEBUG nova.network.neutron [-] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.429 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:34.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:34.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.757 226109 DEBUG nova.compute.manager [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-deleted-50940ab5-b0bd-40c1-a79f-2e5633b08667 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.757 226109 INFO nova.compute.manager [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Neutron deleted interface 50940ab5-b0bd-40c1-a79f-2e5633b08667; detaching it from the instance and deleting it from the info cache
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.758 226109 DEBUG nova.network.neutron [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.797 226109 DEBUG nova.compute.manager [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Detach interface failed, port_id=50940ab5-b0bd-40c1-a79f-2e5633b08667, reason: Instance d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.798 226109 DEBUG nova.compute.manager [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-unplugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.798 226109 DEBUG oslo_concurrency.lockutils [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.798 226109 DEBUG oslo_concurrency.lockutils [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.799 226109 DEBUG oslo_concurrency.lockutils [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.799 226109 DEBUG nova.compute.manager [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-unplugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.799 226109 DEBUG nova.compute.manager [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-unplugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.799 226109 DEBUG nova.compute.manager [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-plugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.800 226109 DEBUG oslo_concurrency.lockutils [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.800 226109 DEBUG oslo_concurrency.lockutils [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.800 226109 DEBUG oslo_concurrency.lockutils [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.800 226109 DEBUG nova.compute.manager [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-plugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:34 compute-1 nova_compute[226101]: 2025-12-06 07:18:34.800 226109 WARNING nova.compute.manager [req-d08a5a1e-e0ad-43c4-8587-40a1b07f6a3a req-52cb6be2-d5f0-4c07-b8cf-7769fbaf639f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received unexpected event network-vif-plugged-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 for instance with vm_state active and task_state deleting.
Dec 06 07:18:35 compute-1 nova_compute[226101]: 2025-12-06 07:18:35.363 226109 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port 50940ab5-b0bd-40c1-a79f-2e5633b08667 could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Dec 06 07:18:35 compute-1 nova_compute[226101]: 2025-12-06 07:18:35.363 226109 DEBUG nova.network.neutron [-] Unable to show port 50940ab5-b0bd-40c1-a79f-2e5633b08667 as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666
Dec 06 07:18:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:35 compute-1 ceph-mon[81689]: pgmap v1817: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.0 KiB/s wr, 115 op/s
Dec 06 07:18:36 compute-1 nova_compute[226101]: 2025-12-06 07:18:36.172 226109 DEBUG nova.compute.manager [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-plugged-34bdfa57-1694-46e3-8ea7-7493d4453761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:36 compute-1 nova_compute[226101]: 2025-12-06 07:18:36.172 226109 DEBUG oslo_concurrency.lockutils [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:36 compute-1 nova_compute[226101]: 2025-12-06 07:18:36.173 226109 DEBUG oslo_concurrency.lockutils [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:36 compute-1 nova_compute[226101]: 2025-12-06 07:18:36.173 226109 DEBUG oslo_concurrency.lockutils [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:36 compute-1 nova_compute[226101]: 2025-12-06 07:18:36.173 226109 DEBUG nova.compute.manager [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] No waiting events found dispatching network-vif-plugged-34bdfa57-1694-46e3-8ea7-7493d4453761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:36 compute-1 nova_compute[226101]: 2025-12-06 07:18:36.173 226109 WARNING nova.compute.manager [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received unexpected event network-vif-plugged-34bdfa57-1694-46e3-8ea7-7493d4453761 for instance with vm_state active and task_state deleting.
Dec 06 07:18:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:36.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:36.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:36 compute-1 ceph-mon[81689]: pgmap v1818: 305 pgs: 305 active+clean; 209 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.9 KiB/s wr, 103 op/s
Dec 06 07:18:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 e255: 3 total, 3 up, 3 in
Dec 06 07:18:37 compute-1 nova_compute[226101]: 2025-12-06 07:18:37.899 226109 DEBUG nova.network.neutron [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [{"id": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "address": "fa:16:3e:17:ba:6e", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c11aa4-db", "ovs_interfaceid": "e7c11aa4-dbf4-473c-b956-2063ef2a13a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "34bdfa57-1694-46e3-8ea7-7493d4453761", "address": "fa:16:3e:2d:aa:1f", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34bdfa57-16", "ovs_interfaceid": "34bdfa57-1694-46e3-8ea7-7493d4453761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "address": "fa:16:3e:e9:d5:ef", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50940ab5-b0", "ovs_interfaceid": "50940ab5-b0bd-40c1-a79f-2e5633b08667", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:37 compute-1 nova_compute[226101]: 2025-12-06 07:18:37.918 226109 DEBUG nova.network.neutron [-] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:37 compute-1 nova_compute[226101]: 2025-12-06 07:18:37.920 226109 DEBUG oslo_concurrency.lockutils [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:18:37 compute-1 nova_compute[226101]: 2025-12-06 07:18:37.957 226109 DEBUG oslo_concurrency.lockutils [None req-033f9d72-d90a-4d61-9dbe-e315c9aa3301 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-d9e82d9c-ae9d-45de-b1bc-1410d9df84a9-a49d82f2-78d3-4252-867e-1f2512bab05a" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 7.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:37 compute-1 nova_compute[226101]: 2025-12-06 07:18:37.958 226109 INFO nova.compute.manager [-] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Took 3.67 seconds to deallocate network for instance.
Dec 06 07:18:37 compute-1 nova_compute[226101]: 2025-12-06 07:18:37.997 226109 DEBUG oslo_concurrency.lockutils [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:37 compute-1 nova_compute[226101]: 2025-12-06 07:18:37.997 226109 DEBUG oslo_concurrency.lockutils [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:38 compute-1 nova_compute[226101]: 2025-12-06 07:18:38.053 226109 DEBUG oslo_concurrency.processutils [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:38 compute-1 nova_compute[226101]: 2025-12-06 07:18:38.309 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:38 compute-1 nova_compute[226101]: 2025-12-06 07:18:38.338 226109 DEBUG nova.compute.manager [req-96f35f33-d2aa-47e7-96e4-e47194b0c0cf req-ea3a13e5-3f0a-4364-b5ab-50341e219f6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-deleted-34bdfa57-1694-46e3-8ea7-7493d4453761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:38 compute-1 nova_compute[226101]: 2025-12-06 07:18:38.339 226109 DEBUG nova.compute.manager [req-96f35f33-d2aa-47e7-96e4-e47194b0c0cf req-ea3a13e5-3f0a-4364-b5ab-50341e219f6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Received event network-vif-deleted-e7c11aa4-dbf4-473c-b956-2063ef2a13a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:38 compute-1 ceph-mon[81689]: osdmap e255: 3 total, 3 up, 3 in
Dec 06 07:18:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:18:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4221516505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:38 compute-1 nova_compute[226101]: 2025-12-06 07:18:38.600 226109 DEBUG oslo_concurrency.processutils [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:38 compute-1 nova_compute[226101]: 2025-12-06 07:18:38.606 226109 DEBUG nova.compute.provider_tree [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:18:38 compute-1 nova_compute[226101]: 2025-12-06 07:18:38.630 226109 DEBUG nova.scheduler.client.report [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:18:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:38.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:38.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:38 compute-1 nova_compute[226101]: 2025-12-06 07:18:38.654 226109 DEBUG oslo_concurrency.lockutils [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:38 compute-1 nova_compute[226101]: 2025-12-06 07:18:38.698 226109 INFO nova.scheduler.client.report [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Deleted allocations for instance d9e82d9c-ae9d-45de-b1bc-1410d9df84a9
Dec 06 07:18:38 compute-1 nova_compute[226101]: 2025-12-06 07:18:38.814 226109 DEBUG oslo_concurrency.lockutils [None req-214b5b3c-03df-413f-b450-6dc5afa5899b 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "d9e82d9c-ae9d-45de-b1bc-1410d9df84a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:39 compute-1 nova_compute[226101]: 2025-12-06 07:18:39.430 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:39 compute-1 ceph-mon[81689]: pgmap v1820: 305 pgs: 305 active+clean; 121 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 KiB/s wr, 108 op/s
Dec 06 07:18:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4221516505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:18:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792564145' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:18:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:18:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792564145' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:18:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:40.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:40.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2792564145' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:18:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2792564145' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:18:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:18:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:42 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:42.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:42.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:42 compute-1 ceph-mon[81689]: pgmap v1821: 305 pgs: 305 active+clean; 121 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.7 KiB/s wr, 102 op/s
Dec 06 07:18:43 compute-1 nova_compute[226101]: 2025-12-06 07:18:43.313 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:43 compute-1 sudo[253614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:43 compute-1 sudo[253614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-1 sudo[253614]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:43 compute-1 sudo[253639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:18:43 compute-1 sudo[253639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-1 sudo[253639]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:43 compute-1 sudo[253664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:43 compute-1 sudo[253664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-1 sudo[253664]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:43 compute-1 sudo[253689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 07:18:43 compute-1 sudo[253689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:18:43 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2349873373' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:18:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:18:43 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2349873373' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:18:43 compute-1 ceph-mon[81689]: pgmap v1822: 305 pgs: 305 active+clean; 167 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec 06 07:18:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2349873373' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:18:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2349873373' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:18:43 compute-1 podman[253785]: 2025-12-06 07:18:43.984912474 +0000 UTC m=+0.063535428 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:18:44 compute-1 podman[253785]: 2025-12-06 07:18:44.080881315 +0000 UTC m=+0.159504219 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:18:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:44.382 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:18:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:44.382 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:18:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:18:44.383 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:44 compute-1 nova_compute[226101]: 2025-12-06 07:18:44.383 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:44 compute-1 nova_compute[226101]: 2025-12-06 07:18:44.431 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:44 compute-1 sudo[253689]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:44 compute-1 sudo[253906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:44 compute-1 sudo[253906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:44 compute-1 sudo[253906]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:44.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:44.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:44 compute-1 sudo[253931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:18:44 compute-1 sudo[253931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:44 compute-1 sudo[253931]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:44 compute-1 sudo[253956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:44 compute-1 sudo[253956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:44 compute-1 sudo[253956]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:44 compute-1 sudo[253981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:18:44 compute-1 sudo[253981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:44 compute-1 ceph-mon[81689]: pgmap v1823: 305 pgs: 305 active+clean; 167 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec 06 07:18:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:45 compute-1 sudo[253981]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.501 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.502 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.532 226109 DEBUG nova.compute.manager [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:18:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.655 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.656 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.662 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.662 226109 INFO nova.compute.claims [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.832 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Acquiring lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.833 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.853 226109 DEBUG nova.compute.manager [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.857 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:45 compute-1 nova_compute[226101]: 2025-12-06 07:18:45.955 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:18:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:18:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:18:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1916357225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:46 compute-1 nova_compute[226101]: 2025-12-06 07:18:46.296 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:46 compute-1 nova_compute[226101]: 2025-12-06 07:18:46.302 226109 DEBUG nova.compute.provider_tree [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:18:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:46.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:47 compute-1 ceph-mon[81689]: pgmap v1824: 305 pgs: 305 active+clean; 146 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Dec 06 07:18:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3859210869' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:18:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3859210869' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:18:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:18:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:18:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:18:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1916357225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.133 226109 DEBUG nova.scheduler.client.report [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.190 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.190 226109 DEBUG nova.compute.manager [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.193 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.199 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.200 226109 INFO nova.compute.claims [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.251 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005513.2502322, d9e82d9c-ae9d-45de-b1bc-1410d9df84a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.252 226109 INFO nova.compute.manager [-] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] VM Stopped (Lifecycle Event)
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.276 226109 DEBUG nova.compute.manager [None req-dfa00773-09c8-4250-ae8c-27dacb1d3869 - - - - - -] [instance: d9e82d9c-ae9d-45de-b1bc-1410d9df84a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.316 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.490 226109 DEBUG nova.compute.manager [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.490 226109 DEBUG nova.network.neutron [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.541 226109 INFO nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.565 226109 DEBUG nova.compute.manager [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.646 226109 DEBUG oslo_concurrency.processutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:48.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:48.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.765 226109 DEBUG nova.compute.manager [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.767 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.767 226109 INFO nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Creating image(s)
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.793 226109 DEBUG nova.storage.rbd_utils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.819 226109 DEBUG nova.storage.rbd_utils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.848 226109 DEBUG nova.storage.rbd_utils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.852 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.918 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.920 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.921 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.921 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.950 226109 DEBUG nova.storage.rbd_utils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:18:48 compute-1 nova_compute[226101]: 2025-12-06 07:18:48.953 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:49 compute-1 nova_compute[226101]: 2025-12-06 07:18:49.033 226109 DEBUG nova.policy [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '627c36bb63534e52a4b1d5adf47e6ffd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '929e2be1488d4b80b7ad8946093a6abe', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
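[note] The policy failure above is expected for a non-admin token: network:attach_external_network defaults to an admin-only rule, and the credentials carry only member/reader roles. A minimal oslo.policy sketch of how such a check evaluates (the check string role:admin is assumed for illustration; Nova's real default lives in nova.policies):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "role:admin"))  # assumed rule

    creds = {"roles": ["member", "reader"],
             "project_id": "929e2be1488d4b80b7ad8946093a6abe"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # -> False, matching the "Policy check ... failed" line above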
Dec 06 07:18:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:18:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/368597711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:49 compute-1 nova_compute[226101]: 2025-12-06 07:18:49.125 226109 DEBUG oslo_concurrency.processutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
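[note] Nova shells out to ceph df to size the RBD-backed disk pool; the JSON output carries the cluster-wide totals the resource tracker reports to Placement. A sketch of the same call and the fields of interest (key names as emitted by current Ceph releases):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    # The pgmap lines nearby show the same totals: ~21 GiB total, ~20 GiB avail.
    print(stats["total_bytes"], stats["total_avail_bytes"])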
Dec 06 07:18:49 compute-1 nova_compute[226101]: 2025-12-06 07:18:49.131 226109 DEBUG nova.compute.provider_tree [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:18:49 compute-1 ceph-mon[81689]: pgmap v1825: 305 pgs: 305 active+clean; 88 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 2.0 MiB/s wr, 72 op/s
Dec 06 07:18:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/368597711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:49 compute-1 nova_compute[226101]: 2025-12-06 07:18:49.432 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:49 compute-1 nova_compute[226101]: 2025-12-06 07:18:49.464 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:49 compute-1 nova_compute[226101]: 2025-12-06 07:18:49.537 226109 DEBUG nova.storage.rbd_utils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] resizing rbd image 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
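[note] The span from 07:18:48.953 to 07:18:49.537 is the image-to-RBD flow: the flat base file is imported into the vms pool as a format-2 image, then grown to the flavor's 1 GiB root disk. Nova performs the resize through the librbd Python binding (rbd_utils.resize); the equivalent CLI steps, with names taken from the log, would look like:

    import subprocess

    POOL = "vms"
    IMG = "67f0eb12-5070-4bc1-846b-09eb25dec88d_disk"
    BASE = "/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef"

    # 1. Import the cached base file as a format-2 (layering-capable) image.
    subprocess.run(["rbd", "import", "--pool", POOL, BASE, IMG,
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)

    # 2. Grow it to the flavor root disk: 1073741824 bytes = 1024 MB.
    subprocess.run(["rbd", "resize", "--pool", POOL, "--image", IMG,
                    "--size", "1024", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)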
Dec 06 07:18:49 compute-1 nova_compute[226101]: 2025-12-06 07:18:49.799 226109 DEBUG nova.objects.instance [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'migration_context' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:50.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:50.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
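[note] The radosgw beast lines that recur every two seconds (anonymous HEAD / from 192.168.122.100 and .102, status 200, zero bytes) are load-balancer-style health probes, not user traffic. The access-log format is regular enough to parse mechanically; a sketch, using one line from the log as input:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous '
            '[06/Dec/2025:07:18:50.652 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000026s')
    m = BEAST.search(line)
    print(m["ip"], m["req"], m["status"], m["latency"])
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.001000026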
Dec 06 07:18:50 compute-1 nova_compute[226101]: 2025-12-06 07:18:50.852 226109 DEBUG nova.scheduler.client.report [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
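[note] The inventory dict above is what Placement uses for admission control; the schedulable amount per resource class is (total - reserved) * allocation_ratio. Worked out for this host, with the values from the log line:

    inv = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)
    # VCPU 32.0        (8 cores oversubscribed 4x)
    # MEMORY_MB 7168.0 (512 MB held back for the host)
    # DISK_GB 17.1     (disk deliberately undersubscribed at 0.9)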
Dec 06 07:18:50 compute-1 nova_compute[226101]: 2025-12-06 07:18:50.892 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:18:50 compute-1 nova_compute[226101]: 2025-12-06 07:18:50.892 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Ensure instance console log exists: /var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:18:50 compute-1 nova_compute[226101]: 2025-12-06 07:18:50.893 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:50 compute-1 nova_compute[226101]: 2025-12-06 07:18:50.893 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:50 compute-1 nova_compute[226101]: 2025-12-06 07:18:50.893 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:50 compute-1 nova_compute[226101]: 2025-12-06 07:18:50.906 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:50 compute-1 nova_compute[226101]: 2025-12-06 07:18:50.907 226109 DEBUG nova.compute.manager [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:18:50 compute-1 nova_compute[226101]: 2025-12-06 07:18:50.986 226109 DEBUG nova.compute.manager [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:18:50 compute-1 nova_compute[226101]: 2025-12-06 07:18:50.987 226109 DEBUG nova.network.neutron [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:18:51 compute-1 nova_compute[226101]: 2025-12-06 07:18:51.007 226109 INFO nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:18:51 compute-1 podman[254247]: 2025-12-06 07:18:51.106317213 +0000 UTC m=+0.090305241 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 07:18:51 compute-1 nova_compute[226101]: 2025-12-06 07:18:51.268 226109 DEBUG nova.compute.manager [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:18:51 compute-1 ceph-mon[81689]: pgmap v1826: 305 pgs: 305 active+clean; 88 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.108 226109 INFO nova.virt.block_device [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Booting with volume b6c10720-e1c1-4fdc-b2cd-94ad0567f0a4 at /dev/vda
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.367 226109 DEBUG nova.policy [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f4715802a0b24079b4ca157db07e1d75', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6c6696bf390d4cc5bfd9852d5b264b5a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.438 226109 DEBUG os_brick.utils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.439 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.450 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.450 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[4975615e-ccfa-4532-b75d-6bed5a391fb6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.451 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.460 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.460 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddc635c-89b1-42b8-8ffb-cb8bf77eff97]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.461 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.470 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.470 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa8d805-3059-43c3-abb5-a2acad04581e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.472 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[07940a6f-af76-4e62-b000-78cb9d837256]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.473 226109 DEBUG oslo_concurrency.processutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.497 226109 DEBUG oslo_concurrency.processutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.499 226109 DEBUG os_brick.initiator.connectors.lightos [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.499 226109 DEBUG os_brick.initiator.connectors.lightos [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.500 226109 DEBUG os_brick.initiator.connectors.lightos [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.500 226109 DEBUG os_brick.utils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
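[note] The privsep exchanges from 07:18:52.439 to 07:18:52.500 are os-brick assembling connector properties: multipath daemon state, the iSCSI initiator IQN, the root filesystem source, and the NVMe host NQN. A non-privileged sketch of a subset (os-brick runs these under privsep as root; the file and commands are the ones shown in the log):

    import subprocess

    def connector_properties():
        props = {}
        # iSCSI initiator name, e.g. iqn.1994-05.com.redhat:7842346547e0
        with open("/etc/iscsi/initiatorname.iscsi") as f:
            for line in f:
                if line.startswith("InitiatorName="):
                    props["initiator"] = line.strip().split("=", 1)[1]
        # multipathd reachable? (returncode 0 in the log)
        mp = subprocess.run(["multipathd", "show", "status"],
                            capture_output=True, text=True)
        props["multipath"] = mp.returncode == 0
        # Root filesystem source; 'overlay' on this containerized host.
        props["root_source"] = subprocess.run(
            ["findmnt", "-v", "/", "-n", "-o", "SOURCE"],
            capture_output=True, text=True).stdout.strip()
        return props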
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.500 226109 DEBUG nova.virt.block_device [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Updating existing volume attachment record: 2493396f-2b2d-40f4-92f6-42d8b6305e0d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:18:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:52.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:52.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:52 compute-1 nova_compute[226101]: 2025-12-06 07:18:52.789 226109 DEBUG nova.network.neutron [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Successfully created port: b26720ee-8d01-41bb-ad74-31e572887a36 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:18:53 compute-1 sudo[254281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:53 compute-1 sudo[254281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:53 compute-1 sudo[254281]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:53 compute-1 sudo[254306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:18:53 compute-1 sudo[254306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:53 compute-1 sudo[254306]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:53 compute-1 nova_compute[226101]: 2025-12-06 07:18:53.320 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:18:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1106685511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.244 226109 DEBUG nova.compute.manager [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.246 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.246 226109 INFO nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Creating image(s)
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.247 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.247 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Ensure instance console log exists: /var/lib/nova/instances/5f718fcb-597c-47a7-b738-1cd8d4b7b0de/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.248 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.249 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.249 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.277 226109 DEBUG nova.network.neutron [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Successfully updated port: b26720ee-8d01-41bb-ad74-31e572887a36 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.306 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.307 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.307 226109 DEBUG nova.network.neutron [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.406 226109 DEBUG nova.network.neutron [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Successfully created port: c94f5eb3-41d4-4e79-a071-3b5b801e2644 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.433 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.445 226109 DEBUG nova.compute.manager [req-b7a0f16c-4c4b-46fe-a419-db825d61d734 req-f661826f-2bcc-44de-a913-983e9d6f1083 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-changed-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.446 226109 DEBUG nova.compute.manager [req-b7a0f16c-4c4b-46fe-a419-db825d61d734 req-f661826f-2bcc-44de-a913-983e9d6f1083 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Refreshing instance network info cache due to event network-changed-b26720ee-8d01-41bb-ad74-31e572887a36. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.446 226109 DEBUG oslo_concurrency.lockutils [req-b7a0f16c-4c4b-46fe-a419-db825d61d734 req-f661826f-2bcc-44de-a913-983e9d6f1083 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.507 226109 DEBUG nova.network.neutron [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:18:54 compute-1 ceph-mon[81689]: pgmap v1827: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 3.5 MiB/s wr, 90 op/s
Dec 06 07:18:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:54.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:18:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:54.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:18:54 compute-1 nova_compute[226101]: 2025-12-06 07:18:54.883 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:55 compute-1 nova_compute[226101]: 2025-12-06 07:18:55.136 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:55 compute-1 nova_compute[226101]: 2025-12-06 07:18:55.608 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:55 compute-1 nova_compute[226101]: 2025-12-06 07:18:55.609 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:18:55 compute-1 nova_compute[226101]: 2025-12-06 07:18:55.609 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
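[note] ComputeManager's periodic tasks run on an oslo.service loop; _reclaim_queued_deletes short-circuits here because reclaim_instance_interval is unset (<= 0). A sketch of the decorator pattern (the spacing value is assumed, and reclaim_instance_interval is a Nova option not registered in this snippet):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class ComputeTasks(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # assumed cadence
        def _reclaim_queued_deletes(self, context):
            if self.conf.reclaim_instance_interval <= 0:
                return  # produces the "skipping..." debug line above

    tasks = ComputeTasks(cfg.CONF)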
Dec 06 07:18:56 compute-1 podman[254333]: 2025-12-06 07:18:56.080299258 +0000 UTC m=+0.051367587 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:18:56 compute-1 podman[254332]: 2025-12-06 07:18:56.087069541 +0000 UTC m=+0.057772581 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 07:18:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1106685511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:56 compute-1 ceph-mon[81689]: pgmap v1828: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Dec 06 07:18:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1674897031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.637 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.638 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.638 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.638 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.638 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:56.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:56.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.729 226109 DEBUG nova.network.neutron [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating instance_info_cache with network_info: [{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
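[note] The network_info blob above is plain JSON, so the interesting fields (port ID, MAC, fixed IP, MTU) pull out directly. A sketch against a trimmed copy of the logged structure:

    import json

    raw = '''[{"id": "b26720ee-8d01-41bb-ad74-31e572887a36",
               "address": "fa:16:3e:a2:d1:1e",
               "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                        "ips": [{"address": "10.100.0.14"}]}],
                           "meta": {"mtu": 1442}}}]'''
    for vif in json.loads(raw):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], vif["address"], ip["address"])
    # -> b26720ee-8d01-41bb-ad74-31e572887a36 fa:16:3e:a2:d1:1e 10.100.0.14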
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.757 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.757 226109 DEBUG nova.compute.manager [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance network_info: |[{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.758 226109 DEBUG oslo_concurrency.lockutils [req-b7a0f16c-4c4b-46fe-a419-db825d61d734 req-f661826f-2bcc-44de-a913-983e9d6f1083 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.758 226109 DEBUG nova.network.neutron [req-b7a0f16c-4c4b-46fe-a419-db825d61d734 req-f661826f-2bcc-44de-a913-983e9d6f1083 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Refreshing network info cache for port b26720ee-8d01-41bb-ad74-31e572887a36 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.761 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Start _get_guest_xml network_info=[{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.765 226109 WARNING nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.791 226109 DEBUG nova.virt.libvirt.host [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.792 226109 DEBUG nova.virt.libvirt.host [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.812 226109 DEBUG nova.virt.libvirt.host [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.813 226109 DEBUG nova.virt.libvirt.host [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.814 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.814 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.814 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.815 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.815 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.815 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.815 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.816 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.816 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.816 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.816 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.816 226109 DEBUG nova.virt.hardware [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
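[note] The nova.virt.hardware walk above starts from no constraints (flavor and image limits all 0:0:0, maxima 65536 per dimension) and, for the 1-vCPU m1.nano flavor, lands on the single topology 1 socket x 1 core x 1 thread. A simplified sketch of the enumeration step (Nova additionally orders results by preference):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) whose product is exactly the
        # vCPU count and which respects the per-dimension maxima.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log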
Dec 06 07:18:56 compute-1 nova_compute[226101]: 2025-12-06 07:18:56.820 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.130 226109 DEBUG nova.network.neutron [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Successfully updated port: c94f5eb3-41d4-4e79-a071-3b5b801e2644 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.164 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Acquiring lock "refresh_cache-5f718fcb-597c-47a7-b738-1cd8d4b7b0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.165 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Acquired lock "refresh_cache-5f718fcb-597c-47a7-b738-1cd8d4b7b0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.165 226109 DEBUG nova.network.neutron [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:18:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:18:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/740716915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:18:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/658088653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.324 226109 DEBUG nova.compute.manager [req-7ff6b0b5-abe8-491d-b070-d0fb6f3ee424 req-9e25a140-a7e7-43b7-9253-fc5fef33cd39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Received event network-changed-c94f5eb3-41d4-4e79-a071-3b5b801e2644 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.325 226109 DEBUG nova.compute.manager [req-7ff6b0b5-abe8-491d-b070-d0fb6f3ee424 req-9e25a140-a7e7-43b7-9253-fc5fef33cd39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Refreshing instance network info cache due to event network-changed-c94f5eb3-41d4-4e79-a071-3b5b801e2644. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.326 226109 DEBUG oslo_concurrency.lockutils [req-7ff6b0b5-abe8-491d-b070-d0fb6f3ee424 req-9e25a140-a7e7-43b7-9253-fc5fef33cd39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5f718fcb-597c-47a7-b738-1cd8d4b7b0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.341 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.703s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.496 226109 DEBUG nova.network.neutron [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.538 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.539 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4622MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.539 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.539 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.653 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 67f0eb12-5070-4bc1-846b-09eb25dec88d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.653 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 5f718fcb-597c-47a7-b738-1cd8d4b7b0de actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.654 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.654 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
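
The final resource view is arithmetic over the two placement allocations logged just above plus the host RAM reservation (512 MB, per the inventory reported at 07:18:59 below). A quick check of the logged numbers:

    # Reproduce the "Final resource view" figures from the per-instance
    # allocations logged above (two instances, 128 MB / 1 vCPU each).
    reserved_ram_mb = 512  # host reservation, see the inventory log below
    allocations = [
        {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},  # 67f0eb12-...
        {"MEMORY_MB": 128, "VCPU": 1},                # 5f718fcb-... (boot from volume)
    ]
    used_ram = reserved_ram_mb + sum(a["MEMORY_MB"] for a in allocations)
    used_vcpus = sum(a["VCPU"] for a in allocations)
    used_disk = sum(a.get("DISK_GB", 0) for a in allocations)
    print(used_ram, used_vcpus, used_disk)  # 768 2 1 -- matches the log line
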
Dec 06 07:18:57 compute-1 nova_compute[226101]: 2025-12-06 07:18:57.712 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:57 compute-1 ceph-mon[81689]: pgmap v1829: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.324 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.619 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.800s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.652 226109 DEBUG nova.storage.rbd_utils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
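
rbd_utils probes for an optional config-drive image and finds none. An equivalent probe from the CLI, driven from Python; the vms pool name is taken from the guest XML logged below, and a nonzero exit is how absence shows up:

    # Check for the config-drive image the way the absence message implies.
    import subprocess

    res = subprocess.run(
        ["rbd", "info", "vms/67f0eb12-5070-4bc1-846b-09eb25dec88d_disk.config",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True,
    )
    print("exists" if res.returncode == 0 else "does not exist")
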
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.656 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:58.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:18:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:58.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
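
The anonymous HEAD / requests from 192.168.122.100 and .102 look like load-balancer health probes against radosgw's beast frontend. A hypothetical equivalent probe; note the listener port is an assumption, since the beast lines do not record it:

    # Hypothetical health probe like the ones radosgw logs above.
    # Port 8080 is an assumption -- the log does not record the listener port.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.101", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the probes above got 200
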
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.804 226109 DEBUG nova.network.neutron [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Updating instance_info_cache with network_info: [{"id": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "address": "fa:16:3e:2c:cd:e3", "network": {"id": "bf3441f7-9585-4902-b159-32b41a1d162a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-150114568-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c6696bf390d4cc5bfd9852d5b264b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc94f5eb3-41", "ovs_interfaceid": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.836 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Releasing lock "refresh_cache-5f718fcb-597c-47a7-b738-1cd8d4b7b0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.836 226109 DEBUG nova.compute.manager [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Instance network_info: |[{"id": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "address": "fa:16:3e:2c:cd:e3", "network": {"id": "bf3441f7-9585-4902-b159-32b41a1d162a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-150114568-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c6696bf390d4cc5bfd9852d5b264b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc94f5eb3-41", "ovs_interfaceid": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.837 226109 DEBUG oslo_concurrency.lockutils [req-7ff6b0b5-abe8-491d-b070-d0fb6f3ee424 req-9e25a140-a7e7-43b7-9253-fc5fef33cd39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5f718fcb-597c-47a7-b738-1cd8d4b7b0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.837 226109 DEBUG nova.network.neutron [req-7ff6b0b5-abe8-491d-b070-d0fb6f3ee424 req-9e25a140-a7e7-43b7-9253-fc5fef33cd39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Refreshing network info cache for port c94f5eb3-41d4-4e79-a071-3b5b801e2644 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.840 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Start _get_guest_xml network_info=[{"id": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "address": "fa:16:3e:2c:cd:e3", "network": {"id": "bf3441f7-9585-4902-b159-32b41a1d162a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-150114568-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c6696bf390d4cc5bfd9852d5b264b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc94f5eb3-41", "ovs_interfaceid": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b6c10720-e1c1-4fdc-b2cd-94ad0567f0a4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b6c10720-e1c1-4fdc-b2cd-94ad0567f0a4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5f718fcb-597c-47a7-b738-1cd8d4b7b0de', 'attached_at': '', 'detached_at': '', 'volume_id': 'b6c10720-e1c1-4fdc-b2cd-94ad0567f0a4', 'serial': 'b6c10720-e1c1-4fdc-b2cd-94ad0567f0a4'}, 'mount_device': '/dev/vda', 'boot_index': 0, 'attachment_id': '2493396f-2b2d-40f4-92f6-42d8b6305e0d', 'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.844 226109 WARNING nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.851 226109 DEBUG nova.virt.libvirt.host [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.852 226109 DEBUG nova.virt.libvirt.host [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.856 226109 DEBUG nova.virt.libvirt.host [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.857 226109 DEBUG nova.virt.libvirt.host [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
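
host.py tries cgroups v1 first, finds no cpu controller, then succeeds via cgroups v2 (the RHEL 9 default). The v2 check amounts to reading one file; a stdlib sketch:

    # cgroups v2 exposes enabled controllers in a single file; the host in
    # this log has "cpu" there, hence "CPU controller found on host."
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        controllers = Path(root, "cgroup.controllers")
        return controllers.exists() and "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())
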
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.858 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.858 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.858 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.858 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.859 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.859 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.859 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.859 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.859 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.859 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.860 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.860 226109 DEBUG nova.virt.hardware [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:18:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:18:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2324004116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4153241235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/740716915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/658088653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:58 compute-1 ceph-mon[81689]: pgmap v1830: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Dec 06 07:18:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3302555264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.959 226109 DEBUG nova.storage.rbd_utils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] rbd image 5f718fcb-597c-47a7-b738-1cd8d4b7b0de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.963 226109 DEBUG oslo_concurrency.processutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:58 compute-1 nova_compute[226101]: 2025-12-06 07:18:58.993 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.000 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.018 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
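
Placement turns that inventory into schedulable capacity as (total - reserved) x allocation_ratio. With the values logged above:

    # Effective capacity placement computes from the inventory above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1 schedulable units
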
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.040 226109 DEBUG nova.network.neutron [req-b7a0f16c-4c4b-46fe-a419-db825d61d734 req-f661826f-2bcc-44de-a913-983e9d6f1083 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updated VIF entry in instance network info cache for port b26720ee-8d01-41bb-ad74-31e572887a36. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.041 226109 DEBUG nova.network.neutron [req-b7a0f16c-4c4b-46fe-a419-db825d61d734 req-f661826f-2bcc-44de-a913-983e9d6f1083 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating instance_info_cache with network_info: [{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.052 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.052 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.059 226109 DEBUG oslo_concurrency.lockutils [req-b7a0f16c-4c4b-46fe-a419-db825d61d734 req-f661826f-2bcc-44de-a913-983e9d6f1083 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:18:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:18:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2158792105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.104 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.105 226109 DEBUG nova.virt.libvirt.vif [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1140474602',display_name='tempest-ServerActionsTestJSON-server-1140474602',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1140474602',id=77,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-hcjpqf2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:18:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=67f0eb12-5070-4bc1-846b-09eb25dec88d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.106 226109 DEBUG nova.network.os_vif_util [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.107 226109 DEBUG nova.network.os_vif_util [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.108 226109 DEBUG nova.objects.instance [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.128 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <uuid>67f0eb12-5070-4bc1-846b-09eb25dec88d</uuid>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <name>instance-0000004d</name>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerActionsTestJSON-server-1140474602</nova:name>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:18:56</nova:creationTime>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:port uuid="b26720ee-8d01-41bb-ad74-31e572887a36">
Dec 06 07:18:59 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <system>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="serial">67f0eb12-5070-4bc1-846b-09eb25dec88d</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="uuid">67f0eb12-5070-4bc1-846b-09eb25dec88d</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </system>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <os>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </os>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <features>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </features>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/67f0eb12-5070-4bc1-846b-09eb25dec88d_disk">
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/67f0eb12-5070-4bc1-846b-09eb25dec88d_disk.config">
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:a2:d1:1e"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <target dev="tapb26720ee-8d"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d/console.log" append="off"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <video>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </video>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:18:59 compute-1 nova_compute[226101]: </domain>
Dec 06 07:18:59 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
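
The domain definition nova just emitted is plain libvirt XML; pulling the RBD sources back out shows both disks point at the vms pool across the three monitors. A self-contained sketch over an excerpt of the XML above:

    # Extract the rbd disk sources from (an excerpt of) the guest XML above.
    import xml.etree.ElementTree as ET

    guest_xml = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/67f0eb12-5070-4bc1-846b-09eb25dec88d_disk">
            <host name="192.168.122.100" port="6789"/>
            <host name="192.168.122.102" port="6789"/>
            <host name="192.168.122.101" port="6789"/>
          </source>
        </disk>
      </devices>
    </domain>"""

    domain = ET.fromstring(guest_xml)
    for disk in domain.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            print(src.get("name"), [h.get("name") for h in src.findall("host")])
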
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.129 226109 DEBUG nova.compute.manager [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Preparing to wait for external event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.130 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.130 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.130 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
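
Note the ordering here: nova registers its waiter for network-vif-plugged-b26720ee-... before it plugs the VIF, so Neutron's callback cannot fire ahead of a registered listener. The shape of that prepare-then-wait pattern with a plain threading.Event (an analogy, not nova's implementation; 300 s is nova's default vif_plugging_timeout):

    # Prepare-then-wait: register the waiter before triggering the action
    # that will eventually fire the event, as the log above shows.
    import threading

    vif_plugged = threading.Event()        # prepare_for_instance_event analogue
    # ... plug the VIF here; Neutron later reports network-vif-plugged ...
    vif_plugged.set()                      # stands in for the external event
    assert vif_plugged.wait(timeout=300)   # nova's vif_plugging_timeout default
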
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.131 226109 DEBUG nova.virt.libvirt.vif [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1140474602',display_name='tempest-ServerActionsTestJSON-server-1140474602',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1140474602',id=77,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-hcjpqf2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:18:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=67f0eb12-5070-4bc1-846b-09eb25dec88d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.131 226109 DEBUG nova.network.os_vif_util [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.132 226109 DEBUG nova.network.os_vif_util [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.132 226109 DEBUG os_vif [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.133 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.133 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.134 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.137 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.137 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb26720ee-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.138 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb26720ee-8d, col_values=(('external_ids', {'iface-id': 'b26720ee-8d01-41bb-ad74-31e572887a36', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:d1:1e', 'vm-uuid': '67f0eb12-5070-4bc1-846b-09eb25dec88d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.139 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:59 compute-1 NetworkManager[49031]: <info>  [1765005539.1412] manager: (tapb26720ee-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.142 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.147 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.148 226109 INFO os_vif [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d')
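The three OVSDB commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) are os-vif's ovs plugin driving the local ovsdb-server through ovsdbapp: ensure br-int exists, add the tap port, and stamp the Interface row with the iface-id/MAC/VM-UUID that ovn-controller will later match against. A minimal standalone sketch of the same transaction, assuming the default OVSDB socket path (os-vif actually takes it from configuration) and reusing the identifiers from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server; the socket path is an assumption.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same three commands that appear in the transaction log above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapb26720ee-8d', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapb26720ee-8d',
            ('external_ids', {
                'iface-id': 'b26720ee-8d01-41bb-ad74-31e572887a36',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:a2:d1:1e',
                'vm-uuid': '67f0eb12-5070-4bc1-846b-09eb25dec88d'})))

The "Transaction caused no change" reply to AddBridgeCommand is the may_exist=True short-circuit: br-int already exists, so only the port addition produces work.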
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.218 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.219 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.219 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No VIF found with MAC fa:16:3e:a2:d1:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.219 226109 INFO nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Using config drive
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.255 226109 DEBUG nova.storage.rbd_utils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:18:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:18:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2096741357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.435 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.437 226109 DEBUG oslo_concurrency.processutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
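The "ceph mon dump" call is how nova's RBD driver learns the monitor addresses that appear as <host> elements in the guest disk XML below. Roughly the same probe through oslo's processutils, with the exact flags from the log; treating the output as a JSON object with a "mons" list is an assumption about the command's output shape:

    import json
    from oslo_concurrency import processutils

    # Same command line as logged above.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print([m.get('public_addr') for m in json.loads(out)['mons']])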
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.464 226109 DEBUG nova.virt.libvirt.vif [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:18:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-1488925369',display_name='tempest-ServersTestBootFromVolume-server-1488925369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-1488925369',id=78,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTqnkSt2tkptxGb8j0FyIF7mhfC3g4nCTY0dmS1O9j2D4whvbbZzVjTaB1HV1FzYrZYaDqs4EAuECl1pd2rTCLOMkEdFzfpSZ2TAaIcLDXdfmlhSXmE1QqkQ96nlBQuBw==',key_name='tempest-keypair-1966242547',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c6696bf390d4cc5bfd9852d5b264b5a',ramdisk_id='',reservation_id='r-tfb2kdsn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1735484763',owner_user_name='tempest-ServersTestBootFromVolume-1735484763-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:18:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4715802a0b24079b4ca157db07e1d75',uuid=5f718fcb-597c-47a7-b738-1cd8d4b7b0de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "address": "fa:16:3e:2c:cd:e3", "network": {"id": "bf3441f7-9585-4902-b159-32b41a1d162a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-150114568-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c6696bf390d4cc5bfd9852d5b264b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc94f5eb3-41", "ovs_interfaceid": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.464 226109 DEBUG nova.network.os_vif_util [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Converting VIF {"id": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "address": "fa:16:3e:2c:cd:e3", "network": {"id": "bf3441f7-9585-4902-b159-32b41a1d162a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-150114568-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c6696bf390d4cc5bfd9852d5b264b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc94f5eb3-41", "ovs_interfaceid": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.465 226109 DEBUG nova.network.os_vif_util [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:cd:e3,bridge_name='br-int',has_traffic_filtering=True,id=c94f5eb3-41d4-4e79-a071-3b5b801e2644,network=Network(bf3441f7-9585-4902-b159-32b41a1d162a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc94f5eb3-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.466 226109 DEBUG nova.objects.instance [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f718fcb-597c-47a7-b738-1cd8d4b7b0de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.483 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <uuid>5f718fcb-597c-47a7-b738-1cd8d4b7b0de</uuid>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <name>instance-0000004e</name>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:name>tempest-ServersTestBootFromVolume-server-1488925369</nova:name>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:18:58</nova:creationTime>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:user uuid="f4715802a0b24079b4ca157db07e1d75">tempest-ServersTestBootFromVolume-1735484763-project-member</nova:user>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:project uuid="6c6696bf390d4cc5bfd9852d5b264b5a">tempest-ServersTestBootFromVolume-1735484763</nova:project>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <nova:port uuid="c94f5eb3-41d4-4e79-a071-3b5b801e2644">
Dec 06 07:18:59 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <system>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="serial">5f718fcb-597c-47a7-b738-1cd8d4b7b0de</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="uuid">5f718fcb-597c-47a7-b738-1cd8d4b7b0de</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </system>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <os>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </os>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <features>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </features>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/5f718fcb-597c-47a7-b738-1cd8d4b7b0de_disk.config">
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <source protocol="rbd" name="volumes/volume-b6c10720-e1c1-4fdc-b2cd-94ad0567f0a4">
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </source>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:18:59 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <serial>b6c10720-e1c1-4fdc-b2cd-94ad0567f0a4</serial>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:2c:cd:e3"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <target dev="tapc94f5eb3-41"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/5f718fcb-597c-47a7-b738-1cd8d4b7b0de/console.log" append="off"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <video>
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </video>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:18:59 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:18:59 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:18:59 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:18:59 compute-1 nova_compute[226101]: </domain>
Dec 06 07:18:59 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
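The rendered XML describes a boot-from-volume guest: the config drive is an RBD-backed cdrom on the SATA bus (vms/5f718fcb-597c-47a7-b738-1cd8d4b7b0de_disk.config), the Cinder volume volumes/volume-b6c10720-e1c1-4fdc-b2cd-94ad0567f0a4 is the virtio root disk, and the ethernet interface targets the tap device plugged above. Nova hands this document to libvirt, which is what produces the "Started Virtual Machine" lines further down. A minimal sketch of that hand-off with libvirt-python, assuming the XML has been saved to a hypothetical local file:

    import libvirt

    # Hypothetical file holding the domain XML printed above.
    with open('instance-0000004e.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)   # persist the domain definition
    dom.create()                # boot it; systemd-machined then registers the machine
    print(dom.name(), dom.UUIDString())
    conn.close()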
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.484 226109 DEBUG nova.compute.manager [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Preparing to wait for external event network-vif-plugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.485 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Acquiring lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.485 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.485 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
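Before the spawn can finish, the manager registers a waiter for the network-vif-plugged event Neutron will send once OVN reports the port up; the "<uuid>-events" acquire/release pair above just serializes access to that event registry. The same oslo.concurrency primitive in isolation, with hypothetical registry plumbing:

    from oslo_concurrency import lockutils

    events = {}

    def prepare_for_instance_event(instance_uuid, name, tag):
        # Mirrors the "<uuid>-events" lock acquire/release seen in the log.
        with lockutils.lock(f'{instance_uuid}-events'):
            return events.setdefault((instance_uuid, name, tag), object())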
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.486 226109 DEBUG nova.virt.libvirt.vif [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:18:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-1488925369',display_name='tempest-ServersTestBootFromVolume-server-1488925369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-1488925369',id=78,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTqnkSt2tkptxGb8j0FyIF7mhfC3g4nCTY0dmS1O9j2D4whvbbZzVjTaB1HV1FzYrZYaDqs4EAuECl1pd2rTCLOMkEdFzfpSZ2TAaIcLDXdfmlhSXmE1QqkQ96nlBQuBw==',key_name='tempest-keypair-1966242547',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c6696bf390d4cc5bfd9852d5b264b5a',ramdisk_id='',reservation_id='r-tfb2kdsn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1735484763',owner_user_name='tempest-ServersTestBootFromVolume-1735484763-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:18:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4715802a0b24079b4ca157db07e1d75',uuid=5f718fcb-597c-47a7-b738-1cd8d4b7b0de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "address": "fa:16:3e:2c:cd:e3", "network": {"id": "bf3441f7-9585-4902-b159-32b41a1d162a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-150114568-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c6696bf390d4cc5bfd9852d5b264b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc94f5eb3-41", "ovs_interfaceid": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.487 226109 DEBUG nova.network.os_vif_util [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Converting VIF {"id": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "address": "fa:16:3e:2c:cd:e3", "network": {"id": "bf3441f7-9585-4902-b159-32b41a1d162a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-150114568-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c6696bf390d4cc5bfd9852d5b264b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc94f5eb3-41", "ovs_interfaceid": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.487 226109 DEBUG nova.network.os_vif_util [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:cd:e3,bridge_name='br-int',has_traffic_filtering=True,id=c94f5eb3-41d4-4e79-a071-3b5b801e2644,network=Network(bf3441f7-9585-4902-b159-32b41a1d162a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc94f5eb3-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.488 226109 DEBUG os_vif [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:cd:e3,bridge_name='br-int',has_traffic_filtering=True,id=c94f5eb3-41d4-4e79-a071-3b5b801e2644,network=Network(bf3441f7-9585-4902-b159-32b41a1d162a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc94f5eb3-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.488 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.489 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.489 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.492 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.492 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc94f5eb3-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.493 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc94f5eb3-41, col_values=(('external_ids', {'iface-id': 'c94f5eb3-41d4-4e79-a071-3b5b801e2644', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:cd:e3', 'vm-uuid': '5f718fcb-597c-47a7-b738-1cd8d4b7b0de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:59 compute-1 NetworkManager[49031]: <info>  [1765005539.4960] manager: (tapc94f5eb3-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.496 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.501 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.502 226109 INFO os_vif [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:cd:e3,bridge_name='br-int',has_traffic_filtering=True,id=c94f5eb3-41d4-4e79-a071-3b5b801e2644,network=Network(bf3441f7-9585-4902-b159-32b41a1d162a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc94f5eb3-41')
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.695 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.696 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.696 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] No VIF found with MAC fa:16:3e:2c:cd:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.697 226109 INFO nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Using config drive
Dec 06 07:18:59 compute-1 nova_compute[226101]: 2025-12-06 07:18:59.721 226109 DEBUG nova.storage.rbd_utils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] rbd image 5f718fcb-597c-47a7-b738-1cd8d4b7b0de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.034 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.034 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.034 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.122 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.122 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.122 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
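_heal_instance_info_cache is one of the timer-driven jobs oslo.service interleaves with the build; both instances are skipped here because they are still building, so the pass ends with nothing to refresh. The decorator pattern behind every "Running periodic task" line, sketched with an illustrative spacing value (nova's real intervals are config-driven):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Refresh the network info cache for at most one instance
            # per run, skipping instances that are still building.
            pass

    # The service loop calls run_periodic_tasks(context) on a timer,
    # dispatching each decorated method once its spacing has elapsed.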
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.123 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2324004116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2158792105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2096741357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.618 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:19:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:00.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:19:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:19:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:00.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.673 226109 INFO nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Creating config drive at /var/lib/nova/instances/5f718fcb-597c-47a7-b738-1cd8d4b7b0de/disk.config
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.680 226109 DEBUG oslo_concurrency.processutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f718fcb-597c-47a7-b738-1cd8d4b7b0de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd23iuqj_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.716 226109 INFO nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Creating config drive at /var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d/disk.config
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.723 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuqge8coq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.813 226109 DEBUG oslo_concurrency.processutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f718fcb-597c-47a7-b738-1cd8d4b7b0de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd23iuqj_" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.849 226109 DEBUG nova.storage.rbd_utils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] rbd image 5f718fcb-597c-47a7-b738-1cd8d4b7b0de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.856 226109 DEBUG oslo_concurrency.processutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f718fcb-597c-47a7-b738-1cd8d4b7b0de/disk.config 5f718fcb-597c-47a7-b738-1cd8d4b7b0de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.885 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuqge8coq" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.925 226109 DEBUG nova.storage.rbd_utils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:00 compute-1 nova_compute[226101]: 2025-12-06 07:19:00.930 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d/disk.config 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.112 226109 DEBUG oslo_concurrency.processutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d/disk.config 67f0eb12-5070-4bc1-846b-09eb25dec88d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.113 226109 INFO nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Deleting local config drive /var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d/disk.config because it was imported into RBD.
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.120 226109 DEBUG oslo_concurrency.processutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f718fcb-597c-47a7-b738-1cd8d4b7b0de/disk.config 5f718fcb-597c-47a7-b738-1cd8d4b7b0de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.121 226109 INFO nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Deleting local config drive /var/lib/nova/instances/5f718fcb-597c-47a7-b738-1cd8d4b7b0de/disk.config because it was imported into RBD.
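The config-drive flow for both instances is identical: stage the metadata in a temp directory, pack it into an ISO9660 image with mkisofs, rbd-import the image into the vms pool (this deployment keeps instance disks in RBD), then delete the local copy. The two commands replayed through processutils for one instance, with the multi-word publisher quoted as the single argument it really is (the log prints argv unquoted):

    from oslo_concurrency import processutils

    iso = '/var/lib/nova/instances/5f718fcb-597c-47a7-b738-1cd8d4b7b0de/disk.config'

    # Build the ISO from the staged metadata directory (same flags as the log).
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpd23iuqj_')

    # Import it into the vms pool; the local file is removed afterwards.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        '5f718fcb-597c-47a7-b738-1cd8d4b7b0de_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')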
Dec 06 07:19:01 compute-1 kernel: tapb26720ee-8d: entered promiscuous mode
Dec 06 07:19:01 compute-1 NetworkManager[49031]: <info>  [1765005541.1943] manager: (tapb26720ee-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Dec 06 07:19:01 compute-1 NetworkManager[49031]: <info>  [1765005541.1993] manager: (tapc94f5eb3-41): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Dec 06 07:19:01 compute-1 ovn_controller[130279]: 2025-12-06T07:19:01Z|00292|binding|INFO|Claiming lport b26720ee-8d01-41bb-ad74-31e572887a36 for this chassis.
Dec 06 07:19:01 compute-1 ovn_controller[130279]: 2025-12-06T07:19:01Z|00293|binding|INFO|b26720ee-8d01-41bb-ad74-31e572887a36: Claiming fa:16:3e:a2:d1:1e 10.100.0.14
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.200 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.214 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:d1:1e 10.100.0.14'], port_security=['fa:16:3e:a2:d1:1e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '67f0eb12-5070-4bc1-846b-09eb25dec88d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b26720ee-8d01-41bb-ad74-31e572887a36) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.215 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b26720ee-8d01-41bb-ad74-31e572887a36 in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.217 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.228 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0b885c94-2212-4acf-84d7-c97d59c50a36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.229 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.231 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.231 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e98f97a6-97f1-4e0d-b1de-9e31d90384e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
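"Provisioning metadata" for a datapath means the agent builds an ovnmeta-<network> namespace with a veth pair, one end (tap4d599401-31) inside the namespace and the peer (tap4d599401-30) left outside to be wired into br-int; the privsep replies above are those netlink operations. The rough shape of the plumbing, shelling out to ip(8) for readability (the agent itself drives pyroute2 through privsep, and this sketch assumes it runs as root):

    from oslo_concurrency import processutils

    ns = 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64'

    processutils.execute('ip', 'netns', 'add', ns)
    processutils.execute('ip', 'link', 'add', 'tap4d599401-30',
                         'type', 'veth', 'peer', 'name', 'tap4d599401-31')
    processutils.execute('ip', 'link', 'set', 'tap4d599401-31', 'netns', ns)
    processutils.execute('ip', 'netns', 'exec', ns,
                         'ip', 'link', 'set', 'tap4d599401-31', 'up')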
Dec 06 07:19:01 compute-1 systemd-udevd[254661]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:19:01 compute-1 systemd-udevd[254663]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.232 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[799f51f6-a4d1-4f6e-86fd-86bb0b73a1f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.245 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[5ead1132-6de3-4148-bf59-2bca95278aa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 systemd-machined[190302]: New machine qemu-35-instance-0000004d.
Dec 06 07:19:01 compute-1 NetworkManager[49031]: <info>  [1765005541.2520] device (tapb26720ee-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:19:01 compute-1 NetworkManager[49031]: <info>  [1765005541.2529] device (tapb26720ee-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:19:01 compute-1 kernel: tapc94f5eb3-41: entered promiscuous mode
Dec 06 07:19:01 compute-1 NetworkManager[49031]: <info>  [1765005541.2685] device (tapc94f5eb3-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:19:01 compute-1 NetworkManager[49031]: <info>  [1765005541.2699] device (tapc94f5eb3-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.270 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:01 compute-1 ovn_controller[130279]: 2025-12-06T07:19:01Z|00294|binding|INFO|Claiming lport c94f5eb3-41d4-4e79-a071-3b5b801e2644 for this chassis.
Dec 06 07:19:01 compute-1 ovn_controller[130279]: 2025-12-06T07:19:01Z|00295|binding|INFO|c94f5eb3-41d4-4e79-a071-3b5b801e2644: Claiming fa:16:3e:2c:cd:e3 10.100.0.12
Dec 06 07:19:01 compute-1 systemd[1]: Started Virtual Machine qemu-35-instance-0000004d.
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.274 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6622c287-967d-4bd0-8865-1aa6f97a0654]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.282 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:cd:e3 10.100.0.12'], port_security=['fa:16:3e:2c:cd:e3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5f718fcb-597c-47a7-b738-1cd8d4b7b0de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf3441f7-9585-4902-b159-32b41a1d162a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c6696bf390d4cc5bfd9852d5b264b5a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d4a667f-a5c6-421a-b37d-95b02f665a4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebc2c736-6517-4f60-a962-50feb29f121c, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=c94f5eb3-41d4-4e79-a071-3b5b801e2644) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:01 compute-1 ovn_controller[130279]: 2025-12-06T07:19:01Z|00296|binding|INFO|Setting lport b26720ee-8d01-41bb-ad74-31e572887a36 ovn-installed in OVS
Dec 06 07:19:01 compute-1 ovn_controller[130279]: 2025-12-06T07:19:01Z|00297|binding|INFO|Setting lport b26720ee-8d01-41bb-ad74-31e572887a36 up in Southbound
Dec 06 07:19:01 compute-1 systemd-machined[190302]: New machine qemu-36-instance-0000004e.
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.322 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:01 compute-1 systemd[1]: Started Virtual Machine qemu-36-instance-0000004e.
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.339 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5520edbd-9cf0-4aa8-814e-2fa1d3fa31ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 NetworkManager[49031]: <info>  [1765005541.3514] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.350 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b62f0a73-78d5-4c66-b5a0-0c6fe16c5143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 ovn_controller[130279]: 2025-12-06T07:19:01Z|00298|binding|INFO|Setting lport c94f5eb3-41d4-4e79-a071-3b5b801e2644 ovn-installed in OVS
Dec 06 07:19:01 compute-1 ovn_controller[130279]: 2025-12-06T07:19:01Z|00299|binding|INFO|Setting lport c94f5eb3-41d4-4e79-a071-3b5b801e2644 up in Southbound
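[editor's note] The four binding messages above show ovn-controller's full port-binding handshake for lport c94f5eb3-...: claim for this chassis, install in OVS, then mark up in the Southbound DB. A hedged way to confirm that end state from the compute host is to query the Southbound Port_Binding table directly; the sketch below assumes ovn-sbctl is on PATH and can reach the SB DB, and that your ovs-dbctl version emits the usual "headings"/"data" JSON layout.

```python
# Sketch: confirm the Port_Binding row for the lport claimed above is
# bound to this chassis and marked up. Assumes ovn-sbctl on PATH with
# access to the OVN Southbound DB.
import json
import subprocess

out = subprocess.run(
    ['ovn-sbctl', '--format=json', 'find', 'Port_Binding',
     'logical_port=c94f5eb3-41d4-4e79-a071-3b5b801e2644'],
    capture_output=True, text=True, check=True)

table = json.loads(out.stdout)
# Assumes exactly one matching row; 'chassis' and 'up' mirror the
# "Claiming ..." and "up in Southbound" messages in the log.
for heading, value in zip(table['headings'], table['data'][0]):
    if heading in ('logical_port', 'chassis', 'up'):
        print(heading, '=', value)
```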
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.358 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.385 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7321c61a-46af-411f-a12f-bd93cfa9902f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.388 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[3885933d-dc0b-46ce-960c-dd0cea40dd33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 NetworkManager[49031]: <info>  [1765005541.4117] device (tap4d599401-30): carrier: link connected
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.417 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[185fc561-2989-4ac1-8dc7-3bb2033ae54c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.434 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d79e5a38-3c02-4623-8e4b-d4f7e1d78dad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582091, 'reachable_time': 43722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254704, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.450 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b919e220-e513-4240-9d0e-f771928455b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 582091, 'tstamp': 582091}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254705, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.465 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[09092f90-f7de-4295-aa92-9557db1688b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582091, 'reachable_time': 43722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254706, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
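[editor's note] The privsep replies above are pyroute2 netlink messages (RTM_NEWLINK/RTM_NEWADDR) that the metadata agent reads from inside the ovnmeta-4d599401-... namespace after wiring the veth pair; note the namespace end of the pair is tap4d599401-31 while the OVS-side end is tap4d599401-30. A minimal sketch of reading the same attributes with pyroute2, assuming the library is installed, the run is privileged, and the namespace still exists:

```python
# Sketch: read the link attributes the RTM_NEWLINK dumps above contain,
# directly from the ovnmeta namespace named in the log. Needs pyroute2
# and root privileges on the compute host.
from pyroute2 import NetNS

with NetNS('ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64') as ns:
    for link in ns.get_links():
        print(link.get_attr('IFLA_IFNAME'),
              link.get_attr('IFLA_OPERSTATE'),
              link.get_attr('IFLA_ADDRESS'))
```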
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.499 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4465f8-599e-4abb-9293-60877c86b68a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.556 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[17806533-495e-4802-95c0-0684ce41576f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.558 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.558 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.559 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:01 compute-1 kernel: tap4d599401-30: entered promiscuous mode
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.561 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:01 compute-1 NetworkManager[49031]: <info>  [1765005541.5616] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.564 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.565 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.566 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:01 compute-1 ovn_controller[130279]: 2025-12-06T07:19:01Z|00300|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.567 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:19:01 compute-1 ceph-mon[81689]: pgmap v1831: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.581 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.581 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[32aa7c33-bc94-4d7a-a0f7-1704cfb11da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.582 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
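[editor's note] The haproxy_cfg dump above is the per-network metadata proxy configuration: haproxy binds 169.254.169.254:80 inside the ovnmeta namespace and forwards each request to the unix-socket backend /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header (plus X-Forwarded-For via "option forwardfor"). For debugging the proxy path, a hedged stdlib-only sketch of speaking to that backend socket the same way; it assumes you run it on the compute host with access to the socket, and the agent may also require an X-Forwarded-For header carrying a valid instance IP:

```python
# Sketch: issue the kind of request this haproxy forwards, straight to
# the unix-socket backend named in the config above. Stdlib only.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials an AF_UNIX socket instead of TCP."""

    def __init__(self, path):
        super().__init__('localhost')
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
# Network ID taken from the config above; the real haproxy would also
# add an X-Forwarded-For header with the requesting instance's fixed IP,
# which the agent may require to resolve the instance.
conn.request('GET', '/openstack/latest/meta_data.json',
             headers={'X-OVN-Network-ID':
                      '4d599401-3772-4e38-8cd2-d774d370af64'})
resp = conn.getresponse()
print(resp.status, resp.read(200))
```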
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.583 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.636 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.644 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:01.644 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.822 226109 DEBUG nova.network.neutron [req-7ff6b0b5-abe8-491d-b070-d0fb6f3ee424 req-9e25a140-a7e7-43b7-9253-fc5fef33cd39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Updated VIF entry in instance network info cache for port c94f5eb3-41d4-4e79-a071-3b5b801e2644. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.823 226109 DEBUG nova.network.neutron [req-7ff6b0b5-abe8-491d-b070-d0fb6f3ee424 req-9e25a140-a7e7-43b7-9253-fc5fef33cd39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Updating instance_info_cache with network_info: [{"id": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "address": "fa:16:3e:2c:cd:e3", "network": {"id": "bf3441f7-9585-4902-b159-32b41a1d162a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-150114568-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c6696bf390d4cc5bfd9852d5b264b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc94f5eb3-41", "ovs_interfaceid": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:01 compute-1 nova_compute[226101]: 2025-12-06 07:19:01.853 226109 DEBUG oslo_concurrency.lockutils [req-7ff6b0b5-abe8-491d-b070-d0fb6f3ee424 req-9e25a140-a7e7-43b7-9253-fc5fef33cd39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5f718fcb-597c-47a7-b738-1cd8d4b7b0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:01 compute-1 podman[254738]: 2025-12-06 07:19:01.952954735 +0000 UTC m=+0.049105919 container create 0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:19:01 compute-1 systemd[1]: Started libpod-conmon-0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0.scope.
Dec 06 07:19:02 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:19:02 compute-1 podman[254738]: 2025-12-06 07:19:01.925037361 +0000 UTC m=+0.021188575 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:19:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60e30451ab3f04f531d424d68a10dd34e2e8c47084f8a7f9aa9183b245c7bdb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:02 compute-1 podman[254738]: 2025-12-06 07:19:02.056295104 +0000 UTC m=+0.152446298 container init 0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.063 226109 DEBUG nova.compute.manager [req-b529ba0b-21a2-465e-bdb9-eba55bf0826a req-2ae2f7c3-b0b8-4393-a4f3-2a75a8a2c016 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.064 226109 DEBUG oslo_concurrency.lockutils [req-b529ba0b-21a2-465e-bdb9-eba55bf0826a req-2ae2f7c3-b0b8-4393-a4f3-2a75a8a2c016 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.064 226109 DEBUG oslo_concurrency.lockutils [req-b529ba0b-21a2-465e-bdb9-eba55bf0826a req-2ae2f7c3-b0b8-4393-a4f3-2a75a8a2c016 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.064 226109 DEBUG oslo_concurrency.lockutils [req-b529ba0b-21a2-465e-bdb9-eba55bf0826a req-2ae2f7c3-b0b8-4393-a4f3-2a75a8a2c016 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.065 226109 DEBUG nova.compute.manager [req-b529ba0b-21a2-465e-bdb9-eba55bf0826a req-2ae2f7c3-b0b8-4393-a4f3-2a75a8a2c016 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Processing event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:19:02 compute-1 podman[254738]: 2025-12-06 07:19:02.070018125 +0000 UTC m=+0.166169309 container start 0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 07:19:02 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[254753]: [NOTICE]   (254757) : New worker (254759) forked
Dec 06 07:19:02 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[254753]: [NOTICE]   (254757) : Loading success.
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.165 139580 INFO neutron.agent.ovn.metadata.agent [-] Port c94f5eb3-41d4-4e79-a071-3b5b801e2644 in datapath bf3441f7-9585-4902-b159-32b41a1d162a unbound from our chassis
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.167 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf3441f7-9585-4902-b159-32b41a1d162a
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.178 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b06d2243-cf33-4289-b05d-f89030ae2b17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.179 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf3441f7-91 in ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.184 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf3441f7-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.184 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5b7a2c7e-a4a5-413f-8505-26160f8a10d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.185 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9bae9a57-311c-483d-a304-e5275c502193]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.199 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a4db0221-a7c7-4a76-82b1-232667ebc6fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.224 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7998b8bb-1966-4f8a-84e9-e3851bb87bb6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.254 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[51f6a10f-5d4a-42b6-ace0-0d2e365f6895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 NetworkManager[49031]: <info>  [1765005542.2604] manager: (tapbf3441f7-90): new Veth device (/org/freedesktop/NetworkManager/Devices/138)
Dec 06 07:19:02 compute-1 systemd-udevd[254687]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.262 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[de1bf81d-2db7-4d44-8350-aa6b20dead43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.298 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ce76b7f7-fec4-4cff-8e19-9ddc7744c847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.305 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ceb70b-de05-4ec7-b816-998ae4b178ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 NetworkManager[49031]: <info>  [1765005542.3285] device (tapbf3441f7-90): carrier: link connected
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.334 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[0f81947f-6207-4497-a38c-1139f5f4ff55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.353 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2a95a8-81e4-4daf-9ced-206443a0badc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf3441f7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:26:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582183, 'reachable_time': 25520, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254814, 'error': None, 'target': 'ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.372 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6afe0c-21f1-4f01-8ce1-1326a2cc15a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:26ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 582183, 'tstamp': 582183}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254820, 'error': None, 'target': 'ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.388 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5309ca25-db74-4fef-b87b-00c791398376]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf3441f7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:26:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582183, 'reachable_time': 25520, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254821, 'error': None, 'target': 'ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.423 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6357b0b3-f76b-40da-b5a8-7015e6a76312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.452 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005542.4514081, 5f718fcb-597c-47a7-b738-1cd8d4b7b0de => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.452 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] VM Started (Lifecycle Event)
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.480 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[217a1092-ce2c-4d99-a78e-c297caaaef8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.481 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf3441f7-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.482 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.482 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf3441f7-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.483 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.484 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:02 compute-1 NetworkManager[49031]: <info>  [1765005542.4851] manager: (tapbf3441f7-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Dec 06 07:19:02 compute-1 kernel: tapbf3441f7-90: entered promiscuous mode
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.489 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf3441f7-90, col_values=(('external_ids', {'iface-id': '066f1064-0116-42e5-abc8-441d0edadf38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.490 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:02 compute-1 ovn_controller[130279]: 2025-12-06T07:19:02Z|00301|binding|INFO|Releasing lport 066f1064-0116-42e5-abc8-441d0edadf38 from this chassis (sb_readonly=0)
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.491 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.492 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf3441f7-9585-4902-b159-32b41a1d162a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf3441f7-9585-4902-b159-32b41a1d162a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.492 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[12367f82-fc69-45e2-9fb7-16a05b587129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.493 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-bf3441f7-9585-4902-b159-32b41a1d162a
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/bf3441f7-9585-4902-b159-32b41a1d162a.pid.haproxy
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID bf3441f7-9585-4902-b159-32b41a1d162a
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:19:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:02.493 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a', 'env', 'PROCESS_TAG=haproxy-bf3441f7-9585-4902-b159-32b41a1d162a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf3441f7-9585-4902-b159-32b41a1d162a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.496 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005542.4517162, 5f718fcb-597c-47a7-b738-1cd8d4b7b0de => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.496 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] VM Paused (Lifecycle Event)
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.504 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.517 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.523 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.545 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] During sync_power_state the instance has a pending task (spawning). Skip.
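[editor's note] The sync_power_state messages above compare the power state recorded in the nova DB (0) with what the hypervisor reports (3 after the Paused lifecycle event, 1 after Started). For reading those numbers, a small reference sketch of nova's numeric power states; the mapping is taken from nova.compute.power_state as commonly defined, so verify it against the nova tree actually deployed:

```python
# Reference mapping for the numeric power_state values in the log above.
# Assumption: values match nova.compute.power_state in this deployment;
# check the source tree to be sure.
POWER_STATE = {
    0: 'NOSTATE',
    1: 'RUNNING',
    3: 'PAUSED',
    4: 'SHUTDOWN',
    6: 'CRASHED',
    7: 'SUSPENDED',
}

# "current DB power_state: 0, VM power_state: 3" then reads as:
print(POWER_STATE[0], '->', POWER_STATE[3])  # NOSTATE -> PAUSED
```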
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.586 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005542.5863478, 67f0eb12-5070-4bc1-846b-09eb25dec88d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.587 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] VM Started (Lifecycle Event)
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.588 226109 DEBUG nova.compute.manager [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.591 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.594 226109 INFO nova.virt.libvirt.driver [-] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance spawned successfully.
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.594 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.610 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.617 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.621 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.621 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.622 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.622 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.622 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.623 226109 DEBUG nova.virt.libvirt.driver [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.655 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.655 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005542.587006, 67f0eb12-5070-4bc1-846b-09eb25dec88d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.656 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] VM Paused (Lifecycle Event)
Dec 06 07:19:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:19:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:02.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:19:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:02.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.708 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.712 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005542.5907674, 67f0eb12-5070-4bc1-846b-09eb25dec88d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.712 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] VM Resumed (Lifecycle Event)
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.723 226109 INFO nova.compute.manager [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Took 13.96 seconds to spawn the instance on the hypervisor.
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.724 226109 DEBUG nova.compute.manager [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.737 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.748 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.798 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.838 226109 INFO nova.compute.manager [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Took 17.23 seconds to build instance.
Dec 06 07:19:02 compute-1 nova_compute[226101]: 2025-12-06 07:19:02.882 226109 DEBUG oslo_concurrency.lockutils [None req-d51e6bfd-3f80-4541-9401-facd9385e8a8 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:02 compute-1 podman[254896]: 2025-12-06 07:19:02.883369031 +0000 UTC m=+0.055330855 container create 3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:19:02 compute-1 systemd[1]: Started libpod-conmon-3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0.scope.
Dec 06 07:19:02 compute-1 podman[254896]: 2025-12-06 07:19:02.850944025 +0000 UTC m=+0.022905889 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:19:02 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:19:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b387088f1674e95595d8a603eabb32a826970c7d31e583eda9fc59f0ba366/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:02 compute-1 podman[254896]: 2025-12-06 07:19:02.978359096 +0000 UTC m=+0.150320940 container init 3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:19:02 compute-1 podman[254896]: 2025-12-06 07:19:02.983375971 +0000 UTC m=+0.155337795 container start 3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:19:03 compute-1 neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a[254912]: [NOTICE]   (254916) : New worker (254918) forked
Dec 06 07:19:03 compute-1 neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a[254912]: [NOTICE]   (254916) : Loading success.
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.557 226109 DEBUG nova.compute.manager [req-6c3fb629-a7a0-498a-8322-67424f7c5b63 req-155965ab-356f-43a8-85c9-594928cfb569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Received event network-vif-plugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.557 226109 DEBUG oslo_concurrency.lockutils [req-6c3fb629-a7a0-498a-8322-67424f7c5b63 req-155965ab-356f-43a8-85c9-594928cfb569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.557 226109 DEBUG oslo_concurrency.lockutils [req-6c3fb629-a7a0-498a-8322-67424f7c5b63 req-155965ab-356f-43a8-85c9-594928cfb569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.557 226109 DEBUG oslo_concurrency.lockutils [req-6c3fb629-a7a0-498a-8322-67424f7c5b63 req-155965ab-356f-43a8-85c9-594928cfb569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.558 226109 DEBUG nova.compute.manager [req-6c3fb629-a7a0-498a-8322-67424f7c5b63 req-155965ab-356f-43a8-85c9-594928cfb569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Processing event network-vif-plugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.558 226109 DEBUG nova.compute.manager [req-6c3fb629-a7a0-498a-8322-67424f7c5b63 req-155965ab-356f-43a8-85c9-594928cfb569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Received event network-vif-plugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.558 226109 DEBUG oslo_concurrency.lockutils [req-6c3fb629-a7a0-498a-8322-67424f7c5b63 req-155965ab-356f-43a8-85c9-594928cfb569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.558 226109 DEBUG oslo_concurrency.lockutils [req-6c3fb629-a7a0-498a-8322-67424f7c5b63 req-155965ab-356f-43a8-85c9-594928cfb569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.558 226109 DEBUG oslo_concurrency.lockutils [req-6c3fb629-a7a0-498a-8322-67424f7c5b63 req-155965ab-356f-43a8-85c9-594928cfb569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.559 226109 DEBUG nova.compute.manager [req-6c3fb629-a7a0-498a-8322-67424f7c5b63 req-155965ab-356f-43a8-85c9-594928cfb569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] No waiting events found dispatching network-vif-plugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.559 226109 WARNING nova.compute.manager [req-6c3fb629-a7a0-498a-8322-67424f7c5b63 req-155965ab-356f-43a8-85c9-594928cfb569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Received unexpected event network-vif-plugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 for instance with vm_state building and task_state spawning.
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.559 226109 DEBUG nova.compute.manager [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.562 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005543.5626657, 5f718fcb-597c-47a7-b738-1cd8d4b7b0de => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.563 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] VM Resumed (Lifecycle Event)
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.564 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.570 226109 INFO nova.virt.libvirt.driver [-] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Instance spawned successfully.
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.571 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:19:03 compute-1 ceph-mon[81689]: pgmap v1832: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Dec 06 07:19:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2383397060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.586 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.589 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.597 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.598 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.598 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.599 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.599 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.599 226109 DEBUG nova.virt.libvirt.driver [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:03 compute-1 nova_compute[226101]: 2025-12-06 07:19:03.630 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:19:04 compute-1 nova_compute[226101]: 2025-12-06 07:19:04.033 226109 INFO nova.compute.manager [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Took 9.79 seconds to spawn the instance on the hypervisor.
Dec 06 07:19:04 compute-1 nova_compute[226101]: 2025-12-06 07:19:04.034 226109 DEBUG nova.compute.manager [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:04 compute-1 nova_compute[226101]: 2025-12-06 07:19:04.121 226109 INFO nova.compute.manager [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Took 18.19 seconds to build instance.
Dec 06 07:19:04 compute-1 nova_compute[226101]: 2025-12-06 07:19:04.141 226109 DEBUG oslo_concurrency.lockutils [None req-db5d97b3-436b-4b11-87f5-12cb320fb366 f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:04 compute-1 nova_compute[226101]: 2025-12-06 07:19:04.436 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:04 compute-1 nova_compute[226101]: 2025-12-06 07:19:04.494 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:04 compute-1 nova_compute[226101]: 2025-12-06 07:19:04.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:04.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:04.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:05 compute-1 nova_compute[226101]: 2025-12-06 07:19:05.030 226109 DEBUG nova.compute.manager [req-23236ccf-0467-4799-b946-f2347a5dfd1c req-1f55230b-eaa3-4baa-acb5-9e5d1eaa912a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:05 compute-1 nova_compute[226101]: 2025-12-06 07:19:05.031 226109 DEBUG oslo_concurrency.lockutils [req-23236ccf-0467-4799-b946-f2347a5dfd1c req-1f55230b-eaa3-4baa-acb5-9e5d1eaa912a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:05 compute-1 nova_compute[226101]: 2025-12-06 07:19:05.031 226109 DEBUG oslo_concurrency.lockutils [req-23236ccf-0467-4799-b946-f2347a5dfd1c req-1f55230b-eaa3-4baa-acb5-9e5d1eaa912a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:05 compute-1 nova_compute[226101]: 2025-12-06 07:19:05.031 226109 DEBUG oslo_concurrency.lockutils [req-23236ccf-0467-4799-b946-f2347a5dfd1c req-1f55230b-eaa3-4baa-acb5-9e5d1eaa912a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:05 compute-1 nova_compute[226101]: 2025-12-06 07:19:05.031 226109 DEBUG nova.compute.manager [req-23236ccf-0467-4799-b946-f2347a5dfd1c req-1f55230b-eaa3-4baa-acb5-9e5d1eaa912a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] No waiting events found dispatching network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:05 compute-1 nova_compute[226101]: 2025-12-06 07:19:05.032 226109 WARNING nova.compute.manager [req-23236ccf-0467-4799-b946-f2347a5dfd1c req-1f55230b-eaa3-4baa-acb5-9e5d1eaa912a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received unexpected event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 for instance with vm_state active and task_state None.
Dec 06 07:19:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/134940863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2596240762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/69132736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:06.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:06 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:06.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:07 compute-1 nova_compute[226101]: 2025-12-06 07:19:07.055 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:07 compute-1 NetworkManager[49031]: <info>  [1765005547.0560] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Dec 06 07:19:07 compute-1 NetworkManager[49031]: <info>  [1765005547.0571] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Dec 06 07:19:07 compute-1 nova_compute[226101]: 2025-12-06 07:19:07.141 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:07 compute-1 ovn_controller[130279]: 2025-12-06T07:19:07Z|00302|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec 06 07:19:07 compute-1 ovn_controller[130279]: 2025-12-06T07:19:07Z|00303|binding|INFO|Releasing lport 066f1064-0116-42e5-abc8-441d0edadf38 from this chassis (sb_readonly=0)
Dec 06 07:19:07 compute-1 nova_compute[226101]: 2025-12-06 07:19:07.157 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:07 compute-1 ceph-mon[81689]: pgmap v1833: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 07:19:07 compute-1 nova_compute[226101]: 2025-12-06 07:19:07.671 226109 DEBUG nova.compute.manager [req-18c16ea0-254a-464e-8515-e7b3683b0813 req-8947d5a9-069e-451a-a5e6-28162a13334c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-changed-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:07 compute-1 nova_compute[226101]: 2025-12-06 07:19:07.671 226109 DEBUG nova.compute.manager [req-18c16ea0-254a-464e-8515-e7b3683b0813 req-8947d5a9-069e-451a-a5e6-28162a13334c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Refreshing instance network info cache due to event network-changed-b26720ee-8d01-41bb-ad74-31e572887a36. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:19:07 compute-1 nova_compute[226101]: 2025-12-06 07:19:07.671 226109 DEBUG oslo_concurrency.lockutils [req-18c16ea0-254a-464e-8515-e7b3683b0813 req-8947d5a9-069e-451a-a5e6-28162a13334c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:07 compute-1 nova_compute[226101]: 2025-12-06 07:19:07.671 226109 DEBUG oslo_concurrency.lockutils [req-18c16ea0-254a-464e-8515-e7b3683b0813 req-8947d5a9-069e-451a-a5e6-28162a13334c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:07 compute-1 nova_compute[226101]: 2025-12-06 07:19:07.672 226109 DEBUG nova.network.neutron [req-18c16ea0-254a-464e-8515-e7b3683b0813 req-8947d5a9-069e-451a-a5e6-28162a13334c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Refreshing network info cache for port b26720ee-8d01-41bb-ad74-31e572887a36 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:19:08 compute-1 ceph-mon[81689]: pgmap v1834: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 875 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 06 07:19:08 compute-1 nova_compute[226101]: 2025-12-06 07:19:08.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:08.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:08 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:08.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:10 compute-1 nova_compute[226101]: 2025-12-06 07:19:10.084 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:10 compute-1 ceph-mon[81689]: pgmap v1835: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 180 op/s
Dec 06 07:19:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2115880549' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:19:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2115880549' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:19:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:19:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:19:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:10.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:10 compute-1 nova_compute[226101]: 2025-12-06 07:19:10.933 226109 DEBUG nova.compute.manager [req-11df2216-0636-4516-9733-7743abecf6a5 req-71472752-ef06-4d4c-b88f-396e8184801b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Received event network-changed-c94f5eb3-41d4-4e79-a071-3b5b801e2644 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:10 compute-1 nova_compute[226101]: 2025-12-06 07:19:10.934 226109 DEBUG nova.compute.manager [req-11df2216-0636-4516-9733-7743abecf6a5 req-71472752-ef06-4d4c-b88f-396e8184801b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Refreshing instance network info cache due to event network-changed-c94f5eb3-41d4-4e79-a071-3b5b801e2644. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:19:10 compute-1 nova_compute[226101]: 2025-12-06 07:19:10.934 226109 DEBUG oslo_concurrency.lockutils [req-11df2216-0636-4516-9733-7743abecf6a5 req-71472752-ef06-4d4c-b88f-396e8184801b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5f718fcb-597c-47a7-b738-1cd8d4b7b0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:10 compute-1 nova_compute[226101]: 2025-12-06 07:19:10.935 226109 DEBUG oslo_concurrency.lockutils [req-11df2216-0636-4516-9733-7743abecf6a5 req-71472752-ef06-4d4c-b88f-396e8184801b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5f718fcb-597c-47a7-b738-1cd8d4b7b0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:10 compute-1 nova_compute[226101]: 2025-12-06 07:19:10.935 226109 DEBUG nova.network.neutron [req-11df2216-0636-4516-9733-7743abecf6a5 req-71472752-ef06-4d4c-b88f-396e8184801b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Refreshing network info cache for port c94f5eb3-41d4-4e79-a071-3b5b801e2644 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:19:11 compute-1 nova_compute[226101]: 2025-12-06 07:19:11.034 226109 DEBUG nova.network.neutron [req-18c16ea0-254a-464e-8515-e7b3683b0813 req-8947d5a9-069e-451a-a5e6-28162a13334c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updated VIF entry in instance network info cache for port b26720ee-8d01-41bb-ad74-31e572887a36. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:19:11 compute-1 nova_compute[226101]: 2025-12-06 07:19:11.035 226109 DEBUG nova.network.neutron [req-18c16ea0-254a-464e-8515-e7b3683b0813 req-8947d5a9-069e-451a-a5e6-28162a13334c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating instance_info_cache with network_info: [{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:11 compute-1 nova_compute[226101]: 2025-12-06 07:19:11.075 226109 DEBUG oslo_concurrency.lockutils [req-18c16ea0-254a-464e-8515-e7b3683b0813 req-8947d5a9-069e-451a-a5e6-28162a13334c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:11 compute-1 ceph-mon[81689]: pgmap v1836: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 180 op/s
Dec 06 07:19:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:12.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:13 compute-1 ceph-mon[81689]: pgmap v1837: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 248 op/s
Dec 06 07:19:14 compute-1 nova_compute[226101]: 2025-12-06 07:19:14.180 226109 DEBUG nova.network.neutron [req-11df2216-0636-4516-9733-7743abecf6a5 req-71472752-ef06-4d4c-b88f-396e8184801b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Updated VIF entry in instance network info cache for port c94f5eb3-41d4-4e79-a071-3b5b801e2644. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:19:14 compute-1 nova_compute[226101]: 2025-12-06 07:19:14.181 226109 DEBUG nova.network.neutron [req-11df2216-0636-4516-9733-7743abecf6a5 req-71472752-ef06-4d4c-b88f-396e8184801b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Updating instance_info_cache with network_info: [{"id": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "address": "fa:16:3e:2c:cd:e3", "network": {"id": "bf3441f7-9585-4902-b159-32b41a1d162a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-150114568-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c6696bf390d4cc5bfd9852d5b264b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc94f5eb3-41", "ovs_interfaceid": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:14 compute-1 nova_compute[226101]: 2025-12-06 07:19:14.203 226109 DEBUG oslo_concurrency.lockutils [req-11df2216-0636-4516-9733-7743abecf6a5 req-71472752-ef06-4d4c-b88f-396e8184801b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5f718fcb-597c-47a7-b738-1cd8d4b7b0de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:19:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:14.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:19:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:14.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:14 compute-1 ceph-mon[81689]: pgmap v1838: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 15 KiB/s wr, 216 op/s
Dec 06 07:19:15 compute-1 nova_compute[226101]: 2025-12-06 07:19:15.085 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:16.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:16 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:16.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:17 compute-1 nova_compute[226101]: 2025-12-06 07:19:17.030 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:17 compute-1 ceph-mon[81689]: pgmap v1839: 305 pgs: 305 active+clean; 182 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 333 KiB/s wr, 220 op/s
Dec 06 07:19:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3659152555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:18 compute-1 ovn_controller[130279]: 2025-12-06T07:19:18Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a2:d1:1e 10.100.0.14
Dec 06 07:19:18 compute-1 ovn_controller[130279]: 2025-12-06T07:19:18Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a2:d1:1e 10.100.0.14
Dec 06 07:19:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:18.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:18 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:18.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:19 compute-1 ceph-mon[81689]: pgmap v1840: 305 pgs: 305 active+clean; 201 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Dec 06 07:19:19 compute-1 ovn_controller[130279]: 2025-12-06T07:19:19Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:cd:e3 10.100.0.12
Dec 06 07:19:19 compute-1 ovn_controller[130279]: 2025-12-06T07:19:19Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:cd:e3 10.100.0.12
Dec 06 07:19:20 compute-1 nova_compute[226101]: 2025-12-06 07:19:20.088 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:20.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:20.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:21 compute-1 nova_compute[226101]: 2025-12-06 07:19:21.084 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:22 compute-1 podman[254928]: 2025-12-06 07:19:22.09644117 +0000 UTC m=+0.084950244 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:19:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:22.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:22 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:22.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:22 compute-1 ceph-mon[81689]: pgmap v1841: 305 pgs: 305 active+clean; 201 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Dec 06 07:19:23 compute-1 ceph-mon[81689]: pgmap v1842: 305 pgs: 305 active+clean; 314 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 8.1 MiB/s wr, 274 op/s
Dec 06 07:19:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:24.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:24 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:24.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:25 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:25.080 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:25 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:25.081 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:19:25 compute-1 nova_compute[226101]: 2025-12-06 07:19:25.080 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:25 compute-1 nova_compute[226101]: 2025-12-06 07:19:25.088 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:25 compute-1 nova_compute[226101]: 2025-12-06 07:19:25.091 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3193354649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3612372411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:26.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:26 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:26.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:27 compute-1 podman[254954]: 2025-12-06 07:19:27.073375232 +0000 UTC m=+0.062779714 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:19:27 compute-1 podman[254955]: 2025-12-06 07:19:27.073860695 +0000 UTC m=+0.061163549 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 06 07:19:28 compute-1 nova_compute[226101]: 2025-12-06 07:19:28.208 226109 DEBUG oslo_concurrency.lockutils [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Acquiring lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:28 compute-1 nova_compute[226101]: 2025-12-06 07:19:28.208 226109 DEBUG oslo_concurrency.lockutils [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:28 compute-1 nova_compute[226101]: 2025-12-06 07:19:28.208 226109 DEBUG oslo_concurrency.lockutils [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Acquiring lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:28 compute-1 nova_compute[226101]: 2025-12-06 07:19:28.208 226109 DEBUG oslo_concurrency.lockutils [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:28 compute-1 nova_compute[226101]: 2025-12-06 07:19:28.209 226109 DEBUG oslo_concurrency.lockutils [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:28 compute-1 nova_compute[226101]: 2025-12-06 07:19:28.210 226109 INFO nova.compute.manager [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Terminating instance
Dec 06 07:19:28 compute-1 nova_compute[226101]: 2025-12-06 07:19:28.211 226109 DEBUG nova.compute.manager [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:19:28 compute-1 ceph-mon[81689]: pgmap v1843: 305 pgs: 305 active+clean; 314 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 1004 KiB/s rd, 8.1 MiB/s wr, 207 op/s
Dec 06 07:19:28 compute-1 ceph-mon[81689]: pgmap v1844: 305 pgs: 305 active+clean; 322 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 8.1 MiB/s wr, 226 op/s
Dec 06 07:19:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1783859983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:19:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:28.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:19:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:28 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:28.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:30 compute-1 nova_compute[226101]: 2025-12-06 07:19:30.091 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:19:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.5 total, 600.0 interval
                                           Cumulative writes: 7360 writes, 39K keys, 7360 commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 7359 writes, 7359 syncs, 1.00 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1526 writes, 7242 keys, 1526 commit groups, 1.0 writes per commit group, ingest: 15.30 MB, 0.03 MB/s
                                           Interval WAL: 1525 writes, 1525 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.4      2.80              0.13        20    0.140       0      0       0.0       0.0
                                             L6      1/0    9.15 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8     46.2     38.1      4.56              0.49        19    0.240    107K    11K       0.0       0.0
                                            Sum      1/0    9.15 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8     28.6     29.9      7.36              0.62        39    0.189    107K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3     46.3     46.1      0.79              0.09         6    0.132     21K   2100       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     46.2     38.1      4.56              0.49        19    0.240    107K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.7      2.76              0.13        19    0.145       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.045, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.21 GB write, 0.07 MB/s write, 0.21 GB read, 0.07 MB/s read, 7.4 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 23.28 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000331 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1330,22.46 MB,7.38714%) FilterBlock(39,305.67 KB,0.0981933%) IndexBlock(39,533.58 KB,0.171405%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 07:19:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:30.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:30 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:30.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Dec 06 07:19:30 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:30.763937) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:19:30 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Dec 06 07:19:30 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005570764024, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2425, "num_deletes": 252, "total_data_size": 5804304, "memory_usage": 5875184, "flush_reason": "Manual Compaction"}
Dec 06 07:19:30 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Dec 06 07:19:30 compute-1 ceph-mon[81689]: pgmap v1845: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 7.9 MiB/s wr, 247 op/s
Dec 06 07:19:31 compute-1 kernel: tapc94f5eb3-41 (unregistering): left promiscuous mode
Dec 06 07:19:31 compute-1 NetworkManager[49031]: <info>  [1765005571.1632] device (tapc94f5eb3-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.177 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:31 compute-1 ovn_controller[130279]: 2025-12-06T07:19:31Z|00304|binding|INFO|Releasing lport c94f5eb3-41d4-4e79-a071-3b5b801e2644 from this chassis (sb_readonly=0)
Dec 06 07:19:31 compute-1 ovn_controller[130279]: 2025-12-06T07:19:31Z|00305|binding|INFO|Setting lport c94f5eb3-41d4-4e79-a071-3b5b801e2644 down in Southbound
Dec 06 07:19:31 compute-1 ovn_controller[130279]: 2025-12-06T07:19:31Z|00306|binding|INFO|Removing iface tapc94f5eb3-41 ovn-installed in OVS
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.180 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:31.184 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:cd:e3 10.100.0.12'], port_security=['fa:16:3e:2c:cd:e3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5f718fcb-597c-47a7-b738-1cd8d4b7b0de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf3441f7-9585-4902-b159-32b41a1d162a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c6696bf390d4cc5bfd9852d5b264b5a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8d4a667f-a5c6-421a-b37d-95b02f665a4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebc2c736-6517-4f60-a962-50feb29f121c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=c94f5eb3-41d4-4e79-a071-3b5b801e2644) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:31.185 139580 INFO neutron.agent.ovn.metadata.agent [-] Port c94f5eb3-41d4-4e79-a071-3b5b801e2644 in datapath bf3441f7-9585-4902-b159-32b41a1d162a unbound from our chassis
Dec 06 07:19:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:31.187 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf3441f7-9585-4902-b159-32b41a1d162a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:19:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:31.188 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1939ad83-81f7-45de-b12c-6de5005eaa59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:31.188 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a namespace which is not needed anymore
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005571190802, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3783982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36885, "largest_seqno": 39304, "table_properties": {"data_size": 3774226, "index_size": 6122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20835, "raw_average_key_size": 20, "raw_value_size": 3754504, "raw_average_value_size": 3721, "num_data_blocks": 266, "num_entries": 1009, "num_filter_entries": 1009, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005353, "oldest_key_time": 1765005353, "file_creation_time": 1765005570, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 427053 microseconds, and 9238 cpu microseconds.
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.191 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:31 compute-1 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Dec 06 07:19:31 compute-1 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000004e.scope: Consumed 14.955s CPU time.
Dec 06 07:19:31 compute-1 systemd-machined[190302]: Machine qemu-36-instance-0000004e terminated.
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.190998) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3783982 bytes OK
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.191027) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.314223) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.314295) EVENT_LOG_v1 {"time_micros": 1765005571314279, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.314333) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5793653, prev total WAL file size 5793653, number of live WAL files 2.
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.316030) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3695KB)], [69(9365KB)]
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005571316108, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 13373924, "oldest_snapshot_seqno": -1}
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.436 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 7066 keys, 11345531 bytes, temperature: kUnknown
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005571443762, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 11345531, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11297352, "index_size": 29379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 181150, "raw_average_key_size": 25, "raw_value_size": 11169839, "raw_average_value_size": 1580, "num_data_blocks": 1169, "num_entries": 7066, "num_filter_entries": 7066, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765005571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.444 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:31 compute-1 neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a[254912]: [NOTICE]   (254916) : haproxy version is 2.8.14-c23fe91
Dec 06 07:19:31 compute-1 neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a[254912]: [NOTICE]   (254916) : path to executable is /usr/sbin/haproxy
Dec 06 07:19:31 compute-1 neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a[254912]: [WARNING]  (254916) : Exiting Master process...
Dec 06 07:19:31 compute-1 neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a[254912]: [ALERT]    (254916) : Current worker (254918) exited with code 143 (Terminated)
Dec 06 07:19:31 compute-1 neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a[254912]: [WARNING]  (254916) : All workers exited. Exiting... (0)
Dec 06 07:19:31 compute-1 systemd[1]: libpod-3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0.scope: Deactivated successfully.
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.454 226109 INFO nova.virt.libvirt.driver [-] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Instance destroyed successfully.
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.454 226109 DEBUG nova.objects.instance [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lazy-loading 'resources' on Instance uuid 5f718fcb-597c-47a7-b738-1cd8d4b7b0de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:31 compute-1 podman[255016]: 2025-12-06 07:19:31.46069013 +0000 UTC m=+0.176063616 container died 3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.444088) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11345531 bytes
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.463706) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.7 rd, 88.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.1 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 7589, records dropped: 523 output_compression: NoCompression
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.463778) EVENT_LOG_v1 {"time_micros": 1765005571463752, "job": 42, "event": "compaction_finished", "compaction_time_micros": 127753, "compaction_time_cpu_micros": 27920, "output_level": 6, "num_output_files": 1, "total_output_size": 11345531, "num_input_records": 7589, "num_output_records": 7066, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005571464757, "job": 42, "event": "table_file_deletion", "file_number": 71}
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005571466737, "job": 42, "event": "table_file_deletion", "file_number": 69}
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.315907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.466790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.466797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.466799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.466801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:31.466803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.485 226109 DEBUG nova.virt.libvirt.vif [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-1488925369',display_name='tempest-ServersTestBootFromVolume-server-1488925369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-1488925369',id=78,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTqnkSt2tkptxGb8j0FyIF7mhfC3g4nCTY0dmS1O9j2D4whvbbZzVjTaB1HV1FzYrZYaDqs4EAuECl1pd2rTCLOMkEdFzfpSZ2TAaIcLDXdfmlhSXmE1QqkQ96nlBQuBw==',key_name='tempest-keypair-1966242547',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6c6696bf390d4cc5bfd9852d5b264b5a',ramdisk_id='',reservation_id='r-tfb2kdsn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServersTestBootFromVolume-1735484763',owner_user_name='tempest-ServersTestBootFromVolume-1735484763-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4715802a0b24079b4ca157db07e1d75',uuid=5f718fcb-597c-47a7-b738-1cd8d4b7b0de,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "address": "fa:16:3e:2c:cd:e3", "network": {"id": "bf3441f7-9585-4902-b159-32b41a1d162a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-150114568-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c6696bf390d4cc5bfd9852d5b264b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc94f5eb3-41", "ovs_interfaceid": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.486 226109 DEBUG nova.network.os_vif_util [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Converting VIF {"id": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "address": "fa:16:3e:2c:cd:e3", "network": {"id": "bf3441f7-9585-4902-b159-32b41a1d162a", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-150114568-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c6696bf390d4cc5bfd9852d5b264b5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc94f5eb3-41", "ovs_interfaceid": "c94f5eb3-41d4-4e79-a071-3b5b801e2644", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.487 226109 DEBUG nova.network.os_vif_util [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:cd:e3,bridge_name='br-int',has_traffic_filtering=True,id=c94f5eb3-41d4-4e79-a071-3b5b801e2644,network=Network(bf3441f7-9585-4902-b159-32b41a1d162a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc94f5eb3-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.487 226109 DEBUG os_vif [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:cd:e3,bridge_name='br-int',has_traffic_filtering=True,id=c94f5eb3-41d4-4e79-a071-3b5b801e2644,network=Network(bf3441f7-9585-4902-b159-32b41a1d162a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc94f5eb3-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.490 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.490 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc94f5eb3-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.535 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.538 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.540 226109 INFO os_vif [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:cd:e3,bridge_name='br-int',has_traffic_filtering=True,id=c94f5eb3-41d4-4e79-a071-3b5b801e2644,network=Network(bf3441f7-9585-4902-b159-32b41a1d162a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc94f5eb3-41')
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.881 226109 DEBUG nova.compute.manager [req-6af2ed92-2532-46e7-892d-999e59b73cd9 req-1bee33cd-f0a4-44d7-8e44-0925742673cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Received event network-vif-unplugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.881 226109 DEBUG oslo_concurrency.lockutils [req-6af2ed92-2532-46e7-892d-999e59b73cd9 req-1bee33cd-f0a4-44d7-8e44-0925742673cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.881 226109 DEBUG oslo_concurrency.lockutils [req-6af2ed92-2532-46e7-892d-999e59b73cd9 req-1bee33cd-f0a4-44d7-8e44-0925742673cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.881 226109 DEBUG oslo_concurrency.lockutils [req-6af2ed92-2532-46e7-892d-999e59b73cd9 req-1bee33cd-f0a4-44d7-8e44-0925742673cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.882 226109 DEBUG nova.compute.manager [req-6af2ed92-2532-46e7-892d-999e59b73cd9 req-1bee33cd-f0a4-44d7-8e44-0925742673cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] No waiting events found dispatching network-vif-unplugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:31 compute-1 nova_compute[226101]: 2025-12-06 07:19:31.882 226109 DEBUG nova.compute.manager [req-6af2ed92-2532-46e7-892d-999e59b73cd9 req-1bee33cd-f0a4-44d7-8e44-0925742673cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Received event network-vif-unplugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:19:32 compute-1 ceph-mon[81689]: pgmap v1846: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.1 MiB/s wr, 217 op/s
Dec 06 07:19:32 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0-userdata-shm.mount: Deactivated successfully.
Dec 06 07:19:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-d62b387088f1674e95595d8a603eabb32a826970c7d31e583eda9fc59f0ba366-merged.mount: Deactivated successfully.
Dec 06 07:19:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:32 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:33.083 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:33 compute-1 podman[255016]: 2025-12-06 07:19:33.913871834 +0000 UTC m=+2.629245330 container cleanup 3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 07:19:33 compute-1 systemd[1]: libpod-conmon-3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0.scope: Deactivated successfully.
Dec 06 07:19:33 compute-1 ceph-mon[81689]: pgmap v1847: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.9 MiB/s wr, 281 op/s
Dec 06 07:19:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2868323730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:33.936101) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005573936141, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 294, "num_deletes": 259, "total_data_size": 120228, "memory_usage": 127264, "flush_reason": "Manual Compaction"}
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005573940644, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 79132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39309, "largest_seqno": 39598, "table_properties": {"data_size": 77209, "index_size": 151, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4876, "raw_average_key_size": 17, "raw_value_size": 73335, "raw_average_value_size": 263, "num_data_blocks": 7, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005571, "oldest_key_time": 1765005571, "file_creation_time": 1765005573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 4585 microseconds, and 843 cpu microseconds.
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:19:33 compute-1 nova_compute[226101]: 2025-12-06 07:19:33.977 226109 DEBUG nova.compute.manager [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Received event network-vif-plugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:33 compute-1 nova_compute[226101]: 2025-12-06 07:19:33.978 226109 DEBUG oslo_concurrency.lockutils [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:33 compute-1 nova_compute[226101]: 2025-12-06 07:19:33.978 226109 DEBUG oslo_concurrency.lockutils [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:33 compute-1 nova_compute[226101]: 2025-12-06 07:19:33.978 226109 DEBUG oslo_concurrency.lockutils [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:33 compute-1 nova_compute[226101]: 2025-12-06 07:19:33.978 226109 DEBUG nova.compute.manager [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] No waiting events found dispatching network-vif-plugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:33 compute-1 nova_compute[226101]: 2025-12-06 07:19:33.978 226109 WARNING nova.compute.manager [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Received unexpected event network-vif-plugged-c94f5eb3-41d4-4e79-a071-3b5b801e2644 for instance with vm_state active and task_state deleting.
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:33.940686) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 79132 bytes OK
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:33.940704) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:33.982203) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:33.982240) EVENT_LOG_v1 {"time_micros": 1765005573982232, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:33.982261) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 118019, prev total WAL file size 118019, number of live WAL files 2.
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:33.982672) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303038' seq:72057594037927935, type:22 .. '6C6F676D0031323633' seq:0, type:0; will stop at (end)
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(77KB)], [72(10MB)]
Dec 06 07:19:33 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005573982710, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 11424663, "oldest_snapshot_seqno": -1}
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6818 keys, 11287022 bytes, temperature: kUnknown
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005574385313, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 11287022, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11240044, "index_size": 28818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 176953, "raw_average_key_size": 25, "raw_value_size": 11116301, "raw_average_value_size": 1630, "num_data_blocks": 1143, "num_entries": 6818, "num_filter_entries": 6818, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765005573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:34.385581) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11287022 bytes
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:34.576650) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 28.4 rd, 28.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.8 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(287.0) write-amplify(142.6) OK, records in: 7344, records dropped: 526 output_compression: NoCompression
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:34.576691) EVENT_LOG_v1 {"time_micros": 1765005574576675, "job": 44, "event": "compaction_finished", "compaction_time_micros": 402679, "compaction_time_cpu_micros": 25944, "output_level": 6, "num_output_files": 1, "total_output_size": 11287022, "num_input_records": 7344, "num_output_records": 6818, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005574576895, "job": 44, "event": "table_file_deletion", "file_number": 74}
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005574579662, "job": 44, "event": "table_file_deletion", "file_number": 72}
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:33.982628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:34.579978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:34.579984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:34.579986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:34.579988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:19:34.579990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-1 nova_compute[226101]: 2025-12-06 07:19:34.595 226109 INFO nova.virt.libvirt.driver [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Deleting instance files /var/lib/nova/instances/5f718fcb-597c-47a7-b738-1cd8d4b7b0de_del
Dec 06 07:19:34 compute-1 nova_compute[226101]: 2025-12-06 07:19:34.596 226109 INFO nova.virt.libvirt.driver [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Deletion of /var/lib/nova/instances/5f718fcb-597c-47a7-b738-1cd8d4b7b0de_del complete
Dec 06 07:19:34 compute-1 nova_compute[226101]: 2025-12-06 07:19:34.639 226109 INFO nova.compute.manager [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Took 6.43 seconds to destroy the instance on the hypervisor.
Dec 06 07:19:34 compute-1 nova_compute[226101]: 2025-12-06 07:19:34.639 226109 DEBUG oslo.service.loopingcall [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:19:34 compute-1 nova_compute[226101]: 2025-12-06 07:19:34.639 226109 DEBUG nova.compute.manager [-] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:19:34 compute-1 nova_compute[226101]: 2025-12-06 07:19:34.639 226109 DEBUG nova.network.neutron [-] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:19:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:34 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:34.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:34.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:34 compute-1 podman[255074]: 2025-12-06 07:19:34.982298207 +0000 UTC m=+1.043412905 container remove 3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:19:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:34.989 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[48960496-d708-4bda-8e7e-24bd123d2de1]: (4, ('Sat Dec  6 07:19:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a (3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0)\n3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0\nSat Dec  6 07:19:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a (3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0)\n3d4744285f0764b8203df53a6ccc78abcd10f892016e8aff1e7ce78358e534d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:34.992 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fc54ef64-0069-4b53-a35e-6003b0288815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:34.992 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf3441f7-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:35 compute-1 kernel: tapbf3441f7-90: left promiscuous mode
Dec 06 07:19:35 compute-1 nova_compute[226101]: 2025-12-06 07:19:35.036 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:35 compute-1 nova_compute[226101]: 2025-12-06 07:19:35.049 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:35.051 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5ae0b6-2c69-4c01-a75b-d59907498ab4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:35.062 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e412a063-7b28-4a85-82e8-78349e9ca269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:35.064 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[91c9f6a0-7657-4788-8dce-3803e4324d2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:35.081 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b35f2d16-216e-420b-ab99-a821a9a1198e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582175, 'reachable_time': 31677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255088, 'error': None, 'target': 'ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:35 compute-1 systemd[1]: run-netns-ovnmeta\x2dbf3441f7\x2d9585\x2d4902\x2db159\x2d32b41a1d162a.mount: Deactivated successfully.
Dec 06 07:19:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:35.084 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf3441f7-9585-4902-b159-32b41a1d162a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:19:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:35.085 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2dd822-ad74-4d84-82c5-91538fbd7dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:35 compute-1 nova_compute[226101]: 2025-12-06 07:19:35.094 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1552608381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:35 compute-1 ceph-mon[81689]: pgmap v1848: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 108 op/s
Dec 06 07:19:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:36 compute-1 nova_compute[226101]: 2025-12-06 07:19:36.535 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:36 compute-1 nova_compute[226101]: 2025-12-06 07:19:36.697 226109 DEBUG nova.network.neutron [-] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:36.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:36 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:36.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:36 compute-1 nova_compute[226101]: 2025-12-06 07:19:36.713 226109 INFO nova.compute.manager [-] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Took 2.07 seconds to deallocate network for instance.
Dec 06 07:19:36 compute-1 nova_compute[226101]: 2025-12-06 07:19:36.979 226109 INFO nova.compute.manager [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Took 0.26 seconds to detach 1 volumes for instance.
Dec 06 07:19:36 compute-1 nova_compute[226101]: 2025-12-06 07:19:36.980 226109 DEBUG nova.compute.manager [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Deleting volume: b6c10720-e1c1-4fdc-b2cd-94ad0567f0a4 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Dec 06 07:19:37 compute-1 nova_compute[226101]: 2025-12-06 07:19:37.138 226109 DEBUG oslo_concurrency.lockutils [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:37 compute-1 nova_compute[226101]: 2025-12-06 07:19:37.139 226109 DEBUG oslo_concurrency.lockutils [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:37 compute-1 nova_compute[226101]: 2025-12-06 07:19:37.250 226109 DEBUG oslo_concurrency.processutils [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:19:37 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4046454772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:37 compute-1 nova_compute[226101]: 2025-12-06 07:19:37.761 226109 DEBUG oslo_concurrency.processutils [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:37 compute-1 nova_compute[226101]: 2025-12-06 07:19:37.767 226109 DEBUG nova.compute.provider_tree [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:19:37 compute-1 nova_compute[226101]: 2025-12-06 07:19:37.788 226109 DEBUG nova.scheduler.client.report [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:19:37 compute-1 nova_compute[226101]: 2025-12-06 07:19:37.806 226109 DEBUG oslo_concurrency.lockutils [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:37 compute-1 nova_compute[226101]: 2025-12-06 07:19:37.834 226109 INFO nova.scheduler.client.report [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Deleted allocations for instance 5f718fcb-597c-47a7-b738-1cd8d4b7b0de
Dec 06 07:19:37 compute-1 nova_compute[226101]: 2025-12-06 07:19:37.916 226109 DEBUG oslo_concurrency.lockutils [None req-2af7dd1b-aa53-4a58-9bb3-3029ea1bbe8c f4715802a0b24079b4ca157db07e1d75 6c6696bf390d4cc5bfd9852d5b264b5a - - default default] Lock "5f718fcb-597c-47a7-b738-1cd8d4b7b0de" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:38 compute-1 nova_compute[226101]: 2025-12-06 07:19:38.409 226109 DEBUG nova.compute.manager [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Received event network-vif-deleted-c94f5eb3-41d4-4e79-a071-3b5b801e2644 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:38.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:38.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:40 compute-1 nova_compute[226101]: 2025-12-06 07:19:40.098 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:40 compute-1 ceph-mon[81689]: pgmap v1849: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 113 op/s
Dec 06 07:19:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:40.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:40.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:41 compute-1 nova_compute[226101]: 2025-12-06 07:19:41.536 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4046454772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:41 compute-1 ceph-mon[81689]: pgmap v1850: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Dec 06 07:19:41 compute-1 ceph-mon[81689]: pgmap v1851: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Dec 06 07:19:41 compute-1 nova_compute[226101]: 2025-12-06 07:19:41.813 226109 DEBUG nova.compute.manager [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Dec 06 07:19:41 compute-1 nova_compute[226101]: 2025-12-06 07:19:41.883 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:41 compute-1 nova_compute[226101]: 2025-12-06 07:19:41.884 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:41 compute-1 nova_compute[226101]: 2025-12-06 07:19:41.907 226109 DEBUG nova.objects.instance [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_requests' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:41 compute-1 nova_compute[226101]: 2025-12-06 07:19:41.924 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:19:41 compute-1 nova_compute[226101]: 2025-12-06 07:19:41.925 226109 INFO nova.compute.claims [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:19:41 compute-1 nova_compute[226101]: 2025-12-06 07:19:41.926 226109 DEBUG nova.objects.instance [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:41 compute-1 nova_compute[226101]: 2025-12-06 07:19:41.938 226109 DEBUG nova.objects.instance [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:41 compute-1 nova_compute[226101]: 2025-12-06 07:19:41.977 226109 INFO nova.compute.resource_tracker [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating resource usage from migration aedcdaea-1330-4252-b5a0-89a67cbe90a1
Dec 06 07:19:42 compute-1 nova_compute[226101]: 2025-12-06 07:19:42.056 226109 DEBUG oslo_concurrency.processutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:19:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2456623870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:42 compute-1 nova_compute[226101]: 2025-12-06 07:19:42.512 226109 DEBUG oslo_concurrency.processutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:42 compute-1 nova_compute[226101]: 2025-12-06 07:19:42.518 226109 DEBUG nova.compute.provider_tree [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:19:42 compute-1 nova_compute[226101]: 2025-12-06 07:19:42.534 226109 DEBUG nova.scheduler.client.report [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:19:42 compute-1 nova_compute[226101]: 2025-12-06 07:19:42.555 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:42 compute-1 nova_compute[226101]: 2025-12-06 07:19:42.555 226109 INFO nova.compute.manager [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Migrating
Dec 06 07:19:42 compute-1 nova_compute[226101]: 2025-12-06 07:19:42.589 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:42 compute-1 nova_compute[226101]: 2025-12-06 07:19:42.590 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:42 compute-1 nova_compute[226101]: 2025-12-06 07:19:42.590 226109 DEBUG nova.network.neutron [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:19:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:42.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:42.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:43 compute-1 ovn_controller[130279]: 2025-12-06T07:19:43Z|00307|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec 06 07:19:43 compute-1 nova_compute[226101]: 2025-12-06 07:19:43.226 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:44 compute-1 nova_compute[226101]: 2025-12-06 07:19:44.183 226109 DEBUG nova.network.neutron [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating instance_info_cache with network_info: [{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:44 compute-1 nova_compute[226101]: 2025-12-06 07:19:44.210 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:44 compute-1 nova_compute[226101]: 2025-12-06 07:19:44.297 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 07:19:44 compute-1 nova_compute[226101]: 2025-12-06 07:19:44.302 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:19:44 compute-1 ceph-mon[81689]: pgmap v1852: 305 pgs: 305 active+clean; 394 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 177 op/s
Dec 06 07:19:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2456623870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:44.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:19:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:44.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:19:45 compute-1 nova_compute[226101]: 2025-12-06 07:19:45.100 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:46 compute-1 nova_compute[226101]: 2025-12-06 07:19:46.451 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005571.4497824, 5f718fcb-597c-47a7-b738-1cd8d4b7b0de => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:46 compute-1 nova_compute[226101]: 2025-12-06 07:19:46.451 226109 INFO nova.compute.manager [-] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] VM Stopped (Lifecycle Event)
Dec 06 07:19:46 compute-1 nova_compute[226101]: 2025-12-06 07:19:46.472 226109 DEBUG nova.compute.manager [None req-c532ddf2-6e20-4e8a-a76c-6a18f24adafa - - - - - -] [instance: 5f718fcb-597c-47a7-b738-1cd8d4b7b0de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:46 compute-1 nova_compute[226101]: 2025-12-06 07:19:46.585 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:46.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:47 compute-1 ceph-mon[81689]: pgmap v1853: 305 pgs: 305 active+clean; 394 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Dec 06 07:19:47 compute-1 kernel: tapb26720ee-8d (unregistering): left promiscuous mode
Dec 06 07:19:47 compute-1 NetworkManager[49031]: <info>  [1765005587.0611] device (tapb26720ee-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:19:47 compute-1 ovn_controller[130279]: 2025-12-06T07:19:47Z|00308|binding|INFO|Releasing lport b26720ee-8d01-41bb-ad74-31e572887a36 from this chassis (sb_readonly=0)
Dec 06 07:19:47 compute-1 ovn_controller[130279]: 2025-12-06T07:19:47Z|00309|binding|INFO|Setting lport b26720ee-8d01-41bb-ad74-31e572887a36 down in Southbound
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.068 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:47 compute-1 ovn_controller[130279]: 2025-12-06T07:19:47Z|00310|binding|INFO|Removing iface tapb26720ee-8d ovn-installed in OVS
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.070 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.076 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:d1:1e 10.100.0.14'], port_security=['fa:16:3e:a2:d1:1e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '67f0eb12-5070-4bc1-846b-09eb25dec88d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b26720ee-8d01-41bb-ad74-31e572887a36) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.077 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b26720ee-8d01-41bb-ad74-31e572887a36 in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.078 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.079 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4152c5de-94a0-4cfe-8a9b-a3ef457bc961]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.080 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.086 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:47 compute-1 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Dec 06 07:19:47 compute-1 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004d.scope: Consumed 15.887s CPU time.
Dec 06 07:19:47 compute-1 systemd-machined[190302]: Machine qemu-35-instance-0000004d terminated.
Dec 06 07:19:47 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[254753]: [NOTICE]   (254757) : haproxy version is 2.8.14-c23fe91
Dec 06 07:19:47 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[254753]: [NOTICE]   (254757) : path to executable is /usr/sbin/haproxy
Dec 06 07:19:47 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[254753]: [WARNING]  (254757) : Exiting Master process...
Dec 06 07:19:47 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[254753]: [WARNING]  (254757) : Exiting Master process...
Dec 06 07:19:47 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[254753]: [ALERT]    (254757) : Current worker (254759) exited with code 143 (Terminated)
Dec 06 07:19:47 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[254753]: [WARNING]  (254757) : All workers exited. Exiting... (0)
Dec 06 07:19:47 compute-1 systemd[1]: libpod-0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0.scope: Deactivated successfully.
Dec 06 07:19:47 compute-1 podman[255160]: 2025-12-06 07:19:47.205772646 +0000 UTC m=+0.045572837 container died 0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:19:47 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0-userdata-shm.mount: Deactivated successfully.
Dec 06 07:19:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-d60e30451ab3f04f531d424d68a10dd34e2e8c47084f8a7f9aa9183b245c7bdb-merged.mount: Deactivated successfully.
Dec 06 07:19:47 compute-1 podman[255160]: 2025-12-06 07:19:47.245947905 +0000 UTC m=+0.085748096 container cleanup 0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:19:47 compute-1 systemd[1]: libpod-conmon-0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0.scope: Deactivated successfully.
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.310 226109 DEBUG nova.compute.manager [req-bff0973c-df47-4771-8851-e953cf0c6768 req-d9b6414e-5934-471c-bb68-63a52bd6b8b9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-unplugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.310 226109 DEBUG oslo_concurrency.lockutils [req-bff0973c-df47-4771-8851-e953cf0c6768 req-d9b6414e-5934-471c-bb68-63a52bd6b8b9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.311 226109 DEBUG oslo_concurrency.lockutils [req-bff0973c-df47-4771-8851-e953cf0c6768 req-d9b6414e-5934-471c-bb68-63a52bd6b8b9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.311 226109 DEBUG oslo_concurrency.lockutils [req-bff0973c-df47-4771-8851-e953cf0c6768 req-d9b6414e-5934-471c-bb68-63a52bd6b8b9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.311 226109 DEBUG nova.compute.manager [req-bff0973c-df47-4771-8851-e953cf0c6768 req-d9b6414e-5934-471c-bb68-63a52bd6b8b9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] No waiting events found dispatching network-vif-unplugged-b26720ee-8d01-41bb-ad74-31e572887a36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.311 226109 WARNING nova.compute.manager [req-bff0973c-df47-4771-8851-e953cf0c6768 req-d9b6414e-5934-471c-bb68-63a52bd6b8b9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received unexpected event network-vif-unplugged-b26720ee-8d01-41bb-ad74-31e572887a36 for instance with vm_state active and task_state resize_migrating.
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.321 226109 INFO nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance shutdown successfully after 3 seconds.
Dec 06 07:19:47 compute-1 podman[255191]: 2025-12-06 07:19:47.323122569 +0000 UTC m=+0.057759288 container remove 0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.326 226109 INFO nova.virt.libvirt.driver [-] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance destroyed successfully.
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.327 226109 DEBUG nova.virt.libvirt.vif [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1140474602',display_name='tempest-ServerActionsTestJSON-server-1140474602',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1140474602',id=77,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-hcjpqf2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=67f0eb12-5070-4bc1-846b-09eb25dec88d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:a2:d1:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.327 226109 DEBUG nova.network.os_vif_util [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:a2:d1:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.327 226109 DEBUG nova.network.os_vif_util [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.328 226109 DEBUG os_vif [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.328 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7bd348-7318-47e1-a5d2-514cc53d6dfc]: (4, ('Sat Dec  6 07:19:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0)\n0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0\nSat Dec  6 07:19:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0)\n0b1dc157ce30e54c303540aa67254f05b4c66a64ce547bc1a24a7cf64522c5f0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.329 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.329 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb26720ee-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.330 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[28f70346-1d77-4529-b2cc-c939fd70eca5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.331 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.331 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
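The two DelPortCommand transactions above (nova_compute removing the instance's tap port from br-int, and the metadata agent removing its own tap4d599401-30) go through ovsdbapp's Open_vSwitch-schema API. A minimal sketch of the same delete, assuming python3-ovsdbapp and a local OVSDB socket (the socket path here is an assumption):

    # Issue the same DelPortCommand that the transaction log shows.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True matches the log: deleting an absent port is a no-op.
    api.del_port('tapb26720ee-8d', bridge='br-int',
                 if_exists=True).execute(check_error=True)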
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.332 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.334 226109 INFO os_vif [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d')
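Before that transaction runs, nova converts its own VIF dict into an os-vif object (the nova_to_osvif_vif lines) and hands it to os_vif.unplug(), which selects the 'ovs' plugin and removes the port. A condensed sketch of that call path, assuming the os-vif library is installed; the object fields are trimmed to the ones the log prints:

    # Replay the unplug path: build the os-vif objects, then unplug.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()   # loads the ovs plugin via stevedore

    inst = instance_info.InstanceInfo(
        uuid='67f0eb12-5070-4bc1-846b-09eb25dec88d',
        name='instance-0000004d')

    v = vif.VIFOpenVSwitch(
        id='b26720ee-8d01-41bb-ad74-31e572887a36',
        address='fa:16:3e:a2:d1:1e',
        vif_name='tapb26720ee-8d',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='4d599401-3772-4e38-8cd2-d774d370af64'))

    os_vif.unplug(v, inst)   # ends in the DelPortCommand seen above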
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.338 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.338 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:19:47 compute-1 kernel: tap4d599401-30: left promiscuous mode
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.346 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.349 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4e741547-cf2b-4b0d-b6dd-20198568fca4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.377 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[20f66975-eb1c-4c51-8c46-d80a0fb98949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.379 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6afda956-4a55-4706-85df-88c360098dcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.394 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b509c088-b06e-48e0-bd68-7eaac6b4c29e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582083, 'reachable_time': 36667, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255217, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:47 compute-1 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.397 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:19:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:47.397 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[486974f0-9c0b-4ff8-bb3b-a3454e0375cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
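With the haproxy metadata-proxy container stopped and the tap port gone, the agent tears down the per-network namespace. The systemd line is the /run/netns bind mount going away (the \x2d runs are systemd's escaping of '-' in the unit name), and remove_netns in neutron's privileged ip_lib is a thin wrapper over pyroute2. A rough equivalent, run as root:

    # Roughly what the privileged remove_netns call does (neutron runs
    # this behind oslo.privsep with the needed capabilities).
    import errno
    from pyroute2 import netns

    NS = 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64'
    try:
        netns.remove(NS)                    # unlinks /run/netns/<NS>
    except OSError as exc:
        if exc.errno != errno.ENOENT:       # already-deleted is fine
            raise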
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.732 226109 DEBUG nova.network.neutron [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Port b26720ee-8d01-41bb-ad74-31e572887a36 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.834 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.835 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:47 compute-1 nova_compute[226101]: 2025-12-06 07:19:47.835 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
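The acquire/release pairs around "67f0eb12-...-events" are oslo.concurrency's named in-process locks, which nova uses to serialize its per-instance event bookkeeping. The same pattern, reduced to a sketch (the pending_events dict is illustrative, not nova's structure):

    # Named lock guarding per-instance event state, as in the log above.
    from oslo_concurrency import lockutils

    pending_events = {}   # illustrative: uuid -> set of awaited event names
    uuid = '67f0eb12-5070-4bc1-846b-09eb25dec88d'

    with lockutils.lock(f'{uuid}-events'):
        pending_events.pop(uuid, None)   # clear_events_for_instance analogue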
Dec 06 07:19:48 compute-1 ceph-mon[81689]: pgmap v1854: 305 pgs: 305 active+clean; 394 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Dec 06 07:19:48 compute-1 ceph-mon[81689]: pgmap v1855: 305 pgs: 305 active+clean; 402 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Dec 06 07:19:48 compute-1 nova_compute[226101]: 2025-12-06 07:19:48.705 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:48 compute-1 nova_compute[226101]: 2025-12-06 07:19:48.706 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:48 compute-1 nova_compute[226101]: 2025-12-06 07:19:48.706 226109 DEBUG nova.network.neutron [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:19:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:48.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:48.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:49 compute-1 nova_compute[226101]: 2025-12-06 07:19:49.419 226109 DEBUG nova.compute.manager [req-6dab7c7f-df41-4b3b-8bf2-ddabf9ae2172 req-7e6f7056-c502-4f91-b6b5-aa4545993f53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:49 compute-1 nova_compute[226101]: 2025-12-06 07:19:49.420 226109 DEBUG oslo_concurrency.lockutils [req-6dab7c7f-df41-4b3b-8bf2-ddabf9ae2172 req-7e6f7056-c502-4f91-b6b5-aa4545993f53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:49 compute-1 nova_compute[226101]: 2025-12-06 07:19:49.420 226109 DEBUG oslo_concurrency.lockutils [req-6dab7c7f-df41-4b3b-8bf2-ddabf9ae2172 req-7e6f7056-c502-4f91-b6b5-aa4545993f53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:49 compute-1 nova_compute[226101]: 2025-12-06 07:19:49.420 226109 DEBUG oslo_concurrency.lockutils [req-6dab7c7f-df41-4b3b-8bf2-ddabf9ae2172 req-7e6f7056-c502-4f91-b6b5-aa4545993f53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:49 compute-1 nova_compute[226101]: 2025-12-06 07:19:49.421 226109 DEBUG nova.compute.manager [req-6dab7c7f-df41-4b3b-8bf2-ddabf9ae2172 req-7e6f7056-c502-4f91-b6b5-aa4545993f53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] No waiting events found dispatching network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:49 compute-1 nova_compute[226101]: 2025-12-06 07:19:49.421 226109 WARNING nova.compute.manager [req-6dab7c7f-df41-4b3b-8bf2-ddabf9ae2172 req-7e6f7056-c502-4f91-b6b5-aa4545993f53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received unexpected event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 for instance with vm_state active and task_state resize_migrated.
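The WARNING is benign here: neutron sent network-vif-plugged once the port binding went ACTIVE again, but this resize path had no registered waiter, so pop_instance_event finds nothing to wake. The mechanism is essentially a latch keyed by event name and tag; a toy version with threading.Event (the names below are illustrative, not nova's API):

    # Toy latch showing why "No waiting events" produces the WARNING.
    import threading

    waiters = {}   # (event_name, tag) -> threading.Event

    def prepare(name, tag):
        waiters[(name, tag)] = threading.Event()

    def pop(name, tag):
        ev = waiters.pop((name, tag), None)
        if ev is None:
            print(f'unexpected event {name}-{tag}')   # the WARNING case
        else:
            ev.set()   # wakes whoever blocked on ev.wait()

    pop('network-vif-plugged', 'b26720ee-8d01-41bb-ad74-31e572887a36')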
Dec 06 07:19:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1104714191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:19:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1104714191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:19:50 compute-1 nova_compute[226101]: 2025-12-06 07:19:50.102 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:50 compute-1 nova_compute[226101]: 2025-12-06 07:19:50.242 226109 DEBUG nova.network.neutron [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating instance_info_cache with network_info: [{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:50 compute-1 nova_compute[226101]: 2025-12-06 07:19:50.265 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:50 compute-1 nova_compute[226101]: 2025-12-06 07:19:50.376 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Dec 06 07:19:50 compute-1 nova_compute[226101]: 2025-12-06 07:19:50.378 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:19:50 compute-1 nova_compute[226101]: 2025-12-06 07:19:50.378 226109 INFO nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Creating image(s)
Dec 06 07:19:50 compute-1 nova_compute[226101]: 2025-12-06 07:19:50.415 226109 DEBUG nova.storage.rbd_utils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] creating snapshot(nova-resize) on rbd image(67f0eb12-5070-4bc1-846b-09eb25dec88d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
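The disk is RBD-backed, so "Creating image(s)" amounts to snapshotting the existing volume: create_snap takes the point-in-time 'nova-resize' snapshot the resize can roll back to. Stripped of nova's wrappers this is the librbd binding; the 'vms' pool name and client id below are assumptions based on a typical RBD-backed nova:

    # Take the same 'nova-resize' snapshot with the raw rbd/rados bindings.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')   # assumed nova disk pool
        try:
            with rbd.Image(ioctx,
                           '67f0eb12-5070-4bc1-846b-09eb25dec88d_disk') as img:
                img.create_snap('nova-resize')   # snapshot named in the log
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()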
Dec 06 07:19:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:50.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:19:50 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:50.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:19:51 compute-1 nova_compute[226101]: 2025-12-06 07:19:51.119 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e256 e256: 3 total, 3 up, 3 in
Dec 06 07:19:51 compute-1 ceph-mon[81689]: pgmap v1856: 305 pgs: 305 active+clean; 402 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Dec 06 07:19:52 compute-1 nova_compute[226101]: 2025-12-06 07:19:52.332 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:19:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:52.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:19:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:52.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:53 compute-1 podman[255254]: 2025-12-06 07:19:53.139613274 +0000 UTC m=+0.125122766 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
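The health_status=healthy event above is podman's scheduled healthcheck for the ovn_controller container, driven by the 'test': '/openstack/healthcheck' entry in the embedded config; the same check can be forced by hand with "podman healthcheck run ovn_controller", which exits nonzero when the check fails.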
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.174 226109 DEBUG nova.objects.instance [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'trusted_certs' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.282 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.282 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Ensure instance console log exists: /var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.283 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.283 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.283 226109 DEBUG oslo_concurrency.lockutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.285 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Start _get_guest_xml network_info=[{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:a2:d1:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.288 226109 WARNING nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.295 226109 DEBUG nova.virt.libvirt.host [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.295 226109 DEBUG nova.virt.libvirt.host [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.298 226109 DEBUG nova.virt.libvirt.host [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.298 226109 DEBUG nova.virt.libvirt.host [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.299 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.299 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.300 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.300 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.300 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.300 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.301 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.301 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.301 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.301 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.301 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.302 226109 DEBUG nova.virt.hardware [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
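The nova.virt.hardware trace above is the topology search with no constraints: flavor and image limits and preferences are all unset (0:0:0), the effective maxima default to 65536, and for a single vCPU the only sockets*cores*threads factorization is 1:1:1. A toy re-implementation of that enumeration (not nova's code, but the same arithmetic):

    # Enumerate (sockets, cores, threads) triples whose product is vcpus.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topologies = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            rest = vcpus // s
            for c in range(1, min(rest, max_cores) + 1):
                if rest % c:
                    continue
                t = rest // c
                if t <= max_threads:
                    topologies.append((s, c, t))
        return topologies

    print(possible_topologies(1))   # [(1, 1, 1)] -- matches the log
    print(possible_topologies(4))   # six triples: (1,1,4), (1,2,2), (1,4,1),
                                    # (2,1,2), (2,2,1), (4,1,1)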
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.302 226109 DEBUG nova.objects.instance [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'vcpu_model' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:53 compute-1 nova_compute[226101]: 2025-12-06 07:19:53.321 226109 DEBUG oslo_concurrency.processutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:53 compute-1 ceph-mon[81689]: osdmap e256: 3 total, 3 up, 3 in
Dec 06 07:19:53 compute-1 ceph-mon[81689]: pgmap v1858: 305 pgs: 305 active+clean; 279 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 410 KiB/s rd, 250 KiB/s wr, 94 op/s
Dec 06 07:19:53 compute-1 sudo[255317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:53 compute-1 sudo[255317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:53 compute-1 sudo[255317]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:53 compute-1 sudo[255343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:19:53 compute-1 sudo[255343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:53 compute-1 sudo[255343]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:53 compute-1 sudo[255368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:53 compute-1 sudo[255368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:53 compute-1 sudo[255368]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:53 compute-1 sudo[255412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 07:19:53 compute-1 sudo[255412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:53 compute-1 sudo[255412]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:19:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/46416263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:54 compute-1 nova_compute[226101]: 2025-12-06 07:19:54.252 226109 DEBUG oslo_concurrency.processutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.931s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
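The near-one-second subprocess is nova's rbd driver shelling out to list the monitor addresses it will embed in the guest's network-disk XML; oslo.concurrency's processutils is a thin subprocess wrapper that raises on a nonzero exit code. A minimal equivalent:

    # Same command as the log line, via oslo.concurrency.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    print(mon_map['epoch'], [m['name'] for m in mon_map['mons']])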
Dec 06 07:19:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:54 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:54.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:54.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:54 compute-1 nova_compute[226101]: 2025-12-06 07:19:54.757 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:54 compute-1 nova_compute[226101]: 2025-12-06 07:19:54.762 226109 DEBUG oslo_concurrency.processutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:54 compute-1 sudo[255479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:54 compute-1 sudo[255479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:54 compute-1 sudo[255479]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/46416263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:54 compute-1 ceph-mon[81689]: pgmap v1859: 305 pgs: 305 active+clean; 279 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 410 KiB/s rd, 250 KiB/s wr, 94 op/s
Dec 06 07:19:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4290631668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:19:54 compute-1 sudo[255514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:19:54 compute-1 sudo[255514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:54 compute-1 sudo[255514]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:55 compute-1 sudo[255548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:55 compute-1 sudo[255548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:55 compute-1 sudo[255548]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:55 compute-1 sudo[255573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:19:55 compute-1 sudo[255573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.105 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:19:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2403216222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.272 226109 DEBUG oslo_concurrency.processutils [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.274 226109 DEBUG nova.virt.libvirt.vif [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1140474602',display_name='tempest-ServerActionsTestJSON-server-1140474602',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1140474602',id=77,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-hcjpqf2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:19:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=67f0eb12-5070-4bc1-846b-09eb25dec88d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:a2:d1:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.274 226109 DEBUG nova.network.os_vif_util [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:a2:d1:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.275 226109 DEBUG nova.network.os_vif_util [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.278 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:19:55 compute-1 nova_compute[226101]:   <uuid>67f0eb12-5070-4bc1-846b-09eb25dec88d</uuid>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   <name>instance-0000004d</name>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   <memory>196608</memory>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerActionsTestJSON-server-1140474602</nova:name>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:19:53</nova:creationTime>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <nova:flavor name="m1.micro">
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <nova:memory>192</nova:memory>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <nova:port uuid="b26720ee-8d01-41bb-ad74-31e572887a36">
Dec 06 07:19:55 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <system>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <entry name="serial">67f0eb12-5070-4bc1-846b-09eb25dec88d</entry>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <entry name="uuid">67f0eb12-5070-4bc1-846b-09eb25dec88d</entry>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     </system>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   <os>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   </os>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   <features>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   </features>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/67f0eb12-5070-4bc1-846b-09eb25dec88d_disk">
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       </source>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/67f0eb12-5070-4bc1-846b-09eb25dec88d_disk.config">
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       </source>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:19:55 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:a2:d1:1e"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <target dev="tapb26720ee-8d"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d/console.log" append="off"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <video>
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     </video>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:19:55 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:19:55 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:19:55 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:19:55 compute-1 nova_compute[226101]: </domain>
Dec 06 07:19:55 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
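
The dumped domain is consistent with the flavor and port data above: <memory>196608</memory> is in KiB (192 MiB, matching m1.micro), both the root disk and the config-drive CD-ROM are served over RBD from the three Ceph monitors, and the virtio NIC carries the tunnel-adjusted MTU of 1442 from the network's "tunneled": true metadata. A short standard-library parsing sketch, with the XML trimmed to just the fields it reads:

    # Sketch: extracting storage and network facts from a domain XML like the
    # one logged above (the string below is an abridged copy of that dump).
    import xml.etree.ElementTree as ET

    DOMAIN_XML = """
    <domain type="kvm">
      <memory>196608</memory>
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/67f0eb12-5070-4bc1-846b-09eb25dec88d_disk">
            <host name="192.168.122.100" port="6789"/>
            <host name="192.168.122.102" port="6789"/>
            <host name="192.168.122.101" port="6789"/>
          </source>
        </disk>
        <interface type="ethernet">
          <mtu size="1442"/>
          <target dev="tapb26720ee-8d"/>
        </interface>
      </devices>
    </domain>
    """

    root = ET.fromstring(DOMAIN_XML)
    print(int(root.findtext('memory')) // 1024, 'MiB')  # libvirt stores KiB
    for disk in root.iter('disk'):
        src = disk.find('source')
        print(src.get('protocol'), src.get('name'),
              [h.get('name') for h in src.findall('host')])
    for iface in root.iter('interface'):
        print(iface.find('target').get('dev'),
              'mtu', iface.find('mtu').get('size'))
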
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.279 226109 DEBUG nova.virt.libvirt.vif [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1140474602',display_name='tempest-ServerActionsTestJSON-server-1140474602',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1140474602',id=77,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-hcjpqf2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:19:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=67f0eb12-5070-4bc1-846b-09eb25dec88d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:a2:d1:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.280 226109 DEBUG nova.network.os_vif_util [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:a2:d1:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.280 226109 DEBUG nova.network.os_vif_util [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.281 226109 DEBUG os_vif [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.281 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.281 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.282 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.284 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.285 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb26720ee-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.285 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb26720ee-8d, col_values=(('external_ids', {'iface-id': 'b26720ee-8d01-41bb-ad74-31e572887a36', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:d1:1e', 'vm-uuid': '67f0eb12-5070-4bc1-846b-09eb25dec88d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.287 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:55 compute-1 NetworkManager[49031]: <info>  [1765005595.2885] manager: (tapb26720ee-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.290 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.294 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.295 226109 INFO os_vif [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d')
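
The plug above issued two OVSDB transactions: an idempotent AddBridgeCommand (hence "Transaction caused no change") and an AddPortCommand plus DbSetCommand that attach tapb26720ee-8d to br-int and stamp the Interface row with the iface-id that OVN matches the logical port on. os-vif talks OVSDB natively, as the vlog lines show; the ovs-vsctl equivalent of the same idempotent sequence would be roughly:

    # Sketch: CLI form of the two transactions logged above (not what os-vif
    # literally executes; shown for operators reproducing the state by hand).
    import subprocess

    PORT = 'tapb26720ee-8d'
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-br', 'br-int',
                    '--', 'set', 'Bridge', 'br-int', 'datapath_type=system'],
                   check=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', PORT,
                    '--', 'set', 'Interface', PORT,
                    'external_ids:iface-id=b26720ee-8d01-41bb-ad74-31e572887a36',
                    'external_ids:iface-status=active',
                    'external_ids:attached-mac="fa:16:3e:a2:d1:1e"',
                    'external_ids:vm-uuid=67f0eb12-5070-4bc1-846b-09eb25dec88d'],
                   check=True)
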
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.409 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.409 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.410 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No VIF found with MAC fa:16:3e:a2:d1:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.410 226109 INFO nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Using config drive
Dec 06 07:19:55 compute-1 kernel: tapb26720ee-8d: entered promiscuous mode
Dec 06 07:19:55 compute-1 NetworkManager[49031]: <info>  [1765005595.5006] manager: (tapb26720ee-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Dec 06 07:19:55 compute-1 ovn_controller[130279]: 2025-12-06T07:19:55Z|00311|binding|INFO|Claiming lport b26720ee-8d01-41bb-ad74-31e572887a36 for this chassis.
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.527 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:55 compute-1 ovn_controller[130279]: 2025-12-06T07:19:55Z|00312|binding|INFO|b26720ee-8d01-41bb-ad74-31e572887a36: Claiming fa:16:3e:a2:d1:1e 10.100.0.14
Dec 06 07:19:55 compute-1 sudo[255573]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.539 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:d1:1e 10.100.0.14'], port_security=['fa:16:3e:a2:d1:1e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '67f0eb12-5070-4bc1-846b-09eb25dec88d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '5', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b26720ee-8d01-41bb-ad74-31e572887a36) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.540 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b26720ee-8d01-41bb-ad74-31e572887a36 in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.541 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:19:55 compute-1 ovn_controller[130279]: 2025-12-06T07:19:55Z|00313|binding|INFO|Setting lport b26720ee-8d01-41bb-ad74-31e572887a36 ovn-installed in OVS
Dec 06 07:19:55 compute-1 ovn_controller[130279]: 2025-12-06T07:19:55Z|00314|binding|INFO|Setting lport b26720ee-8d01-41bb-ad74-31e572887a36 up in Southbound
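
ovn-controller claiming the lport, setting ovn-installed, and flipping it up in the Southbound DB is the sequence that ultimately drives the Neutron port to ACTIVE. A quick way to confirm the binding from the SB side (standard ovn-sbctl query; how you reach the SB database, local socket or a --db= remote, depends on the deployment):

    # Sketch: verify the chassis binding for the logical port from the log.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=b26720ee-8d01-41bb-ad74-31e572887a36'],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # expect chassis set to compute-1's chassis and up=[true]
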
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.546 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.552 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d52ec1da-a8da-4e3c-bd75-5958e840a6f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.552 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
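
Here the agent builds the per-network metadata plumbing: a veth pair with tap4d599401-30 staying on the host (added to br-int a few lines below) and tap4d599401-31 moved into the ovnmeta-4d599401-... namespace, as the netlink dumps that follow confirm. Roughly the ip(8) equivalent of what it does through privsep, as a sketch rather than the agent's actual code path:

    # Sketch: hand-built version of the namespace/veth provisioning above
    # (interface and namespace names copied from the log).
    import subprocess

    NS = 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64'
    for cmd in (
        ['ip', 'netns', 'add', NS],
        ['ip', 'link', 'add', 'tap4d599401-30', 'type', 'veth',
         'peer', 'name', 'tap4d599401-31'],
        ['ip', 'link', 'set', 'tap4d599401-31', 'netns', NS],
        ['ip', 'link', 'set', 'tap4d599401-30', 'up'],
        ['ip', '-n', NS, 'link', 'set', 'tap4d599401-31', 'up'],
    ):
        subprocess.run(cmd, check=True)
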
Dec 06 07:19:55 compute-1 systemd-machined[190302]: New machine qemu-37-instance-0000004d.
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.554 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.554 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[01fb143b-23f6-46f6-998b-60fe195c538f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.555 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e95400c5-54e7-4310-a6a4-9b82177d44f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.565 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ba44a34a-bd0c-4adc-a0c0-a6698fbb96f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 systemd[1]: Started Virtual Machine qemu-37-instance-0000004d.
Dec 06 07:19:55 compute-1 systemd-udevd[255667]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.590 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2cc028-0e22-4177-b2c9-e8ea2b2801fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 NetworkManager[49031]: <info>  [1765005595.5948] device (tapb26720ee-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:19:55 compute-1 NetworkManager[49031]: <info>  [1765005595.5956] device (tapb26720ee-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.618 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e59c3fa6-4aec-42f7-a3ca-fdd07da17a81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.623 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d8099f-616c-4250-aa8d-1c11ef91d0b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 NetworkManager[49031]: <info>  [1765005595.6250] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.657 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d8b1fa-3675-4441-9a67-05ac19120d58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.660 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[0c406656-f5fb-4738-b79b-8abce19c68c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 NetworkManager[49031]: <info>  [1765005595.6866] device (tap4d599401-30): carrier: link connected
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.691 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[afbef06e-eeec-4b9e-8f01-6df535b91610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.708 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bb09ea62-1d08-45ed-9165-4d8f7d9d3c86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587519, 'reachable_time': 36707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255697, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.723 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[928cb45c-9b93-4ac8-9e24-f82a56977575]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587519, 'tstamp': 587519}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255698, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.741 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c491b56e-549a-4b58-9614-7a709c0144ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587519, 'reachable_time': 36707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255699, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.769 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e98d82a6-d52e-47e6-b457-083a7760ae1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.819 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[27474bec-6098-4642-9a74-4e8c3050df3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.820 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.820 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.821 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:55 compute-1 kernel: tap4d599401-30: entered promiscuous mode
Dec 06 07:19:55 compute-1 NetworkManager[49031]: <info>  [1765005595.8232] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.822 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.825 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.826 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:55 compute-1 ovn_controller[130279]: 2025-12-06T07:19:55Z|00315|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.841 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.842 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.843 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[69b3f70d-381a-4083-bf38-a3d1f2047819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.844 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:19:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:19:55.845 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
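
The rendered config binds 169.254.169.254:80 inside the namespace, relays to the metadata agent's UNIX socket at /var/lib/neutron/metadata_proxy, and adds the X-OVN-Network-ID header so the agent can resolve which network (and, by source address, which instance) is asking. From a guest on this network the service is plain HTTP on the link-local address; for example, run inside the instance (standard library only):

    # Sketch: what a guest can fetch once the proxy above is running.
    from urllib.request import urlopen

    BASE = 'http://169.254.169.254/openstack/latest'
    print(urlopen(BASE + '/meta_data.json', timeout=5).read().decode())
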
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.878 226109 DEBUG nova.compute.manager [req-52379d24-8f5f-420a-ae4d-2b55b87e3fbb req-ada4ac70-323f-4ca8-bd41-2644f3df0b74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.879 226109 DEBUG oslo_concurrency.lockutils [req-52379d24-8f5f-420a-ae4d-2b55b87e3fbb req-ada4ac70-323f-4ca8-bd41-2644f3df0b74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.879 226109 DEBUG oslo_concurrency.lockutils [req-52379d24-8f5f-420a-ae4d-2b55b87e3fbb req-ada4ac70-323f-4ca8-bd41-2644f3df0b74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.879 226109 DEBUG oslo_concurrency.lockutils [req-52379d24-8f5f-420a-ae4d-2b55b87e3fbb req-ada4ac70-323f-4ca8-bd41-2644f3df0b74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.880 226109 DEBUG nova.compute.manager [req-52379d24-8f5f-420a-ae4d-2b55b87e3fbb req-ada4ac70-323f-4ca8-bd41-2644f3df0b74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] No waiting events found dispatching network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:55 compute-1 nova_compute[226101]: 2025-12-06 07:19:55.880 226109 WARNING nova.compute.manager [req-52379d24-8f5f-420a-ae4d-2b55b87e3fbb req-ada4ac70-323f-4ca8-bd41-2644f3df0b74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received unexpected event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 for instance with vm_state active and task_state resize_finish.
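
This WARNING is typically benign during a resize: Neutron delivered the plug notification through Nova's os-server-external-events API before this compute registered a waiter for it, so the event is popped with nobody waiting and logged instead of dropped silently (the wait at manager.py:577 below then completes immediately). For reference, the shape of the call Neutron makes; the endpoint URL and token here are placeholders, not values from the log:

    # Sketch: the external-event POST behind "network-vif-plugged" (illustrative;
    # real clients go through keystoneauth sessions, not raw urllib).
    import json
    import urllib.request

    body = {'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': '67f0eb12-5070-4bc1-846b-09eb25dec88d',
        'tag': 'b26720ee-8d01-41bb-ad74-31e572887a36',
        'status': 'completed',
    }]}
    req = urllib.request.Request(
        'http://nova-api.example:8774/v2.1/os-server-external-events',  # placeholder
        data=json.dumps(body).encode(),
        headers={'Content-Type': 'application/json',
                 'X-Auth-Token': 'REDACTED'},  # placeholder
        method='POST')
    # urllib.request.urlopen(req)  # left commented: illustration only
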
Dec 06 07:19:56 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:19:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2403216222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:56 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:19:56 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:19:56 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:19:56 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:19:56 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:19:56 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.192 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 67f0eb12-5070-4bc1-846b-09eb25dec88d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.194 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005596.191923, 67f0eb12-5070-4bc1-846b-09eb25dec88d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.195 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] VM Resumed (Lifecycle Event)
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.197 226109 DEBUG nova.compute.manager [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.201 226109 INFO nova.virt.libvirt.driver [-] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance running successfully.
Dec 06 07:19:56 compute-1 virtqemud[225710]: argument unsupported: QEMU guest agent is not configured
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.203 226109 DEBUG nova.virt.libvirt.guest [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
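
The time-sync failure is best-effort noise: virtqemud reports no guest agent channel, so Nova logs it and moves on. Nova only wires the agent channel into the domain when the image carries hw_qemu_guest_agent=yes, which the image used here does not set (see the image_hw_* keys in the system_metadata above). A sketch of opting the image in for future boots, using the image UUID from the log:

    # Sketch: set the image property that makes Nova add the QEMU guest agent
    # channel (takes effect for instances booted from the image afterwards).
    import subprocess

    subprocess.run(['openstack', 'image', 'set',
                    '--property', 'hw_qemu_guest_agent=yes',
                    '6efab05d-c7cf-4770-a5c3-c806a2739063'],
                   check=True)
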
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.204 226109 DEBUG nova.virt.libvirt.driver [None req-346925c9-b204-4d73-8e03-e76d69a236b4 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Dec 06 07:19:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:56 compute-1 podman[255772]: 2025-12-06 07:19:56.193731569 +0000 UTC m=+0.022854951 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:19:56 compute-1 podman[255772]: 2025-12-06 07:19:56.291636025 +0000 UTC m=+0.120759377 container create c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:19:56 compute-1 systemd[1]: Started libpod-conmon-c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8.scope.
Dec 06 07:19:56 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:19:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a77a0280b8cc74643fde9bed5af4a4726434b01717642ac4bad1797f338d76/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.410 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.414 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:19:56 compute-1 podman[255772]: 2025-12-06 07:19:56.434367866 +0000 UTC m=+0.263491248 container init c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:19:56 compute-1 podman[255772]: 2025-12-06 07:19:56.440166664 +0000 UTC m=+0.269290016 container start c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.454 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.454 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005596.1933947, 67f0eb12-5070-4bc1-846b-09eb25dec88d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.454 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] VM Started (Lifecycle Event)
Dec 06 07:19:56 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[255788]: [NOTICE]   (255792) : New worker (255794) forked
Dec 06 07:19:56 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[255788]: [NOTICE]   (255792) : Loading success.
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.481 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.484 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:19:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:19:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:19:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:56.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:19:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:56 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:56.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.861 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.862 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.862 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:19:56 compute-1 nova_compute[226101]: 2025-12-06 07:19:56.862 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:57 compute-1 ceph-mon[81689]: pgmap v1860: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 116 KiB/s rd, 95 KiB/s wr, 78 op/s
Dec 06 07:19:58 compute-1 nova_compute[226101]: 2025-12-06 07:19:58.012 226109 DEBUG nova.compute.manager [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:58 compute-1 nova_compute[226101]: 2025-12-06 07:19:58.013 226109 DEBUG oslo_concurrency.lockutils [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:58 compute-1 nova_compute[226101]: 2025-12-06 07:19:58.014 226109 DEBUG oslo_concurrency.lockutils [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:58 compute-1 nova_compute[226101]: 2025-12-06 07:19:58.014 226109 DEBUG oslo_concurrency.lockutils [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:58 compute-1 nova_compute[226101]: 2025-12-06 07:19:58.014 226109 DEBUG nova.compute.manager [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] No waiting events found dispatching network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:58 compute-1 nova_compute[226101]: 2025-12-06 07:19:58.014 226109 WARNING nova.compute.manager [req-778c24c2-71b7-4d3d-bb0a-16f4fdfd6f26 req-ee23b868-d615-41fc-9491-112bacdab257 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received unexpected event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 for instance with vm_state resized and task_state resize_reverting.
Dec 06 07:19:58 compute-1 podman[255803]: 2025-12-06 07:19:58.073338894 +0000 UTC m=+0.056515294 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:19:58 compute-1 podman[255804]: 2025-12-06 07:19:58.073722135 +0000 UTC m=+0.054999444 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 07:19:58 compute-1 ovn_controller[130279]: 2025-12-06T07:19:58Z|00316|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec 06 07:19:58 compute-1 nova_compute[226101]: 2025-12-06 07:19:58.272 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2245945823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:58 compute-1 ceph-mon[81689]: pgmap v1861: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 23 KiB/s wr, 70 op/s
Dec 06 07:19:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1875531211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:19:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:58.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:19:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:19:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:58.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:58 compute-1 nova_compute[226101]: 2025-12-06 07:19:58.812 226109 DEBUG nova.network.neutron [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Port b26720ee-8d01-41bb-ad74-31e572887a36 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Dec 06 07:19:58 compute-1 nova_compute[226101]: 2025-12-06 07:19:58.812 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.770 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating instance_info_cache with network_info: [{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.790 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.791 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.791 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.792 226109 DEBUG nova.network.neutron [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.793 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.794 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.794 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.794 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.874 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.875 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.875 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.875 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:19:59 compute-1 nova_compute[226101]: 2025-12-06 07:19:59.876 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.108 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.287 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:20:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2215598199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.332 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.414 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.415 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.565 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.566 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4449MB free_disk=20.851470947265625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.566 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.567 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.622 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Applying migration context for instance 67f0eb12-5070-4bc1-846b-09eb25dec88d as it has an incoming, in-progress migration aedcdaea-1330-4252-b5a0-89a67cbe90a1. Migration status is reverting _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.623 226109 INFO nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating resource usage from migration aedcdaea-1330-4252-b5a0-89a67cbe90a1
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.673 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Migration aedcdaea-1330-4252-b5a0-89a67cbe90a1 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.673 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 67f0eb12-5070-4bc1-846b-09eb25dec88d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.673 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.674 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:20:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:00.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:00.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:00 compute-1 nova_compute[226101]: 2025-12-06 07:20:00.745 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:20:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2588148620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:01 compute-1 nova_compute[226101]: 2025-12-06 07:20:01.198 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:01 compute-1 nova_compute[226101]: 2025-12-06 07:20:01.207 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:20:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:01 compute-1 nova_compute[226101]: 2025-12-06 07:20:01.229 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:20:01 compute-1 nova_compute[226101]: 2025-12-06 07:20:01.250 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:20:01 compute-1 nova_compute[226101]: 2025-12-06 07:20:01.250 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:01 compute-1 ceph-mon[81689]: pgmap v1862: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 23 KiB/s wr, 70 op/s
Dec 06 07:20:01 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 07:20:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2215598199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:01.637 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:01.638 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:01.638 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.470 226109 DEBUG nova.network.neutron [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating instance_info_cache with network_info: [{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.489 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:02 compute-1 kernel: tapb26720ee-8d (unregistering): left promiscuous mode
Dec 06 07:20:02 compute-1 NetworkManager[49031]: <info>  [1765005602.6393] device (tapb26720ee-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:20:02 compute-1 ovn_controller[130279]: 2025-12-06T07:20:02Z|00317|binding|INFO|Releasing lport b26720ee-8d01-41bb-ad74-31e572887a36 from this chassis (sb_readonly=0)
Dec 06 07:20:02 compute-1 ovn_controller[130279]: 2025-12-06T07:20:02Z|00318|binding|INFO|Setting lport b26720ee-8d01-41bb-ad74-31e572887a36 down in Southbound
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.650 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:02 compute-1 ovn_controller[130279]: 2025-12-06T07:20:02Z|00319|binding|INFO|Removing iface tapb26720ee-8d ovn-installed in OVS
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.653 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:02.659 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:d1:1e 10.100.0.14'], port_security=['fa:16:3e:a2:d1:1e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '67f0eb12-5070-4bc1-846b-09eb25dec88d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '6', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b26720ee-8d01-41bb-ad74-31e572887a36) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:02.661 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b26720ee-8d01-41bb-ad74-31e572887a36 in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis
Dec 06 07:20:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:02.665 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:20:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:02.667 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[df02d6a9-ed8d-42b6-a25a-ea1e0e65ae1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:02.668 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.676 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:02 compute-1 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Dec 06 07:20:02 compute-1 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000004d.scope: Consumed 6.975s CPU time.
Dec 06 07:20:02 compute-1 systemd-machined[190302]: Machine qemu-37-instance-0000004d terminated.
Dec 06 07:20:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:02.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:02.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2588148620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:02 compute-1 ceph-mon[81689]: pgmap v1863: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 113 op/s
Dec 06 07:20:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:20:02 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[255788]: [NOTICE]   (255792) : haproxy version is 2.8.14-c23fe91
Dec 06 07:20:02 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[255788]: [NOTICE]   (255792) : path to executable is /usr/sbin/haproxy
Dec 06 07:20:02 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[255788]: [WARNING]  (255792) : Exiting Master process...
Dec 06 07:20:02 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[255788]: [ALERT]    (255792) : Current worker (255794) exited with code 143 (Terminated)
Dec 06 07:20:02 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[255788]: [WARNING]  (255792) : All workers exited. Exiting... (0)
Dec 06 07:20:02 compute-1 systemd[1]: libpod-c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8.scope: Deactivated successfully.
Dec 06 07:20:02 compute-1 podman[255908]: 2025-12-06 07:20:02.880031868 +0000 UTC m=+0.124081756 container died c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:20:02 compute-1 sudo[255921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:20:02 compute-1 sudo[255921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:20:02 compute-1 sudo[255921]: pam_unix(sudo:session): session closed for user root
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.940 226109 INFO nova.virt.libvirt.driver [-] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance destroyed successfully.
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.941 226109 DEBUG nova.objects.instance [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.962 226109 DEBUG nova.virt.libvirt.vif [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1140474602',display_name='tempest-ServerActionsTestJSON-server-1140474602',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1140474602',id=77,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-hcjpqf2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=67f0eb12-5070-4bc1-846b-09eb25dec88d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.963 226109 DEBUG nova.network.os_vif_util [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.964 226109 DEBUG nova.network.os_vif_util [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.965 226109 DEBUG os_vif [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.966 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.966 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb26720ee-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.968 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.969 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.971 226109 INFO os_vif [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d')
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.975 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:02 compute-1 nova_compute[226101]: 2025-12-06 07:20:02.975 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:02 compute-1 sudo[255964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:20:02 compute-1 sudo[255964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:20:02 compute-1 sudo[255964]: pam_unix(sudo:session): session closed for user root
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.001 226109 DEBUG nova.objects.instance [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'migration_context' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:03 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8-userdata-shm.mount: Deactivated successfully.
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.048 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-32a77a0280b8cc74643fde9bed5af4a4726434b01717642ac4bad1797f338d76-merged.mount: Deactivated successfully.
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.050 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:03 compute-1 podman[255908]: 2025-12-06 07:20:03.061996984 +0000 UTC m=+0.306046862 container cleanup c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:20:03 compute-1 systemd[1]: libpod-conmon-c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8.scope: Deactivated successfully.
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.105 226109 DEBUG oslo_concurrency.processutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:03 compute-1 podman[256000]: 2025-12-06 07:20:03.133069952 +0000 UTC m=+0.047283553 container remove c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:20:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:03.138 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[743e7855-5f3b-4818-a3ef-098205b599f9]: (4, ('Sat Dec  6 07:20:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8)\nc099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8\nSat Dec  6 07:20:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (c099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8)\nc099c88dca07720bc6a0161ecee9ae049bbba45d0d762d505390e1a953c578e8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:03.139 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5ef4ec-9d0c-4c56-85c6-eecd6009c3e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:03.140 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:03 compute-1 kernel: tap4d599401-30: left promiscuous mode
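[editor's note] The DelPortCommand transaction above (port=tap4d599401-30, bridge=None, if_exists=True) and the kernel message that follows record the metadata tap port being removed from Open vSwitch. A minimal sketch of the equivalent operation, assuming the standard ovs-vsctl client is on PATH; the port name is taken from the log and the --if-exists flag mirrors if_exists=True:

    # Remove the metadata tap port from OVS; --if-exists makes this a
    # no-op when the port is already gone, matching the logged command.
    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "tap4d599401-30"],
        check=True,
    )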
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.142 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.155 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:03.158 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[74fb6e57-d61c-4c6e-b69a-ce3790b93e57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:03.180 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b51a4b50-b0d9-4e5d-aac1-e6c7b3b6b610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:03.182 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[54629c23-e4e8-49a8-b052-47f5390d32f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:03.195 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[955a454b-967d-4c82-9f11-c437661a4c81]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587511, 'reachable_time': 44197, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256016, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:03 compute-1 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec 06 07:20:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:03.199 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:20:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:03.199 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[0957610e-ad49-4e9c-a866-1c3cffff32fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
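[editor's note] The lines above record the per-network metadata namespace being deleted; neutron's privileged remove_netns (the path in the log) is built on pyroute2. A minimal sketch of the same step, assuming pyroute2 is installed and using the namespace name from the log:

    # Delete the ovnmeta- network namespace, as the privsep daemon just
    # did on behalf of the OVN metadata agent.
    from pyroute2 import netns

    netns.remove("ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64")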
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.213 226109 DEBUG nova.compute.manager [req-978f03ac-f78e-4b1f-bbf2-2a4b598092c3 req-ab8aab91-3a51-4044-a8a4-31489b50a045 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-unplugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.213 226109 DEBUG oslo_concurrency.lockutils [req-978f03ac-f78e-4b1f-bbf2-2a4b598092c3 req-ab8aab91-3a51-4044-a8a4-31489b50a045 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.213 226109 DEBUG oslo_concurrency.lockutils [req-978f03ac-f78e-4b1f-bbf2-2a4b598092c3 req-ab8aab91-3a51-4044-a8a4-31489b50a045 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.213 226109 DEBUG oslo_concurrency.lockutils [req-978f03ac-f78e-4b1f-bbf2-2a4b598092c3 req-ab8aab91-3a51-4044-a8a4-31489b50a045 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.214 226109 DEBUG nova.compute.manager [req-978f03ac-f78e-4b1f-bbf2-2a4b598092c3 req-ab8aab91-3a51-4044-a8a4-31489b50a045 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] No waiting events found dispatching network-vif-unplugged-b26720ee-8d01-41bb-ad74-31e572887a36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.214 226109 WARNING nova.compute.manager [req-978f03ac-f78e-4b1f-bbf2-2a4b598092c3 req-ab8aab91-3a51-4044-a8a4-31489b50a045 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received unexpected event network-vif-unplugged-b26720ee-8d01-41bb-ad74-31e572887a36 for instance with vm_state resized and task_state resize_reverting.
Dec 06 07:20:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:20:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1169435881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.537 226109 DEBUG oslo_concurrency.processutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
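[editor's note] The pair of processutils lines above bracket nova's storage probe: it shells out to ceph df with the openstack client id and parses the JSON reply. A minimal standalone sketch of the same probe, assuming the ceph CLI and the client.openstack keyring are available on the host; the top-level "stats" field names are those used by recent Ceph releases:

    # Run the same cluster-usage query nova_compute just executed and
    # pull the cluster-wide totals out of the JSON reply.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])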
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.544 226109 DEBUG nova.compute.provider_tree [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.644 226109 DEBUG nova.scheduler.client.report [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.766 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:20:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1169435881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2530500757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:03 compute-1 nova_compute[226101]: 2025-12-06 07:20:03.977 226109 INFO nova.compute.manager [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Swapping old allocation on dict_keys(['466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83']) held by migration aedcdaea-1330-4252-b5a0-89a67cbe90a1 for instance
Dec 06 07:20:04 compute-1 nova_compute[226101]: 2025-12-06 07:20:04.064 226109 DEBUG nova.scheduler.client.report [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Overwriting current allocation {'allocations': {'466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 51}}, 'project_id': '929e2be1488d4b80b7ad8946093a6abe', 'user_id': '627c36bb63534e52a4b1d5adf47e6ffd', 'consumer_generation': 1} on consumer 67f0eb12-5070-4bc1-846b-09eb25dec88d move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Dec 06 07:20:04 compute-1 nova_compute[226101]: 2025-12-06 07:20:04.367 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:20:04 compute-1 nova_compute[226101]: 2025-12-06 07:20:04.367 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:20:04 compute-1 nova_compute[226101]: 2025-12-06 07:20:04.368 226109 DEBUG nova.network.neutron [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:20:04 compute-1 nova_compute[226101]: 2025-12-06 07:20:04.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:04.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:04.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:04 compute-1 ceph-mon[81689]: pgmap v1864: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 89 op/s
Dec 06 07:20:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/529352192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:05 compute-1 nova_compute[226101]: 2025-12-06 07:20:05.109 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:05 compute-1 nova_compute[226101]: 2025-12-06 07:20:05.350 226109 DEBUG nova.compute.manager [req-de583d35-25fa-4406-accd-4f4c0123571e req-ba15f6e0-e68f-43cc-a266-079f8571abd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:05 compute-1 nova_compute[226101]: 2025-12-06 07:20:05.350 226109 DEBUG oslo_concurrency.lockutils [req-de583d35-25fa-4406-accd-4f4c0123571e req-ba15f6e0-e68f-43cc-a266-079f8571abd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:05 compute-1 nova_compute[226101]: 2025-12-06 07:20:05.350 226109 DEBUG oslo_concurrency.lockutils [req-de583d35-25fa-4406-accd-4f4c0123571e req-ba15f6e0-e68f-43cc-a266-079f8571abd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:05 compute-1 nova_compute[226101]: 2025-12-06 07:20:05.350 226109 DEBUG oslo_concurrency.lockutils [req-de583d35-25fa-4406-accd-4f4c0123571e req-ba15f6e0-e68f-43cc-a266-079f8571abd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:05 compute-1 nova_compute[226101]: 2025-12-06 07:20:05.351 226109 DEBUG nova.compute.manager [req-de583d35-25fa-4406-accd-4f4c0123571e req-ba15f6e0-e68f-43cc-a266-079f8571abd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] No waiting events found dispatching network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:05 compute-1 nova_compute[226101]: 2025-12-06 07:20:05.351 226109 WARNING nova.compute.manager [req-de583d35-25fa-4406-accd-4f4c0123571e req-ba15f6e0-e68f-43cc-a266-079f8571abd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received unexpected event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 for instance with vm_state resized and task_state resize_reverting.
Dec 06 07:20:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:05.418 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:05 compute-1 nova_compute[226101]: 2025-12-06 07:20:05.419 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:05.419 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:20:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:06 compute-1 nova_compute[226101]: 2025-12-06 07:20:06.533 226109 DEBUG nova.network.neutron [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating instance_info_cache with network_info: [{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:06 compute-1 nova_compute[226101]: 2025-12-06 07:20:06.597 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-67f0eb12-5070-4bc1-846b-09eb25dec88d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:06 compute-1 nova_compute[226101]: 2025-12-06 07:20:06.598 226109 DEBUG nova.virt.libvirt.driver [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Dec 06 07:20:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:06.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:06.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/378322614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:06 compute-1 ceph-mon[81689]: pgmap v1865: 305 pgs: 305 active+clean; 305 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 600 KiB/s wr, 107 op/s
Dec 06 07:20:07 compute-1 nova_compute[226101]: 2025-12-06 07:20:07.137 226109 DEBUG nova.storage.rbd_utils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rolling back rbd image(67f0eb12-5070-4bc1-846b-09eb25dec88d_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Dec 06 07:20:08 compute-1 nova_compute[226101]: 2025-12-06 07:20:08.017 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:08 compute-1 nova_compute[226101]: 2025-12-06 07:20:08.018 226109 DEBUG nova.storage.rbd_utils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] removing snapshot(nova-resize) on rbd image(67f0eb12-5070-4bc1-846b-09eb25dec88d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
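[editor's note] These two rbd_utils lines are the heart of the resize revert: the instance disk is rolled back to the nova-resize snapshot taken before the resize, and the snapshot is then removed. A hedged librbd sketch of the same two steps, using the pool (vms), image, and snapshot names from the log and the python-rados/python-rbd bindings:

    # Roll the instance disk back to the pre-resize snapshot, then drop
    # the snapshot, mirroring rollback_to_snap/remove_snap above.
    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx,
                           "67f0eb12-5070-4bc1-846b-09eb25dec88d_disk") as image:
                image.rollback_to_snap("nova-resize")
                image.remove_snap("nova-resize")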
Dec 06 07:20:08 compute-1 ceph-mon[81689]: pgmap v1866: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Dec 06 07:20:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:08.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2441678181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:20:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2441678181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:20:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e257 e257: 3 total, 3 up, 3 in
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.754 226109 DEBUG nova.virt.libvirt.driver [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Start _get_guest_xml network_info=[{"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.757 226109 WARNING nova.virt.libvirt.driver [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.761 226109 DEBUG nova.virt.libvirt.host [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.762 226109 DEBUG nova.virt.libvirt.host [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.764 226109 DEBUG nova.virt.libvirt.host [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.765 226109 DEBUG nova.virt.libvirt.host [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.766 226109 DEBUG nova.virt.libvirt.driver [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.766 226109 DEBUG nova.virt.hardware [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.767 226109 DEBUG nova.virt.hardware [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.767 226109 DEBUG nova.virt.hardware [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.767 226109 DEBUG nova.virt.hardware [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.767 226109 DEBUG nova.virt.hardware [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.767 226109 DEBUG nova.virt.hardware [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.767 226109 DEBUG nova.virt.hardware [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.768 226109 DEBUG nova.virt.hardware [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.768 226109 DEBUG nova.virt.hardware [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.768 226109 DEBUG nova.virt.hardware [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.769 226109 DEBUG nova.virt.hardware [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.769 226109 DEBUG nova.objects.instance [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'vcpu_model' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:09 compute-1 nova_compute[226101]: 2025-12-06 07:20:09.787 226109 DEBUG oslo_concurrency.processutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:10 compute-1 nova_compute[226101]: 2025-12-06 07:20:10.111 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:20:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1711979528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:10 compute-1 nova_compute[226101]: 2025-12-06 07:20:10.241 226109 DEBUG oslo_concurrency.processutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:10 compute-1 nova_compute[226101]: 2025-12-06 07:20:10.282 226109 DEBUG oslo_concurrency.processutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:10 compute-1 nova_compute[226101]: 2025-12-06 07:20:10.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:10.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:10.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:20:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3107736314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:11 compute-1 ceph-mon[81689]: osdmap e257: 3 total, 3 up, 3 in
Dec 06 07:20:11 compute-1 ceph-mon[81689]: pgmap v1868: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 108 op/s
Dec 06 07:20:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1797948778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1711979528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3582177779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.104 226109 DEBUG oslo_concurrency.processutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.822s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.106 226109 DEBUG nova.virt.libvirt.vif [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1140474602',display_name='tempest-ServerActionsTestJSON-server-1140474602',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1140474602',id=77,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-hcjpqf2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=67f0eb12-5070-4bc1-846b-09eb25dec88d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.107 226109 DEBUG nova.network.os_vif_util [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.108 226109 DEBUG nova.network.os_vif_util [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.111 226109 DEBUG nova.virt.libvirt.driver [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:20:11 compute-1 nova_compute[226101]:   <uuid>67f0eb12-5070-4bc1-846b-09eb25dec88d</uuid>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   <name>instance-0000004d</name>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerActionsTestJSON-server-1140474602</nova:name>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:20:09</nova:creationTime>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <nova:port uuid="b26720ee-8d01-41bb-ad74-31e572887a36">
Dec 06 07:20:11 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <system>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <entry name="serial">67f0eb12-5070-4bc1-846b-09eb25dec88d</entry>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <entry name="uuid">67f0eb12-5070-4bc1-846b-09eb25dec88d</entry>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     </system>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   <os>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   </os>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   <features>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   </features>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/67f0eb12-5070-4bc1-846b-09eb25dec88d_disk">
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       </source>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/67f0eb12-5070-4bc1-846b-09eb25dec88d_disk.config">
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       </source>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:20:11 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:a2:d1:1e"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <target dev="tapb26720ee-8d"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d/console.log" append="off"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <video>
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     </video>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <input type="keyboard" bus="usb"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:20:11 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:20:11 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:20:11 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:20:11 compute-1 nova_compute[226101]: </domain>
Dec 06 07:20:11 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.113 226109 DEBUG nova.compute.manager [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Preparing to wait for external event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.113 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.113 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.113 226109 DEBUG oslo_concurrency.lockutils [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.114 226109 DEBUG nova.virt.libvirt.vif [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1140474602',display_name='tempest-ServerActionsTestJSON-server-1140474602',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1140474602',id=77,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-hcjpqf2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=67f0eb12-5070-4bc1-846b-09eb25dec88d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.114 226109 DEBUG nova.network.os_vif_util [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.115 226109 DEBUG nova.network.os_vif_util [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.115 226109 DEBUG os_vif [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.116 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.117 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.117 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.119 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.120 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb26720ee-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.120 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb26720ee-8d, col_values=(('external_ids', {'iface-id': 'b26720ee-8d01-41bb-ad74-31e572887a36', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:d1:1e', 'vm-uuid': '67f0eb12-5070-4bc1-846b-09eb25dec88d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.122 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:11 compute-1 NetworkManager[49031]: <info>  [1765005611.1229] manager: (tapb26720ee-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.125 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.127 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.128 226109 INFO os_vif [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d')
Dec 06 07:20:11 compute-1 kernel: tapb26720ee-8d: entered promiscuous mode
Dec 06 07:20:11 compute-1 NetworkManager[49031]: <info>  [1765005611.1979] manager: (tapb26720ee-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/147)
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.197 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:11 compute-1 ovn_controller[130279]: 2025-12-06T07:20:11Z|00320|binding|INFO|Claiming lport b26720ee-8d01-41bb-ad74-31e572887a36 for this chassis.
Dec 06 07:20:11 compute-1 ovn_controller[130279]: 2025-12-06T07:20:11Z|00321|binding|INFO|b26720ee-8d01-41bb-ad74-31e572887a36: Claiming fa:16:3e:a2:d1:1e 10.100.0.14
Dec 06 07:20:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:11 compute-1 ovn_controller[130279]: 2025-12-06T07:20:11Z|00322|binding|INFO|Setting lport b26720ee-8d01-41bb-ad74-31e572887a36 ovn-installed in OVS
Dec 06 07:20:11 compute-1 ovn_controller[130279]: 2025-12-06T07:20:11Z|00323|binding|INFO|Setting lport b26720ee-8d01-41bb-ad74-31e572887a36 up in Southbound
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.219 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:d1:1e 10.100.0.14'], port_security=['fa:16:3e:a2:d1:1e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '67f0eb12-5070-4bc1-846b-09eb25dec88d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '7', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b26720ee-8d01-41bb-ad74-31e572887a36) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.220 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b26720ee-8d01-41bb-ad74-31e572887a36 in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.220 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.221 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.222 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:11 compute-1 systemd-udevd[256170]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.234 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[794aa066-b664-4156-bae9-328ed5eb936b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.234 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.237 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.237 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1e5783bf-0e3c-4b6a-804b-e998fb519a41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.238 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[14734486-a2a6-46d2-951e-81e2f625c5c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 systemd-machined[190302]: New machine qemu-38-instance-0000004d.
Dec 06 07:20:11 compute-1 NetworkManager[49031]: <info>  [1765005611.2478] device (tapb26720ee-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:20:11 compute-1 NetworkManager[49031]: <info>  [1765005611.2487] device (tapb26720ee-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.251 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[daaf7206-dc56-48f9-a872-8e41f06016c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 systemd[1]: Started Virtual Machine qemu-38-instance-0000004d.
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.268 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c3465074-b1a6-4453-a03d-ca374d95b545]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.302 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2a50cf1a-24a7-40bf-a634-b0da1a9087cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 systemd-udevd[256174]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:20:11 compute-1 NetworkManager[49031]: <info>  [1765005611.3102] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/148)
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.311 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fdcfe1c0-29d9-4fef-aac3-e03da80533ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.344 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[fdca7af1-0ec5-483e-8669-a1be38b9eb9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.347 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[63b6448e-8378-474b-9e69-e76d78f49dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 NetworkManager[49031]: <info>  [1765005611.3690] device (tap4d599401-30): carrier: link connected
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.372 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d899ca23-73da-4661-9c6a-ccfded7923d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.393 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ea1289-5794-420e-b777-299930f88e21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589087, 'reachable_time': 17500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256203, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.409 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8b68ee8a-17de-4495-b7dc-a2bfa1d22d62]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589087, 'tstamp': 589087}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256204, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.432 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2a2cef-b839-4cf1-93a4-9847b3ac95ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589087, 'reachable_time': 17500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256205, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.461 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bcdde07f-452d-4099-8b2e-f1c67edc6435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.513 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6354a580-2496-424c-b967-ad22629d38cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.514 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.514 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.514 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:11 compute-1 NetworkManager[49031]: <info>  [1765005611.5225] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Dec 06 07:20:11 compute-1 kernel: tap4d599401-30: entered promiscuous mode
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.522 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.525 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.527 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:11 compute-1 ovn_controller[130279]: 2025-12-06T07:20:11Z|00324|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.528 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.529 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.529 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[833e7d0f-8f52-4279-acd8-b2feb4dfcb97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.530 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:20:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:11.531 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.543 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.713 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 67f0eb12-5070-4bc1-846b-09eb25dec88d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.714 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005611.7129526, 67f0eb12-5070-4bc1-846b-09eb25dec88d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.714 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] VM Started (Lifecycle Event)
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.745 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.749 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005611.7142675, 67f0eb12-5070-4bc1-846b-09eb25dec88d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.749 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] VM Paused (Lifecycle Event)
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.772 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.776 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:20:11 compute-1 nova_compute[226101]: 2025-12-06 07:20:11.805 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Dec 06 07:20:11 compute-1 podman[256279]: 2025-12-06 07:20:11.897542754 +0000 UTC m=+0.057198453 container create cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:20:11 compute-1 systemd[1]: Started libpod-conmon-cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a.scope.
Dec 06 07:20:11 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:20:11 compute-1 podman[256279]: 2025-12-06 07:20:11.864584199 +0000 UTC m=+0.024239888 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:20:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a9e843bb6c8a91b40481f41b3c8fa60aebc538edaf78a92aa8f753b68715f63/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:20:11 compute-1 podman[256279]: 2025-12-06 07:20:11.972922029 +0000 UTC m=+0.132577758 container init cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:20:11 compute-1 podman[256279]: 2025-12-06 07:20:11.980151755 +0000 UTC m=+0.139807454 container start cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:20:12 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[256295]: [NOTICE]   (256299) : New worker (256301) forked
Dec 06 07:20:12 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[256295]: [NOTICE]   (256299) : Loading success.
Dec 06 07:20:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3107736314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:12 compute-1 ceph-mon[81689]: pgmap v1869: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 299 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:20:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/100782781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.291 226109 DEBUG nova.compute.manager [req-04a2861d-4dfe-475f-8083-5c7098495921 req-7438cfd5-bfc2-4145-89b3-180960859465 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.292 226109 DEBUG oslo_concurrency.lockutils [req-04a2861d-4dfe-475f-8083-5c7098495921 req-7438cfd5-bfc2-4145-89b3-180960859465 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.293 226109 DEBUG oslo_concurrency.lockutils [req-04a2861d-4dfe-475f-8083-5c7098495921 req-7438cfd5-bfc2-4145-89b3-180960859465 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.293 226109 DEBUG oslo_concurrency.lockutils [req-04a2861d-4dfe-475f-8083-5c7098495921 req-7438cfd5-bfc2-4145-89b3-180960859465 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.293 226109 DEBUG nova.compute.manager [req-04a2861d-4dfe-475f-8083-5c7098495921 req-7438cfd5-bfc2-4145-89b3-180960859465 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Processing event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.294 226109 DEBUG nova.compute.manager [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.298 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005612.297625, 67f0eb12-5070-4bc1-846b-09eb25dec88d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.298 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] VM Resumed (Lifecycle Event)
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.303 226109 INFO nova.virt.libvirt.driver [-] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance running successfully.
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.304 226109 DEBUG nova.virt.libvirt.driver [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.335 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.339 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.379 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Dec 06 07:20:12 compute-1 nova_compute[226101]: 2025-12-06 07:20:12.407 226109 INFO nova.compute.manager [None req-7a598c97-9398-46f9-a014-60958b0b4cde 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating instance to original state: 'active'
Dec 06 07:20:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:12.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:13.421 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.251 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "59997242-8393-4545-b401-5bb1501cd680" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.253 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.279 226109 DEBUG nova.compute.manager [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.367 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.368 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.379 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.381 226109 INFO nova.compute.claims [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.517 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.736 226109 DEBUG nova.compute.manager [req-1d5dc391-e515-4c44-b6f2-02d5f98f6c26 req-a38e8ec1-3c34-47d3-94f9-3d2ba2a2a494 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.737 226109 DEBUG oslo_concurrency.lockutils [req-1d5dc391-e515-4c44-b6f2-02d5f98f6c26 req-a38e8ec1-3c34-47d3-94f9-3d2ba2a2a494 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.738 226109 DEBUG oslo_concurrency.lockutils [req-1d5dc391-e515-4c44-b6f2-02d5f98f6c26 req-a38e8ec1-3c34-47d3-94f9-3d2ba2a2a494 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.738 226109 DEBUG oslo_concurrency.lockutils [req-1d5dc391-e515-4c44-b6f2-02d5f98f6c26 req-a38e8ec1-3c34-47d3-94f9-3d2ba2a2a494 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.739 226109 DEBUG nova.compute.manager [req-1d5dc391-e515-4c44-b6f2-02d5f98f6c26 req-a38e8ec1-3c34-47d3-94f9-3d2ba2a2a494 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] No waiting events found dispatching network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.739 226109 WARNING nova.compute.manager [req-1d5dc391-e515-4c44-b6f2-02d5f98f6c26 req-a38e8ec1-3c34-47d3-94f9-3d2ba2a2a494 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received unexpected event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 for instance with vm_state active and task_state None.
Dec 06 07:20:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:14.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:14.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:20:14 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4200603272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.939 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.944 226109 DEBUG nova.compute.provider_tree [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:20:14 compute-1 nova_compute[226101]: 2025-12-06 07:20:14.965 226109 DEBUG nova.scheduler.client.report [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.000 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.000 226109 DEBUG nova.compute.manager [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.087 226109 DEBUG nova.compute.manager [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.088 226109 DEBUG nova.network.neutron [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.116 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.119 226109 INFO nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.140 226109 DEBUG nova.compute.manager [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.242 226109 DEBUG nova.compute.manager [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.243 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.244 226109 INFO nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Creating image(s)
Dec 06 07:20:15 compute-1 ceph-mon[81689]: pgmap v1870: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 299 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:20:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4200603272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.716 226109 DEBUG nova.storage.rbd_utils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 59997242-8393-4545-b401-5bb1501cd680_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.740 226109 DEBUG nova.storage.rbd_utils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 59997242-8393-4545-b401-5bb1501cd680_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.761 226109 DEBUG nova.storage.rbd_utils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 59997242-8393-4545-b401-5bb1501cd680_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.765 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.792 226109 DEBUG nova.policy [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3d8ea3dcb5b4ee3b9bd24c62fdd61b2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da41861044a74eee9bc96841e54c57cc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.830 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.831 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.831 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.832 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.853 226109 DEBUG nova.storage.rbd_utils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 59997242-8393-4545-b401-5bb1501cd680_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:15 compute-1 nova_compute[226101]: 2025-12-06 07:20:15.857 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 59997242-8393-4545-b401-5bb1501cd680_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.123 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.583 226109 DEBUG nova.network.neutron [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Successfully created port: 7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:20:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:16.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:16 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:16.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.849 226109 DEBUG oslo_concurrency.lockutils [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.851 226109 DEBUG oslo_concurrency.lockutils [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.852 226109 DEBUG oslo_concurrency.lockutils [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.852 226109 DEBUG oslo_concurrency.lockutils [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.852 226109 DEBUG oslo_concurrency.lockutils [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.854 226109 INFO nova.compute.manager [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Terminating instance
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.855 226109 DEBUG nova.compute.manager [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:20:16 compute-1 kernel: tapb26720ee-8d (unregistering): left promiscuous mode
Dec 06 07:20:16 compute-1 NetworkManager[49031]: <info>  [1765005616.9010] device (tapb26720ee-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:20:16 compute-1 ovn_controller[130279]: 2025-12-06T07:20:16Z|00325|binding|INFO|Releasing lport b26720ee-8d01-41bb-ad74-31e572887a36 from this chassis (sb_readonly=0)
Dec 06 07:20:16 compute-1 ovn_controller[130279]: 2025-12-06T07:20:16Z|00326|binding|INFO|Setting lport b26720ee-8d01-41bb-ad74-31e572887a36 down in Southbound
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.907 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:16 compute-1 ovn_controller[130279]: 2025-12-06T07:20:16Z|00327|binding|INFO|Removing iface tapb26720ee-8d ovn-installed in OVS
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.911 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:16 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:16.918 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:d1:1e 10.100.0.14'], port_security=['fa:16:3e:a2:d1:1e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '67f0eb12-5070-4bc1-846b-09eb25dec88d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '8', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b26720ee-8d01-41bb-ad74-31e572887a36) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:16 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:16.920 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b26720ee-8d01-41bb-ad74-31e572887a36 in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis
Dec 06 07:20:16 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:16.921 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:20:16 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:16.922 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[eba41ad0-bd4d-4f64-a2d1-72b2545f6e56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:16 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:16.923 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore
Dec 06 07:20:16 compute-1 nova_compute[226101]: 2025-12-06 07:20:16.932 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:16 compute-1 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Dec 06 07:20:16 compute-1 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000004d.scope: Consumed 5.211s CPU time.
Dec 06 07:20:16 compute-1 systemd-machined[190302]: Machine qemu-38-instance-0000004d terminated.
Dec 06 07:20:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2102635153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:17 compute-1 ceph-mon[81689]: pgmap v1871: 305 pgs: 305 active+clean; 278 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 931 KiB/s rd, 2.3 MiB/s wr, 126 op/s
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.073 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.077 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.089 226109 INFO nova.virt.libvirt.driver [-] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Instance destroyed successfully.
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.090 226109 DEBUG nova.objects.instance [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid 67f0eb12-5070-4bc1-846b-09eb25dec88d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.106 226109 DEBUG nova.virt.libvirt.vif [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1140474602',display_name='tempest-ServerActionsTestJSON-server-1140474602',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1140474602',id=77,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-hcjpqf2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:20:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=67f0eb12-5070-4bc1-846b-09eb25dec88d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.107 226109 DEBUG nova.network.os_vif_util [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "b26720ee-8d01-41bb-ad74-31e572887a36", "address": "fa:16:3e:a2:d1:1e", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb26720ee-8d", "ovs_interfaceid": "b26720ee-8d01-41bb-ad74-31e572887a36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.110 226109 DEBUG nova.network.os_vif_util [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.110 226109 DEBUG os_vif [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.112 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.112 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb26720ee-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.114 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.116 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.118 226109 INFO os_vif [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:d1:1e,bridge_name='br-int',has_traffic_filtering=True,id=b26720ee-8d01-41bb-ad74-31e572887a36,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb26720ee-8d')
Dec 06 07:20:17 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[256295]: [NOTICE]   (256299) : haproxy version is 2.8.14-c23fe91
Dec 06 07:20:17 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[256295]: [NOTICE]   (256299) : path to executable is /usr/sbin/haproxy
Dec 06 07:20:17 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[256295]: [WARNING]  (256299) : Exiting Master process...
Dec 06 07:20:17 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[256295]: [ALERT]    (256299) : Current worker (256301) exited with code 143 (Terminated)
Dec 06 07:20:17 compute-1 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[256295]: [WARNING]  (256299) : All workers exited. Exiting... (0)
Dec 06 07:20:17 compute-1 systemd[1]: libpod-cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a.scope: Deactivated successfully.
Dec 06 07:20:17 compute-1 podman[256450]: 2025-12-06 07:20:17.34613241 +0000 UTC m=+0.333996260 container died cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.420 226109 DEBUG nova.network.neutron [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Successfully updated port: 7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.442 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 59997242-8393-4545-b401-5bb1501cd680_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.474 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "refresh_cache-59997242-8393-4545-b401-5bb1501cd680" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.475 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquired lock "refresh_cache-59997242-8393-4545-b401-5bb1501cd680" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.475 226109 DEBUG nova.network.neutron [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.645 226109 DEBUG nova.compute.manager [req-48938fb1-05ed-488e-871d-66ca1c7247bc req-e11d706b-8455-4804-a35f-6dff6669efb0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Received event network-changed-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.645 226109 DEBUG nova.compute.manager [req-48938fb1-05ed-488e-871d-66ca1c7247bc req-e11d706b-8455-4804-a35f-6dff6669efb0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Refreshing instance network info cache due to event network-changed-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.645 226109 DEBUG oslo_concurrency.lockutils [req-48938fb1-05ed-488e-871d-66ca1c7247bc req-e11d706b-8455-4804-a35f-6dff6669efb0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-59997242-8393-4545-b401-5bb1501cd680" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.649 226109 DEBUG nova.storage.rbd_utils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] resizing rbd image 59997242-8393-4545-b401-5bb1501cd680_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:20:17 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a-userdata-shm.mount: Deactivated successfully.
Dec 06 07:20:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-7a9e843bb6c8a91b40481f41b3c8fa60aebc538edaf78a92aa8f753b68715f63-merged.mount: Deactivated successfully.
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.764 226109 DEBUG nova.objects.instance [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'migration_context' on Instance uuid 59997242-8393-4545-b401-5bb1501cd680 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.779 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.780 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Ensure instance console log exists: /var/lib/nova/instances/59997242-8393-4545-b401-5bb1501cd680/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.780 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.780 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.780 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:17 compute-1 nova_compute[226101]: 2025-12-06 07:20:17.796 226109 DEBUG nova.network.neutron [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:20:17 compute-1 podman[256450]: 2025-12-06 07:20:17.829610285 +0000 UTC m=+0.817474136 container cleanup cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 07:20:17 compute-1 systemd[1]: libpod-conmon-cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a.scope: Deactivated successfully.
Dec 06 07:20:18 compute-1 podman[256580]: 2025-12-06 07:20:18.218542445 +0000 UTC m=+0.367399037 container remove cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:20:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:18.225 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f36a2a66-b759-4cad-a02f-0f306d314868]: (4, ('Sat Dec  6 07:20:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a)\ncb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a\nSat Dec  6 07:20:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (cb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a)\ncb89f2544f8156a9053036257f8462545ad339f9dcfdeed766356353314c307a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:18.228 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d36f26-f3af-472a-b7b4-84d3ac8f0360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:18.230 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:18 compute-1 kernel: tap4d599401-30: left promiscuous mode
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.236 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.249 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:18.254 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3d5f8f5e-ba56-442b-afe0-bf22b5d37c33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:18.273 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1cfdc609-02a2-4cf1-8b5f-211b454c5f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:18.276 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc7a9be-7f39-4f7d-9969-9745d8df820f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:18.296 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7e2412b6-b604-463e-acaf-00698a45bbf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589079, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256596, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:18 compute-1 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec 06 07:20:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:18.301 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:20:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:18.301 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[22161e5f-f8e6-41d3-b139-1673c2938b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2191333907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4124821027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:18 compute-1 ceph-mon[81689]: pgmap v1872: 305 pgs: 305 active+clean; 293 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Dec 06 07:20:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1593184799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.576 226109 DEBUG nova.network.neutron [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Updating instance_info_cache with network_info: [{"id": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "address": "fa:16:3e:98:12:96", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a96cb69-1b", "ovs_interfaceid": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.605 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Releasing lock "refresh_cache-59997242-8393-4545-b401-5bb1501cd680" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.606 226109 DEBUG nova.compute.manager [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Instance network_info: |[{"id": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "address": "fa:16:3e:98:12:96", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a96cb69-1b", "ovs_interfaceid": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.606 226109 DEBUG oslo_concurrency.lockutils [req-48938fb1-05ed-488e-871d-66ca1c7247bc req-e11d706b-8455-4804-a35f-6dff6669efb0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-59997242-8393-4545-b401-5bb1501cd680" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.606 226109 DEBUG nova.network.neutron [req-48938fb1-05ed-488e-871d-66ca1c7247bc req-e11d706b-8455-4804-a35f-6dff6669efb0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Refreshing network info cache for port 7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.609 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Start _get_guest_xml network_info=[{"id": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "address": "fa:16:3e:98:12:96", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a96cb69-1b", "ovs_interfaceid": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '412dd61d-1b1e-439f-b7f9-7e7c4e42924c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.614 226109 WARNING nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.622 226109 DEBUG nova.virt.libvirt.host [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.623 226109 DEBUG nova.virt.libvirt.host [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.630 226109 DEBUG nova.virt.libvirt.host [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.631 226109 DEBUG nova.virt.libvirt.host [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.633 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.633 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.633 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.633 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.634 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.634 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.634 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.634 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.634 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.635 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.635 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.635 226109 DEBUG nova.virt.hardware [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:20:18 compute-1 nova_compute[226101]: 2025-12-06 07:20:18.638 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:18 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:18.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:18.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.097 226109 DEBUG nova.compute.manager [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-unplugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.098 226109 DEBUG oslo_concurrency.lockutils [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.098 226109 DEBUG oslo_concurrency.lockutils [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.098 226109 DEBUG oslo_concurrency.lockutils [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.099 226109 DEBUG nova.compute.manager [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] No waiting events found dispatching network-vif-unplugged-b26720ee-8d01-41bb-ad74-31e572887a36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.099 226109 DEBUG nova.compute.manager [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-unplugged-b26720ee-8d01-41bb-ad74-31e572887a36 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.099 226109 DEBUG nova.compute.manager [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.099 226109 DEBUG oslo_concurrency.lockutils [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.099 226109 DEBUG oslo_concurrency.lockutils [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.100 226109 DEBUG oslo_concurrency.lockutils [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.100 226109 DEBUG nova.compute.manager [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] No waiting events found dispatching network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:19 compute-1 nova_compute[226101]: 2025-12-06 07:20:19.100 226109 WARNING nova.compute.manager [req-33c7f091-195e-44ec-b4d9-40bd9c44cc2a req-2066a001-113e-41eb-a432-536a55e4fed0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received unexpected event network-vif-plugged-b26720ee-8d01-41bb-ad74-31e572887a36 for instance with vm_state active and task_state deleting.
Dec 06 07:20:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 e258: 3 total, 3 up, 3 in
Dec 06 07:20:20 compute-1 nova_compute[226101]: 2025-12-06 07:20:20.115 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:20:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1452383183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:20 compute-1 nova_compute[226101]: 2025-12-06 07:20:20.330 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.692s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:20 compute-1 nova_compute[226101]: 2025-12-06 07:20:20.355 226109 DEBUG nova.storage.rbd_utils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 59997242-8393-4545-b401-5bb1501cd680_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:20 compute-1 nova_compute[226101]: 2025-12-06 07:20:20.359 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:20 compute-1 nova_compute[226101]: 2025-12-06 07:20:20.384 226109 DEBUG nova.network.neutron [req-48938fb1-05ed-488e-871d-66ca1c7247bc req-e11d706b-8455-4804-a35f-6dff6669efb0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Updated VIF entry in instance network info cache for port 7a96cb69-1b0c-4a9b-ba10-a52d7978baf1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:20:20 compute-1 nova_compute[226101]: 2025-12-06 07:20:20.385 226109 DEBUG nova.network.neutron [req-48938fb1-05ed-488e-871d-66ca1c7247bc req-e11d706b-8455-4804-a35f-6dff6669efb0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Updating instance_info_cache with network_info: [{"id": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "address": "fa:16:3e:98:12:96", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a96cb69-1b", "ovs_interfaceid": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:20 compute-1 nova_compute[226101]: 2025-12-06 07:20:20.398 226109 DEBUG oslo_concurrency.lockutils [req-48938fb1-05ed-488e-871d-66ca1c7247bc req-e11d706b-8455-4804-a35f-6dff6669efb0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-59997242-8393-4545-b401-5bb1501cd680" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:20.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:20.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:20:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/400921334' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.215 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.857s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.218 226109 DEBUG nova.virt.libvirt.vif [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:20:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1871962430',display_name='tempest-ListServerFiltersTestJSON-instance-1871962430',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1871962430',id=84,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-v07a1ebn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='tempest-ListServerFiltersTestJSON-1681699386-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:20:15Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=59997242-8393-4545-b401-5bb1501cd680,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "address": "fa:16:3e:98:12:96", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a96cb69-1b", "ovs_interfaceid": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.219 226109 DEBUG nova.network.os_vif_util [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "address": "fa:16:3e:98:12:96", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a96cb69-1b", "ovs_interfaceid": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.220 226109 DEBUG nova.network.os_vif_util [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:12:96,bridge_name='br-int',has_traffic_filtering=True,id=7a96cb69-1b0c-4a9b-ba10-a52d7978baf1,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a96cb69-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.222 226109 DEBUG nova.objects.instance [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 59997242-8393-4545-b401-5bb1501cd680 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.240 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:20:21 compute-1 nova_compute[226101]:   <uuid>59997242-8393-4545-b401-5bb1501cd680</uuid>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   <name>instance-00000054</name>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1871962430</nova:name>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:20:18</nova:creationTime>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <nova:user uuid="e3d8ea3dcb5b4ee3b9bd24c62fdd61b2">tempest-ListServerFiltersTestJSON-1681699386-project-member</nova:user>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <nova:project uuid="da41861044a74eee9bc96841e54c57cc">tempest-ListServerFiltersTestJSON-1681699386</nova:project>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <nova:port uuid="7a96cb69-1b0c-4a9b-ba10-a52d7978baf1">
Dec 06 07:20:21 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <system>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <entry name="serial">59997242-8393-4545-b401-5bb1501cd680</entry>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <entry name="uuid">59997242-8393-4545-b401-5bb1501cd680</entry>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     </system>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   <os>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   </os>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   <features>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   </features>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/59997242-8393-4545-b401-5bb1501cd680_disk">
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       </source>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/59997242-8393-4545-b401-5bb1501cd680_disk.config">
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       </source>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:20:21 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:98:12:96"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <target dev="tap7a96cb69-1b"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/59997242-8393-4545-b401-5bb1501cd680/console.log" append="off"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <video>
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     </video>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:20:21 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:20:21 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:20:21 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:20:21 compute-1 nova_compute[226101]: </domain>
Dec 06 07:20:21 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.241 226109 DEBUG nova.compute.manager [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Preparing to wait for external event network-vif-plugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.242 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "59997242-8393-4545-b401-5bb1501cd680-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.242 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.243 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.244 226109 DEBUG nova.virt.libvirt.vif [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:20:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1871962430',display_name='tempest-ListServerFiltersTestJSON-instance-1871962430',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1871962430',id=84,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-v07a1ebn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='tempest-ListServerFiltersTestJSON-1681699386-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:20:15Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=59997242-8393-4545-b401-5bb1501cd680,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "address": "fa:16:3e:98:12:96", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a96cb69-1b", "ovs_interfaceid": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.244 226109 DEBUG nova.network.os_vif_util [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "address": "fa:16:3e:98:12:96", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a96cb69-1b", "ovs_interfaceid": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.245 226109 DEBUG nova.network.os_vif_util [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:12:96,bridge_name='br-int',has_traffic_filtering=True,id=7a96cb69-1b0c-4a9b-ba10-a52d7978baf1,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a96cb69-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.245 226109 DEBUG os_vif [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:12:96,bridge_name='br-int',has_traffic_filtering=True,id=7a96cb69-1b0c-4a9b-ba10-a52d7978baf1,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a96cb69-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.246 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.246 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.247 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.249 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.249 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a96cb69-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.250 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7a96cb69-1b, col_values=(('external_ids', {'iface-id': '7a96cb69-1b0c-4a9b-ba10-a52d7978baf1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:12:96', 'vm-uuid': '59997242-8393-4545-b401-5bb1501cd680'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.251 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:21 compute-1 NetworkManager[49031]: <info>  [1765005621.2527] manager: (tap7a96cb69-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.254 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.259 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.260 226109 INFO os_vif [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:12:96,bridge_name='br-int',has_traffic_filtering=True,id=7a96cb69-1b0c-4a9b-ba10-a52d7978baf1,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a96cb69-1b')
Dec 06 07:20:21 compute-1 ceph-mon[81689]: osdmap e258: 3 total, 3 up, 3 in
Dec 06 07:20:21 compute-1 ceph-mon[81689]: pgmap v1874: 305 pgs: 305 active+clean; 293 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Dec 06 07:20:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1452383183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.404 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.405 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.405 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] No VIF found with MAC fa:16:3e:98:12:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.405 226109 INFO nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Using config drive
Dec 06 07:20:21 compute-1 nova_compute[226101]: 2025-12-06 07:20:21.661 226109 DEBUG nova.storage.rbd_utils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 59997242-8393-4545-b401-5bb1501cd680_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.095 226109 INFO nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Creating config drive at /var/lib/nova/instances/59997242-8393-4545-b401-5bb1501cd680/disk.config
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.099 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59997242-8393-4545-b401-5bb1501cd680/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpth5cl2fd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.126 226109 INFO nova.virt.libvirt.driver [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Deleting instance files /var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d_del
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.127 226109 INFO nova.virt.libvirt.driver [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Deletion of /var/lib/nova/instances/67f0eb12-5070-4bc1-846b-09eb25dec88d_del complete
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.175 226109 INFO nova.compute.manager [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Took 5.32 seconds to destroy the instance on the hypervisor.
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.176 226109 DEBUG oslo.service.loopingcall [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.177 226109 DEBUG nova.compute.manager [-] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.177 226109 DEBUG nova.network.neutron [-] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.228 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59997242-8393-4545-b401-5bb1501cd680/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpth5cl2fd" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
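The mkisofs run bracketed by the log lines above (spawned at 07:20:22.099, returned 0 here after 0.129s) is the step that packs the staged metadata directory into the ISO9660 config drive. A minimal Python sketch of the same call shape follows; the paths and publisher string are illustrative stand-ins, not values to reuse verbatim.

    # Sketch only: mirrors the mkisofs flags visible in the log above.
    import subprocess

    def make_config_drive(output_path: str, staging_dir: str, publisher: str) -> None:
        """Pack staging_dir into an ISO9660 image labelled config-2 (Joliet + Rock Ridge)."""
        cmd = [
            "/usr/bin/mkisofs",
            "-o", output_path,                                      # destination image
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",  # relaxed ISO9660 naming
            "-publisher", publisher,
            "-quiet",
            "-J", "-r",                                             # Joliet + Rock Ridge extensions
            "-V", "config-2",                                       # volume label cloud-init probes for
            staging_dir,
        ]
        subprocess.run(cmd, check=True)

    # Hypothetical usage:
    # make_config_drive("/tmp/disk.config", "/tmp/metadata_staging", "OpenStack Compute")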
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.264 226109 DEBUG nova.storage.rbd_utils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] rbd image 59997242-8393-4545-b401-5bb1501cd680_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.267 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59997242-8393-4545-b401-5bb1501cd680/disk.config 59997242-8393-4545-b401-5bb1501cd680_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/400921334' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:22 compute-1 ceph-mon[81689]: pgmap v1875: 305 pgs: 305 active+clean; 227 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.4 MiB/s wr, 360 op/s
Dec 06 07:20:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:22.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:22 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:22.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.870 226109 DEBUG oslo_concurrency.processutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59997242-8393-4545-b401-5bb1501cd680/disk.config 59997242-8393-4545-b401-5bb1501cd680_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.870 226109 INFO nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Deleting local config drive /var/lib/nova/instances/59997242-8393-4545-b401-5bb1501cd680/disk.config because it was imported into RBD.
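Per the two lines above, the freshly built ISO is pushed into the Ceph vms pool with the rbd CLI and the local copy is then removed. A hedged sketch of that import-then-clean-up step, with pool, client id, and paths as assumptions mirroring the logged command:

    import os
    import subprocess

    def import_config_drive_to_rbd(local_path: str, image_name: str,
                                   pool: str = "vms", client_id: str = "openstack",
                                   conf: str = "/etc/ceph/ceph.conf") -> None:
        # Same shape as the logged "rbd import" invocation.
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_path, image_name,
             "--image-format=2",      # format 2 images support layering/cloning
             "--id", client_id, "--conf", conf],
            check=True,
        )
        os.remove(local_path)         # the local file is redundant once it lives in RBD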
Dec 06 07:20:22 compute-1 kernel: tap7a96cb69-1b: entered promiscuous mode
Dec 06 07:20:22 compute-1 NetworkManager[49031]: <info>  [1765005622.9196] manager: (tap7a96cb69-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/151)
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.920 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:22 compute-1 ovn_controller[130279]: 2025-12-06T07:20:22Z|00328|binding|INFO|Claiming lport 7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 for this chassis.
Dec 06 07:20:22 compute-1 ovn_controller[130279]: 2025-12-06T07:20:22Z|00329|binding|INFO|7a96cb69-1b0c-4a9b-ba10-a52d7978baf1: Claiming fa:16:3e:98:12:96 10.100.0.14
Dec 06 07:20:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:22.930 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:12:96 10.100.0.14'], port_security=['fa:16:3e:98:12:96 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '59997242-8393-4545-b401-5bb1501cd680', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da41861044a74eee9bc96841e54c57cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef95b64-f92c-4c6f-a9f3-c169d32d1826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ca3fb81-8bc9-4918-afd1-69859f260a75, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=7a96cb69-1b0c-4a9b-ba10-a52d7978baf1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:22.932 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 in datapath b71f2bd2-56d4-4afa-bd37-2f65a2d061fe bound to our chassis
Dec 06 07:20:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:22.934 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b71f2bd2-56d4-4afa-bd37-2f65a2d061fe
Dec 06 07:20:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:22.946 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8d0d9bce-7d68-48a4-9bee-60c8d059d4a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:22.947 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb71f2bd2-51 in ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:20:22 compute-1 systemd-udevd[256734]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:20:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:22.949 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb71f2bd2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:20:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:22.949 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1cd263-9e3b-4b56-8e1c-84e5085d8c96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:22.950 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6c7d8d-646c-40c0-b37a-b529d00a70b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:22 compute-1 systemd-machined[190302]: New machine qemu-39-instance-00000054.
Dec 06 07:20:22 compute-1 NetworkManager[49031]: <info>  [1765005622.9621] device (tap7a96cb69-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:20:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:22.959 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[5067de3c-6112-4350-8c05-7132495b58e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:22 compute-1 NetworkManager[49031]: <info>  [1765005622.9632] device (tap7a96cb69-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.969 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:22 compute-1 systemd[1]: Started Virtual Machine qemu-39-instance-00000054.
Dec 06 07:20:22 compute-1 ovn_controller[130279]: 2025-12-06T07:20:22Z|00330|binding|INFO|Setting lport 7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 up in Southbound
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:22 compute-1 ovn_controller[130279]: 2025-12-06T07:20:22Z|00331|binding|INFO|Setting lport 7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 ovn-installed in OVS
Dec 06 07:20:22 compute-1 nova_compute[226101]: 2025-12-06 07:20:22.975 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:22.983 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a508da0d-edf1-451c-88ff-9e7a2e013a54]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.010 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[a4de69a5-c2ab-4f58-a0a4-34a056c2a315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:23 compute-1 NetworkManager[49031]: <info>  [1765005623.0164] manager: (tapb71f2bd2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/152)
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.016 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2e615c93-22e7-45cb-ae7c-de830c163909]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.042 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[74239daf-a74b-4dc7-8a64-06925448c14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.045 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8a29d469-16e5-4932-bd0e-dbfdeac85024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:23 compute-1 NetworkManager[49031]: <info>  [1765005623.0669] device (tapb71f2bd2-50): carrier: link connected
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.075 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[65b15ea4-e8b4-4782-832a-412a823fb148]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.095 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6b83a916-bf7f-4d81-811e-74be1e839d2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb71f2bd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:86:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590257, 'reachable_time': 17145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256767, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.112 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3ece0625-21a0-4287-bd78-2135a3e14477]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:8633'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 590257, 'tstamp': 590257}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256768, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.129 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fb059132-092e-4c17-bc25-efc4b0cc27bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb71f2bd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:86:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590257, 'reachable_time': 17145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256769, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
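The two RTM_NEWLINK dumps above are replies from neutron's privsep daemon, which reads link state over netlink (neutron's ip_lib wraps pyroute2). A sketch of fetching the same attributes for one interface in a named namespace follows; this is an assumption-level illustration of the mechanism, not the agent's actual code path.

    from pyroute2 import NetNS

    def link_summary(namespace: str, ifname: str) -> dict:
        ns = NetNS(namespace, flags=0)              # attach only; do not create the namespace
        try:
            idx = ns.link_lookup(ifname=ifname)[0]  # IndexError if the interface is absent
            msg = ns.get_links(idx)[0]              # one RTM_NEWLINK message, as in the log
            return {
                "mac": msg.get_attr("IFLA_ADDRESS"),
                "mtu": msg.get_attr("IFLA_MTU"),
                "state": msg.get_attr("IFLA_OPERSTATE"),
            }
        finally:
            ns.close()

    # Hypothetical usage:
    # link_summary("ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "tapb71f2bd2-51")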
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.159 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fea34dc1-08a0-4752-ab48-c3d08e977ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.221 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b58b4268-6d4c-43cd-91ab-efacb9db5c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.222 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb71f2bd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.222 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.222 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb71f2bd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:23 compute-1 kernel: tapb71f2bd2-50: entered promiscuous mode
Dec 06 07:20:23 compute-1 NetworkManager[49031]: <info>  [1765005623.2259] manager: (tapb71f2bd2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.226 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb71f2bd2-50, col_values=(('external_ids', {'iface-id': '4c28345a-5229-44ea-b258-9761247daac8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
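The DelPortCommand/AddPortCommand/DbSetCommand transactions above move the metadata VETH leg onto br-int and stamp it with the Neutron port UUID so ovn-controller can bind the logical port. The same rewire can be expressed with the ovs-vsctl CLI; a sketch, with values copied from the log but intended as placeholders:

    import subprocess

    def plug_metadata_veth(port: str, iface_id: str, bridge: str = "br-int") -> None:
        subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", port], check=True)
        subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, port], check=True)
        subprocess.run(
            ["ovs-vsctl", "set", "Interface", port,
             f"external_ids:iface-id={iface_id}"],   # what ovn-controller matches on
            check=True,
        )

    # Hypothetical usage:
    # plug_metadata_veth("tapb71f2bd2-50", "4c28345a-5229-44ea-b258-9761247daac8")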
Dec 06 07:20:23 compute-1 ovn_controller[130279]: 2025-12-06T07:20:23Z|00332|binding|INFO|Releasing lport 4c28345a-5229-44ea-b258-9761247daac8 from this chassis (sb_readonly=0)
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.245 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.246 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b71f2bd2-56d4-4afa-bd37-2f65a2d061fe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b71f2bd2-56d4-4afa-bd37-2f65a2d061fe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.247 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0432c332-fed3-4781-8f04-8aa9f11315f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.248 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/b71f2bd2-56d4-4afa-bd37-2f65a2d061fe.pid.haproxy
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID b71f2bd2-56d4-4afa-bd37-2f65a2d061fe
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:20:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:23.249 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'env', 'PROCESS_TAG=haproxy-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b71f2bd2-56d4-4afa-bd37-2f65a2d061fe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
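The rendered haproxy_cfg above is written under /var/lib/neutron/ovn-metadata-proxy/ and haproxy is launched inside the ovnmeta namespace, which is what the rootwrap command line on the previous log line does. A stripped-down sketch of that launch, omitting rootwrap and the PROCESS_TAG environment handling:

    import subprocess

    def spawn_metadata_proxy(namespace: str, conf_path: str) -> None:
        # haproxy daemonizes itself (the config carries "daemon"), so this returns quickly.
        subprocess.run(
            ["ip", "netns", "exec", namespace, "haproxy", "-f", conf_path],
            check=True,
        )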
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.257 226109 DEBUG nova.network.neutron [-] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.276 226109 INFO nova.compute.manager [-] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Took 1.10 seconds to deallocate network for instance.
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.342 226109 DEBUG oslo_concurrency.lockutils [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.343 226109 DEBUG oslo_concurrency.lockutils [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.348 226109 DEBUG oslo_concurrency.lockutils [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.391 226109 INFO nova.scheduler.client.report [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Deleted allocations for instance 67f0eb12-5070-4bc1-846b-09eb25dec88d
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.487 226109 DEBUG oslo_concurrency.lockutils [None req-464ab7f9-8461-4a4b-b612-4283d6613e99 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "67f0eb12-5070-4bc1-846b-09eb25dec88d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
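The Acquiring/acquired/released triplets above, and the "held 6.636s" summary closing the terminate path, come from oslo_concurrency's lockutils, which times how long a named lock was waited on and held. A minimal sketch of the same pattern; the lock name mirrors the log, the body is a placeholder:

    from oslo_concurrency import lockutils

    def update_usage_critical_section() -> None:
        # lockutils produces waited/held DEBUG lines like the ones in this journal.
        with lockutils.lock("compute_resources"):
            pass  # resource-tracker bookkeeping would run here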
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.558 226109 DEBUG nova.compute.manager [req-21b34763-6a1a-40f6-a1c7-c864dccff190 req-c4615810-74f0-47f4-8d45-265faec08093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Received event network-vif-plugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.560 226109 DEBUG oslo_concurrency.lockutils [req-21b34763-6a1a-40f6-a1c7-c864dccff190 req-c4615810-74f0-47f4-8d45-265faec08093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "59997242-8393-4545-b401-5bb1501cd680-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.561 226109 DEBUG oslo_concurrency.lockutils [req-21b34763-6a1a-40f6-a1c7-c864dccff190 req-c4615810-74f0-47f4-8d45-265faec08093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.561 226109 DEBUG oslo_concurrency.lockutils [req-21b34763-6a1a-40f6-a1c7-c864dccff190 req-c4615810-74f0-47f4-8d45-265faec08093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.562 226109 DEBUG nova.compute.manager [req-21b34763-6a1a-40f6-a1c7-c864dccff190 req-c4615810-74f0-47f4-8d45-265faec08093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Processing event network-vif-plugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:20:23 compute-1 podman[256801]: 2025-12-06 07:20:23.654741165 +0000 UTC m=+0.100405274 container create a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:20:23 compute-1 podman[256801]: 2025-12-06 07:20:23.575636719 +0000 UTC m=+0.021300848 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:20:23 compute-1 systemd[1]: Started libpod-conmon-a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc.scope.
Dec 06 07:20:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3992363602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:23 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:20:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12da398366e7ebef90635ec64cf7aa6576cb6bdbbe04b3735f728365e877b55f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:20:23 compute-1 podman[256801]: 2025-12-06 07:20:23.739966657 +0000 UTC m=+0.185630796 container init a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:20:23 compute-1 podman[256801]: 2025-12-06 07:20:23.746900065 +0000 UTC m=+0.192564174 container start a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:20:23 compute-1 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[256838]: [NOTICE]   (256872) : New worker (256882) forked
Dec 06 07:20:23 compute-1 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[256838]: [NOTICE]   (256872) : Loading success.
Dec 06 07:20:23 compute-1 podman[256814]: 2025-12-06 07:20:23.773089075 +0000 UTC m=+0.085652394 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.885 226109 DEBUG nova.compute.manager [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.886 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005623.885334, 59997242-8393-4545-b401-5bb1501cd680 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.886 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] VM Started (Lifecycle Event)
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.889 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.892 226109 INFO nova.virt.libvirt.driver [-] [instance: 59997242-8393-4545-b401-5bb1501cd680] Instance spawned successfully.
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.892 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.911 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.917 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.922 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.922 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.923 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.923 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.924 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.924 226109 DEBUG nova.virt.libvirt.driver [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.948 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.948 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005623.8854723, 59997242-8393-4545-b401-5bb1501cd680 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:23 compute-1 nova_compute[226101]: 2025-12-06 07:20:23.949 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] VM Paused (Lifecycle Event)
Dec 06 07:20:24 compute-1 nova_compute[226101]: 2025-12-06 07:20:24.021 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:24 compute-1 nova_compute[226101]: 2025-12-06 07:20:24.024 226109 INFO nova.compute.manager [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Took 8.78 seconds to spawn the instance on the hypervisor.
Dec 06 07:20:24 compute-1 nova_compute[226101]: 2025-12-06 07:20:24.024 226109 DEBUG nova.compute.manager [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:24 compute-1 nova_compute[226101]: 2025-12-06 07:20:24.028 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005623.8888428, 59997242-8393-4545-b401-5bb1501cd680 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:24 compute-1 nova_compute[226101]: 2025-12-06 07:20:24.028 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] VM Resumed (Lifecycle Event)
Dec 06 07:20:24 compute-1 nova_compute[226101]: 2025-12-06 07:20:24.058 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:24 compute-1 nova_compute[226101]: 2025-12-06 07:20:24.061 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:20:24 compute-1 nova_compute[226101]: 2025-12-06 07:20:24.081 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:20:24 compute-1 nova_compute[226101]: 2025-12-06 07:20:24.092 226109 INFO nova.compute.manager [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Took 9.75 seconds to build instance.
Dec 06 07:20:24 compute-1 nova_compute[226101]: 2025-12-06 07:20:24.108 226109 DEBUG oslo_concurrency.lockutils [None req-c4d07e55-582b-4b06-8924-472c4e1447ea e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
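With the build lock released after 9.855s, the spawn is complete on the hypervisor side. A client-side check of that outcome with openstacksdk; the cloud name is a hypothetical clouds.yaml entry, while the UUID is the instance traced through this log:

    import openstack

    conn = openstack.connect(cloud="mycloud")   # hypothetical clouds.yaml entry
    server = conn.compute.get_server("59997242-8393-4545-b401-5bb1501cd680")
    print(server.status)                        # expected: ACTIVE after the build above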
Dec 06 07:20:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:24.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:24 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:24.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:25 compute-1 nova_compute[226101]: 2025-12-06 07:20:25.117 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:25 compute-1 nova_compute[226101]: 2025-12-06 07:20:25.260 226109 DEBUG nova.compute.manager [req-dc3d9fb8-daa3-465e-b491-15f5c0bf0bc2 req-debc9763-03f1-4186-bbf9-ca544b9cd267 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Received event network-vif-deleted-b26720ee-8d01-41bb-ad74-31e572887a36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:25 compute-1 nova_compute[226101]: 2025-12-06 07:20:25.811 226109 DEBUG nova.compute.manager [req-aea84267-9034-44cc-beac-fe98292c94fd req-4ae70fea-3766-4c19-875d-da1c3d336abe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Received event network-vif-plugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:25 compute-1 nova_compute[226101]: 2025-12-06 07:20:25.812 226109 DEBUG oslo_concurrency.lockutils [req-aea84267-9034-44cc-beac-fe98292c94fd req-4ae70fea-3766-4c19-875d-da1c3d336abe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "59997242-8393-4545-b401-5bb1501cd680-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:25 compute-1 nova_compute[226101]: 2025-12-06 07:20:25.812 226109 DEBUG oslo_concurrency.lockutils [req-aea84267-9034-44cc-beac-fe98292c94fd req-4ae70fea-3766-4c19-875d-da1c3d336abe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:25 compute-1 nova_compute[226101]: 2025-12-06 07:20:25.812 226109 DEBUG oslo_concurrency.lockutils [req-aea84267-9034-44cc-beac-fe98292c94fd req-4ae70fea-3766-4c19-875d-da1c3d336abe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:25 compute-1 nova_compute[226101]: 2025-12-06 07:20:25.812 226109 DEBUG nova.compute.manager [req-aea84267-9034-44cc-beac-fe98292c94fd req-4ae70fea-3766-4c19-875d-da1c3d336abe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] No waiting events found dispatching network-vif-plugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:25 compute-1 nova_compute[226101]: 2025-12-06 07:20:25.813 226109 WARNING nova.compute.manager [req-aea84267-9034-44cc-beac-fe98292c94fd req-4ae70fea-3766-4c19-875d-da1c3d336abe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Received unexpected event network-vif-plugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 for instance with vm_state active and task_state None.
Dec 06 07:20:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/511668033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:26 compute-1 ceph-mon[81689]: pgmap v1876: 305 pgs: 305 active+clean; 227 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.4 MiB/s wr, 360 op/s
Dec 06 07:20:26 compute-1 nova_compute[226101]: 2025-12-06 07:20:26.298 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:26.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:26.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:27 compute-1 ceph-mon[81689]: pgmap v1877: 305 pgs: 305 active+clean; 227 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.6 MiB/s wr, 348 op/s
Dec 06 07:20:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1648838874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:28 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:29 compute-1 podman[256898]: 2025-12-06 07:20:29.07122966 +0000 UTC m=+0.048081715 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:20:29 compute-1 podman[256897]: 2025-12-06 07:20:29.071279883 +0000 UTC m=+0.050317216 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 07:20:30 compute-1 nova_compute[226101]: 2025-12-06 07:20:30.120 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:30.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:30 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:30.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:31 compute-1 nova_compute[226101]: 2025-12-06 07:20:31.301 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:32 compute-1 ceph-mon[81689]: pgmap v1878: 305 pgs: 305 active+clean; 227 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.3 MiB/s wr, 358 op/s
Dec 06 07:20:32 compute-1 nova_compute[226101]: 2025-12-06 07:20:32.088 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005617.0877323, 67f0eb12-5070-4bc1-846b-09eb25dec88d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:32 compute-1 nova_compute[226101]: 2025-12-06 07:20:32.089 226109 INFO nova.compute.manager [-] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] VM Stopped (Lifecycle Event)
Dec 06 07:20:32 compute-1 nova_compute[226101]: 2025-12-06 07:20:32.108 226109 DEBUG nova.compute.manager [None req-e7989f98-e148-4e99-8d8e-a14ec7bfc40e - - - - - -] [instance: 67f0eb12-5070-4bc1-846b-09eb25dec88d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:32.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:32 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:33 compute-1 ceph-mon[81689]: pgmap v1879: 305 pgs: 305 active+clean; 227 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.2 MiB/s wr, 347 op/s
Dec 06 07:20:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3117456983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:33 compute-1 ceph-mon[81689]: pgmap v1880: 305 pgs: 305 active+clean; 242 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.1 MiB/s wr, 373 op/s
Dec 06 07:20:34 compute-1 ceph-mon[81689]: pgmap v1881: 305 pgs: 305 active+clean; 243 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.6 MiB/s wr, 238 op/s
Dec 06 07:20:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:34.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:34 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:34.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:35 compute-1 nova_compute[226101]: 2025-12-06 07:20:35.120 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:35 compute-1 ovn_controller[130279]: 2025-12-06T07:20:35Z|00333|binding|INFO|Releasing lport 4c28345a-5229-44ea-b258-9761247daac8 from this chassis (sb_readonly=0)
Dec 06 07:20:35 compute-1 nova_compute[226101]: 2025-12-06 07:20:35.313 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:35 compute-1 ovn_controller[130279]: 2025-12-06T07:20:35Z|00334|binding|INFO|Releasing lport 4c28345a-5229-44ea-b258-9761247daac8 from this chassis (sb_readonly=0)
Dec 06 07:20:35 compute-1 nova_compute[226101]: 2025-12-06 07:20:35.485 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:36 compute-1 nova_compute[226101]: 2025-12-06 07:20:36.303 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:36 compute-1 ceph-mon[81689]: pgmap v1882: 305 pgs: 305 active+clean; 263 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.8 MiB/s wr, 293 op/s
Dec 06 07:20:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:36.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:36.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:37 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Dec 06 07:20:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4203592398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2887186267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:38 compute-1 ovn_controller[130279]: 2025-12-06T07:20:38Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:12:96 10.100.0.14
Dec 06 07:20:38 compute-1 ovn_controller[130279]: 2025-12-06T07:20:38Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:12:96 10.100.0.14
Dec 06 07:20:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:38.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:38.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:39 compute-1 ceph-mon[81689]: pgmap v1883: 305 pgs: 305 active+clean; 343 MiB data, 828 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 313 op/s
Dec 06 07:20:40 compute-1 nova_compute[226101]: 2025-12-06 07:20:40.124 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:40 compute-1 ceph-mon[81689]: pgmap v1884: 305 pgs: 305 active+clean; 343 MiB data, 828 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 230 op/s
Dec 06 07:20:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:40 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:40.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:40.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:41 compute-1 nova_compute[226101]: 2025-12-06 07:20:41.305 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:42.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:42.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:44.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:44 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:44.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:45 compute-1 nova_compute[226101]: 2025-12-06 07:20:45.125 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:45 compute-1 ceph-mon[81689]: pgmap v1885: 305 pgs: 305 active+clean; 367 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 8.3 MiB/s wr, 300 op/s
Dec 06 07:20:46 compute-1 nova_compute[226101]: 2025-12-06 07:20:46.309 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:46 compute-1 ceph-mon[81689]: pgmap v1886: 305 pgs: 305 active+clean; 377 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.5 MiB/s wr, 244 op/s
Dec 06 07:20:46 compute-1 ceph-mon[81689]: pgmap v1887: 305 pgs: 305 active+clean; 394 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 8.5 MiB/s wr, 262 op/s
Dec 06 07:20:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2225651671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:46 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:46.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:46.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3154939193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:48.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:48.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:49 compute-1 ceph-mon[81689]: pgmap v1888: 305 pgs: 305 active+clean; 394 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 274 op/s
Dec 06 07:20:50 compute-1 nova_compute[226101]: 2025-12-06 07:20:50.128 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:50 compute-1 ceph-mon[81689]: pgmap v1889: 305 pgs: 305 active+clean; 394 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 195 op/s
Dec 06 07:20:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:50.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:50.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:51 compute-1 nova_compute[226101]: 2025-12-06 07:20:51.311 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:20:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:52.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:52.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:52 compute-1 ceph-mon[81689]: pgmap v1890: 305 pgs: 305 active+clean; 405 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 243 op/s
Dec 06 07:20:54 compute-1 podman[256932]: 2025-12-06 07:20:54.124437706 +0000 UTC m=+0.098478823 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:20:54 compute-1 nova_compute[226101]: 2025-12-06 07:20:54.314 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:54 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:54.314 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:54 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:20:54.315 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:20:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:54.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:54.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:55 compute-1 nova_compute[226101]: 2025-12-06 07:20:55.129 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:56 compute-1 nova_compute[226101]: 2025-12-06 07:20:56.313 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:56 compute-1 nova_compute[226101]: 2025-12-06 07:20:56.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:56 compute-1 nova_compute[226101]: 2025-12-06 07:20:56.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:20:56 compute-1 nova_compute[226101]: 2025-12-06 07:20:56.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:20:56 compute-1 ceph-mon[81689]: pgmap v1891: 305 pgs: 305 active+clean; 405 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.0 MiB/s wr, 206 op/s
Dec 06 07:20:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:56.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:56 compute-1 nova_compute[226101]: 2025-12-06 07:20:56.887 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-59997242-8393-4545-b401-5bb1501cd680" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:20:56 compute-1 nova_compute[226101]: 2025-12-06 07:20:56.888 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-59997242-8393-4545-b401-5bb1501cd680" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:20:56 compute-1 nova_compute[226101]: 2025-12-06 07:20:56.888 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:20:56 compute-1 nova_compute[226101]: 2025-12-06 07:20:56.888 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 59997242-8393-4545-b401-5bb1501cd680 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:56.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:58.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:20:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:20:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:58.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:20:59 compute-1 ceph-mon[81689]: pgmap v1892: 305 pgs: 305 active+clean; 405 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.3 MiB/s wr, 189 op/s
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.795 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] Updating instance_info_cache with network_info: [{"id": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "address": "fa:16:3e:98:12:96", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a96cb69-1b", "ovs_interfaceid": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.833 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-59997242-8393-4545-b401-5bb1501cd680" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.833 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.834 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.834 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.834 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.834 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.858 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.859 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.859 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.859 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:20:59 compute-1 nova_compute[226101]: 2025-12-06 07:20:59.860 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:00 compute-1 podman[256969]: 2025-12-06 07:21:00.084995958 +0000 UTC m=+0.062180719 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:21:00 compute-1 podman[256979]: 2025-12-06 07:21:00.085441489 +0000 UTC m=+0.058611931 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.131 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:21:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3765889850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.311 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.417 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.417 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.584 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.585 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4425MB free_disk=20.793289184570312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.585 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.586 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.652 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 59997242-8393-4545-b401-5bb1501cd680 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.653 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.653 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:21:00 compute-1 nova_compute[226101]: 2025-12-06 07:21:00.688 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:00.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:00.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:21:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3683988776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:01 compute-1 nova_compute[226101]: 2025-12-06 07:21:01.117 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:01 compute-1 nova_compute[226101]: 2025-12-06 07:21:01.126 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:21:01 compute-1 nova_compute[226101]: 2025-12-06 07:21:01.195 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:21:01 compute-1 nova_compute[226101]: 2025-12-06 07:21:01.226 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:21:01 compute-1 nova_compute[226101]: 2025-12-06 07:21:01.226 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:01 compute-1 nova_compute[226101]: 2025-12-06 07:21:01.314 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:01.638 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:01.638 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:01.639 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:01 compute-1 ceph-mon[81689]: pgmap v1893: 305 pgs: 305 active+clean; 405 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 209 KiB/s wr, 156 op/s
Dec 06 07:21:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2743282048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:01 compute-1 ceph-mon[81689]: pgmap v1894: 305 pgs: 305 active+clean; 405 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 115 KiB/s wr, 88 op/s
Dec 06 07:21:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:02.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:21:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:02.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:21:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3765889850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3683988776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:03 compute-1 ceph-mon[81689]: pgmap v1895: 305 pgs: 305 active+clean; 354 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 148 op/s
Dec 06 07:21:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/546256446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:03 compute-1 sudo[257041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:03 compute-1 sudo[257041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:03 compute-1 sudo[257041]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:03 compute-1 sudo[257066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:21:03 compute-1 sudo[257066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:03 compute-1 sudo[257066]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:03 compute-1 sudo[257091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:03 compute-1 sudo[257091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:03 compute-1 sudo[257091]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:03.317 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:03 compute-1 sudo[257116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:21:03 compute-1 sudo[257116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:03 compute-1 sudo[257116]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:03 compute-1 nova_compute[226101]: 2025-12-06 07:21:03.982 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:03 compute-1 nova_compute[226101]: 2025-12-06 07:21:03.983 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:04 compute-1 nova_compute[226101]: 2025-12-06 07:21:04.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:04 compute-1 nova_compute[226101]: 2025-12-06 07:21:04.584 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:21:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:04.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:21:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:04.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:04 compute-1 ceph-mon[81689]: pgmap v1896: 305 pgs: 305 active+clean; 347 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 112 op/s
Dec 06 07:21:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:21:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:21:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2274334364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
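
Each cmd=[{"prefix": "df", "format": "json"}]: dispatch entry records a monitor command arriving from a client; here the client.openstack cephx user polls pool usage. A hedged sketch of the client side of that dispatch, using the python-rados bindings:

    import json

    import rados

    # Connect as the same cephx user the log shows ('client.openstack').
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()

    # This is the call the monitor logs as a '"prefix": "df"' dispatch.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    cluster.shutdown()
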
Dec 06 07:21:05 compute-1 nova_compute[226101]: 2025-12-06 07:21:05.139 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:05 compute-1 nova_compute[226101]: 2025-12-06 07:21:05.325 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "29933cde-dbb3-4316-87f2-51a52500e040" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:05 compute-1 nova_compute[226101]: 2025-12-06 07:21:05.325 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:05 compute-1 nova_compute[226101]: 2025-12-06 07:21:05.350 226109 DEBUG nova.compute.manager [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:21:05 compute-1 nova_compute[226101]: 2025-12-06 07:21:05.436 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:05 compute-1 nova_compute[226101]: 2025-12-06 07:21:05.436 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
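
The Acquiring/acquired/"released" triplets around "compute_resources" come from oslo.concurrency's lock wrapper; the `inner` function it installs emits exactly these DEBUG lines on entry and exit. A minimal sketch of the pattern, with an illustrative function body:

    from oslo_concurrency import lockutils

    # Serializes all resource-tracker mutations under one in-process lock;
    # entering and leaving the wrapper produces the DEBUG lines seen above.
    @lockutils.synchronized('compute_resources')
    def instance_claim():
        pass  # claim bookkeeping runs while the lock is held
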
Dec 06 07:21:05 compute-1 nova_compute[226101]: 2025-12-06 07:21:05.442 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:21:05 compute-1 nova_compute[226101]: 2025-12-06 07:21:05.442 226109 INFO nova.compute.claims [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:21:05 compute-1 nova_compute[226101]: 2025-12-06 07:21:05.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:05 compute-1 nova_compute[226101]: 2025-12-06 07:21:05.618 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:06 compute-1 nova_compute[226101]: 2025-12-06 07:21:06.363 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:06.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:06.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3572484380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:21:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:21:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/934338195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.669 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
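
The ceph df round trip above (launched at 07:21:05.618, returned 2.051s later) is a plain subprocess run through oslo.concurrency, which is what produces the paired "Running cmd"/"returned: 0" lines. A hedged equivalent, arguments taken verbatim from the log:

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises ProcessExecutionError
    # on a nonzero exit code.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
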
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.675 226109 DEBUG nova.compute.provider_tree [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.695 226109 DEBUG nova.scheduler.client.report [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
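
For reference, placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the unchanged report above corresponds to:

    # Capacity implied by the logged inventory for provider 466e0fbd-...
    vcpu    = (8 - 0) * 4.0        # 32.0 schedulable VCPUs
    ram_mb  = (7680 - 512) * 1.0   # 7168.0 MiB of schedulable RAM
    disk_gb = (20 - 1) * 0.9       # 17.1 GiB of schedulable disk
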
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.725 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.726 226109 DEBUG nova.compute.manager [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.782 226109 DEBUG nova.compute.manager [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.783 226109 DEBUG nova.network.neutron [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.805 226109 INFO nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.820 226109 DEBUG nova.compute.manager [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.919 226109 DEBUG nova.compute.manager [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.920 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.921 226109 INFO nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Creating image(s)
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.948 226109 DEBUG nova.storage.rbd_utils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 29933cde-dbb3-4316-87f2-51a52500e040_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:07 compute-1 nova_compute[226101]: 2025-12-06 07:21:07.986 226109 DEBUG nova.storage.rbd_utils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 29933cde-dbb3-4316-87f2-51a52500e040_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:08 compute-1 nova_compute[226101]: 2025-12-06 07:21:08.017 226109 DEBUG nova.storage.rbd_utils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 29933cde-dbb3-4316-87f2-51a52500e040_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:08 compute-1 nova_compute[226101]: 2025-12-06 07:21:08.022 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:08 compute-1 nova_compute[226101]: 2025-12-06 07:21:08.048 226109 DEBUG nova.policy [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd966fefcb38a45219b9cc637c46a3d62', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:21:08 compute-1 nova_compute[226101]: 2025-12-06 07:21:08.086 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
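
The qemu-img probe above runs under the oslo_concurrency.prlimit wrapper, which caps the child's address space at 1 GiB and its CPU time at 30 seconds so a malformed image cannot wedge the compute service. A hedged sketch of the same bounded call:

    from oslo_concurrency import processutils

    # ProcessLimits maps to the logged wrapper flags: --as=1073741824 --cpu=30.
    limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3, cpu_time=30)
    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json', prlimit=limits)
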
Dec 06 07:21:08 compute-1 nova_compute[226101]: 2025-12-06 07:21:08.086 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:08 compute-1 nova_compute[226101]: 2025-12-06 07:21:08.087 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:08 compute-1 nova_compute[226101]: 2025-12-06 07:21:08.087 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:08 compute-1 nova_compute[226101]: 2025-12-06 07:21:08.116 226109 DEBUG nova.storage.rbd_utils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 29933cde-dbb3-4316-87f2-51a52500e040_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:08 compute-1 nova_compute[226101]: 2025-12-06 07:21:08.120 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 29933cde-dbb3-4316-87f2-51a52500e040_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:08 compute-1 ceph-mon[81689]: pgmap v1897: 305 pgs: 305 active+clean; 300 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 618 KiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 06 07:21:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:21:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:21:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:21:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2443488506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:08 compute-1 ceph-mon[81689]: pgmap v1898: 305 pgs: 305 active+clean; 272 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 856 KiB/s rd, 2.2 MiB/s wr, 146 op/s
Dec 06 07:21:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:08.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:08 compute-1 nova_compute[226101]: 2025-12-06 07:21:08.882 226109 DEBUG nova.network.neutron [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Successfully created port: 30bdfeaa-383d-4195-b12e-c2a72618ba1d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:21:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:08.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:09 compute-1 nova_compute[226101]: 2025-12-06 07:21:09.037 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 29933cde-dbb3-4316-87f2-51a52500e040_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.917s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:09 compute-1 nova_compute[226101]: 2025-12-06 07:21:09.106 226109 DEBUG nova.storage.rbd_utils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] resizing rbd image 29933cde-dbb3-4316-87f2-51a52500e040_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
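
The import/resize pair above first shells out to rbd import and then grows the new image to the flavor's 1 GiB root disk. A hedged sketch of the resize step through the python-rbd bindings, with the pool and image names taken from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')  # the pool named in the rbd import above
    image = rbd.Image(ioctx, '29933cde-dbb3-4316-87f2-51a52500e040_disk')
    image.resize(1073741824)  # 1 GiB, matching the logged resize target
    image.close()
    ioctx.close()
    cluster.shutdown()
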
Dec 06 07:21:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/559791660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:21:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/559791660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:21:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2657122733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:09 compute-1 nova_compute[226101]: 2025-12-06 07:21:09.870 226109 DEBUG nova.objects.instance [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'migration_context' on Instance uuid 29933cde-dbb3-4316-87f2-51a52500e040 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:09 compute-1 nova_compute[226101]: 2025-12-06 07:21:09.971 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:21:09 compute-1 nova_compute[226101]: 2025-12-06 07:21:09.972 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Ensure instance console log exists: /var/lib/nova/instances/29933cde-dbb3-4316-87f2-51a52500e040/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:21:09 compute-1 nova_compute[226101]: 2025-12-06 07:21:09.972 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:09 compute-1 nova_compute[226101]: 2025-12-06 07:21:09.973 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:09 compute-1 nova_compute[226101]: 2025-12-06 07:21:09.973 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.061 226109 DEBUG oslo_concurrency.lockutils [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "59997242-8393-4545-b401-5bb1501cd680" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.062 226109 DEBUG oslo_concurrency.lockutils [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.062 226109 DEBUG oslo_concurrency.lockutils [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "59997242-8393-4545-b401-5bb1501cd680-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.062 226109 DEBUG oslo_concurrency.lockutils [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.062 226109 DEBUG oslo_concurrency.lockutils [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.064 226109 INFO nova.compute.manager [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Terminating instance
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.065 226109 DEBUG nova.compute.manager [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:21:10 compute-1 kernel: tap7a96cb69-1b (unregistering): left promiscuous mode
Dec 06 07:21:10 compute-1 NetworkManager[49031]: <info>  [1765005670.1157] device (tap7a96cb69-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:21:10 compute-1 ovn_controller[130279]: 2025-12-06T07:21:10Z|00335|binding|INFO|Releasing lport 7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 from this chassis (sb_readonly=0)
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.127 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-1 ovn_controller[130279]: 2025-12-06T07:21:10Z|00336|binding|INFO|Setting lport 7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 down in Southbound
Dec 06 07:21:10 compute-1 ovn_controller[130279]: 2025-12-06T07:21:10Z|00337|binding|INFO|Removing iface tap7a96cb69-1b ovn-installed in OVS
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.130 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.133 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:12:96 10.100.0.14'], port_security=['fa:16:3e:98:12:96 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '59997242-8393-4545-b401-5bb1501cd680', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da41861044a74eee9bc96841e54c57cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef95b64-f92c-4c6f-a9f3-c169d32d1826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ca3fb81-8bc9-4918-afd1-69859f260a75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=7a96cb69-1b0c-4a9b-ba10-a52d7978baf1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.134 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 in datapath b71f2bd2-56d4-4afa-bd37-2f65a2d061fe unbound from our chassis
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.135 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.137 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[472363e9-2881-4041-a824-9decc09ef405]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.137 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe namespace which is not needed anymore
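
The "Matched UPDATE: PortBindingUpdatedEvent" handling above is ovsdbapp's row-event machinery: the agent registers an event that matches updates to Port_Binding rows and reacts when a port is bound to or unbound from its chassis. A stripped-down sketch of that pattern (the real neutron class carries chassis checks this omits):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire on any update to a Port_Binding row (simplified)."""

        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # The agent inspects row.chassis / row.up here to decide whether
            # the port moved on or off this chassis.
            print('Port_Binding updated:', row.logical_port)
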
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.142 226109 DEBUG nova.network.neutron [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Successfully updated port: 30bdfeaa-383d-4195-b12e-c2a72618ba1d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.146 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.147 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.157 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "refresh_cache-29933cde-dbb3-4316-87f2-51a52500e040" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.157 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquired lock "refresh_cache-29933cde-dbb3-4316-87f2-51a52500e040" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.157 226109 DEBUG nova.network.neutron [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:21:10 compute-1 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000054.scope: Deactivated successfully.
Dec 06 07:21:10 compute-1 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000054.scope: Consumed 15.733s CPU time.
Dec 06 07:21:10 compute-1 systemd-machined[190302]: Machine qemu-39-instance-00000054 terminated.
Dec 06 07:21:10 compute-1 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[256838]: [NOTICE]   (256872) : haproxy version is 2.8.14-c23fe91
Dec 06 07:21:10 compute-1 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[256838]: [NOTICE]   (256872) : path to executable is /usr/sbin/haproxy
Dec 06 07:21:10 compute-1 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[256838]: [ALERT]    (256872) : Current worker (256882) exited with code 143 (Terminated)
Dec 06 07:21:10 compute-1 neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe[256838]: [WARNING]  (256872) : All workers exited. Exiting... (0)
Dec 06 07:21:10 compute-1 systemd[1]: libpod-a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc.scope: Deactivated successfully.
Dec 06 07:21:10 compute-1 NetworkManager[49031]: <info>  [1765005670.2854] manager: (tap7a96cb69-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.286 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-1 podman[257383]: 2025-12-06 07:21:10.292549002 +0000 UTC m=+0.064901691 container died a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.293 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.303 226109 INFO nova.virt.libvirt.driver [-] [instance: 59997242-8393-4545-b401-5bb1501cd680] Instance destroyed successfully.
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.303 226109 DEBUG nova.objects.instance [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lazy-loading 'resources' on Instance uuid 59997242-8393-4545-b401-5bb1501cd680 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.322 226109 DEBUG nova.virt.libvirt.vif [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1871962430',display_name='tempest-ListServerFiltersTestJSON-instance-1871962430',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1871962430',id=84,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da41861044a74eee9bc96841e54c57cc',ramdisk_id='',reservation_id='r-v07a1ebn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1681699386',owner_user_name='tempest-ListServerFiltersTestJSON-1681699386-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:20:24Z,user_data=None,user_id='e3d8ea3dcb5b4ee3b9bd24c62fdd61b2',uuid=59997242-8393-4545-b401-5bb1501cd680,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "address": "fa:16:3e:98:12:96", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a96cb69-1b", "ovs_interfaceid": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.323 226109 DEBUG nova.network.os_vif_util [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converting VIF {"id": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "address": "fa:16:3e:98:12:96", "network": {"id": "b71f2bd2-56d4-4afa-bd37-2f65a2d061fe", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1522505220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da41861044a74eee9bc96841e54c57cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a96cb69-1b", "ovs_interfaceid": "7a96cb69-1b0c-4a9b-ba10-a52d7978baf1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.324 226109 DEBUG nova.network.os_vif_util [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:12:96,bridge_name='br-int',has_traffic_filtering=True,id=7a96cb69-1b0c-4a9b-ba10-a52d7978baf1,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a96cb69-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.324 226109 DEBUG os_vif [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:12:96,bridge_name='br-int',has_traffic_filtering=True,id=7a96cb69-1b0c-4a9b-ba10-a52d7978baf1,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a96cb69-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.327 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.328 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a96cb69-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
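
DelPortCommand is the transaction os-vif issues through ovsdbapp's Open_vSwitch schema API to pull the tap device off br-int. A hedged equivalent; the local ovsdb-server socket path is an assumption, the port and bridge names are from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same operation the log records: drop the port, tolerating its absence.
    api.del_port('tap7a96cb69-1b', bridge='br-int', if_exists=True).execute(
        check_error=True)
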
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.329 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.330 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.333 226109 INFO os_vif [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:12:96,bridge_name='br-int',has_traffic_filtering=True,id=7a96cb69-1b0c-4a9b-ba10-a52d7978baf1,network=Network(b71f2bd2-56d4-4afa-bd37-2f65a2d061fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a96cb69-1b')
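
At the os-vif layer the whole teardown above is a single unplug() call on the VIFOpenVSwitch object printed in the log. A hedged sketch of that call chain; the VIF is rebuilt here with only the fields needed for illustration, and the instance name is taken from the systemd machine name:

    import os_vif
    from os_vif.objects import instance_info
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # loads the registered os-vif plugins (ovs, etc.)
    vif = vif_obj.VIFOpenVSwitch(
        id='7a96cb69-1b0c-4a9b-ba10-a52d7978baf1',
        address='fa:16:3e:98:12:96',
        vif_name='tap7a96cb69-1b',
        bridge_name='br-int',
        plugin='ovs')
    info = instance_info.InstanceInfo(
        uuid='59997242-8393-4545-b401-5bb1501cd680',
        name='instance-00000054')
    os_vif.unplug(vif, info)
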
Dec 06 07:21:10 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc-userdata-shm.mount: Deactivated successfully.
Dec 06 07:21:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-12da398366e7ebef90635ec64cf7aa6576cb6bdbbe04b3735f728365e877b55f-merged.mount: Deactivated successfully.
Dec 06 07:21:10 compute-1 podman[257383]: 2025-12-06 07:21:10.521102792 +0000 UTC m=+0.293455481 container cleanup a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:21:10 compute-1 systemd[1]: libpod-conmon-a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc.scope: Deactivated successfully.
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.542 226109 DEBUG nova.network.neutron [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:10 compute-1 podman[257439]: 2025-12-06 07:21:10.758920453 +0000 UTC m=+0.216057201 container remove a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.764 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1d70ad72-639e-4f0f-94f2-2f6f00c5e97b]: (4, ('Sat Dec  6 07:21:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe (a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc)\na3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc\nSat Dec  6 07:21:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe (a3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc)\na3bfd9b79c9facf4d782552de8edcd185398e9a7af6f5523424d38910e7715fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.766 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3ddb44f0-1f81-499c-87cf-0ec604b08565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.767 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb71f2bd2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:10 compute-1 ceph-mon[81689]: pgmap v1899: 305 pgs: 305 active+clean; 272 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 841 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.769 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-1 kernel: tapb71f2bd2-50: left promiscuous mode
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.783 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.787 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a023139f-9aab-4a9d-b147-8609b7446cb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.800 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2c6309-40ff-4569-8eea-e7201f3b0c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.801 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[be5c377a-17b3-455e-b252-9e46b9e19547]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.819 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9a32f3-108f-4347-96bc-32a5b53adda8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590251, 'reachable_time': 44164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257455, 'error': None, 'target': 'ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:10 compute-1 systemd[1]: run-netns-ovnmeta\x2db71f2bd2\x2d56d4\x2d4afa\x2dbd37\x2d2f65a2d061fe.mount: Deactivated successfully.
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.823 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:21:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:10.823 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[36eba588-c040-48f3-b6d8-bee0f677b5bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
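
The cleanup above ends with neutron's privileged ip_lib removing the now-empty ovnmeta-<network-id> namespace. Under the hood that is a pyroute2 call; a hedged one-line equivalent:

    from pyroute2 import netns

    # Remove the metadata namespace named in the log.
    netns.remove('ovnmeta-b71f2bd2-56d4-4afa-bd37-2f65a2d061fe')
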
Dec 06 07:21:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:10.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.852 226109 DEBUG nova.compute.manager [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Received event network-changed-30bdfeaa-383d-4195-b12e-c2a72618ba1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.853 226109 DEBUG nova.compute.manager [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Refreshing instance network info cache due to event network-changed-30bdfeaa-383d-4195-b12e-c2a72618ba1d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:21:10 compute-1 nova_compute[226101]: 2025-12-06 07:21:10.853 226109 DEBUG oslo_concurrency.lockutils [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-29933cde-dbb3-4316-87f2-51a52500e040" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:21:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:21:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:10.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.034 226109 INFO nova.virt.libvirt.driver [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Deleting instance files /var/lib/nova/instances/59997242-8393-4545-b401-5bb1501cd680_del
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.035 226109 INFO nova.virt.libvirt.driver [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Deletion of /var/lib/nova/instances/59997242-8393-4545-b401-5bb1501cd680_del complete
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.090 226109 INFO nova.compute.manager [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Took 1.03 seconds to destroy the instance on the hypervisor.
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.091 226109 DEBUG oslo.service.loopingcall [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.091 226109 DEBUG nova.compute.manager [-] [instance: 59997242-8393-4545-b401-5bb1501cd680] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.091 226109 DEBUG nova.network.neutron [-] [instance: 59997242-8393-4545-b401-5bb1501cd680] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.531 226109 DEBUG nova.network.neutron [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Updating instance_info_cache with network_info: [{"id": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "address": "fa:16:3e:36:66:0d", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bdfeaa-38", "ovs_interfaceid": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.556 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Releasing lock "refresh_cache-29933cde-dbb3-4316-87f2-51a52500e040" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.556 226109 DEBUG nova.compute.manager [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Instance network_info: |[{"id": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "address": "fa:16:3e:36:66:0d", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bdfeaa-38", "ovs_interfaceid": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.557 226109 DEBUG oslo_concurrency.lockutils [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-29933cde-dbb3-4316-87f2-51a52500e040" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.557 226109 DEBUG nova.network.neutron [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Refreshing network info cache for port 30bdfeaa-383d-4195-b12e-c2a72618ba1d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.560 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Start _get_guest_xml network_info=[{"id": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "address": "fa:16:3e:36:66:0d", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bdfeaa-38", "ovs_interfaceid": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.563 226109 WARNING nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.567 226109 DEBUG nova.virt.libvirt.host [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.568 226109 DEBUG nova.virt.libvirt.host [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.570 226109 DEBUG nova.virt.libvirt.host [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.570 226109 DEBUG nova.virt.libvirt.host [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.571 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.572 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.572 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.572 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.572 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.572 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.572 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.573 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.573 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.573 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.573 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.573 226109 DEBUG nova.virt.hardware [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.575 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.949 226109 DEBUG nova.network.neutron [-] [instance: 59997242-8393-4545-b401-5bb1501cd680] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:11 compute-1 nova_compute[226101]: 2025-12-06 07:21:11.968 226109 INFO nova.compute.manager [-] [instance: 59997242-8393-4545-b401-5bb1501cd680] Took 0.88 seconds to deallocate network for instance.
Dec 06 07:21:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:21:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3685708829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.013 226109 DEBUG oslo_concurrency.lockutils [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.013 226109 DEBUG oslo_concurrency.lockutils [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.019 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:12 compute-1 sudo[257476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:12 compute-1 sudo[257476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:12 compute-1 sudo[257476]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.045 226109 DEBUG nova.storage.rbd_utils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 29933cde-dbb3-4316-87f2-51a52500e040_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.050 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:12 compute-1 sudo[257521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:21:12 compute-1 sudo[257521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:12 compute-1 sudo[257521]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.158 226109 DEBUG oslo_concurrency.processutils [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:21:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/670055331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.515 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.517 226109 DEBUG nova.virt.libvirt.vif [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1203945453',display_name='tempest-DeleteServersTestJSON-server-1203945453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1203945453',id=87,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-s5nhyanl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:07Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=29933cde-dbb3-4316-87f2-51a52500e040,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "address": "fa:16:3e:36:66:0d", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bdfeaa-38", "ovs_interfaceid": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.518 226109 DEBUG nova.network.os_vif_util [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "address": "fa:16:3e:36:66:0d", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bdfeaa-38", "ovs_interfaceid": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.519 226109 DEBUG nova.network.os_vif_util [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:66:0d,bridge_name='br-int',has_traffic_filtering=True,id=30bdfeaa-383d-4195-b12e-c2a72618ba1d,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bdfeaa-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.520 226109 DEBUG nova.objects.instance [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29933cde-dbb3-4316-87f2-51a52500e040 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.547 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:21:12 compute-1 nova_compute[226101]:   <uuid>29933cde-dbb3-4316-87f2-51a52500e040</uuid>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   <name>instance-00000057</name>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <nova:name>tempest-DeleteServersTestJSON-server-1203945453</nova:name>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:21:11</nova:creationTime>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <nova:user uuid="d966fefcb38a45219b9cc637c46a3d62">tempest-DeleteServersTestJSON-1764569218-project-member</nova:user>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <nova:project uuid="c6d2f50c0db54315bfa96a24511dda90">tempest-DeleteServersTestJSON-1764569218</nova:project>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <nova:port uuid="30bdfeaa-383d-4195-b12e-c2a72618ba1d">
Dec 06 07:21:12 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <system>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <entry name="serial">29933cde-dbb3-4316-87f2-51a52500e040</entry>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <entry name="uuid">29933cde-dbb3-4316-87f2-51a52500e040</entry>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     </system>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   <os>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   </os>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   <features>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   </features>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/29933cde-dbb3-4316-87f2-51a52500e040_disk">
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       </source>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/29933cde-dbb3-4316-87f2-51a52500e040_disk.config">
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       </source>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:21:12 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:36:66:0d"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <target dev="tap30bdfeaa-38"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/29933cde-dbb3-4316-87f2-51a52500e040/console.log" append="off"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <video>
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     </video>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:21:12 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:21:12 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:21:12 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:21:12 compute-1 nova_compute[226101]: </domain>
Dec 06 07:21:12 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.553 226109 DEBUG nova.compute.manager [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Preparing to wait for external event network-vif-plugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.553 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "29933cde-dbb3-4316-87f2-51a52500e040-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.554 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.554 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.555 226109 DEBUG nova.virt.libvirt.vif [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1203945453',display_name='tempest-DeleteServersTestJSON-server-1203945453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1203945453',id=87,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-s5nhyanl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:07Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=29933cde-dbb3-4316-87f2-51a52500e040,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "address": "fa:16:3e:36:66:0d", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bdfeaa-38", "ovs_interfaceid": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.555 226109 DEBUG nova.network.os_vif_util [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "address": "fa:16:3e:36:66:0d", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bdfeaa-38", "ovs_interfaceid": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.556 226109 DEBUG nova.network.os_vif_util [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:66:0d,bridge_name='br-int',has_traffic_filtering=True,id=30bdfeaa-383d-4195-b12e-c2a72618ba1d,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bdfeaa-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.556 226109 DEBUG os_vif [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:66:0d,bridge_name='br-int',has_traffic_filtering=True,id=30bdfeaa-383d-4195-b12e-c2a72618ba1d,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bdfeaa-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.557 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.558 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.558 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.560 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.560 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30bdfeaa-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.561 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30bdfeaa-38, col_values=(('external_ids', {'iface-id': '30bdfeaa-383d-4195-b12e-c2a72618ba1d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:66:0d', 'vm-uuid': '29933cde-dbb3-4316-87f2-51a52500e040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.562 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:12 compute-1 NetworkManager[49031]: <info>  [1765005672.5636] manager: (tap30bdfeaa-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.566 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.567 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.568 226109 INFO os_vif [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:66:0d,bridge_name='br-int',has_traffic_filtering=True,id=30bdfeaa-383d-4195-b12e-c2a72618ba1d,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bdfeaa-38')
Dec 06 07:21:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:21:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3839594188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.606 226109 DEBUG oslo_concurrency.processutils [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.613 226109 DEBUG nova.compute.provider_tree [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.624 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.624 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.624 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No VIF found with MAC fa:16:3e:36:66:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.625 226109 INFO nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Using config drive
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.655 226109 DEBUG nova.storage.rbd_utils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 29933cde-dbb3-4316-87f2-51a52500e040_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.665 226109 DEBUG nova.scheduler.client.report [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.690 226109 DEBUG oslo_concurrency.lockutils [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.751 226109 INFO nova.scheduler.client.report [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Deleted allocations for instance 59997242-8393-4545-b401-5bb1501cd680
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.822 226109 DEBUG oslo_concurrency.lockutils [None req-35010beb-a69b-4aa7-b0e7-ace358145b1e e3d8ea3dcb5b4ee3b9bd24c62fdd61b2 da41861044a74eee9bc96841e54c57cc - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:21:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:12.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:21:12 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:12 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:12 compute-1 ceph-mon[81689]: pgmap v1900: 305 pgs: 305 active+clean; 297 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 905 KiB/s rd, 3.9 MiB/s wr, 220 op/s
Dec 06 07:21:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3685708829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/670055331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3839594188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:21:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:12.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:21:12 compute-1 nova_compute[226101]: 2025-12-06 07:21:12.958 226109 DEBUG nova.compute.manager [req-eac3f8b4-412b-48ca-b85a-a9044ddbb26b req-31035160-57fc-449f-af15-e4f1a0e88ce7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Received event network-vif-deleted-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.099 226109 INFO nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Creating config drive at /var/lib/nova/instances/29933cde-dbb3-4316-87f2-51a52500e040/disk.config
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.104 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29933cde-dbb3-4316-87f2-51a52500e040/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa4q1c590 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.240 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29933cde-dbb3-4316-87f2-51a52500e040/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa4q1c590" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.270 226109 DEBUG nova.storage.rbd_utils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 29933cde-dbb3-4316-87f2-51a52500e040_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.274 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29933cde-dbb3-4316-87f2-51a52500e040/disk.config 29933cde-dbb3-4316-87f2-51a52500e040_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.316 226109 DEBUG nova.network.neutron [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Updated VIF entry in instance network info cache for port 30bdfeaa-383d-4195-b12e-c2a72618ba1d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.317 226109 DEBUG nova.network.neutron [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Updating instance_info_cache with network_info: [{"id": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "address": "fa:16:3e:36:66:0d", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bdfeaa-38", "ovs_interfaceid": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.336 226109 DEBUG oslo_concurrency.lockutils [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-29933cde-dbb3-4316-87f2-51a52500e040" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.337 226109 DEBUG nova.compute.manager [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Received event network-vif-unplugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.337 226109 DEBUG oslo_concurrency.lockutils [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "59997242-8393-4545-b401-5bb1501cd680-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.338 226109 DEBUG oslo_concurrency.lockutils [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.338 226109 DEBUG oslo_concurrency.lockutils [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.338 226109 DEBUG nova.compute.manager [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] No waiting events found dispatching network-vif-unplugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.338 226109 DEBUG nova.compute.manager [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Received event network-vif-unplugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.339 226109 DEBUG nova.compute.manager [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Received event network-vif-plugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.339 226109 DEBUG oslo_concurrency.lockutils [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "59997242-8393-4545-b401-5bb1501cd680-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.339 226109 DEBUG oslo_concurrency.lockutils [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.339 226109 DEBUG oslo_concurrency.lockutils [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "59997242-8393-4545-b401-5bb1501cd680-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.339 226109 DEBUG nova.compute.manager [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] No waiting events found dispatching network-vif-plugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.339 226109 WARNING nova.compute.manager [req-0282b6a0-17a6-4030-8e15-297890849bf0 req-0e43c053-00b5-44c1-9a66-9784422186cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 59997242-8393-4545-b401-5bb1501cd680] Received unexpected event network-vif-plugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1 for instance with vm_state active and task_state deleting.
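[annotation] The "No waiting events found" / "Received unexpected event" pair above reflects a pop-a-registered-waiter pattern: Neutron's external events are matched against callbacks another thread registered before waiting, and since this instance is already deleting nothing is registered, so the vif-plugged event is discarded with a warning. A schematic sketch of that pattern (class and method names here are illustrative, not Nova's internal API):

class InstanceEvents:
    """Toy version of the waiter table guarded by the "<uuid>-events" lock."""
    def __init__(self):
        self._waiters = {}  # event name -> callback

    def expect(self, name, callback):
        self._waiters[name] = callback

    def pop(self, name):
        # Returns the registered callback, or None -> "No waiting events found"
        return self._waiters.pop(name, None)

events = InstanceEvents()
cb = events.pop('network-vif-plugged-7a96cb69-1b0c-4a9b-ba10-a52d7978baf1')
if cb is None:
    print('WARNING: unexpected event for instance with task_state deleting')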
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.699 226109 DEBUG oslo_concurrency.processutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29933cde-dbb3-4316-87f2-51a52500e040/disk.config 29933cde-dbb3-4316-87f2-51a52500e040_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.700 226109 INFO nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Deleting local config drive /var/lib/nova/instances/29933cde-dbb3-4316-87f2-51a52500e040/disk.config because it was imported into RBD.
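[annotation] Lines 07:21:14.099 through 07:21:14.700 show the full config-drive round trip: build an ISO9660 image with mkisofs, check that no RBD object exists yet, import the ISO into the Ceph 'vms' pool, then drop the local copy. A hedged, hand-runnable Python equivalent of the two subprocess calls (the staging directory is a placeholder for Nova's tmpdir, and the multi-word -publisher argument from the log is omitted; requires client.openstack Ceph credentials):

import subprocess

instance = '29933cde-dbb3-4316-87f2-51a52500e040'
iso = f'/var/lib/nova/instances/{instance}/disk.config'

# 1. Build the config-drive ISO from a metadata staging directory
#    (placeholder path; Nova used a tmpdir such as /tmp/tmpa4q1c590).
subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                '-allow-multidot', '-l', '-quiet', '-J', '-r',
                '-V', 'config-2', '/path/to/staging'], check=True)

# 2. Import it into the 'vms' pool as <uuid>_disk.config; afterwards the
#    local file can be removed ("Deleting local config drive ...").
subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                f'{instance}_disk.config', '--image-format=2',
                '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
               check=True)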
Dec 06 07:21:14 compute-1 kernel: tap30bdfeaa-38: entered promiscuous mode
Dec 06 07:21:14 compute-1 NetworkManager[49031]: <info>  [1765005674.7489] manager: (tap30bdfeaa-38): new Tun device (/org/freedesktop/NetworkManager/Devices/156)
Dec 06 07:21:14 compute-1 ovn_controller[130279]: 2025-12-06T07:21:14Z|00338|binding|INFO|Claiming lport 30bdfeaa-383d-4195-b12e-c2a72618ba1d for this chassis.
Dec 06 07:21:14 compute-1 ovn_controller[130279]: 2025-12-06T07:21:14Z|00339|binding|INFO|30bdfeaa-383d-4195-b12e-c2a72618ba1d: Claiming fa:16:3e:36:66:0d 10.100.0.10
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.750 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.755 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.765 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:66:0d 10.100.0.10'], port_security=['fa:16:3e:36:66:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '29933cde-dbb3-4316-87f2-51a52500e040', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '2', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=30bdfeaa-383d-4195-b12e-c2a72618ba1d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.767 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 30bdfeaa-383d-4195-b12e-c2a72618ba1d in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 bound to our chassis
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.768 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:21:14 compute-1 systemd-udevd[257665]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.780 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[00dcac1c-3b5d-49f8-9b11-7a06e06ce323]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.781 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85cfbf28-71 in ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.783 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85cfbf28-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.783 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[820dc2e7-cd5d-4648-a21f-664c0e4ae391]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.784 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b54c57d1-56a8-4a84-8b84-57f902429c79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 systemd-machined[190302]: New machine qemu-40-instance-00000057.
Dec 06 07:21:14 compute-1 NetworkManager[49031]: <info>  [1765005674.7936] device (tap30bdfeaa-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:21:14 compute-1 NetworkManager[49031]: <info>  [1765005674.7947] device (tap30bdfeaa-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.796 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a981a4-fd70-4c0a-9eef-8f4dca495510]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 systemd[1]: Started Virtual Machine qemu-40-instance-00000057.
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.820 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.820 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[db89fadb-9cdc-45f4-a6f7-b8e361f2f586]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 ovn_controller[130279]: 2025-12-06T07:21:14Z|00340|binding|INFO|Setting lport 30bdfeaa-383d-4195-b12e-c2a72618ba1d ovn-installed in OVS
Dec 06 07:21:14 compute-1 ovn_controller[130279]: 2025-12-06T07:21:14Z|00341|binding|INFO|Setting lport 30bdfeaa-383d-4195-b12e-c2a72618ba1d up in Southbound
Dec 06 07:21:14 compute-1 nova_compute[226101]: 2025-12-06 07:21:14.828 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:14.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.849 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c490ca-0252-4487-97d8-bdbe3df257cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 NetworkManager[49031]: <info>  [1765005674.8544] manager: (tap85cfbf28-70): new Veth device (/org/freedesktop/NetworkManager/Devices/157)
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.853 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b68f7a1a-f91d-4dac-921e-556e81188333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.885 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[14560211-d8ec-4462-85a9-3240ff7ca55a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.889 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[62ba5e0f-26c3-492b-9099-4ff5162b9a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 NetworkManager[49031]: <info>  [1765005674.9138] device (tap85cfbf28-70): carrier: link connected
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.916 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[df229de6-e9fa-44c7-a3e2-6afc2a06c3f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:14.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.933 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[291261ed-dd70-4379-b153-658ca5028c2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595441, 'reachable_time': 41786, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257697, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.947 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d1e458-ce49-4e8f-9c74-f81d4cf63a87]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:762'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 595441, 'tstamp': 595441}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257698, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.961 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0eba0f81-da05-4a92-bf9c-283ac6500790]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595441, 'reachable_time': 41786, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257699, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:14.989 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e0dbde05-fba0-4507-a7af-598b97c53f28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:15.043 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[05695cd3-b782-4553-a8a8-769a6f12616a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:15.045 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:15.045 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:15.046 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85cfbf28-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:15 compute-1 NetworkManager[49031]: <info>  [1765005675.0481] manager: (tap85cfbf28-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.047 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:15 compute-1 kernel: tap85cfbf28-70: entered promiscuous mode
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.052 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:15.053 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85cfbf28-70, col_values=(('external_ids', {'iface-id': '41b1b168-8e0e-4991-9750-9b31221f4863'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:15 compute-1 ovn_controller[130279]: 2025-12-06T07:21:15Z|00342|binding|INFO|Releasing lport 41b1b168-8e0e-4991-9750-9b31221f4863 from this chassis (sb_readonly=0)
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.054 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.073 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:15.075 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:15.076 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b90406fd-99a7-4c3f-a994-116a2e0087ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:15.076 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:21:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:15.078 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'env', 'PROCESS_TAG=haproxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85cfbf28-7016-4776-8fc2-2eb08a6b8347.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
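[annotation] The haproxy instance launched above binds 169.254.169.254:80 inside the ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace, relays each request to the /var/lib/neutron/metadata_proxy unix socket, and tags it with the X-OVN-Network-ID header so the metadata service can resolve the caller. From a guest on that network (or wrapped in ip netns exec on the host), the standard metadata path should now answer; a minimal probe, assuming the usual Nova metadata API layout:

import urllib.request

# Run from inside a guest attached to the network; on the host, prefix with:
#   ip netns exec ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 python3 ...
url = 'http://169.254.169.254/openstack/latest/meta_data.json'
with urllib.request.urlopen(url, timeout=5) as resp:
    print(resp.status, resp.read()[:200])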
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.184 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.325 226109 DEBUG nova.compute.manager [req-b3cb6b82-d7dc-4e1c-955c-c162c9b23467 req-f22726bd-888e-42e2-9cbc-da44a717b85d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Received event network-vif-plugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.325 226109 DEBUG oslo_concurrency.lockutils [req-b3cb6b82-d7dc-4e1c-955c-c162c9b23467 req-f22726bd-888e-42e2-9cbc-da44a717b85d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29933cde-dbb3-4316-87f2-51a52500e040-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.325 226109 DEBUG oslo_concurrency.lockutils [req-b3cb6b82-d7dc-4e1c-955c-c162c9b23467 req-f22726bd-888e-42e2-9cbc-da44a717b85d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.325 226109 DEBUG oslo_concurrency.lockutils [req-b3cb6b82-d7dc-4e1c-955c-c162c9b23467 req-f22726bd-888e-42e2-9cbc-da44a717b85d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.326 226109 DEBUG nova.compute.manager [req-b3cb6b82-d7dc-4e1c-955c-c162c9b23467 req-f22726bd-888e-42e2-9cbc-da44a717b85d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Processing event network-vif-plugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:21:15 compute-1 podman[257749]: 2025-12-06 07:21:15.407387246 +0000 UTC m=+0.022183403 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.535 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005675.5353925, 29933cde-dbb3-4316-87f2-51a52500e040 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.536 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] VM Started (Lifecycle Event)
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.538 226109 DEBUG nova.compute.manager [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.542 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.546 226109 INFO nova.virt.libvirt.driver [-] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Instance spawned successfully.
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.546 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.565 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.569 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.579 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.579 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.579 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.580 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.580 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.580 226109 DEBUG nova.virt.libvirt.driver [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.589 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.590 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005675.5378022, 29933cde-dbb3-4316-87f2-51a52500e040 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.590 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] VM Paused (Lifecycle Event)
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.620 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.623 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005675.540755, 29933cde-dbb3-4316-87f2-51a52500e040 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.623 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] VM Resumed (Lifecycle Event)
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.651 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.653 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.662 226109 INFO nova.compute.manager [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Took 7.74 seconds to spawn the instance on the hypervisor.
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.662 226109 DEBUG nova.compute.manager [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.690 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:21:15 compute-1 ceph-mon[81689]: pgmap v1901: 305 pgs: 305 active+clean; 275 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 646 KiB/s rd, 2.5 MiB/s wr, 194 op/s
Dec 06 07:21:15 compute-1 podman[257749]: 2025-12-06 07:21:15.728238739 +0000 UTC m=+0.343034876 container create a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.746 226109 INFO nova.compute.manager [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Took 10.34 seconds to build instance.
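[annotation] On the timing in the two INFO lines: the 10.34 s build span contains the 7.74 s hypervisor spawn, leaving roughly 2.60 s of pre-spawn work (config-drive prep, RBD import, waiting for network-vif-plugged); the lock-release line below puts the whole locked section at 10.448 s. The subtraction, for the record:

build_total = 10.34   # "Took 10.34 seconds to build instance."
spawn = 7.74          # "Took 7.74 seconds to spawn the instance on the hypervisor."
print(f'pre-spawn overhead: {build_total - spawn:.2f}s')  # 2.60s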
Dec 06 07:21:15 compute-1 systemd[1]: Started libpod-conmon-a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97.scope.
Dec 06 07:21:15 compute-1 nova_compute[226101]: 2025-12-06 07:21:15.773 226109 DEBUG oslo_concurrency.lockutils [None req-9b41c6ae-d28c-4e11-b663-70b51c360393 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:15 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:21:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68effd324d7f390714f599c7b4b8b62ba6e2defd88b775246378cf8a08acf978/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:16 compute-1 podman[257749]: 2025-12-06 07:21:16.146028022 +0000 UTC m=+0.760824189 container init a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:21:16 compute-1 podman[257749]: 2025-12-06 07:21:16.152140928 +0000 UTC m=+0.766937065 container start a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:21:16 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[257788]: [NOTICE]   (257792) : New worker (257794) forked
Dec 06 07:21:16 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[257788]: [NOTICE]   (257792) : Loading success.
Dec 06 07:21:16 compute-1 ceph-mon[81689]: pgmap v1902: 305 pgs: 305 active+clean; 218 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 508 KiB/s rd, 2.0 MiB/s wr, 236 op/s
Dec 06 07:21:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3709436296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:16.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:16.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.470 226109 DEBUG nova.compute.manager [req-537f6014-de8d-4e4b-b660-681d4a883063 req-13400ffd-1ee4-4fa5-b224-3c7de3abc99b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Received event network-vif-plugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.470 226109 DEBUG oslo_concurrency.lockutils [req-537f6014-de8d-4e4b-b660-681d4a883063 req-13400ffd-1ee4-4fa5-b224-3c7de3abc99b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29933cde-dbb3-4316-87f2-51a52500e040-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.471 226109 DEBUG oslo_concurrency.lockutils [req-537f6014-de8d-4e4b-b660-681d4a883063 req-13400ffd-1ee4-4fa5-b224-3c7de3abc99b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.471 226109 DEBUG oslo_concurrency.lockutils [req-537f6014-de8d-4e4b-b660-681d4a883063 req-13400ffd-1ee4-4fa5-b224-3c7de3abc99b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.471 226109 DEBUG nova.compute.manager [req-537f6014-de8d-4e4b-b660-681d4a883063 req-13400ffd-1ee4-4fa5-b224-3c7de3abc99b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] No waiting events found dispatching network-vif-plugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.471 226109 WARNING nova.compute.manager [req-537f6014-de8d-4e4b-b660-681d4a883063 req-13400ffd-1ee4-4fa5-b224-3c7de3abc99b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Received unexpected event network-vif-plugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d for instance with vm_state active and task_state None.
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.564 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.615 226109 DEBUG oslo_concurrency.lockutils [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "29933cde-dbb3-4316-87f2-51a52500e040" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.616 226109 DEBUG oslo_concurrency.lockutils [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.616 226109 DEBUG oslo_concurrency.lockutils [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "29933cde-dbb3-4316-87f2-51a52500e040-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.616 226109 DEBUG oslo_concurrency.lockutils [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.616 226109 DEBUG oslo_concurrency.lockutils [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.617 226109 INFO nova.compute.manager [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Terminating instance
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.618 226109 DEBUG nova.compute.manager [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:21:17 compute-1 kernel: tap30bdfeaa-38 (unregistering): left promiscuous mode
Dec 06 07:21:17 compute-1 NetworkManager[49031]: <info>  [1765005677.6604] device (tap30bdfeaa-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.661 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:17 compute-1 ovn_controller[130279]: 2025-12-06T07:21:17Z|00343|binding|INFO|Releasing lport 30bdfeaa-383d-4195-b12e-c2a72618ba1d from this chassis (sb_readonly=0)
Dec 06 07:21:17 compute-1 ovn_controller[130279]: 2025-12-06T07:21:17Z|00344|binding|INFO|Setting lport 30bdfeaa-383d-4195-b12e-c2a72618ba1d down in Southbound
Dec 06 07:21:17 compute-1 ovn_controller[130279]: 2025-12-06T07:21:17Z|00345|binding|INFO|Removing iface tap30bdfeaa-38 ovn-installed in OVS
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.675 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.677 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:17.681 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:66:0d 10.100.0.10'], port_security=['fa:16:3e:36:66:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '29933cde-dbb3-4316-87f2-51a52500e040', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '4', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=30bdfeaa-383d-4195-b12e-c2a72618ba1d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:17.682 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 30bdfeaa-383d-4195-b12e-c2a72618ba1d in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 unbound from our chassis
Dec 06 07:21:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:17.683 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:21:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:17.684 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0e90ee30-8143-45e8-aaf2-0969665be8b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:17.685 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace which is not needed anymore
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.691 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:17 compute-1 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000057.scope: Deactivated successfully.
Dec 06 07:21:17 compute-1 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000057.scope: Consumed 2.741s CPU time.
Dec 06 07:21:17 compute-1 systemd-machined[190302]: Machine qemu-40-instance-00000057 terminated.
Dec 06 07:21:17 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[257788]: [NOTICE]   (257792) : haproxy version is 2.8.14-c23fe91
Dec 06 07:21:17 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[257788]: [NOTICE]   (257792) : path to executable is /usr/sbin/haproxy
Dec 06 07:21:17 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[257788]: [WARNING]  (257792) : Exiting Master process...
Dec 06 07:21:17 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[257788]: [ALERT]    (257792) : Current worker (257794) exited with code 143 (Terminated)
Dec 06 07:21:17 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[257788]: [WARNING]  (257792) : All workers exited. Exiting... (0)
Dec 06 07:21:17 compute-1 systemd[1]: libpod-a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97.scope: Deactivated successfully.
Dec 06 07:21:17 compute-1 podman[257824]: 2025-12-06 07:21:17.812526337 +0000 UTC m=+0.042028291 container died a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:21:17 compute-1 NetworkManager[49031]: <info>  [1765005677.8448] manager: (tap30bdfeaa-38): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.844 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.852 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:17 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97-userdata-shm.mount: Deactivated successfully.
Dec 06 07:21:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-68effd324d7f390714f599c7b4b8b62ba6e2defd88b775246378cf8a08acf978-merged.mount: Deactivated successfully.
Dec 06 07:21:17 compute-1 podman[257824]: 2025-12-06 07:21:17.863198691 +0000 UTC m=+0.092700645 container cleanup a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.864 226109 INFO nova.virt.libvirt.driver [-] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Instance destroyed successfully.
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.865 226109 DEBUG nova.objects.instance [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'resources' on Instance uuid 29933cde-dbb3-4316-87f2-51a52500e040 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:17 compute-1 systemd[1]: libpod-conmon-a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97.scope: Deactivated successfully.
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.898 226109 DEBUG nova.virt.libvirt.vif [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1203945453',display_name='tempest-DeleteServersTestJSON-server-1203945453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1203945453',id=87,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:21:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-s5nhyanl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:15Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=29933cde-dbb3-4316-87f2-51a52500e040,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "address": "fa:16:3e:36:66:0d", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bdfeaa-38", "ovs_interfaceid": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.899 226109 DEBUG nova.network.os_vif_util [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "address": "fa:16:3e:36:66:0d", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30bdfeaa-38", "ovs_interfaceid": "30bdfeaa-383d-4195-b12e-c2a72618ba1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.900 226109 DEBUG nova.network.os_vif_util [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:66:0d,bridge_name='br-int',has_traffic_filtering=True,id=30bdfeaa-383d-4195-b12e-c2a72618ba1d,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bdfeaa-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.900 226109 DEBUG os_vif [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:66:0d,bridge_name='br-int',has_traffic_filtering=True,id=30bdfeaa-383d-4195-b12e-c2a72618ba1d,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bdfeaa-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.902 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.902 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30bdfeaa-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.905 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.907 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.910 226109 INFO os_vif [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:66:0d,bridge_name='br-int',has_traffic_filtering=True,id=30bdfeaa-383d-4195-b12e-c2a72618ba1d,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30bdfeaa-38')
Dec 06 07:21:17 compute-1 podman[257856]: 2025-12-06 07:21:17.935861852 +0000 UTC m=+0.053550043 container remove a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 07:21:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:17.941 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f1497b-6e7e-4abb-91ee-3bc83ec8bfbf]: (4, ('Sat Dec  6 07:21:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97)\na2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97\nSat Dec  6 07:21:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (a2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97)\na2bcf31db5eb4cb5809386242c5219c495b4ed5754aeca1e34877dd07c4c6b97\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:17.944 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[159d4646-e8df-42f3-b204-5a2d4884d06b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:17.945 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:17 compute-1 kernel: tap85cfbf28-70: left promiscuous mode
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.947 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:17 compute-1 nova_compute[226101]: 2025-12-06 07:21:17.960 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:17.963 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4f775a7d-d960-402c-ab8d-a014ad431b60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:18.014 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6608e3d0-2f8b-4e6c-9401-c5969c61c719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:18.016 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d726244d-245c-4525-8756-6aa57fcf90ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:18.031 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7426f049-0229-4397-b918-0b3458d6d102]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595434, 'reachable_time': 42027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257897, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:18 compute-1 systemd[1]: run-netns-ovnmeta\x2d85cfbf28\x2d7016\x2d4776\x2d8fc2\x2d2eb08a6b8347.mount: Deactivated successfully.
Dec 06 07:21:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:18.035 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:21:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:18.035 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1169c1-01b1-4491-bfa9-c3cedec31872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:18 compute-1 nova_compute[226101]: 2025-12-06 07:21:18.041 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:18 compute-1 nova_compute[226101]: 2025-12-06 07:21:18.340 226109 INFO nova.virt.libvirt.driver [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Deleting instance files /var/lib/nova/instances/29933cde-dbb3-4316-87f2-51a52500e040_del
Dec 06 07:21:18 compute-1 nova_compute[226101]: 2025-12-06 07:21:18.340 226109 INFO nova.virt.libvirt.driver [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Deletion of /var/lib/nova/instances/29933cde-dbb3-4316-87f2-51a52500e040_del complete
Dec 06 07:21:18 compute-1 nova_compute[226101]: 2025-12-06 07:21:18.404 226109 INFO nova.compute.manager [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Took 0.79 seconds to destroy the instance on the hypervisor.
Dec 06 07:21:18 compute-1 nova_compute[226101]: 2025-12-06 07:21:18.404 226109 DEBUG oslo.service.loopingcall [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:21:18 compute-1 nova_compute[226101]: 2025-12-06 07:21:18.405 226109 DEBUG nova.compute.manager [-] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:21:18 compute-1 nova_compute[226101]: 2025-12-06 07:21:18.405 226109 DEBUG nova.network.neutron [-] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:21:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:18.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:18.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:18 compute-1 ceph-mon[81689]: pgmap v1903: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 324 op/s
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.107 226109 DEBUG nova.network.neutron [-] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.127 226109 INFO nova.compute.manager [-] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Took 0.72 seconds to deallocate network for instance.
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.164 226109 DEBUG oslo_concurrency.lockutils [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.165 226109 DEBUG oslo_concurrency.lockutils [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.207 226109 DEBUG oslo_concurrency.processutils [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.273 226109 DEBUG nova.compute.manager [req-94ab3f31-481a-43e6-8a52-29ee39c0f0d6 req-de04346b-3178-486f-b4ab-891a7259023c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Received event network-vif-deleted-30bdfeaa-383d-4195-b12e-c2a72618ba1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.627 226109 DEBUG nova.compute.manager [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Received event network-vif-unplugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.628 226109 DEBUG oslo_concurrency.lockutils [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29933cde-dbb3-4316-87f2-51a52500e040-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.628 226109 DEBUG oslo_concurrency.lockutils [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.628 226109 DEBUG oslo_concurrency.lockutils [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.628 226109 DEBUG nova.compute.manager [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] No waiting events found dispatching network-vif-unplugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.629 226109 WARNING nova.compute.manager [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Received unexpected event network-vif-unplugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d for instance with vm_state deleted and task_state None.
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.629 226109 DEBUG nova.compute.manager [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Received event network-vif-plugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.629 226109 DEBUG oslo_concurrency.lockutils [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29933cde-dbb3-4316-87f2-51a52500e040-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.629 226109 DEBUG oslo_concurrency.lockutils [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.630 226109 DEBUG oslo_concurrency.lockutils [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.630 226109 DEBUG nova.compute.manager [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] No waiting events found dispatching network-vif-plugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.630 226109 WARNING nova.compute.manager [req-138ed8f7-88d6-43a4-96e0-5f8612ba4c82 req-57452ac5-137e-4b9c-82b6-001a25f6a360 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Received unexpected event network-vif-plugged-30bdfeaa-383d-4195-b12e-c2a72618ba1d for instance with vm_state deleted and task_state None.
Dec 06 07:21:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:21:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4047505159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.678 226109 DEBUG oslo_concurrency.processutils [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.684 226109 DEBUG nova.compute.provider_tree [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.702 226109 DEBUG nova.scheduler.client.report [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.746 226109 DEBUG oslo_concurrency.lockutils [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.777 226109 INFO nova.scheduler.client.report [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Deleted allocations for instance 29933cde-dbb3-4316-87f2-51a52500e040
Dec 06 07:21:19 compute-1 nova_compute[226101]: 2025-12-06 07:21:19.865 226109 DEBUG oslo_concurrency.lockutils [None req-54863836-977a-4fe2-9633-387ff494dfc5 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "29933cde-dbb3-4316-87f2-51a52500e040" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:20 compute-1 nova_compute[226101]: 2025-12-06 07:21:20.187 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4047505159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:20.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:20.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:21 compute-1 ceph-mon[81689]: pgmap v1904: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 284 op/s
Dec 06 07:21:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:22 compute-1 ceph-mon[81689]: pgmap v1905: 305 pgs: 305 active+clean; 125 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 373 op/s
Dec 06 07:21:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:22.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:22.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:22 compute-1 nova_compute[226101]: 2025-12-06 07:21:22.954 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1246840157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:23 compute-1 nova_compute[226101]: 2025-12-06 07:21:23.979 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:23 compute-1 nova_compute[226101]: 2025-12-06 07:21:23.980 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:24 compute-1 nova_compute[226101]: 2025-12-06 07:21:24.014 226109 DEBUG nova.compute.manager [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:21:24 compute-1 nova_compute[226101]: 2025-12-06 07:21:24.098 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:24 compute-1 nova_compute[226101]: 2025-12-06 07:21:24.098 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:24 compute-1 nova_compute[226101]: 2025-12-06 07:21:24.104 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:21:24 compute-1 nova_compute[226101]: 2025-12-06 07:21:24.104 226109 INFO nova.compute.claims [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:21:24 compute-1 nova_compute[226101]: 2025-12-06 07:21:24.212 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:24.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:24.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:24 compute-1 ceph-mon[81689]: pgmap v1906: 305 pgs: 305 active+clean; 121 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 94 KiB/s wr, 303 op/s
Dec 06 07:21:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1804177915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:25 compute-1 podman[257942]: 2025-12-06 07:21:25.10322899 +0000 UTC m=+0.089088737 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 07:21:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:21:25 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4052461057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.138 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.926s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.145 226109 DEBUG nova.compute.provider_tree [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.179 226109 DEBUG nova.scheduler.client.report [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.187 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.212 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.213 226109 DEBUG nova.compute.manager [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.266 226109 DEBUG nova.compute.manager [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.267 226109 DEBUG nova.network.neutron [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.287 226109 INFO nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.302 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005670.3021169, 59997242-8393-4545-b401-5bb1501cd680 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.303 226109 INFO nova.compute.manager [-] [instance: 59997242-8393-4545-b401-5bb1501cd680] VM Stopped (Lifecycle Event)
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.305 226109 DEBUG nova.compute.manager [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.331 226109 DEBUG nova.compute.manager [None req-e897ab81-d846-4496-869e-718d01abb83c - - - - - -] [instance: 59997242-8393-4545-b401-5bb1501cd680] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.417 226109 DEBUG nova.compute.manager [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.419 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.419 226109 INFO nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Creating image(s)
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.441 226109 DEBUG nova.storage.rbd_utils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.465 226109 DEBUG nova.storage.rbd_utils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.492 226109 DEBUG nova.storage.rbd_utils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.496 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.553 226109 DEBUG nova.policy [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd966fefcb38a45219b9cc637c46a3d62', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.568 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.569 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.569 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.570 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.592 226109 DEBUG nova.storage.rbd_utils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.597 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.895 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:25 compute-1 nova_compute[226101]: 2025-12-06 07:21:25.973 226109 DEBUG nova.storage.rbd_utils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] resizing rbd image d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:21:26 compute-1 nova_compute[226101]: 2025-12-06 07:21:26.076 226109 DEBUG nova.objects.instance [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'migration_context' on Instance uuid d9b5a381-4236-4a72-bedd-3fe7eca161bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:26 compute-1 nova_compute[226101]: 2025-12-06 07:21:26.090 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:21:26 compute-1 nova_compute[226101]: 2025-12-06 07:21:26.090 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Ensure instance console log exists: /var/lib/nova/instances/d9b5a381-4236-4a72-bedd-3fe7eca161bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:21:26 compute-1 nova_compute[226101]: 2025-12-06 07:21:26.091 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:26 compute-1 nova_compute[226101]: 2025-12-06 07:21:26.091 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:26 compute-1 nova_compute[226101]: 2025-12-06 07:21:26.091 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4052461057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:26 compute-1 ceph-mon[81689]: pgmap v1907: 305 pgs: 305 active+clean; 121 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 46 KiB/s wr, 269 op/s
Dec 06 07:21:26 compute-1 nova_compute[226101]: 2025-12-06 07:21:26.199 226109 DEBUG nova.network.neutron [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Successfully created port: c511139b-3716-4c0f-90d2-31f2ce3ba4cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:21:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:26.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:26.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:27 compute-1 nova_compute[226101]: 2025-12-06 07:21:27.820 226109 DEBUG nova.network.neutron [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Successfully updated port: c511139b-3716-4c0f-90d2-31f2ce3ba4cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:21:27 compute-1 nova_compute[226101]: 2025-12-06 07:21:27.842 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "refresh_cache-d9b5a381-4236-4a72-bedd-3fe7eca161bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:21:27 compute-1 nova_compute[226101]: 2025-12-06 07:21:27.843 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquired lock "refresh_cache-d9b5a381-4236-4a72-bedd-3fe7eca161bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:21:27 compute-1 nova_compute[226101]: 2025-12-06 07:21:27.843 226109 DEBUG nova.network.neutron [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:21:27 compute-1 nova_compute[226101]: 2025-12-06 07:21:27.956 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:27 compute-1 nova_compute[226101]: 2025-12-06 07:21:27.991 226109 DEBUG nova.compute.manager [req-264de44e-214c-44d8-b291-f7de429fbdea req-e73332bc-ab14-447e-a532-0ed9dd1b8a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Received event network-changed-c511139b-3716-4c0f-90d2-31f2ce3ba4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:27 compute-1 nova_compute[226101]: 2025-12-06 07:21:27.991 226109 DEBUG nova.compute.manager [req-264de44e-214c-44d8-b291-f7de429fbdea req-e73332bc-ab14-447e-a532-0ed9dd1b8a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Refreshing instance network info cache due to event network-changed-c511139b-3716-4c0f-90d2-31f2ce3ba4cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:21:27 compute-1 nova_compute[226101]: 2025-12-06 07:21:27.991 226109 DEBUG oslo_concurrency.lockutils [req-264de44e-214c-44d8-b291-f7de429fbdea req-e73332bc-ab14-447e-a532-0ed9dd1b8a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d9b5a381-4236-4a72-bedd-3fe7eca161bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:21:28 compute-1 nova_compute[226101]: 2025-12-06 07:21:28.055 226109 DEBUG nova.network.neutron [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:21:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:28.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:28.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:29 compute-1 ceph-mon[81689]: pgmap v1908: 305 pgs: 305 active+clean; 151 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 297 op/s
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.271 226109 DEBUG nova.network.neutron [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Updating instance_info_cache with network_info: [{"id": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "address": "fa:16:3e:0a:6c:18", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc511139b-37", "ovs_interfaceid": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.295 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Releasing lock "refresh_cache-d9b5a381-4236-4a72-bedd-3fe7eca161bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.296 226109 DEBUG nova.compute.manager [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Instance network_info: |[{"id": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "address": "fa:16:3e:0a:6c:18", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc511139b-37", "ovs_interfaceid": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.296 226109 DEBUG oslo_concurrency.lockutils [req-264de44e-214c-44d8-b291-f7de429fbdea req-e73332bc-ab14-447e-a532-0ed9dd1b8a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d9b5a381-4236-4a72-bedd-3fe7eca161bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.296 226109 DEBUG nova.network.neutron [req-264de44e-214c-44d8-b291-f7de429fbdea req-e73332bc-ab14-447e-a532-0ed9dd1b8a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Refreshing network info cache for port c511139b-3716-4c0f-90d2-31f2ce3ba4cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.300 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Start _get_guest_xml network_info=[{"id": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "address": "fa:16:3e:0a:6c:18", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc511139b-37", "ovs_interfaceid": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.304 226109 WARNING nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.309 226109 DEBUG nova.virt.libvirt.host [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.309 226109 DEBUG nova.virt.libvirt.host [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.316 226109 DEBUG nova.virt.libvirt.host [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.316 226109 DEBUG nova.virt.libvirt.host [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.318 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.318 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.318 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.319 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.319 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.319 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.319 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.319 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.320 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.320 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.320 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.320 226109 DEBUG nova.virt.hardware [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.323 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:21:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/875121014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.802 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.827 226109 DEBUG nova.storage.rbd_utils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:29 compute-1 nova_compute[226101]: 2025-12-06 07:21:29.830 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.190 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:21:30 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/297117926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/875121014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.789 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.959s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.791 226109 DEBUG nova.virt.libvirt.vif [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1128131811',display_name='tempest-DeleteServersTestJSON-server-1128131811',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1128131811',id=88,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-ejh4rcdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:25Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=d9b5a381-4236-4a72-bedd-3fe7eca161bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "address": "fa:16:3e:0a:6c:18", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc511139b-37", "ovs_interfaceid": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.792 226109 DEBUG nova.network.os_vif_util [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "address": "fa:16:3e:0a:6c:18", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc511139b-37", "ovs_interfaceid": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.794 226109 DEBUG nova.network.os_vif_util [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:6c:18,bridge_name='br-int',has_traffic_filtering=True,id=c511139b-3716-4c0f-90d2-31f2ce3ba4cd,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc511139b-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.796 226109 DEBUG nova.objects.instance [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'pci_devices' on Instance uuid d9b5a381-4236-4a72-bedd-3fe7eca161bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.831 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:21:30 compute-1 nova_compute[226101]:   <uuid>d9b5a381-4236-4a72-bedd-3fe7eca161bc</uuid>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   <name>instance-00000058</name>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <nova:name>tempest-DeleteServersTestJSON-server-1128131811</nova:name>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:21:29</nova:creationTime>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <nova:user uuid="d966fefcb38a45219b9cc637c46a3d62">tempest-DeleteServersTestJSON-1764569218-project-member</nova:user>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <nova:project uuid="c6d2f50c0db54315bfa96a24511dda90">tempest-DeleteServersTestJSON-1764569218</nova:project>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <nova:port uuid="c511139b-3716-4c0f-90d2-31f2ce3ba4cd">
Dec 06 07:21:30 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <system>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <entry name="serial">d9b5a381-4236-4a72-bedd-3fe7eca161bc</entry>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <entry name="uuid">d9b5a381-4236-4a72-bedd-3fe7eca161bc</entry>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     </system>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   <os>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   </os>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   <features>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   </features>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk">
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       </source>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk.config">
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       </source>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:21:30 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:0a:6c:18"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <target dev="tapc511139b-37"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/d9b5a381-4236-4a72-bedd-3fe7eca161bc/console.log" append="off"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <video>
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     </video>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:21:30 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:21:30 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:21:30 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:21:30 compute-1 nova_compute[226101]: </domain>
Dec 06 07:21:30 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.832 226109 DEBUG nova.compute.manager [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Preparing to wait for external event network-vif-plugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.834 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.834 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.834 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.835 226109 DEBUG nova.virt.libvirt.vif [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1128131811',display_name='tempest-DeleteServersTestJSON-server-1128131811',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1128131811',id=88,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-ejh4rcdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:25Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=d9b5a381-4236-4a72-bedd-3fe7eca161bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "address": "fa:16:3e:0a:6c:18", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc511139b-37", "ovs_interfaceid": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.835 226109 DEBUG nova.network.os_vif_util [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "address": "fa:16:3e:0a:6c:18", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc511139b-37", "ovs_interfaceid": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.836 226109 DEBUG nova.network.os_vif_util [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:6c:18,bridge_name='br-int',has_traffic_filtering=True,id=c511139b-3716-4c0f-90d2-31f2ce3ba4cd,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc511139b-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.836 226109 DEBUG os_vif [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:6c:18,bridge_name='br-int',has_traffic_filtering=True,id=c511139b-3716-4c0f-90d2-31f2ce3ba4cd,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc511139b-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.837 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.837 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.837 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.840 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.840 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc511139b-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.840 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc511139b-37, col_values=(('external_ids', {'iface-id': 'c511139b-3716-4c0f-90d2-31f2ce3ba4cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:6c:18', 'vm-uuid': 'd9b5a381-4236-4a72-bedd-3fe7eca161bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
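The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are os-vif driving the local ovsdb-server through ovsdbapp: one transaction ensures br-int exists (a no-op here, hence "Transaction caused no change"), the next adds the tap device as a port and stamps the Interface row's external_ids with the Neutron port UUID, MAC, and instance UUID so ovn-controller can later claim the logical port. A minimal sketch of the same sequence with ovsdbapp's public API follows; the socket path and the check_error handling are assumptions, not taken from this log.

    # Sketch: replay the plug transaction via ovsdbapp (assumed local socket path).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumption: default ovsdb-server socket
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
    with api.transaction(check_error=True) as txn:
        # AddPortCommand + DbSetCommand, exactly the two commands in txn n=1 above
        txn.add(api.add_port('br-int', 'tapc511139b-37', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapc511139b-37',
            ('external_ids', {'iface-id': 'c511139b-3716-4c0f-90d2-31f2ce3ba4cd',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:0a:6c:18',
                              'vm-uuid': 'd9b5a381-4236-4a72-bedd-3fe7eca161bc'})))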
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.842 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:30 compute-1 NetworkManager[49031]: <info>  [1765005690.8427] manager: (tapc511139b-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.845 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.847 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:30 compute-1 nova_compute[226101]: 2025-12-06 07:21:30.849 226109 INFO os_vif [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:6c:18,bridge_name='br-int',has_traffic_filtering=True,id=c511139b-3716-4c0f-90d2-31f2ce3ba4cd,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc511139b-37')
Dec 06 07:21:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:30.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:30.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:30 compute-1 podman[258202]: 2025-12-06 07:21:30.962733303 +0000 UTC m=+0.057011487 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:21:30 compute-1 podman[258201]: 2025-12-06 07:21:30.963268487 +0000 UTC m=+0.058912418 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.107 226109 DEBUG nova.network.neutron [req-264de44e-214c-44d8-b291-f7de429fbdea req-e73332bc-ab14-447e-a532-0ed9dd1b8a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Updated VIF entry in instance network info cache for port c511139b-3716-4c0f-90d2-31f2ce3ba4cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.107 226109 DEBUG nova.network.neutron [req-264de44e-214c-44d8-b291-f7de429fbdea req-e73332bc-ab14-447e-a532-0ed9dd1b8a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Updating instance_info_cache with network_info: [{"id": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "address": "fa:16:3e:0a:6c:18", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc511139b-37", "ovs_interfaceid": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.130 226109 DEBUG oslo_concurrency.lockutils [req-264de44e-214c-44d8-b291-f7de429fbdea req-e73332bc-ab14-447e-a532-0ed9dd1b8a5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d9b5a381-4236-4a72-bedd-3fe7eca161bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.171 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.172 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.172 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No VIF found with MAC fa:16:3e:0a:6c:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.173 226109 INFO nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Using config drive
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.199 226109 DEBUG nova.storage.rbd_utils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:31 compute-1 ceph-mon[81689]: pgmap v1909: 305 pgs: 305 active+clean; 151 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 181 op/s
Dec 06 07:21:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/297117926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.609 226109 INFO nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Creating config drive at /var/lib/nova/instances/d9b5a381-4236-4a72-bedd-3fe7eca161bc/disk.config
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.614 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9b5a381-4236-4a72-bedd-3fe7eca161bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp6v06nn2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.744 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9b5a381-4236-4a72-bedd-3fe7eca161bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp6v06nn2" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.781 226109 DEBUG nova.storage.rbd_utils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:31 compute-1 nova_compute[226101]: 2025-12-06 07:21:31.786 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d9b5a381-4236-4a72-bedd-3fe7eca161bc/disk.config d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:32 compute-1 nova_compute[226101]: 2025-12-06 07:21:32.300 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:32 compute-1 nova_compute[226101]: 2025-12-06 07:21:32.302 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:32 compute-1 nova_compute[226101]: 2025-12-06 07:21:32.328 226109 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:21:32 compute-1 nova_compute[226101]: 2025-12-06 07:21:32.428 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:32 compute-1 nova_compute[226101]: 2025-12-06 07:21:32.428 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
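The "Acquiring lock" / "acquired by" / "released by" triplets around _locked_do_build_and_run_instance and the resource tracker are oslo.concurrency's in-process named locks; Nova serializes per-instance and per-resource work by wrapping the critical section. A rough sketch of that pattern (the function names here are illustrative, not Nova's actual code):

    # Sketch: the oslo.concurrency locking pattern behind these DEBUG lines.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim():
        # Body runs with the named lock held; entry/exit emit the
        # "acquired by" / "released by" DEBUG lines when debug logging is on.
        pass

    # Equivalent context-manager form, e.g. keyed by an instance UUID:
    with lockutils.lock('b6dd6db5-9860-4cfb-81d1-f52f900fd4c8'):
        pass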
Dec 06 07:21:32 compute-1 nova_compute[226101]: 2025-12-06 07:21:32.435 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:21:32 compute-1 nova_compute[226101]: 2025-12-06 07:21:32.436 226109 INFO nova.compute.claims [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:21:32 compute-1 nova_compute[226101]: 2025-12-06 07:21:32.625 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:32.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:32 compute-1 nova_compute[226101]: 2025-12-06 07:21:32.863 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005677.8623972, 29933cde-dbb3-4316-87f2-51a52500e040 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:32 compute-1 nova_compute[226101]: 2025-12-06 07:21:32.864 226109 INFO nova.compute.manager [-] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] VM Stopped (Lifecycle Event)
Dec 06 07:21:32 compute-1 nova_compute[226101]: 2025-12-06 07:21:32.884 226109 DEBUG nova.compute.manager [None req-663d3bc3-161f-45dd-bb35-7aedc98506f8 - - - - - -] [instance: 29933cde-dbb3-4316-87f2-51a52500e040] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:32.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:33 compute-1 ceph-mon[81689]: pgmap v1910: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.376 226109 DEBUG oslo_concurrency.processutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d9b5a381-4236-4a72-bedd-3fe7eca161bc/disk.config d9b5a381-4236-4a72-bedd-3fe7eca161bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.377 226109 INFO nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Deleting local config drive /var/lib/nova/instances/d9b5a381-4236-4a72-bedd-3fe7eca161bc/disk.config because it was imported into RBD.
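The config-drive sequence is now complete: Nova rendered the metadata into a temp directory, packed it into an ISO9660 image labelled config-2 with mkisofs, imported the result into the vms RBD pool as <uuid>_disk.config, and deleted the local file. A compressed sketch of the two external commands as oslo's processutils runs them (paths and the staging directory are taken from the log lines above; the -publisher/-quiet flags and error handling are omitted for brevity):

    # Sketch: config-drive build + RBD import, mirroring the logged commands.
    from oslo_concurrency import processutils

    iso = '/var/lib/nova/instances/d9b5a381-4236-4a72-bedd-3fe7eca161bc/disk.config'
    processutils.execute('/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                         '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                         '/tmp/tmpp6v06nn2')  # staging dir from the log
    processutils.execute('rbd', 'import', '--pool', 'vms', iso,
                         'd9b5a381-4236-4a72-bedd-3fe7eca161bc_disk.config',
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')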
Dec 06 07:21:33 compute-1 kernel: tapc511139b-37: entered promiscuous mode
Dec 06 07:21:33 compute-1 NetworkManager[49031]: <info>  [1765005693.4489] manager: (tapc511139b-37): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.449 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:33 compute-1 ovn_controller[130279]: 2025-12-06T07:21:33Z|00346|binding|INFO|Claiming lport c511139b-3716-4c0f-90d2-31f2ce3ba4cd for this chassis.
Dec 06 07:21:33 compute-1 ovn_controller[130279]: 2025-12-06T07:21:33Z|00347|binding|INFO|c511139b-3716-4c0f-90d2-31f2ce3ba4cd: Claiming fa:16:3e:0a:6c:18 10.100.0.8
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.457 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.466 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:6c:18 10.100.0.8'], port_security=['fa:16:3e:0a:6c:18 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd9b5a381-4236-4a72-bedd-3fe7eca161bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '2', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=c511139b-3716-4c0f-90d2-31f2ce3ba4cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.467 139580 INFO neutron.agent.ovn.metadata.agent [-] Port c511139b-3716-4c0f-90d2-31f2ce3ba4cd in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 bound to our chassis
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.469 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347
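The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's event machinery at work: the metadata agent registers a RowEvent against the southbound Port_Binding table and reacts when a port's chassis column flips to this host, which is what produces the "bound to our chassis" and "Provisioning metadata" messages that follow. A bare-bones version of such a watcher, as a sketch (the run() body is an illustration, not the agent's real handler):

    # Sketch: an ovsdbapp RowEvent shaped like the one matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Fires when a Port_Binding row changes, e.g. chassis becomes set;
            # the real agent provisions the metadata namespace at this point.
            print('port %s bound, chassis=%s' % (row.logical_port, row.chassis))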
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.481 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[05878564-999d-45fd-a57f-aa87558519af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.482 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85cfbf28-71 in ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.484 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85cfbf28-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.484 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7b87628f-83ba-4b6f-a5b0-7b32e2c0c444]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.485 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7d78afa3-3de0-400c-94e6-3b0a35331c34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.497 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[fe955d2c-edd6-469e-b0ea-ea66ba582333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 systemd-machined[190302]: New machine qemu-41-instance-00000058.
Dec 06 07:21:33 compute-1 systemd[1]: Started Virtual Machine qemu-41-instance-00000058.
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.522 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5b7309-803c-4aa0-a903-fb0fa3ffe9bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 systemd-udevd[258335]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.533 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:33 compute-1 ovn_controller[130279]: 2025-12-06T07:21:33Z|00348|binding|INFO|Setting lport c511139b-3716-4c0f-90d2-31f2ce3ba4cd ovn-installed in OVS
Dec 06 07:21:33 compute-1 NetworkManager[49031]: <info>  [1765005693.5438] device (tapc511139b-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:21:33 compute-1 ovn_controller[130279]: 2025-12-06T07:21:33Z|00349|binding|INFO|Setting lport c511139b-3716-4c0f-90d2-31f2ce3ba4cd up in Southbound
Dec 06 07:21:33 compute-1 NetworkManager[49031]: <info>  [1765005693.5449] device (tapc511139b-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.545 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.560 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[886747e5-58c0-462b-837a-9dd2b1d9478c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 systemd-udevd[258340]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:21:33 compute-1 NetworkManager[49031]: <info>  [1765005693.5676] manager: (tap85cfbf28-70): new Veth device (/org/freedesktop/NetworkManager/Devices/162)
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.566 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f273db-9fbc-4c3b-9cad-88667a678891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.600 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[888ef3dd-a3d8-453f-9f20-eb64b2c5ab3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.604 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[3abfed59-7c7b-4823-9f2d-f77ecbd4bf50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 NetworkManager[49031]: <info>  [1765005693.6310] device (tap85cfbf28-70): carrier: link connected
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.639 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7a7ea5-2c7b-439f-90ce-339e9f68f501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:21:33 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2879884322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/743799355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2130103832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.658 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[55d6f128-72a6-4e8b-ae3c-3b1b4a74ced5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597313, 'reachable_time': 37651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258366, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.669 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.676 226109 DEBUG nova.compute.provider_tree [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.674 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[77301af1-55bc-48cc-88fe-fbc4b73f208f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:762'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597313, 'tstamp': 597313}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258368, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.696 226109 DEBUG nova.scheduler.client.report [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
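The inventory dict reported to Placement above determines schedulable capacity as (total - reserved) * allocation_ratio per resource class, which for these values works out to 32 VCPU, 7168 MB of RAM, and roughly 17 GB of disk. A one-liner to verify (the formula is standard Placement behaviour; the dict is copied from the log line):

    # Capacity implied by the logged inventory: (total - reserved) * allocation_ratio
    inv = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1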
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.707 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5c18a49e-85e5-4852-a047-bbd38f30989a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597313, 'reachable_time': 37651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258369, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.732 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.733 226109 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.741 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0223262e-c113-4ccd-a74b-a88e4003e27b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.790 226109 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.790 226109 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.809 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e19ec3-b934-4d83-8791-f1f389d69cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.811 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.811 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.811 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85cfbf28-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.813 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:33 compute-1 NetworkManager[49031]: <info>  [1765005693.8143] manager: (tap85cfbf28-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/163)
Dec 06 07:21:33 compute-1 kernel: tap85cfbf28-70: entered promiscuous mode
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.816 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.820 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85cfbf28-70, col_values=(('external_ids', {'iface-id': '41b1b168-8e0e-4991-9750-9b31221f4863'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.823 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:33 compute-1 ovn_controller[130279]: 2025-12-06T07:21:33Z|00350|binding|INFO|Releasing lport 41b1b168-8e0e-4991-9750-9b31221f4863 from this chassis (sb_readonly=0)
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.825 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.826 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.828 226109 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.827 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cb402a12-dd45-481c-9f93-dadbde2ee5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.831 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:21:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:33.832 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'env', 'PROCESS_TAG=haproxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85cfbf28-7016-4776-8fc2-2eb08a6b8347.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
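The rendered haproxy_cfg above is written under /var/lib/neutron/ovn-metadata-proxy/ and haproxy is then launched inside the ovnmeta-<network> namespace via rootwrap, exactly as the "Running command" line shows. With the rootwrap indirection stripped, the same launch looks like this (a sketch; PROCESS_TAG is just the tag neutron uses to find the process again later):

    # Sketch: how the per-network metadata haproxy is started (rootwrap removed).
    import subprocess

    net = '85cfbf28-7016-4776-8fc2-2eb08a6b8347'
    subprocess.run(['ip', 'netns', 'exec', 'ovnmeta-%s' % net,
                    'env', 'PROCESS_TAG=haproxy-%s' % net,
                    'haproxy', '-f',
                    '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % net],
                   check=True)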
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.840 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.853 226109 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.959 226109 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.961 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:21:33 compute-1 nova_compute[226101]: 2025-12-06 07:21:33.962 226109 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Creating image(s)
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.000 226109 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.046 226109 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.082 226109 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.089 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.117 226109 DEBUG nova.policy [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a52e2b4388994d8791443483bd42cc33', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b558585a6aa14470bdad319926a98046', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.159 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
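
The qemu-img probe above runs under oslo_concurrency.prlimit, which caps the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s before exec, so a malformed image cannot wedge the compute service. A rough standard-library-only equivalent, with the image path taken from the log:

import json
import os
import resource
import subprocess

def limited(as_bytes=1 << 30, cpu_seconds=30):
    # Mirrors what prlimit does: set hard resource limits in the forked
    # child just before it execs qemu-img.
    def _set():
        resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    return _set

out = subprocess.run(
    ["qemu-img", "info", "--force-share", "--output=json",
     "/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef"],
    env={**os.environ, "LC_ALL": "C", "LANG": "C"},  # same env override as the log
    preexec_fn=limited(),
    capture_output=True, check=True,
)
info = json.loads(out.stdout)
print(info.get("format"), info.get("virtual-size"))
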
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.160 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.161 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.161 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
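
The acquire/release pair above is nova's image-cache lock: fetches of one base image (keyed by its content hash) are serialized so concurrent builds don't download it twice; here the lock is released immediately because the cached file already exists. A minimal sketch of the same lockutils pattern; fetch_base_image is a hypothetical stand-in:

from oslo_concurrency import lockutils

@lockutils.synchronized("890368a5690a3dbdbb6650dcb9de9e2c9dc5acef")
def fetch_base_image():
    # One caller at a time per lock name within this process; passing
    # external=True to synchronized() would use a file lock that also
    # serializes across processes.
    ...
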
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.192 226109 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.198 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:34.239 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:34 compute-1 podman[258510]: 2025-12-06 07:21:34.230208106 +0000 UTC m=+0.024951968 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.355 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005694.3544607, d9b5a381-4236-4a72-bedd-3fe7eca161bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.355 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] VM Started (Lifecycle Event)
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.380 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:34 compute-1 podman[258510]: 2025-12-06 07:21:34.382498737 +0000 UTC m=+0.177242579 container create 9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.387 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005694.354648, d9b5a381-4236-4a72-bedd-3fe7eca161bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.387 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] VM Paused (Lifecycle Event)
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.411 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.416 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
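
The numeric states in that line ("current DB power_state: 0, VM power_state: 3") are nova.compute.power_state constants; a small decoding table:

# Mapping taken from nova.compute.power_state.
POWER_STATE = {
    0: "NOSTATE",    # nothing recorded in the DB yet
    1: "RUNNING",
    3: "PAUSED",     # libvirt starts guests paused, then resumes them
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}
print(POWER_STATE[3])  # -> PAUSED, matching the lifecycle event above
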
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.440 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:21:34 compute-1 systemd[1]: Started libpod-conmon-9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350.scope.
Dec 06 07:21:34 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:21:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d22399957cc92cdd6eb1e83b56642f3a9eb70ee162ae31936c129cadc8fecd4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:34 compute-1 podman[258510]: 2025-12-06 07:21:34.7221918 +0000 UTC m=+0.516935672 container init 9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:21:34 compute-1 podman[258510]: 2025-12-06 07:21:34.730996879 +0000 UTC m=+0.525740721 container start 9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
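
The image pull, container create, init, and start events above are the trace of a single container launch. Roughly what a hand-run equivalent would look like; the name and image are copied from the events, while every other flag (bind mounts, namespace sharing, the haproxy command) is omitted here and would be needed for a faithful reproduction:

import subprocess

subprocess.run(
    ["podman", "run", "--detach",
     "--name", "neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347",
     "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"],
    check=True,
)
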
Dec 06 07:21:34 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[258552]: [NOTICE]   (258556) : New worker (258558) forked
Dec 06 07:21:34 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[258552]: [NOTICE]   (258556) : Loading success.
Dec 06 07:21:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:34.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:34.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
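
The paired "starting new request"/"req done" lines with an anonymous "HEAD / HTTP/1.0" answered 200 are load-balancer health probes against radosgw, arriving from two frontends (192.168.122.100 and .102) every couple of seconds. A probe of the same shape; the port is an assumption, since the log does not show it, and http.client speaks HTTP/1.1 rather than 1.0:

import http.client

conn = http.client.HTTPConnection("192.168.122.102", 8080, timeout=2)  # port assumed
conn.request("HEAD", "/")
print(conn.getresponse().status)  # 200 means the RGW frontend is answering
conn.close()
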
Dec 06 07:21:34 compute-1 nova_compute[226101]: 2025-12-06 07:21:34.961 226109 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Successfully created port: ea6e84b9-0f0b-4b71-8b33-d343caef7863 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:21:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:34.982 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:21:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2879884322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:35 compute-1 ceph-mon[81689]: pgmap v1911: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.065 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.867s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
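
The import of the cached base image into the vms pool took 0.867 s; nova then resizes the RBD image up to the flavor's 1 GiB root disk (logged just below). A sketch that verifies the result through the python-rbd bindings, reusing the client.openstack credentials from the command line above:

import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk") as img:
            print(img.size())  # bytes; 1073741824 once the resize has run
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
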
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.145 226109 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] resizing rbd image b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.193 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.316 226109 DEBUG nova.objects.instance [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lazy-loading 'migration_context' on Instance uuid b6dd6db5-9860-4cfb-81d1-f52f900fd4c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.331 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.332 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Ensure instance console log exists: /var/lib/nova/instances/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.332 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.332 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.333 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.604 226109 DEBUG nova.compute.manager [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Received event network-vif-plugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.605 226109 DEBUG oslo_concurrency.lockutils [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.605 226109 DEBUG oslo_concurrency.lockutils [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.605 226109 DEBUG oslo_concurrency.lockutils [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.605 226109 DEBUG nova.compute.manager [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Processing event network-vif-plugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.606 226109 DEBUG nova.compute.manager [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Received event network-vif-plugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.606 226109 DEBUG oslo_concurrency.lockutils [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.606 226109 DEBUG oslo_concurrency.lockutils [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.606 226109 DEBUG oslo_concurrency.lockutils [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.607 226109 DEBUG nova.compute.manager [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] No waiting events found dispatching network-vif-plugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.607 226109 WARNING nova.compute.manager [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Received unexpected event network-vif-plugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd for instance with vm_state building and task_state spawning.
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.608 226109 DEBUG nova.compute.manager [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.614 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005695.614002, d9b5a381-4236-4a72-bedd-3fe7eca161bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.614 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] VM Resumed (Lifecycle Event)
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.616 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.619 226109 INFO nova.virt.libvirt.driver [-] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Instance spawned successfully.
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.620 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.636 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.643 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.648 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.649 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.649 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.650 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.650 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.651 226109 DEBUG nova.virt.libvirt.driver [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.681 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.794 226109 INFO nova.compute.manager [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Took 10.38 seconds to spawn the instance on the hypervisor.
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.795 226109 DEBUG nova.compute.manager [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.842 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.883 226109 INFO nova.compute.manager [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Took 11.81 seconds to build instance.
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.914 226109 DEBUG oslo_concurrency.lockutils [None req-89890e19-1c7d-4756-938f-f0da38c6c987 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.965 226109 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Successfully updated port: ea6e84b9-0f0b-4b71-8b33-d343caef7863 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.979 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "refresh_cache-b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.979 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquired lock "refresh_cache-b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:21:35 compute-1 nova_compute[226101]: 2025-12-06 07:21:35.979 226109 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:21:36 compute-1 nova_compute[226101]: 2025-12-06 07:21:36.091 226109 DEBUG nova.compute.manager [req-741098fe-a74e-4727-844a-7cb3128b9841 req-0d234956-bf2d-443a-a8ba-d55099b589ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-changed-ea6e84b9-0f0b-4b71-8b33-d343caef7863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:36 compute-1 nova_compute[226101]: 2025-12-06 07:21:36.092 226109 DEBUG nova.compute.manager [req-741098fe-a74e-4727-844a-7cb3128b9841 req-0d234956-bf2d-443a-a8ba-d55099b589ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Refreshing instance network info cache due to event network-changed-ea6e84b9-0f0b-4b71-8b33-d343caef7863. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:21:36 compute-1 nova_compute[226101]: 2025-12-06 07:21:36.092 226109 DEBUG oslo_concurrency.lockutils [req-741098fe-a74e-4727-844a-7cb3128b9841 req-0d234956-bf2d-443a-a8ba-d55099b589ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:21:36 compute-1 nova_compute[226101]: 2025-12-06 07:21:36.161 226109 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:21:36 compute-1 ceph-mon[81689]: pgmap v1912: 305 pgs: 305 active+clean; 237 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 138 op/s
Dec 06 07:21:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:36.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:36.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:36.984 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.059 226109 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Updating instance_info_cache with network_info: [{"id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "address": "fa:16:3e:cc:ac:69", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea6e84b9-0f", "ovs_interfaceid": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
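
The cache payload in that entry is plain JSON, so pulling the fixed IPs out of it is a few lines. The literal below is an abridged copy of the VIF from the log line:

import json

network_info = json.loads("""
[{"id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863",
  "network": {"subnets": [{"cidr": "10.100.0.0/28",
    "ips": [{"address": "10.100.0.9", "type": "fixed"}]}]}}]
""")
for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(vif["id"], ip["address"])  # -> ea6e84b9-... 10.100.0.9
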
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.082 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Releasing lock "refresh_cache-b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.083 226109 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Instance network_info: |[{"id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "address": "fa:16:3e:cc:ac:69", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea6e84b9-0f", "ovs_interfaceid": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.084 226109 DEBUG oslo_concurrency.lockutils [req-741098fe-a74e-4727-844a-7cb3128b9841 req-0d234956-bf2d-443a-a8ba-d55099b589ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.084 226109 DEBUG nova.network.neutron [req-741098fe-a74e-4727-844a-7cb3128b9841 req-0d234956-bf2d-443a-a8ba-d55099b589ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Refreshing network info cache for port ea6e84b9-0f0b-4b71-8b33-d343caef7863 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.087 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Start _get_guest_xml network_info=[{"id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "address": "fa:16:3e:cc:ac:69", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea6e84b9-0f", "ovs_interfaceid": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.093 226109 WARNING nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.098 226109 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.099 226109 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.107 226109 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.107 226109 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.110 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.111 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.111 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.111 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.111 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.112 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.112 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.112 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.113 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.113 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.113 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.113 226109 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:21:37 compute-1 nova_compute[226101]: 2025-12-06 07:21:37.116 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:21:37 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/130116494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1888702857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2797562651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.057 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.941s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
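
Before generating the guest XML, nova shells out to `ceph mon dump` to learn the monitor addresses that go into the RBD disk definition. The same query with a minimal parse, reusing the credentials from the log; the key names assumed below are the usual monmap JSON fields:

import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, check=True,
)
monmap = json.loads(out.stdout)
for mon in monmap.get("mons", []):
    print(mon["name"], mon.get("public_addr"))
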
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.088 226109 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.093 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.484 226109 DEBUG nova.network.neutron [req-741098fe-a74e-4727-844a-7cb3128b9841 req-0d234956-bf2d-443a-a8ba-d55099b589ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Updated VIF entry in instance network info cache for port ea6e84b9-0f0b-4b71-8b33-d343caef7863. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.486 226109 DEBUG nova.network.neutron [req-741098fe-a74e-4727-844a-7cb3128b9841 req-0d234956-bf2d-443a-a8ba-d55099b589ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Updating instance_info_cache with network_info: [{"id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "address": "fa:16:3e:cc:ac:69", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea6e84b9-0f", "ovs_interfaceid": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.505 226109 DEBUG oslo_concurrency.lockutils [req-741098fe-a74e-4727-844a-7cb3128b9841 req-0d234956-bf2d-443a-a8ba-d55099b589ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:21:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:21:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/311766206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.636 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.637 226109 DEBUG nova.virt.libvirt.vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1422412228',display_name='tempest-ListServersNegativeTestJSON-server-1422412228-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1422412228-3',id=91,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b558585a6aa14470bdad319926a98046',ramdisk_id='',reservation_id='r-46lv2gf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-179719916',owner_user_name='tempest-ListServersNegativeTestJSON-179719916-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:33Z,user_data=None,user_id='a52e2b4388994d8791443483bd42cc33',uuid=b6dd6db5-9860-4cfb-81d1-f52f900fd4c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "address": "fa:16:3e:cc:ac:69", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea6e84b9-0f", "ovs_interfaceid": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.638 226109 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converting VIF {"id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "address": "fa:16:3e:cc:ac:69", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea6e84b9-0f", "ovs_interfaceid": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.639 226109 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:ac:69,bridge_name='br-int',has_traffic_filtering=True,id=ea6e84b9-0f0b-4b71-8b33-d343caef7863,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea6e84b9-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
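The "Converted object" repr above is an os-vif versioned object. A minimal sketch of building the equivalent object directly with the os_vif library, with field values copied from that repr; this is an illustration of the object model, not the actual nova_to_osvif_vif converter:

    from os_vif.objects import network, vif

    # Field values copied from the "Converted object" log line above.
    net = network.Network(
        id="77f3ccc8-bb54-46a1-b015-cc5b8a445202",
        bridge="br-int",
        label="tempest-ListServersNegativeTestJSON-573376355-network",
    )
    ovs_vif = vif.VIFOpenVSwitch(
        id="ea6e84b9-0f0b-4b71-8b33-d343caef7863",
        address="fa:16:3e:cc:ac:69",
        bridge_name="br-int",
        vif_name="tapea6e84b9-0f",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
        plugin="ovs",
        network=net,
    )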
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.640 226109 DEBUG nova.objects.instance [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6dd6db5-9860-4cfb-81d1-f52f900fd4c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.656 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:21:38 compute-1 nova_compute[226101]:   <uuid>b6dd6db5-9860-4cfb-81d1-f52f900fd4c8</uuid>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   <name>instance-0000005b</name>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1422412228-3</nova:name>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:21:37</nova:creationTime>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <nova:user uuid="a52e2b4388994d8791443483bd42cc33">tempest-ListServersNegativeTestJSON-179719916-project-member</nova:user>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <nova:project uuid="b558585a6aa14470bdad319926a98046">tempest-ListServersNegativeTestJSON-179719916</nova:project>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <nova:port uuid="ea6e84b9-0f0b-4b71-8b33-d343caef7863">
Dec 06 07:21:38 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <system>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <entry name="serial">b6dd6db5-9860-4cfb-81d1-f52f900fd4c8</entry>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <entry name="uuid">b6dd6db5-9860-4cfb-81d1-f52f900fd4c8</entry>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     </system>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   <os>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   </os>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   <features>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   </features>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk">
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       </source>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk.config">
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       </source>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:21:38 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:cc:ac:69"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <target dev="tapea6e84b9-0f"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8/console.log" append="off"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <video>
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     </video>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:21:38 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:21:38 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:21:38 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:21:38 compute-1 nova_compute[226101]: </domain>
Dec 06 07:21:38 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
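The multi-line record that just closed is the complete domain definition Nova hands to libvirt. A minimal sketch of defining and starting such a domain with the libvirt Python bindings; Nova drives this through its own Guest wrapper rather than calling the bindings this directly, and `xml` is assumed to hold the document above:

    import libvirt

    def define_and_boot(xml: str) -> None:
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)  # persist the definition (virsh define)
            dom.create()               # boot the guest (virsh start)
        finally:
            conn.close()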
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.658 226109 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Preparing to wait for external event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.660 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.660 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.660 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
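The acquire/release pair above is oslo.concurrency's standard named-lock pattern. A hedged sketch of the decorator form that produces these "Acquiring lock" / "acquired" / "released" DEBUG lines; the lock name is copied from the log, and the body is elided (in Nova this creates or fetches the event object used to wait for network-vif-plugged):

    from oslo_concurrency import lockutils

    # Serializes callers on the named lock; entry and exit are logged by
    # lockutils itself, exactly as in the three lines above.
    @lockutils.synchronized("b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events")
    def _create_or_get_event():
        pass  # body elided; illustration of the locking pattern only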
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.661 226109 DEBUG nova.virt.libvirt.vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1422412228',display_name='tempest-ListServersNegativeTestJSON-server-1422412228-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1422412228-3',id=91,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b558585a6aa14470bdad319926a98046',ramdisk_id='',reservation_id='r-46lv2gf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-179719916',owner_user_name='tempest-ListServersNegativeTestJSON-179719916-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:33Z,user_data=None,user_id='a52e2b4388994d8791443483bd42cc33',uuid=b6dd6db5-9860-4cfb-81d1-f52f900fd4c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "address": "fa:16:3e:cc:ac:69", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea6e84b9-0f", "ovs_interfaceid": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.661 226109 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converting VIF {"id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "address": "fa:16:3e:cc:ac:69", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea6e84b9-0f", "ovs_interfaceid": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.662 226109 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:ac:69,bridge_name='br-int',has_traffic_filtering=True,id=ea6e84b9-0f0b-4b71-8b33-d343caef7863,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea6e84b9-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.663 226109 DEBUG os_vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:ac:69,bridge_name='br-int',has_traffic_filtering=True,id=ea6e84b9-0f0b-4b71-8b33-d343caef7863,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea6e84b9-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.663 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.664 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.665 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.668 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.669 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea6e84b9-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.670 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea6e84b9-0f, col_values=(('external_ids', {'iface-id': 'ea6e84b9-0f0b-4b71-8b33-d343caef7863', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:ac:69', 'vm-uuid': 'b6dd6db5-9860-4cfb-81d1-f52f900fd4c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:38 compute-1 NetworkManager[49031]: <info>  [1765005698.6725] manager: (tapea6e84b9-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.671 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.675 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.680 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.682 226109 INFO os_vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:ac:69,bridge_name='br-int',has_traffic_filtering=True,id=ea6e84b9-0f0b-4b71-8b33-d343caef7863,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea6e84b9-0f')
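The three ovsdbapp commands logged above (AddBridgeCommand, AddPortCommand, DbSetCommand) map onto the ovsdbapp Open_vSwitch schema API. A rough sketch under the assumption of a local ovsdb-server socket; the connection string and timeout are illustrative, and the commands are batched into one transaction here only for brevity, whereas the log shows three single-command transactions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Illustrative connection string; os-vif talks to the local ovsdb-server.
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapea6e84b9-0f", may_exist=True))
        # external_ids values copied from the DbSetCommand in the log; the
        # iface-id is what ovn-controller later matches when claiming the port.
        txn.add(api.db_set(
            "Interface", "tapea6e84b9-0f",
            ("external_ids", {
                "iface-id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:cc:ac:69",
                "vm-uuid": "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8",
            })))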
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.860 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.861 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.861 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] No VIF found with MAC fa:16:3e:cc:ac:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.862 226109 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Using config drive
Dec 06 07:21:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:38.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:38 compute-1 nova_compute[226101]: 2025-12-06 07:21:38.890 226109 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:38.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:39 compute-1 nova_compute[226101]: 2025-12-06 07:21:39.266 226109 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Creating config drive at /var/lib/nova/instances/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8/disk.config
Dec 06 07:21:39 compute-1 nova_compute[226101]: 2025-12-06 07:21:39.270 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9t2_91o3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:39 compute-1 nova_compute[226101]: 2025-12-06 07:21:39.408 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9t2_91o3" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
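The config drive is packed with a plain mkisofs run, as the two command lines above show. A minimal sketch of the same invocation; staging_dir stands in for the /tmp/tmp9t2_91o3 tmpdir Nova populated with the metadata tree, and the publisher string is abbreviated here:

    import subprocess

    def build_configdrive(iso_path: str, staging_dir: str) -> None:
        # Flags copied from the logged command: ISO9660 with Joliet (-J) and
        # Rock Ridge (-r), volume label "config-2", which cloud-init probes
        # for at boot.
        subprocess.check_call([
            "/usr/bin/mkisofs", "-o", iso_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", "OpenStack Compute",  # version suffix omitted
            "-quiet", "-J", "-r",
            "-V", "config-2",
            staging_dir,
        ])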
Dec 06 07:21:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/130116494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2419729644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:39 compute-1 ceph-mon[81689]: pgmap v1913: 305 pgs: 305 active+clean; 306 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.1 MiB/s wr, 229 op/s
Dec 06 07:21:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3550746447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.563 226109 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.569 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8/disk.config b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.604 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.609 226109 DEBUG oslo_concurrency.lockutils [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.610 226109 DEBUG oslo_concurrency.lockutils [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.629 226109 DEBUG nova.objects.instance [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'flavor' on Instance uuid d9b5a381-4236-4a72-bedd-3fe7eca161bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.677 226109 DEBUG oslo_concurrency.lockutils [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:40.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.887 226109 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8/disk.config b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.888 226109 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Deleting local config drive /var/lib/nova/instances/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8/disk.config because it was imported into RBD.
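The import-then-delete sequence above is reproducible with the rbd CLI. A sketch assuming the same pool and credentials the log shows:

    import os
    import subprocess

    def import_configdrive(local_iso: str, image: str,
                           pool: str = "vms", client_id: str = "openstack",
                           conf: str = "/etc/ceph/ceph.conf") -> None:
        # Mirrors the logged command: upload the ISO as a format-2 RBD image
        # in the "vms" pool...
        subprocess.check_call([
            "rbd", "import", "--pool", pool, local_iso, image,
            "--image-format=2", "--id", client_id, "--conf", conf,
        ])
        # ...then drop the local copy, as the driver logs immediately after.
        os.unlink(local_iso)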
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.899 226109 DEBUG oslo_concurrency.lockutils [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.900 226109 DEBUG oslo_concurrency.lockutils [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.900 226109 INFO nova.compute.manager [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Attaching volume 47e6e09c-3fa4-4d8f-9362-4fced2e03fd2 to /dev/vdb
Dec 06 07:21:40 compute-1 kernel: tapea6e84b9-0f: entered promiscuous mode
Dec 06 07:21:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:40.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:40 compute-1 NetworkManager[49031]: <info>  [1765005700.9817] manager: (tapea6e84b9-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.981 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:40 compute-1 ovn_controller[130279]: 2025-12-06T07:21:40Z|00351|binding|INFO|Claiming lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 for this chassis.
Dec 06 07:21:40 compute-1 ovn_controller[130279]: 2025-12-06T07:21:40Z|00352|binding|INFO|ea6e84b9-0f0b-4b71-8b33-d343caef7863: Claiming fa:16:3e:cc:ac:69 10.100.0.9
Dec 06 07:21:40 compute-1 nova_compute[226101]: 2025-12-06 07:21:40.985 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:40.993 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:ac:69 10.100.0.9'], port_security=['fa:16:3e:cc:ac:69 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6dd6db5-9860-4cfb-81d1-f52f900fd4c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b558585a6aa14470bdad319926a98046', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0eb8f52e-2f68-4151-8464-0d3b0eb6798f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c0e6fde-267f-49e6-86d0-b0c0ced92a7c, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=ea6e84b9-0f0b-4b71-8b33-d343caef7863) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:40.996 139580 INFO neutron.agent.ovn.metadata.agent [-] Port ea6e84b9-0f0b-4b71-8b33-d343caef7863 in datapath 77f3ccc8-bb54-46a1-b015-cc5b8a445202 bound to our chassis
Dec 06 07:21:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:40.998 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77f3ccc8-bb54-46a1-b015-cc5b8a445202
Dec 06 07:21:41 compute-1 systemd-udevd[258775]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.010 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[acf76075-78af-474e-82fd-90fbaeb238a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.011 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77f3ccc8-b1 in ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
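For reference, the veth-into-namespace step being logged here can be sketched with pyroute2; the interface and namespace names are copied from the log, but the agent itself goes through neutron's privsep-wrapped ip_lib (visible in the surrounding privsep replies), not direct pyroute2 calls like these:

    from pyroute2 import IPRoute, netns

    NS = "ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202"
    netns.create(NS)  # raises OSError if the namespace already exists

    ipr = IPRoute()
    # Create the veth pair; tap77f3ccc8-b0 stays in the root namespace to be
    # plugged into br-int, tap77f3ccc8-b1 is pushed into the metadata
    # namespace, matching the agent's "Creating VETH" line above.
    ipr.link("add", ifname="tap77f3ccc8-b0", kind="veth",
             peer="tap77f3ccc8-b1")
    idx = ipr.link_lookup(ifname="tap77f3ccc8-b1")[0]
    ipr.link("set", index=idx, net_ns_fd=NS)
    ipr.link("set", index=ipr.link_lookup(ifname="tap77f3ccc8-b0")[0],
             state="up")
    ipr.close()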
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.013 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77f3ccc8-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.013 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bb89e21c-580a-4ce9-b387-6d44ef2e5671]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.015 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fa4d2a-1f5d-498b-8e97-93fb0908b26e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 NetworkManager[49031]: <info>  [1765005701.0235] device (tapea6e84b9-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:21:41 compute-1 NetworkManager[49031]: <info>  [1765005701.0246] device (tapea6e84b9-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:21:41 compute-1 systemd-machined[190302]: New machine qemu-42-instance-0000005b.
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.030 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[665c50ac-1e20-4cc0-9e16-fdd0c0ecf9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:21:41 compute-1 systemd[1]: Started Virtual Machine qemu-42-instance-0000005b.
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.054 226109 DEBUG os_brick.utils [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.055 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.063 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7854e597-3829-4e38-a140-4e66d3348591]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.066 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:41 compute-1 ovn_controller[130279]: 2025-12-06T07:21:41Z|00353|binding|INFO|Setting lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 ovn-installed in OVS
Dec 06 07:21:41 compute-1 ovn_controller[130279]: 2025-12-06T07:21:41Z|00354|binding|INFO|Setting lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 up in Southbound
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.072 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.072 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[f80dbea4-907f-415d-96ee-3c9bc6c2c676]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.079 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.089 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.090 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[01112eb9-5aad-4b80-b520-3d70ca517875]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.091 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.102 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c49a18dd-467f-4b2b-aa8e-b44014d845b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.102 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.103 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[d9416780-efdb-43c2-9550-52624d76b553]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.104 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4b33c5-c757-46f2-860f-4f0f61731d5d]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.105 226109 DEBUG oslo_concurrency.processutils [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:41 compute-1 systemd-udevd[258779]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:21:41 compute-1 NetworkManager[49031]: <info>  [1765005701.1113] manager: (tap77f3ccc8-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/166)
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.113 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[00ded4de-b722-4123-9932-22c6512af516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.142 226109 DEBUG oslo_concurrency.processutils [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.145 226109 DEBUG os_brick.initiator.connectors.lightos [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.145 226109 DEBUG os_brick.initiator.connectors.lightos [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.146 226109 DEBUG os_brick.initiator.connectors.lightos [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.146 226109 DEBUG os_brick.utils [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
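get_connector_properties, traced above with the ==>/<== markers, is public os-brick API; the multipathd, initiatorname, findmnt, and nvme probes in the preceding lines are what it runs under the hood. A minimal sketch with the arguments shown in the call trace:

    from os_brick.initiator import connector

    # Arguments copied from the "==> get_connector_properties: call" line.
    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.101",
        multipath=True,
        enforce_multipath=True,
        host="compute-1.ctlplane.example.com",
    )
    # Yields the dict in the "<== ... return" line: iSCSI initiator IQN,
    # NVMe host NQN, multipath flags, system uuid, and so on.
    print(props["initiator"], props["nqn"])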
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.146 226109 DEBUG nova.virt.block_device [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Updating existing volume attachment record: cf440460-4f1e-4d37-b8a7-a54a1867bde1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.152 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d710f5ed-5483-4b37-bfcd-421bb9523f58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.155 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[13ea09a0-cab7-4969-b931-5949bf26685f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/311766206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:41 compute-1 ceph-mon[81689]: pgmap v1914: 305 pgs: 305 active+clean; 306 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 146 op/s
Dec 06 07:21:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3014881078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:41 compute-1 NetworkManager[49031]: <info>  [1765005701.1866] device (tap77f3ccc8-b0): carrier: link connected
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.193 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[342f5f81-13fe-414b-ae7a-744253d4d720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.212 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[39c8d78b-eda2-4494-9d64-ce741e2d3bac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77f3ccc8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:76:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598069, 'reachable_time': 32689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258816, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.230 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d07af2c9-252d-4d16-9c23-4a56480b7e74]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:769c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598069, 'tstamp': 598069}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258817, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.251 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[80f06261-677e-44b7-9793-7dd101b4ebd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77f3ccc8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:76:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598069, 'reachable_time': 32689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258818, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.291 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d7796a75-b0e5-47c2-aa6c-488a8c2bd4e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.300 226109 DEBUG nova.compute.manager [req-3da223df-b0c2-48b8-97d1-60afa4632980 req-c1509146-cef8-40ea-aa8d-4209ceb48f4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.300 226109 DEBUG oslo_concurrency.lockutils [req-3da223df-b0c2-48b8-97d1-60afa4632980 req-c1509146-cef8-40ea-aa8d-4209ceb48f4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.300 226109 DEBUG oslo_concurrency.lockutils [req-3da223df-b0c2-48b8-97d1-60afa4632980 req-c1509146-cef8-40ea-aa8d-4209ceb48f4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.301 226109 DEBUG oslo_concurrency.lockutils [req-3da223df-b0c2-48b8-97d1-60afa4632980 req-c1509146-cef8-40ea-aa8d-4209ceb48f4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.301 226109 DEBUG nova.compute.manager [req-3da223df-b0c2-48b8-97d1-60afa4632980 req-c1509146-cef8-40ea-aa8d-4209ceb48f4e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Processing event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.365 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf5f431-43cf-4ed6-9699-eb9067067726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.366 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77f3ccc8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.366 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.367 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77f3ccc8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:41 compute-1 NetworkManager[49031]: <info>  [1765005701.3697] manager: (tap77f3ccc8-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.369 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:41 compute-1 kernel: tap77f3ccc8-b0: entered promiscuous mode
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.372 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77f3ccc8-b0, col_values=(('external_ids', {'iface-id': '556411fa-e8d1-4a0d-8496-4416c2200434'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:41 compute-1 ovn_controller[130279]: 2025-12-06T07:21:41Z|00355|binding|INFO|Releasing lport 556411fa-e8d1-4a0d-8496-4416c2200434 from this chassis (sb_readonly=0)
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.389 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.390 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77f3ccc8-bb54-46a1-b015-cc5b8a445202.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77f3ccc8-bb54-46a1-b015-cc5b8a445202.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.391 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a104f108-bd38-4b48-8e92-f7f6e25b7e8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.393 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-77f3ccc8-bb54-46a1-b015-cc5b8a445202
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/77f3ccc8-bb54-46a1-b015-cc5b8a445202.pid.haproxy
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 77f3ccc8-bb54-46a1-b015-cc5b8a445202
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:21:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:41.395 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'env', 'PROCESS_TAG=haproxy-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77f3ccc8-bb54-46a1-b015-cc5b8a445202.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.821 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005701.8208213, b6dd6db5-9860-4cfb-81d1-f52f900fd4c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.821 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] VM Started (Lifecycle Event)
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.823 226109 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.829 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.833 226109 INFO nova.virt.libvirt.driver [-] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Instance spawned successfully.
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.834 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:21:41 compute-1 podman[258884]: 2025-12-06 07:21:41.73767167 +0000 UTC m=+0.023392936 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.843 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.848 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:41 compute-1 podman[258884]: 2025-12-06 07:21:41.857191252 +0000 UTC m=+0.142912498 container create 4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.858 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.859 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.859 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.860 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.860 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.860 226109 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.889 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.890 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005701.82102, b6dd6db5-9860-4cfb-81d1-f52f900fd4c8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.890 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] VM Paused (Lifecycle Event)
Dec 06 07:21:41 compute-1 systemd[1]: Started libpod-conmon-4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff.scope.
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.913 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.918 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005701.8305774, b6dd6db5-9860-4cfb-81d1-f52f900fd4c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.918 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] VM Resumed (Lifecycle Event)
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.935 226109 INFO nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Took 7.98 seconds to spawn the instance on the hypervisor.
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.936 226109 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:41 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:21:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c9d2bc8ae67ad227ca98fac34d60ca8574231e0b59ed7bac5b18f8c442cebc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.949 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.953 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:41 compute-1 podman[258884]: 2025-12-06 07:21:41.964542504 +0000 UTC m=+0.250263780 container init 4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:21:41 compute-1 podman[258884]: 2025-12-06 07:21:41.970841314 +0000 UTC m=+0.256562560 container start 4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.979 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:21:41 compute-1 nova_compute[226101]: 2025-12-06 07:21:41.989 226109 DEBUG nova.objects.instance [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'flavor' on Instance uuid d9b5a381-4236-4a72-bedd-3fe7eca161bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:42 compute-1 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[258906]: [NOTICE]   (258910) : New worker (258912) forked
Dec 06 07:21:42 compute-1 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[258906]: [NOTICE]   (258910) : Loading success.
Dec 06 07:21:42 compute-1 nova_compute[226101]: 2025-12-06 07:21:42.033 226109 INFO nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Took 9.64 seconds to build instance.
Dec 06 07:21:42 compute-1 nova_compute[226101]: 2025-12-06 07:21:42.046 226109 DEBUG nova.virt.libvirt.driver [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Attempting to attach volume 47e6e09c-3fa4-4d8f-9362-4fced2e03fd2 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:21:42 compute-1 nova_compute[226101]: 2025-12-06 07:21:42.050 226109 DEBUG nova.virt.libvirt.guest [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:21:42 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:21:42 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-47e6e09c-3fa4-4d8f-9362-4fced2e03fd2">
Dec 06 07:21:42 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:21:42 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:21:42 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:21:42 compute-1 nova_compute[226101]:   </source>
Dec 06 07:21:42 compute-1 nova_compute[226101]:   <auth username="openstack">
Dec 06 07:21:42 compute-1 nova_compute[226101]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:21:42 compute-1 nova_compute[226101]:   </auth>
Dec 06 07:21:42 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:21:42 compute-1 nova_compute[226101]:   <serial>47e6e09c-3fa4-4d8f-9362-4fced2e03fd2</serial>
Dec 06 07:21:42 compute-1 nova_compute[226101]: </disk>
Dec 06 07:21:42 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:21:42 compute-1 nova_compute[226101]: 2025-12-06 07:21:42.074 226109 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:42.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:42.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/126457107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:43 compute-1 ceph-mon[81689]: pgmap v1915: 305 pgs: 305 active+clean; 280 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 6.1 MiB/s wr, 215 op/s
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.426 226109 DEBUG nova.compute.manager [req-b60f7969-b460-4aca-bf21-6e9e514d0e6d req-e7ffeb66-c184-4f48-8be2-7f658dae5f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.426 226109 DEBUG oslo_concurrency.lockutils [req-b60f7969-b460-4aca-bf21-6e9e514d0e6d req-e7ffeb66-c184-4f48-8be2-7f658dae5f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.427 226109 DEBUG oslo_concurrency.lockutils [req-b60f7969-b460-4aca-bf21-6e9e514d0e6d req-e7ffeb66-c184-4f48-8be2-7f658dae5f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.427 226109 DEBUG oslo_concurrency.lockutils [req-b60f7969-b460-4aca-bf21-6e9e514d0e6d req-e7ffeb66-c184-4f48-8be2-7f658dae5f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.427 226109 DEBUG nova.compute.manager [req-b60f7969-b460-4aca-bf21-6e9e514d0e6d req-e7ffeb66-c184-4f48-8be2-7f658dae5f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] No waiting events found dispatching network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.428 226109 WARNING nova.compute.manager [req-b60f7969-b460-4aca-bf21-6e9e514d0e6d req-e7ffeb66-c184-4f48-8be2-7f658dae5f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received unexpected event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 for instance with vm_state active and task_state None.
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.513 226109 DEBUG nova.virt.libvirt.driver [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.513 226109 DEBUG nova.virt.libvirt.driver [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.513 226109 DEBUG nova.virt.libvirt.driver [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.513 226109 DEBUG nova.virt.libvirt.driver [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No VIF found with MAC fa:16:3e:0a:6c:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.673 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:43 compute-1 nova_compute[226101]: 2025-12-06 07:21:43.745 226109 DEBUG oslo_concurrency.lockutils [None req-90e9c6d7-1dcb-4f53-9350-5551688e264b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:44 compute-1 ceph-mon[81689]: pgmap v1916: 305 pgs: 305 active+clean; 256 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.2 MiB/s wr, 251 op/s
Dec 06 07:21:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2109164090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:44 compute-1 nova_compute[226101]: 2025-12-06 07:21:44.680 226109 DEBUG oslo_concurrency.lockutils [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:44 compute-1 nova_compute[226101]: 2025-12-06 07:21:44.680 226109 DEBUG oslo_concurrency.lockutils [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:44 compute-1 nova_compute[226101]: 2025-12-06 07:21:44.680 226109 DEBUG oslo_concurrency.lockutils [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:44 compute-1 nova_compute[226101]: 2025-12-06 07:21:44.680 226109 DEBUG oslo_concurrency.lockutils [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:44 compute-1 nova_compute[226101]: 2025-12-06 07:21:44.681 226109 DEBUG oslo_concurrency.lockutils [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:44 compute-1 nova_compute[226101]: 2025-12-06 07:21:44.682 226109 INFO nova.compute.manager [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Terminating instance
Dec 06 07:21:44 compute-1 nova_compute[226101]: 2025-12-06 07:21:44.683 226109 DEBUG nova.compute.manager [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:21:44 compute-1 kernel: tapc511139b-37 (unregistering): left promiscuous mode
Dec 06 07:21:44 compute-1 NetworkManager[49031]: <info>  [1765005704.8671] device (tapc511139b-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:21:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:44.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:44 compute-1 ovn_controller[130279]: 2025-12-06T07:21:44Z|00356|binding|INFO|Releasing lport c511139b-3716-4c0f-90d2-31f2ce3ba4cd from this chassis (sb_readonly=0)
Dec 06 07:21:44 compute-1 ovn_controller[130279]: 2025-12-06T07:21:44Z|00357|binding|INFO|Setting lport c511139b-3716-4c0f-90d2-31f2ce3ba4cd down in Southbound
Dec 06 07:21:44 compute-1 ovn_controller[130279]: 2025-12-06T07:21:44Z|00358|binding|INFO|Removing iface tapc511139b-37 ovn-installed in OVS
Dec 06 07:21:44 compute-1 nova_compute[226101]: 2025-12-06 07:21:44.875 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:44.880 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:6c:18 10.100.0.8'], port_security=['fa:16:3e:0a:6c:18 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd9b5a381-4236-4a72-bedd-3fe7eca161bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '4', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=c511139b-3716-4c0f-90d2-31f2ce3ba4cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:44.881 139580 INFO neutron.agent.ovn.metadata.agent [-] Port c511139b-3716-4c0f-90d2-31f2ce3ba4cd in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 unbound from our chassis
Dec 06 07:21:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:44.884 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:21:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:44.885 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[54f53e58-5d2b-4f2b-9ba5-dea611b39edd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:44.887 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace which is not needed anymore
Dec 06 07:21:44 compute-1 nova_compute[226101]: 2025-12-06 07:21:44.892 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:44 compute-1 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000058.scope: Deactivated successfully.
Dec 06 07:21:44 compute-1 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000058.scope: Consumed 8.758s CPU time.
Dec 06 07:21:44 compute-1 systemd-machined[190302]: Machine qemu-41-instance-00000058 terminated.
Dec 06 07:21:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:44.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.118 226109 INFO nova.virt.libvirt.driver [-] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Instance destroyed successfully.
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.118 226109 DEBUG nova.objects.instance [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'resources' on Instance uuid d9b5a381-4236-4a72-bedd-3fe7eca161bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.138 226109 DEBUG nova.virt.libvirt.vif [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1128131811',display_name='tempest-DeleteServersTestJSON-server-1128131811',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1128131811',id=88,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:21:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-ejh4rcdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:35Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=d9b5a381-4236-4a72-bedd-3fe7eca161bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "address": "fa:16:3e:0a:6c:18", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc511139b-37", "ovs_interfaceid": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.139 226109 DEBUG nova.network.os_vif_util [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "address": "fa:16:3e:0a:6c:18", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc511139b-37", "ovs_interfaceid": "c511139b-3716-4c0f-90d2-31f2ce3ba4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.139 226109 DEBUG nova.network.os_vif_util [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:6c:18,bridge_name='br-int',has_traffic_filtering=True,id=c511139b-3716-4c0f-90d2-31f2ce3ba4cd,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc511139b-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.140 226109 DEBUG os_vif [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:6c:18,bridge_name='br-int',has_traffic_filtering=True,id=c511139b-3716-4c0f-90d2-31f2ce3ba4cd,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc511139b-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.143 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.143 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc511139b-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.145 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.147 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.149 226109 INFO os_vif [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:6c:18,bridge_name='br-int',has_traffic_filtering=True,id=c511139b-3716-4c0f-90d2-31f2ce3ba4cd,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc511139b-37')
Dec 06 07:21:45 compute-1 nova_compute[226101]: 2025-12-06 07:21:45.426 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:45 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[258552]: [NOTICE]   (258556) : haproxy version is 2.8.14-c23fe91
Dec 06 07:21:45 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[258552]: [NOTICE]   (258556) : path to executable is /usr/sbin/haproxy
Dec 06 07:21:45 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[258552]: [WARNING]  (258556) : Exiting Master process...
Dec 06 07:21:45 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[258552]: [ALERT]    (258556) : Current worker (258558) exited with code 143 (Terminated)
Dec 06 07:21:45 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[258552]: [WARNING]  (258556) : All workers exited. Exiting... (0)
Dec 06 07:21:45 compute-1 systemd[1]: libpod-9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350.scope: Deactivated successfully.
Dec 06 07:21:45 compute-1 podman[258964]: 2025-12-06 07:21:45.451822328 +0000 UTC m=+0.477448513 container died 9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.293 226109 DEBUG nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Received event network-vif-unplugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.293 226109 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.293 226109 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.294 226109 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.294 226109 DEBUG nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] No waiting events found dispatching network-vif-unplugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.294 226109 DEBUG nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Received event network-vif-unplugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.294 226109 DEBUG nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Received event network-vif-plugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.294 226109 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.295 226109 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.295 226109 DEBUG oslo_concurrency.lockutils [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.295 226109 DEBUG nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] No waiting events found dispatching network-vif-plugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:46 compute-1 nova_compute[226101]: 2025-12-06 07:21:46.295 226109 WARNING nova.compute.manager [req-ea43d9a9-bcc3-4086-9eda-bead5a535067 req-7990fef3-d8b8-4cc4-8c72-2dbecf822100 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Received unexpected event network-vif-plugged-c511139b-3716-4c0f-90d2-31f2ce3ba4cd for instance with vm_state active and task_state deleting.
Dec 06 07:21:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1530936474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4245318948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:46 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350-userdata-shm.mount: Deactivated successfully.
Dec 06 07:21:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-9d22399957cc92cdd6eb1e83b56642f3a9eb70ee162ae31936c129cadc8fecd4-merged.mount: Deactivated successfully.
Dec 06 07:21:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:46.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:46.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:47 compute-1 podman[258964]: 2025-12-06 07:21:47.66753356 +0000 UTC m=+2.693159735 container cleanup 9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:21:47 compute-1 systemd[1]: libpod-conmon-9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350.scope: Deactivated successfully.
Dec 06 07:21:47 compute-1 ceph-mon[81689]: pgmap v1917: 305 pgs: 305 active+clean; 273 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.1 MiB/s wr, 377 op/s
Dec 06 07:21:48 compute-1 podman[259027]: 2025-12-06 07:21:48.103678471 +0000 UTC m=+0.414338561 container remove 9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:21:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:48.110 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f9159632-dae5-4904-8ff5-68a9a08e5635]: (4, ('Sat Dec  6 07:21:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350)\n9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350\nSat Dec  6 07:21:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350)\n9dfa3d1b7e887dee877299db50e46f7f08f0cab3d713b252b0ab5650ba210350\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:48.112 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e9050352-1ffe-4f0e-a619-1d73b7729542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:48.113 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:48 compute-1 nova_compute[226101]: 2025-12-06 07:21:48.114 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:48 compute-1 kernel: tap85cfbf28-70: left promiscuous mode
Dec 06 07:21:48 compute-1 nova_compute[226101]: 2025-12-06 07:21:48.131 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:48.136 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4114ce-a47e-428c-a148-02efa44f9567]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:48.150 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[64dec308-b45d-40ad-bdea-c39d82bb538d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:48.152 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8a91c1-1bb1-4823-80c8-422f33fa6caf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:48.175 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7a408e-fc89-409d-9f53-a3fb429e1141]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597306, 'reachable_time': 39234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259041, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:48 compute-1 systemd[1]: run-netns-ovnmeta\x2d85cfbf28\x2d7016\x2d4776\x2d8fc2\x2d2eb08a6b8347.mount: Deactivated successfully.
Dec 06 07:21:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:48.179 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:21:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:21:48.179 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[8e126acf-5b94-436b-9255-5f71b9ff6851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:48.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:48.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:49 compute-1 ceph-mon[81689]: pgmap v1918: 305 pgs: 305 active+clean; 273 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 4.2 MiB/s wr, 411 op/s
Dec 06 07:21:50 compute-1 nova_compute[226101]: 2025-12-06 07:21:50.147 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:50 compute-1 nova_compute[226101]: 2025-12-06 07:21:50.200 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:50.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:50.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:51 compute-1 ceph-mon[81689]: pgmap v1919: 305 pgs: 305 active+clean; 273 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 1.8 MiB/s wr, 319 op/s
Dec 06 07:21:51 compute-1 nova_compute[226101]: 2025-12-06 07:21:51.666 226109 INFO nova.virt.libvirt.driver [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Deleting instance files /var/lib/nova/instances/d9b5a381-4236-4a72-bedd-3fe7eca161bc_del
Dec 06 07:21:51 compute-1 nova_compute[226101]: 2025-12-06 07:21:51.667 226109 INFO nova.virt.libvirt.driver [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Deletion of /var/lib/nova/instances/d9b5a381-4236-4a72-bedd-3fe7eca161bc_del complete
Dec 06 07:21:51 compute-1 nova_compute[226101]: 2025-12-06 07:21:51.764 226109 INFO nova.compute.manager [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Took 7.08 seconds to destroy the instance on the hypervisor.
Dec 06 07:21:51 compute-1 nova_compute[226101]: 2025-12-06 07:21:51.764 226109 DEBUG oslo.service.loopingcall [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:21:51 compute-1 nova_compute[226101]: 2025-12-06 07:21:51.765 226109 DEBUG nova.compute.manager [-] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:21:51 compute-1 nova_compute[226101]: 2025-12-06 07:21:51.765 226109 DEBUG nova.network.neutron [-] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:21:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4182883929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:52 compute-1 ceph-mon[81689]: pgmap v1920: 305 pgs: 305 active+clean; 211 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 1.8 MiB/s wr, 409 op/s
Dec 06 07:21:52 compute-1 nova_compute[226101]: 2025-12-06 07:21:52.634 226109 DEBUG nova.network.neutron [-] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:52 compute-1 nova_compute[226101]: 2025-12-06 07:21:52.651 226109 INFO nova.compute.manager [-] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Took 0.89 seconds to deallocate network for instance.
Dec 06 07:21:52 compute-1 nova_compute[226101]: 2025-12-06 07:21:52.726 226109 DEBUG nova.compute.manager [req-4eadd1b5-c894-45a8-9946-6aef4c111c13 req-414ebc9e-63de-40c7-9632-fd7b35d45395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Received event network-vif-deleted-c511139b-3716-4c0f-90d2-31f2ce3ba4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:52 compute-1 nova_compute[226101]: 2025-12-06 07:21:52.877 226109 INFO nova.compute.manager [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Took 0.23 seconds to detach 1 volumes for instance.
Dec 06 07:21:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:52.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:52 compute-1 nova_compute[226101]: 2025-12-06 07:21:52.930 226109 DEBUG oslo_concurrency.lockutils [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:52 compute-1 nova_compute[226101]: 2025-12-06 07:21:52.931 226109 DEBUG oslo_concurrency.lockutils [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:52.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:52 compute-1 nova_compute[226101]: 2025-12-06 07:21:52.995 226109 DEBUG oslo_concurrency.processutils [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:21:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/295510486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:53 compute-1 nova_compute[226101]: 2025-12-06 07:21:53.517 226109 DEBUG oslo_concurrency.processutils [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:53 compute-1 nova_compute[226101]: 2025-12-06 07:21:53.523 226109 DEBUG nova.compute.provider_tree [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:21:53 compute-1 nova_compute[226101]: 2025-12-06 07:21:53.540 226109 DEBUG nova.scheduler.client.report [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:21:53 compute-1 nova_compute[226101]: 2025-12-06 07:21:53.562 226109 DEBUG oslo_concurrency.lockutils [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:53 compute-1 nova_compute[226101]: 2025-12-06 07:21:53.591 226109 INFO nova.scheduler.client.report [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Deleted allocations for instance d9b5a381-4236-4a72-bedd-3fe7eca161bc
Dec 06 07:21:53 compute-1 nova_compute[226101]: 2025-12-06 07:21:53.650 226109 DEBUG oslo_concurrency.lockutils [None req-e12fc373-8b90-4bdb-b1f9-17ae13cad292 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "d9b5a381-4236-4a72-bedd-3fe7eca161bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/295510486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:54.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:54.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:55 compute-1 nova_compute[226101]: 2025-12-06 07:21:55.149 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:55 compute-1 nova_compute[226101]: 2025-12-06 07:21:55.200 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:55 compute-1 nova_compute[226101]: 2025-12-06 07:21:55.509 226109 DEBUG oslo_concurrency.lockutils [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:55 compute-1 nova_compute[226101]: 2025-12-06 07:21:55.509 226109 DEBUG oslo_concurrency.lockutils [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:55 compute-1 nova_compute[226101]: 2025-12-06 07:21:55.510 226109 DEBUG oslo_concurrency.lockutils [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:55 compute-1 nova_compute[226101]: 2025-12-06 07:21:55.510 226109 DEBUG oslo_concurrency.lockutils [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:55 compute-1 nova_compute[226101]: 2025-12-06 07:21:55.510 226109 DEBUG oslo_concurrency.lockutils [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:55 compute-1 nova_compute[226101]: 2025-12-06 07:21:55.511 226109 INFO nova.compute.manager [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Terminating instance
Dec 06 07:21:55 compute-1 nova_compute[226101]: 2025-12-06 07:21:55.512 226109 DEBUG nova.compute.manager [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:21:56 compute-1 podman[259069]: 2025-12-06 07:21:56.166146521 +0000 UTC m=+0.140334409 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:21:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:56.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:56.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:57 compute-1 nova_compute[226101]: 2025-12-06 07:21:57.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:57 compute-1 nova_compute[226101]: 2025-12-06 07:21:57.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:21:57 compute-1 nova_compute[226101]: 2025-12-06 07:21:57.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:21:57 compute-1 nova_compute[226101]: 2025-12-06 07:21:57.614 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 06 07:21:57 compute-1 nova_compute[226101]: 2025-12-06 07:21:57.615 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:21:57 compute-1 nova_compute[226101]: 2025-12-06 07:21:57.615 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:57 compute-1 nova_compute[226101]: 2025-12-06 07:21:57.636 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:57 compute-1 nova_compute[226101]: 2025-12-06 07:21:57.636 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:57 compute-1 nova_compute[226101]: 2025-12-06 07:21:57.636 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:57 compute-1 nova_compute[226101]: 2025-12-06 07:21:57.636 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:21:57 compute-1 nova_compute[226101]: 2025-12-06 07:21:57.636 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:58.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:21:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:58.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:00 compute-1 nova_compute[226101]: 2025-12-06 07:22:00.117 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005705.115892, d9b5a381-4236-4a72-bedd-3fe7eca161bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:00 compute-1 nova_compute[226101]: 2025-12-06 07:22:00.118 226109 INFO nova.compute.manager [-] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] VM Stopped (Lifecycle Event)
Dec 06 07:22:00 compute-1 nova_compute[226101]: 2025-12-06 07:22:00.153 226109 DEBUG nova.compute.manager [None req-f7253a1f-6e18-4641-9728-4d729a706794 - - - - - -] [instance: d9b5a381-4236-4a72-bedd-3fe7eca161bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:00 compute-1 nova_compute[226101]: 2025-12-06 07:22:00.154 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:00 compute-1 nova_compute[226101]: 2025-12-06 07:22:00.202 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:00.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:00 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 6.062458992s
Dec 06 07:22:00 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 6.062042713s
Dec 06 07:22:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:00.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:01 compute-1 podman[259108]: 2025-12-06 07:22:01.067404469 +0000 UTC m=+0.053827341 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:22:01 compute-1 podman[259109]: 2025-12-06 07:22:01.088171232 +0000 UTC m=+0.064164501 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:22:01 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:22:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:01.638 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:01.639 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:01.641 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:02 compute-1 ceph-mon[81689]: pgmap v1921: 305 pgs: 305 active+clean; 181 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 1.3 MiB/s wr, 364 op/s
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.690196037s
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.748894215s, txc = 0x55b55387c000
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.748116970s, txc = 0x55b553a65500
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.747332096s, txc = 0x55b553510300
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.707353592s, txc = 0x55b55395e300
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.598544598s, txc = 0x55b5547dcc00
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.597258568s, txc = 0x55b552a41800
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.596543312s, txc = 0x55b5526f4f00
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.595772743s, txc = 0x55b5538d3b00
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.595279217s, txc = 0x55b552b1f500
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.594916344s, txc = 0x55b552be4000
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.594634056s, txc = 0x55b5539c3500
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.594254017s, txc = 0x55b5546b0000
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.593992710s, txc = 0x55b552764000
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.593751907s, txc = 0x55b5535e2000
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.593311787s, txc = 0x55b55387c600
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.591164112s, txc = 0x55b552c94600
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.570933819s, txc = 0x55b552de1500
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.568391800s, txc = 0x55b5535c4c00
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.518134117s, txc = 0x55b55391a600
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.516536236s, txc = 0x55b553ccc600
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.515519142s, txc = 0x55b5535d0000
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.515571117s, txc = 0x55b552b40300
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.464084625s, txc = 0x55b552be4f00
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.454565048s, txc = 0x55b553510f00
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.304744244s, txc = 0x55b5538f9200
Dec 06 07:22:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 3264..3924) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 0.464719445s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec 06 07:22:02 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.823511124s, txc = 0x55b552d0ef00
Dec 06 07:22:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.022000583s ======
Dec 06 07:22:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:02.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.022000583s
Dec 06 07:22:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:02.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:04 compute-1 ceph-mon[81689]: pgmap v1922: 305 pgs: 305 active+clean; 187 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 1.8 MiB/s wr, 331 op/s
Dec 06 07:22:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:04.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:04 compute-1 kernel: tapea6e84b9-0f (unregistering): left promiscuous mode
Dec 06 07:22:04 compute-1 NetworkManager[49031]: <info>  [1765005724.9414] device (tapea6e84b9-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:22:04 compute-1 ovn_controller[130279]: 2025-12-06T07:22:04Z|00359|binding|INFO|Releasing lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 from this chassis (sb_readonly=0)
Dec 06 07:22:04 compute-1 ovn_controller[130279]: 2025-12-06T07:22:04Z|00360|binding|INFO|Setting lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 down in Southbound
Dec 06 07:22:04 compute-1 nova_compute[226101]: 2025-12-06 07:22:04.950 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:04 compute-1 ovn_controller[130279]: 2025-12-06T07:22:04Z|00361|binding|INFO|Removing iface tapea6e84b9-0f ovn-installed in OVS
Dec 06 07:22:04 compute-1 nova_compute[226101]: 2025-12-06 07:22:04.954 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:04.959 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:ac:69 10.100.0.9'], port_security=['fa:16:3e:cc:ac:69 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6dd6db5-9860-4cfb-81d1-f52f900fd4c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b558585a6aa14470bdad319926a98046', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0eb8f52e-2f68-4151-8464-0d3b0eb6798f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c0e6fde-267f-49e6-86d0-b0c0ced92a7c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=ea6e84b9-0f0b-4b71-8b33-d343caef7863) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:04.961 139580 INFO neutron.agent.ovn.metadata.agent [-] Port ea6e84b9-0f0b-4b71-8b33-d343caef7863 in datapath 77f3ccc8-bb54-46a1-b015-cc5b8a445202 unbound from our chassis
Dec 06 07:22:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:04.963 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77f3ccc8-bb54-46a1-b015-cc5b8a445202, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:22:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:04.967 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee641db-3759-475a-8645-872d5232d114]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:04.968 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 namespace which is not needed anymore
Dec 06 07:22:04 compute-1 nova_compute[226101]: 2025-12-06 07:22:04.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:04.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:05 compute-1 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Dec 06 07:22:05 compute-1 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005b.scope: Consumed 12.953s CPU time.
Dec 06 07:22:05 compute-1 systemd-machined[190302]: Machine qemu-42-instance-0000005b terminated.
Dec 06 07:22:05 compute-1 kernel: tapea6e84b9-0f: entered promiscuous mode
Dec 06 07:22:05 compute-1 systemd-udevd[259163]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:22:05 compute-1 NetworkManager[49031]: <info>  [1765005725.1406] manager: (tapea6e84b9-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/168)
Dec 06 07:22:05 compute-1 ovn_controller[130279]: 2025-12-06T07:22:05Z|00362|binding|INFO|Claiming lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 for this chassis.
Dec 06 07:22:05 compute-1 ovn_controller[130279]: 2025-12-06T07:22:05Z|00363|binding|INFO|ea6e84b9-0f0b-4b71-8b33-d343caef7863: Claiming fa:16:3e:cc:ac:69 10.100.0.9
Dec 06 07:22:05 compute-1 kernel: tapea6e84b9-0f (unregistering): left promiscuous mode
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.142 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:05.146 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:ac:69 10.100.0.9'], port_security=['fa:16:3e:cc:ac:69 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6dd6db5-9860-4cfb-81d1-f52f900fd4c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b558585a6aa14470bdad319926a98046', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0eb8f52e-2f68-4151-8464-0d3b0eb6798f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c0e6fde-267f-49e6-86d0-b0c0ced92a7c, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=ea6e84b9-0f0b-4b71-8b33-d343caef7863) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.168 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:05 compute-1 ovn_controller[130279]: 2025-12-06T07:22:05Z|00364|binding|INFO|Setting lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 ovn-installed in OVS
Dec 06 07:22:05 compute-1 ovn_controller[130279]: 2025-12-06T07:22:05Z|00365|binding|INFO|Setting lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 up in Southbound
Dec 06 07:22:05 compute-1 ovn_controller[130279]: 2025-12-06T07:22:05Z|00366|binding|INFO|Releasing lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 from this chassis (sb_readonly=1)
Dec 06 07:22:05 compute-1 ovn_controller[130279]: 2025-12-06T07:22:05Z|00367|if_status|INFO|Not setting lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 down as sb is readonly
Dec 06 07:22:05 compute-1 ovn_controller[130279]: 2025-12-06T07:22:05Z|00368|binding|INFO|Removing iface tapea6e84b9-0f ovn-installed in OVS
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.200 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:05 compute-1 ovn_controller[130279]: 2025-12-06T07:22:05Z|00369|binding|INFO|Releasing lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 from this chassis (sb_readonly=0)
Dec 06 07:22:05 compute-1 ovn_controller[130279]: 2025-12-06T07:22:05Z|00370|binding|INFO|Setting lport ea6e84b9-0f0b-4b71-8b33-d343caef7863 down in Southbound
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.202 226109 INFO nova.virt.libvirt.driver [-] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Instance destroyed successfully.
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.203 226109 DEBUG nova.objects.instance [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lazy-loading 'resources' on Instance uuid b6dd6db5-9860-4cfb-81d1-f52f900fd4c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.203 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:05.210 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:ac:69 10.100.0.9'], port_security=['fa:16:3e:cc:ac:69 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6dd6db5-9860-4cfb-81d1-f52f900fd4c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b558585a6aa14470bdad319926a98046', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0eb8f52e-2f68-4151-8464-0d3b0eb6798f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c0e6fde-267f-49e6-86d0-b0c0ced92a7c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=ea6e84b9-0f0b-4b71-8b33-d343caef7863) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.217 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.219 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.220 226109 DEBUG nova.virt.libvirt.vif [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1422412228',display_name='tempest-ListServersNegativeTestJSON-server-1422412228-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1422412228-3',id=91,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-12-06T07:21:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b558585a6aa14470bdad319926a98046',ramdisk_id='',reservation_id='r-46lv2gf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-179719916',owner_user_name='tempest-ListServersNegativeTestJSON-179719916-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:41Z,user_data=None,user_id='a52e2b4388994d8791443483bd42cc33',uuid=b6dd6db5-9860-4cfb-81d1-f52f900fd4c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "address": "fa:16:3e:cc:ac:69", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea6e84b9-0f", "ovs_interfaceid": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.221 226109 DEBUG nova.network.os_vif_util [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converting VIF {"id": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "address": "fa:16:3e:cc:ac:69", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea6e84b9-0f", "ovs_interfaceid": "ea6e84b9-0f0b-4b71-8b33-d343caef7863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.222 226109 DEBUG nova.network.os_vif_util [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:ac:69,bridge_name='br-int',has_traffic_filtering=True,id=ea6e84b9-0f0b-4b71-8b33-d343caef7863,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea6e84b9-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.222 226109 DEBUG os_vif [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:ac:69,bridge_name='br-int',has_traffic_filtering=True,id=ea6e84b9-0f0b-4b71-8b33-d343caef7863,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea6e84b9-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.224 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.225 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea6e84b9-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.226 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.229 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:22:05 compute-1 nova_compute[226101]: 2025-12-06 07:22:05.231 226109 INFO os_vif [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:ac:69,bridge_name='br-int',has_traffic_filtering=True,id=ea6e84b9-0f0b-4b71-8b33-d343caef7863,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea6e84b9-0f')
Dec 06 07:22:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/597978741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:05 compute-1 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[258906]: [NOTICE]   (258910) : haproxy version is 2.8.14-c23fe91
Dec 06 07:22:05 compute-1 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[258906]: [NOTICE]   (258910) : path to executable is /usr/sbin/haproxy
Dec 06 07:22:05 compute-1 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[258906]: [WARNING]  (258910) : Exiting Master process...
Dec 06 07:22:05 compute-1 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[258906]: [ALERT]    (258910) : Current worker (258912) exited with code 143 (Terminated)
Dec 06 07:22:05 compute-1 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[258906]: [WARNING]  (258910) : All workers exited. Exiting... (0)
Dec 06 07:22:05 compute-1 systemd[1]: libpod-4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff.scope: Deactivated successfully.
Dec 06 07:22:05 compute-1 podman[259182]: 2025-12-06 07:22:05.683356828 +0000 UTC m=+0.612097284 container died 4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:22:05 compute-1 ceph-mon[81689]: pgmap v1923: 305 pgs: 305 active+clean; 194 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.2 MiB/s wr, 209 op/s
Dec 06 07:22:05 compute-1 ceph-mon[81689]: pgmap v1924: 305 pgs: 305 active+clean; 194 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 135 op/s
Dec 06 07:22:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1874194034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2594007488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:05 compute-1 ceph-mon[81689]: pgmap v1925: 305 pgs: 305 active+clean; 194 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Dec 06 07:22:05 compute-1 ceph-mon[81689]: pgmap v1926: 305 pgs: 305 active+clean; 194 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 450 KiB/s rd, 1.4 MiB/s wr, 56 op/s
Dec 06 07:22:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2984823432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.180 226109 DEBUG nova.compute.manager [req-7d464faa-b8a2-45dd-a268-3b35c1d0c890 req-23012e11-eb3b-4778-91d4-170475fe2e00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-vif-unplugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.181 226109 DEBUG oslo_concurrency.lockutils [req-7d464faa-b8a2-45dd-a268-3b35c1d0c890 req-23012e11-eb3b-4778-91d4-170475fe2e00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.181 226109 DEBUG oslo_concurrency.lockutils [req-7d464faa-b8a2-45dd-a268-3b35c1d0c890 req-23012e11-eb3b-4778-91d4-170475fe2e00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.181 226109 DEBUG oslo_concurrency.lockutils [req-7d464faa-b8a2-45dd-a268-3b35c1d0c890 req-23012e11-eb3b-4778-91d4-170475fe2e00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.181 226109 DEBUG nova.compute.manager [req-7d464faa-b8a2-45dd-a268-3b35c1d0c890 req-23012e11-eb3b-4778-91d4-170475fe2e00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] No waiting events found dispatching network-vif-unplugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.182 226109 DEBUG nova.compute.manager [req-7d464faa-b8a2-45dd-a268-3b35c1d0c890 req-23012e11-eb3b-4778-91d4-170475fe2e00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-vif-unplugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.182 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.183 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Acquiring lock "f9492565-fe30-4766-8a95-1a0f213da9ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.183 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.292 226109 DEBUG nova.compute.manager [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:22:06 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff-userdata-shm.mount: Deactivated successfully.
Dec 06 07:22:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-d2c9d2bc8ae67ad227ca98fac34d60ca8574231e0b59ed7bac5b18f8c442cebc-merged.mount: Deactivated successfully.
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.405 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.406 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.438 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.439 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.445 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.446 226109 INFO nova.compute.claims [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.584 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.656 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.657 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4610MB free_disk=20.90306854248047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:22:06 compute-1 nova_compute[226101]: 2025-12-06 07:22:06.657 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:06 compute-1 podman[259182]: 2025-12-06 07:22:06.709465882 +0000 UTC m=+1.638206338 container cleanup 4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:22:06 compute-1 systemd[1]: libpod-conmon-4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff.scope: Deactivated successfully.
Dec 06 07:22:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/597978741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:06 compute-1 ceph-mon[81689]: pgmap v1927: 305 pgs: 305 active+clean; 219 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 2.0 MiB/s wr, 39 op/s
Dec 06 07:22:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/116807286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/948365744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:06.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:06.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:07 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/787302535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:07 compute-1 podman[259251]: 2025-12-06 07:22:07.748784834 +0000 UTC m=+1.011409466 container remove 4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.758 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[be2521aa-2871-4402-a043-fd05e45f29cd]: (4, ('Sat Dec  6 07:22:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 (4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff)\n4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff\nSat Dec  6 07:22:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 (4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff)\n4dfefcaf92f41f123a6f51caff108a1e56d8d0952e01b9e5c0307d3cd77f37ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.761 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0993d7-97d0-4318-9355-68248a40b7a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.762 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77f3ccc8-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:07 compute-1 kernel: tap77f3ccc8-b0: left promiscuous mode
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.771 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.773 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.780 226109 DEBUG nova.compute.provider_tree [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.784 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2759816b-c7bd-4cd7-aaff-2f31ba266843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.797 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a00aa545-8a81-4808-bd9c-484ddfcb2d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.799 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e9fb4c95-20da-490d-9768-b73b3ae56cfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.801 226109 DEBUG nova.scheduler.client.report [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.813 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bc89085f-edce-4006-bcd9-e6f4583f29c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598060, 'reachable_time': 35317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259278, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:07 compute-1 systemd[1]: run-netns-ovnmeta\x2d77f3ccc8\x2dbb54\x2d46a1\x2db015\x2dcc5b8a445202.mount: Deactivated successfully.
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.816 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.816 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[abee8902-2362-436f-bfe1-725d5430c097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.817 139580 INFO neutron.agent.ovn.metadata.agent [-] Port ea6e84b9-0f0b-4b71-8b33-d343caef7863 in datapath 77f3ccc8-bb54-46a1-b015-cc5b8a445202 unbound from our chassis
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.818 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77f3ccc8-bb54-46a1-b015-cc5b8a445202, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.819 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a04cc357-0efc-412e-9427-5d4830a80b75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.819 139580 INFO neutron.agent.ovn.metadata.agent [-] Port ea6e84b9-0f0b-4b71-8b33-d343caef7863 in datapath 77f3ccc8-bb54-46a1-b015-cc5b8a445202 unbound from our chassis
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.820 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77f3ccc8-bb54-46a1-b015-cc5b8a445202, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:07.821 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7d31db-193c-401d-8875-62a00d7b6800]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.827 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.828 226109 DEBUG nova.compute.manager [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.830 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.920 226109 DEBUG nova.compute.manager [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.921 226109 DEBUG nova.network.neutron [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.944 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance b6dd6db5-9860-4cfb-81d1-f52f900fd4c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.945 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance f9492565-fe30-4766-8a95-1a0f213da9ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.945 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.945 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.950 226109 INFO nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:22:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:07 compute-1 nova_compute[226101]: 2025-12-06 07:22:07.973 226109 DEBUG nova.compute.manager [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:22:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3719153835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/474933109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.020 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.238 226109 DEBUG nova.compute.manager [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.238 226109 DEBUG oslo_concurrency.lockutils [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.239 226109 DEBUG oslo_concurrency.lockutils [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.239 226109 DEBUG oslo_concurrency.lockutils [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.239 226109 DEBUG nova.compute.manager [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] No waiting events found dispatching network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.239 226109 WARNING nova.compute.manager [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received unexpected event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 for instance with vm_state active and task_state deleting.
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.239 226109 DEBUG nova.compute.manager [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.240 226109 DEBUG oslo_concurrency.lockutils [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.240 226109 DEBUG oslo_concurrency.lockutils [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.240 226109 DEBUG oslo_concurrency.lockutils [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.240 226109 DEBUG nova.compute.manager [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] No waiting events found dispatching network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.240 226109 WARNING nova.compute.manager [req-89f04f99-d2ce-4134-b9b3-239c18912aa9 req-d44a06cd-432a-44c0-a300-477a7cc7bf43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received unexpected event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 for instance with vm_state active and task_state deleting.
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.244 226109 DEBUG nova.compute.manager [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.246 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.246 226109 INFO nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Creating image(s)
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.272 226109 DEBUG nova.storage.rbd_utils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] rbd image f9492565-fe30-4766-8a95-1a0f213da9ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.302 226109 DEBUG nova.storage.rbd_utils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] rbd image f9492565-fe30-4766-8a95-1a0f213da9ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.328 226109 DEBUG nova.storage.rbd_utils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] rbd image f9492565-fe30-4766-8a95-1a0f213da9ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.332 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.357 226109 DEBUG nova.policy [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '19dfacbc6d334cada6c8eed809adafe2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd7dd2f88c9ba41b6baa5e20db9002858', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.398 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.399 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.400 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.400 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.430 226109 DEBUG nova.storage.rbd_utils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] rbd image f9492565-fe30-4766-8a95-1a0f213da9ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.434 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f9492565-fe30-4766-8a95-1a0f213da9ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1296058430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.483 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.489 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.505 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.525 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.526 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:08 compute-1 nova_compute[226101]: 2025-12-06 07:22:08.904 226109 DEBUG nova.network.neutron [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Successfully created port: f2b985b4-7bd2-4740-8d8b-340d08ed0a60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:22:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:08.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:08.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:09 compute-1 nova_compute[226101]: 2025-12-06 07:22:09.886 226109 DEBUG nova.network.neutron [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Successfully updated port: f2b985b4-7bd2-4740-8d8b-340d08ed0a60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:22:09 compute-1 nova_compute[226101]: 2025-12-06 07:22:09.903 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Acquiring lock "refresh_cache-f9492565-fe30-4766-8a95-1a0f213da9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:22:09 compute-1 nova_compute[226101]: 2025-12-06 07:22:09.904 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Acquired lock "refresh_cache-f9492565-fe30-4766-8a95-1a0f213da9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:22:09 compute-1 nova_compute[226101]: 2025-12-06 07:22:09.904 226109 DEBUG nova.network.neutron [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:22:09 compute-1 nova_compute[226101]: 2025-12-06 07:22:09.968 226109 DEBUG nova.compute.manager [req-c3069723-1f01-4961-af01-43a5289df824 req-ba46f99e-2cec-4e59-aa1a-2494b764fe83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Received event network-changed-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:09 compute-1 nova_compute[226101]: 2025-12-06 07:22:09.969 226109 DEBUG nova.compute.manager [req-c3069723-1f01-4961-af01-43a5289df824 req-ba46f99e-2cec-4e59-aa1a-2494b764fe83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Refreshing instance network info cache due to event network-changed-f2b985b4-7bd2-4740-8d8b-340d08ed0a60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:22:09 compute-1 nova_compute[226101]: 2025-12-06 07:22:09.969 226109 DEBUG oslo_concurrency.lockutils [req-c3069723-1f01-4961-af01-43a5289df824 req-ba46f99e-2cec-4e59-aa1a-2494b764fe83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f9492565-fe30-4766-8a95-1a0f213da9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.070 226109 DEBUG nova.network.neutron [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:22:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/787302535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:10 compute-1 ceph-mon[81689]: pgmap v1928: 305 pgs: 305 active+clean; 260 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 3.8 MiB/s wr, 67 op/s
Dec 06 07:22:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1296058430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.219 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.226 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.342 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f9492565-fe30-4766-8a95-1a0f213da9ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.909s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.390 226109 DEBUG nova.compute.manager [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.391 226109 DEBUG oslo_concurrency.lockutils [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.391 226109 DEBUG oslo_concurrency.lockutils [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.392 226109 DEBUG oslo_concurrency.lockutils [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.392 226109 DEBUG nova.compute.manager [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] No waiting events found dispatching network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.392 226109 WARNING nova.compute.manager [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received unexpected event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 for instance with vm_state active and task_state deleting.
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.392 226109 DEBUG nova.compute.manager [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-vif-unplugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.393 226109 DEBUG oslo_concurrency.lockutils [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.393 226109 DEBUG oslo_concurrency.lockutils [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.393 226109 DEBUG oslo_concurrency.lockutils [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.394 226109 DEBUG nova.compute.manager [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] No waiting events found dispatching network-vif-unplugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.394 226109 DEBUG nova.compute.manager [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-vif-unplugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.394 226109 DEBUG nova.compute.manager [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.394 226109 DEBUG oslo_concurrency.lockutils [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.395 226109 DEBUG oslo_concurrency.lockutils [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.395 226109 DEBUG oslo_concurrency.lockutils [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.395 226109 DEBUG nova.compute.manager [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] No waiting events found dispatching network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.395 226109 WARNING nova.compute.manager [req-6fb874e8-3f38-44f1-882e-ad59dc1c692a req-2575e43d-545a-4828-8239-8a416a9b93ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received unexpected event network-vif-plugged-ea6e84b9-0f0b-4b71-8b33-d343caef7863 for instance with vm_state active and task_state deleting.
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.451 226109 DEBUG nova.storage.rbd_utils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] resizing rbd image f9492565-fe30-4766-8a95-1a0f213da9ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.517 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.517 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.518 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.518 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.519 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.519 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.519 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.645 226109 DEBUG nova.objects.instance [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lazy-loading 'migration_context' on Instance uuid f9492565-fe30-4766-8a95-1a0f213da9ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.658 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.658 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Ensure instance console log exists: /var/lib/nova/instances/f9492565-fe30-4766-8a95-1a0f213da9ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.659 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.659 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:10 compute-1 nova_compute[226101]: 2025-12-06 07:22:10.660 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:10.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:11.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.116 226109 DEBUG nova.network.neutron [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Updating instance_info_cache with network_info: [{"id": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "address": "fa:16:3e:db:3d:f7", "network": {"id": "743abfe1-29ca-4734-b6bd-e1d6500cdf52", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1814061149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7dd2f88c9ba41b6baa5e20db9002858", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2b985b4-7b", "ovs_interfaceid": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.148 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Releasing lock "refresh_cache-f9492565-fe30-4766-8a95-1a0f213da9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.148 226109 DEBUG nova.compute.manager [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Instance network_info: |[{"id": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "address": "fa:16:3e:db:3d:f7", "network": {"id": "743abfe1-29ca-4734-b6bd-e1d6500cdf52", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1814061149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7dd2f88c9ba41b6baa5e20db9002858", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2b985b4-7b", "ovs_interfaceid": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.149 226109 DEBUG oslo_concurrency.lockutils [req-c3069723-1f01-4961-af01-43a5289df824 req-ba46f99e-2cec-4e59-aa1a-2494b764fe83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f9492565-fe30-4766-8a95-1a0f213da9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.149 226109 DEBUG nova.network.neutron [req-c3069723-1f01-4961-af01-43a5289df824 req-ba46f99e-2cec-4e59-aa1a-2494b764fe83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Refreshing network info cache for port f2b985b4-7bd2-4740-8d8b-340d08ed0a60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.153 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Start _get_guest_xml network_info=[{"id": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "address": "fa:16:3e:db:3d:f7", "network": {"id": "743abfe1-29ca-4734-b6bd-e1d6500cdf52", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1814061149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7dd2f88c9ba41b6baa5e20db9002858", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2b985b4-7b", "ovs_interfaceid": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.158 226109 WARNING nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.163 226109 DEBUG nova.virt.libvirt.host [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.164 226109 DEBUG nova.virt.libvirt.host [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.172 226109 DEBUG nova.virt.libvirt.host [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.173 226109 DEBUG nova.virt.libvirt.host [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.174 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.174 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.175 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.175 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.175 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.176 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.176 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.176 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.176 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.177 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.177 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.177 226109 DEBUG nova.virt.hardware [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.180 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4019152381' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:22:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4019152381' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:22:11 compute-1 ceph-mon[81689]: pgmap v1929: 305 pgs: 305 active+clean; 260 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 3.4 MiB/s wr, 63 op/s
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:22:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4131463299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.706 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.737 226109 DEBUG nova.storage.rbd_utils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] rbd image f9492565-fe30-4766-8a95-1a0f213da9ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.743 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.848 226109 INFO nova.virt.libvirt.driver [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Deleting instance files /var/lib/nova/instances/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_del
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.850 226109 INFO nova.virt.libvirt.driver [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Deletion of /var/lib/nova/instances/b6dd6db5-9860-4cfb-81d1-f52f900fd4c8_del complete
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.898 226109 INFO nova.compute.manager [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Took 16.39 seconds to destroy the instance on the hypervisor.
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.898 226109 DEBUG oslo.service.loopingcall [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.900 226109 DEBUG nova.compute.manager [-] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:22:11 compute-1 nova_compute[226101]: 2025-12-06 07:22:11.900 226109 DEBUG nova.network.neutron [-] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:22:12 compute-1 sudo[259529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:12 compute-1 sudo[259529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:12 compute-1 sudo[259529]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:12 compute-1 sudo[259554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:22:12 compute-1 sudo[259554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:12 compute-1 sudo[259554]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:12 compute-1 sudo[259579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:12 compute-1 sudo[259579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:12 compute-1 sudo[259579]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:12 compute-1 sudo[259604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:22:12 compute-1 sudo[259604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:22:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2697805056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.814 226109 DEBUG nova.network.neutron [req-c3069723-1f01-4961-af01-43a5289df824 req-ba46f99e-2cec-4e59-aa1a-2494b764fe83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Updated VIF entry in instance network info cache for port f2b985b4-7bd2-4740-8d8b-340d08ed0a60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.814 226109 DEBUG nova.network.neutron [req-c3069723-1f01-4961-af01-43a5289df824 req-ba46f99e-2cec-4e59-aa1a-2494b764fe83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Updating instance_info_cache with network_info: [{"id": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "address": "fa:16:3e:db:3d:f7", "network": {"id": "743abfe1-29ca-4734-b6bd-e1d6500cdf52", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1814061149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7dd2f88c9ba41b6baa5e20db9002858", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2b985b4-7b", "ovs_interfaceid": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4131463299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:12 compute-1 ceph-mon[81689]: pgmap v1930: 305 pgs: 305 active+clean; 256 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 158 KiB/s rd, 4.9 MiB/s wr, 100 op/s
Dec 06 07:22:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2351889120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.817 226109 DEBUG nova.network.neutron [-] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.835 226109 DEBUG oslo_concurrency.lockutils [req-c3069723-1f01-4961-af01-43a5289df824 req-ba46f99e-2cec-4e59-aa1a-2494b764fe83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f9492565-fe30-4766-8a95-1a0f213da9ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.839 226109 INFO nova.compute.manager [-] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Took 0.94 seconds to deallocate network for instance.
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.888 226109 DEBUG nova.compute.manager [req-49802906-45be-4343-9e23-a2dfc3c24c52 req-9736a0ed-4641-4787-a4a2-8aa6eef1688c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Received event network-vif-deleted-ea6e84b9-0f0b-4b71-8b33-d343caef7863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.890 226109 DEBUG oslo_concurrency.lockutils [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.890 226109 DEBUG oslo_concurrency.lockutils [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:12.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.953 226109 DEBUG oslo_concurrency.processutils [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:12 compute-1 sudo[259604]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.992 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.994 226109 DEBUG nova.virt.libvirt.vif [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:22:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1233275274',display_name='tempest-ServerAddressesTestJSON-server-1233275274',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1233275274',id=94,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d7dd2f88c9ba41b6baa5e20db9002858',ramdisk_id='',reservation_id='r-b5yg0bxx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1940765146',owner_user_name='tempest-ServerAddressesTestJSON-1940765146-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:22:08Z,user_data=None,user_id='19dfacbc6d334cada6c8eed809adafe2',uuid=f9492565-fe30-4766-8a95-1a0f213da9ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "address": "fa:16:3e:db:3d:f7", "network": {"id": "743abfe1-29ca-4734-b6bd-e1d6500cdf52", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1814061149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7dd2f88c9ba41b6baa5e20db9002858", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2b985b4-7b", "ovs_interfaceid": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.994 226109 DEBUG nova.network.os_vif_util [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Converting VIF {"id": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "address": "fa:16:3e:db:3d:f7", "network": {"id": "743abfe1-29ca-4734-b6bd-e1d6500cdf52", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1814061149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7dd2f88c9ba41b6baa5e20db9002858", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2b985b4-7b", "ovs_interfaceid": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.995 226109 DEBUG nova.network.os_vif_util [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:3d:f7,bridge_name='br-int',has_traffic_filtering=True,id=f2b985b4-7bd2-4740-8d8b-340d08ed0a60,network=Network(743abfe1-29ca-4734-b6bd-e1d6500cdf52),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2b985b4-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:12 compute-1 nova_compute[226101]: 2025-12-06 07:22:12.996 226109 DEBUG nova.objects.instance [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lazy-loading 'pci_devices' on Instance uuid f9492565-fe30-4766-8a95-1a0f213da9ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:22:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:13.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.009 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:22:13 compute-1 nova_compute[226101]:   <uuid>f9492565-fe30-4766-8a95-1a0f213da9ab</uuid>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   <name>instance-0000005e</name>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerAddressesTestJSON-server-1233275274</nova:name>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:22:11</nova:creationTime>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <nova:user uuid="19dfacbc6d334cada6c8eed809adafe2">tempest-ServerAddressesTestJSON-1940765146-project-member</nova:user>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <nova:project uuid="d7dd2f88c9ba41b6baa5e20db9002858">tempest-ServerAddressesTestJSON-1940765146</nova:project>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <nova:port uuid="f2b985b4-7bd2-4740-8d8b-340d08ed0a60">
Dec 06 07:22:13 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <system>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <entry name="serial">f9492565-fe30-4766-8a95-1a0f213da9ab</entry>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <entry name="uuid">f9492565-fe30-4766-8a95-1a0f213da9ab</entry>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     </system>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   <os>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   </os>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   <features>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   </features>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/f9492565-fe30-4766-8a95-1a0f213da9ab_disk">
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       </source>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/f9492565-fe30-4766-8a95-1a0f213da9ab_disk.config">
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       </source>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:22:13 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:db:3d:f7"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <target dev="tapf2b985b4-7b"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/f9492565-fe30-4766-8a95-1a0f213da9ab/console.log" append="off"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <video>
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     </video>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:22:13 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:22:13 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:22:13 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:22:13 compute-1 nova_compute[226101]: </domain>
Dec 06 07:22:13 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.009 226109 DEBUG nova.compute.manager [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Preparing to wait for external event network-vif-plugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.010 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Acquiring lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.010 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.010 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.011 226109 DEBUG nova.virt.libvirt.vif [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:22:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1233275274',display_name='tempest-ServerAddressesTestJSON-server-1233275274',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1233275274',id=94,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d7dd2f88c9ba41b6baa5e20db9002858',ramdisk_id='',reservation_id='r-b5yg0bxx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1940765146',owner_user_name='tempest-ServerAddressesTestJSON-1940765146-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:22:08Z,user_data=None,user_id='19dfacbc6d334cada6c8eed809adafe2',uuid=f9492565-fe30-4766-8a95-1a0f213da9ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "address": "fa:16:3e:db:3d:f7", "network": {"id": "743abfe1-29ca-4734-b6bd-e1d6500cdf52", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1814061149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7dd2f88c9ba41b6baa5e20db9002858", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2b985b4-7b", "ovs_interfaceid": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.011 226109 DEBUG nova.network.os_vif_util [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Converting VIF {"id": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "address": "fa:16:3e:db:3d:f7", "network": {"id": "743abfe1-29ca-4734-b6bd-e1d6500cdf52", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1814061149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7dd2f88c9ba41b6baa5e20db9002858", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2b985b4-7b", "ovs_interfaceid": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.012 226109 DEBUG nova.network.os_vif_util [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:3d:f7,bridge_name='br-int',has_traffic_filtering=True,id=f2b985b4-7bd2-4740-8d8b-340d08ed0a60,network=Network(743abfe1-29ca-4734-b6bd-e1d6500cdf52),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2b985b4-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.012 226109 DEBUG os_vif [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:3d:f7,bridge_name='br-int',has_traffic_filtering=True,id=f2b985b4-7bd2-4740-8d8b-340d08ed0a60,network=Network(743abfe1-29ca-4734-b6bd-e1d6500cdf52),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2b985b4-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.012 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.013 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.013 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.016 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.016 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2b985b4-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.016 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2b985b4-7b, col_values=(('external_ids', {'iface-id': 'f2b985b4-7bd2-4740-8d8b-340d08ed0a60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:3d:f7', 'vm-uuid': 'f9492565-fe30-4766-8a95-1a0f213da9ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:13 compute-1 NetworkManager[49031]: <info>  [1765005733.0188] manager: (tapf2b985b4-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.020 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.024 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.025 226109 INFO os_vif [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:3d:f7,bridge_name='br-int',has_traffic_filtering=True,id=f2b985b4-7bd2-4740-8d8b-340d08ed0a60,network=Network(743abfe1-29ca-4734-b6bd-e1d6500cdf52),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2b985b4-7b')
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.117 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.117 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.117 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] No VIF found with MAC fa:16:3e:db:3d:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.117 226109 INFO nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Using config drive
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.137 226109 DEBUG nova.storage.rbd_utils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] rbd image f9492565-fe30-4766-8a95-1a0f213da9ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:13 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4258869415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.423 226109 INFO nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Creating config drive at /var/lib/nova/instances/f9492565-fe30-4766-8a95-1a0f213da9ab/disk.config
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.428 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f9492565-fe30-4766-8a95-1a0f213da9ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpopijodw3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.451 226109 DEBUG oslo_concurrency.processutils [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.458 226109 DEBUG nova.compute.provider_tree [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.482 226109 DEBUG nova.scheduler.client.report [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.507 226109 DEBUG oslo_concurrency.lockutils [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.552 226109 INFO nova.scheduler.client.report [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Deleted allocations for instance b6dd6db5-9860-4cfb-81d1-f52f900fd4c8
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.559 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f9492565-fe30-4766-8a95-1a0f213da9ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpopijodw3" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.585 226109 DEBUG nova.storage.rbd_utils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] rbd image f9492565-fe30-4766-8a95-1a0f213da9ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.591 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f9492565-fe30-4766-8a95-1a0f213da9ab/disk.config f9492565-fe30-4766-8a95-1a0f213da9ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.684 226109 DEBUG oslo_concurrency.lockutils [None req-0619e262-8c39-40fd-8345-966b64a3bdd0 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "b6dd6db5-9860-4cfb-81d1-f52f900fd4c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 18.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2697805056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4258869415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.811 226109 DEBUG oslo_concurrency.processutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f9492565-fe30-4766-8a95-1a0f213da9ab/disk.config f9492565-fe30-4766-8a95-1a0f213da9ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.812 226109 INFO nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Deleting local config drive /var/lib/nova/instances/f9492565-fe30-4766-8a95-1a0f213da9ab/disk.config because it was imported into RBD.
Dec 06 07:22:13 compute-1 kernel: tapf2b985b4-7b: entered promiscuous mode
Dec 06 07:22:13 compute-1 NetworkManager[49031]: <info>  [1765005733.8802] manager: (tapf2b985b4-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/170)
Dec 06 07:22:13 compute-1 ovn_controller[130279]: 2025-12-06T07:22:13Z|00371|binding|INFO|Claiming lport f2b985b4-7bd2-4740-8d8b-340d08ed0a60 for this chassis.
Dec 06 07:22:13 compute-1 ovn_controller[130279]: 2025-12-06T07:22:13Z|00372|binding|INFO|f2b985b4-7bd2-4740-8d8b-340d08ed0a60: Claiming fa:16:3e:db:3d:f7 10.100.0.13
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.911 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:13.927 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:3d:f7 10.100.0.13'], port_security=['fa:16:3e:db:3d:f7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f9492565-fe30-4766-8a95-1a0f213da9ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-743abfe1-29ca-4734-b6bd-e1d6500cdf52', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7dd2f88c9ba41b6baa5e20db9002858', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b2003e58-6e8a-483c-bf65-bd959a14350f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41798dc5-a807-44f5-a8b2-633c860696e0, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f2b985b4-7bd2-4740-8d8b-340d08ed0a60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:13.929 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f2b985b4-7bd2-4740-8d8b-340d08ed0a60 in datapath 743abfe1-29ca-4734-b6bd-e1d6500cdf52 bound to our chassis
Dec 06 07:22:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:13.931 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 743abfe1-29ca-4734-b6bd-e1d6500cdf52
Dec 06 07:22:13 compute-1 systemd-udevd[259754]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:22:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:13.946 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ee022b40-57bf-4bd2-9469-c1d643eae233]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:13.947 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap743abfe1-21 in ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:22:13 compute-1 systemd-machined[190302]: New machine qemu-43-instance-0000005e.
Dec 06 07:22:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:13.949 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap743abfe1-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:22:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:13.949 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1b22cd-fef5-4120-bf1d-bb07b251bdfc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:13.950 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a74d3592-4974-4461-afa5-c81490eb5847]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:13 compute-1 NetworkManager[49031]: <info>  [1765005733.9661] device (tapf2b985b4-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:22:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:13.965 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[aba2ad3d-ffe0-4121-a758-919c88020db6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:13 compute-1 NetworkManager[49031]: <info>  [1765005733.9674] device (tapf2b985b4-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:22:13 compute-1 systemd[1]: Started Virtual Machine qemu-43-instance-0000005e.
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.991 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:13.991 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[926219fd-6231-48b0-b62a-9a317e0d4324]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:13 compute-1 ovn_controller[130279]: 2025-12-06T07:22:13Z|00373|binding|INFO|Setting lport f2b985b4-7bd2-4740-8d8b-340d08ed0a60 ovn-installed in OVS
Dec 06 07:22:13 compute-1 ovn_controller[130279]: 2025-12-06T07:22:13Z|00374|binding|INFO|Setting lport f2b985b4-7bd2-4740-8d8b-340d08ed0a60 up in Southbound
Dec 06 07:22:13 compute-1 nova_compute[226101]: 2025-12-06 07:22:13.995 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.026 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a9e814-58e3-49f6-ac69-5673fe9ab0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-1 systemd-udevd[259758]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:22:14 compute-1 NetworkManager[49031]: <info>  [1765005734.0336] manager: (tap743abfe1-20): new Veth device (/org/freedesktop/NetworkManager/Devices/171)
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.032 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f25731b9-d047-4b81-b582-f9123a40e47b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.068 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[027d4d09-b3c9-4b1d-9a23-216854caac99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.072 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[18f9d8ab-d9d9-4c35-ab2a-005c63c84f75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-1 NetworkManager[49031]: <info>  [1765005734.0988] device (tap743abfe1-20): carrier: link connected
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.104 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[16148398-a3e4-46f7-8f43-a989720da1b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.123 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[888771e1-e275-4501-885e-0047abef0966]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap743abfe1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:8b:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601360, 'reachable_time': 35096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259787, 'error': None, 'target': 'ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.139 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[acc360f3-a9ba-411e-be36-30d8196f5c94]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:8ba6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601360, 'tstamp': 601360}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259788, 'error': None, 'target': 'ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.158 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d96081-6347-4735-98c7-e785a5751fe9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap743abfe1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:8b:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601360, 'reachable_time': 35096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259789, 'error': None, 'target': 'ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.190 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a7577443-2dab-40e6-9372-6cb859f55f54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.250 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5e10d51d-6169-44b2-8919-980715a01347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.251 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap743abfe1-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.251 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.252 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap743abfe1-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:14 compute-1 nova_compute[226101]: 2025-12-06 07:22:14.253 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:14 compute-1 kernel: tap743abfe1-20: entered promiscuous mode
Dec 06 07:22:14 compute-1 NetworkManager[49031]: <info>  [1765005734.2559] manager: (tap743abfe1-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.257 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap743abfe1-20, col_values=(('external_ids', {'iface-id': '4d4ba015-c5f2-4efb-95b6-40c6ae90aa32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:14 compute-1 nova_compute[226101]: 2025-12-06 07:22:14.258 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:14 compute-1 ovn_controller[130279]: 2025-12-06T07:22:14Z|00375|binding|INFO|Releasing lport 4d4ba015-c5f2-4efb-95b6-40c6ae90aa32 from this chassis (sb_readonly=0)
Dec 06 07:22:14 compute-1 nova_compute[226101]: 2025-12-06 07:22:14.274 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.275 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/743abfe1-29ca-4734-b6bd-e1d6500cdf52.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/743abfe1-29ca-4734-b6bd-e1d6500cdf52.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.275 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[03a678ad-811f-49b0-a727-879ed3145714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.276 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-743abfe1-29ca-4734-b6bd-e1d6500cdf52
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/743abfe1-29ca-4734-b6bd-e1d6500cdf52.pid.haproxy
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 743abfe1-29ca-4734-b6bd-e1d6500cdf52
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:22:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:14.277 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52', 'env', 'PROCESS_TAG=haproxy-743abfe1-29ca-4734-b6bd-e1d6500cdf52', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/743abfe1-29ca-4734-b6bd-e1d6500cdf52.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
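[annotation] The rendered haproxy_cfg above is the per-network metadata proxy: it binds 169.254.169.254:80 inside the ovnmeta- namespace, relays every request to the agent's unix socket (the "server metadata /var/lib/neutron/metadata_proxy" line), and tags it with X-OVN-Network-ID so the agent can work out which network the requesting instance sits on; the earlier ENOENT on the .pid.haproxy file just means no proxy was running yet. A rough sketch of the request the agent would see on that socket (the metadata path and client IP are illustrative):

    # Sketch: a request as haproxy would forward it to the agent's socket.
    import socket

    SOCK = '/var/lib/neutron/metadata_proxy'   # "server metadata" target above

    req = (b'GET /openstack/latest/meta_data.json HTTP/1.1\r\n'
           b'Host: 169.254.169.254\r\n'
           # added by the "http-request add-header" line in the config:
           b'X-OVN-Network-ID: 743abfe1-29ca-4734-b6bd-e1d6500cdf52\r\n'
           # added by "option forwardfor": the instance's fixed IP
           b'X-Forwarded-For: 10.100.0.13\r\n'
           b'Connection: close\r\n\r\n')

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(SOCK)
    s.sendall(req)
    print(s.recv(65536).decode(errors='replace'))
    s.close()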
Dec 06 07:22:14 compute-1 nova_compute[226101]: 2025-12-06 07:22:14.440 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005734.440275, f9492565-fe30-4766-8a95-1a0f213da9ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:14 compute-1 nova_compute[226101]: 2025-12-06 07:22:14.441 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] VM Started (Lifecycle Event)
Dec 06 07:22:14 compute-1 nova_compute[226101]: 2025-12-06 07:22:14.549 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:14 compute-1 nova_compute[226101]: 2025-12-06 07:22:14.555 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005734.4407513, f9492565-fe30-4766-8a95-1a0f213da9ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:14 compute-1 nova_compute[226101]: 2025-12-06 07:22:14.556 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] VM Paused (Lifecycle Event)
Dec 06 07:22:14 compute-1 nova_compute[226101]: 2025-12-06 07:22:14.608 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:14 compute-1 nova_compute[226101]: 2025-12-06 07:22:14.612 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:22:14 compute-1 podman[259863]: 2025-12-06 07:22:14.603657224 +0000 UTC m=+0.019890100 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:22:14 compute-1 nova_compute[226101]: 2025-12-06 07:22:14.776 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] During sync_power_state the instance has a pending task (spawning). Skip.
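[annotation] The power-state integers in the sync messages follow nova.compute.power_state (0 = NOSTATE, 1 = RUNNING, 3 = PAUSED, 4 = SHUTDOWN), so "DB power_state: 0, VM power_state: 3" reads as: the database has no state recorded yet while libvirt reports the guest paused, and the handler skips the sync because task_state is still spawning. In outline:

    # Power-state values as used in the sync log lines above
    # (per nova.compute.power_state).
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

    db_power_state, vm_power_state = NOSTATE, PAUSED   # from the log line
    task_state = 'spawning'

    if task_state is not None:
        # matches "During sync_power_state the instance has a pending
        # task (spawning). Skip."
        print('pending task (%s). Skip.' % task_state)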
Dec 06 07:22:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:14.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:15.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:15 compute-1 nova_compute[226101]: 2025-12-06 07:22:15.221 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:15 compute-1 podman[259863]: 2025-12-06 07:22:15.638895975 +0000 UTC m=+1.055128831 container create 676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:22:15 compute-1 systemd[1]: Started libpod-conmon-676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e.scope.
Dec 06 07:22:16 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:22:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566aed4692d9c5d0a6bca885a167fca6a5a5945afd8a8f86f6d2ce0e3400f65c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:16 compute-1 ceph-mon[81689]: pgmap v1931: 305 pgs: 305 active+clean; 241 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 942 KiB/s rd, 6.4 MiB/s wr, 183 op/s
Dec 06 07:22:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:22:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:22:16 compute-1 podman[259863]: 2025-12-06 07:22:16.366615065 +0000 UTC m=+1.782847941 container init 676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:22:16 compute-1 podman[259863]: 2025-12-06 07:22:16.371719464 +0000 UTC m=+1.787952320 container start 676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:22:16 compute-1 neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52[259878]: [NOTICE]   (259882) : New worker (259884) forked
Dec 06 07:22:16 compute-1 neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52[259878]: [NOTICE]   (259882) : Loading success.
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.526 226109 DEBUG nova.compute.manager [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Received event network-vif-plugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.527 226109 DEBUG oslo_concurrency.lockutils [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.527 226109 DEBUG oslo_concurrency.lockutils [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.528 226109 DEBUG oslo_concurrency.lockutils [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.528 226109 DEBUG nova.compute.manager [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Processing event network-vif-plugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.528 226109 DEBUG nova.compute.manager [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Received event network-vif-plugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.528 226109 DEBUG oslo_concurrency.lockutils [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.529 226109 DEBUG oslo_concurrency.lockutils [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.529 226109 DEBUG oslo_concurrency.lockutils [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.529 226109 DEBUG nova.compute.manager [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] No waiting events found dispatching network-vif-plugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.530 226109 WARNING nova.compute.manager [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Received unexpected event network-vif-plugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 for instance with vm_state building and task_state spawning.
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.530 226109 DEBUG nova.compute.manager [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
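[annotation] The lock/pop traffic above is nova's external-event rendezvous: the spawning thread registers a waiter for network-vif-plugged before plugging the VIF, and the neutron-triggered thread pops the waiter and delivers the event; a second delivery after the waiter is gone produces the "Received unexpected event" warning, which is harmless here. The pattern, reduced to plain threading (nova's real version runs on eventlet with per-instance locks):

    # Sketch of the prepare / deliver / wait pattern behind the log lines.
    import threading

    waiters, lock = {}, threading.Lock()

    def prepare(name):
        with lock:
            waiters[name] = threading.Event()

    def deliver(name):
        with lock:
            ev = waiters.pop(name, None)
        if ev is None:
            print('Received unexpected event %s' % name)   # nobody waiting
        else:
            ev.set()                                        # unblock the waiter

    prepare('network-vif-plugged-f2b985b4')
    deliver('network-vif-plugged-f2b985b4')   # consumed by the spawn thread
    deliver('network-vif-plugged-f2b985b4')   # duplicate -> unexpected event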
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.534 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005736.5344434, f9492565-fe30-4766-8a95-1a0f213da9ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.535 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] VM Resumed (Lifecycle Event)
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.537 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.539 226109 INFO nova.virt.libvirt.driver [-] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Instance spawned successfully.
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.540 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.622 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.630 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.633 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.634 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.635 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.635 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.635 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.636 226109 DEBUG nova.virt.libvirt.driver [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.701 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.725 226109 INFO nova.compute.manager [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Took 8.48 seconds to spawn the instance on the hypervisor.
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.726 226109 DEBUG nova.compute.manager [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.797 226109 INFO nova.compute.manager [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Took 10.38 seconds to build instance.
Dec 06 07:22:16 compute-1 nova_compute[226101]: 2025-12-06 07:22:16.821 226109 DEBUG oslo_concurrency.lockutils [None req-f9dd8bd9-1e1f-416d-9ab4-400746a2cc13 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:16.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:22:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:22:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:22:16 compute-1 ceph-mon[81689]: pgmap v1932: 305 pgs: 305 active+clean; 188 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.3 MiB/s wr, 229 op/s
Dec 06 07:22:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:17.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.019 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:18.131 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.132 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:18.133 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
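[annotation] SB_Global.nb_cfg is a sequence number bumped when northd has applied a northbound change (here 34 -> 35); agents acknowledge it by refreshing their chassis row, and the metadata agent deliberately delays that write for 3 seconds so a burst of bumps collapses into a single transaction. The debounce idea in miniature (illustrative only; the agent does this inside its own event loop):

    # Sketch: collapse a burst of nb_cfg bumps into one chassis update.
    import threading

    class Debouncer:
        def __init__(self, delay, fn):
            self.delay, self.fn = delay, fn
            self._timer, self._lock = None, threading.Lock()

        def kick(self):
            with self._lock:
                if self._timer is not None:
                    self._timer.cancel()        # restart the 3 s window
                self._timer = threading.Timer(self.delay, self.fn)
                self._timer.start()

    refresh = Debouncer(3, lambda: print('updating chassis table'))
    for _ in range(5):    # five bumps, a single delayed update
        refresh.kick()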
Dec 06 07:22:18 compute-1 ovn_controller[130279]: 2025-12-06T07:22:18Z|00376|binding|INFO|Releasing lport 4d4ba015-c5f2-4efb-95b6-40c6ae90aa32 from this chassis (sb_readonly=0)
Dec 06 07:22:18 compute-1 ovn_controller[130279]: 2025-12-06T07:22:18Z|00377|binding|INFO|Releasing lport 4d4ba015-c5f2-4efb-95b6-40c6ae90aa32 from this chassis (sb_readonly=0)
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.433 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.678 226109 DEBUG oslo_concurrency.lockutils [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Acquiring lock "f9492565-fe30-4766-8a95-1a0f213da9ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.679 226109 DEBUG oslo_concurrency.lockutils [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.679 226109 DEBUG oslo_concurrency.lockutils [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Acquiring lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.679 226109 DEBUG oslo_concurrency.lockutils [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.680 226109 DEBUG oslo_concurrency.lockutils [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.681 226109 INFO nova.compute.manager [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Terminating instance
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.682 226109 DEBUG nova.compute.manager [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:22:18 compute-1 ceph-mon[81689]: pgmap v1933: 305 pgs: 305 active+clean; 167 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.6 MiB/s wr, 255 op/s
Dec 06 07:22:18 compute-1 kernel: tapf2b985b4-7b (unregistering): left promiscuous mode
Dec 06 07:22:18 compute-1 NetworkManager[49031]: <info>  [1765005738.8063] device (tapf2b985b4-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:22:18 compute-1 ovn_controller[130279]: 2025-12-06T07:22:18Z|00378|binding|INFO|Releasing lport f2b985b4-7bd2-4740-8d8b-340d08ed0a60 from this chassis (sb_readonly=0)
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.815 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-1 ovn_controller[130279]: 2025-12-06T07:22:18Z|00379|binding|INFO|Setting lport f2b985b4-7bd2-4740-8d8b-340d08ed0a60 down in Southbound
Dec 06 07:22:18 compute-1 ovn_controller[130279]: 2025-12-06T07:22:18Z|00380|binding|INFO|Removing iface tapf2b985b4-7b ovn-installed in OVS
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.817 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:18.826 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:3d:f7 10.100.0.13'], port_security=['fa:16:3e:db:3d:f7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f9492565-fe30-4766-8a95-1a0f213da9ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-743abfe1-29ca-4734-b6bd-e1d6500cdf52', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7dd2f88c9ba41b6baa5e20db9002858', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b2003e58-6e8a-483c-bf65-bd959a14350f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41798dc5-a807-44f5-a8b2-633c860696e0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f2b985b4-7bd2-4740-8d8b-340d08ed0a60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:18.828 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f2b985b4-7bd2-4740-8d8b-340d08ed0a60 in datapath 743abfe1-29ca-4734-b6bd-e1d6500cdf52 unbound from our chassis
Dec 06 07:22:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:18.834 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 743abfe1-29ca-4734-b6bd-e1d6500cdf52, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
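[annotation] The Port_Binding update above clears the chassis column (chassis=[] against the previously bound row) and flips up to False, so the agent concludes the port left this chassis; with no VIFs remaining on the datapath it moves straight to teardown. The decision, in outline (function names are illustrative, not neutron's exact internals):

    # Sketch of the agent's per-datapath provision decision.
    def teardown(namespace):
        print('Cleaning up %s namespace which is not needed anymore'
              % namespace)   # then: stop haproxy, unplug tap, delete netns

    def provision_datapath(network_id, vif_ports, namespace):
        if not vif_ports:
            print('No valid VIF ports were found for network %s, '
                  'tearing the namespace down if needed' % network_id)
            teardown(namespace)
            return
        # otherwise: ensure the namespace, tap port and haproxy exist

    provision_datapath('743abfe1-29ca-4734-b6bd-e1d6500cdf52', [],
                       'ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52')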
Dec 06 07:22:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:18.835 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c383da9e-1bb1-4445-80b0-ba0dcf60e766]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.835 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:18.836 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52 namespace which is not needed anymore
Dec 06 07:22:18 compute-1 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Dec 06 07:22:18 compute-1 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d0000005e.scope: Consumed 2.680s CPU time.
Dec 06 07:22:18 compute-1 systemd-machined[190302]: Machine qemu-43-instance-0000005e terminated.
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.919 226109 INFO nova.virt.libvirt.driver [-] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Instance destroyed successfully.
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.919 226109 DEBUG nova.objects.instance [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lazy-loading 'resources' on Instance uuid f9492565-fe30-4766-8a95-1a0f213da9ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:18.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.942 226109 DEBUG nova.virt.libvirt.vif [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:22:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1233275274',display_name='tempest-ServerAddressesTestJSON-server-1233275274',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1233275274',id=94,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:22:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d7dd2f88c9ba41b6baa5e20db9002858',ramdisk_id='',reservation_id='r-b5yg0bxx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1940765146',owner_user_name='tempest-ServerAddressesTestJSON-1940765146-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:22:16Z,user_data=None,user_id='19dfacbc6d334cada6c8eed809adafe2',uuid=f9492565-fe30-4766-8a95-1a0f213da9ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "address": "fa:16:3e:db:3d:f7", "network": {"id": "743abfe1-29ca-4734-b6bd-e1d6500cdf52", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1814061149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7dd2f88c9ba41b6baa5e20db9002858", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2b985b4-7b", "ovs_interfaceid": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.942 226109 DEBUG nova.network.os_vif_util [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Converting VIF {"id": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "address": "fa:16:3e:db:3d:f7", "network": {"id": "743abfe1-29ca-4734-b6bd-e1d6500cdf52", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1814061149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7dd2f88c9ba41b6baa5e20db9002858", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2b985b4-7b", "ovs_interfaceid": "f2b985b4-7bd2-4740-8d8b-340d08ed0a60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.943 226109 DEBUG nova.network.os_vif_util [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:3d:f7,bridge_name='br-int',has_traffic_filtering=True,id=f2b985b4-7bd2-4740-8d8b-340d08ed0a60,network=Network(743abfe1-29ca-4734-b6bd-e1d6500cdf52),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2b985b4-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.943 226109 DEBUG os_vif [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:3d:f7,bridge_name='br-int',has_traffic_filtering=True,id=f2b985b4-7bd2-4740-8d8b-340d08ed0a60,network=Network(743abfe1-29ca-4734-b6bd-e1d6500cdf52),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2b985b4-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.945 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.946 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2b985b4-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.948 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.949 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-1 nova_compute[226101]: 2025-12-06 07:22:18.952 226109 INFO os_vif [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:3d:f7,bridge_name='br-int',has_traffic_filtering=True,id=f2b985b4-7bd2-4740-8d8b-340d08ed0a60,network=Network(743abfe1-29ca-4734-b6bd-e1d6500cdf52),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2b985b4-7b')
Dec 06 07:22:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:19.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:19 compute-1 neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52[259878]: [NOTICE]   (259882) : haproxy version is 2.8.14-c23fe91
Dec 06 07:22:19 compute-1 neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52[259878]: [NOTICE]   (259882) : path to executable is /usr/sbin/haproxy
Dec 06 07:22:19 compute-1 neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52[259878]: [WARNING]  (259882) : Exiting Master process...
Dec 06 07:22:19 compute-1 neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52[259878]: [ALERT]    (259882) : Current worker (259884) exited with code 143 (Terminated)
Dec 06 07:22:19 compute-1 neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52[259878]: [WARNING]  (259882) : All workers exited. Exiting... (0)
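[annotation] That NOTICE/WARNING/ALERT trio is haproxy's normal SIGTERM shutdown: the master logs its version and path, asks the workers to exit, and reports code 143 (128 + SIGTERM), so nothing actually failed. In this containerized layout the agent stops the proxy through a podman wrapper (see the privsep reply a few lines below); with the classic pidfile layout from the config earlier in the log, the same stop would simply be:

    # Sketch: stop the metadata proxy via the pidfile haproxy wrote
    # (pidfile path taken from the rendered config above).
    import os
    import signal

    PIDFILE = ('/var/lib/neutron/external/pids/'
               '743abfe1-29ca-4734-b6bd-e1d6500cdf52.pid.haproxy')

    with open(PIDFILE) as f:
        pid = int(f.read().split()[0])   # master PID comes first

    os.kill(pid, signal.SIGTERM)         # workers then exit with 143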
Dec 06 07:22:19 compute-1 systemd[1]: libpod-676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e.scope: Deactivated successfully.
Dec 06 07:22:19 compute-1 podman[259927]: 2025-12-06 07:22:19.108749317 +0000 UTC m=+0.178446811 container died 676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:22:19 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e-userdata-shm.mount: Deactivated successfully.
Dec 06 07:22:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-566aed4692d9c5d0a6bca885a167fca6a5a5945afd8a8f86f6d2ce0e3400f65c-merged.mount: Deactivated successfully.
Dec 06 07:22:19 compute-1 podman[259927]: 2025-12-06 07:22:19.36506035 +0000 UTC m=+0.434757844 container cleanup 676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:22:19 compute-1 systemd[1]: libpod-conmon-676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e.scope: Deactivated successfully.
Dec 06 07:22:19 compute-1 podman[259975]: 2025-12-06 07:22:19.547197501 +0000 UTC m=+0.161996666 container remove 676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:22:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:19.554 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[89309966-6262-422c-8013-3f882fc7d78f]: (4, ('Sat Dec  6 07:22:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52 (676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e)\n676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e\nSat Dec  6 07:22:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52 (676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e)\n676c7439b217d3d5c53325c7a6e66b08c9158feebe9f0c9e57692946bc288a1e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:19.556 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c335687a-1c6f-4217-856c-a225772e0be6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:19.557 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap743abfe1-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:19 compute-1 nova_compute[226101]: 2025-12-06 07:22:19.559 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:19 compute-1 kernel: tap743abfe1-20: left promiscuous mode
Dec 06 07:22:19 compute-1 nova_compute[226101]: 2025-12-06 07:22:19.573 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:19.576 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[84243800-9e28-4fd6-a9d2-ae10f91abaea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:19.599 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[add4d4f0-868b-4924-8118-1c75141003b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:19.600 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d8660718-74a3-48ed-ad50-fd409a79a783]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:19.615 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d70654-fed4-46f6-affa-a44cc99f54ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601352, 'reachable_time': 33828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259991, 'error': None, 'target': 'ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:19.618 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:22:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:19.618 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[70fb240d-d01c-4177-bf69-de3ceeae0e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:19 compute-1 systemd[1]: run-netns-ovnmeta\x2d743abfe1\x2d29ca\x2d4734\x2db6bd\x2de1d6500cdf52.mount: Deactivated successfully.
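[annotation] With the tap gone, the namespace itself is removed by neutron's privileged ip_lib (the remove_netns line above); the oversized privsep reply just before it appears to be a netlink link dump taken inside the namespace, showing only `lo` left. Both steps are thin wrappers over pyroute2, roughly:

    # Sketch: inspect-then-delete of the metadata namespace with pyroute2,
    # the library neutron.privileged.agent.linux.ip_lib wraps.
    from pyroute2 import NetNS, netns

    NS = 'ovnmeta-743abfe1-29ca-4734-b6bd-e1d6500cdf52'

    with NetNS(NS) as ns:
        names = [link.get_attr('IFLA_IFNAME') for link in ns.get_links()]

    if names == ['lo']:        # nothing but loopback remains
        netns.remove(NS)       # equivalent to `ip netns delete`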
Dec 06 07:22:19 compute-1 nova_compute[226101]: 2025-12-06 07:22:19.652 226109 DEBUG nova.compute.manager [req-8f7fbc0e-3048-47d4-bcf6-4bf6efe74329 req-d0fc1b7f-e6c8-4146-9463-bdc817883e13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Received event network-vif-unplugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:19 compute-1 nova_compute[226101]: 2025-12-06 07:22:19.652 226109 DEBUG oslo_concurrency.lockutils [req-8f7fbc0e-3048-47d4-bcf6-4bf6efe74329 req-d0fc1b7f-e6c8-4146-9463-bdc817883e13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:19 compute-1 nova_compute[226101]: 2025-12-06 07:22:19.653 226109 DEBUG oslo_concurrency.lockutils [req-8f7fbc0e-3048-47d4-bcf6-4bf6efe74329 req-d0fc1b7f-e6c8-4146-9463-bdc817883e13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:19 compute-1 nova_compute[226101]: 2025-12-06 07:22:19.653 226109 DEBUG oslo_concurrency.lockutils [req-8f7fbc0e-3048-47d4-bcf6-4bf6efe74329 req-d0fc1b7f-e6c8-4146-9463-bdc817883e13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:19 compute-1 nova_compute[226101]: 2025-12-06 07:22:19.653 226109 DEBUG nova.compute.manager [req-8f7fbc0e-3048-47d4-bcf6-4bf6efe74329 req-d0fc1b7f-e6c8-4146-9463-bdc817883e13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] No waiting events found dispatching network-vif-unplugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:19 compute-1 nova_compute[226101]: 2025-12-06 07:22:19.653 226109 DEBUG nova.compute.manager [req-8f7fbc0e-3048-47d4-bcf6-4bf6efe74329 req-d0fc1b7f-e6c8-4146-9463-bdc817883e13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Received event network-vif-unplugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
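
[Annotation] The acquire/release pair around pop_instance_event shows nova serializing event dispatch on a per-instance lock named "<uuid>-events". A minimal sketch of that pattern with oslo.concurrency (the pending-events dict and helper below are hypothetical stand-ins, not nova's actual structures):

    # Sketch of the locking pattern in the log: a per-instance lock
    # guards the pending-events table while one event is popped.
    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, pending_events, event_name):
        # Same "<uuid>-events" lock naming visible in the log above.
        @lockutils.synchronized(instance_uuid + '-events')
        def _pop_event():
            return pending_events.get(instance_uuid, {}).pop(event_name, None)
        return _pop_event()
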
Dec 06 07:22:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2030584476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:20 compute-1 nova_compute[226101]: 2025-12-06 07:22:20.195 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005725.1939359, b6dd6db5-9860-4cfb-81d1-f52f900fd4c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:20 compute-1 nova_compute[226101]: 2025-12-06 07:22:20.195 226109 INFO nova.compute.manager [-] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] VM Stopped (Lifecycle Event)
Dec 06 07:22:20 compute-1 nova_compute[226101]: 2025-12-06 07:22:20.228 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:20 compute-1 nova_compute[226101]: 2025-12-06 07:22:20.232 226109 DEBUG nova.compute.manager [None req-99729bbb-da1d-4567-a2d0-fdb4dad1a528 - - - - - -] [instance: b6dd6db5-9860-4cfb-81d1-f52f900fd4c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:20.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:21.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.091 226109 INFO nova.virt.libvirt.driver [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Deleting instance files /var/lib/nova/instances/f9492565-fe30-4766-8a95-1a0f213da9ab_del
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.093 226109 INFO nova.virt.libvirt.driver [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Deletion of /var/lib/nova/instances/f9492565-fe30-4766-8a95-1a0f213da9ab_del complete
Dec 06 07:22:21 compute-1 ceph-mon[81689]: pgmap v1934: 305 pgs: 305 active+clean; 167 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 210 op/s
Dec 06 07:22:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:21.135 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
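
[Annotation] The metadata agent acknowledges the southbound config by bumping neutron:ovn-metadata-sb-cfg on its Chassis_Private record; the DbSetCommand above is ovsdbapp's transactional wrapper for that write. A hedged sketch of the equivalent call, assuming `api` is an already-connected ovsdbapp backend for the OVN southbound DB (connection setup omitted; the if_exists=True in the log is a DbSetCommand option):

    # Sketch: the DbSetCommand from the log, issued through ovsdbapp.
    # `api` is assumed to be a connected OVN SB ovsdbapp API object.
    api.db_set(
        'Chassis_Private',
        '03fe054d-d727-4af3-9c5e-92e57505f242',   # record UUID from the log
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),
    ).execute(check_error=True)
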
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.144 226109 INFO nova.compute.manager [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Took 2.46 seconds to destroy the instance on the hypervisor.
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.145 226109 DEBUG oslo.service.loopingcall [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.146 226109 DEBUG nova.compute.manager [-] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.146 226109 DEBUG nova.network.neutron [-] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
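
[Annotation] The "Waiting for function ... _deallocate_network_with_retries" line is oslo.service's looping-call machinery: nova runs the deallocation inside a looping call and blocks until it signals completion. A minimal sketch of that shape (nova's real version retries with backoff; the interval and helper below are illustrative):

    # Sketch: run a step under oslo.service's looping call and wait,
    # as the "Waiting for function" log line indicates.
    from oslo_service import loopingcall

    def _deallocate_with_retries():
        deallocate_network()                  # hypothetical helper
        raise loopingcall.LoopingCallDone()   # success: stop looping

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
    timer.start(interval=1).wait()            # blocks until LoopingCallDone
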
Dec 06 07:22:21 compute-1 sudo[259993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:21 compute-1 sudo[259993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:21 compute-1 sudo[259993]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.810 226109 DEBUG nova.compute.manager [req-a80759b9-ee39-4119-b626-de84a64aee8b req-481864c1-5990-4bf4-98ba-bdec39057339 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Received event network-vif-plugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.811 226109 DEBUG oslo_concurrency.lockutils [req-a80759b9-ee39-4119-b626-de84a64aee8b req-481864c1-5990-4bf4-98ba-bdec39057339 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.812 226109 DEBUG oslo_concurrency.lockutils [req-a80759b9-ee39-4119-b626-de84a64aee8b req-481864c1-5990-4bf4-98ba-bdec39057339 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.812 226109 DEBUG oslo_concurrency.lockutils [req-a80759b9-ee39-4119-b626-de84a64aee8b req-481864c1-5990-4bf4-98ba-bdec39057339 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.812 226109 DEBUG nova.compute.manager [req-a80759b9-ee39-4119-b626-de84a64aee8b req-481864c1-5990-4bf4-98ba-bdec39057339 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] No waiting events found dispatching network-vif-plugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:21 compute-1 nova_compute[226101]: 2025-12-06 07:22:21.812 226109 WARNING nova.compute.manager [req-a80759b9-ee39-4119-b626-de84a64aee8b req-481864c1-5990-4bf4-98ba-bdec39057339 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Received unexpected event network-vif-plugged-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 for instance with vm_state active and task_state deleting.
Dec 06 07:22:21 compute-1 sudo[260018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:22:21 compute-1 sudo[260018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:21 compute-1 sudo[260018]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:22 compute-1 nova_compute[226101]: 2025-12-06 07:22:22.078 226109 DEBUG nova.network.neutron [-] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:22 compute-1 nova_compute[226101]: 2025-12-06 07:22:22.094 226109 INFO nova.compute.manager [-] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Took 0.95 seconds to deallocate network for instance.
Dec 06 07:22:22 compute-1 nova_compute[226101]: 2025-12-06 07:22:22.130 226109 DEBUG oslo_concurrency.lockutils [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:22 compute-1 nova_compute[226101]: 2025-12-06 07:22:22.130 226109 DEBUG oslo_concurrency.lockutils [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:22 compute-1 nova_compute[226101]: 2025-12-06 07:22:22.186 226109 DEBUG oslo_concurrency.processutils [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1397180867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:22 compute-1 ceph-mon[81689]: pgmap v1935: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 232 op/s
Dec 06 07:22:22 compute-1 nova_compute[226101]: 2025-12-06 07:22:22.628 226109 DEBUG oslo_concurrency.processutils [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
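
[Annotation] The "Running cmd" / "returned: 0" bracket around `ceph df` comes from oslo.concurrency's processutils, which nova's RBD image backend uses to read pool capacity. A minimal sketch of the same call:

    # Sketch: the subprocess call from the log, via oslo.concurrency.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)   # pool/cluster usage fed to the resource tracker
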
Dec 06 07:22:22 compute-1 nova_compute[226101]: 2025-12-06 07:22:22.638 226109 DEBUG nova.compute.provider_tree [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:22:22 compute-1 nova_compute[226101]: 2025-12-06 07:22:22.656 226109 DEBUG nova.scheduler.client.report [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
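
[Annotation] The inventory dict above is what the "has not changed" comparison runs against; the capacity placement schedules against is (total - reserved) x allocation_ratio per resource class. Worked out from the logged values:

    # Worked example: schedulable capacity implied by the inventory line.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
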
Dec 06 07:22:22 compute-1 nova_compute[226101]: 2025-12-06 07:22:22.678 226109 DEBUG oslo_concurrency.lockutils [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:22 compute-1 nova_compute[226101]: 2025-12-06 07:22:22.716 226109 INFO nova.scheduler.client.report [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Deleted allocations for instance f9492565-fe30-4766-8a95-1a0f213da9ab
Dec 06 07:22:22 compute-1 nova_compute[226101]: 2025-12-06 07:22:22.798 226109 DEBUG oslo_concurrency.lockutils [None req-f8ef6b13-7ee8-4e27-ba1f-aa5a2b2371a3 19dfacbc6d334cada6c8eed809adafe2 d7dd2f88c9ba41b6baa5e20db9002858 - - default default] Lock "f9492565-fe30-4766-8a95-1a0f213da9ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:22.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:23.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1397180867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:23 compute-1 nova_compute[226101]: 2025-12-06 07:22:23.933 226109 DEBUG nova.compute.manager [req-b78b559c-ac4b-459e-9dbc-b6610e9676c8 req-56505698-ac66-46c7-a4bd-110dea5feb0e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Received event network-vif-deleted-f2b985b4-7bd2-4740-8d8b-340d08ed0a60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:23 compute-1 nova_compute[226101]: 2025-12-06 07:22:23.950 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:23 compute-1 nova_compute[226101]: 2025-12-06 07:22:23.994 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:23 compute-1 nova_compute[226101]: 2025-12-06 07:22:23.994 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.016 226109 DEBUG nova.compute.manager [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.085 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.085 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.093 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.093 226109 INFO nova.compute.claims [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.186 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:24 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1757352582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:24 compute-1 ceph-mon[81689]: pgmap v1936: 305 pgs: 305 active+clean; 138 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.6 MiB/s wr, 259 op/s
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.872 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.880 226109 DEBUG nova.compute.provider_tree [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.900 226109 DEBUG nova.scheduler.client.report [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.923 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.924 226109 DEBUG nova.compute.manager [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:22:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:24.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.975 226109 DEBUG nova.compute.manager [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:22:24 compute-1 nova_compute[226101]: 2025-12-06 07:22:24.976 226109 DEBUG nova.network.neutron [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.009 226109 INFO nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:22:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:25.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.041 226109 DEBUG nova.compute.manager [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.161 226109 DEBUG nova.policy [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd966fefcb38a45219b9cc637c46a3d62', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
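
[Annotation] The policy line records a soft authorization failure: the member/reader credentials don't satisfy network:attach_external_network, and nova logs it at DEBUG and carries on building the instance (the port is not on an external network). A hedged sketch of the check's shape with oslo.policy (enforcer setup and rule registration elided; creds trimmed to the fields that matter here):

    # Sketch: a non-fatal policy check like the one logged above.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)   # rules loaded from policy files
    allowed = enforcer.authorize(
        'network:attach_external_network',
        {},                                                  # target
        {'project_id': 'c6d2f50c0db54315bfa96a24511dda90',   # creds, from the log
         'roles': ['member', 'reader']},
        do_raise=False)                                      # log-and-continue
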
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.165 226109 DEBUG nova.compute.manager [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.166 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.167 226109 INFO nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Creating image(s)
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.193 226109 DEBUG nova.storage.rbd_utils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.221 226109 DEBUG nova.storage.rbd_utils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.245 226109 DEBUG nova.storage.rbd_utils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.248 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.270 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.308 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
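
[Annotation] The qemu-img probe runs under oslo.concurrency's prlimit wrapper, capping address space (--as, 1 GiB) and CPU seconds (--cpu, 30) so a malformed image cannot wedge the agent. A minimal sketch of the same capped call:

    # Sketch: the resource-capped qemu-img probe from the log.
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json',
        prlimit=limits)
    image_info = json.loads(out)   # format, virtual size, backing file, ...
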
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.309 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.309 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.309 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.336 226109 DEBUG nova.storage.rbd_utils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.339 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.688 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.767 226109 DEBUG nova.storage.rbd_utils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] resizing rbd image 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
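
[Annotation] With the base image imported into the vms pool, nova grows the new disk to the flavor's root size (1 GiB here; the flavor dump later in the log shows m1.nano with root_gb=1). A hedged sketch of that resize using the Ceph python bindings, with the connection details visible in the log (nova shells out for the import but drives operations like this through librbd):

    # Sketch: resize the imported image to the 1 GiB target in the log,
    # using the rados/rbd python bindings.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk') as image:
                image.resize(1073741824)   # bytes, matching the logged resize
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
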
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.797 226109 DEBUG nova.network.neutron [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Successfully created port: 743a63f7-3663-4403-bc5e-47980f86cda9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:22:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1757352582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.855 226109 DEBUG nova.objects.instance [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'migration_context' on Instance uuid 8be3ed9f-1d51-4e7b-b352-45a261ab48ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.884 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.885 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Ensure instance console log exists: /var/lib/nova/instances/8be3ed9f-1d51-4e7b-b352-45a261ab48ea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.885 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.885 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:25 compute-1 nova_compute[226101]: 2025-12-06 07:22:25.886 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:26 compute-1 nova_compute[226101]: 2025-12-06 07:22:26.706 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:26 compute-1 nova_compute[226101]: 2025-12-06 07:22:26.854 226109 DEBUG nova.network.neutron [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Successfully updated port: 743a63f7-3663-4403-bc5e-47980f86cda9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:22:26 compute-1 nova_compute[226101]: 2025-12-06 07:22:26.897 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:22:26 compute-1 nova_compute[226101]: 2025-12-06 07:22:26.898 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquired lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:22:26 compute-1 nova_compute[226101]: 2025-12-06 07:22:26.898 226109 DEBUG nova.network.neutron [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:22:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:26.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:27.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.063 226109 DEBUG nova.compute.manager [req-05bf7fcb-4fd5-4b5c-9261-314669def66c req-21dcc018-0e0d-477f-9c1d-1572f106e6c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Received event network-changed-743a63f7-3663-4403-bc5e-47980f86cda9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.064 226109 DEBUG nova.compute.manager [req-05bf7fcb-4fd5-4b5c-9261-314669def66c req-21dcc018-0e0d-477f-9c1d-1572f106e6c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Refreshing instance network info cache due to event network-changed-743a63f7-3663-4403-bc5e-47980f86cda9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.065 226109 DEBUG oslo_concurrency.lockutils [req-05bf7fcb-4fd5-4b5c-9261-314669def66c req-21dcc018-0e0d-477f-9c1d-1572f106e6c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:22:27 compute-1 podman[260253]: 2025-12-06 07:22:27.109660065 +0000 UTC m=+0.093424845 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.120 226109 DEBUG nova.network.neutron [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:22:27 compute-1 ceph-mon[81689]: pgmap v1937: 305 pgs: 305 active+clean; 121 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 43 KiB/s wr, 178 op/s
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.935 226109 DEBUG nova.network.neutron [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Updating instance_info_cache with network_info: [{"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.962 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Releasing lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.962 226109 DEBUG nova.compute.manager [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Instance network_info: |[{"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
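
[Annotation] Everything the vif plug and guest XML generation need rides in that cached network_info blob. A tiny sketch pulling out the fields used next (`cached_blob` is a hypothetical variable holding the |[...]| JSON payload from the log):

    # Sketch: reading the cached network_info shown above.
    import json

    vif = json.loads(cached_blob)[0]       # cached_blob: the |[...]| payload
    port_id  = vif['id']                   # 743a63f7-3663-4403-bc5e-47980f86cda9
    mac      = vif['address']              # fa:16:3e:93:61:7a
    fixed_ip = vif['network']['subnets'][0]['ips'][0]['address']   # 10.100.0.5
    mtu      = vif['network']['meta']['mtu']                       # 1442
    devname  = vif['devname']              # tap743a63f7-36
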
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.963 226109 DEBUG oslo_concurrency.lockutils [req-05bf7fcb-4fd5-4b5c-9261-314669def66c req-21dcc018-0e0d-477f-9c1d-1572f106e6c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.963 226109 DEBUG nova.network.neutron [req-05bf7fcb-4fd5-4b5c-9261-314669def66c req-21dcc018-0e0d-477f-9c1d-1572f106e6c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Refreshing network info cache for port 743a63f7-3663-4403-bc5e-47980f86cda9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.965 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Start _get_guest_xml network_info=[{"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.970 226109 WARNING nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:22:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.975 226109 DEBUG nova.virt.libvirt.host [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.975 226109 DEBUG nova.virt.libvirt.host [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.978 226109 DEBUG nova.virt.libvirt.host [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.978 226109 DEBUG nova.virt.libvirt.host [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.979 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.980 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.980 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.980 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.980 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.981 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.981 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.981 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.981 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.982 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.982 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.982 226109 DEBUG nova.virt.hardware [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
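
[Annotation] With no flavor or image topology constraints (preferred 0:0:0 meaning "no preference", maxima 65536 per dimension), the search above reduces to finding (sockets, cores, threads) triples that cover one vCPU, and 1:1:1 is the only candidate. An illustrative reduction of that enumeration (not nova's exact code):

    # Illustrative sketch of the topology enumeration in the log:
    # triples whose product equals the vCPU count, within the maxima.
    vcpus, maximum = 1, 65536

    candidates = [
        (s, c, t)
        for s in range(1, min(vcpus, maximum) + 1)
        for c in range(1, min(vcpus, maximum) + 1)
        for t in range(1, min(vcpus, maximum) + 1)
        if s * c * t == vcpus
    ]
    assert candidates == [(1, 1, 1)]   # matches "Possible topologies" above
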
Dec 06 07:22:27 compute-1 nova_compute[226101]: 2025-12-06 07:22:27.984 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:28 compute-1 ceph-mon[81689]: pgmap v1938: 305 pgs: 305 active+clean; 150 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Dec 06 07:22:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:22:28 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1655764102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:28 compute-1 nova_compute[226101]: 2025-12-06 07:22:28.598 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:28 compute-1 nova_compute[226101]: 2025-12-06 07:22:28.633 226109 DEBUG nova.storage.rbd_utils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:28 compute-1 nova_compute[226101]: 2025-12-06 07:22:28.638 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:28 compute-1 nova_compute[226101]: 2025-12-06 07:22:28.953 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:29.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:22:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1821247374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.099 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.101 226109 DEBUG nova.virt.libvirt.vif [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:22:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-895305952',display_name='tempest-DeleteServersTestJSON-server-895305952',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-895305952',id=95,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-tl2ojcwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:22:25Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=8be3ed9f-1d51-4e7b-b352-45a261ab48ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.102 226109 DEBUG nova.network.os_vif_util [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.103 226109 DEBUG nova.network.os_vif_util [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:61:7a,bridge_name='br-int',has_traffic_filtering=True,id=743a63f7-3663-4403-bc5e-47980f86cda9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743a63f7-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.104 226109 DEBUG nova.objects.instance [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8be3ed9f-1d51-4e7b-b352-45a261ab48ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.119 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:22:29 compute-1 nova_compute[226101]:   <uuid>8be3ed9f-1d51-4e7b-b352-45a261ab48ea</uuid>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   <name>instance-0000005f</name>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <nova:name>tempest-DeleteServersTestJSON-server-895305952</nova:name>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:22:27</nova:creationTime>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <nova:user uuid="d966fefcb38a45219b9cc637c46a3d62">tempest-DeleteServersTestJSON-1764569218-project-member</nova:user>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <nova:project uuid="c6d2f50c0db54315bfa96a24511dda90">tempest-DeleteServersTestJSON-1764569218</nova:project>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <nova:port uuid="743a63f7-3663-4403-bc5e-47980f86cda9">
Dec 06 07:22:29 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <system>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <entry name="serial">8be3ed9f-1d51-4e7b-b352-45a261ab48ea</entry>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <entry name="uuid">8be3ed9f-1d51-4e7b-b352-45a261ab48ea</entry>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     </system>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   <os>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   </os>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   <features>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   </features>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk">
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       </source>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk.config">
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       </source>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:22:29 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:93:61:7a"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <target dev="tap743a63f7-36"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/8be3ed9f-1d51-4e7b-b352-45a261ab48ea/console.log" append="off"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <video>
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     </video>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:22:29 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:22:29 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:22:29 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:22:29 compute-1 nova_compute[226101]: </domain>
Dec 06 07:22:29 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.120 226109 DEBUG nova.compute.manager [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Preparing to wait for external event network-vif-plugged-743a63f7-3663-4403-bc5e-47980f86cda9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.120 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.121 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.121 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.121 226109 DEBUG nova.virt.libvirt.vif [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:22:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-895305952',display_name='tempest-DeleteServersTestJSON-server-895305952',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-895305952',id=95,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-tl2ojcwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:22:25Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=8be3ed9f-1d51-4e7b-b352-45a261ab48ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.122 226109 DEBUG nova.network.os_vif_util [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.122 226109 DEBUG nova.network.os_vif_util [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:61:7a,bridge_name='br-int',has_traffic_filtering=True,id=743a63f7-3663-4403-bc5e-47980f86cda9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743a63f7-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.122 226109 DEBUG os_vif [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:61:7a,bridge_name='br-int',has_traffic_filtering=True,id=743a63f7-3663-4403-bc5e-47980f86cda9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743a63f7-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.123 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.124 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.124 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.126 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.127 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap743a63f7-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.127 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap743a63f7-36, col_values=(('external_ids', {'iface-id': '743a63f7-3663-4403-bc5e-47980f86cda9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:61:7a', 'vm-uuid': '8be3ed9f-1d51-4e7b-b352-45a261ab48ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.129 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:29 compute-1 NetworkManager[49031]: <info>  [1765005749.1298] manager: (tap743a63f7-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.132 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.134 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.136 226109 INFO os_vif [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:61:7a,bridge_name='br-int',has_traffic_filtering=True,id=743a63f7-3663-4403-bc5e-47980f86cda9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743a63f7-36')
Dec 06 07:22:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1655764102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1821247374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.463 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.463 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.463 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No VIF found with MAC fa:16:3e:93:61:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.464 226109 INFO nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Using config drive
Dec 06 07:22:29 compute-1 nova_compute[226101]: 2025-12-06 07:22:29.495 226109 DEBUG nova.storage.rbd_utils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.172 226109 INFO nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Creating config drive at /var/lib/nova/instances/8be3ed9f-1d51-4e7b-b352-45a261ab48ea/disk.config
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.178 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8be3ed9f-1d51-4e7b-b352-45a261ab48ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfywn_74a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.225 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.308 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8be3ed9f-1d51-4e7b-b352-45a261ab48ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfywn_74a" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.339 226109 DEBUG nova.storage.rbd_utils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.343 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8be3ed9f-1d51-4e7b-b352-45a261ab48ea/disk.config 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.379 226109 DEBUG nova.network.neutron [req-05bf7fcb-4fd5-4b5c-9261-314669def66c req-21dcc018-0e0d-477f-9c1d-1572f106e6c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Updated VIF entry in instance network info cache for port 743a63f7-3663-4403-bc5e-47980f86cda9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.380 226109 DEBUG nova.network.neutron [req-05bf7fcb-4fd5-4b5c-9261-314669def66c req-21dcc018-0e0d-477f-9c1d-1572f106e6c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Updating instance_info_cache with network_info: [{"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.398 226109 DEBUG oslo_concurrency.lockutils [req-05bf7fcb-4fd5-4b5c-9261-314669def66c req-21dcc018-0e0d-477f-9c1d-1572f106e6c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:22:30 compute-1 ceph-mon[81689]: pgmap v1939: 305 pgs: 305 active+clean; 150 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 102 op/s
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.522 226109 DEBUG oslo_concurrency.processutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8be3ed9f-1d51-4e7b-b352-45a261ab48ea/disk.config 8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.524 226109 INFO nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Deleting local config drive /var/lib/nova/instances/8be3ed9f-1d51-4e7b-b352-45a261ab48ea/disk.config because it was imported into RBD.
Dec 06 07:22:30 compute-1 kernel: tap743a63f7-36: entered promiscuous mode
Dec 06 07:22:30 compute-1 NetworkManager[49031]: <info>  [1765005750.5839] manager: (tap743a63f7-36): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.583 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:30 compute-1 ovn_controller[130279]: 2025-12-06T07:22:30Z|00381|binding|INFO|Claiming lport 743a63f7-3663-4403-bc5e-47980f86cda9 for this chassis.
Dec 06 07:22:30 compute-1 ovn_controller[130279]: 2025-12-06T07:22:30Z|00382|binding|INFO|743a63f7-3663-4403-bc5e-47980f86cda9: Claiming fa:16:3e:93:61:7a 10.100.0.5
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.596 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:61:7a 10.100.0.5'], port_security=['fa:16:3e:93:61:7a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8be3ed9f-1d51-4e7b-b352-45a261ab48ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '2', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=743a63f7-3663-4403-bc5e-47980f86cda9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.598 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 743a63f7-3663-4403-bc5e-47980f86cda9 in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 bound to our chassis
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.600 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:22:30 compute-1 systemd-udevd[260414]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.613 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[65a67094-ad25-428c-9cec-bca99a72f661]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.614 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85cfbf28-71 in ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.617 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85cfbf28-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.617 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff46573-453a-46f0-aea8-2e6ac02cc65d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.618 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf9101d-8145-4b27-918d-8d34ba9f2c11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 NetworkManager[49031]: <info>  [1765005750.6288] device (tap743a63f7-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:22:30 compute-1 NetworkManager[49031]: <info>  [1765005750.6302] device (tap743a63f7-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:22:30 compute-1 systemd-machined[190302]: New machine qemu-44-instance-0000005f.
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.633 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb97d59-101a-4481-a514-9aa7fa2c566d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.657 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4d58134d-e57e-44fb-8971-288b8c0eea52]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_controller[130279]: 2025-12-06T07:22:30Z|00383|binding|INFO|Setting lport 743a63f7-3663-4403-bc5e-47980f86cda9 ovn-installed in OVS
Dec 06 07:22:30 compute-1 ovn_controller[130279]: 2025-12-06T07:22:30Z|00384|binding|INFO|Setting lport 743a63f7-3663-4403-bc5e-47980f86cda9 up in Southbound
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.684 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[32a397d4-dc77-4a85-b1c1-8f6096f24f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 systemd[1]: Started Virtual Machine qemu-44-instance-0000005f.
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.692 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:30 compute-1 systemd-udevd[260419]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:22:30 compute-1 NetworkManager[49031]: <info>  [1765005750.6948] manager: (tap85cfbf28-70): new Veth device (/org/freedesktop/NetworkManager/Devices/175)
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.693 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[26f48c5a-69de-4722-91f1-574caabd5859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.728 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd0e077-961f-4fb3-bb75-d302f7e3fcf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.731 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[052c0faf-a8bd-4c77-9357-d3ba1e95159a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 NetworkManager[49031]: <info>  [1765005750.7595] device (tap85cfbf28-70): carrier: link connected
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.764 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6715761f-556e-4ad5-b7b3-6035c407eb23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.781 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0e27defc-c061-455d-a408-31784fb1f081]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603026, 'reachable_time': 25317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260448, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.797 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6259e501-a796-4450-9178-74d6e2133a21]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:762'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603026, 'tstamp': 603026}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260449, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.812 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c58281e6-0167-4c6c-9839-4e3d1bd8df36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603026, 'reachable_time': 25317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260450, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
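The two privsep replies above are netlink dumps (an RTM_NEWADDR message for the link-local address and an RTM_NEWLINK message for the tap85cfbf28-71 veth peer) that the agent runs inside the ovnmeta- namespace before wiring up the metadata proxy. A minimal pyroute2 sketch of the same dump, assuming the namespace from the log still exists and using interface index 2 as shown in the replies:

    # Minimal sketch (assumptions: run as root while the ovnmeta namespace
    # above still exists; index 2 is taken from the 'index': 2 fields).
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347')
    try:
        for addr in ns.get_addr(index=2):           # RTM_NEWADDR messages
            print(addr.get_attr('IFA_ADDRESS'))     # e.g. fe80::f816:3eff:fe81:762
        for link in ns.get_links(2):                # RTM_NEWLINK messages
            print(link.get_attr('IFLA_IFNAME'),     # e.g. tap85cfbf28-71
                  link['state'])                    # 'up', matching the log
    finally:
        ns.close()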
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.844 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6411c286-04d1-4218-b326-8bcda31d7a69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.919 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fda51cdb-1d41-4918-9923-c6087b45ba48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.921 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.922 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.923 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85cfbf28-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:30 compute-1 NetworkManager[49031]: <info>  [1765005750.9262] manager: (tap85cfbf28-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.926 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:30 compute-1 kernel: tap85cfbf28-70: entered promiscuous mode
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.932 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.934 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85cfbf28-70, col_values=(('external_ids', {'iface-id': '41b1b168-8e0e-4991-9750-9b31221f4863'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
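Taken together, the three ovsdbapp commands logged above (DelPortCommand, AddPortCommand, DbSetCommand) move tap85cfbf28-70 off br-ex, plug it into br-int, and tag its Interface row with the Neutron port ID so ovn-controller can bind it. A hedged sketch of the same sequence through ovsdbapp's public Open_vSwitch API; the socket path and timeout are assumptions, the command arguments are from the log:

    # Sketch only: replays the logged transactions via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # DelPortCommand(port=tap85cfbf28-70, bridge=br-ex, if_exists=True)
        txn.add(api.del_port('tap85cfbf28-70', bridge='br-ex', if_exists=True))
        # AddPortCommand(bridge=br-int, port=tap85cfbf28-70, may_exist=True)
        txn.add(api.add_port('br-int', 'tap85cfbf28-70', may_exist=True))
        # DbSetCommand(table=Interface, record=tap85cfbf28-70, ...)
        txn.add(api.db_set('Interface', 'tap85cfbf28-70',
                           ('external_ids',
                            {'iface-id': '41b1b168-8e0e-4991-9750-9b31221f4863'})))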
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.936 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.937 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:30 compute-1 ovn_controller[130279]: 2025-12-06T07:22:30Z|00385|binding|INFO|Releasing lport 41b1b168-8e0e-4991-9750-9b31221f4863 from this chassis (sb_readonly=0)
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.938 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.941 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3a61bdf0-fae2-45eb-b101-eb3b1ecbd736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.942 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:22:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:30.943 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'env', 'PROCESS_TAG=haproxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85cfbf28-7016-4776-8fc2-2eb08a6b8347.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
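The rendered configuration above binds haproxy to 169.254.169.254:80 inside the namespace, forwards to the /var/lib/neutron/metadata_proxy unix socket, and adds the X-OVN-Network-ID header that identifies the network to the metadata service. Once the proxy is up it can be probed from the hypervisor for debugging; a sketch (the metadata path is the standard Nova endpoint, not taken from this log, and whether the backend resolves the request to a port depends on the source address it leaves the namespace with):

    # Debugging sketch; assumes root and the namespace name from the log.
    import subprocess

    subprocess.run([
        'ip', 'netns', 'exec', 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347',
        'curl', '-s', 'http://169.254.169.254/openstack/latest/meta_data.json',
    ], check=True)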
Dec 06 07:22:30 compute-1 nova_compute[226101]: 2025-12-06 07:22:30.954 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:30.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.016 226109 DEBUG nova.compute.manager [req-7c83a4b6-ac83-4f43-a998-34263c87c891 req-77a0e8fb-44c4-4082-9049-1b3caa0ca58a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Received event network-vif-plugged-743a63f7-3663-4403-bc5e-47980f86cda9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.017 226109 DEBUG oslo_concurrency.lockutils [req-7c83a4b6-ac83-4f43-a998-34263c87c891 req-77a0e8fb-44c4-4082-9049-1b3caa0ca58a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.017 226109 DEBUG oslo_concurrency.lockutils [req-7c83a4b6-ac83-4f43-a998-34263c87c891 req-77a0e8fb-44c4-4082-9049-1b3caa0ca58a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.017 226109 DEBUG oslo_concurrency.lockutils [req-7c83a4b6-ac83-4f43-a998-34263c87c891 req-77a0e8fb-44c4-4082-9049-1b3caa0ca58a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.018 226109 DEBUG nova.compute.manager [req-7c83a4b6-ac83-4f43-a998-34263c87c891 req-77a0e8fb-44c4-4082-9049-1b3caa0ca58a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Processing event network-vif-plugged-743a63f7-3663-4403-bc5e-47980f86cda9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:22:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:31.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.176 226109 DEBUG nova.compute.manager [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.178 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005751.1759598, 8be3ed9f-1d51-4e7b-b352-45a261ab48ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.178 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] VM Started (Lifecycle Event)
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.182 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.189 226109 INFO nova.virt.libvirt.driver [-] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Instance spawned successfully.
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.189 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.196 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.200 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.210 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.211 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.212 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.213 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.213 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.214 226109 DEBUG nova.virt.libvirt.driver [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.219 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.219 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005751.1773534, 8be3ed9f-1d51-4e7b-b352-45a261ab48ea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.219 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] VM Paused (Lifecycle Event)
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.252 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.256 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005751.1813076, 8be3ed9f-1d51-4e7b-b352-45a261ab48ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.256 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] VM Resumed (Lifecycle Event)
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.285 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.296 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.303 226109 INFO nova.compute.manager [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Took 6.14 seconds to spawn the instance on the hypervisor.
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.303 226109 DEBUG nova.compute.manager [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.317 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.386 226109 INFO nova.compute.manager [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Took 7.32 seconds to build instance.
Dec 06 07:22:31 compute-1 podman[260523]: 2025-12-06 07:22:31.402230734 +0000 UTC m=+0.057531101 container create 8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 06 07:22:31 compute-1 nova_compute[226101]: 2025-12-06 07:22:31.406 226109 DEBUG oslo_concurrency.lockutils [None req-fddc498a-91e1-4d58-972e-e8b346a78a98 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:31 compute-1 systemd[1]: Started libpod-conmon-8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2.scope.
Dec 06 07:22:31 compute-1 podman[260523]: 2025-12-06 07:22:31.368252232 +0000 UTC m=+0.023552629 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:22:31 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:22:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5b053d4c85a0988b1f6b22d778f4da9cdc7068d0d22fddacb345101782b6db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:31 compute-1 podman[260523]: 2025-12-06 07:22:31.493402737 +0000 UTC m=+0.148703124 container init 8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:22:31 compute-1 podman[260523]: 2025-12-06 07:22:31.500703005 +0000 UTC m=+0.156003372 container start 8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 07:22:31 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[260553]: [NOTICE]   (260576) : New worker (260578) forked
Dec 06 07:22:31 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[260553]: [NOTICE]   (260576) : Loading success.
Dec 06 07:22:31 compute-1 podman[260539]: 2025-12-06 07:22:31.538142061 +0000 UTC m=+0.094618098 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:22:31 compute-1 podman[260535]: 2025-12-06 07:22:31.542569851 +0000 UTC m=+0.102896283 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
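The two health_status=healthy records above come from podman's periodic healthcheck timers; per the embedded config_data, each container mounts a healthcheck script at /openstack/healthcheck and podman runs it as the test command. The same check can be triggered on demand, for example:

    # On-demand healthcheck sketch; container names are from the two log
    # records above. Exit status 0 means healthy.
    import subprocess

    for name in ('ovn_metadata_agent', 'multipathd'):
        result = subprocess.run(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if result.returncode == 0 else 'unhealthy')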
Dec 06 07:22:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:32.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:33.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.181 226109 DEBUG nova.compute.manager [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Received event network-vif-plugged-743a63f7-3663-4403-bc5e-47980f86cda9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.182 226109 DEBUG oslo_concurrency.lockutils [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.183 226109 DEBUG oslo_concurrency.lockutils [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.183 226109 DEBUG oslo_concurrency.lockutils [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.184 226109 DEBUG nova.compute.manager [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] No waiting events found dispatching network-vif-plugged-743a63f7-3663-4403-bc5e-47980f86cda9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.184 226109 WARNING nova.compute.manager [req-ee4bbdb6-0b09-4149-afc1-5d6ad5a9ea5e req-c19d8aad-f287-48df-bf6a-7f12b8d5cb66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Received unexpected event network-vif-plugged-743a63f7-3663-4403-bc5e-47980f86cda9 for instance with vm_state active and task_state None.
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.823 226109 DEBUG oslo_concurrency.lockutils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.823 226109 DEBUG oslo_concurrency.lockutils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.824 226109 INFO nova.compute.manager [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Shelving
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.851 226109 DEBUG nova.virt.libvirt.driver [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.918 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005738.917517, f9492565-fe30-4766-8a95-1a0f213da9ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.918 226109 INFO nova.compute.manager [-] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] VM Stopped (Lifecycle Event)
Dec 06 07:22:33 compute-1 nova_compute[226101]: 2025-12-06 07:22:33.934 226109 DEBUG nova.compute.manager [None req-a90d179b-3cac-43e0-abda-159803015c1a - - - - - -] [instance: f9492565-fe30-4766-8a95-1a0f213da9ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:34 compute-1 nova_compute[226101]: 2025-12-06 07:22:34.131 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:34.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:35.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:35 compute-1 nova_compute[226101]: 2025-12-06 07:22:35.227 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:36 compute-1 ceph-mon[81689]: pgmap v1940: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Dec 06 07:22:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:22:36 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/123745592' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:36.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:37.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:38 compute-1 ceph-mon[81689]: pgmap v1941: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Dec 06 07:22:38 compute-1 ceph-mon[81689]: pgmap v1942: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Dec 06 07:22:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/123745592' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:38.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:39 compute-1 ceph-mon[81689]: pgmap v1943: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 07:22:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:39.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:39 compute-1 nova_compute[226101]: 2025-12-06 07:22:39.137 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:40 compute-1 nova_compute[226101]: 2025-12-06 07:22:40.229 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:40 compute-1 ceph-mon[81689]: pgmap v1944: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 590 KiB/s wr, 88 op/s
Dec 06 07:22:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:40.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:41.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:42 compute-1 ceph-mon[81689]: pgmap v1945: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 590 KiB/s wr, 88 op/s
Dec 06 07:22:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:42.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:43.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:43 compute-1 nova_compute[226101]: 2025-12-06 07:22:43.895 226109 DEBUG nova.virt.libvirt.driver [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:22:44 compute-1 nova_compute[226101]: 2025-12-06 07:22:44.141 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:44 compute-1 ceph-mon[81689]: pgmap v1946: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:22:44 compute-1 ovn_controller[130279]: 2025-12-06T07:22:44Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:61:7a 10.100.0.5
Dec 06 07:22:44 compute-1 ovn_controller[130279]: 2025-12-06T07:22:44Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:61:7a 10.100.0.5
Dec 06 07:22:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:44.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:45.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3565512716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:45 compute-1 nova_compute[226101]: 2025-12-06 07:22:45.231 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:46.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:47.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2596672294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:47 compute-1 ceph-mon[81689]: pgmap v1947: 305 pgs: 305 active+clean; 177 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 618 KiB/s wr, 70 op/s
Dec 06 07:22:48 compute-1 ceph-mon[81689]: pgmap v1948: 305 pgs: 305 active+clean; 184 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 06 07:22:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:48.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:49.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:49 compute-1 nova_compute[226101]: 2025-12-06 07:22:49.145 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:49 compute-1 kernel: tap743a63f7-36 (unregistering): left promiscuous mode
Dec 06 07:22:49 compute-1 NetworkManager[49031]: <info>  [1765005769.5836] device (tap743a63f7-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:22:49 compute-1 nova_compute[226101]: 2025-12-06 07:22:49.592 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:49 compute-1 ovn_controller[130279]: 2025-12-06T07:22:49Z|00386|binding|INFO|Releasing lport 743a63f7-3663-4403-bc5e-47980f86cda9 from this chassis (sb_readonly=0)
Dec 06 07:22:49 compute-1 ovn_controller[130279]: 2025-12-06T07:22:49Z|00387|binding|INFO|Setting lport 743a63f7-3663-4403-bc5e-47980f86cda9 down in Southbound
Dec 06 07:22:49 compute-1 ovn_controller[130279]: 2025-12-06T07:22:49Z|00388|binding|INFO|Removing iface tap743a63f7-36 ovn-installed in OVS
Dec 06 07:22:49 compute-1 nova_compute[226101]: 2025-12-06 07:22:49.594 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:49.598 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:61:7a 10.100.0.5'], port_security=['fa:16:3e:93:61:7a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8be3ed9f-1d51-4e7b-b352-45a261ab48ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '4', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=743a63f7-3663-4403-bc5e-47980f86cda9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:49.599 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 743a63f7-3663-4403-bc5e-47980f86cda9 in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 unbound from our chassis
Dec 06 07:22:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:49.600 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:22:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:49.601 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2538e8-4774-49c1-b4f6-da366e2678df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:49.602 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace which is not needed anymore
Dec 06 07:22:49 compute-1 nova_compute[226101]: 2025-12-06 07:22:49.612 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:49 compute-1 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Dec 06 07:22:49 compute-1 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000005f.scope: Consumed 14.009s CPU time.
Dec 06 07:22:49 compute-1 systemd-machined[190302]: Machine qemu-44-instance-0000005f terminated.
Dec 06 07:22:49 compute-1 nova_compute[226101]: 2025-12-06 07:22:49.912 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:49 compute-1 nova_compute[226101]: 2025-12-06 07:22:49.923 226109 INFO nova.virt.libvirt.driver [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Instance shutdown successfully after 16 seconds.
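The shutdown above follows nova's clean-shutdown loop: the guest shutdown request issued at 07:22:33 was resent at 07:22:43 ("Instance in state 1 after 10 seconds - resending shutdown") and the domain finally powered off after 16 seconds. A rough, illustrative sketch of that retry pattern against libvirt (timings are illustrative; the real logic lives in nova.virt.libvirt.driver._clean_shutdown, and the domain name is taken from the machine scope above):

    # Illustrative retry loop, not nova's actual code.
    import time
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000005f')
    deadline = time.monotonic() + 60
    while time.monotonic() < deadline:
        if not dom.isActive():
            break                # guest powered off cleanly
        dom.shutdown()           # (re)send the guest shutdown request
        time.sleep(10)           # matches the 10 s cadence in the log
    else:
        if dom.isActive():
            dom.destroy()        # hard stop if the guest never powers off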
Dec 06 07:22:49 compute-1 nova_compute[226101]: 2025-12-06 07:22:49.929 226109 INFO nova.virt.libvirt.driver [-] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Instance destroyed successfully.
Dec 06 07:22:49 compute-1 nova_compute[226101]: 2025-12-06 07:22:49.929 226109 DEBUG nova.objects.instance [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8be3ed9f-1d51-4e7b-b352-45a261ab48ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:50 compute-1 nova_compute[226101]: 2025-12-06 07:22:50.210 226109 INFO nova.virt.libvirt.driver [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Beginning cold snapshot process
Dec 06 07:22:50 compute-1 nova_compute[226101]: 2025-12-06 07:22:50.232 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:50 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[260553]: [NOTICE]   (260576) : haproxy version is 2.8.14-c23fe91
Dec 06 07:22:50 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[260553]: [NOTICE]   (260576) : path to executable is /usr/sbin/haproxy
Dec 06 07:22:50 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[260553]: [WARNING]  (260576) : Exiting Master process...
Dec 06 07:22:50 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[260553]: [ALERT]    (260576) : Current worker (260578) exited with code 143 (Terminated)
Dec 06 07:22:50 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[260553]: [WARNING]  (260576) : All workers exited. Exiting... (0)
Dec 06 07:22:50 compute-1 systemd[1]: libpod-8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2.scope: Deactivated successfully.
Dec 06 07:22:50 compute-1 podman[260617]: 2025-12-06 07:22:50.589703376 +0000 UTC m=+0.899213573 container died 8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:22:50 compute-1 nova_compute[226101]: 2025-12-06 07:22:50.592 226109 DEBUG nova.compute.manager [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Received event network-vif-unplugged-743a63f7-3663-4403-bc5e-47980f86cda9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:50 compute-1 nova_compute[226101]: 2025-12-06 07:22:50.592 226109 DEBUG oslo_concurrency.lockutils [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:50 compute-1 nova_compute[226101]: 2025-12-06 07:22:50.592 226109 DEBUG oslo_concurrency.lockutils [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:50 compute-1 nova_compute[226101]: 2025-12-06 07:22:50.593 226109 DEBUG oslo_concurrency.lockutils [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:50 compute-1 nova_compute[226101]: 2025-12-06 07:22:50.593 226109 DEBUG nova.compute.manager [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] No waiting events found dispatching network-vif-unplugged-743a63f7-3663-4403-bc5e-47980f86cda9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:50 compute-1 nova_compute[226101]: 2025-12-06 07:22:50.593 226109 WARNING nova.compute.manager [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Received unexpected event network-vif-unplugged-743a63f7-3663-4403-bc5e-47980f86cda9 for instance with vm_state active and task_state shelving_image_uploading.
Dec 06 07:22:50 compute-1 nova_compute[226101]: 2025-12-06 07:22:50.598 226109 DEBUG nova.virt.libvirt.imagebackend [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:22:50 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2-userdata-shm.mount: Deactivated successfully.
Dec 06 07:22:50 compute-1 systemd[1]: var-lib-containers-storage-overlay-ab5b053d4c85a0988b1f6b22d778f4da9cdc7068d0d22fddacb345101782b6db-merged.mount: Deactivated successfully.
Dec 06 07:22:50 compute-1 podman[260617]: 2025-12-06 07:22:50.842309688 +0000 UTC m=+1.151819885 container cleanup 8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:22:50 compute-1 nova_compute[226101]: 2025-12-06 07:22:50.843 226109 DEBUG nova.storage.rbd_utils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] creating snapshot(a409b64343454f5e8b07f29aded0cc05) on rbd image(8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:22:50 compute-1 systemd[1]: libpod-conmon-8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2.scope: Deactivated successfully.
Dec 06 07:22:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1015985670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:50 compute-1 podman[260696]: 2025-12-06 07:22:50.970705471 +0000 UTC m=+0.105145182 container remove 8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:22:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:50.978 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a03ce2bc-e099-430d-b697-6039368f4554]: (4, ('Sat Dec  6 07:22:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2)\n8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2\nSat Dec  6 07:22:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2)\n8dea930aaeeadc4ea3afb162eb7b2c215be367e8c5894e82837ab9596cbed9c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
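
The privsep reply above is the other half of a split-privilege call: ovn_metadata_agent runs unprivileged and asks its privsep daemon to run the haproxy wrapper's stop/delete script as root; the daemon ships back a tuple of (message type, return value), here (4, (stdout, stderr, returncode)). A sketch of how such an entrypoint is declared with oslo.privsep; the context name, capability set, and helper function are illustrative, not neutron's actual definitions:

    import subprocess

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Illustrative context; real deployments configure how the daemon is
    # launched (sudo, rootwrap, or a pre-started daemon) in the cfg section.
    default = priv_context.PrivContext(
        'demo', cfg_section='privsep', pypath=__name__ + '.default',
        capabilities=[caps.CAP_SYS_ADMIN, caps.CAP_NET_ADMIN])

    @default.entrypoint
    def run_as_root(cmd):
        # Executes inside the privileged daemon; the return value is
        # serialized back to the unprivileged caller, as in the reply above.
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return proc.stdout, proc.stderr, proc.returncode
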
Dec 06 07:22:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:50.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:50.981 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cf25f492-8af3-4b52-961d-50c4df12be7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:50.982 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:50 compute-1 nova_compute[226101]: 2025-12-06 07:22:50.984 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:50 compute-1 kernel: tap85cfbf28-70: left promiscuous mode
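
DelPortCommand(port=tap85cfbf28-70, if_exists=True) is the agent removing the now-orphaned metadata tap from Open vSwitch through ovsdbapp's transactional IDL; the kernel line above ("left promiscuous mode") is the datapath confirming the detach. Roughly the same call, sketched against ovsdbapp's Open_vSwitch schema API (the socket path and timeout are assumptions):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)
    # Same effect as the logged DelPortCommand: a no-op if already gone.
    ovs.del_port('tap85cfbf28-70', if_exists=True).execute(check_error=True)
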
Dec 06 07:22:51 compute-1 nova_compute[226101]: 2025-12-06 07:22:51.003 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:51.005 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[77976bac-0658-4bb6-92a8-87c833c39d9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:51.019 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[463d1dc4-16f6-49c7-bbbf-c512dd789fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:51.021 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cc36810d-c00e-45ba-b7dd-46910d7ded7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:51.038 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcbbcca-b05e-4359-8ed8-e1c85d4daa4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603018, 'reachable_time': 30343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260729, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:51.041 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:22:51.041 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a09a9dba-9a42-4f20-9e7f-66f46e216496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:51 compute-1 systemd[1]: run-netns-ovnmeta\x2d85cfbf28\x2d7016\x2d4776\x2d8fc2\x2d2eb08a6b8347.mount: Deactivated successfully.
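
The large privsep reply above is the pre-deletion check: the agent dumps the links inside namespace ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 and finds only 'lo' (note 'target' in the netlink header), so the namespace is empty and safe to delete; remove_netns follows, and systemd reaps the /run/netns bind mount. The same two steps with pyroute2, which is what neutron's privileged ip_lib wraps underneath (a sketch; requires root):

    from pyroute2 import NetNS, netns

    name = 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347'
    # Enumerate interface names inside the target namespace.
    with NetNS(name) as ns:
        leftover = [link.get_attr('IFLA_IFNAME') for link in ns.get_links()]
    # Only loopback left -> nothing is still plugged into this namespace.
    if leftover == ['lo']:
        netns.remove(name)
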
Dec 06 07:22:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:51.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:51 compute-1 nova_compute[226101]: 2025-12-06 07:22:51.944 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
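
_cleanup_running_deleted_instances is one of several ComputeManager periodic tasks firing in this window (update_available_resource and _heal_instance_info_cache appear below). They are driven by oslo.service's periodic_task machinery; a minimal self-contained sketch of that pattern, not nova's actual code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # at most once per minute
        def _cleanup_running_deleted_instances(self, context):
            # Placeholder body; nova's real task reaps guests whose DB
            # records were already deleted.
            print('periodic cleanup tick')

    mgr = Manager()
    # A surrounding service loop invokes this on a timer; one pass shown:
    mgr.run_periodic_tasks(None)
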
Dec 06 07:22:52 compute-1 ceph-mon[81689]: pgmap v1949: 305 pgs: 305 active+clean; 184 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 307 KiB/s rd, 2.2 MiB/s wr, 84 op/s
Dec 06 07:22:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3752189496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/957506070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:52 compute-1 nova_compute[226101]: 2025-12-06 07:22:52.600 226109 DEBUG nova.compute.manager [req-98d24fc5-fe4d-4b60-9368-3f08b3c55d94 req-61169199-61ba-44e9-b436-6583a94e3a07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Received event network-vif-plugged-743a63f7-3663-4403-bc5e-47980f86cda9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:52 compute-1 nova_compute[226101]: 2025-12-06 07:22:52.601 226109 DEBUG oslo_concurrency.lockutils [req-98d24fc5-fe4d-4b60-9368-3f08b3c55d94 req-61169199-61ba-44e9-b436-6583a94e3a07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:52 compute-1 nova_compute[226101]: 2025-12-06 07:22:52.601 226109 DEBUG oslo_concurrency.lockutils [req-98d24fc5-fe4d-4b60-9368-3f08b3c55d94 req-61169199-61ba-44e9-b436-6583a94e3a07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:52 compute-1 nova_compute[226101]: 2025-12-06 07:22:52.601 226109 DEBUG oslo_concurrency.lockutils [req-98d24fc5-fe4d-4b60-9368-3f08b3c55d94 req-61169199-61ba-44e9-b436-6583a94e3a07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:52 compute-1 nova_compute[226101]: 2025-12-06 07:22:52.601 226109 DEBUG nova.compute.manager [req-98d24fc5-fe4d-4b60-9368-3f08b3c55d94 req-61169199-61ba-44e9-b436-6583a94e3a07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] No waiting events found dispatching network-vif-plugged-743a63f7-3663-4403-bc5e-47980f86cda9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:52 compute-1 nova_compute[226101]: 2025-12-06 07:22:52.602 226109 WARNING nova.compute.manager [req-98d24fc5-fe4d-4b60-9368-3f08b3c55d94 req-61169199-61ba-44e9-b436-6583a94e3a07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Received unexpected event network-vif-plugged-743a63f7-3663-4403-bc5e-47980f86cda9 for instance with vm_state active and task_state shelving_image_uploading.
Dec 06 07:22:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:52.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:53.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e259 e259: 3 total, 3 up, 3 in
Dec 06 07:22:53 compute-1 sshd-session[260730]: Connection reset by authenticating user root 45.135.232.92 port 37716 [preauth]
Dec 06 07:22:54 compute-1 nova_compute[226101]: 2025-12-06 07:22:54.150 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:54.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:55.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:55 compute-1 nova_compute[226101]: 2025-12-06 07:22:55.235 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:55 compute-1 ceph-mon[81689]: pgmap v1950: 305 pgs: 305 active+clean; 214 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 5.0 MiB/s wr, 129 op/s
Dec 06 07:22:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1469328797' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/9378072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:55 compute-1 nova_compute[226101]: 2025-12-06 07:22:55.701 226109 DEBUG nova.storage.rbd_utils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] cloning vms/8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk@a409b64343454f5e8b07f29aded0cc05 to images/ce6c8150-0026-4838-b3e9-1b872e9a3cee clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:22:56 compute-1 sshd-session[260732]: Invalid user admin from 45.135.232.92 port 62950
Dec 06 07:22:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:56 compute-1 ceph-mon[81689]: osdmap e259: 3 total, 3 up, 3 in
Dec 06 07:22:56 compute-1 ceph-mon[81689]: pgmap v1952: 305 pgs: 305 active+clean; 213 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 436 KiB/s rd, 6.9 MiB/s wr, 184 op/s
Dec 06 07:22:56 compute-1 ceph-mon[81689]: pgmap v1953: 305 pgs: 305 active+clean; 213 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 6.2 MiB/s wr, 164 op/s
Dec 06 07:22:56 compute-1 sshd-session[260732]: Connection reset by invalid user admin 45.135.232.92 port 62950 [preauth]
Dec 06 07:22:56 compute-1 nova_compute[226101]: 2025-12-06 07:22:56.741 226109 DEBUG nova.storage.rbd_utils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] flattening images/ce6c8150-0026-4838-b3e9-1b872e9a3cee flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
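
Three rbd_utils lines spread across this window trace the storage side of the shelve: 07:22:50 snapshots vms/8be3ed9f-..._disk, 07:22:55 clones that snapshot into the images pool as ce6c8150-... (the Glance image being uploaded), and 07:22:56 flattens the clone so it no longer references a parent that is about to go away. The same sequence with the python-rbd bindings, sketched; the names come from the log, the connection handling and the protect step are assumptions:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    src = cluster.open_ioctx('vms')
    dst = cluster.open_ioctx('images')
    try:
        img = rbd.Image(src, '8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk')
        try:
            img.create_snap('a409b64343454f5e8b07f29aded0cc05')
            # Classic clone semantics require a protected parent snapshot.
            img.protect_snap('a409b64343454f5e8b07f29aded0cc05')
        finally:
            img.close()
        rbd.RBD().clone(src, '8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk',
                        'a409b64343454f5e8b07f29aded0cc05',
                        dst, 'ce6c8150-0026-4838-b3e9-1b872e9a3cee')
        clone = rbd.Image(dst, 'ce6c8150-0026-4838-b3e9-1b872e9a3cee')
        try:
            clone.flatten()  # copy parent blocks so the clone stands alone
        finally:
            clone.close()
    finally:
        src.close()
        dst.close()
        cluster.shutdown()
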
Dec 06 07:22:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:56.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:57.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:58 compute-1 podman[260791]: 2025-12-06 07:22:58.133700153 +0000 UTC m=+0.117162780 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:22:58 compute-1 sshd-session[260771]: Invalid user vagrant from 45.135.232.92 port 62960
Dec 06 07:22:58 compute-1 nova_compute[226101]: 2025-12-06 07:22:58.620 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:58 compute-1 nova_compute[226101]: 2025-12-06 07:22:58.648 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:58 compute-1 nova_compute[226101]: 2025-12-06 07:22:58.648 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:58 compute-1 nova_compute[226101]: 2025-12-06 07:22:58.648 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:58 compute-1 nova_compute[226101]: 2025-12-06 07:22:58.648 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:22:58 compute-1 nova_compute[226101]: 2025-12-06 07:22:58.649 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
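
To size RBD-backed disk capacity, the resource audit shells out to ceph df rather than holding a librados connection in-process; the two-to-three-second runtimes logged below are that subprocess round trip, and the matching "df" dispatches show up in the ceph-mon audit log. A sketch of the same probe (the pool name is taken from the clone line above, not from this command's output):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)
    pool = next(p for p in stats['pools'] if p['name'] == 'vms')
    free_gb = pool['stats']['max_avail'] / 1024 ** 3
    print(f'free: {free_gb:.1f} GiB')
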
Dec 06 07:22:58 compute-1 sshd-session[260771]: Connection reset by invalid user vagrant 45.135.232.92 port 62960 [preauth]
Dec 06 07:22:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:58.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:22:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:59.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:59 compute-1 nova_compute[226101]: 2025-12-06 07:22:59.153 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:00 compute-1 nova_compute[226101]: 2025-12-06 07:23:00.237 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:00 compute-1 ceph-mon[81689]: pgmap v1954: 305 pgs: 305 active+clean; 213 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.2 MiB/s wr, 111 op/s
Dec 06 07:23:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:00.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:01.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:23:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3269548074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:01 compute-1 nova_compute[226101]: 2025-12-06 07:23:01.389 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:01 compute-1 nova_compute[226101]: 2025-12-06 07:23:01.460 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:23:01 compute-1 nova_compute[226101]: 2025-12-06 07:23:01.460 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:23:01 compute-1 nova_compute[226101]: 2025-12-06 07:23:01.609 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:23:01 compute-1 nova_compute[226101]: 2025-12-06 07:23:01.610 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4571MB free_disk=20.901050567626953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:23:01 compute-1 nova_compute[226101]: 2025-12-06 07:23:01.611 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:01 compute-1 nova_compute[226101]: 2025-12-06 07:23:01.611 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:01.640 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:01.640 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:01.641 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:01 compute-1 nova_compute[226101]: 2025-12-06 07:23:01.688 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 8be3ed9f-1d51-4e7b-b352-45a261ab48ea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:23:01 compute-1 nova_compute[226101]: 2025-12-06 07:23:01.688 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:23:01 compute-1 nova_compute[226101]: 2025-12-06 07:23:01.689 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:23:01 compute-1 sshd-session[260828]: Invalid user user from 45.135.232.92 port 62978
Dec 06 07:23:01 compute-1 nova_compute[226101]: 2025-12-06 07:23:01.743 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:01 compute-1 podman[260842]: 2025-12-06 07:23:01.755277879 +0000 UTC m=+0.052395292 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:23:01 compute-1 podman[260843]: 2025-12-06 07:23:01.756353698 +0000 UTC m=+0.045812244 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:23:02 compute-1 sshd-session[260828]: Connection reset by invalid user user 45.135.232.92 port 62978 [preauth]
Dec 06 07:23:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:02.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:03.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:03 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.201039314s, txc = 0x55b552764600
Dec 06 07:23:03 compute-1 ceph-mon[81689]: pgmap v1955: 305 pgs: 305 active+clean; 213 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.2 MiB/s wr, 111 op/s
Dec 06 07:23:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/512214763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3269548074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:03 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.800856113s, txc = 0x55b5546b0000
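
Two _txc_committed_kv commits on osd.1 took roughly six seconds to reach the key-value store; these messages only appear when an operation exceeds bluestore's slow-op age threshold, and on a 21 GiB three-OSD colocated lab cluster like this one they more often indicate contended backing storage than a Ceph bug. A quick sketch for mining such latencies out of a saved journal file (file path is an argument; the regex matches the log format above):

    import re
    import sys

    pat = re.compile(r'slow operation observed for (\S+), '
                     r'latency = ([0-9.]+)s')
    with open(sys.argv[1]) as journal:
        for line in journal:
            m = pat.search(line)
            if m:
                print(f'{m.group(1)}: {float(m.group(2)):.2f}s')
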
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:03.764950) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783764991, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2270, "num_deletes": 255, "total_data_size": 5425307, "memory_usage": 5522240, "flush_reason": "Manual Compaction"}
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783795193, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2182472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39603, "largest_seqno": 41868, "table_properties": {"data_size": 2175312, "index_size": 3782, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19466, "raw_average_key_size": 21, "raw_value_size": 2159328, "raw_average_value_size": 2399, "num_data_blocks": 166, "num_entries": 900, "num_filter_entries": 900, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005574, "oldest_key_time": 1765005574, "file_creation_time": 1765005783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 30285 microseconds, and 6134 cpu microseconds.
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:03.795233) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2182472 bytes OK
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:03.795252) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:03.804356) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:03.804396) EVENT_LOG_v1 {"time_micros": 1765005783804387, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:03.804438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 5415102, prev total WAL file size 5415102, number of live WAL files 2.
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:03.805654) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323731' seq:72057594037927935, type:22 .. '6D6772737461740031353236' seq:0, type:0; will stop at (end)
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2131KB)], [75(10MB)]
Dec 06 07:23:03 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783805722, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 13469494, "oldest_snapshot_seqno": -1}
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 7276 keys, 10898089 bytes, temperature: kUnknown
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005784036055, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 10898089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10850153, "index_size": 28624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 187193, "raw_average_key_size": 25, "raw_value_size": 10720672, "raw_average_value_size": 1473, "num_data_blocks": 1140, "num_entries": 7276, "num_filter_entries": 7276, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765005783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:23:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:23:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/183565267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:04 compute-1 nova_compute[226101]: 2025-12-06 07:23:04.074 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:04 compute-1 nova_compute[226101]: 2025-12-06 07:23:04.080 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:23:04 compute-1 nova_compute[226101]: 2025-12-06 07:23:04.103 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
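
The inventory dict above is what placement uses to size this provider: for each resource class, schedulable capacity is (total - reserved) * allocation_ratio, so this host offers the scheduler 32 VCPUs, 7168 MB of RAM, and 17.1 GB of disk. Checking that arithmetic with the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
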
Dec 06 07:23:04 compute-1 nova_compute[226101]: 2025-12-06 07:23:04.124 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:23:04 compute-1 nova_compute[226101]: 2025-12-06 07:23:04.125 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:04.036326) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10898089 bytes
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:04.210567) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.5 rd, 47.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 10.8 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(11.2) write-amplify(5.0) OK, records in: 7718, records dropped: 442 output_compression: NoCompression
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:04.210609) EVENT_LOG_v1 {"time_micros": 1765005784210594, "job": 46, "event": "compaction_finished", "compaction_time_micros": 230399, "compaction_time_cpu_micros": 30705, "output_level": 6, "num_output_files": 1, "total_output_size": 10898089, "num_input_records": 7718, "num_output_records": 7276, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005784211123, "job": 46, "event": "table_file_deletion", "file_number": 77}
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005784213037, "job": 46, "event": "table_file_deletion", "file_number": 75}
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:03.805553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:04.213099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:04.213105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:04.213107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:04.213109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:04 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:04.213111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
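
The JOB 46 figures are internally consistent: the compaction read table 77 (2182472 bytes at L0) plus table 75 at L6 for an input_data_size of 13469494 bytes and wrote table 78 (10898089 bytes); RocksDB reports write-amplify as output bytes over the L0 input and read-write-amplify as (all input + output) over the L0 input. Verified from the logged byte counts:

    l0_in    = 2182472    # bytes: table 77, the L0 input
    total_in = 13469494   # bytes: "input_data_size" (L0 + L6 inputs)
    out      = 10898089   # bytes: table 78, the L6 output
    print(round(out / l0_in, 1))               # 5.0  -> write-amplify(5.0)
    print(round((total_in + out) / l0_in, 1))  # 11.2 -> read-write-amplify(11.2)
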
Dec 06 07:23:04 compute-1 sshd-session[260900]: Connection reset by authenticating user root 45.135.232.92 port 62982 [preauth]
Dec 06 07:23:04 compute-1 nova_compute[226101]: 2025-12-06 07:23:04.530 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:04 compute-1 nova_compute[226101]: 2025-12-06 07:23:04.922 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005769.922045, 8be3ed9f-1d51-4e7b-b352-45a261ab48ea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:23:04 compute-1 nova_compute[226101]: 2025-12-06 07:23:04.923 226109 INFO nova.compute.manager [-] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] VM Stopped (Lifecycle Event)
Dec 06 07:23:04 compute-1 nova_compute[226101]: 2025-12-06 07:23:04.943 226109 DEBUG nova.compute.manager [None req-a4b09f16-e075-40e7-b1d0-47bc539c8d22 - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:04 compute-1 nova_compute[226101]: 2025-12-06 07:23:04.946 226109 DEBUG nova.compute.manager [None req-a4b09f16-e075-40e7-b1d0-47bc539c8d22 - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: shelving_image_uploading, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:23:04 compute-1 nova_compute[226101]: 2025-12-06 07:23:04.972 226109 INFO nova.compute.manager [None req-a4b09f16-e075-40e7-b1d0-47bc539c8d22 - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] During sync_power_state the instance has a pending task (shelving_image_uploading). Skip.
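
The lifecycle handler above compares the DB power state (1, RUNNING) with what libvirt reports (4, SHUTDOWN) and skips any corrective action because task_state is set: the hypervisor stop is an expected side effect of the in-flight shelve, not state drift to repair. Sketched as a decision function; the constants mirror nova.compute.power_state's numeric values, the logic is illustrative:

    RUNNING, SHUTDOWN = 1, 4   # nova.compute.power_state values

    def sync_power_state(db_state, vm_state, task_state):
        """Return the action a sync pass would take (illustrative only)."""
        if task_state is not None:
            return 'skip: pending task ' + task_state
        if db_state != vm_state:
            return 'reconcile: update DB and handle the stop/start'
        return 'in sync'

    print(sync_power_state(RUNNING, SHUTDOWN, 'shelving_image_uploading'))
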
Dec 06 07:23:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:04.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:05.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:05 compute-1 nova_compute[226101]: 2025-12-06 07:23:05.094 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:05 compute-1 nova_compute[226101]: 2025-12-06 07:23:05.095 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:23:05 compute-1 nova_compute[226101]: 2025-12-06 07:23:05.095 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:23:05 compute-1 nova_compute[226101]: 2025-12-06 07:23:05.118 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:23:05 compute-1 nova_compute[226101]: 2025-12-06 07:23:05.119 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:23:05 compute-1 nova_compute[226101]: 2025-12-06 07:23:05.119 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:23:05 compute-1 nova_compute[226101]: 2025-12-06 07:23:05.119 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8be3ed9f-1d51-4e7b-b352-45a261ab48ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:05 compute-1 ceph-mon[81689]: pgmap v1956: 305 pgs: 305 active+clean; 225 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 129 op/s
Dec 06 07:23:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2070338189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:05 compute-1 ceph-mon[81689]: pgmap v1957: 305 pgs: 305 active+clean; 269 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 3.5 MiB/s wr, 156 op/s
Dec 06 07:23:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/183565267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:05 compute-1 nova_compute[226101]: 2025-12-06 07:23:05.266 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:23:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828009080' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:23:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:23:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828009080' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:23:06 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:06.151 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.152 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:06 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:06.153 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:23:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.236 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Updating instance_info_cache with network_info: [{"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.262 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.263 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.263 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.263 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.264 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.264 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.284 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Triggering sync for uuid 8be3ed9f-1d51-4e7b-b352-45a261ab48ea _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.284 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.285 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:06 compute-1 nova_compute[226101]: 2025-12-06 07:23:06.285 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:23:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:23:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:07.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:23:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:07.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:07.155 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:07 compute-1 ceph-mon[81689]: pgmap v1958: 305 pgs: 305 active+clean; 272 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.9 MiB/s wr, 183 op/s
Dec 06 07:23:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3978424021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:07 compute-1 nova_compute[226101]: 2025-12-06 07:23:07.528 226109 DEBUG nova.storage.rbd_utils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] removing snapshot(a409b64343454f5e8b07f29aded0cc05) on rbd image(8be3ed9f-1d51-4e7b-b352-45a261ab48ea_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:23:07 compute-1 nova_compute[226101]: 2025-12-06 07:23:07.610 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:07 compute-1 nova_compute[226101]: 2025-12-06 07:23:07.611 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:07 compute-1 nova_compute[226101]: 2025-12-06 07:23:07.629 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:09.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:09.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:09 compute-1 nova_compute[226101]: 2025-12-06 07:23:09.532 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3828009080' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:23:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3828009080' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:23:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2534009470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:09 compute-1 ceph-mon[81689]: pgmap v1959: 305 pgs: 305 active+clean; 261 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.9 MiB/s wr, 189 op/s
Dec 06 07:23:10 compute-1 nova_compute[226101]: 2025-12-06 07:23:10.269 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1387182121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:23:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1387182121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:23:10 compute-1 ceph-mon[81689]: pgmap v1960: 305 pgs: 305 active+clean; 273 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.9 MiB/s wr, 186 op/s
Dec 06 07:23:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:11.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:23:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:11.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:23:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:11 compute-1 nova_compute[226101]: 2025-12-06 07:23:11.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:11 compute-1 nova_compute[226101]: 2025-12-06 07:23:11.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:23:11 compute-1 nova_compute[226101]: 2025-12-06 07:23:11.620 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:23:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e260 e260: 3 total, 3 up, 3 in
Dec 06 07:23:11 compute-1 nova_compute[226101]: 2025-12-06 07:23:11.662 226109 DEBUG nova.storage.rbd_utils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] creating snapshot(snap) on rbd image(ce6c8150-0026-4838-b3e9-1b872e9a3cee) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:23:12 compute-1 nova_compute[226101]: 2025-12-06 07:23:12.620 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:13.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:13.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e261 e261: 3 total, 3 up, 3 in
Dec 06 07:23:13 compute-1 ceph-mon[81689]: osdmap e260: 3 total, 3 up, 3 in
Dec 06 07:23:13 compute-1 ceph-mon[81689]: pgmap v1962: 305 pgs: 305 active+clean; 246 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.1 MiB/s wr, 162 op/s
Dec 06 07:23:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/444199091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:14 compute-1 nova_compute[226101]: 2025-12-06 07:23:14.534 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:14 compute-1 ceph-mon[81689]: osdmap e261: 3 total, 3 up, 3 in
Dec 06 07:23:14 compute-1 ceph-mon[81689]: pgmap v1964: 305 pgs: 305 active+clean; 246 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 06 07:23:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:15.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:15.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:15 compute-1 nova_compute[226101]: 2025-12-06 07:23:15.270 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:15 compute-1 nova_compute[226101]: 2025-12-06 07:23:15.585 226109 INFO nova.virt.libvirt.driver [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Snapshot image upload complete
Dec 06 07:23:15 compute-1 nova_compute[226101]: 2025-12-06 07:23:15.585 226109 DEBUG nova.compute.manager [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:15 compute-1 nova_compute[226101]: 2025-12-06 07:23:15.638 226109 INFO nova.compute.manager [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Shelve offloading
Dec 06 07:23:15 compute-1 nova_compute[226101]: 2025-12-06 07:23:15.643 226109 INFO nova.virt.libvirt.driver [-] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Instance destroyed successfully.
Dec 06 07:23:15 compute-1 nova_compute[226101]: 2025-12-06 07:23:15.644 226109 DEBUG nova.compute.manager [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:15 compute-1 nova_compute[226101]: 2025-12-06 07:23:15.646 226109 DEBUG oslo_concurrency.lockutils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:23:15 compute-1 nova_compute[226101]: 2025-12-06 07:23:15.646 226109 DEBUG oslo_concurrency.lockutils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquired lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:23:15 compute-1 nova_compute[226101]: 2025-12-06 07:23:15.646 226109 DEBUG nova.network.neutron [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:23:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:17.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:17.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:17 compute-1 ceph-mon[81689]: pgmap v1965: 305 pgs: 305 active+clean; 246 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 1.5 MiB/s wr, 49 op/s
Dec 06 07:23:17 compute-1 nova_compute[226101]: 2025-12-06 07:23:17.325 226109 DEBUG nova.network.neutron [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Updating instance_info_cache with network_info: [{"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:17 compute-1 nova_compute[226101]: 2025-12-06 07:23:17.342 226109 DEBUG oslo_concurrency.lockutils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Releasing lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:23:17 compute-1 nova_compute[226101]: 2025-12-06 07:23:17.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:17 compute-1 nova_compute[226101]: 2025-12-06 07:23:17.592 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:23:18 compute-1 ceph-mon[81689]: pgmap v1966: 305 pgs: 305 active+clean; 258 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 770 KiB/s wr, 54 op/s
Dec 06 07:23:18 compute-1 nova_compute[226101]: 2025-12-06 07:23:18.834 226109 INFO nova.virt.libvirt.driver [-] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Instance destroyed successfully.
Dec 06 07:23:18 compute-1 nova_compute[226101]: 2025-12-06 07:23:18.835 226109 DEBUG nova.objects.instance [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'resources' on Instance uuid 8be3ed9f-1d51-4e7b-b352-45a261ab48ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:18 compute-1 nova_compute[226101]: 2025-12-06 07:23:18.855 226109 DEBUG nova.virt.libvirt.vif [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:22:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-895305952',display_name='tempest-DeleteServersTestJSON-server-895305952',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-895305952',id=95,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:22:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-tl2ojcwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member',shelved_at='2025-12-06T07:23:15.585749',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='ce6c8150-0026-4838-b3e9-1b872e9a3cee'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:22:50Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=8be3ed9f-1d51-4e7b-b352-45a261ab48ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:23:18 compute-1 nova_compute[226101]: 2025-12-06 07:23:18.855 226109 DEBUG nova.network.os_vif_util [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743a63f7-36", "ovs_interfaceid": "743a63f7-3663-4403-bc5e-47980f86cda9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:23:18 compute-1 nova_compute[226101]: 2025-12-06 07:23:18.856 226109 DEBUG nova.network.os_vif_util [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:61:7a,bridge_name='br-int',has_traffic_filtering=True,id=743a63f7-3663-4403-bc5e-47980f86cda9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743a63f7-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:23:18 compute-1 nova_compute[226101]: 2025-12-06 07:23:18.856 226109 DEBUG os_vif [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:61:7a,bridge_name='br-int',has_traffic_filtering=True,id=743a63f7-3663-4403-bc5e-47980f86cda9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743a63f7-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:23:18 compute-1 nova_compute[226101]: 2025-12-06 07:23:18.858 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:18 compute-1 nova_compute[226101]: 2025-12-06 07:23:18.859 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap743a63f7-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:18 compute-1 nova_compute[226101]: 2025-12-06 07:23:18.860 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:18 compute-1 nova_compute[226101]: 2025-12-06 07:23:18.862 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:18 compute-1 nova_compute[226101]: 2025-12-06 07:23:18.864 226109 INFO os_vif [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:61:7a,bridge_name='br-int',has_traffic_filtering=True,id=743a63f7-3663-4403-bc5e-47980f86cda9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743a63f7-36')
Dec 06 07:23:19 compute-1 nova_compute[226101]: 2025-12-06 07:23:19.017 226109 DEBUG nova.compute.manager [req-8e7cec79-d361-4375-8e35-2a6797ceda40 req-d3eaf8d8-b068-4561-8c6b-5298be8e1f5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Received event network-changed-743a63f7-3663-4403-bc5e-47980f86cda9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:19 compute-1 nova_compute[226101]: 2025-12-06 07:23:19.017 226109 DEBUG nova.compute.manager [req-8e7cec79-d361-4375-8e35-2a6797ceda40 req-d3eaf8d8-b068-4561-8c6b-5298be8e1f5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Refreshing instance network info cache due to event network-changed-743a63f7-3663-4403-bc5e-47980f86cda9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:23:19 compute-1 nova_compute[226101]: 2025-12-06 07:23:19.017 226109 DEBUG oslo_concurrency.lockutils [req-8e7cec79-d361-4375-8e35-2a6797ceda40 req-d3eaf8d8-b068-4561-8c6b-5298be8e1f5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:23:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:19.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:19 compute-1 nova_compute[226101]: 2025-12-06 07:23:19.018 226109 DEBUG oslo_concurrency.lockutils [req-8e7cec79-d361-4375-8e35-2a6797ceda40 req-d3eaf8d8-b068-4561-8c6b-5298be8e1f5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:23:19 compute-1 nova_compute[226101]: 2025-12-06 07:23:19.018 226109 DEBUG nova.network.neutron [req-8e7cec79-d361-4375-8e35-2a6797ceda40 req-d3eaf8d8-b068-4561-8c6b-5298be8e1f5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Refreshing network info cache for port 743a63f7-3663-4403-bc5e-47980f86cda9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:23:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:19.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:19 compute-1 nova_compute[226101]: 2025-12-06 07:23:19.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:20 compute-1 nova_compute[226101]: 2025-12-06 07:23:20.274 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:20 compute-1 ceph-mon[81689]: pgmap v1967: 305 pgs: 305 active+clean; 264 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 1.4 MiB/s wr, 62 op/s
Dec 06 07:23:20 compute-1 nova_compute[226101]: 2025-12-06 07:23:20.438 226109 DEBUG nova.network.neutron [req-8e7cec79-d361-4375-8e35-2a6797ceda40 req-d3eaf8d8-b068-4561-8c6b-5298be8e1f5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Updated VIF entry in instance network info cache for port 743a63f7-3663-4403-bc5e-47980f86cda9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:23:20 compute-1 nova_compute[226101]: 2025-12-06 07:23:20.439 226109 DEBUG nova.network.neutron [req-8e7cec79-d361-4375-8e35-2a6797ceda40 req-d3eaf8d8-b068-4561-8c6b-5298be8e1f5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Updating instance_info_cache with network_info: [{"id": "743a63f7-3663-4403-bc5e-47980f86cda9", "address": "fa:16:3e:93:61:7a", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": null, "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap743a63f7-36", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:20 compute-1 nova_compute[226101]: 2025-12-06 07:23:20.471 226109 DEBUG oslo_concurrency.lockutils [req-8e7cec79-d361-4375-8e35-2a6797ceda40 req-d3eaf8d8-b068-4561-8c6b-5298be8e1f5a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-8be3ed9f-1d51-4e7b-b352-45a261ab48ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:23:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:21.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:21.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:21 compute-1 nova_compute[226101]: 2025-12-06 07:23:21.147 226109 INFO nova.virt.libvirt.driver [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Deleting instance files /var/lib/nova/instances/8be3ed9f-1d51-4e7b-b352-45a261ab48ea_del
Dec 06 07:23:21 compute-1 nova_compute[226101]: 2025-12-06 07:23:21.148 226109 INFO nova.virt.libvirt.driver [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] Deletion of /var/lib/nova/instances/8be3ed9f-1d51-4e7b-b352-45a261ab48ea_del complete
Dec 06 07:23:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:21 compute-1 nova_compute[226101]: 2025-12-06 07:23:21.404 226109 INFO nova.scheduler.client.report [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Deleted allocations for instance 8be3ed9f-1d51-4e7b-b352-45a261ab48ea
Dec 06 07:23:21 compute-1 nova_compute[226101]: 2025-12-06 07:23:21.465 226109 DEBUG oslo_concurrency.lockutils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:21 compute-1 nova_compute[226101]: 2025-12-06 07:23:21.465 226109 DEBUG oslo_concurrency.lockutils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:21 compute-1 nova_compute[226101]: 2025-12-06 07:23:21.566 226109 DEBUG nova.scheduler.client.report [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:23:21 compute-1 nova_compute[226101]: 2025-12-06 07:23:21.656 226109 DEBUG nova.scheduler.client.report [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:23:21 compute-1 nova_compute[226101]: 2025-12-06 07:23:21.656 226109 DEBUG nova.compute.provider_tree [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:23:21 compute-1 nova_compute[226101]: 2025-12-06 07:23:21.675 226109 DEBUG nova.scheduler.client.report [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:23:21 compute-1 nova_compute[226101]: 2025-12-06 07:23:21.707 226109 DEBUG nova.scheduler.client.report [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:23:21 compute-1 nova_compute[226101]: 2025-12-06 07:23:21.736 226109 DEBUG oslo_concurrency.processutils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.051508) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802051545, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 445, "num_deletes": 251, "total_data_size": 574647, "memory_usage": 584208, "flush_reason": "Manual Compaction"}
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Dec 06 07:23:22 compute-1 sudo[260981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:22 compute-1 sudo[260981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:22 compute-1 sudo[260981]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802126196, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 379122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41873, "largest_seqno": 42313, "table_properties": {"data_size": 376555, "index_size": 667, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6341, "raw_average_key_size": 19, "raw_value_size": 371351, "raw_average_value_size": 1121, "num_data_blocks": 30, "num_entries": 331, "num_filter_entries": 331, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005784, "oldest_key_time": 1765005784, "file_creation_time": 1765005802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 74746 microseconds, and 1556 cpu microseconds.
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.126253) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 379122 bytes OK
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.126275) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.129560) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.129572) EVENT_LOG_v1 {"time_micros": 1765005802129568, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.129588) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 571857, prev total WAL file size 571857, number of live WAL files 2.
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.130243) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(370KB)], [78(10MB)]
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802130300, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 11277211, "oldest_snapshot_seqno": -1}
Dec 06 07:23:22 compute-1 sudo[261006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:23:22 compute-1 sudo[261006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:22 compute-1 sudo[261006]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:23:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/24472655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:22 compute-1 nova_compute[226101]: 2025-12-06 07:23:22.229 226109 DEBUG oslo_concurrency.processutils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:22 compute-1 nova_compute[226101]: 2025-12-06 07:23:22.235 226109 DEBUG nova.compute.provider_tree [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:23:22 compute-1 sudo[261031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:22 compute-1 sudo[261031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:22 compute-1 sudo[261031]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:22 compute-1 nova_compute[226101]: 2025-12-06 07:23:22.255 226109 DEBUG nova.scheduler.client.report [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 7092 keys, 9294337 bytes, temperature: kUnknown
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802305142, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 9294337, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9249174, "index_size": 26299, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17797, "raw_key_size": 184148, "raw_average_key_size": 25, "raw_value_size": 9124432, "raw_average_value_size": 1286, "num_data_blocks": 1033, "num_entries": 7092, "num_filter_entries": 7092, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765005802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.306676) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9294337 bytes
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.308599) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 64.0 rd, 52.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(54.3) write-amplify(24.5) OK, records in: 7607, records dropped: 515 output_compression: NoCompression
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.308629) EVENT_LOG_v1 {"time_micros": 1765005802308615, "job": 48, "event": "compaction_finished", "compaction_time_micros": 176079, "compaction_time_cpu_micros": 26740, "output_level": 6, "num_output_files": 1, "total_output_size": 9294337, "num_input_records": 7607, "num_output_records": 7092, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802308881, "job": 48, "event": "table_file_deletion", "file_number": 80}
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802310813, "job": 48, "event": "table_file_deletion", "file_number": 78}
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.130072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.310894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.310901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.310902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.310904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:23:22.310905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-1 sudo[261058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:23:22 compute-1 sudo[261058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:22 compute-1 nova_compute[226101]: 2025-12-06 07:23:22.318 226109 DEBUG oslo_concurrency.lockutils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:22 compute-1 nova_compute[226101]: 2025-12-06 07:23:22.410 226109 DEBUG oslo_concurrency.lockutils [None req-6e4c924d-be87-41fc-a7be-32e17bb72074 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 48.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:22 compute-1 nova_compute[226101]: 2025-12-06 07:23:22.411 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 16.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:22 compute-1 nova_compute[226101]: 2025-12-06 07:23:22.411 226109 INFO nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 8be3ed9f-1d51-4e7b-b352-45a261ab48ea] During sync_power_state the instance has a pending task (shelving_image_uploading). Skip.
Dec 06 07:23:22 compute-1 nova_compute[226101]: 2025-12-06 07:23:22.412 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "8be3ed9f-1d51-4e7b-b352-45a261ab48ea" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:22 compute-1 sudo[261058]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:23:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:23.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:23:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:23.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:23 compute-1 nova_compute[226101]: 2025-12-06 07:23:23.862 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e262 e262: 3 total, 3 up, 3 in
Dec 06 07:23:24 compute-1 ceph-mon[81689]: pgmap v1968: 305 pgs: 305 active+clean; 249 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 416 KiB/s rd, 2.6 MiB/s wr, 110 op/s
Dec 06 07:23:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/24472655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:23:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:23:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:23:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:23:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:23:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:23:24 compute-1 ceph-mon[81689]: osdmap e262: 3 total, 3 up, 3 in
Dec 06 07:23:24 compute-1 ceph-mon[81689]: pgmap v1970: 305 pgs: 305 active+clean; 200 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 433 KiB/s rd, 2.6 MiB/s wr, 130 op/s
Dec 06 07:23:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e263 e263: 3 total, 3 up, 3 in
Dec 06 07:23:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:25.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:25.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:25 compute-1 nova_compute[226101]: 2025-12-06 07:23:25.275 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:25 compute-1 ceph-mon[81689]: osdmap e263: 3 total, 3 up, 3 in
Dec 06 07:23:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:26 compute-1 ceph-mon[81689]: pgmap v1972: 305 pgs: 305 active+clean; 200 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 518 KiB/s rd, 2.4 MiB/s wr, 138 op/s
Dec 06 07:23:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:27.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:27.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:28 compute-1 nova_compute[226101]: 2025-12-06 07:23:28.867 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:28 compute-1 ceph-mon[81689]: pgmap v1973: 305 pgs: 305 active+clean; 172 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 1.8 MiB/s wr, 139 op/s
Dec 06 07:23:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4237680350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:29.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:29.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:29 compute-1 podman[261114]: 2025-12-06 07:23:29.139309097 +0000 UTC m=+0.128454416 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:23:29 compute-1 nova_compute[226101]: 2025-12-06 07:23:29.367 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:29 compute-1 nova_compute[226101]: 2025-12-06 07:23:29.367 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:29 compute-1 nova_compute[226101]: 2025-12-06 07:23:29.403 226109 DEBUG nova.compute.manager [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:23:29 compute-1 nova_compute[226101]: 2025-12-06 07:23:29.505 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:29 compute-1 nova_compute[226101]: 2025-12-06 07:23:29.506 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:29 compute-1 nova_compute[226101]: 2025-12-06 07:23:29.514 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:23:29 compute-1 nova_compute[226101]: 2025-12-06 07:23:29.514 226109 INFO nova.compute.claims [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:23:29 compute-1 nova_compute[226101]: 2025-12-06 07:23:29.636 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:23:30 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1442763614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.073 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.079 226109 DEBUG nova.compute.provider_tree [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.096 226109 DEBUG nova.scheduler.client.report [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.115 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.116 226109 DEBUG nova.compute.manager [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.178 226109 DEBUG nova.compute.manager [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.178 226109 DEBUG nova.network.neutron [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.200 226109 INFO nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.232 226109 DEBUG nova.compute.manager [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.277 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.339 226109 DEBUG nova.compute.manager [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.341 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.341 226109 INFO nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Creating image(s)
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.369 226109 DEBUG nova.storage.rbd_utils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.397 226109 DEBUG nova.storage.rbd_utils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.427 226109 DEBUG nova.storage.rbd_utils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.431 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.460 226109 DEBUG nova.policy [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd966fefcb38a45219b9cc637c46a3d62', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.498 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.499 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.500 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.500 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.535 226109 DEBUG nova.storage.rbd_utils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:30 compute-1 nova_compute[226101]: 2025-12-06 07:23:30.540 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:23:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:31.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:23:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:31.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:32 compute-1 podman[261259]: 2025-12-06 07:23:32.082889283 +0000 UTC m=+0.057173012 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 06 07:23:32 compute-1 podman[261258]: 2025-12-06 07:23:32.083578702 +0000 UTC m=+0.062210738 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Dec 06 07:23:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:33.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:33.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:33 compute-1 nova_compute[226101]: 2025-12-06 07:23:33.881 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:33 compute-1 nova_compute[226101]: 2025-12-06 07:23:33.976 226109 DEBUG nova.network.neutron [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Successfully created port: f9d7d351-d081-4727-a686-93ca17bf53b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:23:34 compute-1 ceph-mon[81689]: pgmap v1974: 305 pgs: 305 active+clean; 135 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 23 KiB/s wr, 68 op/s
Dec 06 07:23:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1442763614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:34 compute-1 nova_compute[226101]: 2025-12-06 07:23:34.301 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.761s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 e264: 3 total, 3 up, 3 in
Dec 06 07:23:34 compute-1 nova_compute[226101]: 2025-12-06 07:23:34.491 226109 DEBUG nova.storage.rbd_utils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] resizing rbd image fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:23:34 compute-1 sudo[261348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:34 compute-1 sudo[261348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:34 compute-1 sudo[261348]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:34 compute-1 nova_compute[226101]: 2025-12-06 07:23:34.624 226109 DEBUG nova.objects.instance [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'migration_context' on Instance uuid fa33eb8b-971a-4974-9b2e-8a6b85faa5dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:34 compute-1 sudo[261375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:23:34 compute-1 sudo[261375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:34 compute-1 nova_compute[226101]: 2025-12-06 07:23:34.637 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:23:34 compute-1 nova_compute[226101]: 2025-12-06 07:23:34.637 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Ensure instance console log exists: /var/lib/nova/instances/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:23:34 compute-1 nova_compute[226101]: 2025-12-06 07:23:34.638 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:34 compute-1 nova_compute[226101]: 2025-12-06 07:23:34.638 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:34 compute-1 nova_compute[226101]: 2025-12-06 07:23:34.638 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:34 compute-1 sudo[261375]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:35.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:35.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:35 compute-1 nova_compute[226101]: 2025-12-06 07:23:35.279 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:35 compute-1 ceph-mon[81689]: pgmap v1975: 305 pgs: 305 active+clean; 144 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.2 MiB/s wr, 86 op/s
Dec 06 07:23:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/610633148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/377675331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:35 compute-1 ceph-mon[81689]: pgmap v1976: 305 pgs: 305 active+clean; 167 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:23:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:23:35 compute-1 ceph-mon[81689]: osdmap e264: 3 total, 3 up, 3 in
Dec 06 07:23:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:23:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2798063090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:36 compute-1 nova_compute[226101]: 2025-12-06 07:23:36.773 226109 DEBUG nova.network.neutron [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Successfully updated port: f9d7d351-d081-4727-a686-93ca17bf53b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:23:36 compute-1 nova_compute[226101]: 2025-12-06 07:23:36.815 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "refresh_cache-fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:23:36 compute-1 nova_compute[226101]: 2025-12-06 07:23:36.815 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquired lock "refresh_cache-fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:23:36 compute-1 nova_compute[226101]: 2025-12-06 07:23:36.815 226109 DEBUG nova.network.neutron [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:23:36 compute-1 nova_compute[226101]: 2025-12-06 07:23:36.915 226109 DEBUG nova.compute.manager [req-a7526bb0-893d-498e-9ac6-4cfb87c1963e req-01872c18-2402-4f11-b41d-99a01fdd4017 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Received event network-changed-f9d7d351-d081-4727-a686-93ca17bf53b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:36 compute-1 nova_compute[226101]: 2025-12-06 07:23:36.915 226109 DEBUG nova.compute.manager [req-a7526bb0-893d-498e-9ac6-4cfb87c1963e req-01872c18-2402-4f11-b41d-99a01fdd4017 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Refreshing instance network info cache due to event network-changed-f9d7d351-d081-4727-a686-93ca17bf53b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:23:36 compute-1 nova_compute[226101]: 2025-12-06 07:23:36.915 226109 DEBUG oslo_concurrency.lockutils [req-a7526bb0-893d-498e-9ac6-4cfb87c1963e req-01872c18-2402-4f11-b41d-99a01fdd4017 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:23:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:37.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:37 compute-1 ceph-mon[81689]: pgmap v1978: 305 pgs: 305 active+clean; 205 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 75 KiB/s rd, 4.1 MiB/s wr, 113 op/s
Dec 06 07:23:37 compute-1 nova_compute[226101]: 2025-12-06 07:23:37.082 226109 DEBUG nova.network.neutron [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:23:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:37.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.244 226109 DEBUG nova.network.neutron [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Updating instance_info_cache with network_info: [{"id": "f9d7d351-d081-4727-a686-93ca17bf53b9", "address": "fa:16:3e:d8:ec:0c", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d7d351-d0", "ovs_interfaceid": "f9d7d351-d081-4727-a686-93ca17bf53b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.262 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Releasing lock "refresh_cache-fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.263 226109 DEBUG nova.compute.manager [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Instance network_info: |[{"id": "f9d7d351-d081-4727-a686-93ca17bf53b9", "address": "fa:16:3e:d8:ec:0c", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d7d351-d0", "ovs_interfaceid": "f9d7d351-d081-4727-a686-93ca17bf53b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.264 226109 DEBUG oslo_concurrency.lockutils [req-a7526bb0-893d-498e-9ac6-4cfb87c1963e req-01872c18-2402-4f11-b41d-99a01fdd4017 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.265 226109 DEBUG nova.network.neutron [req-a7526bb0-893d-498e-9ac6-4cfb87c1963e req-01872c18-2402-4f11-b41d-99a01fdd4017 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Refreshing network info cache for port f9d7d351-d081-4727-a686-93ca17bf53b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.270 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Start _get_guest_xml network_info=[{"id": "f9d7d351-d081-4727-a686-93ca17bf53b9", "address": "fa:16:3e:d8:ec:0c", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d7d351-d0", "ovs_interfaceid": "f9d7d351-d081-4727-a686-93ca17bf53b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.277 226109 WARNING nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.282 226109 DEBUG nova.virt.libvirt.host [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.283 226109 DEBUG nova.virt.libvirt.host [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.288 226109 DEBUG nova.virt.libvirt.host [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.289 226109 DEBUG nova.virt.libvirt.host [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.290 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.290 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.291 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.291 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.291 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.291 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.292 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.292 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.292 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.292 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.292 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.293 226109 DEBUG nova.virt.hardware [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
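
The "Build topologies"/"Got 1 possible topologies" lines above enumerate (sockets, cores, threads) triples whose product equals the flavor's vCPU count, bounded by the 65536-per-dimension limits logged earlier. A minimal sketch of that enumeration (nova's hardware.py applies extra filtering and preference sorting, so treat this only as an illustration):

    # Sketch: all CPU topologies whose product equals the vCPU count,
    # capped per dimension (65536 each in the log above).
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    found.append((sockets, cores, threads))
        return found

    # For the 1-vCPU m1.nano flavor this yields exactly the single
    # topology seen in the log: [(1, 1, 1)].
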
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.295 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:23:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2675889841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.726 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
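
The driver shells out to the ceph CLI here rather than calling librados directly, so the monitor list it consumes can be reproduced with the identical command. A hedged sketch; the JSON field names ("mons", "addr") follow the long-standing mon dump schema and may need adjusting on newer Ceph releases:

    # Sketch: re-run the exact command from the log and extract monitor
    # addresses. The 'openstack' cephx id and conf path come from the log.
    import json, subprocess

    def ceph_mon_addrs():
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        dump = json.loads(out)
        # Entries look like {'name': ..., 'addr': '192.168.122.100:6789/0', ...}
        return [mon["addr"].split("/")[0] for mon in dump.get("mons", [])]
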
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.746 226109 DEBUG nova.storage.rbd_utils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.753 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:38 compute-1 nova_compute[226101]: 2025-12-06 07:23:38.885 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:39.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:39.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:23:39 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2596147566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.464 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.465 226109 DEBUG nova.virt.libvirt.vif [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:23:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1064235958',display_name='tempest-DeleteServersTestJSON-server-1064235958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1064235958',id=99,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-idmsfxu3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:23:30Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=fa33eb8b-971a-4974-9b2e-8a6b85faa5dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9d7d351-d081-4727-a686-93ca17bf53b9", "address": "fa:16:3e:d8:ec:0c", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d7d351-d0", "ovs_interfaceid": "f9d7d351-d081-4727-a686-93ca17bf53b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.466 226109 DEBUG nova.network.os_vif_util [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "f9d7d351-d081-4727-a686-93ca17bf53b9", "address": "fa:16:3e:d8:ec:0c", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d7d351-d0", "ovs_interfaceid": "f9d7d351-d081-4727-a686-93ca17bf53b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.467 226109 DEBUG nova.network.os_vif_util [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:ec:0c,bridge_name='br-int',has_traffic_filtering=True,id=f9d7d351-d081-4727-a686-93ca17bf53b9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d7d351-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.467 226109 DEBUG nova.objects.instance [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'pci_devices' on Instance uuid fa33eb8b-971a-4974-9b2e-8a6b85faa5dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.485 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:23:39 compute-1 nova_compute[226101]:   <uuid>fa33eb8b-971a-4974-9b2e-8a6b85faa5dc</uuid>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   <name>instance-00000063</name>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <nova:name>tempest-DeleteServersTestJSON-server-1064235958</nova:name>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:23:38</nova:creationTime>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <nova:user uuid="d966fefcb38a45219b9cc637c46a3d62">tempest-DeleteServersTestJSON-1764569218-project-member</nova:user>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <nova:project uuid="c6d2f50c0db54315bfa96a24511dda90">tempest-DeleteServersTestJSON-1764569218</nova:project>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <nova:port uuid="f9d7d351-d081-4727-a686-93ca17bf53b9">
Dec 06 07:23:39 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <system>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <entry name="serial">fa33eb8b-971a-4974-9b2e-8a6b85faa5dc</entry>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <entry name="uuid">fa33eb8b-971a-4974-9b2e-8a6b85faa5dc</entry>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     </system>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   <os>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   </os>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   <features>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   </features>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk">
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       </source>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk.config">
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       </source>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:23:39 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:d8:ec:0c"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <target dev="tapf9d7d351-d0"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc/console.log" append="off"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <video>
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     </video>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:23:39 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:23:39 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:23:39 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:23:39 compute-1 nova_compute[226101]: </domain>
Dec 06 07:23:39 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
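
The domain XML logged above is exactly what the driver will define in libvirt: an RBD-backed virtio root disk, the config-drive cdrom on SATA, and the OVS-backed tap interface. As a quick consistency check, the RBD sources (pool/image name plus the three monitor endpoints) can be pulled back out of the dump with the stdlib; a small sketch, where domain_xml is the <domain>...</domain> text above:

    # Sketch: extract rbd disk sources from the logged domain XML.
    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml):
        root = ET.fromstring(domain_xml)
        for src in root.findall("./devices/disk/source[@protocol='rbd']"):
            hosts = [(h.get("name"), h.get("port")) for h in src.findall("host")]
            yield src.get("name"), hosts

    # Yields ('vms/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk', [...]) and the
    # matching '..._disk.config' cdrom source, each with the three mon hosts.
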
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.486 226109 DEBUG nova.compute.manager [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Preparing to wait for external event network-vif-plugged-f9d7d351-d081-4727-a686-93ca17bf53b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.487 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.487 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.487 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
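
The acquire/release pair around _create_or_get_event is oslo.concurrency's named-lock idiom: every caller keyed on the same "<instance-uuid>-events" name serializes on one process-local lock. A minimal sketch of the same pattern (hypothetical function body, real lockutils API):

    # Sketch: the named-lock idiom behind the acquired/released lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events")
    def create_or_get_event(events, name, tag):
        # Runs with the named lock held; entering and leaving this function
        # is what produces the "acquired"/"released" DEBUG lines in the journal.
        return events.setdefault((name, tag), object())
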
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.488 226109 DEBUG nova.virt.libvirt.vif [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:23:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1064235958',display_name='tempest-DeleteServersTestJSON-server-1064235958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1064235958',id=99,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-idmsfxu3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:23:30Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=fa33eb8b-971a-4974-9b2e-8a6b85faa5dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9d7d351-d081-4727-a686-93ca17bf53b9", "address": "fa:16:3e:d8:ec:0c", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d7d351-d0", "ovs_interfaceid": "f9d7d351-d081-4727-a686-93ca17bf53b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.488 226109 DEBUG nova.network.os_vif_util [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "f9d7d351-d081-4727-a686-93ca17bf53b9", "address": "fa:16:3e:d8:ec:0c", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d7d351-d0", "ovs_interfaceid": "f9d7d351-d081-4727-a686-93ca17bf53b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.489 226109 DEBUG nova.network.os_vif_util [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:ec:0c,bridge_name='br-int',has_traffic_filtering=True,id=f9d7d351-d081-4727-a686-93ca17bf53b9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d7d351-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.489 226109 DEBUG os_vif [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:ec:0c,bridge_name='br-int',has_traffic_filtering=True,id=f9d7d351-d081-4727-a686-93ca17bf53b9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d7d351-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.490 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.490 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.491 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.493 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.494 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf9d7d351-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.494 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf9d7d351-d0, col_values=(('external_ids', {'iface-id': 'f9d7d351-d081-4727-a686-93ca17bf53b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:ec:0c', 'vm-uuid': 'fa33eb8b-971a-4974-9b2e-8a6b85faa5dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.495 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:39 compute-1 NetworkManager[49031]: <info>  [1765005819.4967] manager: (tapf9d7d351-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.499 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.502 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.502 226109 INFO os_vif [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:ec:0c,bridge_name='br-int',has_traffic_filtering=True,id=f9d7d351-d081-4727-a686-93ca17bf53b9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d7d351-d0')
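
The ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) amount to one idempotent OVS transaction. A CLI equivalent driven from Python, with every value taken from the log (a sketch only; os-vif actually talks to ovsdb over the IDL, not through ovs-vsctl):

    # Sketch: ovs-vsctl equivalent of the logged ovsdbapp transaction.
    import subprocess

    subprocess.check_call([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tapf9d7d351-d0",
        "--", "set", "Interface", "tapf9d7d351-d0",
        "external_ids:iface-id=f9d7d351-d081-4727-a686-93ca17bf53b9",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:d8:ec:0c",
        "external_ids:vm-uuid=fa33eb8b-971a-4974-9b2e-8a6b85faa5dc",
    ])
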
Dec 06 07:23:39 compute-1 ceph-mon[81689]: pgmap v1979: 305 pgs: 305 active+clean; 205 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 4.1 MiB/s wr, 93 op/s
Dec 06 07:23:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2675889841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.711 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.711 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.711 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No VIF found with MAC fa:16:3e:d8:ec:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:23:39 compute-1 nova_compute[226101]: 2025-12-06 07:23:39.712 226109 INFO nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Using config drive
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.149 226109 DEBUG nova.storage.rbd_utils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.155 226109 DEBUG nova.network.neutron [req-a7526bb0-893d-498e-9ac6-4cfb87c1963e req-01872c18-2402-4f11-b41d-99a01fdd4017 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Updated VIF entry in instance network info cache for port f9d7d351-d081-4727-a686-93ca17bf53b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.155 226109 DEBUG nova.network.neutron [req-a7526bb0-893d-498e-9ac6-4cfb87c1963e req-01872c18-2402-4f11-b41d-99a01fdd4017 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Updating instance_info_cache with network_info: [{"id": "f9d7d351-d081-4727-a686-93ca17bf53b9", "address": "fa:16:3e:d8:ec:0c", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d7d351-d0", "ovs_interfaceid": "f9d7d351-d081-4727-a686-93ca17bf53b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.161 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.161 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.198 226109 DEBUG oslo_concurrency.lockutils [req-a7526bb0-893d-498e-9ac6-4cfb87c1963e req-01872c18-2402-4f11-b41d-99a01fdd4017 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.200 226109 DEBUG nova.compute.manager [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.280 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.284 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.285 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.292 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.292 226109 INFO nova.compute.claims [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:23:40 compute-1 nova_compute[226101]: 2025-12-06 07:23:40.448 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:41.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.084 226109 INFO nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Creating config drive at /var/lib/nova/instances/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc/disk.config
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.092 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxkr1u7on execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:41.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2596147566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1605953259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:41 compute-1 ceph-mon[81689]: pgmap v1980: 305 pgs: 305 active+clean; 260 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 6.4 MiB/s wr, 134 op/s
Dec 06 07:23:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3502054305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:23:41 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/432722194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.231 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxkr1u7on" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
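
Config-drive generation is a plain mkisofs run over a temporary staging directory of metadata files (the /tmp/tmpxkr1u7on tree above). The invocation can be reproduced with the exact flags from the log; staging_dir below stands in for that temp tree:

    # Sketch: rebuild the config-drive ISO with the flags logged above.
    import subprocess

    def make_config_drive(iso_path, staging_dir,
                          publisher="OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9"):
        subprocess.check_call([
            "/usr/bin/mkisofs", "-o", iso_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", publisher, "-quiet", "-J", "-r",
            "-V", "config-2",  # the volume label config-drive consumers look for
            staging_dir,
        ])
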
Dec 06 07:23:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.272 226109 DEBUG nova.storage.rbd_utils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.276 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc/disk.config fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.304 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.856s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.310 226109 DEBUG nova.compute.provider_tree [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.326 226109 DEBUG nova.scheduler.client.report [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
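
The inventory dict above is what the resource tracker reports to placement; schedulable capacity per resource class follows the standard formula capacity = (total - reserved) * allocation_ratio. Checked against the logged numbers:

    # Sketch: capacity check against the inventory dict in the log.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
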
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.361 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.362 226109 DEBUG nova.compute.manager [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.414 226109 DEBUG nova.compute.manager [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.415 226109 DEBUG nova.network.neutron [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.436 226109 INFO nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.452 226109 DEBUG nova.compute.manager [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.555 226109 DEBUG nova.compute.manager [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.557 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.557 226109 INFO nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Creating image(s)
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.605 226109 DEBUG nova.storage.rbd_utils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.635 226109 DEBUG nova.storage.rbd_utils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.662 226109 DEBUG nova.storage.rbd_utils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.666 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.732 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
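
The two records above show Nova probing the cached base image with qemu-img, wrapped in oslo.concurrency's prlimit helper so a malformed image cannot exhaust memory or CPU. A minimal sketch of the same call, assuming oslo.concurrency is installed (the path and limits are the ones from the log):

    # Sketch only: mirrors the logged command, not Nova's internal helper.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,  # rendered as --as=1073741824
        cpu_time=30,               # rendered as --cpu=30
    )
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)  # JSON description of the image
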
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.733 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.733 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.734 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.763 226109 DEBUG nova.storage.rbd_utils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:41 compute-1 nova_compute[226101]: 2025-12-06 07:23:41.767 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.010 226109 DEBUG nova.policy [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'baddb65c90da47a58d026b0db966f6c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '001e2256cb8b430d93c1ff613010d199', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
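
The policy record above is a routine authorization probe: Nova asks oslo.policy whether the caller may attach to an external network, and a plain member/reader token fails the admin-oriented rule. A sketch of the same check; the 'role:admin' default below is an illustrative assumption, not Nova's shipped policy:

    # Sketch only: the registered rule default is illustrative.
    from oslo_config import cfg
    from oslo_policy import policy

    cfg.CONF([])  # initialize config with defaults
    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))

    creds = {'roles': ['member', 'reader'],
             'project_id': '001e2256cb8b430d93c1ff613010d199'}
    # Returns False instead of raising because do_raise=False.
    print(enforcer.authorize('network:attach_external_network',
                             target={}, creds=creds, do_raise=False))
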
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.119 226109 DEBUG oslo_concurrency.processutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc/disk.config fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.844s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.120 226109 INFO nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Deleting local config drive /var/lib/nova/instances/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc/disk.config because it was imported into RBD.
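
The config drive is staged locally, pushed into the Ceph 'vms' pool with rbd import, then deleted. A sketch of the equivalent import through the python-rados/python-rbd bindings (the 4 MiB chunk size is an arbitrary choice here):

    # Sketch only: same effect as the logged "rbd import" CLI call.
    import os
    import rados
    import rbd

    src = '/var/lib/nova/instances/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc/disk.config'
    name = 'fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_disk.config'

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            rbd.RBD().create(ioctx, name, os.path.getsize(src),
                             old_format=False)  # image format 2, as in the log
            with rbd.Image(ioctx, name) as image, open(src, 'rb') as f:
                offset = 0
                while chunk := f.read(4 * 1024 * 1024):
                    image.write(chunk, offset)
                    offset += len(chunk)
    os.unlink(src)  # "Deleting local config drive ... imported into RBD"
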
Dec 06 07:23:42 compute-1 kernel: tapf9d7d351-d0: entered promiscuous mode
Dec 06 07:23:42 compute-1 NetworkManager[49031]: <info>  [1765005822.1730] manager: (tapf9d7d351-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/178)
Dec 06 07:23:42 compute-1 ovn_controller[130279]: 2025-12-06T07:23:42Z|00389|binding|INFO|Claiming lport f9d7d351-d081-4727-a686-93ca17bf53b9 for this chassis.
Dec 06 07:23:42 compute-1 ovn_controller[130279]: 2025-12-06T07:23:42Z|00390|binding|INFO|f9d7d351-d081-4727-a686-93ca17bf53b9: Claiming fa:16:3e:d8:ec:0c 10.100.0.8
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.175 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:42 compute-1 ovn_controller[130279]: 2025-12-06T07:23:42Z|00391|binding|INFO|Setting lport f9d7d351-d081-4727-a686-93ca17bf53b9 ovn-installed in OVS
Dec 06 07:23:42 compute-1 systemd-udevd[261666]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:23:42 compute-1 ovn_controller[130279]: 2025-12-06T07:23:42Z|00392|binding|INFO|Setting lport f9d7d351-d081-4727-a686-93ca17bf53b9 up in Southbound
Dec 06 07:23:42 compute-1 NetworkManager[49031]: <info>  [1765005822.2277] device (tapf9d7d351-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.227 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:ec:0c 10.100.0.8'], port_security=['fa:16:3e:d8:ec:0c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fa33eb8b-971a-4974-9b2e-8a6b85faa5dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '2', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f9d7d351-d081-4727-a686-93ca17bf53b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:23:42 compute-1 NetworkManager[49031]: <info>  [1765005822.2284] device (tapf9d7d351-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.228 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.229 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f9d7d351-d081-4727-a686-93ca17bf53b9 in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 bound to our chassis
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.231 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.243 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1c3fea-312a-45fc-bc34-1528b37fdd16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.244 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85cfbf28-71 in ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.246 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85cfbf28-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.246 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[22566521-9f82-4c2b-a735-6008b17e31a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
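
The repeating "privsep: reply[...]" records are round-trips to oslo.privsep's privileged daemon: the agent process sends a call, the daemon executes it with elevated capabilities and returns a (status, result) pair. A structural sketch of that pattern; the names are illustrative, not Neutron's actual ip_lib entrypoints, and the call only works where the privsep daemon can be started as root:

    # Sketch only: illustrative entrypoint, not Neutron's privileged code.
    from oslo_privsep import capabilities, priv_context

    ctx = priv_context.PrivContext(
        'demo',
        cfg_section='privsep',
        pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_NET_ADMIN])

    @ctx.entrypoint
    def set_link_up(ifname):
        # Body runs inside the privileged daemon with CAP_NET_ADMIN.
        from pyroute2 import IPRoute
        with IPRoute() as ipr:
            idx = ipr.link_lookup(ifname=ifname)[0]
            ipr.link('set', index=idx, state='up')
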
Dec 06 07:23:42 compute-1 systemd-machined[190302]: New machine qemu-45-instance-00000063.
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.248 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a89ae9bc-52e0-495e-a482-d810cbb26715]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 systemd[1]: Started Virtual Machine qemu-45-instance-00000063.
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.261 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[453f93ba-2479-4803-a765-f3ed60c12c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.278 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2ddb5e-724e-4cba-9584-c99b11970313]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.320 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e9892c7c-2f30-4bad-b332-066c8998fc8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 systemd-udevd[261669]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:23:42 compute-1 NetworkManager[49031]: <info>  [1765005822.3278] manager: (tap85cfbf28-70): new Veth device (/org/freedesktop/NetworkManager/Devices/179)
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.327 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d0fbab-2807-4db6-a06b-1603e53077ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.356 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[94ff0b9f-e49a-4482-87cd-c2073b019501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.361 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7eeaba04-7291-44da-8448-387105e7e837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 NetworkManager[49031]: <info>  [1765005822.3900] device (tap85cfbf28-70): carrier: link connected
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.398 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc4d123-15b3-4633-94e9-00c8edd2d83e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.422 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee7ed8c-6bf6-4b62-8f65-2c996304bd7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610189, 'reachable_time': 18874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261702, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
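
The large reply above is a raw RTM_NEWLINK dump for the veth leg inside the metadata namespace. The same attributes can be read directly with pyroute2; the namespace and interface names below are taken from the log, and error handling is omitted:

    # Sketch only: requires root and the pyroute2 package.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347') as ns:
        for link in ns.get_links():
            if link.get_attr('IFLA_IFNAME') == 'tap85cfbf28-71':
                print(link.get_attr('IFLA_ADDRESS'),    # fa:16:3e:81:07:62
                      link.get_attr('IFLA_OPERSTATE'),  # UP
                      link.get_attr('IFLA_MTU'))        # 1500
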
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.439 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3c21d40f-4b92-4520-b69d-6979e4f00dda]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:762'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610189, 'tstamp': 610189}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261703, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.463 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[095c2666-8e0c-487b-8c19-b59c5906776c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610189, 'reachable_time': 18874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261704, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.490 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[56eecfaa-1318-4512-aecc-86f7d7b134a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/432722194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:42 compute-1 ceph-mon[81689]: pgmap v1981: 305 pgs: 305 active+clean; 260 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.4 MiB/s wr, 154 op/s
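
The ceph-mon records show client.openstack dispatching a JSON "df" mon command, with the pgmap line as the monitor's periodic usage summary. A sketch of the same query through librados; the key names in the returned JSON follow "ceph df --format json":

    # Sketch only: sends the same {"prefix": "df"} command seen in the log.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        df = json.loads(out)
        print(df['stats']['total_avail_bytes'])
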
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.547 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dad612fb-5875-4472-acc1-66b5be90f224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.548 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.549 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.549 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85cfbf28-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.551 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:42 compute-1 NetworkManager[49031]: <info>  [1765005822.5517] manager: (tap85cfbf28-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Dec 06 07:23:42 compute-1 kernel: tap85cfbf28-70: entered promiscuous mode
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.553 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.555 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85cfbf28-70, col_values=(('external_ids', {'iface-id': '41b1b168-8e0e-4991-9750-9b31221f4863'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
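
The three ovsdbapp transactions above (drop the port from br-ex if present, add it to br-int, tag the Interface with its OVN iface-id) have a direct CLI equivalent; chained with "--", they collapse into one atomic ovs-vsctl call, sketched here via subprocess:

    # Sketch only: CLI equivalent of the logged ovsdbapp commands.
    import subprocess

    port = 'tap85cfbf28-70'
    iface_id = '41b1b168-8e0e-4991-9750-9b31221f4863'
    subprocess.run(
        ['ovs-vsctl',
         '--', '--if-exists', 'del-port', 'br-ex', port,
         '--', '--may-exist', 'add-port', 'br-int', port,
         '--', 'set', 'Interface', port,
         'external_ids:iface-id=' + iface_id],
        check=True)
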
Dec 06 07:23:42 compute-1 ovn_controller[130279]: 2025-12-06T07:23:42Z|00393|binding|INFO|Releasing lport 41b1b168-8e0e-4991-9750-9b31221f4863 from this chassis (sb_readonly=0)
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.557 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.558 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.559 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[df555410-c4c4-47c6-a71c-f3e1ab86cf88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.560 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:23:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:42.561 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'env', 'PROCESS_TAG=haproxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85cfbf28-7016-4776-8fc2-2eb08a6b8347.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
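
With the config rendered (note that the backend "server metadata /var/lib/neutron/metadata_proxy" is a UNIX socket path, and the X-OVN-Network-ID header tells the metadata service which network a request came from), the agent launches haproxy inside the ovnmeta namespace via rootwrap. Stripped of rootwrap and the PROCESS_TAG environment, the launch reduces to the following sketch:

    # Sketch only: requires root; rootwrap/PROCESS_TAG omitted for brevity.
    import subprocess

    network_id = '85cfbf28-7016-4776-8fc2-2eb08a6b8347'
    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-' + network_id,
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id],
        check=True)
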
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.574 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.665 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.898s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.751 226109 DEBUG nova.storage.rbd_utils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] resizing rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.825 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005822.824567, fa33eb8b-971a-4974-9b2e-8a6b85faa5dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.825 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] VM Started (Lifecycle Event)
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.832 226109 DEBUG nova.network.neutron [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Successfully created port: f1259ceb-8bfa-4339-914f-ce900f9a2cf0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.862 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.866 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005822.8256106, fa33eb8b-971a-4974-9b2e-8a6b85faa5dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.867 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] VM Paused (Lifecycle Event)
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.919 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.922 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:23:42 compute-1 nova_compute[226101]: 2025-12-06 07:23:42.949 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] During sync_power_state the instance has a pending task (spawning). Skip.
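
The "DB power_state: 0, VM power_state: 3" pair reads against Nova's power-state constants; because the instance still has a spawning task, the sync is skipped rather than forcing a stop. A sketch of the constants involved (values match nova/compute/power_state.py; the name map is abbreviated):

    # Sketch only: constants as defined in Nova's power_state module.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    STATE_MAP = {NOSTATE: 'pending', RUNNING: 'running', PAUSED: 'paused',
                 SHUTDOWN: 'shutdown', CRASHED: 'crashed',
                 SUSPENDED: 'suspended'}

    # The log's transition: the DB says 0 (pending), libvirt reports 3 (paused).
    print(STATE_MAP[0], '->', STATE_MAP[3])
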
Dec 06 07:23:43 compute-1 podman[261832]: 2025-12-06 07:23:42.940890063 +0000 UTC m=+0.022069589 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:23:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:43.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:43.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:44 compute-1 podman[261832]: 2025-12-06 07:23:44.255984596 +0000 UTC m=+1.337164102 container create 1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 07:23:44 compute-1 systemd[1]: Started libpod-conmon-1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad.scope.
Dec 06 07:23:44 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:23:44 compute-1 ceph-mon[81689]: pgmap v1982: 305 pgs: 305 active+clean; 291 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.9 MiB/s wr, 193 op/s
Dec 06 07:23:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54fdf20b29d5a37a12440a35ed6e393b9f78753b9ba87918af4c788d6684879/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.630 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.633 226109 DEBUG nova.compute.manager [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Received event network-vif-plugged-f9d7d351-d081-4727-a686-93ca17bf53b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.633 226109 DEBUG oslo_concurrency.lockutils [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.634 226109 DEBUG oslo_concurrency.lockutils [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:44 compute-1 podman[261832]: 2025-12-06 07:23:44.635816839 +0000 UTC m=+1.716996345 container init 1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.636 226109 DEBUG oslo_concurrency.lockutils [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.636 226109 DEBUG nova.compute.manager [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Processing event network-vif-plugged-f9d7d351-d081-4727-a686-93ca17bf53b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.636 226109 DEBUG nova.compute.manager [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Received event network-vif-plugged-f9d7d351-d081-4727-a686-93ca17bf53b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.636 226109 DEBUG oslo_concurrency.lockutils [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.637 226109 DEBUG oslo_concurrency.lockutils [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.637 226109 DEBUG oslo_concurrency.lockutils [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.637 226109 DEBUG nova.compute.manager [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] No waiting events found dispatching network-vif-plugged-f9d7d351-d081-4727-a686-93ca17bf53b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.637 226109 WARNING nova.compute.manager [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Received unexpected event network-vif-plugged-f9d7d351-d081-4727-a686-93ca17bf53b9 for instance with vm_state building and task_state spawning.
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.638 226109 DEBUG nova.compute.manager [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
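
The sequence above is Nova's externally-triggered event wait: the build thread registers interest in network-vif-plugged before plugging the VIF, and Neutron's notification pops it (here the pop raced ahead of registration, hence the "unexpected event" warning followed by a successful 1-second wait). A stripped-down sketch of the pattern, with plain threading standing in for Nova's eventlet machinery:

    # Sketch only: illustrates the register/pop/wait pattern, not Nova code.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}   # (name, tag) -> threading.Event
            self._lock = threading.Lock()

        def prepare(self, name, tag):
            ev = threading.Event()
            with self._lock:
                self._events[(name, tag)] = ev
            return ev

        def pop(self, name, tag):
            with self._lock:
                ev = self._events.pop((name, tag), None)
            if ev is None:
                return False    # "No waiting events found dispatching ..."
            ev.set()            # wakes the waiter
            return True

    events = InstanceEvents()
    waiter = events.prepare('network-vif-plugged',
                            'f9d7d351-d081-4727-a686-93ca17bf53b9')
    events.pop('network-vif-plugged',
               'f9d7d351-d081-4727-a686-93ca17bf53b9')  # Neutron callback side
    waiter.wait(timeout=300)    # build thread resumes once the event lands
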
Dec 06 07:23:44 compute-1 podman[261832]: 2025-12-06 07:23:44.64139343 +0000 UTC m=+1.722572936 container start 1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.646 226109 DEBUG nova.objects.instance [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'migration_context' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.648 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005824.6420884, fa33eb8b-971a-4974-9b2e-8a6b85faa5dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.648 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] VM Resumed (Lifecycle Event)
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.650 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.653 226109 INFO nova.virt.libvirt.driver [-] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Instance spawned successfully.
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.654 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:23:44 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[261861]: [NOTICE]   (261869) : New worker (261871) forked
Dec 06 07:23:44 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[261861]: [NOTICE]   (261869) : Loading success.
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.674 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.675 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Ensure instance console log exists: /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.675 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.675 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.676 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.677 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.682 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.685 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.685 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.686 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.686 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.686 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.687 226109 DEBUG nova.virt.libvirt.driver [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.715 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.754 226109 INFO nova.compute.manager [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Took 14.41 seconds to spawn the instance on the hypervisor.
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.755 226109 DEBUG nova.compute.manager [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.835 226109 INFO nova.compute.manager [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Took 15.37 seconds to build instance.
Dec 06 07:23:44 compute-1 nova_compute[226101]: 2025-12-06 07:23:44.859 226109 DEBUG oslo_concurrency.lockutils [None req-39c829d2-bbbf-4c76-8bf6-ca752b9b061c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
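
The "held 15.492s" figure brackets the whole build: _locked_do_build_and_run_instance runs under a per-instance lock so concurrent builds, stops, and reboots of the same instance serialize. The decorator shape, assuming oslo.concurrency (the lock name is the instance UUID):

    # Sketch only: the per-instance serialization pattern from the log.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('fa33eb8b-971a-4974-9b2e-8a6b85faa5dc')
    def do_build_and_run_instance():
        # Everything here runs with the instance lock held; the log's
        # "waited 0.000s" / "held 15.492s" pair measures this region.
        pass

    do_build_and_run_instance()
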
Dec 06 07:23:45 compute-1 nova_compute[226101]: 2025-12-06 07:23:45.007 226109 DEBUG nova.network.neutron [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Successfully updated port: f1259ceb-8bfa-4339-914f-ce900f9a2cf0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:23:45 compute-1 nova_compute[226101]: 2025-12-06 07:23:45.025 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "refresh_cache-87a430a2-0883-4e5d-a1ac-e8d75c834ac7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:23:45 compute-1 nova_compute[226101]: 2025-12-06 07:23:45.026 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquired lock "refresh_cache-87a430a2-0883-4e5d-a1ac-e8d75c834ac7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:23:45 compute-1 nova_compute[226101]: 2025-12-06 07:23:45.026 226109 DEBUG nova.network.neutron [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:23:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:45.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:45.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:45 compute-1 nova_compute[226101]: 2025-12-06 07:23:45.150 226109 DEBUG nova.compute.manager [req-8d6dd8ed-a2dd-4591-8a9b-14962417fb97 req-bfba87fb-f333-41cd-8294-0ceb345116d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received event network-changed-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:45 compute-1 nova_compute[226101]: 2025-12-06 07:23:45.150 226109 DEBUG nova.compute.manager [req-8d6dd8ed-a2dd-4591-8a9b-14962417fb97 req-bfba87fb-f333-41cd-8294-0ceb345116d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Refreshing instance network info cache due to event network-changed-f1259ceb-8bfa-4339-914f-ce900f9a2cf0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:23:45 compute-1 nova_compute[226101]: 2025-12-06 07:23:45.151 226109 DEBUG oslo_concurrency.lockutils [req-8d6dd8ed-a2dd-4591-8a9b-14962417fb97 req-bfba87fb-f333-41cd-8294-0ceb345116d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-87a430a2-0883-4e5d-a1ac-e8d75c834ac7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:23:45 compute-1 nova_compute[226101]: 2025-12-06 07:23:45.207 226109 DEBUG nova.network.neutron [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:23:45 compute-1 nova_compute[226101]: 2025-12-06 07:23:45.282 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.316 226109 DEBUG oslo_concurrency.lockutils [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.317 226109 DEBUG oslo_concurrency.lockutils [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.317 226109 DEBUG nova.compute.manager [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.319 226109 DEBUG nova.compute.manager [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
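[annotation] The numeric states in the line above follow nova.compute.power_state, where 1 means RUNNING, so the database and the hypervisor agree the guest is up before the clean shutdown starts. A tiny sketch of that mapping (subset only, per nova.compute.power_state; values not appearing in this log are omitted):

    # Hedged subset of nova.compute.power_state's numeric states; only
    # 1 == "running" is asserted by the log line above.
    POWER_STATE = {0: "pending", 1: "running", 3: "paused", 4: "shutdown"}
    print(POWER_STATE[1])  # "running": guest is up before do_stop_instance acts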
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.320 226109 DEBUG nova.objects.instance [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'flavor' on Instance uuid fa33eb8b-971a-4974-9b2e-8a6b85faa5dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.322 226109 DEBUG nova.network.neutron [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Updating instance_info_cache with network_info: [{"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.346 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Releasing lock "refresh_cache-87a430a2-0883-4e5d-a1ac-e8d75c834ac7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.346 226109 DEBUG nova.compute.manager [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance network_info: |[{"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.347 226109 DEBUG oslo_concurrency.lockutils [req-8d6dd8ed-a2dd-4591-8a9b-14962417fb97 req-bfba87fb-f333-41cd-8294-0ceb345116d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-87a430a2-0883-4e5d-a1ac-e8d75c834ac7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.347 226109 DEBUG nova.network.neutron [req-8d6dd8ed-a2dd-4591-8a9b-14962417fb97 req-bfba87fb-f333-41cd-8294-0ceb345116d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Refreshing network info cache for port f1259ceb-8bfa-4339-914f-ce900f9a2cf0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.350 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Start _get_guest_xml network_info=[{"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.352 226109 DEBUG nova.virt.libvirt.driver [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.354 226109 WARNING nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.363 226109 DEBUG nova.virt.libvirt.host [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.363 226109 DEBUG nova.virt.libvirt.host [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.367 226109 DEBUG nova.virt.libvirt.host [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.368 226109 DEBUG nova.virt.libvirt.host [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.369 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.369 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.369 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.370 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.370 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.370 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.370 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.370 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.371 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.371 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.371 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.371 226109 DEBUG nova.virt.hardware [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
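[annotation] The nova.virt.hardware lines above walk from unset constraints (0:0:0 means "no preference", with 65536 as the ceiling) to the single viable 1:1:1 topology for one vCPU. A minimal sketch of that enumeration under the visible rule sockets*cores*threads == vcpus; this is not Nova's actual code, just the logic the DEBUG lines describe:

    # Hedged sketch: enumerate candidate CPU topologies for a vCPU count,
    # mirroring the "Build topologies ... Got 1 possible topologies" lines.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log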
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.374 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:23:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1704251606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.808 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
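[annotation] For the RBD backend, nova-compute shells out to the ceph CLI to discover monitor addresses; the exact command and its 0.434s runtime are logged above. A sketch re-running that command and parsing the monitor list, assuming this host's ceph CLI, "openstack" keyring, and /etc/ceph/ceph.conf:

    # Hedged sketch: the command is copied verbatim from the log; the
    # "mons" key is standard in `ceph mon dump --format=json` output.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    mons = json.loads(out)["mons"]
    print([m["name"] for m in mons])  # the monitors listed as <host> entries below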
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.844 226109 DEBUG nova.storage.rbd_utils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:46 compute-1 nova_compute[226101]: 2025-12-06 07:23:46.850 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:47.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:47.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:47 compute-1 ceph-mon[81689]: pgmap v1983: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.6 MiB/s wr, 215 op/s
Dec 06 07:23:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1704251606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:23:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2919027265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.445 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.447 226109 DEBUG nova.virt.libvirt.vif [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1613212427',display_name='tempest-tempest.common.compute-instance-1613212427',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1613212427',id=101,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-5ax3ju2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:23:41Z,user_data=None,user_id='baddb65c90da47a58d026b0db966f6c8',uuid=87a430a2-0883-4e5d-a1ac-e8d75c834ac7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.447 226109 DEBUG nova.network.os_vif_util [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.448 226109 DEBUG nova.network.os_vif_util [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.449 226109 DEBUG nova.objects.instance [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'pci_devices' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.482 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:23:47 compute-1 nova_compute[226101]:   <uuid>87a430a2-0883-4e5d-a1ac-e8d75c834ac7</uuid>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   <name>instance-00000065</name>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <nova:name>tempest-tempest.common.compute-instance-1613212427</nova:name>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:23:46</nova:creationTime>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <nova:user uuid="baddb65c90da47a58d026b0db966f6c8">tempest-ServerActionsTestOtherA-1949739102-project-member</nova:user>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <nova:project uuid="001e2256cb8b430d93c1ff613010d199">tempest-ServerActionsTestOtherA-1949739102</nova:project>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <nova:port uuid="f1259ceb-8bfa-4339-914f-ce900f9a2cf0">
Dec 06 07:23:47 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <system>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <entry name="serial">87a430a2-0883-4e5d-a1ac-e8d75c834ac7</entry>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <entry name="uuid">87a430a2-0883-4e5d-a1ac-e8d75c834ac7</entry>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     </system>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   <os>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   </os>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   <features>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   </features>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk">
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       </source>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config">
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       </source>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:23:47 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:a9:d9:9e"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <target dev="tapf1259ceb-8b"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/console.log" append="off"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <video>
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     </video>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:23:47 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:23:47 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:23:47 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:23:47 compute-1 nova_compute[226101]: </domain>
Dec 06 07:23:47 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
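[annotation] The domain XML above is what Nova hands to libvirt for this guest. A minimal sketch of defining and booting such a domain through libvirt-python directly; Nova itself goes through its Guest/Host wrappers rather than calling this API inline, and "domain.xml" is a hypothetical file holding the XML above:

    # Hedged sketch of the underlying libvirt calls; not Nova's code path.
    import libvirt

    with open("domain.xml") as f:      # hypothetical copy of the XML above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)      # persistent definition of the domain
        dom.create()                   # boot it, like the spawn step above
    finally:
        conn.close()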
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.482 226109 DEBUG nova.compute.manager [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Preparing to wait for external event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.482 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.483 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.483 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.484 226109 DEBUG nova.virt.libvirt.vif [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1613212427',display_name='tempest-tempest.common.compute-instance-1613212427',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1613212427',id=101,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-5ax3ju2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:23:41Z,user_data=None,user_id='baddb65c90da47a58d026b0db966f6c8',uuid=87a430a2-0883-4e5d-a1ac-e8d75c834ac7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.484 226109 DEBUG nova.network.os_vif_util [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.484 226109 DEBUG nova.network.os_vif_util [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.485 226109 DEBUG os_vif [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.485 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.486 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.486 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.489 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.489 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1259ceb-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.489 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1259ceb-8b, col_values=(('external_ids', {'iface-id': 'f1259ceb-8bfa-4339-914f-ce900f9a2cf0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:d9:9e', 'vm-uuid': '87a430a2-0883-4e5d-a1ac-e8d75c834ac7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:47 compute-1 NetworkManager[49031]: <info>  [1765005827.4918] manager: (tapf1259ceb-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.492 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.497 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:47 compute-1 nova_compute[226101]: 2025-12-06 07:23:47.498 226109 INFO os_vif [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b')
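[annotation] The ovsdbapp transaction above (AddPortCommand plus DbSetCommand on the Interface's external_ids) is what os-vif performs over the OVSDB protocol. A sketch of the equivalent single ovs-vsctl invocation, with every value copied from the logged transaction:

    # Hedged CLI equivalent of the logged ovsdbapp transaction; os-vif does
    # not shell out like this, but the effect on the OVSDB is the same.
    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tapf1259ceb-8b",
         "--", "set", "Interface", "tapf1259ceb-8b",
         "external_ids:iface-id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:a9:d9:9e",
         "external_ids:vm-uuid=87a430a2-0883-4e5d-a1ac-e8d75c834ac7"],
        check=True,
    )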
Dec 06 07:23:48 compute-1 nova_compute[226101]: 2025-12-06 07:23:48.022 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:23:48 compute-1 nova_compute[226101]: 2025-12-06 07:23:48.023 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:23:48 compute-1 nova_compute[226101]: 2025-12-06 07:23:48.023 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No VIF found with MAC fa:16:3e:a9:d9:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:23:48 compute-1 nova_compute[226101]: 2025-12-06 07:23:48.023 226109 INFO nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Using config drive
Dec 06 07:23:48 compute-1 nova_compute[226101]: 2025-12-06 07:23:48.053 226109 DEBUG nova.storage.rbd_utils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:49.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:49.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2919027265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:49 compute-1 ceph-mon[81689]: pgmap v1984: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.7 MiB/s wr, 160 op/s
Dec 06 07:23:49 compute-1 nova_compute[226101]: 2025-12-06 07:23:49.221 226109 INFO nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Creating config drive at /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config
Dec 06 07:23:49 compute-1 nova_compute[226101]: 2025-12-06 07:23:49.226 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfs87c7j1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:49 compute-1 nova_compute[226101]: 2025-12-06 07:23:49.310 226109 DEBUG nova.network.neutron [req-8d6dd8ed-a2dd-4591-8a9b-14962417fb97 req-bfba87fb-f333-41cd-8294-0ceb345116d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Updated VIF entry in instance network info cache for port f1259ceb-8bfa-4339-914f-ce900f9a2cf0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:23:49 compute-1 nova_compute[226101]: 2025-12-06 07:23:49.311 226109 DEBUG nova.network.neutron [req-8d6dd8ed-a2dd-4591-8a9b-14962417fb97 req-bfba87fb-f333-41cd-8294-0ceb345116d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Updating instance_info_cache with network_info: [{"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:49 compute-1 nova_compute[226101]: 2025-12-06 07:23:49.334 226109 DEBUG oslo_concurrency.lockutils [req-8d6dd8ed-a2dd-4591-8a9b-14962417fb97 req-bfba87fb-f333-41cd-8294-0ceb345116d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-87a430a2-0883-4e5d-a1ac-e8d75c834ac7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
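The instance_info_cache update above carries the full Neutron VIF payload as a single JSON list. A minimal sketch of pulling out the fields that matter when debugging a port binding (it assumes the logged list has been saved to a hypothetical net_info.json; all key names are exactly those visible in the log line):

    import json

    # net_info.json is a stand-in for the logged network_info list.
    with open("net_info.json") as f:
        vifs = json.load(f)

    for vif in vifs:
        subnet = vif["network"]["subnets"][0]
        print(vif["id"], vif["address"], vif["devname"])
        print("  bridge:", vif["details"]["bridge_name"],
              "mtu:", vif["network"]["meta"]["mtu"])
        print("  fixed ip:", subnet["ips"][0]["address"],
              "gw:", subnet["gateway"]["address"])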
Dec 06 07:23:49 compute-1 nova_compute[226101]: 2025-12-06 07:23:49.368 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfs87c7j1" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:49 compute-1 nova_compute[226101]: 2025-12-06 07:23:49.542 226109 DEBUG nova.storage.rbd_utils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:49 compute-1 nova_compute[226101]: 2025-12-06 07:23:49.545 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:50 compute-1 nova_compute[226101]: 2025-12-06 07:23:50.285 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:23:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:51.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:23:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:51.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.197 226109 DEBUG oslo_concurrency.processutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.198 226109 INFO nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Deleting local config drive /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config because it was imported into RBD.
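The lines above trace the whole config-drive round trip: Nova builds an ISO9660 image locally with mkisofs, imports it into the Ceph vms pool with rbd import, then deletes the local copy. A standalone sketch of the same two steps, with the UUID, flags, and paths copied from the logged commands (the metadata tree is stubbed with an empty temp dir, and root plus the client.openstack Ceph credentials are assumed):

    import os
    import subprocess
    import tempfile

    instance = "87a430a2-0883-4e5d-a1ac-e8d75c834ac7"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"
    tree = tempfile.mkdtemp()  # stand-in for the config-drive content tree

    # Step 1: build the ISO exactly as processutils logged it.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", tree],
        check=True)

    # Step 2: import into RBD, then drop the local file (what the
    # "Deleting local config drive" INFO line records).
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         f"{instance}_disk.config", "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove(iso)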
Dec 06 07:23:51 compute-1 kernel: tapf1259ceb-8b: entered promiscuous mode
Dec 06 07:23:51 compute-1 NetworkManager[49031]: <info>  [1765005831.2552] manager: (tapf1259ceb-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.255 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-1 ovn_controller[130279]: 2025-12-06T07:23:51Z|00394|binding|INFO|Claiming lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 for this chassis.
Dec 06 07:23:51 compute-1 ovn_controller[130279]: 2025-12-06T07:23:51Z|00395|binding|INFO|f1259ceb-8bfa-4339-914f-ce900f9a2cf0: Claiming fa:16:3e:a9:d9:9e 10.100.0.11
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.263 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.272 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-1 NetworkManager[49031]: <info>  [1765005831.2753] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.278 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:d9:9e 10.100.0.11'], port_security=['fa:16:3e:a9:d9:9e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '87a430a2-0883-4e5d-a1ac-e8d75c834ac7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f2ed01ce-ee24-45dc-b59f-29fb74c119b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f1259ceb-8bfa-4339-914f-ce900f9a2cf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.279 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f1259ceb-8bfa-4339-914f-ce900f9a2cf0 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e bound to our chassis
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.281 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:23:51 compute-1 NetworkManager[49031]: <info>  [1765005831.2821] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Dec 06 07:23:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.293 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bafe91f1-3217-4ad2-8929-c10b20a1d3f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.294 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6209aab-d1 in ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:23:51 compute-1 systemd-udevd[262016]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.296 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6209aab-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.296 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bb89ff72-c04c-4fc8-b5ff-51250440fd7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.297 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[aa69442c-2ba5-44ed-b4eb-169fbef75698]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 ceph-mon[81689]: pgmap v1985: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.7 MiB/s wr, 231 op/s
Dec 06 07:23:51 compute-1 systemd-machined[190302]: New machine qemu-46-instance-00000065.
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.312 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[5eaf2288-d61f-4181-93ec-efd56c3d77a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 systemd[1]: Started Virtual Machine qemu-46-instance-00000065.
Dec 06 07:23:51 compute-1 NetworkManager[49031]: <info>  [1765005831.3169] device (tapf1259ceb-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:23:51 compute-1 NetworkManager[49031]: <info>  [1765005831.3179] device (tapf1259ceb-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.343 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a124d09a-139f-43cf-b6e9-3024f2d671c4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.376 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[dddad18c-fd7f-41d3-bf53-af0b5c1818da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 NetworkManager[49031]: <info>  [1765005831.3849] manager: (tapf6209aab-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/185)
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.384 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[810fb2ef-1632-4304-8cba-49b692a2bcb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.419 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[64727678-7321-4128-a7fd-2122e0aea444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.422 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c3daa709-9c00-4e5e-a664-561552f2b748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 NetworkManager[49031]: <info>  [1765005831.4450] device (tapf6209aab-d0): carrier: link connected
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.451 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[617fa54e-b750-4b72-80c7-f18a757dbfc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.458 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.468 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2e123bfc-1eda-4e0b-90c8-273fd5841513]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611094, 'reachable_time': 17161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262049, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 ovn_controller[130279]: 2025-12-06T07:23:51Z|00396|binding|INFO|Releasing lport 41b1b168-8e0e-4991-9750-9b31221f4863 from this chassis (sb_readonly=0)
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.484 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8040f8ca-242e-4eed-99b0-3f92c8b06d43]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:c5a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611094, 'tstamp': 611094}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262050, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.493 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.500 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[80a5949f-456a-4d99-965c-972af36cfa4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611094, 'reachable_time': 17161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262051, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
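The two oversized privsep replies above are pyroute2 RTM_NEWLINK dumps fetched inside the ovnmeta namespace (note the 'target' field in each message header). Roughly the same query, issued directly with pyroute2, as a sketch (it assumes pyroute2 is installed and the namespace still exists; names are copied from the log):

    from pyroute2 import NetNS

    with NetNS("ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e") as ns:
        # One RTM_NEWLINK message per matching interface, carrying the same
        # attrs seen in the log (IFLA_IFNAME, IFLA_ADDRESS, IFLA_STATS64...).
        for msg in ns.link("get", ifname="tapf6209aab-d1"):
            stats = msg.get_attr("IFLA_STATS64")
            print(msg.get_attr("IFLA_IFNAME"),
                  msg.get_attr("IFLA_ADDRESS"),
                  msg["state"],
                  "rx/tx bytes:", stats["rx_bytes"], "/", stats["tx_bytes"])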
Dec 06 07:23:51 compute-1 ovn_controller[130279]: 2025-12-06T07:23:51Z|00397|binding|INFO|Setting lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 ovn-installed in OVS
Dec 06 07:23:51 compute-1 ovn_controller[130279]: 2025-12-06T07:23:51Z|00398|binding|INFO|Setting lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 up in Southbound
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.507 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.527 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[25c21313-c731-4551-b928-c6ccd7c1dc82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.579 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0833e9cf-3bc1-4ad7-ad84-11a456e74e2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.581 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.581 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.581 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6209aab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.583 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-1 NetworkManager[49031]: <info>  [1765005831.5841] manager: (tapf6209aab-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Dec 06 07:23:51 compute-1 kernel: tapf6209aab-d0: entered promiscuous mode
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.585 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.590 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6209aab-d0, col_values=(('external_ids', {'iface-id': '1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.591 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.592 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-1 ovn_controller[130279]: 2025-12-06T07:23:51Z|00399|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.594 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.595 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a740911b-452e-4146-bc62-f7eab821e92e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.596 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:23:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:23:51.596 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'env', 'PROCESS_TAG=haproxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6209aab-d53f-4d58-9b94-ffb7adc6239e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
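The haproxy_cfg dump above is the whole metadata proxy: bind the link-local 169.254.169.254:80 inside the network namespace, forward to the agent's unix socket at /var/lib/neutron/metadata_proxy, and stamp each request with an X-OVN-Network-ID header so the metadata service can resolve which network the instance sits on. Stripped of rootwrap, the spawn command on the line above reduces to the following sketch (root required; every name is copied from the logged command):

    import subprocess

    net = "f6209aab-d53f-4d58-9b94-ffb7adc6239e"
    subprocess.run(
        ["ip", "netns", "exec", f"ovnmeta-{net}",
         "env", f"PROCESS_TAG=haproxy-{net}",
         "haproxy", "-f",
         f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf"],
        check=True)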
Dec 06 07:23:51 compute-1 nova_compute[226101]: 2025-12-06 07:23:51.608 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:52 compute-1 podman[262080]: 2025-12-06 07:23:52.004034255 +0000 UTC m=+0.073854613 container create ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:23:52 compute-1 systemd[1]: Started libpod-conmon-ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a.scope.
Dec 06 07:23:52 compute-1 podman[262080]: 2025-12-06 07:23:51.956961829 +0000 UTC m=+0.026782207 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:23:52 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:23:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665ce9512c2463022d592a372693ba3ecd9626686ac83c73c483f478a560d058/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:52 compute-1 podman[262080]: 2025-12-06 07:23:52.097597893 +0000 UTC m=+0.167418271 container init ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:23:52 compute-1 podman[262080]: 2025-12-06 07:23:52.1040912 +0000 UTC m=+0.173911558 container start ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:23:52 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[262095]: [NOTICE]   (262099) : New worker (262101) forked
Dec 06 07:23:52 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[262095]: [NOTICE]   (262099) : Loading success.
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.226 226109 DEBUG nova.compute.manager [req-92684c62-e6ee-4a93-9765-bebbe981603b req-bab3ba70-0a84-4fee-86ea-656d2253f4f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.226 226109 DEBUG oslo_concurrency.lockutils [req-92684c62-e6ee-4a93-9765-bebbe981603b req-bab3ba70-0a84-4fee-86ea-656d2253f4f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.226 226109 DEBUG oslo_concurrency.lockutils [req-92684c62-e6ee-4a93-9765-bebbe981603b req-bab3ba70-0a84-4fee-86ea-656d2253f4f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.227 226109 DEBUG oslo_concurrency.lockutils [req-92684c62-e6ee-4a93-9765-bebbe981603b req-bab3ba70-0a84-4fee-86ea-656d2253f4f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.227 226109 DEBUG nova.compute.manager [req-92684c62-e6ee-4a93-9765-bebbe981603b req-bab3ba70-0a84-4fee-86ea-656d2253f4f4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Processing event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.393 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005832.392609, 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.393 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] VM Started (Lifecycle Event)
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.396 226109 DEBUG nova.compute.manager [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.399 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.404 226109 INFO nova.virt.libvirt.driver [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance spawned successfully.
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.405 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.429 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.432 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.443 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.443 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.444 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.444 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.445 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.445 226109 DEBUG nova.virt.libvirt.driver [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.467 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.467 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005832.3927517, 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.467 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] VM Paused (Lifecycle Event)
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.492 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.493 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.496 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005832.3983376, 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.496 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] VM Resumed (Lifecycle Event)
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.513 226109 INFO nova.compute.manager [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Took 10.96 seconds to spawn the instance on the hypervisor.
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.513 226109 DEBUG nova.compute.manager [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.516 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.521 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.548 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.585 226109 INFO nova.compute.manager [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Took 12.33 seconds to build instance.
Dec 06 07:23:52 compute-1 nova_compute[226101]: 2025-12-06 07:23:52.610 226109 DEBUG oslo_concurrency.lockutils [None req-5bae0798-5ea9-41e0-a3b7-c913276bfcad baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:53.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:53.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:53 compute-1 ceph-mon[81689]: pgmap v1986: 305 pgs: 305 active+clean; 307 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.3 MiB/s wr, 230 op/s
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.337 226109 DEBUG nova.compute.manager [req-d437e8d0-a62f-4bb1-b9b6-599bf4ba475e req-ce8c323b-baed-49ca-b676-ef43382253cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.338 226109 DEBUG oslo_concurrency.lockutils [req-d437e8d0-a62f-4bb1-b9b6-599bf4ba475e req-ce8c323b-baed-49ca-b676-ef43382253cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.339 226109 DEBUG oslo_concurrency.lockutils [req-d437e8d0-a62f-4bb1-b9b6-599bf4ba475e req-ce8c323b-baed-49ca-b676-ef43382253cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.339 226109 DEBUG oslo_concurrency.lockutils [req-d437e8d0-a62f-4bb1-b9b6-599bf4ba475e req-ce8c323b-baed-49ca-b676-ef43382253cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.339 226109 DEBUG nova.compute.manager [req-d437e8d0-a62f-4bb1-b9b6-599bf4ba475e req-ce8c323b-baed-49ca-b676-ef43382253cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] No waiting events found dispatching network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.340 226109 WARNING nova.compute.manager [req-d437e8d0-a62f-4bb1-b9b6-599bf4ba475e req-ce8c323b-baed-49ca-b676-ef43382253cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received unexpected event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 for instance with vm_state active and task_state None.
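The WARNING above is the benign tail of Nova's external-event plumbing: the first network-vif-plugged event satisfied a registered waiter during spawn, while this second copy arrives after the instance is already active, finds no waiter, and is logged as unexpected. An illustrative pop-or-warn sketch of that pattern (not Nova's actual code, just the shape the lock/pop lines trace):

    import threading

    class InstanceEvents:
        """Waiters register an event name; arrivals wake them or warn."""

        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # event name -> threading.Event

        def prepare(self, name):
            with self._lock:
                ev = self._waiters[name] = threading.Event()
            return ev  # caller blocks on ev.wait(timeout=...)

        def pop(self, name):
            with self._lock:
                ev = self._waiters.pop(name, None)
            if ev is None:
                print(f"Received unexpected event {name}")  # WARNING path
            else:
                ev.set()  # "Instance event wait completed" path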
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.547 226109 DEBUG oslo_concurrency.lockutils [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.548 226109 DEBUG oslo_concurrency.lockutils [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.548 226109 DEBUG nova.compute.manager [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.551 226109 DEBUG nova.compute.manager [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.552 226109 DEBUG nova.objects.instance [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'flavor' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:54 compute-1 nova_compute[226101]: 2025-12-06 07:23:54.575 226109 DEBUG nova.virt.libvirt.driver [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:23:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:55.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.005000132s ======
Dec 06 07:23:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:55.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000132s
Dec 06 07:23:55 compute-1 nova_compute[226101]: 2025-12-06 07:23:55.288 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:56 compute-1 ceph-mon[81689]: pgmap v1987: 305 pgs: 305 active+clean; 295 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.4 MiB/s wr, 259 op/s
Dec 06 07:23:56 compute-1 nova_compute[226101]: 2025-12-06 07:23:56.608 226109 DEBUG nova.virt.libvirt.driver [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:23:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:57.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:57.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:57 compute-1 nova_compute[226101]: 2025-12-06 07:23:57.495 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:58 compute-1 ceph-mon[81689]: pgmap v1988: 305 pgs: 305 active+clean; 289 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.6 MiB/s wr, 273 op/s
Dec 06 07:23:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:59.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:23:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:23:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:59.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:23:59 compute-1 nova_compute[226101]: 2025-12-06 07:23:59.627 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:59 compute-1 nova_compute[226101]: 2025-12-06 07:23:59.649 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:59 compute-1 nova_compute[226101]: 2025-12-06 07:23:59.649 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:59 compute-1 nova_compute[226101]: 2025-12-06 07:23:59.649 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:59 compute-1 nova_compute[226101]: 2025-12-06 07:23:59.649 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:23:59 compute-1 nova_compute[226101]: 2025-12-06 07:23:59.650 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:59 compute-1 ceph-mon[81689]: pgmap v1989: 305 pgs: 305 active+clean; 289 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Dec 06 07:24:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:24:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1495295474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.131 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:00 compute-1 podman[262172]: 2025-12-06 07:24:00.156952878 +0000 UTC m=+0.136202316 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.200 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.201 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.204 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.205 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.289 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.364 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.366 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4199MB free_disk=20.855472564697266GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.366 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.366 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.506 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance fa33eb8b-971a-4974-9b2e-8a6b85faa5dc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.507 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.507 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.507 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:24:00 compute-1 nova_compute[226101]: 2025-12-06 07:24:00.595 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:01.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2136141956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:01 compute-1 ceph-mon[81689]: pgmap v1990: 305 pgs: 305 active+clean; 293 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.1 MiB/s wr, 266 op/s
Dec 06 07:24:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1495295474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:01.641 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:01.642 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:01.643 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:24:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2011446636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:02 compute-1 nova_compute[226101]: 2025-12-06 07:24:02.195 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:02 compute-1 nova_compute[226101]: 2025-12-06 07:24:02.201 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:24:02 compute-1 nova_compute[226101]: 2025-12-06 07:24:02.216 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:24:02 compute-1 nova_compute[226101]: 2025-12-06 07:24:02.234 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:24:02 compute-1 nova_compute[226101]: 2025-12-06 07:24:02.235 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3617914429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:02 compute-1 nova_compute[226101]: 2025-12-06 07:24:02.498 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:03 compute-1 podman[262225]: 2025-12-06 07:24:03.072672629 +0000 UTC m=+0.050335017 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:24:03 compute-1 podman[262224]: 2025-12-06 07:24:03.071752293 +0000 UTC m=+0.053392439 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 06 07:24:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:03.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:03 compute-1 nova_compute[226101]: 2025-12-06 07:24:03.197 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:03 compute-1 nova_compute[226101]: 2025-12-06 07:24:03.198 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:24:03 compute-1 nova_compute[226101]: 2025-12-06 07:24:03.198 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:24:03 compute-1 ceph-mon[81689]: pgmap v1991: 305 pgs: 305 active+clean; 309 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 213 op/s
Dec 06 07:24:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1586755400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2011446636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:03 compute-1 nova_compute[226101]: 2025-12-06 07:24:03.216 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:24:03 compute-1 nova_compute[226101]: 2025-12-06 07:24:03.216 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:24:03 compute-1 nova_compute[226101]: 2025-12-06 07:24:03.216 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:24:03 compute-1 nova_compute[226101]: 2025-12-06 07:24:03.216 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid fa33eb8b-971a-4974-9b2e-8a6b85faa5dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:03 compute-1 ovn_controller[130279]: 2025-12-06T07:24:03Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:ec:0c 10.100.0.8
Dec 06 07:24:03 compute-1 ovn_controller[130279]: 2025-12-06T07:24:03Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:ec:0c 10.100.0.8
Dec 06 07:24:04 compute-1 ceph-mon[81689]: pgmap v1992: 305 pgs: 305 active+clean; 272 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 216 op/s
Dec 06 07:24:04 compute-1 nova_compute[226101]: 2025-12-06 07:24:04.639 226109 DEBUG nova.virt.libvirt.driver [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:24:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:05.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:05 compute-1 ovn_controller[130279]: 2025-12-06T07:24:05Z|00400|binding|INFO|Releasing lport 41b1b168-8e0e-4991-9750-9b31221f4863 from this chassis (sb_readonly=0)
Dec 06 07:24:05 compute-1 ovn_controller[130279]: 2025-12-06T07:24:05Z|00401|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:24:05 compute-1 nova_compute[226101]: 2025-12-06 07:24:05.168 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:05.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:05 compute-1 nova_compute[226101]: 2025-12-06 07:24:05.292 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:05 compute-1 nova_compute[226101]: 2025-12-06 07:24:05.471 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Updating instance_info_cache with network_info: [{"id": "f9d7d351-d081-4727-a686-93ca17bf53b9", "address": "fa:16:3e:d8:ec:0c", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d7d351-d0", "ovs_interfaceid": "f9d7d351-d081-4727-a686-93ca17bf53b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:24:05 compute-1 nova_compute[226101]: 2025-12-06 07:24:05.495 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:24:05 compute-1 nova_compute[226101]: 2025-12-06 07:24:05.496 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:24:05 compute-1 nova_compute[226101]: 2025-12-06 07:24:05.496 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:05 compute-1 nova_compute[226101]: 2025-12-06 07:24:05.497 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:05 compute-1 nova_compute[226101]: 2025-12-06 07:24:05.497 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:05 compute-1 nova_compute[226101]: 2025-12-06 07:24:05.498 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:05 compute-1 nova_compute[226101]: 2025-12-06 07:24:05.498 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:24:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:07.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:07.087 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:07 compute-1 nova_compute[226101]: 2025-12-06 07:24:07.088 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:07.089 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:24:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:07.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:07 compute-1 nova_compute[226101]: 2025-12-06 07:24:07.501 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:07 compute-1 nova_compute[226101]: 2025-12-06 07:24:07.653 226109 DEBUG nova.virt.libvirt.driver [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:24:07 compute-1 ceph-mon[81689]: pgmap v1993: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.0 MiB/s wr, 179 op/s
Dec 06 07:24:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:09.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:09.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:09 compute-1 ovn_controller[130279]: 2025-12-06T07:24:09Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:d9:9e 10.100.0.11
Dec 06 07:24:09 compute-1 ovn_controller[130279]: 2025-12-06T07:24:09Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:d9:9e 10.100.0.11
Dec 06 07:24:09 compute-1 nova_compute[226101]: 2025-12-06 07:24:09.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:09 compute-1 nova_compute[226101]: 2025-12-06 07:24:09.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:09 compute-1 ceph-mon[81689]: pgmap v1994: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 122 op/s
Dec 06 07:24:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/637376290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3540164652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:24:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3540164652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:24:10 compute-1 nova_compute[226101]: 2025-12-06 07:24:10.293 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/806788559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:10 compute-1 ceph-mon[81689]: pgmap v1995: 305 pgs: 305 active+clean; 269 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 157 op/s
Dec 06 07:24:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:11.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:11.092 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:11.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1136767738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1705575113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:12 compute-1 kernel: tapf9d7d351-d0 (unregistering): left promiscuous mode
Dec 06 07:24:12 compute-1 NetworkManager[49031]: <info>  [1765005852.4621] device (tapf9d7d351-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:24:12 compute-1 ovn_controller[130279]: 2025-12-06T07:24:12Z|00402|binding|INFO|Releasing lport f9d7d351-d081-4727-a686-93ca17bf53b9 from this chassis (sb_readonly=0)
Dec 06 07:24:12 compute-1 nova_compute[226101]: 2025-12-06 07:24:12.471 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:12 compute-1 ovn_controller[130279]: 2025-12-06T07:24:12Z|00403|binding|INFO|Setting lport f9d7d351-d081-4727-a686-93ca17bf53b9 down in Southbound
Dec 06 07:24:12 compute-1 ovn_controller[130279]: 2025-12-06T07:24:12Z|00404|binding|INFO|Removing iface tapf9d7d351-d0 ovn-installed in OVS
Dec 06 07:24:12 compute-1 nova_compute[226101]: 2025-12-06 07:24:12.473 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:12 compute-1 nova_compute[226101]: 2025-12-06 07:24:12.485 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:12 compute-1 nova_compute[226101]: 2025-12-06 07:24:12.502 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:12.502 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:ec:0c 10.100.0.8'], port_security=['fa:16:3e:d8:ec:0c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fa33eb8b-971a-4974-9b2e-8a6b85faa5dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '4', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f9d7d351-d081-4727-a686-93ca17bf53b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:12.504 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f9d7d351-d081-4727-a686-93ca17bf53b9 in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 unbound from our chassis
Dec 06 07:24:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:12.506 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:24:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:12.507 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8312b334-d1f1-45a0-bba5-18f77331a9bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:12.508 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace which is not needed anymore
Dec 06 07:24:12 compute-1 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000063.scope: Deactivated successfully.
Dec 06 07:24:12 compute-1 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000063.scope: Consumed 14.789s CPU time.
Dec 06 07:24:12 compute-1 systemd-machined[190302]: Machine qemu-45-instance-00000063 terminated.
Dec 06 07:24:12 compute-1 nova_compute[226101]: 2025-12-06 07:24:12.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:12 compute-1 nova_compute[226101]: 2025-12-06 07:24:12.697 226109 INFO nova.virt.libvirt.driver [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Instance shutdown successfully after 26 seconds.
Dec 06 07:24:12 compute-1 nova_compute[226101]: 2025-12-06 07:24:12.703 226109 INFO nova.virt.libvirt.driver [-] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Instance destroyed successfully.
Dec 06 07:24:12 compute-1 nova_compute[226101]: 2025-12-06 07:24:12.703 226109 DEBUG nova.objects.instance [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'numa_topology' on Instance uuid fa33eb8b-971a-4974-9b2e-8a6b85faa5dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:12 compute-1 nova_compute[226101]: 2025-12-06 07:24:12.731 226109 DEBUG nova.compute.manager [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:12 compute-1 nova_compute[226101]: 2025-12-06 07:24:12.770 226109 DEBUG oslo_concurrency.lockutils [None req-e2a8583a-454c-4720-bb82-14f401677d29 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 26.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:12 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[261861]: [NOTICE]   (261869) : haproxy version is 2.8.14-c23fe91
Dec 06 07:24:12 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[261861]: [NOTICE]   (261869) : path to executable is /usr/sbin/haproxy
Dec 06 07:24:12 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[261861]: [WARNING]  (261869) : Exiting Master process...
Dec 06 07:24:12 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[261861]: [ALERT]    (261869) : Current worker (261871) exited with code 143 (Terminated)
Dec 06 07:24:12 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[261861]: [WARNING]  (261869) : All workers exited. Exiting... (0)
Dec 06 07:24:12 compute-1 systemd[1]: libpod-1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad.scope: Deactivated successfully.
Dec 06 07:24:12 compute-1 podman[262287]: 2025-12-06 07:24:12.981801456 +0000 UTC m=+0.377007777 container died 1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:24:12 compute-1 ceph-mon[81689]: pgmap v1996: 305 pgs: 305 active+clean; 291 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.0 MiB/s wr, 140 op/s
Dec 06 07:24:13 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad-userdata-shm.mount: Deactivated successfully.
Dec 06 07:24:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-b54fdf20b29d5a37a12440a35ed6e393b9f78753b9ba87918af4c788d6684879-merged.mount: Deactivated successfully.
Dec 06 07:24:13 compute-1 podman[262287]: 2025-12-06 07:24:13.047155329 +0000 UTC m=+0.442361650 container cleanup 1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 07:24:13 compute-1 systemd[1]: libpod-conmon-1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad.scope: Deactivated successfully.
Dec 06 07:24:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:13.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:13 compute-1 podman[262328]: 2025-12-06 07:24:13.122443861 +0000 UTC m=+0.055875596 container remove 1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:24:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:13.128 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab8ae46-4c58-4827-8faf-05ef734240a1]: (4, ('Sat Dec  6 07:24:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad)\n1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad\nSat Dec  6 07:24:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad)\n1397de2e84cd0865d818469014dac632533ffa94ba77d96f6b2ad6a3bc1262ad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:13.130 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5f237a-b160-4a7f-b234-febe4b421290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:13.131 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:13 compute-1 nova_compute[226101]: 2025-12-06 07:24:13.133 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:13 compute-1 kernel: tap85cfbf28-70: left promiscuous mode
Dec 06 07:24:13 compute-1 nova_compute[226101]: 2025-12-06 07:24:13.151 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:13.153 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f0da6bab-0354-4650-9b94-871328c33ddb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:13.179 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[efc406df-0f98-464c-b81d-fd53273efe40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:13.181 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3b68e302-1723-4580-81b4-06eced3617fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:13.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:13.196 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6ff048-18b4-4819-8c7f-d00e4de096c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610181, 'reachable_time': 25109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262346, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:13.199 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:24:13 compute-1 systemd[1]: run-netns-ovnmeta\x2d85cfbf28\x2d7016\x2d4776\x2d8fc2\x2d2eb08a6b8347.mount: Deactivated successfully.
Dec 06 07:24:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:13.199 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[60870840-d2cd-4fa7-bf8a-45afc69eda74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:13 compute-1 nova_compute[226101]: 2025-12-06 07:24:13.729 226109 DEBUG nova.compute.manager [req-901c72ec-b325-48ee-a99a-38372932515a req-5c664784-2900-4989-9724-312894dbf8a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Received event network-vif-unplugged-f9d7d351-d081-4727-a686-93ca17bf53b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:13 compute-1 nova_compute[226101]: 2025-12-06 07:24:13.730 226109 DEBUG oslo_concurrency.lockutils [req-901c72ec-b325-48ee-a99a-38372932515a req-5c664784-2900-4989-9724-312894dbf8a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:13 compute-1 nova_compute[226101]: 2025-12-06 07:24:13.730 226109 DEBUG oslo_concurrency.lockutils [req-901c72ec-b325-48ee-a99a-38372932515a req-5c664784-2900-4989-9724-312894dbf8a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:13 compute-1 nova_compute[226101]: 2025-12-06 07:24:13.730 226109 DEBUG oslo_concurrency.lockutils [req-901c72ec-b325-48ee-a99a-38372932515a req-5c664784-2900-4989-9724-312894dbf8a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:13 compute-1 nova_compute[226101]: 2025-12-06 07:24:13.731 226109 DEBUG nova.compute.manager [req-901c72ec-b325-48ee-a99a-38372932515a req-5c664784-2900-4989-9724-312894dbf8a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] No waiting events found dispatching network-vif-unplugged-f9d7d351-d081-4727-a686-93ca17bf53b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:24:13 compute-1 nova_compute[226101]: 2025-12-06 07:24:13.731 226109 WARNING nova.compute.manager [req-901c72ec-b325-48ee-a99a-38372932515a req-5c664784-2900-4989-9724-312894dbf8a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Received unexpected event network-vif-unplugged-f9d7d351-d081-4727-a686-93ca17bf53b9 for instance with vm_state stopped and task_state None.
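The lock acquire/release pair and the two lines that follow are nova's per-instance event bookkeeping: pop_instance_event takes a '<uuid>-events' lock, finds no registered waiter for network-vif-unplugged, and the event is logged as unexpected because the instance is already stopped with no task in flight. A rough sketch of the pattern with oslo.concurrency (names and storage are illustrative, not nova's actual code):

    from oslo_concurrency import lockutils

    _events = {}  # instance_uuid -> {event_name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        @lockutils.synchronized(instance_uuid + '-events')
        def _pop_event():
            return _events.get(instance_uuid, {}).pop(event_name, None)

        waiter = _pop_event()
        if waiter is None:
            # Produces the "No waiting events found dispatching ..." DEBUG
            # line and, for an instance whose task_state expects nothing,
            # the "Received unexpected event" WARNING seen above.
            return None
        return waiter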
Dec 06 07:24:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:15.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:15.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
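The paired anonymous "HEAD / HTTP/1.0" 200 entries from 192.168.122.102 and 192.168.122.100, repeating every two seconds with near-zero latency, have the signature of load-balancer health probes against the radosgw beast frontend. An equivalent manual probe (host and port here are assumptions; the log does not show the listening address):

    import http.client

    conn = http.client.HTTPConnection('compute-1', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # a healthy radosgw answers 200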
Dec 06 07:24:15 compute-1 ceph-mon[81689]: pgmap v1997: 305 pgs: 305 active+clean; 305 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 139 op/s
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.294 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.386 226109 DEBUG oslo_concurrency.lockutils [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.387 226109 DEBUG oslo_concurrency.lockutils [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.387 226109 DEBUG oslo_concurrency.lockutils [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.387 226109 DEBUG oslo_concurrency.lockutils [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.387 226109 DEBUG oslo_concurrency.lockutils [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.388 226109 INFO nova.compute.manager [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Terminating instance
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.389 226109 DEBUG nova.compute.manager [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.394 226109 INFO nova.virt.libvirt.driver [-] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Instance destroyed successfully.
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.395 226109 DEBUG nova.objects.instance [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'resources' on Instance uuid fa33eb8b-971a-4974-9b2e-8a6b85faa5dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.409 226109 DEBUG nova.virt.libvirt.vif [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:23:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1064235958',display_name='tempest-DeleteServersTestJSON-server-1064235958',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1064235958',id=99,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:23:44Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-idmsfxu3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:24:12Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=fa33eb8b-971a-4974-9b2e-8a6b85faa5dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f9d7d351-d081-4727-a686-93ca17bf53b9", "address": "fa:16:3e:d8:ec:0c", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d7d351-d0", "ovs_interfaceid": "f9d7d351-d081-4727-a686-93ca17bf53b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.409 226109 DEBUG nova.network.os_vif_util [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "f9d7d351-d081-4727-a686-93ca17bf53b9", "address": "fa:16:3e:d8:ec:0c", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9d7d351-d0", "ovs_interfaceid": "f9d7d351-d081-4727-a686-93ca17bf53b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.410 226109 DEBUG nova.network.os_vif_util [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:ec:0c,bridge_name='br-int',has_traffic_filtering=True,id=f9d7d351-d081-4727-a686-93ca17bf53b9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d7d351-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.410 226109 DEBUG os_vif [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:ec:0c,bridge_name='br-int',has_traffic_filtering=True,id=f9d7d351-d081-4727-a686-93ca17bf53b9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d7d351-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
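The lines above are the unplug path in order: the neutron port dict, its conversion to an os-vif VIFOpenVSwitch object, and the handoff to os_vif.unplug(). In miniature, using only fields that appear in the converted object's repr (a sketch only; the real call also carries the Network and port profile shown above, which the ovs plugin needs):

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()
    vif = vif_obj.VIFOpenVSwitch(
        id='f9d7d351-d081-4727-a686-93ca17bf53b9',
        address='fa:16:3e:d8:ec:0c',
        bridge_name='br-int',
        vif_name='tapf9d7d351-d0',
        plugin='ovs')
    info = instance_info.InstanceInfo(
        uuid='fa33eb8b-971a-4974-9b2e-8a6b85faa5dc',
        name='instance-00000063')  # domain name is an assumption, not in the log
    os_vif.unplug(vif, info)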
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.412 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.412 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf9d7d351-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.413 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.415 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.417 226109 INFO os_vif [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:ec:0c,bridge_name='br-int',has_traffic_filtering=True,id=f9d7d351-d081-4727-a686-93ca17bf53b9,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9d7d351-d0')
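Under the hood the unplug is one OVSDB transaction, logged above as DelPortCommand(port=tapf9d7d351-d0, bridge=br-int, if_exists=True). The same command issued through ovsdbapp directly (a sketch; the ovsdb-server endpoint is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Mirrors the txn logged by do_commit: drop the port if it exists.
    api.del_port('tapf9d7d351-d0', bridge='br-int',
                 if_exists=True).execute(check_error=True)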
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.689 226109 DEBUG nova.virt.libvirt.driver [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
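This line belongs to the other instance (87a430a2...) mid clean-shutdown: the guest still reports power state 1 (running) after 21 seconds, so the driver re-sends the ACPI shutdown and keeps polling. The shape of the loop (illustrative; nova takes the timeout and retry interval from configuration, and `dom` stands in for a libvirt domain handle):

    import time

    RUNNING = 1  # the "state 1" in the log line

    def clean_shutdown(dom, timeout=60, retry_interval=21):
        dom.shutdown()                      # first polite ACPI request
        for waited in range(1, timeout + 1):
            time.sleep(1)
            if dom.state() != RUNNING:
                return True                 # "Instance shutdown successfully"
            if waited % retry_interval == 0:
                dom.shutdown()              # "resending shutdown"
        return False                        # caller escalates to destroy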
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.857 226109 DEBUG nova.compute.manager [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Received event network-vif-plugged-f9d7d351-d081-4727-a686-93ca17bf53b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.857 226109 DEBUG oslo_concurrency.lockutils [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.857 226109 DEBUG oslo_concurrency.lockutils [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.857 226109 DEBUG oslo_concurrency.lockutils [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.858 226109 DEBUG nova.compute.manager [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] No waiting events found dispatching network-vif-plugged-f9d7d351-d081-4727-a686-93ca17bf53b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:24:15 compute-1 nova_compute[226101]: 2025-12-06 07:24:15.858 226109 WARNING nova.compute.manager [req-8c8d90ff-5d91-44e9-ab58-c13f4b6e747b req-149847e4-61d5-4e1b-ae00-b24094314860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Received unexpected event network-vif-plugged-f9d7d351-d081-4727-a686-93ca17bf53b9 for instance with vm_state stopped and task_state deleting.
Dec 06 07:24:16 compute-1 nova_compute[226101]: 2025-12-06 07:24:16.128 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:16 compute-1 ceph-mon[81689]: pgmap v1998: 305 pgs: 305 active+clean; 326 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 142 op/s
Dec 06 07:24:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:17.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:17.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:18 compute-1 nova_compute[226101]: 2025-12-06 07:24:18.701 226109 INFO nova.virt.libvirt.driver [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance shutdown successfully after 24 seconds.
Dec 06 07:24:18 compute-1 kernel: tapf1259ceb-8b (unregistering): left promiscuous mode
Dec 06 07:24:18 compute-1 NetworkManager[49031]: <info>  [1765005858.7829] device (tapf1259ceb-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:24:18 compute-1 ovn_controller[130279]: 2025-12-06T07:24:18Z|00405|binding|INFO|Releasing lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 from this chassis (sb_readonly=0)
Dec 06 07:24:18 compute-1 ovn_controller[130279]: 2025-12-06T07:24:18Z|00406|binding|INFO|Setting lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 down in Southbound
Dec 06 07:24:18 compute-1 nova_compute[226101]: 2025-12-06 07:24:18.789 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:18 compute-1 ovn_controller[130279]: 2025-12-06T07:24:18Z|00407|binding|INFO|Removing iface tapf1259ceb-8b ovn-installed in OVS
Dec 06 07:24:18 compute-1 nova_compute[226101]: 2025-12-06 07:24:18.792 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:18 compute-1 nova_compute[226101]: 2025-12-06 07:24:18.810 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:18 compute-1 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000065.scope: Deactivated successfully.
Dec 06 07:24:18 compute-1 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000065.scope: Consumed 15.052s CPU time.
Dec 06 07:24:18 compute-1 systemd-machined[190302]: Machine qemu-46-instance-00000065 terminated.
Dec 06 07:24:18 compute-1 nova_compute[226101]: 2025-12-06 07:24:18.919 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:18 compute-1 nova_compute[226101]: 2025-12-06 07:24:18.925 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:18 compute-1 nova_compute[226101]: 2025-12-06 07:24:18.933 226109 INFO nova.virt.libvirt.driver [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance destroyed successfully.
Dec 06 07:24:18 compute-1 nova_compute[226101]: 2025-12-06 07:24:18.934 226109 DEBUG nova.objects.instance [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'numa_topology' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:19.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:19.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:19 compute-1 nova_compute[226101]: 2025-12-06 07:24:19.230 226109 DEBUG nova.compute.manager [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:19 compute-1 ceph-mon[81689]: pgmap v1999: 305 pgs: 305 active+clean; 326 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 116 op/s
Dec 06 07:24:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:19.391 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:d9:9e 10.100.0.11'], port_security=['fa:16:3e:a9:d9:9e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '87a430a2-0883-4e5d-a1ac-e8d75c834ac7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f2ed01ce-ee24-45dc-b59f-29fb74c119b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f1259ceb-8bfa-4339-914f-ce900f9a2cf0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:19.392 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f1259ceb-8bfa-4339-914f-ce900f9a2cf0 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e unbound from our chassis
Dec 06 07:24:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:19.393 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6209aab-d53f-4d58-9b94-ffb7adc6239e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:24:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:19.394 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[abcebd50-751b-4a91-97ac-bc1b81f055d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:19.394 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace which is not needed anymore
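The Matched UPDATE line shows the trigger for the teardown that follows: an ovsdbapp RowEvent watching the southbound Port_Binding table fired because the row's chassis was cleared (the old row still had up=[True] and a chassis, the new one has neither), and with no VIF ports left on the datapath the agent removes the namespace. Skeleton of such an event class (the match logic is a simplified illustration, not the agent's exact conditions):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Matches the repr in the log: events=('update',),
            # table='Port_Binding', conditions=None.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire when the binding leaves this chassis.
            return hasattr(old, 'chassis') and not row.chassis

        def run(self, event, row, old):
            print('Port %s unbound from our chassis' % row.logical_port)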
Dec 06 07:24:19 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[262095]: [NOTICE]   (262099) : haproxy version is 2.8.14-c23fe91
Dec 06 07:24:19 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[262095]: [NOTICE]   (262099) : path to executable is /usr/sbin/haproxy
Dec 06 07:24:19 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[262095]: [WARNING]  (262099) : Exiting Master process...
Dec 06 07:24:19 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[262095]: [ALERT]    (262099) : Current worker (262101) exited with code 143 (Terminated)
Dec 06 07:24:19 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[262095]: [WARNING]  (262099) : All workers exited. Exiting... (0)
Dec 06 07:24:19 compute-1 systemd[1]: libpod-ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a.scope: Deactivated successfully.
Dec 06 07:24:19 compute-1 podman[262399]: 2025-12-06 07:24:19.74229559 +0000 UTC m=+0.267048125 container died ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:24:19 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a-userdata-shm.mount: Deactivated successfully.
Dec 06 07:24:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-665ce9512c2463022d592a372693ba3ecd9626686ac83c73c483f478a560d058-merged.mount: Deactivated successfully.
Dec 06 07:24:19 compute-1 podman[262399]: 2025-12-06 07:24:19.882111092 +0000 UTC m=+0.406863627 container cleanup ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:24:19 compute-1 systemd[1]: libpod-conmon-ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a.scope: Deactivated successfully.
Dec 06 07:24:20 compute-1 podman[262429]: 2025-12-06 07:24:20.035601336 +0000 UTC m=+0.132399983 container remove ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:24:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:20.042 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9340cb-1306-4894-9668-427a49777dd4]: (4, ('Sat Dec  6 07:24:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a)\nab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a\nSat Dec  6 07:24:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (ab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a)\nab18e5c7bfde1f86665ce827001879f56b78f73c8e579c3b123fe4cb44dacb5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
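The (stdout, stderr, rc) tuple in the reply above captures a privileged helper stopping and then deleting the haproxy container by name, matching the podman died/cleanup/remove lines either side of it. Roughly the same effect by hand (illustrative; the agent drives this through an oslo.privsep-wrapped helper rather than calling podman itself):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e'
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)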
Dec 06 07:24:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:20.045 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9171bca2-5736-4509-b3e9-9a0356182585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:20.047 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:20 compute-1 nova_compute[226101]: 2025-12-06 07:24:20.049 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:20 compute-1 kernel: tapf6209aab-d0: left promiscuous mode
Dec 06 07:24:20 compute-1 nova_compute[226101]: 2025-12-06 07:24:20.067 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:20.070 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[858e9148-fc4e-4daa-8956-5a8e381d3578]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:20.087 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[da65f5e7-128d-48b3-b7ca-c1280c7f2b94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:20.090 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb6a39c-a9b6-47d5-bd8d-6e9aaac8bab6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:20.112 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[962de746-bc8b-4350-b41d-f3f4ac6ec4c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611087, 'reachable_time': 41658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262448, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:20.114 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:24:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:20.114 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b949b3-f447-4f2a-a23d-305428da709c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:20 compute-1 systemd[1]: run-netns-ovnmeta\x2df6209aab\x2dd53f\x2d4d58\x2d9b94\x2dffb7adc6239e.mount: Deactivated successfully.
Dec 06 07:24:20 compute-1 nova_compute[226101]: 2025-12-06 07:24:20.161 226109 DEBUG oslo_concurrency.lockutils [None req-07698f7e-29bb-4fc0-b3ab-f2d46dc7391f baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 25.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:20 compute-1 ceph-mon[81689]: pgmap v2000: 305 pgs: 305 active+clean; 293 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 172 op/s
Dec 06 07:24:20 compute-1 nova_compute[226101]: 2025-12-06 07:24:20.297 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:20 compute-1 nova_compute[226101]: 2025-12-06 07:24:20.414 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:20 compute-1 nova_compute[226101]: 2025-12-06 07:24:20.706 226109 INFO nova.virt.libvirt.driver [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Deleting instance files /var/lib/nova/instances/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_del
Dec 06 07:24:20 compute-1 nova_compute[226101]: 2025-12-06 07:24:20.707 226109 INFO nova.virt.libvirt.driver [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Deletion of /var/lib/nova/instances/fa33eb8b-971a-4974-9b2e-8a6b85faa5dc_del complete
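The _del suffix in these two lines gives away the deletion strategy: the instance directory is renamed first, then the renamed copy is removed, so an interrupted delete never leaves a half-removed tree under the live path. The pattern in miniature (an illustrative helper, not nova's actual code):

    import os
    import shutil

    def delete_instance_files(path):
        target = path + '_del'
        os.rename(path, target)   # atomic on the same filesystem
        shutil.rmtree(target)     # "Deletion of ..._del complete"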
Dec 06 07:24:20 compute-1 nova_compute[226101]: 2025-12-06 07:24:20.773 226109 INFO nova.compute.manager [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Took 5.38 seconds to destroy the instance on the hypervisor.
Dec 06 07:24:20 compute-1 nova_compute[226101]: 2025-12-06 07:24:20.775 226109 DEBUG oslo.service.loopingcall [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:24:20 compute-1 nova_compute[226101]: 2025-12-06 07:24:20.775 226109 DEBUG nova.compute.manager [-] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:24:20 compute-1 nova_compute[226101]: 2025-12-06 07:24:20.775 226109 DEBUG nova.network.neutron [-] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:24:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:21.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:21.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:21 compute-1 nova_compute[226101]: 2025-12-06 07:24:21.605 226109 DEBUG nova.compute.manager [req-c2678b27-6eec-4057-b0ff-4413f231a374 req-60dd3c7b-a8de-44f4-907a-0b4e5a498dc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received event network-vif-unplugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:21 compute-1 nova_compute[226101]: 2025-12-06 07:24:21.605 226109 DEBUG oslo_concurrency.lockutils [req-c2678b27-6eec-4057-b0ff-4413f231a374 req-60dd3c7b-a8de-44f4-907a-0b4e5a498dc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:21 compute-1 nova_compute[226101]: 2025-12-06 07:24:21.605 226109 DEBUG oslo_concurrency.lockutils [req-c2678b27-6eec-4057-b0ff-4413f231a374 req-60dd3c7b-a8de-44f4-907a-0b4e5a498dc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:21 compute-1 nova_compute[226101]: 2025-12-06 07:24:21.606 226109 DEBUG oslo_concurrency.lockutils [req-c2678b27-6eec-4057-b0ff-4413f231a374 req-60dd3c7b-a8de-44f4-907a-0b4e5a498dc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:21 compute-1 nova_compute[226101]: 2025-12-06 07:24:21.606 226109 DEBUG nova.compute.manager [req-c2678b27-6eec-4057-b0ff-4413f231a374 req-60dd3c7b-a8de-44f4-907a-0b4e5a498dc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] No waiting events found dispatching network-vif-unplugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:24:21 compute-1 nova_compute[226101]: 2025-12-06 07:24:21.606 226109 WARNING nova.compute.manager [req-c2678b27-6eec-4057-b0ff-4413f231a374 req-60dd3c7b-a8de-44f4-907a-0b4e5a498dc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received unexpected event network-vif-unplugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 for instance with vm_state stopped and task_state None.
Dec 06 07:24:21 compute-1 nova_compute[226101]: 2025-12-06 07:24:21.688 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.039 226109 DEBUG nova.network.neutron [-] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.054 226109 INFO nova.compute.manager [-] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Took 1.28 seconds to deallocate network for instance.
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.106 226109 DEBUG oslo_concurrency.lockutils [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.106 226109 DEBUG oslo_concurrency.lockutils [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.163 226109 DEBUG nova.compute.manager [req-7bea2d31-9db9-417f-b2ab-da3d996ff888 req-a39b5c04-b154-4091-9476-6b4a19e697aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Received event network-vif-deleted-f9d7d351-d081-4727-a686-93ca17bf53b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.175 226109 DEBUG oslo_concurrency.processutils [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:22 compute-1 ceph-mon[81689]: pgmap v2001: 305 pgs: 305 active+clean; 229 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 177 op/s
Dec 06 07:24:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:24:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2161360991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.765 226109 DEBUG oslo_concurrency.processutils [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
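While holding compute_resources, the resource tracker shells out for storage stats; the lines above are the request and its completion (rc 0 in 0.590s), with the matching client.openstack dispatch visible on the ceph-mon side. The same probe through oslo.concurrency (a sketch of the call the log records; the top-level JSON keys are standard ceph df output):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])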
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.771 226109 DEBUG nova.compute.provider_tree [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.787 226109 DEBUG nova.scheduler.client.report [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
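This inventory dict is what placement sizes the node with; usable capacity per resource class is (total - reserved) * allocation_ratio, so the host advertises 32 vCPUs, 7168 MB of RAM and about 17.1 GB of disk. Worked out:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(capacity, 1))  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1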
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.808 226109 DEBUG oslo_concurrency.lockutils [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.849 226109 INFO nova.scheduler.client.report [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Deleted allocations for instance fa33eb8b-971a-4974-9b2e-8a6b85faa5dc
Dec 06 07:24:22 compute-1 nova_compute[226101]: 2025-12-06 07:24:22.947 226109 DEBUG oslo_concurrency.lockutils [None req-45e03341-f2e8-4b00-a13c-ca0fdcd8f543 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "fa33eb8b-971a-4974-9b2e-8a6b85faa5dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:23.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:23.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3871687488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2161360991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:23 compute-1 nova_compute[226101]: 2025-12-06 07:24:23.794 226109 DEBUG nova.compute.manager [req-b1c63bf2-84ee-4a52-b8d2-9aa05ed7e8bc req-6448c009-7fee-4e1b-9919-6baa124e9883 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:23 compute-1 nova_compute[226101]: 2025-12-06 07:24:23.794 226109 DEBUG oslo_concurrency.lockutils [req-b1c63bf2-84ee-4a52-b8d2-9aa05ed7e8bc req-6448c009-7fee-4e1b-9919-6baa124e9883 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:23 compute-1 nova_compute[226101]: 2025-12-06 07:24:23.795 226109 DEBUG oslo_concurrency.lockutils [req-b1c63bf2-84ee-4a52-b8d2-9aa05ed7e8bc req-6448c009-7fee-4e1b-9919-6baa124e9883 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:23 compute-1 nova_compute[226101]: 2025-12-06 07:24:23.795 226109 DEBUG oslo_concurrency.lockutils [req-b1c63bf2-84ee-4a52-b8d2-9aa05ed7e8bc req-6448c009-7fee-4e1b-9919-6baa124e9883 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:23 compute-1 nova_compute[226101]: 2025-12-06 07:24:23.795 226109 DEBUG nova.compute.manager [req-b1c63bf2-84ee-4a52-b8d2-9aa05ed7e8bc req-6448c009-7fee-4e1b-9919-6baa124e9883 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] No waiting events found dispatching network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:24:23 compute-1 nova_compute[226101]: 2025-12-06 07:24:23.795 226109 WARNING nova.compute.manager [req-b1c63bf2-84ee-4a52-b8d2-9aa05ed7e8bc req-6448c009-7fee-4e1b-9919-6baa124e9883 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received unexpected event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 for instance with vm_state stopped and task_state None.
Dec 06 07:24:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1819027873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:24 compute-1 ceph-mon[81689]: pgmap v2002: 305 pgs: 305 active+clean; 220 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 155 op/s
Dec 06 07:24:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:25.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:25.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:25 compute-1 nova_compute[226101]: 2025-12-06 07:24:25.302 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:25 compute-1 nova_compute[226101]: 2025-12-06 07:24:25.415 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:25 compute-1 nova_compute[226101]: 2025-12-06 07:24:25.606 226109 INFO nova.compute.manager [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Rebuilding instance
Dec 06 07:24:26 compute-1 sshd-session[262472]: error: kex_exchange_identification: read: Connection reset by peer
Dec 06 07:24:26 compute-1 sshd-session[262472]: Connection reset by 45.140.17.97 port 7755
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.081 226109 DEBUG nova.objects.instance [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.099 226109 DEBUG nova.compute.manager [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.153 226109 DEBUG nova.objects.instance [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'pci_requests' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.174 226109 DEBUG nova.objects.instance [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'pci_devices' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.195 226109 DEBUG nova.objects.instance [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'resources' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.209 226109 DEBUG nova.objects.instance [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'migration_context' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.220 226109 DEBUG nova.objects.instance [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.223 226109 INFO nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance already shutdown.
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.229 226109 INFO nova.virt.libvirt.driver [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance destroyed successfully.
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.233 226109 INFO nova.virt.libvirt.driver [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance destroyed successfully.
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.234 226109 DEBUG nova.virt.libvirt.vif [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1613212427',display_name='tempest-tempest.common.compute-instance-1613212427',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1613212427',id=101,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:23:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-5ax3ju2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:24Z,user_data=None,user_id='baddb65c90da47a58d026b0db966f6c8',uuid=87a430a2-0883-4e5d-a1ac-e8d75c834ac7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.235 226109 DEBUG nova.network.os_vif_util [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.236 226109 DEBUG nova.network.os_vif_util [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.236 226109 DEBUG os_vif [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.238 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.239 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1259ceb-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.279 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.280 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:26 compute-1 nova_compute[226101]: 2025-12-06 07:24:26.282 226109 INFO os_vif [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b')
Dec 06 07:24:27 compute-1 ceph-mon[81689]: pgmap v2003: 305 pgs: 305 active+clean; 234 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 184 op/s
Dec 06 07:24:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:27.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:27.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:27 compute-1 nova_compute[226101]: 2025-12-06 07:24:27.697 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005852.6960955, fa33eb8b-971a-4974-9b2e-8a6b85faa5dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:24:27 compute-1 nova_compute[226101]: 2025-12-06 07:24:27.698 226109 INFO nova.compute.manager [-] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] VM Stopped (Lifecycle Event)
Dec 06 07:24:27 compute-1 nova_compute[226101]: 2025-12-06 07:24:27.903 226109 DEBUG nova.compute.manager [None req-a62ef2ff-7b2d-46a7-8997-7f5f77250e25 - - - - - -] [instance: fa33eb8b-971a-4974-9b2e-8a6b85faa5dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:29.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:29.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:29 compute-1 ceph-mon[81689]: pgmap v2004: 305 pgs: 305 active+clean; 234 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 144 op/s
Dec 06 07:24:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2549017799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:30 compute-1 nova_compute[226101]: 2025-12-06 07:24:30.303 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/472744709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:30 compute-1 ceph-mon[81689]: pgmap v2005: 305 pgs: 305 active+clean; 193 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Dec 06 07:24:31 compute-1 podman[262493]: 2025-12-06 07:24:31.094948488 +0000 UTC m=+0.082285973 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:24:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:31.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:31.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:31 compute-1 nova_compute[226101]: 2025-12-06 07:24:31.279 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1738121377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/531532766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.159 226109 INFO nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Deleting instance files /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7_del
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.160 226109 INFO nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Deletion of /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7_del complete
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.320 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.321 226109 INFO nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Creating image(s)
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.354 226109 DEBUG nova.storage.rbd_utils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.402 226109 DEBUG nova.storage.rbd_utils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.453 226109 DEBUG nova.storage.rbd_utils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.458 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.526 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.527 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.528 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.528 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.558 226109 DEBUG nova.storage.rbd_utils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:32 compute-1 nova_compute[226101]: 2025-12-06 07:24:32.566 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:33.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:33.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:33 compute-1 ceph-mon[81689]: pgmap v2006: 305 pgs: 305 active+clean; 167 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 519 KiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 06 07:24:33 compute-1 nova_compute[226101]: 2025-12-06 07:24:33.932 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005858.9308834, 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:24:33 compute-1 nova_compute[226101]: 2025-12-06 07:24:33.933 226109 INFO nova.compute.manager [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] VM Stopped (Lifecycle Event)
Dec 06 07:24:33 compute-1 nova_compute[226101]: 2025-12-06 07:24:33.977 226109 DEBUG nova.compute.manager [None req-34fbaeb4-ab10-47fb-b320-16a94a5c6402 - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:34 compute-1 podman[262613]: 2025-12-06 07:24:34.072609998 +0000 UTC m=+0.052512735 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:24:34 compute-1 podman[262614]: 2025-12-06 07:24:34.071856238 +0000 UTC m=+0.047003836 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:24:34 compute-1 sudo[262649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:34 compute-1 nova_compute[226101]: 2025-12-06 07:24:34.839 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:34 compute-1 sudo[262649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:34 compute-1 sudo[262649]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:34 compute-1 sudo[262681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:24:34 compute-1 sudo[262681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:34 compute-1 sudo[262681]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:34 compute-1 sudo[262713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:34 compute-1 sudo[262713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:34 compute-1 sudo[262713]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:35 compute-1 sudo[262738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:24:35 compute-1 sudo[262738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:35 compute-1 ceph-mon[81689]: pgmap v2007: 305 pgs: 305 active+clean; 187 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 2.9 MiB/s wr, 89 op/s
Dec 06 07:24:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:35.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:35.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:35 compute-1 sudo[262738]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:36 compute-1 nova_compute[226101]: 2025-12-06 07:24:36.013 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:24:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:24:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:37.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.173 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.178 226109 DEBUG nova.storage.rbd_utils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] resizing rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:24:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:37.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/495689201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:37 compute-1 ceph-mon[81689]: pgmap v2008: 305 pgs: 305 active+clean; 293 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 168 KiB/s rd, 6.6 MiB/s wr, 155 op/s
Dec 06 07:24:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:24:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:24:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:24:37 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:24:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2049410736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.673 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.675 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Ensure instance console log exists: /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.675 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.676 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.676 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.678 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Start _get_guest_xml network_info=[{"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.684 226109 WARNING nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.688 226109 DEBUG nova.virt.libvirt.host [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.689 226109 DEBUG nova.virt.libvirt.host [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.693 226109 DEBUG nova.virt.libvirt.host [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.694 226109 DEBUG nova.virt.libvirt.host [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.695 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.695 226109 DEBUG nova.virt.hardware [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.696 226109 DEBUG nova.virt.hardware [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.696 226109 DEBUG nova.virt.hardware [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.696 226109 DEBUG nova.virt.hardware [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.697 226109 DEBUG nova.virt.hardware [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.697 226109 DEBUG nova.virt.hardware [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.697 226109 DEBUG nova.virt.hardware [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.698 226109 DEBUG nova.virt.hardware [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.698 226109 DEBUG nova.virt.hardware [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.698 226109 DEBUG nova.virt.hardware [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.698 226109 DEBUG nova.virt.hardware [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.699 226109 DEBUG nova.objects.instance [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:37 compute-1 nova_compute[226101]: 2025-12-06 07:24:37.721 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:24:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3125607217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.218 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.246 226109 DEBUG nova.storage.rbd_utils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.250 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/735819537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:38 compute-1 ceph-mon[81689]: pgmap v2009: 305 pgs: 305 active+clean; 293 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 136 KiB/s rd, 5.5 MiB/s wr, 109 op/s
Dec 06 07:24:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3125607217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/936231557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:24:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1445743369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.741 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.744 226109 DEBUG nova.virt.libvirt.vif [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1613212427',display_name='tempest-tempest.common.compute-instance-1613212427',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1613212427',id=101,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:23:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-5ax3ju2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:32Z,user_data=None,user_id='baddb65c90da47a58d026b0db966f6c8',uuid=87a430a2-0883-4e5d-a1ac-e8d75c834ac7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.745 226109 DEBUG nova.network.os_vif_util [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.746 226109 DEBUG nova.network.os_vif_util [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
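The pair of lines above shows nova handing its raw VIF dict to os-vif and getting back a typed VIFOpenVSwitch; only a handful of keys survive the translation. An illustrative, stdlib-only sketch of that field mapping using this port's values (the real converter is nova.network.os_vif_util.nova_to_osvif_vif):

# Source keys come from the nova VIF dict logged above; target names match
# the fields visible in the converted VIFOpenVSwitch repr.
nova_vif = {
    "id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0",
    "address": "fa:16:3e:a9:d9:9e",
    "devname": "tapf1259ceb-8b",
    "active": False,
    "preserve_on_delete": False,
    "details": {"port_filter": True, "bridge_name": "br-int"},
}

osvif_fields = {
    "id": nova_vif["id"],
    "address": nova_vif["address"],
    "vif_name": nova_vif["devname"],
    "active": nova_vif["active"],
    "preserve_on_delete": nova_vif["preserve_on_delete"],
    "has_traffic_filtering": nova_vif["details"]["port_filter"],
    "bridge_name": nova_vif["details"]["bridge_name"],
    "plugin": "ovs",
}
print(osvif_fields)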
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.750 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:24:38 compute-1 nova_compute[226101]:   <uuid>87a430a2-0883-4e5d-a1ac-e8d75c834ac7</uuid>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   <name>instance-00000065</name>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <nova:name>tempest-tempest.common.compute-instance-1613212427</nova:name>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:24:37</nova:creationTime>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <nova:user uuid="baddb65c90da47a58d026b0db966f6c8">tempest-ServerActionsTestOtherA-1949739102-project-member</nova:user>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <nova:project uuid="001e2256cb8b430d93c1ff613010d199">tempest-ServerActionsTestOtherA-1949739102</nova:project>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <nova:port uuid="f1259ceb-8bfa-4339-914f-ce900f9a2cf0">
Dec 06 07:24:38 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <system>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <entry name="serial">87a430a2-0883-4e5d-a1ac-e8d75c834ac7</entry>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <entry name="uuid">87a430a2-0883-4e5d-a1ac-e8d75c834ac7</entry>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     </system>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   <os>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   </os>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   <features>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   </features>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk">
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       </source>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config">
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       </source>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:24:38 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:a9:d9:9e"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <target dev="tapf1259ceb-8b"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/console.log" append="off"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <video>
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     </video>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:24:38 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:24:38 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:24:38 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:24:38 compute-1 nova_compute[226101]: </domain>
Dec 06 07:24:38 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
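Everything libvirt needs for the rebuild is in the domain XML dumped above: 128 MiB of RAM expressed in KiB, two rbd-backed disks pointing at the three monitors, and nova's own metadata under its dedicated namespace. A short sketch that pulls those pieces back out with the standard library (assumes the XML has been saved to domain.xml):

import xml.etree.ElementTree as ET

NOVA_NS = "{http://openstack.org/xmlns/libvirt/nova/1.1}"

root = ET.parse("domain.xml").getroot()            # <domain type="kvm">
print("uuid:   ", root.findtext("uuid"))
print("memory: ", root.findtext("memory"), "KiB")  # 131072 KiB == 128 MiB
print("flavor: ", root.find(f".//{NOVA_NS}flavor").get("name"))

# Both disks are rbd-backed; list pool/image plus the monitor endpoints.
for disk in root.findall("./devices/disk"):
    src = disk.find("source")
    if src is not None and src.get("protocol") == "rbd":
        mons = [f"{h.get('name')}:{h.get('port')}" for h in src.findall("host")]
        print(src.get("name"), "via", ", ".join(mons))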
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.750 226109 DEBUG nova.compute.manager [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Preparing to wait for external event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.751 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.751 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.751 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.752 226109 DEBUG nova.virt.libvirt.vif [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1613212427',display_name='tempest-tempest.common.compute-instance-1613212427',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1613212427',id=101,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:23:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-5ax3ju2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:32Z,user_data=None,user_id='baddb65c90da47a58d026b0db966f6c8',uuid=87a430a2-0883-4e5d-a1ac-e8d75c834ac7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.752 226109 DEBUG nova.network.os_vif_util [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.752 226109 DEBUG nova.network.os_vif_util [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.753 226109 DEBUG os_vif [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.753 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.754 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.754 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.757 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.757 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1259ceb-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.758 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1259ceb-8b, col_values=(('external_ids', {'iface-id': 'f1259ceb-8bfa-4339-914f-ce900f9a2cf0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:d9:9e', 'vm-uuid': '87a430a2-0883-4e5d-a1ac-e8d75c834ac7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.759 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:38 compute-1 NetworkManager[49031]: <info>  [1765005878.7607] manager: (tapf1259ceb-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.762 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.765 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:38 compute-1 nova_compute[226101]: 2025-12-06 07:24:38.766 226109 INFO os_vif [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b')
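The AddPortCommand/DbSetCommand transactions above are what os-vif's ovs plugin issues through ovsdbapp; the same wiring can be reproduced or inspected with plain ovs-vsctl. A sketch driving it via subprocess, with the exact external_ids from the logged DbSetCommand:

import subprocess

port = "tapf1259ceb-8b"
external_ids = {
    "iface-id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:a9:d9:9e",
    "vm-uuid": "87a430a2-0883-4e5d-a1ac-e8d75c834ac7",
}

cmd = ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
       "--", "set", "Interface", port]
# Values are double-quoted so colons (e.g. in the MAC) parse as strings.
cmd += [f'external_ids:{k}="{v}"' for k, v in external_ids.items()]
subprocess.run(cmd, check=True)

# ovn-controller matches external_ids:iface-id against the logical port
# name in the Southbound DB, which triggers the "Claiming lport" lines
# seen further down.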
Dec 06 07:24:39 compute-1 nova_compute[226101]: 2025-12-06 07:24:39.072 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:24:39 compute-1 nova_compute[226101]: 2025-12-06 07:24:39.072 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:24:39 compute-1 nova_compute[226101]: 2025-12-06 07:24:39.073 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No VIF found with MAC fa:16:3e:a9:d9:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:24:39 compute-1 nova_compute[226101]: 2025-12-06 07:24:39.073 226109 INFO nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Using config drive
Dec 06 07:24:39 compute-1 nova_compute[226101]: 2025-12-06 07:24:39.101 226109 DEBUG nova.storage.rbd_utils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:39 compute-1 nova_compute[226101]: 2025-12-06 07:24:39.125 226109 DEBUG nova.objects.instance [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:39.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:39 compute-1 nova_compute[226101]: 2025-12-06 07:24:39.158 226109 DEBUG nova.objects.instance [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'keypairs' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:39.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:39 compute-1 nova_compute[226101]: 2025-12-06 07:24:39.502 226109 INFO nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Creating config drive at /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config
Dec 06 07:24:39 compute-1 nova_compute[226101]: 2025-12-06 07:24:39.507 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9u5qo4lw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:39 compute-1 nova_compute[226101]: 2025-12-06 07:24:39.648 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9u5qo4lw" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
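The config drive is a plain ISO9660 image built from a temporary directory of metadata files (processutils logs the argv joined with spaces, which is why the -publisher value looks unquoted above). A sketch reproducing the logged invocation; the source directory here is hypothetical, nova used the tempdir /tmp/tmp9u5qo4lw:

import subprocess

src = "/tmp/configdrive-src"      # hypothetical stand-in for nova's tempdir
iso = "/tmp/disk.config"

subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso,
     "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
     "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
     "-quiet", "-J", "-r", "-V", "config-2", src],
    check=True)
# The volume label "config-2" is what cloud-init probes block devices for
# when it looks for a config drive.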
Dec 06 07:24:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1445743369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:40 compute-1 nova_compute[226101]: 2025-12-06 07:24:40.941 226109 DEBUG nova.storage.rbd_utils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:40 compute-1 nova_compute[226101]: 2025-12-06 07:24:40.946 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:40 compute-1 nova_compute[226101]: 2025-12-06 07:24:40.970 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:41.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:41.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:41 compute-1 nova_compute[226101]: 2025-12-06 07:24:41.457 226109 DEBUG oslo_concurrency.processutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config 87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:41 compute-1 nova_compute[226101]: 2025-12-06 07:24:41.457 226109 INFO nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Deleting local config drive /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7/disk.config because it was imported into RBD.
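After the import, the image-existence check that failed at 07:24:39 and 07:24:40 would now succeed. A sketch of that check with the Ceph Python bindings (assumes python3-rados/python3-rbd are installed; same pool, client id and conf file as the logged commands):

import rados
import rbd

name = "87a430a2-0883-4e5d-a1ac-e8d75c834ac7_disk.config"

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
ioctx = cluster.open_ioctx("vms")
try:
    try:
        with rbd.Image(ioctx, name, read_only=True) as image:
            print(name, "exists,", image.size(), "bytes")
    except rbd.ImageNotFound:
        # The branch nova's rbd_utils hit before the import above.
        print(name, "does not exist")
finally:
    ioctx.close()
    cluster.shutdown()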
Dec 06 07:24:41 compute-1 kernel: tapf1259ceb-8b: entered promiscuous mode
Dec 06 07:24:41 compute-1 NetworkManager[49031]: <info>  [1765005881.5274] manager: (tapf1259ceb-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Dec 06 07:24:41 compute-1 ovn_controller[130279]: 2025-12-06T07:24:41Z|00408|binding|INFO|Claiming lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 for this chassis.
Dec 06 07:24:41 compute-1 ovn_controller[130279]: 2025-12-06T07:24:41Z|00409|binding|INFO|f1259ceb-8bfa-4339-914f-ce900f9a2cf0: Claiming fa:16:3e:a9:d9:9e 10.100.0.11
Dec 06 07:24:41 compute-1 nova_compute[226101]: 2025-12-06 07:24:41.529 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.537 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:d9:9e 10.100.0.11'], port_security=['fa:16:3e:a9:d9:9e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '87a430a2-0883-4e5d-a1ac-e8d75c834ac7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f2ed01ce-ee24-45dc-b59f-29fb74c119b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f1259ceb-8bfa-4339-914f-ce900f9a2cf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.538 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f1259ceb-8bfa-4339-914f-ce900f9a2cf0 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e bound to our chassis
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.540 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:24:41 compute-1 nova_compute[226101]: 2025-12-06 07:24:41.547 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:41 compute-1 ovn_controller[130279]: 2025-12-06T07:24:41Z|00410|binding|INFO|Setting lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 ovn-installed in OVS
Dec 06 07:24:41 compute-1 ovn_controller[130279]: 2025-12-06T07:24:41Z|00411|binding|INFO|Setting lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 up in Southbound
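The "Claiming lport" / "up in Southbound" messages can be cross-checked directly against the OVN Southbound database. A sketch querying this port's Port_Binding row with ovn-sbctl (run where the Southbound DB is reachable; connection options omitted):

import subprocess

lport = "f1259ceb-8bfa-4339-914f-ce900f9a2cf0"

# After the claim above, "chassis" should reference compute-1 and "up"
# should read true, matching the binding|INFO messages.
out = subprocess.run(
    ["ovn-sbctl", "--columns=logical_port,chassis,up",
     "find", "Port_Binding", f"logical_port={lport}"],
    check=True, capture_output=True, text=True,
).stdout
print(out)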
Dec 06 07:24:41 compute-1 nova_compute[226101]: 2025-12-06 07:24:41.548 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:41 compute-1 nova_compute[226101]: 2025-12-06 07:24:41.551 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.554 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[049efc0e-bf84-4fc5-b3e2-bd5589ce49e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.555 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6209aab-d1 in ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.557 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6209aab-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.557 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9a6d3c-c564-413c-997f-f632b8bcfb5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.559 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1d07e3-0026-4397-b86f-5fbf1bf5065b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 systemd-udevd[262989]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:24:41 compute-1 systemd-machined[190302]: New machine qemu-47-instance-00000065.
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.571 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[632918b6-8a6c-490d-8210-ecc385d97434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 NetworkManager[49031]: <info>  [1765005881.5775] device (tapf1259ceb-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:24:41 compute-1 NetworkManager[49031]: <info>  [1765005881.5785] device (tapf1259ceb-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:24:41 compute-1 systemd[1]: Started Virtual Machine qemu-47-instance-00000065.
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.586 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ccebccf6-8926-45da-bca7-d9302135b71b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.617 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c2c772-5bee-415b-ae9b-98b7c00ac092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 NetworkManager[49031]: <info>  [1765005881.6232] manager: (tapf6209aab-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/189)
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.622 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7c86a194-52a9-4a71-96f8-328edefde2cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.651 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2f952016-7775-4b60-bdb8-37aadd69fad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.655 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb29caa-1b39-4094-86c5-e840d4a19d2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 NetworkManager[49031]: <info>  [1765005881.6792] device (tapf6209aab-d0): carrier: link connected
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.684 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[85585b1c-a2f9-4e03-bb06-ad830fd6d390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.702 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e64368c7-c28d-4ee1-92e4-a2e35f878954]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616118, 'reachable_time': 39418, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263021, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.718 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0d0d86-87e4-406f-9e95-ca26e04300c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:c5a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616118, 'tstamp': 616118}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263022, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.735 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[18cf4a73-ea4f-4b26-bd2c-67b9d8af4478]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616118, 'reachable_time': 39418, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263023, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.767 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9218e1b5-ba07-417b-9351-357ba943e73c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.834 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[be102731-4f35-4645-bdc8-9897a1516edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.836 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.836 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.837 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6209aab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:41 compute-1 NetworkManager[49031]: <info>  [1765005881.8390] manager: (tapf6209aab-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Dec 06 07:24:41 compute-1 nova_compute[226101]: 2025-12-06 07:24:41.838 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:41 compute-1 kernel: tapf6209aab-d0: entered promiscuous mode
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.842 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6209aab-d0, col_values=(('external_ids', {'iface-id': '1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:41 compute-1 nova_compute[226101]: 2025-12-06 07:24:41.843 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:41 compute-1 ovn_controller[130279]: 2025-12-06T07:24:41Z|00412|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:24:41 compute-1 nova_compute[226101]: 2025-12-06 07:24:41.844 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.845 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.845 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5d78feb2-b293-4d5f-b841-88908308c56b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.846 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:24:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:41.847 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'env', 'PROCESS_TAG=haproxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6209aab-d53f-4d58-9b94-ffb7adc6239e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:24:41 compute-1 nova_compute[226101]: 2025-12-06 07:24:41.857 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:42 compute-1 podman[263062]: 2025-12-06 07:24:42.174239599 +0000 UTC m=+0.019164351 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:24:42 compute-1 ceph-mon[81689]: pgmap v2010: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 6.0 MiB/s wr, 152 op/s
Dec 06 07:24:42 compute-1 nova_compute[226101]: 2025-12-06 07:24:42.513 226109 DEBUG nova.compute.manager [req-dff71601-78d2-48d2-ab5c-e77830549e1f req-970c0af3-fb31-430d-8fa0-2168866f3ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:42 compute-1 nova_compute[226101]: 2025-12-06 07:24:42.513 226109 DEBUG oslo_concurrency.lockutils [req-dff71601-78d2-48d2-ab5c-e77830549e1f req-970c0af3-fb31-430d-8fa0-2168866f3ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:42 compute-1 nova_compute[226101]: 2025-12-06 07:24:42.513 226109 DEBUG oslo_concurrency.lockutils [req-dff71601-78d2-48d2-ab5c-e77830549e1f req-970c0af3-fb31-430d-8fa0-2168866f3ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:42 compute-1 nova_compute[226101]: 2025-12-06 07:24:42.514 226109 DEBUG oslo_concurrency.lockutils [req-dff71601-78d2-48d2-ab5c-e77830549e1f req-970c0af3-fb31-430d-8fa0-2168866f3ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:42 compute-1 nova_compute[226101]: 2025-12-06 07:24:42.514 226109 DEBUG nova.compute.manager [req-dff71601-78d2-48d2-ab5c-e77830549e1f req-970c0af3-fb31-430d-8fa0-2168866f3ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Processing event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:24:42 compute-1 nova_compute[226101]: 2025-12-06 07:24:42.514 226109 DEBUG nova.compute.manager [req-dff71601-78d2-48d2-ab5c-e77830549e1f req-970c0af3-fb31-430d-8fa0-2168866f3ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:42 compute-1 nova_compute[226101]: 2025-12-06 07:24:42.514 226109 DEBUG oslo_concurrency.lockutils [req-dff71601-78d2-48d2-ab5c-e77830549e1f req-970c0af3-fb31-430d-8fa0-2168866f3ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:42 compute-1 nova_compute[226101]: 2025-12-06 07:24:42.514 226109 DEBUG oslo_concurrency.lockutils [req-dff71601-78d2-48d2-ab5c-e77830549e1f req-970c0af3-fb31-430d-8fa0-2168866f3ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:42 compute-1 nova_compute[226101]: 2025-12-06 07:24:42.515 226109 DEBUG oslo_concurrency.lockutils [req-dff71601-78d2-48d2-ab5c-e77830549e1f req-970c0af3-fb31-430d-8fa0-2168866f3ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:42 compute-1 nova_compute[226101]: 2025-12-06 07:24:42.515 226109 DEBUG nova.compute.manager [req-dff71601-78d2-48d2-ab5c-e77830549e1f req-970c0af3-fb31-430d-8fa0-2168866f3ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] No waiting events found dispatching network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:24:42 compute-1 nova_compute[226101]: 2025-12-06 07:24:42.515 226109 WARNING nova.compute.manager [req-dff71601-78d2-48d2-ab5c-e77830549e1f req-970c0af3-fb31-430d-8fa0-2168866f3ba5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received unexpected event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 for instance with vm_state stopped and task_state rebuild_spawning.
Dec 06 07:24:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:42 compute-1 podman[263062]: 2025-12-06 07:24:42.725583404 +0000 UTC m=+0.570508126 container create 8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 07:24:42 compute-1 ceph-mon[81689]: pgmap v2011: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.3 MiB/s wr, 203 op/s
Dec 06 07:24:43 compute-1 systemd[1]: Started libpod-conmon-8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71.scope.
Dec 06 07:24:43 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:24:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b04f4fe552070197b5e994ffb65d733d87f1e93c8069048527e108bcd2434d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:43.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:43.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.309 226109 DEBUG nova.compute.manager [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.310 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005883.3089492, 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.311 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] VM Started (Lifecycle Event)
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.314 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.318 226109 INFO nova.virt.libvirt.driver [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance spawned successfully.
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.318 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.359 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.365 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.368 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.369 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.369 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.370 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.370 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.371 226109 DEBUG nova.virt.libvirt.driver [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.404 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.405 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005883.3092296, 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.405 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] VM Paused (Lifecycle Event)
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.431 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.434 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005883.3129456, 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.435 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] VM Resumed (Lifecycle Event)
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.444 226109 DEBUG nova.compute.manager [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.452 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:43 compute-1 podman[263062]: 2025-12-06 07:24:43.452791021 +0000 UTC m=+1.297715773 container init 8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.456 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:24:43 compute-1 podman[263062]: 2025-12-06 07:24:43.45829568 +0000 UTC m=+1.303220412 container start 8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:24:43 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[263102]: [NOTICE]   (263116) : New worker (263118) forked
Dec 06 07:24:43 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[263102]: [NOTICE]   (263116) : Loading success.
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.502 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.513 226109 INFO nova.compute.manager [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] bringing vm to original state: 'stopped'
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.606 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.607 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.607 226109 DEBUG nova.compute.manager [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.612 226109 DEBUG nova.compute.manager [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 06 07:24:43 compute-1 kernel: tapf1259ceb-8b (unregistering): left promiscuous mode
Dec 06 07:24:43 compute-1 NetworkManager[49031]: <info>  [1765005883.6508] device (tapf1259ceb-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00413|binding|INFO|Releasing lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 from this chassis (sb_readonly=0)
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00414|binding|INFO|Setting lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 down in Southbound
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.704 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00415|binding|INFO|Removing iface tapf1259ceb-8b ovn-installed in OVS
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.707 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:43.712 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:d9:9e 10.100.0.11'], port_security=['fa:16:3e:a9:d9:9e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '87a430a2-0883-4e5d-a1ac-e8d75c834ac7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f2ed01ce-ee24-45dc-b59f-29fb74c119b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f1259ceb-8bfa-4339-914f-ce900f9a2cf0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.719 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:43 compute-1 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000065.scope: Deactivated successfully.
Dec 06 07:24:43 compute-1 systemd-machined[190302]: Machine qemu-47-instance-00000065 terminated.
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.759 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:43.824 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f1259ceb-8bfa-4339-914f-ce900f9a2cf0 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e unbound from our chassis
Dec 06 07:24:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:43.826 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6209aab-d53f-4d58-9b94-ffb7adc6239e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:24:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:43.827 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[083ae8b3-90f4-4f4b-87fb-592e1dcda847]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:43.827 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace which is not needed anymore
Dec 06 07:24:43 compute-1 kernel: tapf1259ceb-8b: entered promiscuous mode
Dec 06 07:24:43 compute-1 kernel: tapf1259ceb-8b (unregistering): left promiscuous mode
Dec 06 07:24:43 compute-1 NetworkManager[49031]: <info>  [1765005883.8345] manager: (tapf1259ceb-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00416|binding|INFO|Claiming lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 for this chassis.
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.838 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00417|binding|INFO|f1259ceb-8bfa-4339-914f-ce900f9a2cf0: Claiming fa:16:3e:a9:d9:9e 10.100.0.11
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.849 226109 INFO nova.virt.libvirt.driver [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance destroyed successfully.
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.849 226109 DEBUG nova.compute.manager [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:43.852 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:d9:9e 10.100.0.11'], port_security=['fa:16:3e:a9:d9:9e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '87a430a2-0883-4e5d-a1ac-e8d75c834ac7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f2ed01ce-ee24-45dc-b59f-29fb74c119b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f1259ceb-8bfa-4339-914f-ce900f9a2cf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00418|binding|INFO|Setting lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 ovn-installed in OVS
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00419|binding|INFO|Setting lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 up in Southbound
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00420|binding|INFO|Releasing lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 from this chassis (sb_readonly=1)
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00421|binding|INFO|Removing iface tapf1259ceb-8b ovn-installed in OVS
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00422|if_status|INFO|Dropped 1 log messages in last 159 seconds (most recently, 159 seconds ago) due to excessive rate
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00423|if_status|INFO|Not setting lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 down as sb is readonly
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.862 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00424|binding|INFO|Releasing lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 from this chassis (sb_readonly=0)
Dec 06 07:24:43 compute-1 ovn_controller[130279]: 2025-12-06T07:24:43Z|00425|binding|INFO|Setting lport f1259ceb-8bfa-4339-914f-ce900f9a2cf0 down in Southbound
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.878 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:43.882 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:d9:9e 10.100.0.11'], port_security=['fa:16:3e:a9:d9:9e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '87a430a2-0883-4e5d-a1ac-e8d75c834ac7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f2ed01ce-ee24-45dc-b59f-29fb74c119b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=f1259ceb-8bfa-4339-914f-ce900f9a2cf0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:43 compute-1 nova_compute[226101]: 2025-12-06 07:24:43.970 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:44 compute-1 nova_compute[226101]: 2025-12-06 07:24:44.034 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:44 compute-1 nova_compute[226101]: 2025-12-06 07:24:44.035 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:44 compute-1 nova_compute[226101]: 2025-12-06 07:24:44.035 226109 DEBUG nova.objects.instance [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:24:44 compute-1 sudo[263169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:44 compute-1 sudo[263169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:44 compute-1 sudo[263169]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:44 compute-1 nova_compute[226101]: 2025-12-06 07:24:44.106 226109 DEBUG oslo_concurrency.lockutils [None req-f311e725-983f-4db3-ae9a-a625f00326f3 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:44 compute-1 sudo[263194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:24:44 compute-1 sudo[263194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:44 compute-1 sudo[263194]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:44 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[263102]: [NOTICE]   (263116) : haproxy version is 2.8.14-c23fe91
Dec 06 07:24:44 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[263102]: [NOTICE]   (263116) : path to executable is /usr/sbin/haproxy
Dec 06 07:24:44 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[263102]: [WARNING]  (263116) : Exiting Master process...
Dec 06 07:24:44 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[263102]: [WARNING]  (263116) : Exiting Master process...
Dec 06 07:24:44 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[263102]: [ALERT]    (263116) : Current worker (263118) exited with code 143 (Terminated)
Dec 06 07:24:44 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[263102]: [WARNING]  (263116) : All workers exited. Exiting... (0)
Dec 06 07:24:44 compute-1 systemd[1]: libpod-8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71.scope: Deactivated successfully.
Dec 06 07:24:44 compute-1 podman[263156]: 2025-12-06 07:24:44.208665164 +0000 UTC m=+0.297982424 container died 8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:24:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:24:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:24:45 compute-1 ceph-mon[81689]: pgmap v2012: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.4 MiB/s wr, 212 op/s
Dec 06 07:24:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:45.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:45 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71-userdata-shm.mount: Deactivated successfully.
Dec 06 07:24:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-14b04f4fe552070197b5e994ffb65d733d87f1e93c8069048527e108bcd2434d-merged.mount: Deactivated successfully.
Dec 06 07:24:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:45.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.354 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.416 226109 DEBUG nova.compute.manager [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received event network-vif-unplugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.416 226109 DEBUG oslo_concurrency.lockutils [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.417 226109 DEBUG oslo_concurrency.lockutils [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.417 226109 DEBUG oslo_concurrency.lockutils [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.417 226109 DEBUG nova.compute.manager [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] No waiting events found dispatching network-vif-unplugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.417 226109 WARNING nova.compute.manager [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received unexpected event network-vif-unplugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 for instance with vm_state stopped and task_state None.
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.417 226109 DEBUG nova.compute.manager [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.417 226109 DEBUG oslo_concurrency.lockutils [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.417 226109 DEBUG oslo_concurrency.lockutils [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.418 226109 DEBUG oslo_concurrency.lockutils [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.418 226109 DEBUG nova.compute.manager [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] No waiting events found dispatching network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:24:45 compute-1 nova_compute[226101]: 2025-12-06 07:24:45.418 226109 WARNING nova.compute.manager [req-d20c81b4-6aa1-4c0f-9851-71c6980d8eab req-ec0101eb-9c15-4a93-92ae-10b31f4b06c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received unexpected event network-vif-plugged-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 for instance with vm_state stopped and task_state None.
Dec 06 07:24:45 compute-1 podman[263156]: 2025-12-06 07:24:45.666995639 +0000 UTC m=+1.756312889 container cleanup 8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.423 226109 DEBUG oslo_concurrency.lockutils [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.424 226109 DEBUG oslo_concurrency.lockutils [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.424 226109 DEBUG oslo_concurrency.lockutils [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.424 226109 DEBUG oslo_concurrency.lockutils [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.425 226109 DEBUG oslo_concurrency.lockutils [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.425 226109 INFO nova.compute.manager [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Terminating instance
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.426 226109 DEBUG nova.compute.manager [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.431 226109 INFO nova.virt.libvirt.driver [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Instance destroyed successfully.
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.432 226109 DEBUG nova.objects.instance [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'resources' on Instance uuid 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.447 226109 DEBUG nova.virt.libvirt.vif [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:23:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1613212427',display_name='tempest-tempest.common.compute-instance-1613212427',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1613212427',id=101,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:24:43Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-5ax3ju2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:24:44Z,user_data=None,user_id='baddb65c90da47a58d026b0db966f6c8',uuid=87a430a2-0883-4e5d-a1ac-e8d75c834ac7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.447 226109 DEBUG nova.network.os_vif_util [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "address": "fa:16:3e:a9:d9:9e", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1259ceb-8b", "ovs_interfaceid": "f1259ceb-8bfa-4339-914f-ce900f9a2cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.448 226109 DEBUG nova.network.os_vif_util [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.449 226109 DEBUG os_vif [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.451 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.451 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1259ceb-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.455 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:24:46 compute-1 nova_compute[226101]: 2025-12-06 07:24:46.458 226109 INFO os_vif [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:d9:9e,bridge_name='br-int',has_traffic_filtering=True,id=f1259ceb-8bfa-4339-914f-ce900f9a2cf0,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1259ceb-8b')
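The sequence above is nova converting its legacy vif dict into an os-vif VIFOpenVSwitch object and handing it to os_vif.unplug(), which delegates to the 'ovs' plugin. A minimal sketch of driving that path directly, reusing identifiers from the log; the exact object construction here is an assumption, not nova's code:

    # Sketch only: field values copied from the log entries above.
    import os_vif
    from os_vif.objects import instance_info as osv_instance
    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    os_vif.initialize()  # registers objects and loads plugins, incl. 'ovs'

    network = osv_network.Network(id='f6209aab-d53f-4d58-9b94-ffb7adc6239e',
                                  bridge='br-int')
    vif = osv_vif.VIFOpenVSwitch(id='f1259ceb-8bfa-4339-914f-ce900f9a2cf0',
                                 address='fa:16:3e:a9:d9:9e',
                                 bridge_name='br-int',
                                 vif_name='tapf1259ceb-8b',
                                 network=network)
    instance = osv_instance.InstanceInfo(
        uuid='87a430a2-0883-4e5d-a1ac-e8d75c834ac7',
        name='tempest-tempest.common.compute-instance-1613212427')

    # The OVS plugin removes the port from br-int, as logged at 07:24:46.458.
    os_vif.unplug(vif, instance)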
Dec 06 07:24:46 compute-1 ceph-mon[81689]: pgmap v2013: 305 pgs: 305 active+clean; 307 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 265 op/s
Dec 06 07:24:46 compute-1 podman[263235]: 2025-12-06 07:24:46.997905433 +0000 UTC m=+1.308739403 container remove 8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.007 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[98601010-450b-452d-9c6b-bd143d38e0a2]: (4, ('Sat Dec  6 07:24:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71)\n8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71\nSat Dec  6 07:24:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71)\n8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.009 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c621f692-50db-41ed-91eb-54cc9d664d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.010 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
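Both DelPortCommand lines (nova's for the instance tap and the metadata agent's for tapf6209aab-d0) are ovsdbapp transactions against the local Open vSwitch database. A sketch of issuing the same deletion standalone; the ovsdb-server socket path is an assumption:

    # Sketch: reproduce DelPortCommand(port=..., bridge=br-int, if_exists=True).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True makes the delete a no-op when the port is already gone.
    api.del_port('tapf1259ceb-8b', bridge='br-int',
                 if_exists=True).execute(check_error=True)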
Dec 06 07:24:47 compute-1 kernel: tapf6209aab-d0: left promiscuous mode
Dec 06 07:24:47 compute-1 systemd[1]: libpod-conmon-8c2ce3d3f522fccf9d338541d5af19047bc8dff2d706c249f2f8980a21807a71.scope: Deactivated successfully.
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.074 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[74deb078-5a4b-45d8-a638-c3b27290b2c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.092 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[01a65457-3b76-4a9a-9bd0-24c59d8f2cea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.093 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a473ece9-13fc-4aa5-8845-5d16bf5ca369]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.114 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[686cdee2-aacb-488e-827a-5603ec316750]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616111, 'reachable_time': 29927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263257, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:47 compute-1 systemd[1]: run-netns-ovnmeta\x2df6209aab\x2dd53f\x2d4d58\x2d9b94\x2dffb7adc6239e.mount: Deactivated successfully.
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.118 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.119 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c576d9-ea61-4bc9-a5fe-37eba278a7c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
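remove_netns runs in the privsep daemon and deletes the ovnmeta- namespace once its haproxy container and tap device are gone; neutron's privileged ip_lib is a thin wrapper over pyroute2 here. A sketch with the namespace name copied from the log:

    # Sketch: delete a named network namespace the way the agent does.
    from pyroute2 import netns

    ns = 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e'
    if ns in netns.listnetns():
        netns.remove(ns)  # unmounts and removes /run/netns/<ns>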
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.120 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f1259ceb-8bfa-4339-914f-ce900f9a2cf0 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e unbound from our chassis
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.121 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6209aab-d53f-4d58-9b94-ffb7adc6239e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.122 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[edf6bc42-35eb-44f1-9c63-d9648b6eb49b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.123 139580 INFO neutron.agent.ovn.metadata.agent [-] Port f1259ceb-8bfa-4339-914f-ce900f9a2cf0 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e unbound from our chassis
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.124 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6209aab-d53f-4d58-9b94-ffb7adc6239e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:24:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:47.125 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4e260f58-0179-4af6-9630-f16c5a70f320]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:47.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:47.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:47 compute-1 nova_compute[226101]: 2025-12-06 07:24:47.854 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:48 compute-1 ceph-mon[81689]: pgmap v2014: 305 pgs: 305 active+clean; 307 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 467 KiB/s wr, 197 op/s
Dec 06 07:24:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:49.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:49.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:50 compute-1 nova_compute[226101]: 2025-12-06 07:24:50.017 226109 INFO nova.virt.libvirt.driver [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Deleting instance files /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7_del
Dec 06 07:24:50 compute-1 nova_compute[226101]: 2025-12-06 07:24:50.018 226109 INFO nova.virt.libvirt.driver [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Deletion of /var/lib/nova/instances/87a430a2-0883-4e5d-a1ac-e8d75c834ac7_del complete
Dec 06 07:24:50 compute-1 nova_compute[226101]: 2025-12-06 07:24:50.082 226109 INFO nova.compute.manager [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Took 3.66 seconds to destroy the instance on the hypervisor.
Dec 06 07:24:50 compute-1 nova_compute[226101]: 2025-12-06 07:24:50.082 226109 DEBUG oslo.service.loopingcall [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:24:50 compute-1 nova_compute[226101]: 2025-12-06 07:24:50.082 226109 DEBUG nova.compute.manager [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:24:50 compute-1 nova_compute[226101]: 2025-12-06 07:24:50.083 226109 DEBUG nova.network.neutron [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:24:50 compute-1 nova_compute[226101]: 2025-12-06 07:24:50.358 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:50 compute-1 ceph-mon[81689]: pgmap v2015: 305 pgs: 305 active+clean; 301 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.5 MiB/s wr, 244 op/s
Dec 06 07:24:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:51.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:51.171 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:51.172 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
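The "Matched UPDATE: SbGlobalUpdateEvent" line is ovsdbapp's row-event machinery: the agent registers an event on the SB_Global table and, when nb_cfg moves, schedules the delayed Chassis_Private update that appears at 07:24:55. A minimal sketch of such an event class; the handler body is illustrative:

    # Sketch of an ovsdbapp RowEvent like the one matched above.
    from ovsdbapp.backend.ovs_idl import event

    class SbGlobalUpdateEvent(event.RowEvent):
        def __init__(self):
            # Fire on updates to the single SB_Global row, no conditions.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.event_name = 'SbGlobalUpdateEvent'

        def run(self, event_type, row, old):
            # e.g. delay, then ack row.nb_cfg into Chassis_Private external_ids
            print('nb_cfg moved to', row.nb_cfg)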
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.172 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.196 226109 DEBUG nova.network.neutron [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.212 226109 INFO nova.compute.manager [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Took 1.13 seconds to deallocate network for instance.
Dec 06 07:24:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:51.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.253 226109 DEBUG oslo_concurrency.lockutils [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.254 226109 DEBUG oslo_concurrency.lockutils [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
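The acquire/release pairs around "compute_resources" are oslo.concurrency named locks; every resource-tracker mutation serializes on the same semaphore. A sketch of the pattern, with an illustrative function name:

    # Sketch: nova-style named lock via oslo.concurrency.
    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('compute_resources')
    def update_usage():
        # runs only while holding the "compute_resources" semaphore,
        # mirroring the acquired/released lines above
        pass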
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.284 226109 DEBUG nova.compute.manager [req-87582121-37cb-499f-bfa8-3aea2716f684 req-a21035e9-b6cb-4bcb-a98a-4163b59144d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Received event network-vif-deleted-f1259ceb-8bfa-4339-914f-ce900f9a2cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.313 226109 DEBUG oslo_concurrency.processutils [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.454 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:24:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3534498576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.792 226109 DEBUG oslo_concurrency.processutils [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
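Each "Running cmd (subprocess)" / "returned: 0 in N s" pair is oslo.concurrency's processutils.execute. A sketch of the ceph df probe with the arguments copied from the log:

    # Sketch: the ceph capacity probe nova runs for its RBD backend.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)  # pool and cluster usage, feeds DISK_GB inventory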
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.799 226109 DEBUG nova.compute.provider_tree [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.822 226109 DEBUG nova.scheduler.client.report [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.848 226109 DEBUG oslo_concurrency.lockutils [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.881 226109 INFO nova.scheduler.client.report [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Deleted allocations for instance 87a430a2-0883-4e5d-a1ac-e8d75c834ac7
Dec 06 07:24:51 compute-1 nova_compute[226101]: 2025-12-06 07:24:51.929 226109 DEBUG oslo_concurrency.lockutils [None req-a9abbbd0-f27e-47b1-98aa-6ff94a714f4b baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "87a430a2-0883-4e5d-a1ac-e8d75c834ac7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3534498576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:53.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:53.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:53 compute-1 ceph-mon[81689]: pgmap v2016: 305 pgs: 305 active+clean; 283 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.9 MiB/s wr, 252 op/s
Dec 06 07:24:54 compute-1 ceph-mon[81689]: pgmap v2017: 305 pgs: 305 active+clean; 291 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 197 op/s
Dec 06 07:24:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:55.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:24:55.174 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
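This DbSetCommand acknowledges nb_cfg 38 by writing neutron:ovn-metadata-sb-cfg into the agent's Chassis_Private row. A sketch of the same write through ovsdbapp's generic db_set; the southbound socket path is an assumption:

    # Sketch: ack nb_cfg in Chassis_Private, mirroring the logged transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/ovn/ovnsb_db.sock',
                                          'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))

    sb.db_set('Chassis_Private', '03fe054d-d727-4af3-9c5e-92e57505f242',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),
              if_exists=True).execute(check_error=True)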
Dec 06 07:24:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:55.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:55 compute-1 nova_compute[226101]: 2025-12-06 07:24:55.360 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:56 compute-1 nova_compute[226101]: 2025-12-06 07:24:56.457 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:57.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:57.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:57 compute-1 ceph-mon[81689]: pgmap v2018: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 227 op/s
Dec 06 07:24:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:58 compute-1 nova_compute[226101]: 2025-12-06 07:24:58.847 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005883.845841, 87a430a2-0883-4e5d-a1ac-e8d75c834ac7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:24:58 compute-1 nova_compute[226101]: 2025-12-06 07:24:58.848 226109 INFO nova.compute.manager [-] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] VM Stopped (Lifecycle Event)
Dec 06 07:24:58 compute-1 nova_compute[226101]: 2025-12-06 07:24:58.870 226109 DEBUG nova.compute.manager [None req-c92eecf3-9f3d-422c-9ce5-525671a2f42d - - - - - -] [instance: 87a430a2-0883-4e5d-a1ac-e8d75c834ac7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:59.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:24:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:59.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:00 compute-1 nova_compute[226101]: 2025-12-06 07:25:00.361 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:00 compute-1 ceph-mon[81689]: pgmap v2019: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.2 MiB/s wr, 161 op/s
Dec 06 07:25:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:01.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:01.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.330 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.331 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.366 226109 DEBUG nova.compute.manager [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.443 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.444 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.449 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.450 226109 INFO nova.compute.claims [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.467 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.581 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.603 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.604 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.626 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.627 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.627 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
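The _reclaim_queued_deletes skip is a plain oslo.config guard: with reclaim_instance_interval at or below zero, soft-deleted instances are never reclaimed by this periodic task. A sketch of the guard; 0 matches nova's documented default, but treat the registration here as an assumption:

    # Sketch: the config guard behind the "skipping..." line above.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    def _reclaim_queued_deletes():
        if CONF.reclaim_instance_interval <= 0:
            return  # reclaim disabled, periodic task is a no-op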
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.627 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:01.642 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:01.642 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:01.642 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:01 compute-1 nova_compute[226101]: 2025-12-06 07:25:01.656 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:02 compute-1 podman[263308]: 2025-12-06 07:25:02.158364268 +0000 UTC m=+0.131356244 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:25:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:03.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:03 compute-1 ceph-mon[81689]: pgmap v2020: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.2 MiB/s wr, 175 op/s
Dec 06 07:25:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/970776473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:03.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:04 compute-1 ceph-mon[81689]: pgmap v2021: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 703 KiB/s rd, 3.2 MiB/s wr, 138 op/s
Dec 06 07:25:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3158887967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:25:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/278321561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.364 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.783s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.368 226109 DEBUG nova.compute.provider_tree [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.464 226109 DEBUG nova.scheduler.client.report [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.741 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.741 226109 DEBUG nova.compute.manager [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.744 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 3.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.744 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.744 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.745 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.826 226109 DEBUG nova.compute.manager [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.827 226109 DEBUG nova.network.neutron [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.851 226109 INFO nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.868 226109 DEBUG nova.compute.manager [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.962 226109 DEBUG nova.compute.manager [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.964 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.965 226109 INFO nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Creating image(s)
Dec 06 07:25:04 compute-1 nova_compute[226101]: 2025-12-06 07:25:04.994 226109 DEBUG nova.storage.rbd_utils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
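The "rbd image ... does not exist" probes come from nova's rbd_utils opening the target image through the Ceph python bindings and treating ImageNotFound as "safe to create". A sketch of that check, with the pool, client id and image name from the log:

    # Sketch: existence probe behind the DEBUG lines above.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        rbd.Image(ioctx, '8ace964d-3645-4195-8f65-8b625bce1b00_disk').close()
    except rbd.ImageNotFound:
        pass  # image absent, nova imports the base file below
    finally:
        ioctx.close()
        cluster.shutdown()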
Dec 06 07:25:05 compute-1 ceph-mon[81689]: pgmap v2022: 305 pgs: 305 active+clean; 306 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 440 KiB/s rd, 2.3 MiB/s wr, 94 op/s
Dec 06 07:25:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/278321561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:05 compute-1 podman[263385]: 2025-12-06 07:25:05.0638108 +0000 UTC m=+0.048989510 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 07:25:05 compute-1 podman[263381]: 2025-12-06 07:25:05.070621185 +0000 UTC m=+0.058297773 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.099 226109 DEBUG nova.storage.rbd_utils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.123 226109 DEBUG nova.storage.rbd_utils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.127 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.162 226109 DEBUG nova.policy [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'baddb65c90da47a58d026b0db966f6c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '001e2256cb8b430d93c1ff613010d199', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
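The failed network:attach_external_network check is oslo.policy evaluating the caller's member/reader roles; failure here only means the non-admin code path is taken. A sketch of such a check; the enforcer setup and the 'is_admin:True' default are assumptions:

    # Sketch: policy check like the one logged above.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))

    creds = {'user_id': 'baddb65c90da47a58d026b0db966f6c8',
             'project_id': '001e2256cb8b430d93c1ff613010d199',
             'roles': ['member', 'reader']}
    allowed = enforcer.authorize('network:attach_external_network',
                                 {'project_id': creds['project_id']},
                                 creds, do_raise=False)  # False for member/reader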
Dec 06 07:25:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:05.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.190 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
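The qemu-img probe is wrapped in oslo_concurrency.prlimit to cap its address space at 1 GiB and CPU time at 30 s, exactly as the logged command line shows. A sketch using processutils' prlimit support, with the limits and path from the log:

    # Sketch: resource-limited qemu-img info, as in the logged command.
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,
                                        cpu_time=30)
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json',
        prlimit=limits, env_variables={'LC_ALL': 'C', 'LANG': 'C'})
    info = json.loads(out)  # virtual size, format, backing file, ...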
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.191 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.192 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.192 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.212 226109 DEBUG nova.storage.rbd_utils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.214 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8ace964d-3645-4195-8f65-8b625bce1b00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:25:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1659571722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.242 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:25:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:05.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.363 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.458 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.460 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4573MB free_disk=20.835834503173828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.460 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.461 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.521 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 8ace964d-3645-4195-8f65-8b625bce1b00 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.521 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.522 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.555 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.785 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8ace964d-3645-4195-8f65-8b625bce1b00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.854 226109 DEBUG nova.storage.rbd_utils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] resizing rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
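
rbd import copies the flat base file into the vms pool, and rbd_utils then grows the image to the flavor's 1 GiB root disk through librbd rather than the CLI. A sketch of that resize step with the python-rbd bindings, assuming the pool, image name, and ceph user from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    with rbd.Image(ioctx, "8ace964d-3645-4195-8f65-8b625bce1b00_disk") as image:
        image.resize(1073741824)  # bytes, matching "resizing rbd image ... to 1073741824"
    ioctx.close()
    cluster.shutdown()
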
Dec 06 07:25:05 compute-1 nova_compute[226101]: 2025-12-06 07:25:05.981 226109 DEBUG nova.objects.instance [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'migration_context' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1659571722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.011 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.011 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Ensure instance console log exists: /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.012 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.012 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.013 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:25:06 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3503138643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.045 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.053 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.076 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
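
Placement derives schedulable capacity from this inventory as (total - reserved) x allocation_ratio per resource class; checking the numbers the report client just logged:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1 -> schedulable units on this node
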
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.115 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.116 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.518 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:06 compute-1 nova_compute[226101]: 2025-12-06 07:25:06.954 226109 DEBUG nova.network.neutron [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Successfully created port: eec9d8a5-6252-4c93-b74b-aecc3b28521f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:25:07 compute-1 ceph-mon[81689]: pgmap v2023: 305 pgs: 305 active+clean; 231 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Dec 06 07:25:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3503138643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:07 compute-1 nova_compute[226101]: 2025-12-06 07:25:07.079 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:07 compute-1 nova_compute[226101]: 2025-12-06 07:25:07.080 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:07 compute-1 nova_compute[226101]: 2025-12-06 07:25:07.080 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:07.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:25:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:07.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:25:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1281897683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1999743962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/904149066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:08 compute-1 nova_compute[226101]: 2025-12-06 07:25:08.228 226109 DEBUG nova.network.neutron [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Successfully updated port: eec9d8a5-6252-4c93-b74b-aecc3b28521f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:25:08 compute-1 nova_compute[226101]: 2025-12-06 07:25:08.251 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "refresh_cache-8ace964d-3645-4195-8f65-8b625bce1b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:25:08 compute-1 nova_compute[226101]: 2025-12-06 07:25:08.251 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquired lock "refresh_cache-8ace964d-3645-4195-8f65-8b625bce1b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:25:08 compute-1 nova_compute[226101]: 2025-12-06 07:25:08.251 226109 DEBUG nova.network.neutron [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:25:08 compute-1 nova_compute[226101]: 2025-12-06 07:25:08.346 226109 DEBUG nova.compute.manager [req-2448a89b-0923-48cd-99a3-407d59aa4e97 req-3a755806-08c6-4239-bef6-afe29e2750bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-changed-eec9d8a5-6252-4c93-b74b-aecc3b28521f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:08 compute-1 nova_compute[226101]: 2025-12-06 07:25:08.346 226109 DEBUG nova.compute.manager [req-2448a89b-0923-48cd-99a3-407d59aa4e97 req-3a755806-08c6-4239-bef6-afe29e2750bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Refreshing instance network info cache due to event network-changed-eec9d8a5-6252-4c93-b74b-aecc3b28521f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:25:08 compute-1 nova_compute[226101]: 2025-12-06 07:25:08.347 226109 DEBUG oslo_concurrency.lockutils [req-2448a89b-0923-48cd-99a3-407d59aa4e97 req-3a755806-08c6-4239-bef6-afe29e2750bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-8ace964d-3645-4195-8f65-8b625bce1b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:25:08 compute-1 nova_compute[226101]: 2025-12-06 07:25:08.454 226109 DEBUG nova.network.neutron [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:25:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:08 compute-1 nova_compute[226101]: 2025-12-06 07:25:08.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:09.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:09.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.061 226109 DEBUG nova.network.neutron [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Updating instance_info_cache with network_info: [{"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
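
The instance_info_cache blob above is a JSON list of VIFs; the fixed address the guest will answer on can be pulled out mechanically. A sketch, with the function argument standing in for the logged JSON:

    import json

    def fixed_ips(network_info_json: str) -> list[str]:
        # network_info_json: the JSON list logged by update_instance_cache_with_nw_info
        return [ip["address"]
                for vif in json.loads(network_info_json)
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]]

    # fixed_ips(logged_blob) -> ["10.100.0.3"] for port eec9d8a5-6252-4c93-b74b-aecc3b28521f
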
Dec 06 07:25:10 compute-1 ceph-mon[81689]: pgmap v2024: 305 pgs: 305 active+clean; 231 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 277 KiB/s rd, 126 KiB/s wr, 59 op/s
Dec 06 07:25:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1966551518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3840847270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:25:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3840847270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.328 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Releasing lock "refresh_cache-8ace964d-3645-4195-8f65-8b625bce1b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.329 226109 DEBUG nova.compute.manager [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Instance network_info: |[{"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.329 226109 DEBUG oslo_concurrency.lockutils [req-2448a89b-0923-48cd-99a3-407d59aa4e97 req-3a755806-08c6-4239-bef6-afe29e2750bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-8ace964d-3645-4195-8f65-8b625bce1b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.329 226109 DEBUG nova.network.neutron [req-2448a89b-0923-48cd-99a3-407d59aa4e97 req-3a755806-08c6-4239-bef6-afe29e2750bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Refreshing network info cache for port eec9d8a5-6252-4c93-b74b-aecc3b28521f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.332 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Start _get_guest_xml network_info=[{"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.336 226109 WARNING nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.940 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.943 226109 DEBUG nova.virt.libvirt.host [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.943 226109 DEBUG nova.virt.libvirt.host [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.947 226109 DEBUG nova.virt.libvirt.host [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.948 226109 DEBUG nova.virt.libvirt.host [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.949 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.949 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.950 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.950 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.950 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.950 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.950 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.950 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.951 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.951 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.951 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.951 226109 DEBUG nova.virt.hardware [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
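
With no hw:cpu_sockets/cores/threads set on the m1.nano flavor or the image, the preferred and maximum topologies stay at the 0/65536 defaults logged above, so the search reduces to factorizations of vcpus=1, and 1:1:1 is the only candidate. An illustrative enumeration (not Nova's exact code):

    vcpus = 1
    topologies = [(s, c, t)
                  for s in range(1, vcpus + 1)
                  for c in range(1, vcpus + 1)
                  for t in range(1, vcpus + 1)
                  if s * c * t == vcpus]
    print(topologies)  # [(1, 1, 1)] -- matches "Possible topologies" above
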
Dec 06 07:25:10 compute-1 nova_compute[226101]: 2025-12-06 07:25:10.954 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:11.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:11.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.403 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.404 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.424 226109 DEBUG nova.compute.manager [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:25:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2880801786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:11 compute-1 ceph-mon[81689]: pgmap v2025: 305 pgs: 305 active+clean; 275 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 3.1 MiB/s wr, 122 op/s
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.520 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.715 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.715 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.720 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.721 226109 INFO nova.compute.claims [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:25:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:25:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2845940397' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.812 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.859s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
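
Nova runs ceph mon dump here to learn the monitor endpoints it must embed as <host> entries in the guest's RBD disk XML. A sketch of the same lookup, assuming the mon-dump JSON carries the usual mons/public_addr fields:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mons = json.loads(out)["mons"]
    hosts = [m["public_addr"].rsplit("/", 1)[0] for m in mons]  # drop the /nonce suffix
    print(hosts)  # e.g. ["192.168.122.100:6789", ...]
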
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.839 226109 DEBUG nova.storage.rbd_utils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.844 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:11 compute-1 nova_compute[226101]: 2025-12-06 07:25:11.915 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:25:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4110808184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:25:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1268851066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.373 226109 DEBUG nova.network.neutron [req-2448a89b-0923-48cd-99a3-407d59aa4e97 req-3a755806-08c6-4239-bef6-afe29e2750bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Updated VIF entry in instance network info cache for port eec9d8a5-6252-4c93-b74b-aecc3b28521f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.374 226109 DEBUG nova.network.neutron [req-2448a89b-0923-48cd-99a3-407d59aa4e97 req-3a755806-08c6-4239-bef6-afe29e2750bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Updating instance_info_cache with network_info: [{"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.376 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.383 226109 DEBUG nova.compute.provider_tree [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.404 226109 DEBUG nova.scheduler.client.report [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.410 226109 DEBUG oslo_concurrency.lockutils [req-2448a89b-0923-48cd-99a3-407d59aa4e97 req-3a755806-08c6-4239-bef6-afe29e2750bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-8ace964d-3645-4195-8f65-8b625bce1b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.440 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.441 226109 DEBUG nova.compute.manager [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.488 226109 DEBUG nova.compute.manager [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.489 226109 DEBUG nova.network.neutron [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:25:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3014821163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2845940397' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:12 compute-1 ceph-mon[81689]: pgmap v2026: 305 pgs: 305 active+clean; 327 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 236 KiB/s rd, 4.9 MiB/s wr, 144 op/s
Dec 06 07:25:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1981577113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.537 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.538 226109 DEBUG nova.virt.libvirt.vif [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2051423871',display_name='tempest-tempest.common.compute-instance-2051423871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2051423871',id=105,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOmzXnyuZ/cQZ/wXv6i6+zthJNB9rLFLpJiO6CxE3XLRu8gE5HiPWRQFyDfJKugoxRwZmDUcohMXjvZhPRaFSbcXur6WAaqewZuOab5xSDtPso9c70pnfsVwIX5QAnRONw==',key_name='tempest-keypair-2072622459',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-tln6vdkz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=8ace964d-3645-4195-8f65-8b625bce1b00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.538 226109 DEBUG nova.network.os_vif_util [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.539 226109 DEBUG nova.network.os_vif_util [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.540 226109 DEBUG nova.objects.instance [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.560 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:25:12 compute-1 nova_compute[226101]:   <uuid>8ace964d-3645-4195-8f65-8b625bce1b00</uuid>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   <name>instance-00000069</name>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <nova:name>tempest-tempest.common.compute-instance-2051423871</nova:name>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:25:10</nova:creationTime>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <nova:user uuid="baddb65c90da47a58d026b0db966f6c8">tempest-ServerActionsTestOtherA-1949739102-project-member</nova:user>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <nova:project uuid="001e2256cb8b430d93c1ff613010d199">tempest-ServerActionsTestOtherA-1949739102</nova:project>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <nova:port uuid="eec9d8a5-6252-4c93-b74b-aecc3b28521f">
Dec 06 07:25:12 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <system>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <entry name="serial">8ace964d-3645-4195-8f65-8b625bce1b00</entry>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <entry name="uuid">8ace964d-3645-4195-8f65-8b625bce1b00</entry>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     </system>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   <os>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   </os>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   <features>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   </features>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8ace964d-3645-4195-8f65-8b625bce1b00_disk">
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       </source>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8ace964d-3645-4195-8f65-8b625bce1b00_disk.config">
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       </source>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:25:12 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:0b:62:03"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <target dev="tapeec9d8a5-62"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/console.log" append="off"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <video>
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     </video>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:25:12 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:25:12 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:25:12 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:25:12 compute-1 nova_compute[226101]: </domain>
Dec 06 07:25:12 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.562 226109 DEBUG nova.compute.manager [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Preparing to wait for external event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.562 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.563 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.563 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.564 226109 DEBUG nova.virt.libvirt.vif [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2051423871',display_name='tempest-tempest.common.compute-instance-2051423871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2051423871',id=105,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOmzXnyuZ/cQZ/wXv6i6+zthJNB9rLFLpJiO6CxE3XLRu8gE5HiPWRQFyDfJKugoxRwZmDUcohMXjvZhPRaFSbcXur6WAaqewZuOab5xSDtPso9c70pnfsVwIX5QAnRONw==',key_name='tempest-keypair-2072622459',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-tln6vdkz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=8ace964d-3645-4195-8f65-8b625bce1b00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.564 226109 DEBUG nova.network.os_vif_util [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.565 226109 DEBUG nova.network.os_vif_util [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.565 226109 DEBUG os_vif [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.567 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.567 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.568 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.568 226109 INFO nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.573 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.573 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeec9d8a5-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.574 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeec9d8a5-62, col_values=(('external_ids', {'iface-id': 'eec9d8a5-6252-4c93-b74b-aecc3b28521f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:62:03', 'vm-uuid': '8ace964d-3645-4195-8f65-8b625bce1b00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.587 226109 DEBUG nova.compute.manager [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:25:12 compute-1 NetworkManager[49031]: <info>  [1765005912.6171] manager: (tapeec9d8a5-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.617 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.619 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.623 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.625 226109 INFO os_vif [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62')
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.714 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.715 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.715 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No VIF found with MAC fa:16:3e:0b:62:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.716 226109 INFO nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Using config drive
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.739 226109 DEBUG nova.storage.rbd_utils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.752 226109 DEBUG nova.compute.manager [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.754 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.754 226109 INFO nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Creating image(s)
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.784 226109 DEBUG nova.storage.rbd_utils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image f32ea15c-cf80-482c-9f9a-22392bc79e78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.813 226109 DEBUG nova.storage.rbd_utils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image f32ea15c-cf80-482c-9f9a-22392bc79e78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.842 226109 DEBUG nova.storage.rbd_utils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image f32ea15c-cf80-482c-9f9a-22392bc79e78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.846 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.874 226109 DEBUG nova.policy [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd966fefcb38a45219b9cc637c46a3d62', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.918 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.919 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.920 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.920 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.949 226109 DEBUG nova.storage.rbd_utils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image f32ea15c-cf80-482c-9f9a-22392bc79e78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:12 compute-1 nova_compute[226101]: 2025-12-06 07:25:12.953 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f32ea15c-cf80-482c-9f9a-22392bc79e78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:13.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:13 compute-1 nova_compute[226101]: 2025-12-06 07:25:13.238 226109 INFO nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Creating config drive at /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config
Dec 06 07:25:13 compute-1 nova_compute[226101]: 2025-12-06 07:25:13.244 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgav7ylmi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:13.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:13 compute-1 nova_compute[226101]: 2025-12-06 07:25:13.377 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgav7ylmi" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4110808184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1268851066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/534500885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.159 226109 DEBUG nova.storage.rbd_utils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.162 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config 8ace964d-3645-4195-8f65-8b625bce1b00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.187 226109 DEBUG nova.network.neutron [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Successfully created port: fc5020f3-51a1-4ca2-b0b5-ff2add84607f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.191 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.209 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f32ea15c-cf80-482c-9f9a-22392bc79e78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.278 226109 DEBUG nova.storage.rbd_utils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] resizing rbd image f32ea15c-cf80-482c-9f9a-22392bc79e78_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.332 226109 DEBUG oslo_concurrency.processutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config 8ace964d-3645-4195-8f65-8b625bce1b00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.333 226109 INFO nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Deleting local config drive /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config because it was imported into RBD.
Dec 06 07:25:14 compute-1 kernel: tapeec9d8a5-62: entered promiscuous mode
Dec 06 07:25:14 compute-1 NetworkManager[49031]: <info>  [1765005914.3853] manager: (tapeec9d8a5-62): new Tun device (/org/freedesktop/NetworkManager/Devices/193)
Dec 06 07:25:14 compute-1 ovn_controller[130279]: 2025-12-06T07:25:14Z|00426|binding|INFO|Claiming lport eec9d8a5-6252-4c93-b74b-aecc3b28521f for this chassis.
Dec 06 07:25:14 compute-1 ovn_controller[130279]: 2025-12-06T07:25:14Z|00427|binding|INFO|eec9d8a5-6252-4c93-b74b-aecc3b28521f: Claiming fa:16:3e:0b:62:03 10.100.0.3
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.384 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:14 compute-1 ovn_controller[130279]: 2025-12-06T07:25:14Z|00428|binding|INFO|Setting lport eec9d8a5-6252-4c93-b74b-aecc3b28521f ovn-installed in OVS
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.402 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.406 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:14 compute-1 systemd-udevd[263897]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:25:14 compute-1 systemd-machined[190302]: New machine qemu-48-instance-00000069.
Dec 06 07:25:14 compute-1 NetworkManager[49031]: <info>  [1765005914.4232] device (tapeec9d8a5-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:25:14 compute-1 NetworkManager[49031]: <info>  [1765005914.4239] device (tapeec9d8a5-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:25:14 compute-1 systemd[1]: Started Virtual Machine qemu-48-instance-00000069.
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.493 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:62:03 10.100.0.3'], port_security=['fa:16:3e:0b:62:03 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8ace964d-3645-4195-8f65-8b625bce1b00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee1f9954-c558-49c3-9382-8e1ede273255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=eec9d8a5-6252-4c93-b74b-aecc3b28521f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:25:14 compute-1 ovn_controller[130279]: 2025-12-06T07:25:14Z|00429|binding|INFO|Setting lport eec9d8a5-6252-4c93-b74b-aecc3b28521f up in Southbound
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.494 139580 INFO neutron.agent.ovn.metadata.agent [-] Port eec9d8a5-6252-4c93-b74b-aecc3b28521f in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e bound to our chassis
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.496 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.506 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e4fa0c38-a7f2-4a13-840f-d134c491e34d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.507 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6209aab-d1 in ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.510 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6209aab-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.510 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[be59803e-1814-4b9a-930f-10dcb1c24ace]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.511 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d2be43-bb6a-4713-a62d-19eab6399b91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.523 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd309d8-36ef-4606-86df-a498f05a979a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.535 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c4dfbc99-b1ae-479c-ae15-82633a294fa4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.562 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa9ab1f-1521-4aaa-9657-062859e49524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 NetworkManager[49031]: <info>  [1765005914.5703] manager: (tapf6209aab-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/194)
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.571 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1f990815-5602-432a-8e0c-d90feb41c2cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.607 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc8e69a-af21-479e-9682-4e738a3f34d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.610 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2f9f04c5-f293-45eb-9d8e-2dfcb7cdc858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 NetworkManager[49031]: <info>  [1765005914.6376] device (tapf6209aab-d0): carrier: link connected
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.643 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[1b23794b-987b-43ad-9ea0-4f4206732857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.659 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[faed6958-dfb7-46d6-b0d2-8768ac2ff17f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619414, 'reachable_time': 40975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263931, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.675 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c9edd01e-33ff-4087-91cd-09febe9afbee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:c5a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619414, 'tstamp': 619414}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263932, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.692 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[306976cc-4345-4491-981d-fa05fb472a88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619414, 'reachable_time': 40975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263933, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.720 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f48fd3-05ee-4c39-a8d4-4e9dbe7b8b9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.779 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[85577bc0-1bf8-49ca-83a6-c0dd3c0c828d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.781 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.781 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.782 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6209aab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:14 compute-1 NetworkManager[49031]: <info>  [1765005914.7846] manager: (tapf6209aab-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Dec 06 07:25:14 compute-1 kernel: tapf6209aab-d0: entered promiscuous mode
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.785 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.788 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6209aab-d0, col_values=(('external_ids', {'iface-id': '1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:14 compute-1 ovn_controller[130279]: 2025-12-06T07:25:14Z|00430|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.789 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.791 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.792 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[31044c08-2896-4ee3-9573-5fd3c8b2ce9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.792 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:25:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:14.795 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'env', 'PROCESS_TAG=haproxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6209aab-d53f-4d58-9b94-ffb7adc6239e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:25:14 compute-1 nova_compute[226101]: 2025-12-06 07:25:14.804 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:15 compute-1 ceph-mon[81689]: pgmap v2027: 305 pgs: 305 active+clean; 342 MiB data, 880 MiB used, 20 GiB / 21 GiB avail; 132 KiB/s rd, 5.1 MiB/s wr, 139 op/s
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.080 226109 DEBUG nova.compute.manager [req-f98bf125-2437-4dd4-81e3-cc83c8495cd0 req-d71c8aa3-9828-40ce-89b7-f706c6c4a6bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.081 226109 DEBUG oslo_concurrency.lockutils [req-f98bf125-2437-4dd4-81e3-cc83c8495cd0 req-d71c8aa3-9828-40ce-89b7-f706c6c4a6bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.081 226109 DEBUG oslo_concurrency.lockutils [req-f98bf125-2437-4dd4-81e3-cc83c8495cd0 req-d71c8aa3-9828-40ce-89b7-f706c6c4a6bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.081 226109 DEBUG oslo_concurrency.lockutils [req-f98bf125-2437-4dd4-81e3-cc83c8495cd0 req-d71c8aa3-9828-40ce-89b7-f706c6c4a6bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.082 226109 DEBUG nova.compute.manager [req-f98bf125-2437-4dd4-81e3-cc83c8495cd0 req-d71c8aa3-9828-40ce-89b7-f706c6c4a6bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Processing event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:25:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:15.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:15 compute-1 podman[263978]: 2025-12-06 07:25:15.123738162 +0000 UTC m=+0.023936311 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:25:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:15.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:15 compute-1 podman[263978]: 2025-12-06 07:25:15.500497462 +0000 UTC m=+0.400695581 container create 674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.520 226109 DEBUG nova.objects.instance [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'migration_context' on Instance uuid f32ea15c-cf80-482c-9f9a-22392bc79e78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.599 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.600 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Ensure instance console log exists: /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.600 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.601 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.601 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.642 226109 DEBUG nova.compute.manager [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.644 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005915.6442125, 8ace964d-3645-4195-8f65-8b625bce1b00 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.644 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] VM Started (Lifecycle Event)
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.646 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.649 226109 INFO nova.virt.libvirt.driver [-] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Instance spawned successfully.
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.649 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:25:15 compute-1 systemd[1]: Started libpod-conmon-674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1.scope.
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.672 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.679 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:25:15 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.683 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.683 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.683 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.684 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.684 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.685 226109 DEBUG nova.virt.libvirt.driver [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58ac0433b963d761a8791697250391d9cb63f6958ffba5733048918972626eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.798 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.799 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005915.6447058, 8ace964d-3645-4195-8f65-8b625bce1b00 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.799 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] VM Paused (Lifecycle Event)
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.942 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.996 226109 INFO nova.compute.manager [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Took 11.03 seconds to spawn the instance on the hypervisor.
Dec 06 07:25:15 compute-1 nova_compute[226101]: 2025-12-06 07:25:15.997 226109 DEBUG nova.compute.manager [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:16 compute-1 nova_compute[226101]: 2025-12-06 07:25:16.048 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:16 compute-1 nova_compute[226101]: 2025-12-06 07:25:16.052 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005915.6463017, 8ace964d-3645-4195-8f65-8b625bce1b00 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:25:16 compute-1 nova_compute[226101]: 2025-12-06 07:25:16.052 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] VM Resumed (Lifecycle Event)
Dec 06 07:25:16 compute-1 podman[263978]: 2025-12-06 07:25:16.093102946 +0000 UTC m=+0.993301095 container init 674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:25:16 compute-1 podman[263978]: 2025-12-06 07:25:16.098958615 +0000 UTC m=+0.999156734 container start 674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:25:16 compute-1 nova_compute[226101]: 2025-12-06 07:25:16.102 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:16 compute-1 nova_compute[226101]: 2025-12-06 07:25:16.107 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:25:16 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[264039]: [NOTICE]   (264043) : New worker (264045) forked
Dec 06 07:25:16 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[264039]: [NOTICE]   (264043) : Loading success.
Dec 06 07:25:16 compute-1 nova_compute[226101]: 2025-12-06 07:25:16.162 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:25:16 compute-1 nova_compute[226101]: 2025-12-06 07:25:16.179 226109 INFO nova.compute.manager [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Took 14.76 seconds to build instance.
Dec 06 07:25:16 compute-1 nova_compute[226101]: 2025-12-06 07:25:16.315 226109 DEBUG oslo_concurrency.lockutils [None req-aa447996-3694-4827-9db4-d9dc69ebf80d baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:16 compute-1 ceph-mon[81689]: pgmap v2028: 305 pgs: 305 active+clean; 380 MiB data, 908 MiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 6.9 MiB/s wr, 176 op/s
Dec 06 07:25:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:17.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:17.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:17 compute-1 nova_compute[226101]: 2025-12-06 07:25:17.616 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:18 compute-1 nova_compute[226101]: 2025-12-06 07:25:18.805 226109 DEBUG nova.compute.manager [req-c4a0f2a4-7520-4598-bcda-c4852f4e787d req-473ac4f8-6db7-4259-be04-6c5c43bbf7a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:18 compute-1 nova_compute[226101]: 2025-12-06 07:25:18.805 226109 DEBUG oslo_concurrency.lockutils [req-c4a0f2a4-7520-4598-bcda-c4852f4e787d req-473ac4f8-6db7-4259-be04-6c5c43bbf7a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:18 compute-1 nova_compute[226101]: 2025-12-06 07:25:18.805 226109 DEBUG oslo_concurrency.lockutils [req-c4a0f2a4-7520-4598-bcda-c4852f4e787d req-473ac4f8-6db7-4259-be04-6c5c43bbf7a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:18 compute-1 nova_compute[226101]: 2025-12-06 07:25:18.806 226109 DEBUG oslo_concurrency.lockutils [req-c4a0f2a4-7520-4598-bcda-c4852f4e787d req-473ac4f8-6db7-4259-be04-6c5c43bbf7a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:18 compute-1 nova_compute[226101]: 2025-12-06 07:25:18.806 226109 DEBUG nova.compute.manager [req-c4a0f2a4-7520-4598-bcda-c4852f4e787d req-473ac4f8-6db7-4259-be04-6c5c43bbf7a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] No waiting events found dispatching network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:25:18 compute-1 nova_compute[226101]: 2025-12-06 07:25:18.806 226109 WARNING nova.compute.manager [req-c4a0f2a4-7520-4598-bcda-c4852f4e787d req-473ac4f8-6db7-4259-be04-6c5c43bbf7a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received unexpected event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f for instance with vm_state active and task_state None.
Dec 06 07:25:18 compute-1 nova_compute[226101]: 2025-12-06 07:25:18.863 226109 DEBUG nova.network.neutron [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Successfully updated port: fc5020f3-51a1-4ca2-b0b5-ff2add84607f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:25:19 compute-1 nova_compute[226101]: 2025-12-06 07:25:19.159 226109 DEBUG nova.compute.manager [req-77765b2c-98dd-4bf5-b945-be13ee8fb445 req-b40c105f-5570-4fbd-8ec7-e711bb108b0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-changed-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:19 compute-1 nova_compute[226101]: 2025-12-06 07:25:19.159 226109 DEBUG nova.compute.manager [req-77765b2c-98dd-4bf5-b945-be13ee8fb445 req-b40c105f-5570-4fbd-8ec7-e711bb108b0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Refreshing instance network info cache due to event network-changed-fc5020f3-51a1-4ca2-b0b5-ff2add84607f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:25:19 compute-1 nova_compute[226101]: 2025-12-06 07:25:19.160 226109 DEBUG oslo_concurrency.lockutils [req-77765b2c-98dd-4bf5-b945-be13ee8fb445 req-b40c105f-5570-4fbd-8ec7-e711bb108b0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:25:19 compute-1 nova_compute[226101]: 2025-12-06 07:25:19.160 226109 DEBUG oslo_concurrency.lockutils [req-77765b2c-98dd-4bf5-b945-be13ee8fb445 req-b40c105f-5570-4fbd-8ec7-e711bb108b0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:25:19 compute-1 nova_compute[226101]: 2025-12-06 07:25:19.160 226109 DEBUG nova.network.neutron [req-77765b2c-98dd-4bf5-b945-be13ee8fb445 req-b40c105f-5570-4fbd-8ec7-e711bb108b0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Refreshing network info cache for port fc5020f3-51a1-4ca2-b0b5-ff2add84607f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:25:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:19.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:19 compute-1 nova_compute[226101]: 2025-12-06 07:25:19.277 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:25:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:19.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:19 compute-1 ceph-mon[81689]: pgmap v2029: 305 pgs: 305 active+clean; 380 MiB data, 908 MiB used, 20 GiB / 21 GiB avail; 626 KiB/s rd, 6.8 MiB/s wr, 147 op/s
Dec 06 07:25:19 compute-1 nova_compute[226101]: 2025-12-06 07:25:19.555 226109 DEBUG nova.network.neutron [req-77765b2c-98dd-4bf5-b945-be13ee8fb445 req-b40c105f-5570-4fbd-8ec7-e711bb108b0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:25:20 compute-1 nova_compute[226101]: 2025-12-06 07:25:20.065 226109 DEBUG nova.network.neutron [req-77765b2c-98dd-4bf5-b945-be13ee8fb445 req-b40c105f-5570-4fbd-8ec7-e711bb108b0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:25:20 compute-1 nova_compute[226101]: 2025-12-06 07:25:20.193 226109 DEBUG oslo_concurrency.lockutils [req-77765b2c-98dd-4bf5-b945-be13ee8fb445 req-b40c105f-5570-4fbd-8ec7-e711bb108b0a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:25:20 compute-1 nova_compute[226101]: 2025-12-06 07:25:20.194 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquired lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:25:20 compute-1 nova_compute[226101]: 2025-12-06 07:25:20.195 226109 DEBUG nova.network.neutron [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:25:20 compute-1 nova_compute[226101]: 2025-12-06 07:25:20.984 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:21 compute-1 nova_compute[226101]: 2025-12-06 07:25:21.085 226109 DEBUG nova.network.neutron [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:25:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:21.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:21.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:21 compute-1 ceph-mon[81689]: pgmap v2030: 305 pgs: 305 active+clean; 386 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.2 MiB/s wr, 291 op/s
Dec 06 07:25:22 compute-1 nova_compute[226101]: 2025-12-06 07:25:22.619 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:23.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:23.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:23 compute-1 ceph-mon[81689]: pgmap v2031: 305 pgs: 305 active+clean; 386 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.2 MiB/s wr, 290 op/s
Dec 06 07:25:24 compute-1 ceph-mon[81689]: pgmap v2032: 305 pgs: 305 active+clean; 386 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.3 MiB/s wr, 254 op/s
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.249 226109 DEBUG nova.compute.manager [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-changed-eec9d8a5-6252-4c93-b74b-aecc3b28521f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.249 226109 DEBUG nova.compute.manager [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Refreshing instance network info cache due to event network-changed-eec9d8a5-6252-4c93-b74b-aecc3b28521f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.249 226109 DEBUG oslo_concurrency.lockutils [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-8ace964d-3645-4195-8f65-8b625bce1b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.250 226109 DEBUG oslo_concurrency.lockutils [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-8ace964d-3645-4195-8f65-8b625bce1b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.250 226109 DEBUG nova.network.neutron [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Refreshing network info cache for port eec9d8a5-6252-4c93-b74b-aecc3b28521f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:25:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.417 226109 DEBUG nova.network.neutron [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating instance_info_cache with network_info: [{"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.446 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Releasing lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.446 226109 DEBUG nova.compute.manager [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance network_info: |[{"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.448 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Start _get_guest_xml network_info=[{"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.452 226109 WARNING nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.457 226109 DEBUG nova.virt.libvirt.host [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.457 226109 DEBUG nova.virt.libvirt.host [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.460 226109 DEBUG nova.virt.libvirt.host [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.461 226109 DEBUG nova.virt.libvirt.host [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.462 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.462 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.462 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.462 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.463 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.463 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.463 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.463 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.463 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.463 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.464 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.464 226109 DEBUG nova.virt.hardware [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.469 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:25:24 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3300964011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.932 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.963 226109 DEBUG nova.storage.rbd_utils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image f32ea15c-cf80-482c-9f9a-22392bc79e78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:24 compute-1 nova_compute[226101]: 2025-12-06 07:25:24.971 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:25.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:25.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3300964011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:25:25 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1548510838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.469 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.471 226109 DEBUG nova.virt.libvirt.vif [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1353793859',display_name='tempest-DeleteServersTestJSON-server-1353793859',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1353793859',id=106,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-jutadpu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:12Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=f32ea15c-cf80-482c-9f9a-22392bc79e78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.471 226109 DEBUG nova.network.os_vif_util [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.472 226109 DEBUG nova.network.os_vif_util [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.473 226109 DEBUG nova.objects.instance [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'pci_devices' on Instance uuid f32ea15c-cf80-482c-9f9a-22392bc79e78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.493 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:25:25 compute-1 nova_compute[226101]:   <uuid>f32ea15c-cf80-482c-9f9a-22392bc79e78</uuid>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   <name>instance-0000006a</name>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <nova:name>tempest-DeleteServersTestJSON-server-1353793859</nova:name>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:25:24</nova:creationTime>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <nova:user uuid="d966fefcb38a45219b9cc637c46a3d62">tempest-DeleteServersTestJSON-1764569218-project-member</nova:user>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <nova:project uuid="c6d2f50c0db54315bfa96a24511dda90">tempest-DeleteServersTestJSON-1764569218</nova:project>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <nova:port uuid="fc5020f3-51a1-4ca2-b0b5-ff2add84607f">
Dec 06 07:25:25 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <system>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <entry name="serial">f32ea15c-cf80-482c-9f9a-22392bc79e78</entry>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <entry name="uuid">f32ea15c-cf80-482c-9f9a-22392bc79e78</entry>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     </system>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   <os>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   </os>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   <features>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   </features>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/f32ea15c-cf80-482c-9f9a-22392bc79e78_disk">
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       </source>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/f32ea15c-cf80-482c-9f9a-22392bc79e78_disk.config">
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       </source>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:25:25 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:39:43:3f"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <target dev="tapfc5020f3-51"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/console.log" append="off"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <video>
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     </video>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:25:25 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:25:25 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:25:25 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:25:25 compute-1 nova_compute[226101]: </domain>
Dec 06 07:25:25 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.495 226109 DEBUG nova.compute.manager [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Preparing to wait for external event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.496 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.496 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.497 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.499 226109 DEBUG nova.virt.libvirt.vif [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1353793859',display_name='tempest-DeleteServersTestJSON-server-1353793859',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1353793859',id=106,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-jutadpu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:12Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=f32ea15c-cf80-482c-9f9a-22392bc79e78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.500 226109 DEBUG nova.network.os_vif_util [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.501 226109 DEBUG nova.network.os_vif_util [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.501 226109 DEBUG os_vif [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.502 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.503 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.504 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.510 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.510 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc5020f3-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.512 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc5020f3-51, col_values=(('external_ids', {'iface-id': 'fc5020f3-51a1-4ca2-b0b5-ff2add84607f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:43:3f', 'vm-uuid': 'f32ea15c-cf80-482c-9f9a-22392bc79e78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.514 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:25 compute-1 NetworkManager[49031]: <info>  [1765005925.5159] manager: (tapfc5020f3-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.517 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.521 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.524 226109 INFO os_vif [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51')
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.860 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.861 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.861 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No VIF found with MAC fa:16:3e:39:43:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.861 226109 INFO nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Using config drive
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.887 226109 DEBUG nova.storage.rbd_utils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image f32ea15c-cf80-482c-9f9a-22392bc79e78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:25 compute-1 nova_compute[226101]: 2025-12-06 07:25:25.986 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:26 compute-1 nova_compute[226101]: 2025-12-06 07:25:26.206 226109 DEBUG nova.network.neutron [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Updated VIF entry in instance network info cache for port eec9d8a5-6252-4c93-b74b-aecc3b28521f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:25:26 compute-1 nova_compute[226101]: 2025-12-06 07:25:26.207 226109 DEBUG nova.network.neutron [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Updating instance_info_cache with network_info: [{"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:25:26 compute-1 nova_compute[226101]: 2025-12-06 07:25:26.229 226109 DEBUG oslo_concurrency.lockutils [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-8ace964d-3645-4195-8f65-8b625bce1b00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:25:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1548510838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1295261342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:26 compute-1 ceph-mon[81689]: pgmap v2033: 305 pgs: 305 active+clean; 376 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.1 MiB/s wr, 261 op/s
Dec 06 07:25:27 compute-1 nova_compute[226101]: 2025-12-06 07:25:27.067 226109 INFO nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Creating config drive at /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/disk.config
Dec 06 07:25:27 compute-1 nova_compute[226101]: 2025-12-06 07:25:27.072 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdg8fymu0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:27.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:27 compute-1 nova_compute[226101]: 2025-12-06 07:25:27.203 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdg8fymu0" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:27 compute-1 nova_compute[226101]: 2025-12-06 07:25:27.232 226109 DEBUG nova.storage.rbd_utils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image f32ea15c-cf80-482c-9f9a-22392bc79e78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:27 compute-1 nova_compute[226101]: 2025-12-06 07:25:27.235 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/disk.config f32ea15c-cf80-482c-9f9a-22392bc79e78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:27.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2019194234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3204653884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:27 compute-1 nova_compute[226101]: 2025-12-06 07:25:27.945 226109 DEBUG oslo_concurrency.processutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/disk.config f32ea15c-cf80-482c-9f9a-22392bc79e78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.709s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:27 compute-1 nova_compute[226101]: 2025-12-06 07:25:27.946 226109 INFO nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Deleting local config drive /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/disk.config because it was imported into RBD.
Dec 06 07:25:27 compute-1 kernel: tapfc5020f3-51: entered promiscuous mode
Dec 06 07:25:27 compute-1 NetworkManager[49031]: <info>  [1765005927.9958] manager: (tapfc5020f3-51): new Tun device (/org/freedesktop/NetworkManager/Devices/197)
Dec 06 07:25:27 compute-1 ovn_controller[130279]: 2025-12-06T07:25:27Z|00431|binding|INFO|Claiming lport fc5020f3-51a1-4ca2-b0b5-ff2add84607f for this chassis.
Dec 06 07:25:27 compute-1 nova_compute[226101]: 2025-12-06 07:25:27.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:27 compute-1 ovn_controller[130279]: 2025-12-06T07:25:27Z|00432|binding|INFO|fc5020f3-51a1-4ca2-b0b5-ff2add84607f: Claiming fa:16:3e:39:43:3f 10.100.0.11
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.008 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:43:3f 10.100.0.11'], port_security=['fa:16:3e:39:43:3f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f32ea15c-cf80-482c-9f9a-22392bc79e78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '2', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=fc5020f3-51a1-4ca2-b0b5-ff2add84607f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.010 139580 INFO neutron.agent.ovn.metadata.agent [-] Port fc5020f3-51a1-4ca2-b0b5-ff2add84607f in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 bound to our chassis
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.012 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:25:28 compute-1 ovn_controller[130279]: 2025-12-06T07:25:28Z|00433|binding|INFO|Setting lport fc5020f3-51a1-4ca2-b0b5-ff2add84607f ovn-installed in OVS
Dec 06 07:25:28 compute-1 ovn_controller[130279]: 2025-12-06T07:25:28Z|00434|binding|INFO|Setting lport fc5020f3-51a1-4ca2-b0b5-ff2add84607f up in Southbound
Dec 06 07:25:28 compute-1 nova_compute[226101]: 2025-12-06 07:25:28.017 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:28 compute-1 nova_compute[226101]: 2025-12-06 07:25:28.019 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.024 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c826deeb-2454-4ca2-8b1b-710ca0b9c34c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.025 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85cfbf28-71 in ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.029 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85cfbf28-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.029 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbf272f-b2ce-4b45-9de3-f123e44e06c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 systemd-machined[190302]: New machine qemu-49-instance-0000006a.
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.031 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8c820589-71ac-4f17-bf4d-adf1cb51d357]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 systemd-udevd[264193]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.042 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ec154604-2328-46f7-aa09-6ccb0193ced6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 NetworkManager[49031]: <info>  [1765005928.0455] device (tapfc5020f3-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:25:28 compute-1 NetworkManager[49031]: <info>  [1765005928.0463] device (tapfc5020f3-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:25:28 compute-1 systemd[1]: Started Virtual Machine qemu-49-instance-0000006a.
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.061 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9f5d5e02-bf44-48c7-a4aa-a40aa956c442]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.098 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6fb026-acf2-41c7-9aa2-9d1ba5e1799a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 NetworkManager[49031]: <info>  [1765005928.1058] manager: (tap85cfbf28-70): new Veth device (/org/freedesktop/NetworkManager/Devices/198)
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.105 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[eacfd1fc-8d63-4f69-bef4-491731fec488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.145 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[bb48145e-5008-49b4-87b8-7eedb67859af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.149 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[cdde38af-952b-443d-9eec-efa7f4c3d10a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 NetworkManager[49031]: <info>  [1765005928.1714] device (tap85cfbf28-70): carrier: link connected
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.177 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[cff450d6-cc5d-4d13-b53e-6d891de1e700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.195 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3b38c8-e4ab-41ff-88a6-f42c3be340b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620767, 'reachable_time': 30428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264225, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.214 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[14e0fd3a-bb93-4f36-aca5-2337e8e7a0e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:762'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 620767, 'tstamp': 620767}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264226, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.235 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[403f6967-17c0-4e6c-8c40-0e8b560ef785]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620767, 'reachable_time': 30428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264227, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
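The two privsep replies above are pyroute2 netlink messages (RTM_NEWADDR for the IPv6 link-local address, RTM_NEWLINK for the veth tap85cfbf28-71), fetched inside the ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace and shipped back through the oslo.privsep daemon. A minimal sketch of reading the same attributes directly with pyroute2, assuming that namespace still exists on the host:

    # Sketch: decode link/address state inside the OVN metadata namespace,
    # mirroring the RTM_NEWLINK/RTM_NEWADDR replies logged above.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347')
    try:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),     # e.g. tap85cfbf28-71
                  link.get_attr('IFLA_ADDRESS'),    # e.g. fa:16:3e:81:07:62
                  link.get_attr('IFLA_OPERSTATE'))  # e.g. UP
        for addr in ns.get_addr(family=10):         # 10 == AF_INET6, as above
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
    finally:
        ns.close()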
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.273 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3c247c-2b12-48ff-b890-36a2fb61d625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.337 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[35385646-b7ce-4662-a50d-67d31453d6df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.338 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.339 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.339 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85cfbf28-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:28 compute-1 nova_compute[226101]: 2025-12-06 07:25:28.341 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:28 compute-1 NetworkManager[49031]: <info>  [1765005928.3420] manager: (tap85cfbf28-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Dec 06 07:25:28 compute-1 kernel: tap85cfbf28-70: entered promiscuous mode
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.345 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85cfbf28-70, col_values=(('external_ids', {'iface-id': '41b1b168-8e0e-4991-9750-9b31221f4863'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
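The three single-command ovsdbapp transactions above re-plug tap85cfbf28-70 (idempotently removing it from br-ex, then adding it to br-int) and stamp its Interface row with the Neutron port UUID in external_ids:iface-id, which is what ovn-controller keys on when binding the lport. Roughly the same sequence through ovsdbapp's Open_vSwitch API, here folded into one transaction (a sketch; the socket path is an assumption, the agent uses its configured ovsdb connection):

    # Sketch: replay the DelPort/AddPort/DbSet commands logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap85cfbf28-70', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap85cfbf28-70', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap85cfbf28-70',
            ('external_ids', {'iface-id': '41b1b168-8e0e-4991-9750-9b31221f4863'})))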
Dec 06 07:25:28 compute-1 nova_compute[226101]: 2025-12-06 07:25:28.346 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:28 compute-1 ovn_controller[130279]: 2025-12-06T07:25:28Z|00435|binding|INFO|Releasing lport 41b1b168-8e0e-4991-9750-9b31221f4863 from this chassis (sb_readonly=0)
Dec 06 07:25:28 compute-1 nova_compute[226101]: 2025-12-06 07:25:28.362 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.363 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.364 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4aa004-b40e-4dd3-8528-651da1ed4b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.365 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:25:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:28.366 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'env', 'PROCESS_TAG=haproxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85cfbf28-7016-4776-8fc2-2eb08a6b8347.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
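The configuration dumped above binds haproxy to 169.254.169.254:80 inside the ovnmeta namespace and proxies every request to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, adding X-OVN-Network-ID so the agent can resolve which network the caller sits on; the rootwrap command then launches that haproxy under the namespace. A sketch of exercising the backend socket directly (hypothetical client; the network-ID header value is copied from the config, and the X-Forwarded-For that 'option forwardfor' would normally inject is set by hand here, with an assumed instance fixed IP):

    # Sketch: HTTP over the metadata agent's UNIX socket, supplying the
    # headers the haproxy instance above would otherwise add for us.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.request('GET', '/openstack/latest/meta_data.json', headers={
        'X-OVN-Network-ID': '85cfbf28-7016-4776-8fc2-2eb08a6b8347',
        'X-Forwarded-For': '10.100.0.11',  # assumption: instance fixed IP
    })
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])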
Dec 06 07:25:28 compute-1 nova_compute[226101]: 2025-12-06 07:25:28.519 226109 DEBUG nova.compute.manager [req-64826fc2-e9cb-4a20-8099-f3f02b402420 req-1814aae1-89cd-45c8-8ac7-b301114f74b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:28 compute-1 nova_compute[226101]: 2025-12-06 07:25:28.520 226109 DEBUG oslo_concurrency.lockutils [req-64826fc2-e9cb-4a20-8099-f3f02b402420 req-1814aae1-89cd-45c8-8ac7-b301114f74b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:28 compute-1 nova_compute[226101]: 2025-12-06 07:25:28.520 226109 DEBUG oslo_concurrency.lockutils [req-64826fc2-e9cb-4a20-8099-f3f02b402420 req-1814aae1-89cd-45c8-8ac7-b301114f74b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:28 compute-1 nova_compute[226101]: 2025-12-06 07:25:28.521 226109 DEBUG oslo_concurrency.lockutils [req-64826fc2-e9cb-4a20-8099-f3f02b402420 req-1814aae1-89cd-45c8-8ac7-b301114f74b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:28 compute-1 nova_compute[226101]: 2025-12-06 07:25:28.521 226109 DEBUG nova.compute.manager [req-64826fc2-e9cb-4a20-8099-f3f02b402420 req-1814aae1-89cd-45c8-8ac7-b301114f74b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Processing event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
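The Acquiring/acquired/released triplet above is nova serializing access to the per-instance event queue while it pops the network-vif-plugged event. The same primitive is available straight from oslo.concurrency; a sketch with the lock name taken from the log:

    # Sketch: the lock pattern behind the three lockutils lines above.
    from oslo_concurrency import lockutils

    with lockutils.lock('f32ea15c-cf80-482c-9f9a-22392bc79e78-events'):
        pass  # pop or record pending instance events while serialized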
Dec 06 07:25:28 compute-1 ceph-mon[81689]: pgmap v2034: 305 pgs: 305 active+clean; 376 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 401 KiB/s wr, 218 op/s
Dec 06 07:25:28 compute-1 podman[264257]: 2025-12-06 07:25:28.77610425 +0000 UTC m=+0.062785054 container create 421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 07:25:28 compute-1 systemd[1]: Started libpod-conmon-421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19.scope.
Dec 06 07:25:28 compute-1 podman[264257]: 2025-12-06 07:25:28.738395077 +0000 UTC m=+0.025075911 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:25:28 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:25:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419938d0b6b7a2dfbc9d743e6bb786751a93132a7dfe38d3c3e7527ca27f39c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:28 compute-1 podman[264257]: 2025-12-06 07:25:28.855038622 +0000 UTC m=+0.141719446 container init 421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:25:28 compute-1 podman[264257]: 2025-12-06 07:25:28.860864149 +0000 UTC m=+0.147544953 container start 421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:25:28 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[264272]: [NOTICE]   (264276) : New worker (264278) forked
Dec 06 07:25:28 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[264272]: [NOTICE]   (264276) : Loading success.
Dec 06 07:25:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:29.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.217 226109 DEBUG nova.compute.manager [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.219 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005929.2166083, f32ea15c-cf80-482c-9f9a-22392bc79e78 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.219 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] VM Started (Lifecycle Event)
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.223 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.226 226109 INFO nova.virt.libvirt.driver [-] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance spawned successfully.
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.227 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.245 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.251 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.254 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.255 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.255 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.256 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.256 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.257 226109 DEBUG nova.virt.libvirt.driver [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.283 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.283 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005929.2178571, f32ea15c-cf80-482c-9f9a-22392bc79e78 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.284 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] VM Paused (Lifecycle Event)
Dec 06 07:25:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:29.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.317 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.321 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005929.2217035, f32ea15c-cf80-482c-9f9a-22392bc79e78 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.321 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] VM Resumed (Lifecycle Event)
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.326 226109 INFO nova.compute.manager [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Took 16.57 seconds to spawn the instance on the hypervisor.
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.327 226109 DEBUG nova.compute.manager [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.360 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.364 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.408 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.417 226109 INFO nova.compute.manager [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Took 17.95 seconds to build instance.
Dec 06 07:25:29 compute-1 nova_compute[226101]: 2025-12-06 07:25:29.431 226109 DEBUG oslo_concurrency.lockutils [None req-0381b21b-6df2-474f-ba53-88a2673e2c8c d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
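The VM Started/Paused/Resumed lines above are nova's translation of libvirt lifecycle callbacks fired as the guest is created, briefly held paused, and resumed; sync_power_state skips acting on them because the spawn task still owns the instance. A sketch of subscribing to the same callbacks with libvirt-python:

    # Sketch: receive the libvirt lifecycle events that nova logs above
    # as "VM Started/Paused/Resumed (Lifecycle Event)".
    import libvirt

    EVENTS = ('Defined', 'Undefined', 'Started', 'Suspended', 'Resumed',
              'Stopped', 'Shutdown', 'PMSuspended', 'Crashed')

    def lifecycle_cb(conn, dom, event, detail, opaque):
        print(dom.UUIDString(), EVENTS[event])

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle_cb, None)
    while True:
        libvirt.virEventRunDefaultImpl()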
Dec 06 07:25:29 compute-1 ovn_controller[130279]: 2025-12-06T07:25:29Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:62:03 10.100.0.3
Dec 06 07:25:29 compute-1 ovn_controller[130279]: 2025-12-06T07:25:29Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:62:03 10.100.0.3
Dec 06 07:25:30 compute-1 nova_compute[226101]: 2025-12-06 07:25:30.515 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:30 compute-1 nova_compute[226101]: 2025-12-06 07:25:30.628 226109 DEBUG nova.compute.manager [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:30 compute-1 nova_compute[226101]: 2025-12-06 07:25:30.629 226109 DEBUG oslo_concurrency.lockutils [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:30 compute-1 nova_compute[226101]: 2025-12-06 07:25:30.629 226109 DEBUG oslo_concurrency.lockutils [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:30 compute-1 nova_compute[226101]: 2025-12-06 07:25:30.629 226109 DEBUG oslo_concurrency.lockutils [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:30 compute-1 nova_compute[226101]: 2025-12-06 07:25:30.629 226109 DEBUG nova.compute.manager [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] No waiting events found dispatching network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:25:30 compute-1 nova_compute[226101]: 2025-12-06 07:25:30.630 226109 WARNING nova.compute.manager [req-1bc60b95-59cc-4a64-bab9-7c61a9f3c913 req-b58a4aac-1af1-4ab2-994c-a79dba1bc1e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received unexpected event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f for instance with vm_state active and task_state None.
Dec 06 07:25:30 compute-1 nova_compute[226101]: 2025-12-06 07:25:30.988 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:25:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:31.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:25:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:31.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:31 compute-1 ceph-mon[81689]: pgmap v2035: 305 pgs: 305 active+clean; 395 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.4 MiB/s wr, 304 op/s
Dec 06 07:25:33 compute-1 podman[264330]: 2025-12-06 07:25:33.108336024 +0000 UTC m=+0.092589062 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:25:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:33.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:33 compute-1 ceph-mon[81689]: pgmap v2036: 305 pgs: 305 active+clean; 456 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 231 op/s
Dec 06 07:25:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:33.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:35 compute-1 nova_compute[226101]: 2025-12-06 07:25:35.031 226109 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:25:35 compute-1 nova_compute[226101]: 2025-12-06 07:25:35.031 226109 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquired lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:25:35 compute-1 nova_compute[226101]: 2025-12-06 07:25:35.032 226109 DEBUG nova.network.neutron [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:25:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:35.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:35.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:35 compute-1 nova_compute[226101]: 2025-12-06 07:25:35.516 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:35 compute-1 nova_compute[226101]: 2025-12-06 07:25:35.990 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:36 compute-1 podman[264356]: 2025-12-06 07:25:36.079493698 +0000 UTC m=+0.055140756 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:25:36 compute-1 podman[264355]: 2025-12-06 07:25:36.105510115 +0000 UTC m=+0.082301074 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 07:25:36 compute-1 nova_compute[226101]: 2025-12-06 07:25:36.493 226109 DEBUG nova.network.neutron [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating instance_info_cache with network_info: [{"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:25:36 compute-1 nova_compute[226101]: 2025-12-06 07:25:36.513 226109 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Releasing lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:25:36 compute-1 nova_compute[226101]: 2025-12-06 07:25:36.613 226109 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 07:25:36 compute-1 nova_compute[226101]: 2025-12-06 07:25:36.614 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Creating file /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/93567c6988404003a0073817c24c17dc.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Dec 06 07:25:36 compute-1 nova_compute[226101]: 2025-12-06 07:25:36.614 226109 DEBUG oslo_concurrency.processutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/93567c6988404003a0073817c24c17dc.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:37 compute-1 nova_compute[226101]: 2025-12-06 07:25:37.091 226109 DEBUG oslo_concurrency.processutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/93567c6988404003a0073817c24c17dc.tmp" returned: 1 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:37 compute-1 nova_compute[226101]: 2025-12-06 07:25:37.093 226109 DEBUG oslo_concurrency.processutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/93567c6988404003a0073817c24c17dc.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 07:25:37 compute-1 nova_compute[226101]: 2025-12-06 07:25:37.094 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Creating directory /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Dec 06 07:25:37 compute-1 nova_compute[226101]: 2025-12-06 07:25:37.095 226109 DEBUG oslo_concurrency.processutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:37.118 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:25:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:37.119 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
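The match above is an ovsdbapp RowEvent firing on an SB_Global update (nb_cfg bumped 38 -> 39); the agent then debounces for 3 seconds before acknowledging in the Chassis_Private table (the DbSetCommand writing neutron:ovn-metadata-sb-cfg = '39' at 07:25:40 below). A sketch of an event with the same shape as the one logged, with the IDL wiring omitted:

    # Sketch: a RowEvent shaped like the SbGlobalUpdateEvent matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # row.nb_cfg is the new value (39 above); old carried nb_cfg=38.
            print('SB_Global nb_cfg is now', row.nb_cfg)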
Dec 06 07:25:37 compute-1 nova_compute[226101]: 2025-12-06 07:25:37.169 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:37.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:37.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:37 compute-1 nova_compute[226101]: 2025-12-06 07:25:37.371 226109 DEBUG oslo_concurrency.processutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
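The failed touch and successful mkdir above look like the resize/cold-migration preamble: nova probes whether the instance directory is reachable on the destination by creating a file there over ssh; the non-zero exit means the directory does not yet exist on 192.168.122.100, so nova creates it before powering the instance off and copying disks. The probe reduces to roughly this (a sketch; the probe file name is illustrative):

    # Sketch: the remote-storage probe behind the two ssh commands above.
    import subprocess

    host = '192.168.122.100'
    inst_dir = '/var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78'
    probe = subprocess.run(['ssh', '-o', 'BatchMode=yes', host,
                            'touch', f'{inst_dir}/probe.tmp'])
    if probe.returncode != 0:  # returned 1 above: directory absent remotely
        subprocess.run(['ssh', '-o', 'BatchMode=yes', host,
                        'mkdir', '-p', inst_dir], check=True)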
Dec 06 07:25:37 compute-1 nova_compute[226101]: 2025-12-06 07:25:37.375 226109 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:25:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/996845718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3612033471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:37 compute-1 ceph-mon[81689]: pgmap v2037: 305 pgs: 305 active+clean; 457 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.7 MiB/s wr, 220 op/s
Dec 06 07:25:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3723579441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1114235426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4074039243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:39.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:39 compute-1 ceph-mon[81689]: pgmap v2038: 305 pgs: 305 active+clean; 465 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 274 op/s
Dec 06 07:25:39 compute-1 ceph-mon[81689]: pgmap v2039: 305 pgs: 305 active+clean; 465 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 262 op/s
Dec 06 07:25:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:39.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:39 compute-1 nova_compute[226101]: 2025-12-06 07:25:39.762 226109 DEBUG oslo_concurrency.lockutils [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:39 compute-1 nova_compute[226101]: 2025-12-06 07:25:39.763 226109 DEBUG oslo_concurrency.lockutils [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:39 compute-1 nova_compute[226101]: 2025-12-06 07:25:39.785 226109 DEBUG nova.objects.instance [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'flavor' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:39 compute-1 nova_compute[226101]: 2025-12-06 07:25:39.846 226109 DEBUG oslo_concurrency.lockutils [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.063 226109 DEBUG oslo_concurrency.lockutils [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.064 226109 DEBUG oslo_concurrency.lockutils [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.064 226109 INFO nova.compute.manager [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Attaching volume 5061c285-a1ac-4ce6-96b8-9bd9d7f18a74 to /dev/vdb
Dec 06 07:25:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:40.121 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.239 226109 DEBUG os_brick.utils [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.240 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.250 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.250 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[8442b777-3424-4c55-a1e0-44f6690a56d6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.252 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.260 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.261 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[e049eff6-c207-42c6-ad7f-0abd0331f1e0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.262 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.272 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.272 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[ee228b5f-7c2c-43bb-a01b-a5f970f60dc5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.273 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[c425cc95-aed6-4e10-851f-ca643c6f0367]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.274 226109 DEBUG oslo_concurrency.processutils [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.303 226109 DEBUG oslo_concurrency.processutils [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.306 226109 DEBUG os_brick.initiator.connectors.lightos [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.306 226109 DEBUG os_brick.initiator.connectors.lightos [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.306 226109 DEBUG os_brick.initiator.connectors.lightos [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.307 226109 DEBUG os_brick.utils [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.307 226109 DEBUG nova.virt.block_device [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Updating existing volume attachment record: 0c6cc203-8dde-49d6-ad46-ef5abf12f4f3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:25:40 compute-1 nova_compute[226101]: 2025-12-06 07:25:40.519 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:41 compute-1 nova_compute[226101]: 2025-12-06 07:25:41.026 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:25:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:41.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:25:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:25:41 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1747949989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:41.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:41 compute-1 ceph-mon[81689]: pgmap v2040: 305 pgs: 305 active+clean; 467 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 277 op/s
Dec 06 07:25:42 compute-1 nova_compute[226101]: 2025-12-06 07:25:42.857 226109 DEBUG nova.objects.instance [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'flavor' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:42 compute-1 nova_compute[226101]: 2025-12-06 07:25:42.889 226109 DEBUG nova.virt.libvirt.driver [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Attempting to attach volume 5061c285-a1ac-4ce6-96b8-9bd9d7f18a74 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:25:42 compute-1 nova_compute[226101]: 2025-12-06 07:25:42.892 226109 DEBUG nova.virt.libvirt.guest [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:25:42 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:25:42 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-5061c285-a1ac-4ce6-96b8-9bd9d7f18a74">
Dec 06 07:25:42 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:25:42 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:25:42 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:25:42 compute-1 nova_compute[226101]:   </source>
Dec 06 07:25:42 compute-1 nova_compute[226101]:   <auth username="openstack">
Dec 06 07:25:42 compute-1 nova_compute[226101]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:25:42 compute-1 nova_compute[226101]:   </auth>
Dec 06 07:25:42 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:25:42 compute-1 nova_compute[226101]:   <serial>5061c285-a1ac-4ce6-96b8-9bd9d7f18a74</serial>
Dec 06 07:25:42 compute-1 nova_compute[226101]: </disk>
Dec 06 07:25:42 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:25:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:43.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:43.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:43 compute-1 ceph-mon[81689]: pgmap v2041: 305 pgs: 305 active+clean; 467 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 236 op/s
Dec 06 07:25:44 compute-1 sudo[264422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:44 compute-1 sudo[264422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:44 compute-1 sudo[264422]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:44 compute-1 sudo[264447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:25:44 compute-1 sudo[264447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:44 compute-1 sudo[264447]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:44 compute-1 sudo[264472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:44 compute-1 sudo[264472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:44 compute-1 sudo[264472]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:44 compute-1 sudo[264497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:25:44 compute-1 sudo[264497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:44 compute-1 nova_compute[226101]: 2025-12-06 07:25:44.762 226109 DEBUG nova.virt.libvirt.driver [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:25:44 compute-1 nova_compute[226101]: 2025-12-06 07:25:44.764 226109 DEBUG nova.virt.libvirt.driver [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:25:44 compute-1 nova_compute[226101]: 2025-12-06 07:25:44.764 226109 DEBUG nova.virt.libvirt.driver [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:25:44 compute-1 nova_compute[226101]: 2025-12-06 07:25:44.764 226109 DEBUG nova.virt.libvirt.driver [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No VIF found with MAC fa:16:3e:0b:62:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:25:45 compute-1 sudo[264497]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:45 compute-1 nova_compute[226101]: 2025-12-06 07:25:45.210 226109 DEBUG oslo_concurrency.lockutils [None req-e63fdb7c-2126-424a-a356-2ca7443fc2c7 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:45.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:45.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:45 compute-1 nova_compute[226101]: 2025-12-06 07:25:45.522 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:46 compute-1 nova_compute[226101]: 2025-12-06 07:25:46.027 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1747949989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:46 compute-1 ceph-mon[81689]: pgmap v2042: 305 pgs: 305 active+clean; 470 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 754 KiB/s wr, 240 op/s
Dec 06 07:25:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:47.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:47.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:25:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:25:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:25:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2451184587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:47 compute-1 ceph-mon[81689]: pgmap v2043: 305 pgs: 305 active+clean; 477 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.3 MiB/s wr, 223 op/s
Dec 06 07:25:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:25:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:25:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:25:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:25:47 compute-1 nova_compute[226101]: 2025-12-06 07:25:47.421 226109 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:25:47 compute-1 nova_compute[226101]: 2025-12-06 07:25:47.683 226109 INFO nova.compute.manager [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Rebuilding instance
Dec 06 07:25:47 compute-1 nova_compute[226101]: 2025-12-06 07:25:47.891 226109 DEBUG nova.objects.instance [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:47 compute-1 nova_compute[226101]: 2025-12-06 07:25:47.923 226109 DEBUG nova.compute.manager [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:48 compute-1 nova_compute[226101]: 2025-12-06 07:25:48.264 226109 DEBUG nova.objects.instance [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:48 compute-1 nova_compute[226101]: 2025-12-06 07:25:48.346 226109 DEBUG nova.objects.instance [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:48 compute-1 ceph-mon[81689]: pgmap v2044: 305 pgs: 305 active+clean; 477 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 169 op/s
Dec 06 07:25:48 compute-1 nova_compute[226101]: 2025-12-06 07:25:48.451 226109 DEBUG nova.objects.instance [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'resources' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:48 compute-1 nova_compute[226101]: 2025-12-06 07:25:48.796 226109 DEBUG nova.objects.instance [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'migration_context' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:49.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:49 compute-1 ovn_controller[130279]: 2025-12-06T07:25:49Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:39:43:3f 10.100.0.11
Dec 06 07:25:49 compute-1 ovn_controller[130279]: 2025-12-06T07:25:49Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:43:3f 10.100.0.11
Dec 06 07:25:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:49.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:50 compute-1 nova_compute[226101]: 2025-12-06 07:25:50.041 226109 DEBUG nova.objects.instance [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:25:50 compute-1 nova_compute[226101]: 2025-12-06 07:25:50.044 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:25:50 compute-1 nova_compute[226101]: 2025-12-06 07:25:50.524 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:51 compute-1 nova_compute[226101]: 2025-12-06 07:25:51.029 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:51.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:51 compute-1 ceph-mon[81689]: pgmap v2045: 305 pgs: 305 active+clean; 488 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Dec 06 07:25:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:25:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:51.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:25:51 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Dec 06 07:25:52 compute-1 ceph-mon[81689]: pgmap v2046: 305 pgs: 305 active+clean; 492 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 184 op/s
Dec 06 07:25:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:53.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:53.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:55.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:55.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:55 compute-1 ceph-mon[81689]: pgmap v2047: 305 pgs: 305 active+clean; 506 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 169 op/s
Dec 06 07:25:55 compute-1 nova_compute[226101]: 2025-12-06 07:25:55.526 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:55 compute-1 sudo[264552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:55 compute-1 sudo[264552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:55 compute-1 sudo[264552]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:56 compute-1 sudo[264577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:25:56 compute-1 sudo[264577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:56 compute-1 sudo[264577]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:56 compute-1 nova_compute[226101]: 2025-12-06 07:25:56.030 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:57.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:25:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:25:57 compute-1 ceph-mon[81689]: pgmap v2048: 305 pgs: 305 active+clean; 537 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.1 MiB/s wr, 123 op/s
Dec 06 07:25:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:57.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:58 compute-1 nova_compute[226101]: 2025-12-06 07:25:58.079 226109 INFO nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Instance shutdown successfully after 8 seconds.
Dec 06 07:25:58 compute-1 nova_compute[226101]: 2025-12-06 07:25:58.463 226109 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:25:58 compute-1 kernel: tapeec9d8a5-62 (unregistering): left promiscuous mode
Dec 06 07:25:58 compute-1 NetworkManager[49031]: <info>  [1765005958.5557] device (tapeec9d8a5-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:25:58 compute-1 ovn_controller[130279]: 2025-12-06T07:25:58Z|00436|binding|INFO|Releasing lport eec9d8a5-6252-4c93-b74b-aecc3b28521f from this chassis (sb_readonly=0)
Dec 06 07:25:58 compute-1 ovn_controller[130279]: 2025-12-06T07:25:58Z|00437|binding|INFO|Setting lport eec9d8a5-6252-4c93-b74b-aecc3b28521f down in Southbound
Dec 06 07:25:58 compute-1 nova_compute[226101]: 2025-12-06 07:25:58.563 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:58 compute-1 ovn_controller[130279]: 2025-12-06T07:25:58Z|00438|binding|INFO|Removing iface tapeec9d8a5-62 ovn-installed in OVS
Dec 06 07:25:58 compute-1 nova_compute[226101]: 2025-12-06 07:25:58.565 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:58 compute-1 nova_compute[226101]: 2025-12-06 07:25:58.578 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:58 compute-1 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000069.scope: Deactivated successfully.
Dec 06 07:25:58 compute-1 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000069.scope: Consumed 15.902s CPU time.
Dec 06 07:25:58 compute-1 systemd-machined[190302]: Machine qemu-48-instance-00000069 terminated.
Dec 06 07:25:58 compute-1 nova_compute[226101]: 2025-12-06 07:25:58.714 226109 INFO nova.virt.libvirt.driver [-] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Instance destroyed successfully.
Dec 06 07:25:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:58.749 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:62:03 10.100.0.3'], port_security=['fa:16:3e:0b:62:03 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8ace964d-3645-4195-8f65-8b625bce1b00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee1f9954-c558-49c3-9382-8e1ede273255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=eec9d8a5-6252-4c93-b74b-aecc3b28521f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:25:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:58.750 139580 INFO neutron.agent.ovn.metadata.agent [-] Port eec9d8a5-6252-4c93-b74b-aecc3b28521f in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e unbound from our chassis
Dec 06 07:25:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:58.752 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6209aab-d53f-4d58-9b94-ffb7adc6239e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:25:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:58.754 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[28a5dbb5-39c7-4d52-bc16-8c7579cb2459]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:25:58.754 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace which is not needed anymore
Dec 06 07:25:58 compute-1 ceph-mon[81689]: pgmap v2049: 305 pgs: 305 active+clean; 537 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 387 KiB/s rd, 4.4 MiB/s wr, 89 op/s
Dec 06 07:25:59 compute-1 nova_compute[226101]: 2025-12-06 07:25:59.054 226109 INFO nova.compute.manager [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Detaching volume 5061c285-a1ac-4ce6-96b8-9bd9d7f18a74
Dec 06 07:25:59 compute-1 ovn_controller[130279]: 2025-12-06T07:25:59Z|00439|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:25:59 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[264039]: [NOTICE]   (264043) : haproxy version is 2.8.14-c23fe91
Dec 06 07:25:59 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[264039]: [NOTICE]   (264043) : path to executable is /usr/sbin/haproxy
Dec 06 07:25:59 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[264039]: [WARNING]  (264043) : Exiting Master process...
Dec 06 07:25:59 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[264039]: [ALERT]    (264043) : Current worker (264045) exited with code 143 (Terminated)
Dec 06 07:25:59 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[264039]: [WARNING]  (264043) : All workers exited. Exiting... (0)
Dec 06 07:25:59 compute-1 systemd[1]: libpod-674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1.scope: Deactivated successfully.
Dec 06 07:25:59 compute-1 podman[264638]: 2025-12-06 07:25:59.14703571 +0000 UTC m=+0.296628167 container died 674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 07:25:59 compute-1 nova_compute[226101]: 2025-12-06 07:25:59.188 226109 INFO nova.virt.block_device [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Attempting to driver detach volume 5061c285-a1ac-4ce6-96b8-9bd9d7f18a74 from mountpoint /dev/vdb
Dec 06 07:25:59 compute-1 nova_compute[226101]: 2025-12-06 07:25:59.195 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Attempting to detach device vdb from instance 8ace964d-3645-4195-8f65-8b625bce1b00 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:25:59 compute-1 nova_compute[226101]: 2025-12-06 07:25:59.195 226109 DEBUG nova.virt.libvirt.guest [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:25:59 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:25:59 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-5061c285-a1ac-4ce6-96b8-9bd9d7f18a74">
Dec 06 07:25:59 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:25:59 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:25:59 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:25:59 compute-1 nova_compute[226101]:   </source>
Dec 06 07:25:59 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:25:59 compute-1 nova_compute[226101]:   <serial>5061c285-a1ac-4ce6-96b8-9bd9d7f18a74</serial>
Dec 06 07:25:59 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:25:59 compute-1 nova_compute[226101]: </disk>
Dec 06 07:25:59 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:25:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:59.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:25:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:59.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:59 compute-1 nova_compute[226101]: 2025-12-06 07:25:59.429 226109 DEBUG nova.compute.manager [req-89a97a0e-758e-4492-883f-2f32998479e5 req-0b04a6d3-292f-4b55-8a27-a537d06ea9a7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-vif-unplugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:59 compute-1 nova_compute[226101]: 2025-12-06 07:25:59.430 226109 DEBUG oslo_concurrency.lockutils [req-89a97a0e-758e-4492-883f-2f32998479e5 req-0b04a6d3-292f-4b55-8a27-a537d06ea9a7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:59 compute-1 nova_compute[226101]: 2025-12-06 07:25:59.430 226109 DEBUG oslo_concurrency.lockutils [req-89a97a0e-758e-4492-883f-2f32998479e5 req-0b04a6d3-292f-4b55-8a27-a537d06ea9a7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:59 compute-1 nova_compute[226101]: 2025-12-06 07:25:59.430 226109 DEBUG oslo_concurrency.lockutils [req-89a97a0e-758e-4492-883f-2f32998479e5 req-0b04a6d3-292f-4b55-8a27-a537d06ea9a7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:59 compute-1 nova_compute[226101]: 2025-12-06 07:25:59.431 226109 DEBUG nova.compute.manager [req-89a97a0e-758e-4492-883f-2f32998479e5 req-0b04a6d3-292f-4b55-8a27-a537d06ea9a7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] No waiting events found dispatching network-vif-unplugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:25:59 compute-1 nova_compute[226101]: 2025-12-06 07:25:59.431 226109 WARNING nova.compute.manager [req-89a97a0e-758e-4492-883f-2f32998479e5 req-0b04a6d3-292f-4b55-8a27-a537d06ea9a7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received unexpected event network-vif-unplugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f for instance with vm_state active and task_state rebuilding.
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.240 226109 INFO nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully detached device vdb from instance 8ace964d-3645-4195-8f65-8b625bce1b00 from the persistent domain config.
Dec 06 07:26:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:00 compute-1 ceph-mon[81689]: pgmap v2050: 305 pgs: 305 active+clean; 542 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 457 KiB/s rd, 4.9 MiB/s wr, 103 op/s
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.528 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:00 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1-userdata-shm.mount: Deactivated successfully.
Dec 06 07:26:00 compute-1 systemd[1]: var-lib-containers-storage-overlay-f58ac0433b963d761a8791697250391d9cb63f6958ffba5733048918972626eb-merged.mount: Deactivated successfully.
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.664 226109 INFO nova.virt.libvirt.driver [-] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Instance destroyed successfully.
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.665 226109 DEBUG nova.virt.libvirt.vif [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2051423871',display_name='tempest-ServerActionsTestOtherA-server-1755687548',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2051423871',id=105,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOmzXnyuZ/cQZ/wXv6i6+zthJNB9rLFLpJiO6CxE3XLRu8gE5HiPWRQFyDfJKugoxRwZmDUcohMXjvZhPRaFSbcXur6WAaqewZuOab5xSDtPso9c70pnfsVwIX5QAnRONw==',key_name='tempest-keypair-2072622459',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-tln6vdkz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=8ace964d-3645-4195-8f65-8b625bce1b00,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.665 226109 DEBUG nova.network.os_vif_util [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.666 226109 DEBUG nova.network.os_vif_util [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.666 226109 DEBUG os_vif [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.668 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.668 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeec9d8a5-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.669 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.672 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:26:00 compute-1 nova_compute[226101]: 2025-12-06 07:26:00.674 226109 INFO os_vif [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62')
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.033 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:01 compute-1 podman[264638]: 2025-12-06 07:26:01.209777804 +0000 UTC m=+2.359370231 container cleanup 674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:26:01 compute-1 systemd[1]: libpod-conmon-674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1.scope: Deactivated successfully.
Dec 06 07:26:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:01.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:01.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:01.643 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:01.643 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:01.644 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.703 226109 DEBUG nova.compute.manager [req-bd4aae52-da4e-41ac-b439-e4b37059a7d0 req-4791e13f-4c38-443c-988d-39390f4a0f87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.703 226109 DEBUG oslo_concurrency.lockutils [req-bd4aae52-da4e-41ac-b439-e4b37059a7d0 req-4791e13f-4c38-443c-988d-39390f4a0f87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.703 226109 DEBUG oslo_concurrency.lockutils [req-bd4aae52-da4e-41ac-b439-e4b37059a7d0 req-4791e13f-4c38-443c-988d-39390f4a0f87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.704 226109 DEBUG oslo_concurrency.lockutils [req-bd4aae52-da4e-41ac-b439-e4b37059a7d0 req-4791e13f-4c38-443c-988d-39390f4a0f87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.704 226109 DEBUG nova.compute.manager [req-bd4aae52-da4e-41ac-b439-e4b37059a7d0 req-4791e13f-4c38-443c-988d-39390f4a0f87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] No waiting events found dispatching network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.704 226109 WARNING nova.compute.manager [req-bd4aae52-da4e-41ac-b439-e4b37059a7d0 req-4791e13f-4c38-443c-988d-39390f4a0f87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received unexpected event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f for instance with vm_state active and task_state rebuilding.
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.788 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.788 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.788 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.788 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.789 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:01 compute-1 podman[264686]: 2025-12-06 07:26:01.978258109 +0000 UTC m=+0.747360483 container remove 674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:26:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:01.985 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce43d4f-a139-4f44-9297-0730104e9545]: (4, ('Sat Dec  6 07:25:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1)\n674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1\nSat Dec  6 07:26:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1)\n674560dbd1f75be43e6ec3d94a169c1d5c1b0478e6a1fe7bafe68ec308360aa1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:01.987 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[130267cd-4960-4e7b-b6c4-d17bc51fd9e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:01.988 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.990 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:01 compute-1 kernel: tapf6209aab-d0: left promiscuous mode
Dec 06 07:26:01 compute-1 nova_compute[226101]: 2025-12-06 07:26:01.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:01.999 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d37de09f-c278-46eb-8d44-67404400fd0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:02 compute-1 nova_compute[226101]: 2025-12-06 07:26:02.009 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:02.021 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1c0b6e1f-7485-4e23-ac37-6aef42775fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:02.022 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[570dabc8-9c2c-460f-a444-0394d2350315]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:02.047 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[114638d4-f859-4f35-a055-7401565facdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619406, 'reachable_time': 42512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264720, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:02.049 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:26:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:02.050 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0dc80b-0b10-4616-bc45-0e53d1c7b718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:02 compute-1 systemd[1]: run-netns-ovnmeta\x2df6209aab\x2dd53f\x2d4d58\x2d9b94\x2dffb7adc6239e.mount: Deactivated successfully.
Dec 06 07:26:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:26:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1219810440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:02 compute-1 nova_compute[226101]: 2025-12-06 07:26:02.271 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:02 compute-1 nova_compute[226101]: 2025-12-06 07:26:02.567 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:26:02 compute-1 nova_compute[226101]: 2025-12-06 07:26:02.567 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:26:02 compute-1 nova_compute[226101]: 2025-12-06 07:26:02.572 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:26:02 compute-1 nova_compute[226101]: 2025-12-06 07:26:02.572 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:26:02 compute-1 ceph-mon[81689]: pgmap v2051: 305 pgs: 305 active+clean; 546 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 741 KiB/s rd, 4.2 MiB/s wr, 118 op/s
Dec 06 07:26:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1219810440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:02 compute-1 nova_compute[226101]: 2025-12-06 07:26:02.783 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:26:02 compute-1 nova_compute[226101]: 2025-12-06 07:26:02.785 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4379MB free_disk=20.69680404663086GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:26:02 compute-1 nova_compute[226101]: 2025-12-06 07:26:02.785 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:02 compute-1 nova_compute[226101]: 2025-12-06 07:26:02.785 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:03 compute-1 nova_compute[226101]: 2025-12-06 07:26:03.014 226109 INFO nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating resource usage from migration 4e7568f7-0b1a-4d39-a986-35ec8e8330d4
Dec 06 07:26:03 compute-1 nova_compute[226101]: 2025-12-06 07:26:03.047 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 8ace964d-3645-4195-8f65-8b625bce1b00 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:26:03 compute-1 nova_compute[226101]: 2025-12-06 07:26:03.048 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Migration 4e7568f7-0b1a-4d39-a986-35ec8e8330d4 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 07:26:03 compute-1 nova_compute[226101]: 2025-12-06 07:26:03.048 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:26:03 compute-1 nova_compute[226101]: 2025-12-06 07:26:03.048 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:26:03 compute-1 nova_compute[226101]: 2025-12-06 07:26:03.111 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:03.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:03.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:26:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/142966572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:03 compute-1 nova_compute[226101]: 2025-12-06 07:26:03.592 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:03 compute-1 nova_compute[226101]: 2025-12-06 07:26:03.598 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:26:03 compute-1 nova_compute[226101]: 2025-12-06 07:26:03.657 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:26:03 compute-1 nova_compute[226101]: 2025-12-06 07:26:03.775 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:26:03 compute-1 nova_compute[226101]: 2025-12-06 07:26:03.776 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:04 compute-1 podman[264750]: 2025-12-06 07:26:04.108287156 +0000 UTC m=+0.088257414 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:26:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/429288119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/142966572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:04 compute-1 nova_compute[226101]: 2025-12-06 07:26:04.547 226109 INFO nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Deleting instance files /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00_del
Dec 06 07:26:04 compute-1 nova_compute[226101]: 2025-12-06 07:26:04.548 226109 INFO nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Deletion of /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00_del complete
Dec 06 07:26:04 compute-1 nova_compute[226101]: 2025-12-06 07:26:04.777 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:04 compute-1 nova_compute[226101]: 2025-12-06 07:26:04.777 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:26:04 compute-1 nova_compute[226101]: 2025-12-06 07:26:04.777 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:26:04 compute-1 nova_compute[226101]: 2025-12-06 07:26:04.834 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:26:04 compute-1 nova_compute[226101]: 2025-12-06 07:26:04.835 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:26:04 compute-1 nova_compute[226101]: 2025-12-06 07:26:04.835 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:26:04 compute-1 nova_compute[226101]: 2025-12-06 07:26:04.835 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid f32ea15c-cf80-482c-9f9a-22392bc79e78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:04 compute-1 nova_compute[226101]: 2025-12-06 07:26:04.942 226109 INFO nova.virt.block_device [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Booting with volume 5061c285-a1ac-4ce6-96b8-9bd9d7f18a74 at /dev/vdb
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.105 226109 DEBUG os_brick.utils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.107 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.119 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.120 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3bd0d6-5b17-4729-b853-4081d14db605]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.121 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.130 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.130 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[1763a101-3704-4137-bc0f-7e13ae2a7983]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.132 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.140 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.140 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[de34313d-df2c-4f74-9144-4415e5be47e7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.142 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[97a9d640-695f-4ead-8494-5da0e0696811]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.143 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.173 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.176 226109 DEBUG os_brick.initiator.connectors.lightos [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.176 226109 DEBUG os_brick.initiator.connectors.lightos [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.176 226109 DEBUG os_brick.initiator.connectors.lightos [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.177 226109 DEBUG os_brick.utils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.177 226109 DEBUG nova.virt.block_device [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Updating existing volume attachment record: 733d4904-fcb0-4f84-8fcd-8dc4b09d4a43 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:26:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:26:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:05.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:26:05 compute-1 ceph-mon[81689]: pgmap v2052: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 770 KiB/s rd, 4.3 MiB/s wr, 136 op/s
Dec 06 07:26:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2150822182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:05.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:05 compute-1 nova_compute[226101]: 2025-12-06 07:26:05.671 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:26:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2324340893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.035 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.272 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.273 226109 INFO nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Creating image(s)
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.347 226109 DEBUG nova.storage.rbd_utils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:26:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2324340893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:06 compute-1 ceph-mon[81689]: pgmap v2053: 305 pgs: 305 active+clean; 474 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 775 KiB/s rd, 3.2 MiB/s wr, 141 op/s
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.403 226109 DEBUG nova.storage.rbd_utils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.424 226109 DEBUG nova.storage.rbd_utils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.427 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.504 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.505 226109 DEBUG oslo_concurrency.lockutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.505 226109 DEBUG oslo_concurrency.lockutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.506 226109 DEBUG oslo_concurrency.lockutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.529 226109 DEBUG nova.storage.rbd_utils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.533 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 8ace964d-3645-4195-8f65-8b625bce1b00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.737 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating instance_info_cache with network_info: [{"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.777 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.778 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.778 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.779 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.779 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.779 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:06 compute-1 nova_compute[226101]: 2025-12-06 07:26:06.779 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:26:07 compute-1 podman[264878]: 2025-12-06 07:26:07.070681644 +0000 UTC m=+0.057627724 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:26:07 compute-1 podman[264877]: 2025-12-06 07:26:07.096806252 +0000 UTC m=+0.084227775 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:26:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:07.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:07.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:08 compute-1 ceph-mon[81689]: pgmap v2054: 305 pgs: 305 active+clean; 474 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 743 KiB/s rd, 801 KiB/s wr, 112 op/s
Dec 06 07:26:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:09.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:09.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:09 compute-1 nova_compute[226101]: 2025-12-06 07:26:09.365 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 8ace964d-3645-4195-8f65-8b625bce1b00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.832s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:09 compute-1 nova_compute[226101]: 2025-12-06 07:26:09.439 226109 DEBUG nova.storage.rbd_utils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] resizing rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:26:09 compute-1 nova_compute[226101]: 2025-12-06 07:26:09.743 226109 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:26:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2900093873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3294259920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:26:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3294259920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:26:10 compute-1 nova_compute[226101]: 2025-12-06 07:26:10.585 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:10 compute-1 nova_compute[226101]: 2025-12-06 07:26:10.674 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:11 compute-1 nova_compute[226101]: 2025-12-06 07:26:11.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:11.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:11 compute-1 ceph-mon[81689]: pgmap v2055: 305 pgs: 305 active+clean; 514 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 1.6 MiB/s wr, 147 op/s
Dec 06 07:26:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4230824791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:11.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:12 compute-1 nova_compute[226101]: 2025-12-06 07:26:12.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:13.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:13.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:13 compute-1 nova_compute[226101]: 2025-12-06 07:26:13.713 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005958.7116973, 8ace964d-3645-4195-8f65-8b625bce1b00 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:26:13 compute-1 nova_compute[226101]: 2025-12-06 07:26:13.713 226109 INFO nova.compute.manager [-] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] VM Stopped (Lifecycle Event)
Dec 06 07:26:13 compute-1 nova_compute[226101]: 2025-12-06 07:26:13.823 226109 DEBUG nova.compute.manager [None req-89101a2d-45ac-443e-870e-9313901c0573 - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:26:14 compute-1 ceph-mon[81689]: pgmap v2056: 305 pgs: 305 active+clean; 532 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 733 KiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.398 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.399 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Ensure instance console log exists: /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.400 226109 DEBUG oslo_concurrency.lockutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.401 226109 DEBUG oslo_concurrency.lockutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.402 226109 DEBUG oslo_concurrency.lockutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.409 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Start _get_guest_xml network_info=[{"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5061c285-a1ac-4ce6-96b8-9bd9d7f18a74', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5061c285-a1ac-4ce6-96b8-9bd9d7f18a74', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '8ace964d-3645-4195-8f65-8b625bce1b00', 'attached_at': '', 'detached_at': '', 'volume_id': '5061c285-a1ac-4ce6-96b8-9bd9d7f18a74', 'serial': '5061c285-a1ac-4ce6-96b8-9bd9d7f18a74'}, 'mount_device': '/dev/vdb', 'boot_index': None, 'attachment_id': '733d4904-fcb0-4f84-8fcd-8dc4b09d4a43', 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 
'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
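The block_device_mapping entry in the _get_guest_xml line above carries the Cinder RBD connection_info blob: driver_volume_type 'rbd', the three monitor hosts and ports, the cephx user, and a masked secret_uuid. A short sketch of pulling the libvirt-relevant pieces out of such a dict; the literal below is trimmed from the log line, with the masked secret left as-is:

```python
# connection_info trimmed from the log line above; the secret_uuid is
# masked in the journal, so the placeholder is kept here.
connection_info = {
    'driver_volume_type': 'rbd',
    'data': {
        'name': 'volumes/volume-5061c285-a1ac-4ce6-96b8-9bd9d7f18a74',
        'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'],
        'ports': ['6789', '6789', '6789'],
        'auth_username': 'openstack',
        'secret_uuid': '***',
    },
}

data = connection_info['data']
pool, image = data['name'].split('/', 1)       # rbd names are pool/image
mons = list(zip(data['hosts'], data['ports']))
print(pool, image)
print(mons)
# volumes volume-5061c285-a1ac-4ce6-96b8-9bd9d7f18a74
# [('192.168.122.100', '6789'), ('192.168.122.102', '6789'), ('192.168.122.101', '6789')]
```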
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.414 226109 WARNING nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.422 226109 DEBUG nova.virt.libvirt.host [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.423 226109 DEBUG nova.virt.libvirt.host [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.427 226109 DEBUG nova.virt.libvirt.host [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.428 226109 DEBUG nova.virt.libvirt.host [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.430 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.431 226109 DEBUG nova.virt.hardware [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.432 226109 DEBUG nova.virt.hardware [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.432 226109 DEBUG nova.virt.hardware [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.433 226109 DEBUG nova.virt.hardware [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.433 226109 DEBUG nova.virt.hardware [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.433 226109 DEBUG nova.virt.hardware [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.434 226109 DEBUG nova.virt.hardware [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.435 226109 DEBUG nova.virt.hardware [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.435 226109 DEBUG nova.virt.hardware [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.435 226109 DEBUG nova.virt.hardware [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.436 226109 DEBUG nova.virt.hardware [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
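The hardware.py sequence above is nova picking a guest CPU topology for the m1.nano flavor: no flavor or image preferences (0:0:0), limits defaulting to 65536 for sockets, cores, and threads, and exactly one viable topology for 1 vCPU, namely 1:1:1. A toy re-derivation of that enumeration; this mirrors the arithmetic the log reports, not nova's actual implementation:

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product is vcpus."""
    topos = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        per_socket = vcpus // sockets
        for cores in range(1, min(per_socket, max_cores) + 1):
            if per_socket % cores:
                continue
            threads = per_socket // cores
            if threads <= max_threads:
                topos.append((sockets, cores, threads))
    return topos

print(possible_topologies(1))  # [(1, 1, 1)] -- the single topology logged above
print(possible_topologies(4))  # several candidates; nova would sort by preference
```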
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.436 226109 DEBUG nova.objects.instance [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:14 compute-1 nova_compute[226101]: 2025-12-06 07:26:14.595 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:26:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1767994758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:15.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:15 compute-1 nova_compute[226101]: 2025-12-06 07:26:15.303 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.708s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
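Before touching RBD, nova shells out (via oslo.concurrency processutils, as logged above) to ceph mon dump to learn the monitor map. The same call reduced to the standard library; the command line is copied from the log, and actually running it needs the ceph CLI plus a valid keyring for client.openstack:

```python
import json
import subprocess

out = subprocess.check_output(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])

mon_dump = json.loads(out)
for mon in mon_dump.get('mons', []):
    # Field names per the ceph mon dump JSON output; public_addr may be
    # a v1/v2 address pair on newer releases.
    print(mon['name'], mon.get('public_addr'))
```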
Dec 06 07:26:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:15.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:15 compute-1 ceph-mon[81689]: pgmap v2057: 305 pgs: 305 active+clean; 560 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 2.7 MiB/s wr, 112 op/s
Dec 06 07:26:15 compute-1 nova_compute[226101]: 2025-12-06 07:26:15.523 226109 DEBUG nova.storage.rbd_utils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:26:15 compute-1 nova_compute[226101]: 2025-12-06 07:26:15.527 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:15 compute-1 nova_compute[226101]: 2025-12-06 07:26:15.678 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:26:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4140271475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:15 compute-1 nova_compute[226101]: 2025-12-06 07:26:15.950 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.040 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.074 226109 DEBUG nova.virt.libvirt.vif [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2051423871',display_name='tempest-ServerActionsTestOtherA-server-1755687548',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2051423871',id=105,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOmzXnyuZ/cQZ/wXv6i6+zthJNB9rLFLpJiO6CxE3XLRu8gE5HiPWRQFyDfJKugoxRwZmDUcohMXjvZhPRaFSbcXur6WAaqewZuOab5xSDtPso9c70pnfsVwIX5QAnRONw==',key_name='tempest-keypair-2072622459',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-tln6vdkz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:26:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=8ace964d-3645-4195-8f65-8b625bce1b00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", 
"bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.075 226109 DEBUG nova.network.os_vif_util [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.075 226109 DEBUG nova.network.os_vif_util [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.079 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:26:16 compute-1 nova_compute[226101]:   <uuid>8ace964d-3645-4195-8f65-8b625bce1b00</uuid>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   <name>instance-00000069</name>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerActionsTestOtherA-server-1755687548</nova:name>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:26:14</nova:creationTime>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <nova:user uuid="baddb65c90da47a58d026b0db966f6c8">tempest-ServerActionsTestOtherA-1949739102-project-member</nova:user>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <nova:project uuid="001e2256cb8b430d93c1ff613010d199">tempest-ServerActionsTestOtherA-1949739102</nova:project>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <nova:port uuid="eec9d8a5-6252-4c93-b74b-aecc3b28521f">
Dec 06 07:26:16 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <system>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <entry name="serial">8ace964d-3645-4195-8f65-8b625bce1b00</entry>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <entry name="uuid">8ace964d-3645-4195-8f65-8b625bce1b00</entry>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     </system>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   <os>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   </os>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   <features>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   </features>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8ace964d-3645-4195-8f65-8b625bce1b00_disk">
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       </source>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/8ace964d-3645-4195-8f65-8b625bce1b00_disk.config">
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       </source>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <source protocol="rbd" name="volumes/volume-5061c285-a1ac-4ce6-96b8-9bd9d7f18a74">
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       </source>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:26:16 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <target dev="vdb" bus="virtio"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <serial>5061c285-a1ac-4ce6-96b8-9bd9d7f18a74</serial>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:0b:62:03"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <target dev="tapeec9d8a5-62"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/console.log" append="off"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <video>
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     </video>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:26:16 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:26:16 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:26:16 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:26:16 compute-1 nova_compute[226101]: </domain>
Dec 06 07:26:16 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
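The domain XML dumped above describes the rebuilt guest end to end: a q35 KVM machine with a custom Nehalem CPU model, three network disks served by the same three Ceph monitors (root disk and config drive in the vms pool, the attached volume in volumes), and a virtio tap interface for OVS. A sketch that walks such XML with the standard library and lists the RBD disks; the snippet is trimmed to one disk, but the loop works on the full document as well:

```python
import xml.etree.ElementTree as ET

# One <disk> element trimmed from the domain XML above.
domain_xml = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/8ace964d-3645-4195-8f65-8b625bce1b00_disk">
        <host name="192.168.122.100" port="6789"/>
        <host name="192.168.122.102" port="6789"/>
        <host name="192.168.122.101" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

root = ET.fromstring(domain_xml)
for disk in root.findall('./devices/disk[@type="network"]'):
    source = disk.find('source')
    target = disk.find('target')
    mons = ['{}:{}'.format(h.get('name'), h.get('port'))
            for h in source.findall('host')]
    print(target.get('dev'), source.get('name'), mons)
# vda vms/8ace964d-3645-4195-8f65-8b625bce1b00_disk ['192.168.122.100:6789', ...]
```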
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.080 226109 DEBUG nova.virt.libvirt.vif [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2051423871',display_name='tempest-ServerActionsTestOtherA-server-1755687548',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2051423871',id=105,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOmzXnyuZ/cQZ/wXv6i6+zthJNB9rLFLpJiO6CxE3XLRu8gE5HiPWRQFyDfJKugoxRwZmDUcohMXjvZhPRaFSbcXur6WAaqewZuOab5xSDtPso9c70pnfsVwIX5QAnRONw==',key_name='tempest-keypair-2072622459',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-tln6vdkz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:26:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=8ace964d-3645-4195-8f65-8b625bce1b00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", 
"bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.080 226109 DEBUG nova.network.os_vif_util [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.080 226109 DEBUG nova.network.os_vif_util [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.081 226109 DEBUG os_vif [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.081 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.082 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.082 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.085 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.085 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeec9d8a5-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.085 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeec9d8a5-62, col_values=(('external_ids', {'iface-id': 'eec9d8a5-6252-4c93-b74b-aecc3b28521f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:62:03', 'vm-uuid': '8ace964d-3645-4195-8f65-8b625bce1b00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
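The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface row) are os-vif plugging the tap into br-int and stamping the Neutron port id, MAC, and instance UUID onto it so ovn-controller can later claim the port. A hedged sketch of the same pair through ovsdbapp's Open_vSwitch API; the socket path and timeout are assumptions, and it needs a reachable ovsdb-server:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local endpoint

idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))

external_ids = {
    'iface-id': 'eec9d8a5-6252-4c93-b74b-aecc3b28521f',
    'iface-status': 'active',
    'attached-mac': 'fa:16:3e:0b:62:03',
    'vm-uuid': '8ace964d-3645-4195-8f65-8b625bce1b00',
}

# The same two commands the log shows, batched into one transaction.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tapeec9d8a5-62', may_exist=True))
    txn.add(api.db_set('Interface', 'tapeec9d8a5-62',
                       ('external_ids', external_ids)))
```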
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.190 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:16 compute-1 NetworkManager[49031]: <info>  [1765005976.1916] manager: (tapeec9d8a5-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.193 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.199 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.200 226109 INFO os_vif [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62')
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.507 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.508 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.508 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.508 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No VIF found with MAC fa:16:3e:0b:62:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:26:16 compute-1 nova_compute[226101]: 2025-12-06 07:26:16.509 226109 INFO nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Using config drive
Dec 06 07:26:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2648319042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1767994758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1647351199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4140271475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:16 compute-1 ceph-mon[81689]: pgmap v2058: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 173 KiB/s rd, 3.7 MiB/s wr, 98 op/s
Dec 06 07:26:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1655734803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:17.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:17.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:19.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:26:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:19.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:26:19 compute-1 nova_compute[226101]: 2025-12-06 07:26:19.378 226109 DEBUG nova.storage.rbd_utils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:26:19 compute-1 nova_compute[226101]: 2025-12-06 07:26:19.455 226109 DEBUG nova.objects.instance [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:19 compute-1 nova_compute[226101]: 2025-12-06 07:26:19.567 226109 DEBUG nova.objects.instance [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'keypairs' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:19 compute-1 ceph-mon[81689]: pgmap v2059: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Dec 06 07:26:20 compute-1 nova_compute[226101]: 2025-12-06 07:26:20.077 226109 INFO nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Creating config drive at /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config
Dec 06 07:26:20 compute-1 nova_compute[226101]: 2025-12-06 07:26:20.083 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgg3fgrw8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:20 compute-1 nova_compute[226101]: 2025-12-06 07:26:20.214 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgg3fgrw8" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e265 e265: 3 total, 3 up, 3 in
Dec 06 07:26:20 compute-1 nova_compute[226101]: 2025-12-06 07:26:20.932 226109 DEBUG nova.storage.rbd_utils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 8ace964d-3645-4195-8f65-8b625bce1b00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:26:20 compute-1 nova_compute[226101]: 2025-12-06 07:26:20.937 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config 8ace964d-3645-4195-8f65-8b625bce1b00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:21 compute-1 ceph-mon[81689]: pgmap v2060: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.6 MiB/s wr, 71 op/s
Dec 06 07:26:21 compute-1 nova_compute[226101]: 2025-12-06 07:26:21.041 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:21 compute-1 nova_compute[226101]: 2025-12-06 07:26:21.190 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:21.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:23.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:23.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:23 compute-1 ceph-mon[81689]: osdmap e265: 3 total, 3 up, 3 in
Dec 06 07:26:23 compute-1 nova_compute[226101]: 2025-12-06 07:26:23.979 226109 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
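The _clean_shutdown line above concerns a different instance (f32ea15c-...): the guest is still in state 1 (running) 43 seconds after the first ACPI shutdown request, so nova re-sends it and keeps polling. A rough equivalent of that loop with libvirt-python; the domain name, interval, and timeout here are illustrative, not nova's configuration:

```python
import time
import libvirt

RETRY_INTERVAL = 10   # seconds between re-sent shutdown requests (assumed)
TIMEOUT = 120         # overall budget before giving up (assumed)

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-0000006a')  # hypothetical domain name

deadline = time.time() + TIMEOUT
while time.time() < deadline:
    state, _reason = dom.state()
    if state == libvirt.VIR_DOMAIN_SHUTOFF:
        break                 # guest powered off cleanly
    dom.shutdown()            # re-send the ACPI request, as the log does
    time.sleep(RETRY_INTERVAL)
```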
Dec 06 07:26:24 compute-1 nova_compute[226101]: 2025-12-06 07:26:24.023 226109 DEBUG oslo_concurrency.processutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config 8ace964d-3645-4195-8f65-8b625bce1b00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:24 compute-1 nova_compute[226101]: 2025-12-06 07:26:24.023 226109 INFO nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Deleting local config drive /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00/disk.config because it was imported into RBD.
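
The nova_compute entries from 07:26:20.932 through 07:26:24.023 are the config-drive import during rebuild: rbd_utils first finds that 8ace964d-..._disk.config does not exist in the vms pool, Nova shells out to rbd import (which returns 0 after 3.086s), and only then removes the local file. A rough sketch of that sequence, assuming oslo.concurrency's processutils and the pool/client/conf values from the logged command; this is an illustration, not Nova's rbd_utils code:

    # Import a local config drive into RBD, then delete the local copy
    # once the import succeeds -- mirroring the logged sequence.
    import os
    from oslo_concurrency import processutils

    def import_config_drive(instance_uuid, pool="vms",
                            base="/var/lib/nova/instances"):
        local = os.path.join(base, instance_uuid, "disk.config")
        image = "%s_disk.config" % instance_uuid
        # processutils.execute raises ProcessExecutionError on rc != 0
        processutils.execute(
            "rbd", "import", "--pool", pool, local, image,
            "--image-format=2", "--id", "openstack",
            "--conf", "/etc/ceph/ceph.conf")
        os.unlink(local)  # "Deleting local config drive ... imported into RBD"
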
Dec 06 07:26:24 compute-1 kernel: tapeec9d8a5-62: entered promiscuous mode
Dec 06 07:26:24 compute-1 NetworkManager[49031]: <info>  [1765005984.0723] manager: (tapeec9d8a5-62): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Dec 06 07:26:24 compute-1 nova_compute[226101]: 2025-12-06 07:26:24.073 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:24 compute-1 ovn_controller[130279]: 2025-12-06T07:26:24Z|00440|binding|INFO|Claiming lport eec9d8a5-6252-4c93-b74b-aecc3b28521f for this chassis.
Dec 06 07:26:24 compute-1 ovn_controller[130279]: 2025-12-06T07:26:24Z|00441|binding|INFO|eec9d8a5-6252-4c93-b74b-aecc3b28521f: Claiming fa:16:3e:0b:62:03 10.100.0.3
Dec 06 07:26:24 compute-1 nova_compute[226101]: 2025-12-06 07:26:24.091 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:24 compute-1 ovn_controller[130279]: 2025-12-06T07:26:24Z|00442|binding|INFO|Setting lport eec9d8a5-6252-4c93-b74b-aecc3b28521f ovn-installed in OVS
Dec 06 07:26:24 compute-1 nova_compute[226101]: 2025-12-06 07:26:24.094 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:24 compute-1 systemd-machined[190302]: New machine qemu-50-instance-00000069.
Dec 06 07:26:24 compute-1 systemd[1]: Started Virtual Machine qemu-50-instance-00000069.
Dec 06 07:26:24 compute-1 systemd-udevd[265123]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:26:24 compute-1 NetworkManager[49031]: <info>  [1765005984.1440] device (tapeec9d8a5-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:26:24 compute-1 NetworkManager[49031]: <info>  [1765005984.1448] device (tapeec9d8a5-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.223 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:62:03 10.100.0.3'], port_security=['fa:16:3e:0b:62:03 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8ace964d-3645-4195-8f65-8b625bce1b00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ee1f9954-c558-49c3-9382-8e1ede273255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=eec9d8a5-6252-4c93-b74b-aecc3b28521f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:26:24 compute-1 ovn_controller[130279]: 2025-12-06T07:26:24Z|00443|binding|INFO|Setting lport eec9d8a5-6252-4c93-b74b-aecc3b28521f up in Southbound
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.224 139580 INFO neutron.agent.ovn.metadata.agent [-] Port eec9d8a5-6252-4c93-b74b-aecc3b28521f in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e bound to our chassis
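
The Matched UPDATE entry just above is ovsdbapp's row-event machinery at work: the metadata agent registers an event class against the Southbound Port_Binding table and fires when the chassis column flips from empty to this chassis, which is what "bound to our chassis" reports. A stripped-down sketch of such an event, assuming ovsdbapp's RowEvent base class; the real agent's matching logic checks more than this:

    # React when a Port_Binding row is claimed by a given chassis.
    # Simplified from what the metadata agent actually checks.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBoundHere(row_event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only on the empty -> our-chassis transition.
            if not row.chassis or getattr(old, 'chassis', None):
                return False
            return row.chassis[0].name == self.chassis_name

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)
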
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.225 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.237 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e6484318-d3c7-4369-ac56-480dc50ccabf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.238 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6209aab-d1 in ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
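
Provisioning the datapath means wiring a veth pair: tapf6209aab-d0 stays in the root namespace (it is plugged into br-int a few entries below) while its peer tapf6209aab-d1 is moved into the ovnmeta- namespace where haproxy will bind 169.254.169.254. A minimal pyroute2 sketch of that wiring, reusing the interface and namespace names from the log; it is illustrative, not Neutron's ip_lib implementation:

    # Create a veth pair and move one end into the metadata namespace.
    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e'
    netns.create(ns)  # assumes the namespace does not exist yet
    ipr = IPRoute()
    ipr.link('add', ifname='tapf6209aab-d0', kind='veth',
             peer={'ifname': 'tapf6209aab-d1'})
    idx = ipr.link_lookup(ifname='tapf6209aab-d1')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)  # push peer into the namespace
    ipr.close()
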
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.239 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6209aab-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.239 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[62ae278e-af87-4b9f-af07-51a7ef69dd44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.241 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c14cbad9-7cce-4cea-8042-3d945364256e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.254 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[05a656ec-b208-43fd-b99a-c6e3c494fe58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.277 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a522af-26f0-481e-8510-2a6352df7186]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.303 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2dae9f-fcc7-479f-94da-cdc960f2b0aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 NetworkManager[49031]: <info>  [1765005984.3123] manager: (tapf6209aab-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/202)
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.313 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[40140d93-387f-4c99-bd6e-74466cd024ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.347 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[69247352-5d4b-4d66-9e5e-983bfde28fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.349 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[822eb589-34f2-4df3-8f35-ec8e0a3428a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 NetworkManager[49031]: <info>  [1765005984.3675] device (tapf6209aab-d0): carrier: link connected
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.372 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d3259c-4658-4a13-8a33-11320f439db2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.386 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[159f64d8-622e-4bdb-9d90-dee1356d95bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626387, 'reachable_time': 38780, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265158, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.397 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2a687f41-34bb-4630-8b0d-6a7c60745800]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:c5a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626387, 'tstamp': 626387}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265159, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.410 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[43d5845c-fc08-4056-9043-c5069c0c23ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626387, 'reachable_time': 38780, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265160, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.433 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e77aa9d4-0fcd-42ee-b8dc-e5367f91b98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.472 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd87fc4-1dfe-444b-a966-a21b9355bfdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.473 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.473 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.474 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6209aab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:24 compute-1 nova_compute[226101]: 2025-12-06 07:26:24.475 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:24 compute-1 NetworkManager[49031]: <info>  [1765005984.4760] manager: (tapf6209aab-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Dec 06 07:26:24 compute-1 kernel: tapf6209aab-d0: entered promiscuous mode
Dec 06 07:26:24 compute-1 nova_compute[226101]: 2025-12-06 07:26:24.477 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.478 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6209aab-d0, col_values=(('external_ids', {'iface-id': '1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
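
The three OVSDB transactions at 07:26:24.473-24.478 plug the root-namespace end into Open vSwitch: delete any stale tapf6209aab-d0 from br-ex, add it to br-int, and write external_ids:iface-id so ovn-controller can match the interface to its logical port. Roughly the same calls through ovsdbapp's vsctl-style API look like the sketch below; the ovsdb-server socket path is an assumption for this host:

    # Replay the three logged OVSDB transactions with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    ovs.del_port('tapf6209aab-d0', bridge='br-ex', if_exists=True).execute()
    ovs.add_port('br-int', 'tapf6209aab-d0', may_exist=True).execute()
    ovs.db_set(
        'Interface', 'tapf6209aab-d0',
        ('external_ids', {'iface-id': '1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c'}),
    ).execute(check_error=True)
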
Dec 06 07:26:24 compute-1 nova_compute[226101]: 2025-12-06 07:26:24.479 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:24 compute-1 ovn_controller[130279]: 2025-12-06T07:26:24Z|00444|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:26:24 compute-1 nova_compute[226101]: 2025-12-06 07:26:24.494 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.495 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.496 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[26c41f81-e86a-4800-83b3-925243f386e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.496 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:26:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:24.498 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'env', 'PROCESS_TAG=haproxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6209aab-d53f-4d58-9b94-ffb7adc6239e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
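
The rendered configuration above has haproxy listen on 169.254.169.254:80 inside the namespace, stamp every request with X-OVN-Network-ID so the agent knows which datapath it arrived from, and forward to the Unix socket /var/lib/neutron/metadata_proxy where the metadata agent itself listens. Once the container is running the path can be exercised from inside the namespace; note the agent resolves the caller by source address, so a probe sent from the namespace itself may well be rejected. Nothing below is taken from this log; the URL is the standard OpenStack metadata endpoint:

    # Poke the metadata proxy from inside the ovnmeta namespace.
    import subprocess

    ns = 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e'
    out = subprocess.run(
        ['ip', 'netns', 'exec', ns, 'curl', '-si',
         'http://169.254.169.254/openstack/latest/meta_data.json'],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # response headers included, haproxy hop visible
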
Dec 06 07:26:24 compute-1 ceph-mon[81689]: pgmap v2062: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Dec 06 07:26:24 compute-1 podman[265201]: 2025-12-06 07:26:24.815973629 +0000 UTC m=+0.024291960 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:26:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:25.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:25.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:25 compute-1 ceph-mon[81689]: pgmap v2063: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.3 MiB/s wr, 30 op/s
Dec 06 07:26:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1557758486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:25 compute-1 podman[265201]: 2025-12-06 07:26:25.843334136 +0000 UTC m=+1.051652447 container create 06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:26:25 compute-1 nova_compute[226101]: 2025-12-06 07:26:25.905 226109 DEBUG nova.compute.manager [req-0cf8e578-f4bd-49a4-98ee-5e05aafeb5b4 req-ef5e1b90-4592-4997-b530-b85619dc1e4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:25 compute-1 nova_compute[226101]: 2025-12-06 07:26:25.906 226109 DEBUG oslo_concurrency.lockutils [req-0cf8e578-f4bd-49a4-98ee-5e05aafeb5b4 req-ef5e1b90-4592-4997-b530-b85619dc1e4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:25 compute-1 nova_compute[226101]: 2025-12-06 07:26:25.906 226109 DEBUG oslo_concurrency.lockutils [req-0cf8e578-f4bd-49a4-98ee-5e05aafeb5b4 req-ef5e1b90-4592-4997-b530-b85619dc1e4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:25 compute-1 nova_compute[226101]: 2025-12-06 07:26:25.906 226109 DEBUG oslo_concurrency.lockutils [req-0cf8e578-f4bd-49a4-98ee-5e05aafeb5b4 req-ef5e1b90-4592-4997-b530-b85619dc1e4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:25 compute-1 nova_compute[226101]: 2025-12-06 07:26:25.907 226109 DEBUG nova.compute.manager [req-0cf8e578-f4bd-49a4-98ee-5e05aafeb5b4 req-ef5e1b90-4592-4997-b530-b85619dc1e4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] No waiting events found dispatching network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:25 compute-1 nova_compute[226101]: 2025-12-06 07:26:25.907 226109 WARNING nova.compute.manager [req-0cf8e578-f4bd-49a4-98ee-5e05aafeb5b4 req-ef5e1b90-4592-4997-b530-b85619dc1e4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received unexpected event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f for instance with vm_state active and task_state rebuild_spawning.
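
The "No waiting events" / "Received unexpected event" pair is Nova's external-event rendezvous: before an operation that needs networking, compute registers the event names it expects, Neutron posts network-vif-plugged through the API, and pop_instance_event matches the two under a per-instance lock. An event arriving with nothing registered, as here, is logged and dropped, which is harmless because the vif was plugged while the rebuild was already past its wait. The shape of that rendezvous, reduced to a single instance and plain threading rather than Nova's actual InstanceEvents class, assuming oslo.concurrency:

    # Prepare/wait vs. pop rendezvous, reduced for illustration.
    import threading
    from oslo_concurrency import lockutils

    _events = {}  # event name -> threading.Event

    def prepare_for_event(name):
        with lockutils.lock('instance-events'):
            ev = _events[name] = threading.Event()
        return ev  # caller blocks on ev.wait(timeout) later

    def pop_instance_event(name):
        with lockutils.lock('instance-events'):
            ev = _events.pop(name, None)
        if ev is None:
            # the log's WARNING path: nothing was waiting for this event
            print('No waiting events found dispatching %s' % name)
            return
        ev.set()  # wakes the waiter
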
Dec 06 07:26:26 compute-1 nova_compute[226101]: 2025-12-06 07:26:26.042 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:26 compute-1 nova_compute[226101]: 2025-12-06 07:26:26.192 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:26 compute-1 sshd-session[265177]: Invalid user user1 from 150.95.85.24 port 38862
Dec 06 07:26:26 compute-1 systemd[1]: Started libpod-conmon-06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76.scope.
Dec 06 07:26:26 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:26:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e199cc1480b7e4cc7ad5f65b25d8eaf1405ac3f84af3b3355c6b2240fc68dfd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:26:26 compute-1 sshd-session[265177]: Received disconnect from 150.95.85.24 port 38862:11:  [preauth]
Dec 06 07:26:26 compute-1 sshd-session[265177]: Disconnected from invalid user user1 150.95.85.24 port 38862 [preauth]
Dec 06 07:26:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1733590926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:27 compute-1 ceph-mon[81689]: pgmap v2064: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 19 KiB/s wr, 21 op/s
Dec 06 07:26:27 compute-1 podman[265201]: 2025-12-06 07:26:27.09285043 +0000 UTC m=+2.301168761 container init 06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:26:27 compute-1 podman[265201]: 2025-12-06 07:26:27.100453556 +0000 UTC m=+2.308771867 container start 06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:26:27 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[265245]: [NOTICE]   (265249) : New worker (265251) forked
Dec 06 07:26:27 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[265245]: [NOTICE]   (265249) : Loading success.
Dec 06 07:26:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:27.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:27.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.624 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005987.623842, 8ace964d-3645-4195-8f65-8b625bce1b00 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.625 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] VM Resumed (Lifecycle Event)
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.627 226109 DEBUG nova.compute.manager [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.628 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.631 226109 INFO nova.virt.libvirt.driver [-] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Instance spawned successfully.
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.631 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.734 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.738 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.739 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.740 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.740 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.741 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.741 226109 DEBUG nova.virt.libvirt.driver [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.746 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.799 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.799 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765005987.6252022, 8ace964d-3645-4195-8f65-8b625bce1b00 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.799 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] VM Started (Lifecycle Event)
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.828 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.831 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.897 226109 DEBUG nova.compute.manager [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:26:27 compute-1 nova_compute[226101]: 2025-12-06 07:26:27.933 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:26:28 compute-1 nova_compute[226101]: 2025-12-06 07:26:28.048 226109 DEBUG oslo_concurrency.lockutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:28 compute-1 nova_compute[226101]: 2025-12-06 07:26:28.049 226109 DEBUG oslo_concurrency.lockutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:28 compute-1 nova_compute[226101]: 2025-12-06 07:26:28.049 226109 DEBUG nova.objects.instance [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:26:28 compute-1 nova_compute[226101]: 2025-12-06 07:26:28.195 226109 DEBUG oslo_concurrency.lockutils [None req-392f78e6-45c1-41ad-9fb4-8acd9df20362 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
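
finish_evacuation runs under the global compute_resources lock (acquired at 07:26:28.049, held 0.146s) so the resource tracker's claims and allocations cannot interleave with other operations on this host. oslo.concurrency offers the same pattern as a decorator; in the sketch below only the locking mirrors the log and the function body is a placeholder:

    # Serialize resource-tracker updates on one named lock.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def finish_evacuation(instance_uuid, node):
        # update claims/allocations for the evacuated instance here
        print('finishing evacuation of %s on %s' % (instance_uuid, node))
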
Dec 06 07:26:28 compute-1 nova_compute[226101]: 2025-12-06 07:26:28.260 226109 DEBUG nova.compute.manager [req-289082ca-a63a-4e90-b33e-a9e3f906471f req-7a99ed6c-b0cf-4d70-94cf-d3f315e40306 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:28 compute-1 nova_compute[226101]: 2025-12-06 07:26:28.260 226109 DEBUG oslo_concurrency.lockutils [req-289082ca-a63a-4e90-b33e-a9e3f906471f req-7a99ed6c-b0cf-4d70-94cf-d3f315e40306 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:28 compute-1 nova_compute[226101]: 2025-12-06 07:26:28.261 226109 DEBUG oslo_concurrency.lockutils [req-289082ca-a63a-4e90-b33e-a9e3f906471f req-7a99ed6c-b0cf-4d70-94cf-d3f315e40306 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:28 compute-1 nova_compute[226101]: 2025-12-06 07:26:28.261 226109 DEBUG oslo_concurrency.lockutils [req-289082ca-a63a-4e90-b33e-a9e3f906471f req-7a99ed6c-b0cf-4d70-94cf-d3f315e40306 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:28 compute-1 nova_compute[226101]: 2025-12-06 07:26:28.261 226109 DEBUG nova.compute.manager [req-289082ca-a63a-4e90-b33e-a9e3f906471f req-7a99ed6c-b0cf-4d70-94cf-d3f315e40306 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] No waiting events found dispatching network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:28 compute-1 nova_compute[226101]: 2025-12-06 07:26:28.261 226109 WARNING nova.compute.manager [req-289082ca-a63a-4e90-b33e-a9e3f906471f req-7a99ed6c-b0cf-4d70-94cf-d3f315e40306 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received unexpected event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f for instance with vm_state active and task_state None.
Dec 06 07:26:28 compute-1 ceph-mon[81689]: pgmap v2065: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 19 KiB/s wr, 21 op/s
Dec 06 07:26:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:29.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:29.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:30 compute-1 ceph-mon[81689]: pgmap v2066: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 54 KiB/s wr, 113 op/s
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.011 226109 INFO nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance shutdown successfully after 50 seconds.
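
Together with "Instance in state 1 after 43 seconds - resending shutdown" at 07:26:23.979, this entry shows the clean-shutdown loop for f32ea15c-...: issue an ACPI shutdown, poll the power state, periodically re-send the request, and report success once the guest powers off inside the timeout. The loop reduces to roughly the following; the intervals and the guest helper names are illustrative, not Nova's exact constants:

    # Poll/retry clean shutdown like the logged _clean_shutdown flow.
    # guest.shutdown() and guest.is_powered_off() are placeholder names
    # for libvirt-guest helpers.
    import time

    def clean_shutdown(guest, timeout=60, retry_interval=10, poll=1):
        guest.shutdown()                    # first ACPI shutdown request
        deadline = time.monotonic() + timeout
        last_sent = time.monotonic()
        while time.monotonic() < deadline:
            if guest.is_powered_off():
                return True                 # "Instance shutdown successfully"
            if time.monotonic() - last_sent >= retry_interval:
                guest.shutdown()            # "resending shutdown"
                last_sent = time.monotonic()
            time.sleep(poll)
        return False                        # caller falls back to hard destroy
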
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.044 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:31 compute-1 kernel: tapfc5020f3-51 (unregistering): left promiscuous mode
Dec 06 07:26:31 compute-1 NetworkManager[49031]: <info>  [1765005991.1626] device (tapfc5020f3-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:26:31 compute-1 ovn_controller[130279]: 2025-12-06T07:26:31Z|00445|binding|INFO|Releasing lport fc5020f3-51a1-4ca2-b0b5-ff2add84607f from this chassis (sb_readonly=0)
Dec 06 07:26:31 compute-1 ovn_controller[130279]: 2025-12-06T07:26:31Z|00446|binding|INFO|Setting lport fc5020f3-51a1-4ca2-b0b5-ff2add84607f down in Southbound
Dec 06 07:26:31 compute-1 ovn_controller[130279]: 2025-12-06T07:26:31Z|00447|binding|INFO|Removing iface tapfc5020f3-51 ovn-installed in OVS
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.175 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.189 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:31.190 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:43:3f 10.100.0.11'], port_security=['fa:16:3e:39:43:3f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f32ea15c-cf80-482c-9f9a-22392bc79e78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '4', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=fc5020f3-51a1-4ca2-b0b5-ff2add84607f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:26:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:31.192 139580 INFO neutron.agent.ovn.metadata.agent [-] Port fc5020f3-51a1-4ca2-b0b5-ff2add84607f in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 unbound from our chassis
Dec 06 07:26:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:31.194 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.194 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:31.196 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e41ed7fc-a23a-4f95-aa44-5e2a5e9f45ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:31.197 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace which is not needed anymore
Dec 06 07:26:31 compute-1 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Dec 06 07:26:31 compute-1 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006a.scope: Consumed 16.676s CPU time.
Dec 06 07:26:31 compute-1 systemd-machined[190302]: Machine qemu-49-instance-0000006a terminated.
Dec 06 07:26:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:31.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:31.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.449 226109 INFO nova.virt.libvirt.driver [-] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance destroyed successfully.
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.450 226109 DEBUG nova.virt.libvirt.vif [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1353793859',display_name='tempest-DeleteServersTestJSON-server-1353793859',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1353793859',id=106,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-jutadpu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:25:34Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=f32ea15c-cf80-482c-9f9a-22392bc79e78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-855821425-network", "vif_mac": "fa:16:3e:39:43:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.451 226109 DEBUG nova.network.os_vif_util [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-855821425-network", "vif_mac": "fa:16:3e:39:43:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.452 226109 DEBUG nova.network.os_vif_util [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.452 226109 DEBUG os_vif [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.454 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.454 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc5020f3-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.455 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.457 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.459 226109 INFO os_vif [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51')
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.465 226109 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.466 226109 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:26:31 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[264272]: [NOTICE]   (264276) : haproxy version is 2.8.14-c23fe91
Dec 06 07:26:31 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[264272]: [NOTICE]   (264276) : path to executable is /usr/sbin/haproxy
Dec 06 07:26:31 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[264272]: [WARNING]  (264276) : Exiting Master process...
Dec 06 07:26:31 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[264272]: [WARNING]  (264276) : Exiting Master process...
Dec 06 07:26:31 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[264272]: [ALERT]    (264276) : Current worker (264278) exited with code 143 (Terminated)
Dec 06 07:26:31 compute-1 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[264272]: [WARNING]  (264276) : All workers exited. Exiting... (0)
Dec 06 07:26:31 compute-1 systemd[1]: libpod-421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19.scope: Deactivated successfully.
Dec 06 07:26:31 compute-1 podman[265311]: 2025-12-06 07:26:31.562531153 +0000 UTC m=+0.269414839 container died 421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.694 226109 DEBUG nova.compute.manager [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-unplugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.695 226109 DEBUG oslo_concurrency.lockutils [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.695 226109 DEBUG oslo_concurrency.lockutils [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.695 226109 DEBUG oslo_concurrency.lockutils [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.696 226109 DEBUG nova.compute.manager [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] No waiting events found dispatching network-vif-unplugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:31 compute-1 nova_compute[226101]: 2025-12-06 07:26:31.696 226109 WARNING nova.compute.manager [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received unexpected event network-vif-unplugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f for instance with vm_state active and task_state resize_migrating.
Dec 06 07:26:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:31 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19-userdata-shm.mount: Deactivated successfully.
Dec 06 07:26:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-419938d0b6b7a2dfbc9d743e6bb786751a93132a7dfe38d3c3e7527ca27f39c9-merged.mount: Deactivated successfully.
Dec 06 07:26:32 compute-1 nova_compute[226101]: 2025-12-06 07:26:32.129 226109 DEBUG neutronclient.v2_0.client [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port fc5020f3-51a1-4ca2-b0b5-ff2add84607f for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Dec 06 07:26:32 compute-1 podman[265311]: 2025-12-06 07:26:32.446455229 +0000 UTC m=+1.153338915 container cleanup 421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:26:32 compute-1 systemd[1]: libpod-conmon-421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19.scope: Deactivated successfully.
Dec 06 07:26:32 compute-1 ceph-mon[81689]: pgmap v2067: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 51 KiB/s wr, 205 op/s
Dec 06 07:26:32 compute-1 nova_compute[226101]: 2025-12-06 07:26:32.589 226109 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:32 compute-1 nova_compute[226101]: 2025-12-06 07:26:32.590 226109 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:32 compute-1 nova_compute[226101]: 2025-12-06 07:26:32.590 226109 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:33.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:33 compute-1 podman[265354]: 2025-12-06 07:26:33.29104815 +0000 UTC m=+0.817721442 container remove 421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:26:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:33.297 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[54e82c57-9413-4359-bbdd-1928049b3f55]: (4, ('Sat Dec  6 07:26:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19)\n421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19\nSat Dec  6 07:26:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19)\n421aca615b1905d730880cdf57beb5453f40a8cb5b56fb00ec3275d05ba05f19\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:33.298 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7f3df0-e17a-4d07-bd71-28aeb376bea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:33.299 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:33 compute-1 nova_compute[226101]: 2025-12-06 07:26:33.301 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:33 compute-1 kernel: tap85cfbf28-70: left promiscuous mode
Dec 06 07:26:33 compute-1 nova_compute[226101]: 2025-12-06 07:26:33.314 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:33.317 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa3857a-7883-4809-a095-ee5788e210cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:33.335 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d8fc31ba-f973-4e0e-a9cf-5a804db174a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:33.337 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ce46cb-4410-4761-8c16-d4048fa94399]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:33.350 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a26fc17b-9556-4d94-9369-9e17480d6664]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620759, 'reachable_time': 16792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265370, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:33 compute-1 systemd[1]: run-netns-ovnmeta\x2d85cfbf28\x2d7016\x2d4776\x2d8fc2\x2d2eb08a6b8347.mount: Deactivated successfully.
Dec 06 07:26:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:33.356 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:26:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:33.356 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8cca4d-9c04-4ded-aaa7-777c6c4f9a3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:33.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:33 compute-1 nova_compute[226101]: 2025-12-06 07:26:33.683 226109 DEBUG oslo_concurrency.lockutils [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:33 compute-1 nova_compute[226101]: 2025-12-06 07:26:33.684 226109 DEBUG oslo_concurrency.lockutils [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.239 226109 INFO nova.compute.manager [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Detaching volume 5061c285-a1ac-4ce6-96b8-9bd9d7f18a74
Dec 06 07:26:34 compute-1 ceph-mon[81689]: pgmap v2068: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 50 KiB/s wr, 237 op/s
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.447 226109 INFO nova.virt.block_device [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Attempting to driver detach volume 5061c285-a1ac-4ce6-96b8-9bd9d7f18a74 from mountpoint /dev/vdb
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.458 226109 DEBUG nova.virt.libvirt.driver [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Attempting to detach device vdb from instance 8ace964d-3645-4195-8f65-8b625bce1b00 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.460 226109 DEBUG nova.virt.libvirt.guest [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:26:34 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-5061c285-a1ac-4ce6-96b8-9bd9d7f18a74">
Dec 06 07:26:34 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]:   </source>
Dec 06 07:26:34 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]:   <serial>5061c285-a1ac-4ce6-96b8-9bd9d7f18a74</serial>
Dec 06 07:26:34 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]: </disk>
Dec 06 07:26:34 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.580 226109 INFO nova.virt.libvirt.driver [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully detached device vdb from instance 8ace964d-3645-4195-8f65-8b625bce1b00 from the persistent domain config.
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.580 226109 DEBUG nova.virt.libvirt.driver [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 8ace964d-3645-4195-8f65-8b625bce1b00 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.581 226109 DEBUG nova.virt.libvirt.guest [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:26:34 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-5061c285-a1ac-4ce6-96b8-9bd9d7f18a74">
Dec 06 07:26:34 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]:   </source>
Dec 06 07:26:34 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]:   <serial>5061c285-a1ac-4ce6-96b8-9bd9d7f18a74</serial>
Dec 06 07:26:34 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec 06 07:26:34 compute-1 nova_compute[226101]: </disk>
Dec 06 07:26:34 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.867 226109 DEBUG nova.compute.manager [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.868 226109 DEBUG oslo_concurrency.lockutils [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.868 226109 DEBUG oslo_concurrency.lockutils [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.869 226109 DEBUG oslo_concurrency.lockutils [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.869 226109 DEBUG nova.compute.manager [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] No waiting events found dispatching network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:34 compute-1 nova_compute[226101]: 2025-12-06 07:26:34.869 226109 WARNING nova.compute.manager [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received unexpected event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f for instance with vm_state active and task_state resize_migrated.
Dec 06 07:26:35 compute-1 podman[265371]: 2025-12-06 07:26:35.150663042 +0000 UTC m=+0.134820219 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:26:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:35.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:35.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:35 compute-1 nova_compute[226101]: 2025-12-06 07:26:35.740 226109 DEBUG nova.compute.manager [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-changed-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:35 compute-1 nova_compute[226101]: 2025-12-06 07:26:35.740 226109 DEBUG nova.compute.manager [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Refreshing instance network info cache due to event network-changed-fc5020f3-51a1-4ca2-b0b5-ff2add84607f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:26:35 compute-1 nova_compute[226101]: 2025-12-06 07:26:35.741 226109 DEBUG oslo_concurrency.lockutils [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:26:35 compute-1 nova_compute[226101]: 2025-12-06 07:26:35.741 226109 DEBUG oslo_concurrency.lockutils [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:26:35 compute-1 nova_compute[226101]: 2025-12-06 07:26:35.741 226109 DEBUG nova.network.neutron [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Refreshing network info cache for port fc5020f3-51a1-4ca2-b0b5-ff2add84607f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:26:36 compute-1 nova_compute[226101]: 2025-12-06 07:26:36.047 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:36 compute-1 nova_compute[226101]: 2025-12-06 07:26:36.063 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765005996.0634844, 8ace964d-3645-4195-8f65-8b625bce1b00 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:26:36 compute-1 nova_compute[226101]: 2025-12-06 07:26:36.065 226109 DEBUG nova.virt.libvirt.driver [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 8ace964d-3645-4195-8f65-8b625bce1b00 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:26:36 compute-1 nova_compute[226101]: 2025-12-06 07:26:36.068 226109 INFO nova.virt.libvirt.driver [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully detached device vdb from instance 8ace964d-3645-4195-8f65-8b625bce1b00 from the live domain config.
Dec 06 07:26:36 compute-1 nova_compute[226101]: 2025-12-06 07:26:36.457 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:36 compute-1 nova_compute[226101]: 2025-12-06 07:26:36.623 226109 DEBUG nova.objects.instance [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'flavor' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:37.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:37 compute-1 ceph-mon[81689]: pgmap v2069: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 50 KiB/s wr, 231 op/s
Dec 06 07:26:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:26:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:37.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:26:37 compute-1 nova_compute[226101]: 2025-12-06 07:26:37.488 226109 DEBUG oslo_concurrency.lockutils [None req-e927fc48-0694-4571-adcc-688a430d2e37 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 3.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:38 compute-1 podman[265400]: 2025-12-06 07:26:38.067216425 +0000 UTC m=+0.050083970 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 07:26:38 compute-1 podman[265399]: 2025-12-06 07:26:38.072256611 +0000 UTC m=+0.057343456 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.170 226109 DEBUG nova.network.neutron [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updated VIF entry in instance network info cache for port fc5020f3-51a1-4ca2-b0b5-ff2add84607f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.170 226109 DEBUG nova.network.neutron [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating instance_info_cache with network_info: [{"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.199 226109 DEBUG oslo_concurrency.lockutils [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.367 226109 DEBUG oslo_concurrency.lockutils [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.367 226109 DEBUG oslo_concurrency.lockutils [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.368 226109 DEBUG oslo_concurrency.lockutils [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.368 226109 DEBUG oslo_concurrency.lockutils [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.368 226109 DEBUG oslo_concurrency.lockutils [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.369 226109 INFO nova.compute.manager [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Terminating instance
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.370 226109 DEBUG nova.compute.manager [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:26:38 compute-1 kernel: tapeec9d8a5-62 (unregistering): left promiscuous mode
Dec 06 07:26:38 compute-1 NetworkManager[49031]: <info>  [1765005998.4464] device (tapeec9d8a5-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:26:38 compute-1 ovn_controller[130279]: 2025-12-06T07:26:38Z|00448|binding|INFO|Releasing lport eec9d8a5-6252-4c93-b74b-aecc3b28521f from this chassis (sb_readonly=0)
Dec 06 07:26:38 compute-1 ovn_controller[130279]: 2025-12-06T07:26:38Z|00449|binding|INFO|Setting lport eec9d8a5-6252-4c93-b74b-aecc3b28521f down in Southbound
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.453 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:38 compute-1 ovn_controller[130279]: 2025-12-06T07:26:38Z|00450|binding|INFO|Removing iface tapeec9d8a5-62 ovn-installed in OVS
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.457 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.473 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:38 compute-1 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000069.scope: Deactivated successfully.
Dec 06 07:26:38 compute-1 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000069.scope: Consumed 10.694s CPU time.
Dec 06 07:26:38 compute-1 systemd-machined[190302]: Machine qemu-50-instance-00000069 terminated.
Dec 06 07:26:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:38.496 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:62:03 10.100.0.3'], port_security=['fa:16:3e:0b:62:03 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8ace964d-3645-4195-8f65-8b625bce1b00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ee1f9954-c558-49c3-9382-8e1ede273255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=eec9d8a5-6252-4c93-b74b-aecc3b28521f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:26:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:38.497 139580 INFO neutron.agent.ovn.metadata.agent [-] Port eec9d8a5-6252-4c93-b74b-aecc3b28521f in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e unbound from our chassis
Dec 06 07:26:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:38.499 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6209aab-d53f-4d58-9b94-ffb7adc6239e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:26:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:38.501 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2068b6c1-e293-4afd-a8f0-d972622aab2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:38.501 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace which is not needed anymore
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.587 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.591 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.604 226109 INFO nova.virt.libvirt.driver [-] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Instance destroyed successfully.
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.605 226109 DEBUG nova.objects.instance [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'resources' on Instance uuid 8ace964d-3645-4195-8f65-8b625bce1b00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.625 226109 DEBUG nova.virt.libvirt.vif [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:25:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2051423871',display_name='tempest-ServerActionsTestOtherA-server-1755687548',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2051423871',id=105,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOmzXnyuZ/cQZ/wXv6i6+zthJNB9rLFLpJiO6CxE3XLRu8gE5HiPWRQFyDfJKugoxRwZmDUcohMXjvZhPRaFSbcXur6WAaqewZuOab5xSDtPso9c70pnfsVwIX5QAnRONw==',key_name='tempest-keypair-2072622459',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:26:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-tln6vdkz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:26:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=8ace964d-3645-4195-8f65-8b625bce1b00,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.625 226109 DEBUG nova.network.os_vif_util [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "address": "fa:16:3e:0b:62:03", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeec9d8a5-62", "ovs_interfaceid": "eec9d8a5-6252-4c93-b74b-aecc3b28521f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.626 226109 DEBUG nova.network.os_vif_util [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.626 226109 DEBUG os_vif [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.627 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.628 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeec9d8a5-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.629 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.630 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:38 compute-1 nova_compute[226101]: 2025-12-06 07:26:38.632 226109 INFO os_vif [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:62:03,bridge_name='br-int',has_traffic_filtering=True,id=eec9d8a5-6252-4c93-b74b-aecc3b28521f,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeec9d8a5-62')
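The sequence above is the complete teardown path for one port: nova's libvirt driver hands the legacy VIF dict to os_vif_util, which converts it into a typed VIFOpenVSwitch object, and the os-vif library's ovs plugin then removes the tap port from br-int via an ovsdbapp DelPortCommand. A minimal sketch of driving the same library call directly, with the field values taken from the log lines and everything else illustrative:

    import os_vif
    from os_vif.objects import instance_info
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # loads the in-tree plugins, including 'ovs'

    inst = instance_info.InstanceInfo(
        uuid='8ace964d-3645-4195-8f65-8b625bce1b00',
        name='tempest-ServerActionsTestOtherA-server-1755687548')

    vif = vif_obj.VIFOpenVSwitch(
        id='eec9d8a5-6252-4c93-b74b-aecc3b28521f',
        address='fa:16:3e:0b:62:03',
        vif_name='tapeec9d8a5-62',
        bridge_name='br-int')

    os_vif.unplug(vif, inst)  # dispatches to the ovs plugin's unplug()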
Dec 06 07:26:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:38.891 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
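The metadata agent reacts to that SB_Global bump through ovsdbapp's row-event machinery; the "Matched UPDATE" line is its matches() hook firing on the new nb_cfg value. A rough sketch of how such an event is declared, mirroring ovsdbapp's RowEvent API (the real handler lives in neutron.agent.ovn.metadata.agent):

    from ovsdbapp.backend.ovs_idl import event

    class SbGlobalUpdateEvent(event.RowEvent):
        def __init__(self):
            # Fire on any UPDATE to the (single-row) SB_Global table.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event_, row, old):
            # row.nb_cfg is the northbound sequence number; the log line
            # above shows it moving from 39 to 40.
            print('nb_cfg is now', row.nb_cfg)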
Dec 06 07:26:39 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[265245]: [NOTICE]   (265249) : haproxy version is 2.8.14-c23fe91
Dec 06 07:26:39 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[265245]: [NOTICE]   (265249) : path to executable is /usr/sbin/haproxy
Dec 06 07:26:39 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[265245]: [WARNING]  (265249) : Exiting Master process...
Dec 06 07:26:39 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[265245]: [WARNING]  (265249) : Exiting Master process...
Dec 06 07:26:39 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[265245]: [ALERT]    (265249) : Current worker (265251) exited with code 143 (Terminated)
Dec 06 07:26:39 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[265245]: [WARNING]  (265249) : All workers exited. Exiting... (0)
Dec 06 07:26:39 compute-1 systemd[1]: libpod-06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76.scope: Deactivated successfully.
Dec 06 07:26:39 compute-1 podman[265459]: 2025-12-06 07:26:39.176123375 +0000 UTC m=+0.597969831 container died 06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:26:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:39.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:39.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:39 compute-1 ceph-mon[81689]: pgmap v2070: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 35 KiB/s wr, 227 op/s
Dec 06 07:26:40 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76-userdata-shm.mount: Deactivated successfully.
Dec 06 07:26:40 compute-1 systemd[1]: var-lib-containers-storage-overlay-9e199cc1480b7e4cc7ad5f65b25d8eaf1405ac3f84af3b3355c6b2240fc68dfd-merged.mount: Deactivated successfully.
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.766 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.770 226109 DEBUG nova.compute.manager [req-520a3656-5430-4c47-80eb-d5088369e261 req-3bc4ec72-61e5-4ddb-8278-0b7deeeb5ff9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-vif-unplugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.770 226109 DEBUG oslo_concurrency.lockutils [req-520a3656-5430-4c47-80eb-d5088369e261 req-3bc4ec72-61e5-4ddb-8278-0b7deeeb5ff9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.771 226109 DEBUG oslo_concurrency.lockutils [req-520a3656-5430-4c47-80eb-d5088369e261 req-3bc4ec72-61e5-4ddb-8278-0b7deeeb5ff9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.771 226109 DEBUG oslo_concurrency.lockutils [req-520a3656-5430-4c47-80eb-d5088369e261 req-3bc4ec72-61e5-4ddb-8278-0b7deeeb5ff9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
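The Acquiring/acquired/released triplets around pop_instance_event are oslo.concurrency locks keyed on '<instance-uuid>-events'; "held 0.000s" shows the event-queue lookup is effectively instantaneous. The same pattern, reduced to a sketch with the lock name taken from the log:

    from oslo_concurrency import lockutils

    with lockutils.lock('8ace964d-3645-4195-8f65-8b625bce1b00-events'):
        # critical section: pop any waiting network-vif-* event for the
        # instance before dispatching it to the compute manager
        pass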
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.771 226109 DEBUG nova.compute.manager [req-520a3656-5430-4c47-80eb-d5088369e261 req-3bc4ec72-61e5-4ddb-8278-0b7deeeb5ff9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] No waiting events found dispatching network-vif-unplugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.772 226109 DEBUG nova.compute.manager [req-520a3656-5430-4c47-80eb-d5088369e261 req-3bc4ec72-61e5-4ddb-8278-0b7deeeb5ff9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-vif-unplugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.852 226109 DEBUG nova.compute.manager [req-ed130832-e600-4abd-b7f6-28e16614ba98 req-681e6886-5e54-459b-8915-e01b0b7de27d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.852 226109 DEBUG oslo_concurrency.lockutils [req-ed130832-e600-4abd-b7f6-28e16614ba98 req-681e6886-5e54-459b-8915-e01b0b7de27d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.853 226109 DEBUG oslo_concurrency.lockutils [req-ed130832-e600-4abd-b7f6-28e16614ba98 req-681e6886-5e54-459b-8915-e01b0b7de27d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.853 226109 DEBUG oslo_concurrency.lockutils [req-ed130832-e600-4abd-b7f6-28e16614ba98 req-681e6886-5e54-459b-8915-e01b0b7de27d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.853 226109 DEBUG nova.compute.manager [req-ed130832-e600-4abd-b7f6-28e16614ba98 req-681e6886-5e54-459b-8915-e01b0b7de27d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] No waiting events found dispatching network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:40 compute-1 nova_compute[226101]: 2025-12-06 07:26:40.854 226109 WARNING nova.compute.manager [req-ed130832-e600-4abd-b7f6-28e16614ba98 req-681e6886-5e54-459b-8915-e01b0b7de27d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received unexpected event network-vif-plugged-eec9d8a5-6252-4c93-b74b-aecc3b28521f for instance with vm_state active and task_state deleting.
Dec 06 07:26:41 compute-1 nova_compute[226101]: 2025-12-06 07:26:41.050 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e266 e266: 3 total, 3 up, 3 in
Dec 06 07:26:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:41.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:41.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
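The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and .102 recur roughly every two seconds and always return 200 with near-zero latency: these are load-balancer health probes against the radosgw beast frontend, not user traffic. Reproducing one probe (both the target host and port 8080 are assumptions; the listening endpoint is not shown in these lines):

    import http.client

    conn = http.client.HTTPConnection('compute-1.ctlplane.example.com', 8080,
                                      timeout=5)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # expect 200, as in the beast lines
    conn.close()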
Dec 06 07:26:41 compute-1 podman[265459]: 2025-12-06 07:26:41.482791504 +0000 UTC m=+2.904637960 container cleanup 06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:26:41 compute-1 systemd[1]: libpod-conmon-06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76.scope: Deactivated successfully.
Dec 06 07:26:41 compute-1 ceph-mon[81689]: pgmap v2071: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 36 KiB/s wr, 248 op/s
Dec 06 07:26:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e267 e267: 3 total, 3 up, 3 in
Dec 06 07:26:42 compute-1 podman[265514]: 2025-12-06 07:26:42.407470497 +0000 UTC m=+0.903860799 container remove 06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 07:26:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:42.414 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[226af93c-2190-490d-8a67-638cfd1ffc52]: (4, ('Sat Dec  6 07:26:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76)\n06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76\nSat Dec  6 07:26:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76)\n06ba25a1e4668f08c989f1ab0e63b4a8db91e004d0feefe4124734450affba76\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:42.415 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cd54be84-c9ac-420b-9264-f26ab830b4e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:42.416 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:42 compute-1 kernel: tapf6209aab-d0: left promiscuous mode
Dec 06 07:26:42 compute-1 nova_compute[226101]: 2025-12-06 07:26:42.467 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:42 compute-1 nova_compute[226101]: 2025-12-06 07:26:42.482 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:42 compute-1 nova_compute[226101]: 2025-12-06 07:26:42.483 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:42.486 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1bcd45-5038-4e7c-94d5-33bcecd6cee3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:42.502 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5dba3c-250a-4269-8232-b8a82f58e1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:42.503 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[31bb4942-1a67-42ba-aac9-1ec3bc79b540]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:42.518 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[56f8e1e9-3d15-409b-9cc2-9cd811e3dbcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626380, 'reachable_time': 40384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265529, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:42.520 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:26:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:42.521 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb5e035-24a5-4d51-b365-38508bc25f04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
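With the haproxy container gone and tapf6209aab-d0 removed from br-int, the agent deletes the per-network metadata namespace; the RTM_NEWLINK dump above is the final interface listing taken inside it (only lo remains) before removal. neutron's privileged ip_lib performs the deletion through pyroute2, roughly as follows (requires root; the namespace name is taken from the log):

    from pyroute2 import netns

    netns.remove('ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e')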
Dec 06 07:26:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:42.521 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:26:42 compute-1 systemd[1]: run-netns-ovnmeta\x2df6209aab\x2dd53f\x2d4d58\x2d9b94\x2dffb7adc6239e.mount: Deactivated successfully.
Dec 06 07:26:42 compute-1 ceph-mon[81689]: osdmap e266: 3 total, 3 up, 3 in
Dec 06 07:26:42 compute-1 ceph-mon[81689]: pgmap v2073: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 KiB/s wr, 114 op/s
Dec 06 07:26:42 compute-1 ceph-mon[81689]: osdmap e267: 3 total, 3 up, 3 in
Dec 06 07:26:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:43.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:43.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:43 compute-1 nova_compute[226101]: 2025-12-06 07:26:43.631 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:44 compute-1 nova_compute[226101]: 2025-12-06 07:26:44.419 226109 INFO nova.virt.libvirt.driver [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Deleting instance files /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00_del
Dec 06 07:26:44 compute-1 nova_compute[226101]: 2025-12-06 07:26:44.420 226109 INFO nova.virt.libvirt.driver [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Deletion of /var/lib/nova/instances/8ace964d-3645-4195-8f65-8b625bce1b00_del complete
Dec 06 07:26:44 compute-1 ceph-mon[81689]: pgmap v2075: 305 pgs: 305 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.7 KiB/s wr, 183 op/s
Dec 06 07:26:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2168580241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:44 compute-1 nova_compute[226101]: 2025-12-06 07:26:44.625 226109 INFO nova.compute.manager [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Took 6.26 seconds to destroy the instance on the hypervisor.
Dec 06 07:26:44 compute-1 nova_compute[226101]: 2025-12-06 07:26:44.626 226109 DEBUG oslo.service.loopingcall [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:26:44 compute-1 nova_compute[226101]: 2025-12-06 07:26:44.626 226109 DEBUG nova.compute.manager [-] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:26:44 compute-1 nova_compute[226101]: 2025-12-06 07:26:44.626 226109 DEBUG nova.network.neutron [-] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:26:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:45.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:45.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3034104894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/514090520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.088 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.448 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005991.4472885, f32ea15c-cf80-482c-9f9a-22392bc79e78 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.448 226109 INFO nova.compute.manager [-] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] VM Stopped (Lifecycle Event)
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.538 226109 DEBUG nova.compute.manager [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.538 226109 DEBUG oslo_concurrency.lockutils [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.539 226109 DEBUG oslo_concurrency.lockutils [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.539 226109 DEBUG oslo_concurrency.lockutils [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.539 226109 DEBUG nova.compute.manager [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] No waiting events found dispatching network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.539 226109 WARNING nova.compute.manager [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received unexpected event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f for instance with vm_state active and task_state resize_finish.
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.832 226109 DEBUG nova.compute.manager [None req-4e89751e-bce6-4555-8615-53fea117ed49 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.837 226109 DEBUG nova.compute.manager [None req-4e89751e-bce6-4555-8615-53fea117ed49 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:26:46 compute-1 nova_compute[226101]: 2025-12-06 07:26:46.892 226109 INFO nova.compute.manager [None req-4e89751e-bce6-4555-8615-53fea117ed49 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
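The sync line is readable once the integer codes are decoded: they come from nova.compute.power_state, and here the database still records the instance as running while libvirt already reports it shut down, which is expected mid-resize. The values, restated for reference in a self-contained snippet:

    # Codes as defined in nova.compute.power_state
    POWER_STATES = {0: 'NOSTATE', 1: 'Running', 3: 'Paused',
                    4: 'Shutdown', 6: 'Crashed', 7: 'Suspended'}
    print(POWER_STATES[1], '->', POWER_STATES[4])  # DB: Running, VM: Shutdown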
Dec 06 07:26:47 compute-1 nova_compute[226101]: 2025-12-06 07:26:47.201 226109 DEBUG nova.network.neutron [-] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:26:47 compute-1 ceph-mon[81689]: pgmap v2076: 305 pgs: 305 active+clean; 529 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 21 KiB/s wr, 236 op/s
Dec 06 07:26:47 compute-1 nova_compute[226101]: 2025-12-06 07:26:47.292 226109 INFO nova.compute.manager [-] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Took 2.67 seconds to deallocate network for instance.
Dec 06 07:26:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:47.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:47.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:47 compute-1 nova_compute[226101]: 2025-12-06 07:26:47.546 226109 DEBUG oslo_concurrency.lockutils [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:47 compute-1 nova_compute[226101]: 2025-12-06 07:26:47.547 226109 DEBUG oslo_concurrency.lockutils [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:47 compute-1 nova_compute[226101]: 2025-12-06 07:26:47.640 226109 DEBUG oslo_concurrency.processutils [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:26:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1610378948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.147 226109 DEBUG oslo_concurrency.processutils [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
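Freeing the deleted instance's allocation makes the resource tracker re-measure disk capacity, which on an RBD-backed deployment means shelling out to ceph df, exactly as logged (0.508s of the 0.679s compute_resources lock hold below). The same call through the oslo.concurrency helper nova uses:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])  # cluster capacity per 'ceph df'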
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.155 226109 DEBUG nova.compute.provider_tree [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.202 226109 DEBUG nova.scheduler.client.report [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.226 226109 DEBUG oslo_concurrency.lockutils [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:48 compute-1 ceph-mon[81689]: pgmap v2077: 305 pgs: 305 active+clean; 529 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 19 KiB/s wr, 205 op/s
Dec 06 07:26:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1610378948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.271 226109 INFO nova.scheduler.client.report [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Deleted allocations for instance 8ace964d-3645-4195-8f65-8b625bce1b00
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.343 226109 DEBUG oslo_concurrency.lockutils [None req-ff0f9b7e-5706-4fba-9293-7c632d05f464 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "8ace964d-3645-4195-8f65-8b625bce1b00" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e268 e268: 3 total, 3 up, 3 in
Dec 06 07:26:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:26:48.523 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
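This is the delayed chassis update announced at 07:26:42: the agent stamps its Chassis_Private row so OVN knows the metadata agent has processed nb_cfg 40. A sketch of the same write through ovsdbapp's southbound API (the socket path is an assumption, and the if_exists keyword requires an ovsdbapp version that supports it, which the logged command repr indicates this one does):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/ovn/ovnsb_db.sock', 'OVN_Southbound')
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    api.db_set(
        'Chassis_Private', '03fe054d-d727-4af3-9c5e-92e57505f242',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),
        if_exists=True).execute(check_error=True)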
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.633 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.954 226109 DEBUG nova.compute.manager [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Received event network-vif-deleted-eec9d8a5-6252-4c93-b74b-aecc3b28521f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.955 226109 DEBUG nova.compute.manager [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.955 226109 DEBUG oslo_concurrency.lockutils [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.955 226109 DEBUG oslo_concurrency.lockutils [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.955 226109 DEBUG oslo_concurrency.lockutils [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.956 226109 DEBUG nova.compute.manager [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] No waiting events found dispatching network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:48 compute-1 nova_compute[226101]: 2025-12-06 07:26:48.956 226109 WARNING nova.compute.manager [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received unexpected event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f for instance with vm_state resized and task_state None.
Dec 06 07:26:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:49.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:49.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:49 compute-1 ceph-mon[81689]: osdmap e268: 3 total, 3 up, 3 in
Dec 06 07:26:49 compute-1 nova_compute[226101]: 2025-12-06 07:26:49.547 226109 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:49 compute-1 nova_compute[226101]: 2025-12-06 07:26:49.549 226109 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:49 compute-1 nova_compute[226101]: 2025-12-06 07:26:49.550 226109 DEBUG nova.compute.manager [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Going to confirm migration 13 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Dec 06 07:26:50 compute-1 nova_compute[226101]: 2025-12-06 07:26:50.359 226109 DEBUG neutronclient.v2_0.client [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port fc5020f3-51a1-4ca2-b0b5-ff2add84607f for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
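The PortBindingNotFound reply is logged at DEBUG because it is benign: while confirming migration 13, nova asks neutron to drop a per-host binding for port fc5020f3-51a1-4ca2-b0b5-ff2add84607f that is already gone, and it simply rebuilds the network info cache and moves on. Inspecting which host still holds the binding with python-neutronclient (assumes 'sess' is an already-authenticated keystoneauth1 session, which is hypothetical here):

    from neutronclient.v2_0 import client

    neutron = client.Client(session=sess)
    port = neutron.show_port('fc5020f3-51a1-4ca2-b0b5-ff2add84607f')['port']
    print(port['binding:host_id'])  # the host that currently holds the binding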
Dec 06 07:26:50 compute-1 nova_compute[226101]: 2025-12-06 07:26:50.360 226109 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:26:50 compute-1 nova_compute[226101]: 2025-12-06 07:26:50.361 226109 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquired lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:26:50 compute-1 nova_compute[226101]: 2025-12-06 07:26:50.361 226109 DEBUG nova.network.neutron [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:26:50 compute-1 nova_compute[226101]: 2025-12-06 07:26:50.361 226109 DEBUG nova.objects.instance [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'info_cache' on Instance uuid f32ea15c-cf80-482c-9f9a-22392bc79e78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:51 compute-1 nova_compute[226101]: 2025-12-06 07:26:51.090 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:51 compute-1 ceph-mon[81689]: pgmap v2079: 305 pgs: 305 active+clean; 463 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 22 KiB/s wr, 267 op/s
Dec 06 07:26:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:51.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:51.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:52 compute-1 ceph-mon[81689]: pgmap v2080: 305 pgs: 305 active+clean; 417 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 31 KiB/s wr, 286 op/s
Dec 06 07:26:52 compute-1 nova_compute[226101]: 2025-12-06 07:26:52.802 226109 DEBUG nova.network.neutron [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating instance_info_cache with network_info: [{"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:26:52 compute-1 nova_compute[226101]: 2025-12-06 07:26:52.966 226109 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Releasing lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:26:52 compute-1 nova_compute[226101]: 2025-12-06 07:26:52.966 226109 DEBUG nova.objects.instance [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'migration_context' on Instance uuid f32ea15c-cf80-482c-9f9a-22392bc79e78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:53.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:53.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:53 compute-1 nova_compute[226101]: 2025-12-06 07:26:53.964 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:53 compute-1 nova_compute[226101]: 2025-12-06 07:26:53.967 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005998.6016157, 8ace964d-3645-4195-8f65-8b625bce1b00 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:26:53 compute-1 nova_compute[226101]: 2025-12-06 07:26:53.968 226109 INFO nova.compute.manager [-] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] VM Stopped (Lifecycle Event)
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:54.194034) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014194118, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2335, "num_deletes": 258, "total_data_size": 5595312, "memory_usage": 5677936, "flush_reason": "Manual Compaction"}
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Dec 06 07:26:54 compute-1 nova_compute[226101]: 2025-12-06 07:26:54.199 226109 DEBUG nova.storage.rbd_utils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] removing snapshot(nova-resize) on rbd image(f32ea15c-cf80-482c-9f9a-22392bc79e78_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:26:54 compute-1 nova_compute[226101]: 2025-12-06 07:26:54.254 226109 DEBUG nova.compute.manager [None req-25157498-b454-4720-abb9-a5937d27f99b - - - - - -] [instance: 8ace964d-3645-4195-8f65-8b625bce1b00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014684444, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 3668690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42318, "largest_seqno": 44648, "table_properties": {"data_size": 3659055, "index_size": 6065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20455, "raw_average_key_size": 20, "raw_value_size": 3639577, "raw_average_value_size": 3665, "num_data_blocks": 264, "num_entries": 993, "num_filter_entries": 993, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005803, "oldest_key_time": 1765005803, "file_creation_time": 1765006014, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 490427 microseconds, and 7922 cpu microseconds.
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:54.684486) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 3668690 bytes OK
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:54.684506) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:54.761517) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:54.761568) EVENT_LOG_v1 {"time_micros": 1765006014761557, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:54.761595) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 5584901, prev total WAL file size 5585654, number of live WAL files 2.
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:54.763358) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323632' seq:72057594037927935, type:22 .. '6C6F676D0031353133' seq:0, type:0; will stop at (end)
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(3582KB)], [81(9076KB)]
Dec 06 07:26:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014763407, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 12963027, "oldest_snapshot_seqno": -1}
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 7552 keys, 12814120 bytes, temperature: kUnknown
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006015297113, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 12814120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12762376, "index_size": 31792, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 195068, "raw_average_key_size": 25, "raw_value_size": 12626143, "raw_average_value_size": 1671, "num_data_blocks": 1263, "num_entries": 7552, "num_filter_entries": 7552, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765006014, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:26:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3505948962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:26:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:55.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:55.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:55.297446) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 12814120 bytes
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:55.552317) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 24.3 rd, 24.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.9 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(7.0) write-amplify(3.5) OK, records in: 8085, records dropped: 533 output_compression: NoCompression
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:55.552352) EVENT_LOG_v1 {"time_micros": 1765006015552339, "job": 50, "event": "compaction_finished", "compaction_time_micros": 533789, "compaction_time_cpu_micros": 29270, "output_level": 6, "num_output_files": 1, "total_output_size": 12814120, "num_input_records": 8085, "num_output_records": 7552, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006015553560, "job": 50, "event": "table_file_deletion", "file_number": 83}
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006015555604, "job": 50, "event": "table_file_deletion", "file_number": 81}
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:54.763065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:55.555662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:55.555666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:55.555667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:55.555669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:26:55.555670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:56 compute-1 nova_compute[226101]: 2025-12-06 07:26:56.099 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:56 compute-1 ceph-mon[81689]: pgmap v2081: 305 pgs: 305 active+clean; 409 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 29 KiB/s wr, 205 op/s
Dec 06 07:26:56 compute-1 sudo[265593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:56 compute-1 sudo[265593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:56 compute-1 sudo[265593]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:56 compute-1 sudo[265618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:26:56 compute-1 sudo[265618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:56 compute-1 sudo[265618]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:56 compute-1 sudo[265643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:56 compute-1 sudo[265643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:56 compute-1 sudo[265643]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:56 compute-1 sudo[265668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:26:56 compute-1 sudo[265668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:56 compute-1 sudo[265668]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e269 e269: 3 total, 3 up, 3 in
Dec 06 07:26:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:26:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:57.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:26:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:57.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:57 compute-1 ceph-mon[81689]: pgmap v2082: 305 pgs: 305 active+clean; 409 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 17 KiB/s wr, 162 op/s
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.411 226109 DEBUG nova.virt.libvirt.vif [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1353793859',display_name='tempest-DeleteServersTestJSON-server-1353793859',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1353793859',id=106,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:26:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-jutadpu4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:26:49Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=f32ea15c-cf80-482c-9f9a-22392bc79e78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.412 226109 DEBUG nova.network.os_vif_util [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.412 226109 DEBUG nova.network.os_vif_util [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.413 226109 DEBUG os_vif [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.415 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.415 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc5020f3-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.415 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.417 226109 INFO os_vif [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51')
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.418 226109 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.418 226109 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.554 226109 DEBUG oslo_concurrency.processutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:58 compute-1 ceph-mon[81689]: osdmap e269: 3 total, 3 up, 3 in
Dec 06 07:26:58 compute-1 ceph-mon[81689]: pgmap v2084: 305 pgs: 305 active+clean; 409 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 123 op/s
Dec 06 07:26:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:26:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:26:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:26:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:26:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:26:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:26:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:26:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:26:58 compute-1 nova_compute[226101]: 2025-12-06 07:26:58.969 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:26:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/392178062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:59 compute-1 nova_compute[226101]: 2025-12-06 07:26:59.063 226109 DEBUG oslo_concurrency.processutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:59 compute-1 nova_compute[226101]: 2025-12-06 07:26:59.073 226109 DEBUG nova.compute.provider_tree [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:26:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:59.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:26:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:59.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/392178062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:00 compute-1 nova_compute[226101]: 2025-12-06 07:27:00.228 226109 DEBUG nova.scheduler.client.report [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:27:00 compute-1 nova_compute[226101]: 2025-12-06 07:27:00.389 226109 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:00 compute-1 nova_compute[226101]: 2025-12-06 07:27:00.832 226109 INFO nova.scheduler.client.report [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Deleted allocation for migration 4e7568f7-0b1a-4d39-a986-35ec8e8330d4
Dec 06 07:27:01 compute-1 nova_compute[226101]: 2025-12-06 07:27:01.101 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:01 compute-1 ceph-mon[81689]: pgmap v2085: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 367 MiB data, 994 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 16 KiB/s wr, 90 op/s
Dec 06 07:27:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:01.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:27:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:01.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:27:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:01.644 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:01.645 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:01.645 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:01 compute-1 nova_compute[226101]: 2025-12-06 07:27:01.939 226109 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 12.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:02 compute-1 ceph-mon[81689]: pgmap v2086: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 302 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 171 KiB/s rd, 5.4 KiB/s wr, 80 op/s
Dec 06 07:27:02 compute-1 nova_compute[226101]: 2025-12-06 07:27:02.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:02 compute-1 nova_compute[226101]: 2025-12-06 07:27:02.669 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:02 compute-1 nova_compute[226101]: 2025-12-06 07:27:02.670 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:02 compute-1 nova_compute[226101]: 2025-12-06 07:27:02.670 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:02 compute-1 nova_compute[226101]: 2025-12-06 07:27:02.670 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:27:02 compute-1 nova_compute[226101]: 2025-12-06 07:27:02.670 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:02.932087) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006022932167, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 396, "num_deletes": 252, "total_data_size": 375262, "memory_usage": 383064, "flush_reason": "Manual Compaction"}
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006022954304, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 247460, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44653, "largest_seqno": 45044, "table_properties": {"data_size": 245065, "index_size": 495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6156, "raw_average_key_size": 19, "raw_value_size": 240144, "raw_average_value_size": 755, "num_data_blocks": 21, "num_entries": 318, "num_filter_entries": 318, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006014, "oldest_key_time": 1765006014, "file_creation_time": 1765006022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 22268 microseconds, and 1571 cpu microseconds.
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:02.954363) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 247460 bytes OK
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:02.954387) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:02.957173) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:02.957196) EVENT_LOG_v1 {"time_micros": 1765006022957189, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:02.957217) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 372658, prev total WAL file size 372658, number of live WAL files 2.
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:02.957828) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(241KB)], [84(12MB)]
Dec 06 07:27:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006022957891, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 13061580, "oldest_snapshot_seqno": -1}
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 7350 keys, 11110406 bytes, temperature: kUnknown
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006023081929, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 11110406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11061317, "index_size": 29585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 191584, "raw_average_key_size": 26, "raw_value_size": 10929863, "raw_average_value_size": 1487, "num_data_blocks": 1164, "num_entries": 7350, "num_filter_entries": 7350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765006022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:03.082184) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11110406 bytes
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:03.102175) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.2 rd, 89.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.2 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(97.7) write-amplify(44.9) OK, records in: 7870, records dropped: 520 output_compression: NoCompression
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:03.102221) EVENT_LOG_v1 {"time_micros": 1765006023102204, "job": 52, "event": "compaction_finished", "compaction_time_micros": 124117, "compaction_time_cpu_micros": 30505, "output_level": 6, "num_output_files": 1, "total_output_size": 11110406, "num_input_records": 7870, "num_output_records": 7350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006023102540, "job": 52, "event": "table_file_deletion", "file_number": 86}
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006023105286, "job": 52, "event": "table_file_deletion", "file_number": 84}
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:02.957783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:03.105334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:03.105339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:03.105341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:03.105344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:27:03.105346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:27:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/219169138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:03 compute-1 nova_compute[226101]: 2025-12-06 07:27:03.267 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:03.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:03.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:03 compute-1 nova_compute[226101]: 2025-12-06 07:27:03.520 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:27:03 compute-1 nova_compute[226101]: 2025-12-06 07:27:03.522 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4624MB free_disk=20.837993621826172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:27:03 compute-1 nova_compute[226101]: 2025-12-06 07:27:03.522 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:03 compute-1 nova_compute[226101]: 2025-12-06 07:27:03.522 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:03 compute-1 nova_compute[226101]: 2025-12-06 07:27:03.818 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:27:03 compute-1 nova_compute[226101]: 2025-12-06 07:27:03.819 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:27:03 compute-1 nova_compute[226101]: 2025-12-06 07:27:03.958 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:03 compute-1 nova_compute[226101]: 2025-12-06 07:27:03.985 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:27:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1290920532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:04 compute-1 nova_compute[226101]: 2025-12-06 07:27:04.668 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:04 compute-1 nova_compute[226101]: 2025-12-06 07:27:04.673 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:27:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/219169138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:05 compute-1 nova_compute[226101]: 2025-12-06 07:27:05.294 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:27:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:05.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:05.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:05 compute-1 nova_compute[226101]: 2025-12-06 07:27:05.532 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:27:05 compute-1 nova_compute[226101]: 2025-12-06 07:27:05.533 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:06 compute-1 ceph-mon[81689]: pgmap v2087: 305 pgs: 305 active+clean; 233 MiB data, 934 MiB used, 20 GiB / 21 GiB avail; 678 KiB/s rd, 19 KiB/s wr, 158 op/s
Dec 06 07:27:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1290920532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2829125190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:06 compute-1 nova_compute[226101]: 2025-12-06 07:27:06.103 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:06 compute-1 podman[265791]: 2025-12-06 07:27:06.138595066 +0000 UTC m=+0.113637923 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:27:07 compute-1 ceph-mon[81689]: pgmap v2088: 305 pgs: 305 active+clean; 200 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 712 KiB/s rd, 19 KiB/s wr, 163 op/s
Dec 06 07:27:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1443338164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:07.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:07.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
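
[note] The anonymous "HEAD / HTTP/1.0" 200 entries from 192.168.122.100/.102 recur roughly every two seconds, which has the shape of load-balancer liveness probes against the radosgw beast frontend rather than user traffic. A sketch that issues the same probe; the port (8080) is a guess, it does not appear in the log:

    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=5)
    conn.request('HEAD', '/')            # same anonymous HEAD / as above
    print(conn.getresponse().status)     # radosgw answers 200, empty body
    conn.close()
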
Dec 06 07:27:07 compute-1 nova_compute[226101]: 2025-12-06 07:27:07.534 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:07 compute-1 nova_compute[226101]: 2025-12-06 07:27:07.534 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:27:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:07 compute-1 nova_compute[226101]: 2025-12-06 07:27:07.674 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:07 compute-1 nova_compute[226101]: 2025-12-06 07:27:07.674 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:07 compute-1 nova_compute[226101]: 2025-12-06 07:27:07.887 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:27:07 compute-1 nova_compute[226101]: 2025-12-06 07:27:07.888 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:07 compute-1 nova_compute[226101]: 2025-12-06 07:27:07.888 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:07 compute-1 nova_compute[226101]: 2025-12-06 07:27:07.888 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:07 compute-1 nova_compute[226101]: 2025-12-06 07:27:07.889 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:07 compute-1 nova_compute[226101]: 2025-12-06 07:27:07.889 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:27:08 compute-1 nova_compute[226101]: 2025-12-06 07:27:08.121 226109 DEBUG nova.compute.manager [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:27:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2054989693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:08 compute-1 ceph-mon[81689]: pgmap v2089: 305 pgs: 305 active+clean; 200 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 668 KiB/s rd, 18 KiB/s wr, 153 op/s
Dec 06 07:27:08 compute-1 nova_compute[226101]: 2025-12-06 07:27:08.987 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:09 compute-1 podman[265815]: 2025-12-06 07:27:09.071545154 +0000 UTC m=+0.054163720 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd)
Dec 06 07:27:09 compute-1 podman[265816]: 2025-12-06 07:27:09.090674763 +0000 UTC m=+0.070117263 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:27:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 e270: 3 total, 3 up, 3 in
Dec 06 07:27:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:09.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:09.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:27:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/479479914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:27:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:27:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/479479914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
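
[note] The mon audit lines show client.openstack dispatching "df" and "osd pool get-quota" as structured mon commands, the JSON form of the equivalent ceph CLI calls. A sketch of issuing the same two commands through the python-rados binding; the conffile path, client id, and pool name are taken from the log, the rest is illustrative:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        # {"prefix":"df","format":"json"} -- cluster/pool usage
        _ret, out, _errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b'')
        df = json.loads(out)
        # {"prefix":"osd pool get-quota","pool":"volumes","format":"json"}
        _ret, out, _errs = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b'')
        quota = json.loads(out)
    finally:
        cluster.shutdown()
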
Dec 06 07:27:10 compute-1 sudo[265854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:10 compute-1 sudo[265854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:10 compute-1 sudo[265854]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:10 compute-1 sudo[265879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:27:10 compute-1 sudo[265879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:10 compute-1 sudo[265879]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:10 compute-1 nova_compute[226101]: 2025-12-06 07:27:10.938 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:11 compute-1 nova_compute[226101]: 2025-12-06 07:27:11.105 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:11.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/874349504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:11 compute-1 ceph-mon[81689]: osdmap e270: 3 total, 3 up, 3 in
Dec 06 07:27:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/479479914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:27:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/479479914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:27:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:27:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:27:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:11.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:12 compute-1 nova_compute[226101]: 2025-12-06 07:27:12.006 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:12 compute-1 nova_compute[226101]: 2025-12-06 07:27:12.007 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:12 compute-1 nova_compute[226101]: 2025-12-06 07:27:12.014 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:27:12 compute-1 nova_compute[226101]: 2025-12-06 07:27:12.015 226109 INFO nova.compute.claims [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:27:12 compute-1 ceph-mon[81689]: pgmap v2091: 305 pgs: 305 active+clean; 162 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 711 KiB/s rd, 19 KiB/s wr, 163 op/s
Dec 06 07:27:12 compute-1 ceph-mon[81689]: pgmap v2092: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 567 KiB/s rd, 17 KiB/s wr, 122 op/s
Dec 06 07:27:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:13.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:13.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1591755363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3772906005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:13 compute-1 nova_compute[226101]: 2025-12-06 07:27:13.990 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:14 compute-1 nova_compute[226101]: 2025-12-06 07:27:14.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:14 compute-1 nova_compute[226101]: 2025-12-06 07:27:14.979 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:15.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:15.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:15 compute-1 nova_compute[226101]: 2025-12-06 07:27:15.475 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:15 compute-1 nova_compute[226101]: 2025-12-06 07:27:15.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:16 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:16.067 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:27:16 compute-1 nova_compute[226101]: 2025-12-06 07:27:16.067 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:16 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:16.068 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:27:16 compute-1 nova_compute[226101]: 2025-12-06 07:27:16.107 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:27:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:17.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:27:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:18 compute-1 nova_compute[226101]: 2025-12-06 07:27:18.994 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:19.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:19.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:20 compute-1 ceph-mon[81689]: pgmap v2093: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.8 KiB/s wr, 38 op/s
Dec 06 07:27:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:27:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3762767992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.063 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
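
[note] Timing worth noticing: the "ceph df" child process launched at 07:27:15.475 took 5.587s, which accounts for most of the 9.157s the "compute_resources" lock is reported held below. The call itself is a plain processutils.execute with the arguments logged above; a sketch:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])   # cluster-wide raw capacity
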
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.071 226109 DEBUG nova.compute.provider_tree [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.110 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.123 226109 DEBUG nova.scheduler.client.report [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
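
[note] The inventory nova reports implies the effective capacity placement schedules against: a request fits while used + requested <= (total - reserved) * allocation_ratio. Worked out for the numbers above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
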
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.163 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 9.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.164 226109 DEBUG nova.compute.manager [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.228 226109 DEBUG nova.compute.manager [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.228 226109 DEBUG nova.network.neutron [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.258 226109 INFO nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:27:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:21.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3341888395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:21 compute-1 ceph-mon[81689]: pgmap v2094: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 32 op/s
Dec 06 07:27:21 compute-1 ceph-mon[81689]: pgmap v2095: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 32 op/s
Dec 06 07:27:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/679720062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:21 compute-1 ceph-mon[81689]: pgmap v2096: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 464 B/s wr, 11 op/s
Dec 06 07:27:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3762767992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.442 226109 DEBUG nova.compute.manager [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:27:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:21.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.541 226109 DEBUG nova.policy [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd67c136e82ad4001b000848d75eef50d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88f5b34244614321a9b6e902eaba0ece', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.727 226109 DEBUG nova.compute.manager [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.729 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.729 226109 INFO nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Creating image(s)
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.767 226109 DEBUG nova.storage.rbd_utils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.806 226109 DEBUG nova.storage.rbd_utils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.856 226109 DEBUG nova.storage.rbd_utils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.862 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.936 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
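
[note] The "-m oslo_concurrency.prlimit --as=1073741824 --cpu=30" prefix above is oslo.concurrency re-executing itself to apply rlimits before running qemu-img, so a pathological image cannot exhaust the host during the probe. The same call expressed directly, with the base-image path from the log:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(
            address_space=1073741824,   # --as=1073741824 (1 GiB)
            cpu_time=30))               # --cpu=30 seconds
    info = json.loads(out)
    print(info['format'], info['virtual-size'])
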
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.938 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.938 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.938 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.985 226109 DEBUG nova.storage.rbd_utils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:21 compute-1 nova_compute[226101]: 2025-12-06 07:27:21.995 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:22 compute-1 nova_compute[226101]: 2025-12-06 07:27:22.302 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:22 compute-1 nova_compute[226101]: 2025-12-06 07:27:22.542 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:22 compute-1 nova_compute[226101]: 2025-12-06 07:27:22.692 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:22 compute-1 ceph-mon[81689]: pgmap v2097: 305 pgs: 305 active+clean; 149 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Dec 06 07:27:22 compute-1 nova_compute[226101]: 2025-12-06 07:27:22.853 226109 DEBUG nova.storage.rbd_utils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] resizing rbd image 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
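
[note] The root disk is imported into the "vms" pool straight from the cached base image, then grown to 1073741824 bytes, the 1 GiB root disk of the m1.nano flavor (root_gb=1) shown further below. Nova performs the resize through the librbd binding (rbd_utils.py:288); a sketch of the CLI equivalent of both steps, reusing the exact import arguments logged above:

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef'
    disk = '018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk'

    processutils.execute(
        'rbd', 'import', '--pool', 'vms', base, disk,
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    processutils.execute(
        'rbd', 'resize', '--pool', 'vms', '--image', disk,
        '--size', '1024',   # rbd sizes default to MiB: 1024 MiB = 1073741824 B
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
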
Dec 06 07:27:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:23 compute-1 nova_compute[226101]: 2025-12-06 07:27:23.179 226109 DEBUG nova.network.neutron [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Successfully created port: cf525aa5-a11a-4218-98dd-328546551976 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:27:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:27:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:23.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:27:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:23.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:23 compute-1 nova_compute[226101]: 2025-12-06 07:27:23.998 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:24.069 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:24 compute-1 nova_compute[226101]: 2025-12-06 07:27:24.561 226109 DEBUG nova.network.neutron [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Successfully updated port: cf525aa5-a11a-4218-98dd-328546551976 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:27:24 compute-1 nova_compute[226101]: 2025-12-06 07:27:24.606 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:27:24 compute-1 nova_compute[226101]: 2025-12-06 07:27:24.607 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquired lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:27:24 compute-1 nova_compute[226101]: 2025-12-06 07:27:24.607 226109 DEBUG nova.network.neutron [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:27:24 compute-1 nova_compute[226101]: 2025-12-06 07:27:24.799 226109 DEBUG nova.compute.manager [req-67bb5153-852b-4de3-ab4b-f8d4a27382e5 req-c9015ee3-62ae-42a7-9564-65d53459ed70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-changed-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:24 compute-1 nova_compute[226101]: 2025-12-06 07:27:24.799 226109 DEBUG nova.compute.manager [req-67bb5153-852b-4de3-ab4b-f8d4a27382e5 req-c9015ee3-62ae-42a7-9564-65d53459ed70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Refreshing instance network info cache due to event network-changed-cf525aa5-a11a-4218-98dd-328546551976. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:27:24 compute-1 nova_compute[226101]: 2025-12-06 07:27:24.800 226109 DEBUG oslo_concurrency.lockutils [req-67bb5153-852b-4de3-ab4b-f8d4a27382e5 req-c9015ee3-62ae-42a7-9564-65d53459ed70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:27:25 compute-1 nova_compute[226101]: 2025-12-06 07:27:25.206 226109 DEBUG nova.network.neutron [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:27:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:27:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:25.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:27:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:25.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:25 compute-1 ceph-mon[81689]: pgmap v2098: 305 pgs: 305 active+clean; 204 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.150 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.363 226109 DEBUG nova.objects.instance [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'migration_context' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.433 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.434 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Ensure instance console log exists: /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.434 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.435 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.435 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.898 226109 DEBUG nova.network.neutron [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating instance_info_cache with network_info: [{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:27:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/803218215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2939792088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:26 compute-1 ceph-mon[81689]: pgmap v2099: 305 pgs: 305 active+clean; 213 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 53 op/s
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.933 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Releasing lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.934 226109 DEBUG nova.compute.manager [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance network_info: |[{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.934 226109 DEBUG oslo_concurrency.lockutils [req-67bb5153-852b-4de3-ab4b-f8d4a27382e5 req-c9015ee3-62ae-42a7-9564-65d53459ed70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.935 226109 DEBUG nova.network.neutron [req-67bb5153-852b-4de3-ab4b-f8d4a27382e5 req-c9015ee3-62ae-42a7-9564-65d53459ed70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Refreshing network info cache for port cf525aa5-a11a-4218-98dd-328546551976 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.938 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Start _get_guest_xml network_info=[{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.943 226109 WARNING nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.949 226109 DEBUG nova.virt.libvirt.host [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.950 226109 DEBUG nova.virt.libvirt.host [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.954 226109 DEBUG nova.virt.libvirt.host [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.954 226109 DEBUG nova.virt.libvirt.host [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.955 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.956 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.956 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.956 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.957 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.957 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.957 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.957 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.958 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.958 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.958 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.958 226109 DEBUG nova.virt.hardware [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:27:26 compute-1 nova_compute[226101]: 2025-12-06 07:27:26.961 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:27.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:27:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1297532924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:27 compute-1 nova_compute[226101]: 2025-12-06 07:27:27.449 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:27:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:27.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:27:27 compute-1 nova_compute[226101]: 2025-12-06 07:27:27.480 226109 DEBUG nova.storage.rbd_utils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:27 compute-1 nova_compute[226101]: 2025-12-06 07:27:27.486 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:27:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2529783913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:27 compute-1 nova_compute[226101]: 2025-12-06 07:27:27.971 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:27 compute-1 nova_compute[226101]: 2025-12-06 07:27:27.972 226109 DEBUG nova.virt.libvirt.vif [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1401757895',display_name='tempest-ServerDiskConfigTestJSON-server-1401757895',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1401757895',id=110,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-iyd6zt70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:27:21Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=018b5c4d-2e0a-428b-ac8b-a74236e0d103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:27:27 compute-1 nova_compute[226101]: 2025-12-06 07:27:27.973 226109 DEBUG nova.network.os_vif_util [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:27:27 compute-1 nova_compute[226101]: 2025-12-06 07:27:27.974 226109 DEBUG nova.network.os_vif_util [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:27:27 compute-1 nova_compute[226101]: 2025-12-06 07:27:27.975 226109 DEBUG nova.objects.instance [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_devices' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.051 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:27:28 compute-1 nova_compute[226101]:   <uuid>018b5c4d-2e0a-428b-ac8b-a74236e0d103</uuid>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   <name>instance-0000006e</name>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1401757895</nova:name>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:27:26</nova:creationTime>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <nova:user uuid="d67c136e82ad4001b000848d75eef50d">tempest-ServerDiskConfigTestJSON-749654875-project-member</nova:user>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <nova:project uuid="88f5b34244614321a9b6e902eaba0ece">tempest-ServerDiskConfigTestJSON-749654875</nova:project>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <nova:port uuid="cf525aa5-a11a-4218-98dd-328546551976">
Dec 06 07:27:28 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <system>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <entry name="serial">018b5c4d-2e0a-428b-ac8b-a74236e0d103</entry>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <entry name="uuid">018b5c4d-2e0a-428b-ac8b-a74236e0d103</entry>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     </system>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   <os>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   </os>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   <features>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   </features>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk">
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       </source>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk.config">
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       </source>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:27:28 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:01:c9:f4"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <target dev="tapcf525aa5-a1"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/console.log" append="off"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <video>
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     </video>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:27:28 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:27:28 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:27:28 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:27:28 compute-1 nova_compute[226101]: </domain>
Dec 06 07:27:28 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.052 226109 DEBUG nova.compute.manager [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Preparing to wait for external event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.052 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.053 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.053 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.053 226109 DEBUG nova.virt.libvirt.vif [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1401757895',display_name='tempest-ServerDiskConfigTestJSON-server-1401757895',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1401757895',id=110,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-iyd6zt70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:27:21Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=018b5c4d-2e0a-428b-ac8b-a74236e0d103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.054 226109 DEBUG nova.network.os_vif_util [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.054 226109 DEBUG nova.network.os_vif_util [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.055 226109 DEBUG os_vif [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.055 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.056 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.056 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.058 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.059 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf525aa5-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.059 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf525aa5-a1, col_values=(('external_ids', {'iface-id': 'cf525aa5-a11a-4218-98dd-328546551976', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:c9:f4', 'vm-uuid': '018b5c4d-2e0a-428b-ac8b-a74236e0d103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.060 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:28 compute-1 NetworkManager[49031]: <info>  [1765006048.0616] manager: (tapcf525aa5-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.065 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:27:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.068 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.069 226109 INFO os_vif [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1')
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.356 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.358 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.358 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No VIF found with MAC fa:16:3e:01:c9:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.359 226109 INFO nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Using config drive
Dec 06 07:27:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1297532924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2529783913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:28 compute-1 nova_compute[226101]: 2025-12-06 07:27:28.603 226109 DEBUG nova.storage.rbd_utils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:29 compute-1 nova_compute[226101]: 2025-12-06 07:27:29.274 226109 INFO nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Creating config drive at /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/disk.config
Dec 06 07:27:29 compute-1 nova_compute[226101]: 2025-12-06 07:27:29.280 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpchwxsicu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:29.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:29 compute-1 nova_compute[226101]: 2025-12-06 07:27:29.413 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpchwxsicu" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:29 compute-1 nova_compute[226101]: 2025-12-06 07:27:29.439 226109 DEBUG nova.storage.rbd_utils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:29 compute-1 nova_compute[226101]: 2025-12-06 07:27:29.443 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/disk.config 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:29.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:30 compute-1 ceph-mon[81689]: pgmap v2100: 305 pgs: 305 active+clean; 213 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 53 op/s
Dec 06 07:27:30 compute-1 nova_compute[226101]: 2025-12-06 07:27:30.453 226109 DEBUG nova.network.neutron [req-67bb5153-852b-4de3-ab4b-f8d4a27382e5 req-c9015ee3-62ae-42a7-9564-65d53459ed70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updated VIF entry in instance network info cache for port cf525aa5-a11a-4218-98dd-328546551976. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:27:30 compute-1 nova_compute[226101]: 2025-12-06 07:27:30.454 226109 DEBUG nova.network.neutron [req-67bb5153-852b-4de3-ab4b-f8d4a27382e5 req-c9015ee3-62ae-42a7-9564-65d53459ed70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating instance_info_cache with network_info: [{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:27:30 compute-1 nova_compute[226101]: 2025-12-06 07:27:30.573 226109 DEBUG oslo_concurrency.lockutils [req-67bb5153-852b-4de3-ab4b-f8d4a27382e5 req-c9015ee3-62ae-42a7-9564-65d53459ed70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:27:30 compute-1 nova_compute[226101]: 2025-12-06 07:27:30.991 226109 DEBUG oslo_concurrency.processutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/disk.config 018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:30 compute-1 nova_compute[226101]: 2025-12-06 07:27:30.992 226109 INFO nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Deleting local config drive /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/disk.config because it was imported into RBD.
Dec 06 07:27:31 compute-1 kernel: tapcf525aa5-a1: entered promiscuous mode
Dec 06 07:27:31 compute-1 NetworkManager[49031]: <info>  [1765006051.0647] manager: (tapcf525aa5-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/205)
Dec 06 07:27:31 compute-1 systemd-udevd[266226]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:27:31 compute-1 ovn_controller[130279]: 2025-12-06T07:27:31Z|00451|binding|INFO|Claiming lport cf525aa5-a11a-4218-98dd-328546551976 for this chassis.
Dec 06 07:27:31 compute-1 ovn_controller[130279]: 2025-12-06T07:27:31Z|00452|binding|INFO|cf525aa5-a11a-4218-98dd-328546551976: Claiming fa:16:3e:01:c9:f4 10.100.0.11
Dec 06 07:27:31 compute-1 nova_compute[226101]: 2025-12-06 07:27:31.105 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:31 compute-1 nova_compute[226101]: 2025-12-06 07:27:31.115 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:31 compute-1 NetworkManager[49031]: <info>  [1765006051.1220] device (tapcf525aa5-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:27:31 compute-1 NetworkManager[49031]: <info>  [1765006051.1228] device (tapcf525aa5-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:27:31 compute-1 systemd-machined[190302]: New machine qemu-51-instance-0000006e.
Dec 06 07:27:31 compute-1 systemd[1]: Started Virtual Machine qemu-51-instance-0000006e.
Dec 06 07:27:31 compute-1 nova_compute[226101]: 2025-12-06 07:27:31.211 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:31 compute-1 ovn_controller[130279]: 2025-12-06T07:27:31Z|00453|binding|INFO|Setting lport cf525aa5-a11a-4218-98dd-328546551976 ovn-installed in OVS
Dec 06 07:27:31 compute-1 nova_compute[226101]: 2025-12-06 07:27:31.215 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:31 compute-1 ovn_controller[130279]: 2025-12-06T07:27:31Z|00454|binding|INFO|Setting lport cf525aa5-a11a-4218-98dd-328546551976 up in Southbound
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.244 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:c9:f4 10.100.0.11'], port_security=['fa:16:3e:01:c9:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '018b5c4d-2e0a-428b-ac8b-a74236e0d103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '2', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=cf525aa5-a11a-4218-98dd-328546551976) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.246 139580 INFO neutron.agent.ovn.metadata.agent [-] Port cf525aa5-a11a-4218-98dd-328546551976 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a bound to our chassis
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.247 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.257 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e55891-02a0-4057-82cf-2a6951877e4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.258 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c014e4e-a1 in ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.260 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c014e4e-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.260 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[97d4fa4b-ee37-493f-99e9-8e935b2d5e10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.261 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dfeaabb7-7cfe-43a0-b90b-26ff28a7580e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.271 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc44810-b393-4658-9e74-32005f1be188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.295 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e73924-40c3-4f1c-b902-08c6a5f9bbdc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.322 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[760bc2ff-7fac-40fd-a318-f8564d87845d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.329 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ea9eec-ca94-4c65-a221-2fd8c7c385be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 NetworkManager[49031]: <info>  [1765006051.3305] manager: (tap7c014e4e-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/206)
Dec 06 07:27:31 compute-1 systemd-udevd[266231]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:27:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:31.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.365 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[771e1592-f598-40f3-932c-82d58b173798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.370 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d91e4cd6-67ec-4528-b997-91e9e1eda03b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 NetworkManager[49031]: <info>  [1765006051.3970] device (tap7c014e4e-a0): carrier: link connected
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.403 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[aca270e9-c727-401d-8e9f-2ef7fd8384ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.422 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ee79d375-72de-45d0-852c-64e05e2d15ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633090, 'reachable_time': 32265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266264, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.440 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[74e84764-d1d5-43dc-b190-7f9f4bfaaa69]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:141c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633090, 'tstamp': 633090}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266265, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.458 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e121c5f1-025f-44f6-a7e0-7142742d8e2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633090, 'reachable_time': 32265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266266, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
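[editor's note] The RTM_NEWLINK replies above (and the RTM_NEWADDR reply between them) are pyroute2-style netlink dumps the privsep daemon took inside the ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace (see the 'target' field in each message header). A minimal sketch of the same dump, assuming pyroute2 is installed and that namespace still exists; the attribute names match the ones in the replies:

    # Sketch only: reproduce the link/address dumps logged above with
    # pyroute2 (assumptions: pyroute2 available, namespace still present).
    from pyroute2 import NetNS

    NS = 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a'

    with NetNS(NS) as ns:
        for link in ns.get_links():                  # RTM_NEWLINK messages
            print(link.get_attr('IFLA_IFNAME'),      # e.g. 'tap7c014e4e-a1'
                  link.get_attr('IFLA_ADDRESS'),     # e.g. 'fa:16:3e:08:14:1c'
                  link.get_attr('IFLA_OPERSTATE'))   # 'UP' in the replies
        for addr in ns.get_addr(index=2):            # RTM_NEWADDR messages
            print(addr.get_attr('IFA_ADDRESS'))      # e.g. the fe80:: address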
Dec 06 07:27:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:31.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.486 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6e705b7b-59d2-4182-bf1a-ea97150d31f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.540 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ccae86a7-c824-4999-b878-ca1a3ef02f63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.541 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.541 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.542 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c014e4e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:31 compute-1 nova_compute[226101]: 2025-12-06 07:27:31.543 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:31 compute-1 kernel: tap7c014e4e-a0: entered promiscuous mode
Dec 06 07:27:31 compute-1 NetworkManager[49031]: <info>  [1765006051.5445] manager: (tap7c014e4e-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Dec 06 07:27:31 compute-1 nova_compute[226101]: 2025-12-06 07:27:31.545 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.546 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c014e4e-a0, col_values=(('external_ids', {'iface-id': 'd8dd1a7d-045a-42a3-8829-567c43985ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
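[editor's note] The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) move the tap device from br-ex to br-int and tag its Interface row with the Neutron port ID. A hedged sketch of the same sequence through ovsdbapp's Open_vSwitch API; the socket path is an assumption, not taken from this log:

    # Sketch only: the del_port/add_port/db_set transaction logged above,
    # via ovsdbapp (assumed ovsdb-server endpoint; adjust per deployment).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap7c014e4e-a0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap7c014e4e-a0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap7c014e4e-a0',
            ('external_ids', {'iface-id': 'd8dd1a7d-045a-42a3-8829-567c43985ae0'})))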
Dec 06 07:27:31 compute-1 nova_compute[226101]: 2025-12-06 07:27:31.547 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:31 compute-1 ovn_controller[130279]: 2025-12-06T07:27:31Z|00455|binding|INFO|Releasing lport d8dd1a7d-045a-42a3-8829-567c43985ae0 from this chassis (sb_readonly=0)
Dec 06 07:27:31 compute-1 nova_compute[226101]: 2025-12-06 07:27:31.548 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.548 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.549 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2874ad02-9874-41c4-9748-fd0cbcb6507b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.550 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:27:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:31.551 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'env', 'PROCESS_TAG=haproxy-7c014e4e-a182-4f60-8285-20525bc99e5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c014e4e-a182-4f60-8285-20525bc99e5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
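[editor's note] The agent writes the haproxy_cfg dumped above to /var/lib/neutron/ovn-metadata-proxy/7c014e4e-a182-4f60-8285-20525bc99e5a.conf, then starts haproxy inside the ovnmeta namespace through rootwrap, as the command line shows. A short sketch that syntax-checks such a file before launch, assuming the haproxy binary is on PATH ('-c -f' is haproxy's standard check-and-exit mode):

    # Sketch only: validate a generated metadata-proxy haproxy config.
    import subprocess

    CONF = ('/var/lib/neutron/ovn-metadata-proxy/'
            '7c014e4e-a182-4f60-8285-20525bc99e5a.conf')

    check = subprocess.run(['haproxy', '-c', '-f', CONF],
                           capture_output=True, text=True)
    if check.returncode != 0:
        raise RuntimeError('invalid haproxy config: ' + check.stderr)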
Dec 06 07:27:31 compute-1 nova_compute[226101]: 2025-12-06 07:27:31.562 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:31 compute-1 podman[266312]: 2025-12-06 07:27:31.875334128 +0000 UTC m=+0.033715866 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:27:32 compute-1 ceph-mon[81689]: pgmap v2101: 305 pgs: 305 active+clean; 213 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Dec 06 07:27:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:27:32 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/326643290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:27:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:27:32 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/326643290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
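[editor's note] The two mon commands dispatched by client.openstack above ("df" and "osd pool get-quota" on the volumes pool) are the usual capacity poll against the Ceph cluster. A sketch of the same calls through python-rados, assuming a readable ceph.conf and client keyring on the host running it:

    # Sketch only: issue the two mon commands logged above via python-rados.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            ret, out, err = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd['prefix'], '->', ret, out[:80])
    finally:
        cluster.shutdown()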
Dec 06 07:27:32 compute-1 nova_compute[226101]: 2025-12-06 07:27:32.885 226109 DEBUG nova.compute.manager [req-d60390e0-7764-480d-8313-31d4298f88ee req-c0ec0c1b-1cfe-4c80-a740-d59fbccd928c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:32 compute-1 nova_compute[226101]: 2025-12-06 07:27:32.886 226109 DEBUG oslo_concurrency.lockutils [req-d60390e0-7764-480d-8313-31d4298f88ee req-c0ec0c1b-1cfe-4c80-a740-d59fbccd928c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:32 compute-1 nova_compute[226101]: 2025-12-06 07:27:32.886 226109 DEBUG oslo_concurrency.lockutils [req-d60390e0-7764-480d-8313-31d4298f88ee req-c0ec0c1b-1cfe-4c80-a740-d59fbccd928c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:32 compute-1 nova_compute[226101]: 2025-12-06 07:27:32.886 226109 DEBUG oslo_concurrency.lockutils [req-d60390e0-7764-480d-8313-31d4298f88ee req-c0ec0c1b-1cfe-4c80-a740-d59fbccd928c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:32 compute-1 nova_compute[226101]: 2025-12-06 07:27:32.887 226109 DEBUG nova.compute.manager [req-d60390e0-7764-480d-8313-31d4298f88ee req-c0ec0c1b-1cfe-4c80-a740-d59fbccd928c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Processing event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:27:32 compute-1 nova_compute[226101]: 2025-12-06 07:27:32.887 226109 DEBUG nova.compute.manager [req-d60390e0-7764-480d-8313-31d4298f88ee req-c0ec0c1b-1cfe-4c80-a740-d59fbccd928c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:32 compute-1 nova_compute[226101]: 2025-12-06 07:27:32.887 226109 DEBUG oslo_concurrency.lockutils [req-d60390e0-7764-480d-8313-31d4298f88ee req-c0ec0c1b-1cfe-4c80-a740-d59fbccd928c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:32 compute-1 nova_compute[226101]: 2025-12-06 07:27:32.888 226109 DEBUG oslo_concurrency.lockutils [req-d60390e0-7764-480d-8313-31d4298f88ee req-c0ec0c1b-1cfe-4c80-a740-d59fbccd928c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:32 compute-1 nova_compute[226101]: 2025-12-06 07:27:32.888 226109 DEBUG oslo_concurrency.lockutils [req-d60390e0-7764-480d-8313-31d4298f88ee req-c0ec0c1b-1cfe-4c80-a740-d59fbccd928c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:32 compute-1 nova_compute[226101]: 2025-12-06 07:27:32.888 226109 DEBUG nova.compute.manager [req-d60390e0-7764-480d-8313-31d4298f88ee req-c0ec0c1b-1cfe-4c80-a740-d59fbccd928c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] No waiting events found dispatching network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:27:32 compute-1 nova_compute[226101]: 2025-12-06 07:27:32.888 226109 WARNING nova.compute.manager [req-d60390e0-7764-480d-8313-31d4298f88ee req-c0ec0c1b-1cfe-4c80-a740-d59fbccd928c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received unexpected event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 for instance with vm_state building and task_state spawning.
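[editor's note] The network-vif-plugged event handled above (and flagged as unexpected because the instance is still spawning) is delivered by Neutron to Nova's os-server-external-events API. The instance UUID and port ID below are the ones from this log; the endpoint and token are placeholders:

    # Sketch only: the kind of notification that produced the event above.
    import requests

    NOVA_URL = 'http://nova-api.example:8774/v2.1'   # placeholder endpoint
    TOKEN = 'REPLACE_WITH_KEYSTONE_TOKEN'            # placeholder token

    body = {'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': '018b5c4d-2e0a-428b-ac8b-a74236e0d103',
        'tag': 'cf525aa5-a11a-4218-98dd-328546551976',  # port id from the log
    }]}
    resp = requests.post(NOVA_URL + '/os-server-external-events',
                         json=body, headers={'X-Auth-Token': TOKEN})
    resp.raise_for_status()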
Dec 06 07:27:33 compute-1 nova_compute[226101]: 2025-12-06 07:27:33.062 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:33.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:33.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:33 compute-1 ceph-mon[81689]: pgmap v2102: 305 pgs: 305 active+clean; 213 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Dec 06 07:27:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/326643290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:27:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/326643290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:27:33 compute-1 podman[266312]: 2025-12-06 07:27:33.943138088 +0000 UTC m=+2.101519775 container create 271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 07:27:34 compute-1 systemd[1]: Started libpod-conmon-271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387.scope.
Dec 06 07:27:34 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:27:34 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff0082f16d0d5fa34a15678f6e75df167dfa10f500a44f1098fe1588d54fa56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.348 226109 DEBUG nova.compute.manager [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.349 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006054.347932, 018b5c4d-2e0a-428b-ac8b-a74236e0d103 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.349 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] VM Started (Lifecycle Event)
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.353 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.356 226109 INFO nova.virt.libvirt.driver [-] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance spawned successfully.
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.357 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.388 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.392 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.405 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.405 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.406 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.406 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.406 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.407 226109 DEBUG nova.virt.libvirt.driver [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.441 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.441 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006054.3490775, 018b5c4d-2e0a-428b-ac8b-a74236e0d103 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.441 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] VM Paused (Lifecycle Event)
Dec 06 07:27:34 compute-1 podman[266312]: 2025-12-06 07:27:34.471996375 +0000 UTC m=+2.630378072 container init 271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:27:34 compute-1 podman[266312]: 2025-12-06 07:27:34.478030638 +0000 UTC m=+2.636412305 container start 271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.492 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.497 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006054.352836, 018b5c4d-2e0a-428b-ac8b-a74236e0d103 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.497 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] VM Resumed (Lifecycle Event)
Dec 06 07:27:34 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[266349]: [NOTICE]   (266360) : New worker (266362) forked
Dec 06 07:27:34 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[266349]: [NOTICE]   (266360) : Loading success.
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.516 226109 INFO nova.compute.manager [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Took 12.79 seconds to spawn the instance on the hypervisor.
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.516 226109 DEBUG nova.compute.manager [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.523 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.526 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.554 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.603 226109 INFO nova.compute.manager [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Took 22.66 seconds to build instance.
Dec 06 07:27:34 compute-1 nova_compute[226101]: 2025-12-06 07:27:34.626 226109 DEBUG oslo_concurrency.lockutils [None req-10f099bf-3cee-4ab1-b1b7-f283cdd1b3da d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 26.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:35 compute-1 ceph-mon[81689]: pgmap v2103: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 133 op/s
Dec 06 07:27:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:27:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:35.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:27:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:35.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:36 compute-1 nova_compute[226101]: 2025-12-06 07:27:36.212 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:37 compute-1 podman[266371]: 2025-12-06 07:27:37.10218934 +0000 UTC m=+0.084374790 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:27:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:37.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:27:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:37.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:27:37 compute-1 ceph-mon[81689]: pgmap v2104: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 240 KiB/s wr, 110 op/s
Dec 06 07:27:38 compute-1 nova_compute[226101]: 2025-12-06 07:27:38.064 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:38 compute-1 ceph-mon[81689]: pgmap v2105: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 25 KiB/s wr, 99 op/s
Dec 06 07:27:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:39.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:39.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:40 compute-1 podman[266399]: 2025-12-06 07:27:40.073088617 +0000 UTC m=+0.050586823 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 06 07:27:40 compute-1 podman[266398]: 2025-12-06 07:27:40.104444778 +0000 UTC m=+0.083090915 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
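[editor's note] The health_status=healthy entries above come from podman's periodic healthcheck runs (the 'test': '/openstack/healthcheck' script mounted into each container). The same check can be triggered on demand; a sketch assuming the podman CLI is available to the invoking user and the container names from this log:

    # Sketch only: run the configured healthcheck for the containers above.
    import subprocess

    for name in ('ovn_controller', 'ovn_metadata_agent', 'multipathd'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        # podman healthcheck run exits 0 when the check command succeeds.
        print(name, 'healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)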
Dec 06 07:27:41 compute-1 nova_compute[226101]: 2025-12-06 07:27:41.215 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:41.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:41.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:42 compute-1 ceph-mon[81689]: pgmap v2106: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 25 KiB/s wr, 155 op/s
Dec 06 07:27:42 compute-1 nova_compute[226101]: 2025-12-06 07:27:42.677 226109 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:27:42 compute-1 nova_compute[226101]: 2025-12-06 07:27:42.678 226109 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquired lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:27:42 compute-1 nova_compute[226101]: 2025-12-06 07:27:42.678 226109 DEBUG nova.network.neutron [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:27:43 compute-1 nova_compute[226101]: 2025-12-06 07:27:43.067 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:43.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:43.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:45 compute-1 nova_compute[226101]: 2025-12-06 07:27:45.380 226109 DEBUG nova.network.neutron [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating instance_info_cache with network_info: [{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
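[editor's note] The instance_info_cache entry above is plain JSON. A small sketch pulling the port ID, fixed IP, CIDR, and MTU out of an entry shaped like it (the literal below is abridged from the logged one):

    # Sketch only: extract addressing details from a network_info entry.
    import json

    network_info = json.loads('''[{"id": "cf525aa5-a11a-4218-98dd-328546551976",
      "address": "fa:16:3e:01:c9:f4",
      "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a",
        "subnets": [{"cidr": "10.100.0.0/28",
          "ips": [{"address": "10.100.0.11", "type": "fixed"}]}],
        "meta": {"mtu": 1442}}}]''')

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'], subnet['cidr'],
                      vif['network']['meta']['mtu'])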
Dec 06 07:27:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:45.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:45 compute-1 ceph-mon[81689]: pgmap v2107: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 145 op/s
Dec 06 07:27:45 compute-1 nova_compute[226101]: 2025-12-06 07:27:45.408 226109 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Releasing lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:27:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:45.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:45 compute-1 nova_compute[226101]: 2025-12-06 07:27:45.608 226109 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 07:27:45 compute-1 nova_compute[226101]: 2025-12-06 07:27:45.609 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Creating file /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/093f6c0e1c854270a4809c7830260f48.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Dec 06 07:27:45 compute-1 nova_compute[226101]: 2025-12-06 07:27:45.609 226109 DEBUG oslo_concurrency.processutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/093f6c0e1c854270a4809c7830260f48.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:46 compute-1 nova_compute[226101]: 2025-12-06 07:27:46.033 226109 DEBUG oslo_concurrency.processutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/093f6c0e1c854270a4809c7830260f48.tmp" returned: 1 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:46 compute-1 nova_compute[226101]: 2025-12-06 07:27:46.034 226109 DEBUG oslo_concurrency.processutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103/093f6c0e1c854270a4809c7830260f48.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 07:27:46 compute-1 nova_compute[226101]: 2025-12-06 07:27:46.034 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Creating directory /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Dec 06 07:27:46 compute-1 nova_compute[226101]: 2025-12-06 07:27:46.034 226109 DEBUG oslo_concurrency.processutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:46 compute-1 nova_compute[226101]: 2025-12-06 07:27:46.217 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:46 compute-1 nova_compute[226101]: 2025-12-06 07:27:46.255 226109 DEBUG oslo_concurrency.processutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
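[editor's note] migrate_disk_and_power_off first probes the instance directory on the target host with a BatchMode ssh 'touch' (which returns 1 above because the directory does not exist yet), then creates it with 'mkdir -p'. A sketch of that probe-then-create pattern, assuming key-based SSH as BatchMode=yes implies; the probe filename is illustrative, not the one from the log:

    # Sketch only: nova remotefs-style probe-then-create over ssh.
    import subprocess

    HOST = '192.168.122.102'
    PATH = '/var/lib/nova/instances/018b5c4d-2e0a-428b-ac8b-a74236e0d103'

    def ssh(*cmd):
        return subprocess.run(
            ['ssh', '-o', 'BatchMode=yes', HOST, *cmd]).returncode

    if ssh('touch', PATH + '/probe.tmp') != 0:  # probe failed: dir missing
        ssh('mkdir', '-p', PATH)                # create it, as logged above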
Dec 06 07:27:46 compute-1 nova_compute[226101]: 2025-12-06 07:27:46.261 226109 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:27:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:47.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:47.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:48 compute-1 nova_compute[226101]: 2025-12-06 07:27:48.069 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3802733494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:49 compute-1 ceph-mon[81689]: pgmap v2108: 305 pgs: 305 active+clean; 220 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 630 KiB/s wr, 105 op/s
Dec 06 07:27:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:49.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:51 compute-1 nova_compute[226101]: 2025-12-06 07:27:51.220 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:27:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:51.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:27:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:51.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:53 compute-1 nova_compute[226101]: 2025-12-06 07:27:53.072 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:53.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:53.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:55.015 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:27:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:27:55.016 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:27:55 compute-1 nova_compute[226101]: 2025-12-06 07:27:55.016 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.035268784s, txc = 0x55b552795500
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.034807682s, txc = 0x55b552c94f00
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.033878326s, txc = 0x55b552de0600
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.033662796s, txc = 0x55b5522cef00
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.033124447s, txc = 0x55b553366900
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.031509876s, txc = 0x55b552b93800
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.031417370s, txc = 0x55b552b93b00
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.031121254s, txc = 0x55b554749200
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.030771255s, txc = 0x55b552b40900
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.030066490s, txc = 0x55b552b40f00
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.029835224s, txc = 0x55b5526f5500
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.029419899s, txc = 0x55b5546b0000
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.028999805s, txc = 0x55b552c95b00
Dec 06 07:27:55 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.027952194s, txc = 0x55b553ccc000
Dec 06 07:27:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:27:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:55.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:27:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:55.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:56 compute-1 nova_compute[226101]: 2025-12-06 07:27:56.221 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:56 compute-1 nova_compute[226101]: 2025-12-06 07:27:56.303 226109 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:27:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:57.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:57.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:57 compute-1 ceph-mon[81689]: pgmap v2109: 305 pgs: 305 active+clean; 225 MiB data, 937 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 84 op/s
Dec 06 07:27:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4280102292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:57 compute-1 ceph-mon[81689]: pgmap v2110: 305 pgs: 305 active+clean; 225 MiB data, 937 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 79 op/s
Dec 06 07:27:58 compute-1 nova_compute[226101]: 2025-12-06 07:27:58.075 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.067656994s, txc = 0x55b554619800
Dec 06 07:27:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:27:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:59.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:27:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:27:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:59.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:00 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:00.018 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:28:00 compute-1 ceph-mon[81689]: pgmap v2111: 305 pgs: 305 active+clean; 227 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 90 op/s
Dec 06 07:28:00 compute-1 ceph-mon[81689]: pgmap v2112: 305 pgs: 305 active+clean; 238 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 252 KiB/s rd, 2.4 MiB/s wr, 48 op/s
Dec 06 07:28:00 compute-1 ceph-mon[81689]: pgmap v2113: 305 pgs: 305 active+clean; 244 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 3.0 MiB/s wr, 46 op/s
Dec 06 07:28:00 compute-1 ceph-mon[81689]: pgmap v2114: 305 pgs: 305 active+clean; 279 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 55 op/s
Dec 06 07:28:01 compute-1 nova_compute[226101]: 2025-12-06 07:28:01.223 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:01.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:01.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:01.645 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:01.646 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:01.646 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:02 compute-1 nova_compute[226101]: 2025-12-06 07:28:02.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:02 compute-1 nova_compute[226101]: 2025-12-06 07:28:02.686 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:02 compute-1 nova_compute[226101]: 2025-12-06 07:28:02.686 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:02 compute-1 nova_compute[226101]: 2025-12-06 07:28:02.687 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:02 compute-1 nova_compute[226101]: 2025-12-06 07:28:02.687 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:28:02 compute-1 nova_compute[226101]: 2025-12-06 07:28:02.687 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:03 compute-1 nova_compute[226101]: 2025-12-06 07:28:03.077 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:03.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:03.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:05.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:05.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:06 compute-1 nova_compute[226101]: 2025-12-06 07:28:06.225 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:06 compute-1 ceph-mon[81689]: pgmap v2115: 305 pgs: 305 active+clean; 279 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 47 op/s
Dec 06 07:28:06 compute-1 ceph-mon[81689]: pgmap v2116: 305 pgs: 305 active+clean; 281 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 47 op/s
Dec 06 07:28:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:28:06 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/164684786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:06 compute-1 nova_compute[226101]: 2025-12-06 07:28:06.922 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:07 compute-1 nova_compute[226101]: 2025-12-06 07:28:07.351 226109 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:28:07 compute-1 nova_compute[226101]: 2025-12-06 07:28:07.384 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:28:07 compute-1 nova_compute[226101]: 2025-12-06 07:28:07.384 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:28:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:07.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:07.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:07 compute-1 nova_compute[226101]: 2025-12-06 07:28:07.586 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:28:07 compute-1 nova_compute[226101]: 2025-12-06 07:28:07.587 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4383MB free_disk=20.83625030517578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:28:07 compute-1 nova_compute[226101]: 2025-12-06 07:28:07.587 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:07 compute-1 nova_compute[226101]: 2025-12-06 07:28:07.588 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:08 compute-1 nova_compute[226101]: 2025-12-06 07:28:08.033 226109 INFO nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating resource usage from migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f
Dec 06 07:28:08 compute-1 nova_compute[226101]: 2025-12-06 07:28:08.080 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:08 compute-1 ceph-mon[81689]: pgmap v2117: 305 pgs: 305 active+clean; 300 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.1 MiB/s wr, 41 op/s
Dec 06 07:28:08 compute-1 ceph-mon[81689]: pgmap v2118: 305 pgs: 305 active+clean; 302 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 34 op/s
Dec 06 07:28:08 compute-1 ceph-mon[81689]: pgmap v2119: 305 pgs: 305 active+clean; 307 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 53 op/s
Dec 06 07:28:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2598187878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/164684786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3741694525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:08 compute-1 nova_compute[226101]: 2025-12-06 07:28:08.103 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 07:28:08 compute-1 nova_compute[226101]: 2025-12-06 07:28:08.104 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:28:08 compute-1 nova_compute[226101]: 2025-12-06 07:28:08.104 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:28:08 compute-1 podman[266462]: 2025-12-06 07:28:08.133509769 +0000 UTC m=+0.116381574 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:28:08 compute-1 nova_compute[226101]: 2025-12-06 07:28:08.246 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:28:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1049557105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:08 compute-1 nova_compute[226101]: 2025-12-06 07:28:08.740 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:08 compute-1 nova_compute[226101]: 2025-12-06 07:28:08.747 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:28:09 compute-1 ovn_controller[130279]: 2025-12-06T07:28:09Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:c9:f4 10.100.0.11
Dec 06 07:28:09 compute-1 ovn_controller[130279]: 2025-12-06T07:28:09Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:c9:f4 10.100.0.11
Dec 06 07:28:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:28:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:10.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:10 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:10.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:10 compute-1 nova_compute[226101]: 2025-12-06 07:28:10.225 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:28:10 compute-1 ceph-mon[81689]: pgmap v2120: 305 pgs: 305 active+clean; 307 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 224 KiB/s rd, 1.0 MiB/s wr, 35 op/s
Dec 06 07:28:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1106945750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2551767847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2081102898' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:28:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2081102898' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:28:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1049557105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1704356668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:10 compute-1 nova_compute[226101]: 2025-12-06 07:28:10.392 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:28:10 compute-1 nova_compute[226101]: 2025-12-06 07:28:10.393 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:10 compute-1 sudo[266510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:10 compute-1 sudo[266510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:10 compute-1 sudo[266510]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:10 compute-1 podman[266534]: 2025-12-06 07:28:10.473333705 +0000 UTC m=+0.053147429 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:28:10 compute-1 podman[266535]: 2025-12-06 07:28:10.480311391 +0000 UTC m=+0.057554766 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 07:28:10 compute-1 sudo[266556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:28:10 compute-1 sudo[266556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:10 compute-1 sudo[266556]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:10 compute-1 sudo[266596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:10 compute-1 sudo[266596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:10 compute-1 sudo[266596]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:10 compute-1 sudo[266621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:28:10 compute-1 sudo[266621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:11 compute-1 sudo[266621]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:11 compute-1 sudo[266677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:11 compute-1 sudo[266677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:11 compute-1 sudo[266677]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:11 compute-1 sudo[266702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:28:11 compute-1 sudo[266702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:11 compute-1 sudo[266702]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:11 compute-1 nova_compute[226101]: 2025-12-06 07:28:11.227 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:11 compute-1 sudo[266727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:11 compute-1 sudo[266727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:11 compute-1 sudo[266727]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:11 compute-1 sudo[266752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 06 07:28:11 compute-1 sudo[266752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:11 compute-1 sudo[266752]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:28:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:12 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:12.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:12.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:12 compute-1 nova_compute[226101]: 2025-12-06 07:28:12.393 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:12 compute-1 nova_compute[226101]: 2025-12-06 07:28:12.393 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:28:12 compute-1 nova_compute[226101]: 2025-12-06 07:28:12.394 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:28:12 compute-1 nova_compute[226101]: 2025-12-06 07:28:12.422 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:28:12 compute-1 nova_compute[226101]: 2025-12-06 07:28:12.422 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:28:12 compute-1 nova_compute[226101]: 2025-12-06 07:28:12.422 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:28:12 compute-1 nova_compute[226101]: 2025-12-06 07:28:12.422 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:28:12 compute-1 ceph-mon[81689]: pgmap v2121: 305 pgs: 305 active+clean; 309 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Dec 06 07:28:13 compute-1 nova_compute[226101]: 2025-12-06 07:28:13.082 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3051144945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:13 compute-1 ceph-mon[81689]: pgmap v2122: 305 pgs: 305 active+clean; 318 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 578 KiB/s rd, 1.2 MiB/s wr, 83 op/s
Dec 06 07:28:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:28:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:14 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:14.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:14.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:15 compute-1 ceph-mon[81689]: pgmap v2123: 305 pgs: 305 active+clean; 321 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 633 KiB/s rd, 436 KiB/s wr, 84 op/s
Dec 06 07:28:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.097 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating instance_info_cache with network_info: [{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.229 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:28:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:16.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:16 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:16.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.254 226109 INFO nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance shutdown successfully after 29 seconds.
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.398 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.398 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.398 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.399 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.399 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.399 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.399 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.399 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.400 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:28:16 compute-1 nova_compute[226101]: 2025-12-06 07:28:16.618 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:28:18 compute-1 nova_compute[226101]: 2025-12-06 07:28:18.084 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:18.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:28:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:18 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:18.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:19 compute-1 kernel: tapcf525aa5-a1 (unregistering): left promiscuous mode
Dec 06 07:28:19 compute-1 NetworkManager[49031]: <info>  [1765006099.2811] device (tapcf525aa5-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:28:19 compute-1 ovn_controller[130279]: 2025-12-06T07:28:19Z|00456|binding|INFO|Releasing lport cf525aa5-a11a-4218-98dd-328546551976 from this chassis (sb_readonly=0)
Dec 06 07:28:19 compute-1 ovn_controller[130279]: 2025-12-06T07:28:19Z|00457|binding|INFO|Setting lport cf525aa5-a11a-4218-98dd-328546551976 down in Southbound
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.288 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:19 compute-1 ovn_controller[130279]: 2025-12-06T07:28:19Z|00458|binding|INFO|Removing iface tapcf525aa5-a1 ovn-installed in OVS
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.290 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.306 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:19 compute-1 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Dec 06 07:28:19 compute-1 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006e.scope: Consumed 15.989s CPU time.
Dec 06 07:28:19 compute-1 systemd-machined[190302]: Machine qemu-51-instance-0000006e terminated.
Dec 06 07:28:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:19.366 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:c9:f4 10.100.0.11'], port_security=['fa:16:3e:01:c9:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '018b5c4d-2e0a-428b-ac8b-a74236e0d103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '4', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=cf525aa5-a11a-4218-98dd-328546551976) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:28:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:19.368 139580 INFO neutron.agent.ovn.metadata.agent [-] Port cf525aa5-a11a-4218-98dd-328546551976 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a unbound from our chassis
Dec 06 07:28:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:19.369 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c014e4e-a182-4f60-8285-20525bc99e5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:28:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:19.371 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[20f71b3c-83cc-4565-9d4f-672944bfa00f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:19.371 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace which is not needed anymore
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.475 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.479 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.494 226109 INFO nova.virt.libvirt.driver [-] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Instance destroyed successfully.
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.495 226109 DEBUG nova.virt.libvirt.vif [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1401757895',display_name='tempest-ServerDiskConfigTestJSON-server-1401757895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1401757895',id=110,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:27:34Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-iyd6zt70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:27:41Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=018b5c4d-2e0a-428b-ac8b-a74236e0d103,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:01:c9:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.495 226109 DEBUG nova.network.os_vif_util [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:01:c9:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.496 226109 DEBUG nova.network.os_vif_util [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.496 226109 DEBUG os_vif [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.498 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.498 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf525aa5-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.500 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.501 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.504 226109 INFO os_vif [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1')
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.509 226109 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.509 226109 DEBUG nova.virt.libvirt.driver [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:28:19 compute-1 nova_compute[226101]: 2025-12-06 07:28:19.824 226109 DEBUG neutronclient.v2_0.client [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port cf525aa5-a11a-4218-98dd-328546551976 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Dec 06 07:28:19 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[266349]: [NOTICE]   (266360) : haproxy version is 2.8.14-c23fe91
Dec 06 07:28:19 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[266349]: [NOTICE]   (266360) : path to executable is /usr/sbin/haproxy
Dec 06 07:28:19 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[266349]: [WARNING]  (266360) : Exiting Master process...
Dec 06 07:28:19 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[266349]: [WARNING]  (266360) : Exiting Master process...
Dec 06 07:28:19 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[266349]: [ALERT]    (266360) : Current worker (266362) exited with code 143 (Terminated)
Dec 06 07:28:19 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[266349]: [WARNING]  (266360) : All workers exited. Exiting... (0)
Dec 06 07:28:19 compute-1 systemd[1]: libpod-271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387.scope: Deactivated successfully.
Dec 06 07:28:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:28:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:28:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:28:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:28:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:28:19 compute-1 ceph-mon[81689]: pgmap v2124: 305 pgs: 305 active+clean; 359 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.6 MiB/s wr, 188 op/s
Dec 06 07:28:19 compute-1 podman[266819]: 2025-12-06 07:28:19.896297136 +0000 UTC m=+0.444792731 container died 271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:28:20 compute-1 nova_compute[226101]: 2025-12-06 07:28:20.101 226109 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:20 compute-1 nova_compute[226101]: 2025-12-06 07:28:20.101 226109 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:20 compute-1 nova_compute[226101]: 2025-12-06 07:28:20.102 226109 DEBUG oslo_concurrency.lockutils [None req-e6c9ed57-aed4-4ef0-98d2-c9bde68f5212 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:28:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:20.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:20.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:20 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387-userdata-shm.mount: Deactivated successfully.
Dec 06 07:28:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-eff0082f16d0d5fa34a15678f6e75df167dfa10f500a44f1098fe1588d54fa56-merged.mount: Deactivated successfully.
Dec 06 07:28:20 compute-1 podman[266819]: 2025-12-06 07:28:20.775211664 +0000 UTC m=+1.323707259 container cleanup 271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:28:21 compute-1 podman[266854]: 2025-12-06 07:28:21.216593694 +0000 UTC m=+0.416658201 container remove 271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:28:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:21.222 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f6431af7-ebc8-4663-89c4-f02f3d3c100b]: (4, ('Sat Dec  6 07:28:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387)\n271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387\nSat Dec  6 07:28:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387)\n271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:21.225 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa063e1-b42d-4c2c-85f8-acaf0b53b57f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:21.227 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:28:21 compute-1 kernel: tap7c014e4e-a0: left promiscuous mode
Dec 06 07:28:21 compute-1 nova_compute[226101]: 2025-12-06 07:28:21.230 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:21 compute-1 nova_compute[226101]: 2025-12-06 07:28:21.243 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:21.245 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[29ec37f2-92d4-4862-bdda-7f7e8fe5c4f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:21 compute-1 ceph-mon[81689]: pgmap v2125: 305 pgs: 305 active+clean; 359 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 163 op/s
Dec 06 07:28:21 compute-1 ceph-mon[81689]: pgmap v2126: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Dec 06 07:28:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1737581334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:21.261 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fb78da40-8cfa-4043-91e1-f1164e361e5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:21 compute-1 systemd[1]: libpod-conmon-271e936f5d80feef4f2ea8d438346b027d8b8947c311aee700d729e3499c5387.scope: Deactivated successfully.
Dec 06 07:28:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:21.263 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a59ce7f6-e311-4eb8-96c0-ab00d67dc5ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:21.280 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8c6d00-46e9-4eec-aead-8c1f079b972a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633082, 'reachable_time': 40460, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266868, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:21 compute-1 systemd[1]: run-netns-ovnmeta\x2d7c014e4e\x2da182\x2d4f60\x2d8285\x2d20525bc99e5a.mount: Deactivated successfully.
Dec 06 07:28:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:21.283 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:28:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:28:21.283 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[1d29565a-76f7-4947-bfef-46d7a2e714c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:22.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:22.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:28:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3605.4 total, 600.0 interval
                                           Cumulative writes: 33K writes, 132K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 33K writes, 11K syncs, 2.93 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7462 writes, 31K keys, 7462 commit groups, 1.0 writes per commit group, ingest: 32.89 MB, 0.05 MB/s
                                           Interval WAL: 7462 writes, 2672 syncs, 2.79 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:28:22 compute-1 nova_compute[226101]: 2025-12-06 07:28:22.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:22 compute-1 nova_compute[226101]: 2025-12-06 07:28:22.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:28:23 compute-1 ceph-mon[81689]: pgmap v2127: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Dec 06 07:28:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:28:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:24.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:24 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:24.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:24 compute-1 nova_compute[226101]: 2025-12-06 07:28:24.503 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:24 compute-1 nova_compute[226101]: 2025-12-06 07:28:24.555 226109 DEBUG nova.compute.manager [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-unplugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:28:24 compute-1 nova_compute[226101]: 2025-12-06 07:28:24.555 226109 DEBUG oslo_concurrency.lockutils [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:24 compute-1 nova_compute[226101]: 2025-12-06 07:28:24.555 226109 DEBUG oslo_concurrency.lockutils [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:24 compute-1 nova_compute[226101]: 2025-12-06 07:28:24.556 226109 DEBUG oslo_concurrency.lockutils [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:24 compute-1 nova_compute[226101]: 2025-12-06 07:28:24.556 226109 DEBUG nova.compute.manager [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] No waiting events found dispatching network-vif-unplugged-cf525aa5-a11a-4218-98dd-328546551976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:28:24 compute-1 nova_compute[226101]: 2025-12-06 07:28:24.556 226109 WARNING nova.compute.manager [req-f1e3cd3c-6ace-4a4d-9606-868879a2f81e req-a8a782d6-fb6d-4dc6-9693-13a1a6cc8ff6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received unexpected event network-vif-unplugged-cf525aa5-a11a-4218-98dd-328546551976 for instance with vm_state active and task_state resize_migrated.
Dec 06 07:28:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2126141649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:25 compute-1 ceph-mon[81689]: pgmap v2128: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 125 op/s
Dec 06 07:28:26 compute-1 nova_compute[226101]: 2025-12-06 07:28:26.244 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:26.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:28:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:26 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:26.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:26 compute-1 nova_compute[226101]: 2025-12-06 07:28:26.972 226109 DEBUG nova.compute.manager [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:28:26 compute-1 nova_compute[226101]: 2025-12-06 07:28:26.972 226109 DEBUG oslo_concurrency.lockutils [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:26 compute-1 nova_compute[226101]: 2025-12-06 07:28:26.972 226109 DEBUG oslo_concurrency.lockutils [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:26 compute-1 nova_compute[226101]: 2025-12-06 07:28:26.972 226109 DEBUG oslo_concurrency.lockutils [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:26 compute-1 nova_compute[226101]: 2025-12-06 07:28:26.972 226109 DEBUG nova.compute.manager [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] No waiting events found dispatching network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:28:26 compute-1 nova_compute[226101]: 2025-12-06 07:28:26.973 226109 WARNING nova.compute.manager [req-a9b34cf5-ac39-4ddc-bd19-4f4b4f19512a req-c6c74a55-be15-4dae-ac0c-ce0d801dcd80 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received unexpected event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 for instance with vm_state active and task_state resize_migrated.
Dec 06 07:28:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:28.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:28.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:28 compute-1 nova_compute[226101]: 2025-12-06 07:28:28.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:29 compute-1 nova_compute[226101]: 2025-12-06 07:28:29.506 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:29 compute-1 ceph-mon[81689]: pgmap v2129: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Dec 06 07:28:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:30.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:30.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:31 compute-1 sudo[266871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:31 compute-1 sudo[266871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:31 compute-1 sudo[266871]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:31 compute-1 nova_compute[226101]: 2025-12-06 07:28:31.246 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:31 compute-1 sudo[266896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:28:31 compute-1 sudo[266896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:31 compute-1 sudo[266896]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:31 compute-1 nova_compute[226101]: 2025-12-06 07:28:31.865 226109 DEBUG nova.compute.manager [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-changed-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:28:31 compute-1 nova_compute[226101]: 2025-12-06 07:28:31.865 226109 DEBUG nova.compute.manager [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Refreshing instance network info cache due to event network-changed-cf525aa5-a11a-4218-98dd-328546551976. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:28:31 compute-1 nova_compute[226101]: 2025-12-06 07:28:31.865 226109 DEBUG oslo_concurrency.lockutils [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:28:31 compute-1 nova_compute[226101]: 2025-12-06 07:28:31.865 226109 DEBUG oslo_concurrency.lockutils [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:28:31 compute-1 nova_compute[226101]: 2025-12-06 07:28:31.865 226109 DEBUG nova.network.neutron [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Refreshing network info cache for port cf525aa5-a11a-4218-98dd-328546551976 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:28:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:28:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:32.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:32 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:32.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:32 compute-1 ceph-mon[81689]: pgmap v2130: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 523 KiB/s wr, 8 op/s
Dec 06 07:28:32 compute-1 ceph-mon[81689]: pgmap v2131: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 148 KiB/s rd, 523 KiB/s wr, 11 op/s
Dec 06 07:28:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:34.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:34.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:34 compute-1 nova_compute[226101]: 2025-12-06 07:28:34.493 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006099.4920378, 018b5c4d-2e0a-428b-ac8b-a74236e0d103 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:28:34 compute-1 nova_compute[226101]: 2025-12-06 07:28:34.493 226109 INFO nova.compute.manager [-] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] VM Stopped (Lifecycle Event)
Dec 06 07:28:34 compute-1 nova_compute[226101]: 2025-12-06 07:28:34.510 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:34 compute-1 nova_compute[226101]: 2025-12-06 07:28:34.520 226109 DEBUG nova.compute.manager [None req-e6d1f88e-ca39-4d95-b4e4-a75cbc4a8e82 - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:28:34 compute-1 nova_compute[226101]: 2025-12-06 07:28:34.523 226109 DEBUG nova.compute.manager [None req-e6d1f88e-ca39-4d95-b4e4-a75cbc4a8e82 - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:28:34 compute-1 nova_compute[226101]: 2025-12-06 07:28:34.563 226109 INFO nova.compute.manager [None req-e6d1f88e-ca39-4d95-b4e4-a75cbc4a8e82 - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com
Dec 06 07:28:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:36 compute-1 ceph-mon[81689]: pgmap v2132: 305 pgs: 305 active+clean; 384 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 122 KiB/s rd, 1.4 MiB/s wr, 23 op/s
Dec 06 07:28:36 compute-1 nova_compute[226101]: 2025-12-06 07:28:36.138 226109 DEBUG nova.network.neutron [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updated VIF entry in instance network info cache for port cf525aa5-a11a-4218-98dd-328546551976. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:28:36 compute-1 nova_compute[226101]: 2025-12-06 07:28:36.140 226109 DEBUG nova.network.neutron [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating instance_info_cache with network_info: [{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:28:36 compute-1 nova_compute[226101]: 2025-12-06 07:28:36.195 226109 DEBUG oslo_concurrency.lockutils [req-8d741eba-af64-4086-bdb2-657fac0d6357 req-dad847fe-cdb0-44a6-990b-2308244564b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:28:36 compute-1 nova_compute[226101]: 2025-12-06 07:28:36.248 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:36.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:36.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:37 compute-1 ceph-mon[81689]: pgmap v2133: 305 pgs: 305 active+clean; 384 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Dec 06 07:28:37 compute-1 ceph-mon[81689]: pgmap v2134: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec 06 07:28:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1676437557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:38.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:28:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:38 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:38.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:39 compute-1 podman[266921]: 2025-12-06 07:28:39.092537461 +0000 UTC m=+0.077465358 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:28:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e271 e271: 3 total, 3 up, 3 in
Dec 06 07:28:39 compute-1 nova_compute[226101]: 2025-12-06 07:28:39.512 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:40.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:40.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:41 compute-1 podman[266948]: 2025-12-06 07:28:41.063914901 +0000 UTC m=+0.045744061 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 07:28:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:41 compute-1 podman[266947]: 2025-12-06 07:28:41.070209809 +0000 UTC m=+0.052216584 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:28:41 compute-1 nova_compute[226101]: 2025-12-06 07:28:41.251 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:41 compute-1 ceph-mon[81689]: pgmap v2135: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec 06 07:28:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:42.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:42.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:44.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:44.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:44 compute-1 nova_compute[226101]: 2025-12-06 07:28:44.517 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:46 compute-1 nova_compute[226101]: 2025-12-06 07:28:46.253 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:46.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:46.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:46 compute-1 ceph-mon[81689]: osdmap e271: 3 total, 3 up, 3 in
Dec 06 07:28:46 compute-1 ceph-mon[81689]: pgmap v2137: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 2.4 MiB/s wr, 32 op/s
Dec 06 07:28:46 compute-1 ceph-mon[81689]: pgmap v2138: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 784 KiB/s wr, 17 op/s
Dec 06 07:28:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:48 compute-1 ceph-mon[81689]: pgmap v2139: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 756 KiB/s wr, 24 op/s
Dec 06 07:28:48 compute-1 ceph-mon[81689]: pgmap v2140: 305 pgs: 305 active+clean; 403 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 175 KiB/s wr, 59 op/s
Dec 06 07:28:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:48.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:48.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:49 compute-1 nova_compute[226101]: 2025-12-06 07:28:49.519 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2549393210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:49 compute-1 ceph-mon[81689]: pgmap v2141: 305 pgs: 305 active+clean; 403 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 175 KiB/s wr, 59 op/s
Dec 06 07:28:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1698605546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:50.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:50.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:51 compute-1 ceph-mon[81689]: pgmap v2142: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 179 KiB/s wr, 58 op/s
Dec 06 07:28:51 compute-1 nova_compute[226101]: 2025-12-06 07:28:51.255 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:52.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:52.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:54.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:54.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:54 compute-1 nova_compute[226101]: 2025-12-06 07:28:54.523 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:56 compute-1 nova_compute[226101]: 2025-12-06 07:28:56.256 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:56.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:56.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:57 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:28:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:58.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:28:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:28:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:58.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:28:59 compute-1 nova_compute[226101]: 2025-12-06 07:28:59.527 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 3766..4359) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 3.148769379s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec 06 07:28:59 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-1[81685]: 2025-12-06T07:28:59.555+0000 7fc98ce5b640 -1 mon.compute-1@2(peon).paxos(paxos updating c 3766..4359) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 3.148769379s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec 06 07:28:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 8.276966095s
Dec 06 07:28:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 8.276966095s
Dec 06 07:28:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.277296066s, txc = 0x55b553859800
Dec 06 07:28:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.622703552s, txc = 0x55b5526f4900
Dec 06 07:28:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.609727859s, txc = 0x55b554a7cf00
Dec 06 07:29:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:00.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:00.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:00 compute-1 ceph-mon[81689]: pgmap v2143: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 162 KiB/s wr, 59 op/s
Dec 06 07:29:01 compute-1 nova_compute[226101]: 2025-12-06 07:29:01.257 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:01 compute-1 nova_compute[226101]: 2025-12-06 07:29:01.298 226109 DEBUG nova.compute.manager [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:01 compute-1 nova_compute[226101]: 2025-12-06 07:29:01.298 226109 DEBUG oslo_concurrency.lockutils [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:01 compute-1 nova_compute[226101]: 2025-12-06 07:29:01.298 226109 DEBUG oslo_concurrency.lockutils [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:01 compute-1 nova_compute[226101]: 2025-12-06 07:29:01.299 226109 DEBUG oslo_concurrency.lockutils [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:01 compute-1 nova_compute[226101]: 2025-12-06 07:29:01.299 226109 DEBUG nova.compute.manager [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] No waiting events found dispatching network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:01 compute-1 nova_compute[226101]: 2025-12-06 07:29:01.299 226109 WARNING nova.compute.manager [req-732371a7-5131-4525-90ce-86ae61f8c291 req-faeff8e5-3f3e-4b8a-bf19-9e6d63f74da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received unexpected event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 for instance with vm_state active and task_state resize_finish.
Dec 06 07:29:01 compute-1 ovn_controller[130279]: 2025-12-06T07:29:01Z|00459|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Dec 06 07:29:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:01.646 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:01.647 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:01.647 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:02.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:02.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:03 compute-1 ceph-mon[81689]: pgmap v2144: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 105 KiB/s wr, 58 op/s
Dec 06 07:29:03 compute-1 ceph-mon[81689]: pgmap v2145: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 107 KiB/s wr, 54 op/s
Dec 06 07:29:03 compute-1 ceph-mon[81689]: pgmap v2146: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 18 KiB/s wr, 16 op/s
Dec 06 07:29:03 compute-1 ceph-mon[81689]: pgmap v2147: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 18 KiB/s wr, 19 op/s
Dec 06 07:29:03 compute-1 nova_compute[226101]: 2025-12-06 07:29:03.613 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:29:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:04.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:29:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:04.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:04 compute-1 nova_compute[226101]: 2025-12-06 07:29:04.531 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:06 compute-1 nova_compute[226101]: 2025-12-06 07:29:06.260 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:06.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:06.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:06 compute-1 ceph-mon[81689]: pgmap v2148: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 2.9 KiB/s wr, 19 op/s
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.190 226109 DEBUG nova.compute.manager [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.190 226109 DEBUG oslo_concurrency.lockutils [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.190 226109 DEBUG oslo_concurrency.lockutils [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.190 226109 DEBUG oslo_concurrency.lockutils [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.191 226109 DEBUG nova.compute.manager [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] No waiting events found dispatching network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.191 226109 WARNING nova.compute.manager [req-f0294cf5-6b62-43ae-8667-ab4e09ddee7b req-52ff4667-e765-4dbb-a4ab-b854232e2e07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Received unexpected event network-vif-plugged-cf525aa5-a11a-4218-98dd-328546551976 for instance with vm_state active and task_state resize_finish.
Dec 06 07:29:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:07.248 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.248 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:07.249 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.494 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.495 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.495 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.495 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.496 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:07 compute-1 ceph-mon[81689]: pgmap v2149: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 KiB/s wr, 71 op/s
Dec 06 07:29:07 compute-1 ceph-mon[81689]: pgmap v2150: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 144 op/s
Dec 06 07:29:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:29:07 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/851716319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:07 compute-1 nova_compute[226101]: 2025-12-06 07:29:07.962 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:08.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:08.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:08 compute-1 nova_compute[226101]: 2025-12-06 07:29:08.409 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:29:08 compute-1 nova_compute[226101]: 2025-12-06 07:29:08.409 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:29:08 compute-1 nova_compute[226101]: 2025-12-06 07:29:08.539 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:29:08 compute-1 nova_compute[226101]: 2025-12-06 07:29:08.540 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4598MB free_disk=20.805904388427734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:29:08 compute-1 nova_compute[226101]: 2025-12-06 07:29:08.540 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:08 compute-1 nova_compute[226101]: 2025-12-06 07:29:08.540 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:09 compute-1 nova_compute[226101]: 2025-12-06 07:29:09.534 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/851716319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:09 compute-1 ceph-mon[81689]: pgmap v2151: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 23 KiB/s wr, 141 op/s
Dec 06 07:29:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/803272312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2527059766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:29:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2527059766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:29:10 compute-1 podman[267008]: 2025-12-06 07:29:10.11023066 +0000 UTC m=+0.083828787 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:29:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:10.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:10.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.468 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Migration for instance 018b5c4d-2e0a-428b-ac8b-a74236e0d103 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.508 226109 INFO nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating resource usage from migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.508 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Starting to track outgoing migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f with flavor 25848a18-11d9-4f11-80b5-5d005675c76d _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Dec 06 07:29:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.705 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.705 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.705 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.835 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.874 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.875 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.893 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.923 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:29:10 compute-1 nova_compute[226101]: 2025-12-06 07:29:10.970 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:11 compute-1 nova_compute[226101]: 2025-12-06 07:29:11.000 226109 DEBUG oslo_concurrency.lockutils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:11 compute-1 nova_compute[226101]: 2025-12-06 07:29:11.001 226109 DEBUG oslo_concurrency.lockutils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:11 compute-1 nova_compute[226101]: 2025-12-06 07:29:11.001 226109 DEBUG nova.compute.manager [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Going to confirm migration 15 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Dec 06 07:29:11 compute-1 nova_compute[226101]: 2025-12-06 07:29:11.261 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:11 compute-1 ceph-mon[81689]: pgmap v2152: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 142 op/s
Dec 06 07:29:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:29:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1467730153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:11 compute-1 nova_compute[226101]: 2025-12-06 07:29:11.529 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:11 compute-1 nova_compute[226101]: 2025-12-06 07:29:11.535 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:29:11 compute-1 nova_compute[226101]: 2025-12-06 07:29:11.677 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:29:11 compute-1 nova_compute[226101]: 2025-12-06 07:29:11.759 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:29:11 compute-1 nova_compute[226101]: 2025-12-06 07:29:11.759 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:12 compute-1 podman[267058]: 2025-12-06 07:29:12.062322256 +0000 UTC m=+0.045215716 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:29:12 compute-1 podman[267057]: 2025-12-06 07:29:12.075472936 +0000 UTC m=+0.063177995 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:29:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:12.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:12.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
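
The paired anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 every two seconds are load-balancer liveness probes against the radosgw beast frontend. A sketch reproducing one probe, assuming the gateway listens on localhost:8080 (the port is not shown in these lines):

    import http.client

    # Same anonymous probe the balancer sends; radosgw answers HEAD /
    # with 200 and an empty body when the frontend is up.
    conn = http.client.HTTPConnection("localhost", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200
    conn.close()
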
Dec 06 07:29:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:13.251 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
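
That DbSetCommand is the metadata agent acknowledging the southbound sequence number: it writes 'neutron:ovn-metadata-sb-cfg' back into its Chassis_Private record's external_ids. Roughly equivalent ovsdbapp code, assuming 'api' is an already-connected Idl backend for the OVN_Southbound schema (e.g. ovsdbapp.schema.ovn_southbound.impl_idl) on a recent ovsdbapp that accepts if_exists:

    # 'api' is assumed to be a connected ovsdbapp OVN_Southbound API object.
    chassis = "03fe054d-d727-4af3-9c5e-92e57505f242"
    api.db_set(
        "Chassis_Private", chassis,
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "43"}),
        if_exists=True,
    ).execute(check_error=True)
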
Dec 06 07:29:13 compute-1 nova_compute[226101]: 2025-12-06 07:29:13.736 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:13 compute-1 nova_compute[226101]: 2025-12-06 07:29:13.736 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:13 compute-1 nova_compute[226101]: 2025-12-06 07:29:13.736 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:29:13 compute-1 nova_compute[226101]: 2025-12-06 07:29:13.736 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:29:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:14.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:14.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:14 compute-1 nova_compute[226101]: 2025-12-06 07:29:14.360 226109 DEBUG neutronclient.v2_0.client [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port cf525aa5-a11a-4218-98dd-328546551976 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
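
The PortBindingNotFound above is expected during confirm_resize: Nova asks Neutron to delete the source-host binding for port cf525aa5-a11a-4218-98dd-328546551976 and the binding is already gone. One way to inspect the surviving binding, sketched via subprocess around the openstack CLI (assumes cloud credentials in the environment):

    import json
    import subprocess

    # After the resize is confirmed, binding_host_id should point at the
    # destination host (compute-2 in this log).
    port = "cf525aa5-a11a-4218-98dd-328546551976"
    out = subprocess.run(
        ["openstack", "port", "show", port, "-f", "json",
         "-c", "binding_host_id", "-c", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))
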
Dec 06 07:29:14 compute-1 nova_compute[226101]: 2025-12-06 07:29:14.361 226109 DEBUG oslo_concurrency.lockutils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:29:14 compute-1 nova_compute[226101]: 2025-12-06 07:29:14.361 226109 DEBUG oslo_concurrency.lockutils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquired lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:29:14 compute-1 nova_compute[226101]: 2025-12-06 07:29:14.361 226109 DEBUG nova.network.neutron [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:29:14 compute-1 nova_compute[226101]: 2025-12-06 07:29:14.362 226109 DEBUG nova.objects.instance [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'info_cache' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:29:14 compute-1 nova_compute[226101]: 2025-12-06 07:29:14.538 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1467730153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3153342435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:15 compute-1 ceph-mon[81689]: pgmap v2153: 305 pgs: 305 active+clean; 430 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 959 KiB/s wr, 158 op/s
Dec 06 07:29:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:16 compute-1 nova_compute[226101]: 2025-12-06 07:29:16.262 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:16.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:16.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:16 compute-1 ceph-mon[81689]: pgmap v2154: 305 pgs: 305 active+clean; 445 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 164 op/s
Dec 06 07:29:16 compute-1 nova_compute[226101]: 2025-12-06 07:29:16.553 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:29:16 compute-1 nova_compute[226101]: 2025-12-06 07:29:16.553 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:16 compute-1 nova_compute[226101]: 2025-12-06 07:29:16.554 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:16 compute-1 nova_compute[226101]: 2025-12-06 07:29:16.555 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:16 compute-1 nova_compute[226101]: 2025-12-06 07:29:16.555 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:16 compute-1 nova_compute[226101]: 2025-12-06 07:29:16.556 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:16 compute-1 nova_compute[226101]: 2025-12-06 07:29:16.556 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:29:16 compute-1 nova_compute[226101]: 2025-12-06 07:29:16.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
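
The run of "Running periodic task ComputeManager._*" lines is oslo.service iterating every method decorated as a periodic task on the manager; _reclaim_queued_deletes short-circuits because CONF.reclaim_instance_interval <= 0. A minimal sketch of the same pattern (hypothetical task name, not Nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_example(self, context):
            # A task can still bail out cheaply when its feature is
            # disabled, as _reclaim_queued_deletes does above.
            print("running _poll_example")

    mgr = Manager()
    # The service loop calls this on a timer; each decorated task fires
    # only once its own spacing has elapsed.
    mgr.run_periodic_tasks(context=None)
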
Dec 06 07:29:17 compute-1 ceph-mon[81689]: pgmap v2155: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Dec 06 07:29:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1424801763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:18.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:18.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:29:18 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1032885356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:18 compute-1 nova_compute[226101]: 2025-12-06 07:29:18.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:19 compute-1 nova_compute[226101]: 2025-12-06 07:29:19.541 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1045039534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:20 compute-1 ceph-mon[81689]: pgmap v2156: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Dec 06 07:29:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1311121322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1032885356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:20.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:20.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:21 compute-1 nova_compute[226101]: 2025-12-06 07:29:21.265 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:22.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:22.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:23 compute-1 ceph-mon[81689]: pgmap v2157: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Dec 06 07:29:23 compute-1 nova_compute[226101]: 2025-12-06 07:29:23.336 226109 DEBUG nova.network.neutron [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 018b5c4d-2e0a-428b-ac8b-a74236e0d103] Updating instance_info_cache with network_info: [{"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
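
The network_info blob written into the cache is a list of VIF dicts with everything Nova later needs (MAC, fixed IPs, bridge, MTU) denormalized into it. A short sketch pulling the fixed addresses back out of the structure shown above (abbreviated to the fields used here):

    import json

    network_info = json.loads("""[{
      "id": "cf525aa5-a11a-4218-98dd-328546551976",
      "address": "fa:16:3e:01:c9:f4",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.11", "type": "fixed"}]}]}
    }]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], vif["address"], ip["address"])  # 10.100.0.11
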
Dec 06 07:29:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:24.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:24.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:24 compute-1 ceph-mon[81689]: pgmap v2158: 305 pgs: 305 active+clean; 503 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 4.9 MiB/s wr, 86 op/s
Dec 06 07:29:24 compute-1 nova_compute[226101]: 2025-12-06 07:29:24.546 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:25 compute-1 nova_compute[226101]: 2025-12-06 07:29:25.895 226109 DEBUG oslo_concurrency.lockutils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Releasing lock "refresh_cache-018b5c4d-2e0a-428b-ac8b-a74236e0d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:29:25 compute-1 nova_compute[226101]: 2025-12-06 07:29:25.896 226109 DEBUG nova.objects.instance [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'migration_context' on Instance uuid 018b5c4d-2e0a-428b-ac8b-a74236e0d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:29:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:26.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:27 compute-1 nova_compute[226101]: 2025-12-06 07:29:27.376 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:27 compute-1 nova_compute[226101]: 2025-12-06 07:29:27.378 226109 DEBUG nova.storage.rbd_utils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] removing snapshot(nova-resize) on rbd image(018b5c4d-2e0a-428b-ac8b-a74236e0d103_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:29:27 compute-1 ceph-mon[81689]: pgmap v2159: 305 pgs: 305 active+clean; 503 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 3.9 MiB/s wr, 67 op/s
Dec 06 07:29:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:28.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:28.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.008 226109 DEBUG nova.compute.manager [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.135 226109 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.136 226109 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.156 226109 DEBUG nova.objects.instance [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'pci_requests' on Instance uuid 6ee4f2f5-3303-4c84-b708-eb35a65082b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.171 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.171 226109 INFO nova.compute.claims [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.172 226109 DEBUG nova.objects.instance [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'resources' on Instance uuid 6ee4f2f5-3303-4c84-b708-eb35a65082b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.185 226109 DEBUG nova.objects.instance [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ee4f2f5-3303-4c84-b708-eb35a65082b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.231 226109 INFO nova.compute.resource_tracker [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating resource usage from migration c33ab03a-95a6-4e21-b04b-af3607324828
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.232 226109 DEBUG nova.compute.resource_tracker [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Starting to track incoming migration c33ab03a-95a6-4e21-b04b-af3607324828 with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.407 226109 DEBUG oslo_concurrency.processutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:29 compute-1 nova_compute[226101]: 2025-12-06 07:29:29.550 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:29 compute-1 ceph-mon[81689]: pgmap v2160: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 652 KiB/s rd, 3.7 MiB/s wr, 91 op/s
Dec 06 07:29:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1719017473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:29 compute-1 ceph-mon[81689]: pgmap v2161: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 625 KiB/s rd, 2.9 MiB/s wr, 78 op/s
Dec 06 07:29:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:29:30 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2722503193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:30 compute-1 nova_compute[226101]: 2025-12-06 07:29:30.254 226109 DEBUG oslo_concurrency.processutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.847s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
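
The RBD image backend sizes its pool by shelling out to the ceph CLI rather than binding librados; oslo.concurrency's processutils wraps the subprocess and logs the command and elapsed time (0.847s above). The equivalent standalone call, assuming the client.openstack keyring is readable:

    import json
    from oslo_concurrency import processutils

    # Same command as the log line; execute() returns (stdout, stderr)
    # and raises ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
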
Dec 06 07:29:30 compute-1 nova_compute[226101]: 2025-12-06 07:29:30.259 226109 DEBUG nova.compute.provider_tree [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:29:30 compute-1 nova_compute[226101]: 2025-12-06 07:29:30.276 226109 DEBUG nova.scheduler.client.report [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
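
Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class. Applying it to the numbers in the log line:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
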
Dec 06 07:29:30 compute-1 nova_compute[226101]: 2025-12-06 07:29:30.316 226109 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:30 compute-1 nova_compute[226101]: 2025-12-06 07:29:30.317 226109 INFO nova.compute.manager [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Migrating
Dec 06 07:29:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:30.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:30.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:29:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.5 total, 600.0 interval
                                           Cumulative writes: 8729 writes, 46K keys, 8729 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 8729 writes, 8729 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1369 writes, 7065 keys, 1369 commit groups, 1.0 writes per commit group, ingest: 14.37 MB, 0.02 MB/s
                                           Interval WAL: 1370 writes, 1370 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     14.5      3.85              0.16        26    0.148       0      0       0.0       0.0
                                             L6      1/0   10.60 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.2     46.0     38.6      6.16              0.66        25    0.246    153K    14K       0.0       0.0
                                            Sum      1/0   10.60 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.2     28.3     29.3     10.01              0.82        51    0.196    153K    14K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     27.3     27.8      2.64              0.20        12    0.220     46K   3059       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     46.0     38.6      6.16              0.66        25    0.246    153K    14K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     14.7      3.81              0.16        25    0.152       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.055, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.29 GB write, 0.08 MB/s write, 0.28 GB read, 0.08 MB/s read, 10.0 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 2.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 31.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000196 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1824,30.50 MB,10.0339%) FilterBlock(51,436.30 KB,0.140155%) IndexBlock(51,737.86 KB,0.237028%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
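
The interval figures in the dump are self-consistent with its 600-second stats window; a quick arithmetic check against two of the reported rates:

    # Interval writes: ingest 14.37 MB over the 600.0 s interval.
    print(round(14.37 / 600.0, 2), "MB/s")  # 0.02 MB/s, matching the dump

    # Cumulative WAL: 8729 writes with 8729 syncs -> 1.00 writes per sync.
    print(8729 / 8729)
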
Dec 06 07:29:31 compute-1 ceph-mon[81689]: pgmap v2162: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 625 KiB/s rd, 2.9 MiB/s wr, 78 op/s
Dec 06 07:29:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2722503193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.270 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:31 compute-1 sudo[267155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:31 compute-1 sudo[267155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:31 compute-1 sudo[267155]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:31 compute-1 sudo[267180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:29:31 compute-1 sudo[267180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:31 compute-1 sudo[267180]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e272 e272: 3 total, 3 up, 3 in
Dec 06 07:29:31 compute-1 sudo[267205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:31 compute-1 sudo[267205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:31 compute-1 sudo[267205]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:31 compute-1 sudo[267230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 07:29:31 compute-1 sudo[267230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
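
The cephadm invocations driven over sudo by ceph-admin are the orchestrator's periodic host refresh: 'ls' inventories the daemons deployed on this host, and 'gather-facts' (later in the log) collects host facts, both printing JSON. A hedged sketch of consuming 'ls' output, assuming a system-installed cephadm (the versioned /var/lib/ceph/<fsid>/cephadm.<digest> copy from the log behaves the same):

    import json
    import subprocess

    # List the Ceph daemons on this host; cephadm prints a JSON array.
    out = subprocess.run(
        ["sudo", "cephadm", "ls"],
        capture_output=True, text=True, check=True,
    ).stdout
    for daemon in json.loads(out):
        print(daemon.get("name"), daemon.get("state"))
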
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.743 226109 DEBUG nova.virt.libvirt.vif [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:27:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1401757895',display_name='tempest-ServerDiskConfigTestJSON-server-1401757895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1401757895',id=110,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:29:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-iyd6zt70',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:29:07Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=018b5c4d-2e0a-428b-ac8b-a74236e0d103,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.744 226109 DEBUG nova.network.os_vif_util [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "cf525aa5-a11a-4218-98dd-328546551976", "address": "fa:16:3e:01:c9:f4", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf525aa5-a1", "ovs_interfaceid": "cf525aa5-a11a-4218-98dd-328546551976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.745 226109 DEBUG nova.network.os_vif_util [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.746 226109 DEBUG os_vif [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.748 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.749 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf525aa5-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.749 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.751 226109 INFO os_vif [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=cf525aa5-a11a-4218-98dd-328546551976,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf525aa5-a1')
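
The unplug path goes through os-vif: Nova converts its VIF dict to a VIFOpenVSwitch object (the two Converting/Converted lines above) and hands it to the 'ovs' plugin, which issues the DelPortCommand. A condensed sketch of the same call, with field values taken from the log:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' and other stevedore plugins

    net = network.Network(id="7c014e4e-a182-4f60-8285-20525bc99e5a",
                          bridge="br-int")
    ovs_vif = vif.VIFOpenVSwitch(
        id="cf525aa5-a11a-4218-98dd-328546551976",
        address="fa:16:3e:01:c9:f4",
        vif_name="tapcf525aa5-a1",
        bridge_name="br-int",
        plugin="ovs",
        network=net,
    )
    inst = instance_info.InstanceInfo(
        uuid="018b5c4d-2e0a-428b-ac8b-a74236e0d103",
        name="tempest-ServerDiskConfigTestJSON-server-1401757895",
    )
    os_vif.unplug(ovs_vif, inst)  # removes tapcf525aa5-a1 from br-int
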
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.752 226109 DEBUG oslo_concurrency.lockutils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.753 226109 DEBUG oslo_concurrency.lockutils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:31 compute-1 nova_compute[226101]: 2025-12-06 07:29:31.882 226109 DEBUG oslo_concurrency.processutils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:32 compute-1 podman[267348]: 2025-12-06 07:29:32.256006941 +0000 UTC m=+0.126680460 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 07:29:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:29:32 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1251781797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:32.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:32.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:32 compute-1 nova_compute[226101]: 2025-12-06 07:29:32.343 226109 DEBUG oslo_concurrency.processutils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:32 compute-1 nova_compute[226101]: 2025-12-06 07:29:32.349 226109 DEBUG nova.compute.provider_tree [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:29:32 compute-1 nova_compute[226101]: 2025-12-06 07:29:32.380 226109 DEBUG nova.scheduler.client.report [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:29:32 compute-1 podman[267348]: 2025-12-06 07:29:32.418288758 +0000 UTC m=+0.288962277 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:29:32 compute-1 nova_compute[226101]: 2025-12-06 07:29:32.505 226109 DEBUG oslo_concurrency.lockutils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:32 compute-1 nova_compute[226101]: 2025-12-06 07:29:32.660 226109 INFO nova.scheduler.client.report [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Deleted allocation for migration e9fd380a-9dfd-4e3f-9f3e-f4a350f6e02f
Dec 06 07:29:32 compute-1 nova_compute[226101]: 2025-12-06 07:29:32.762 226109 DEBUG oslo_concurrency.lockutils [None req-8c9fcc71-a97d-4043-91e1-5447971486c2 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "018b5c4d-2e0a-428b-ac8b-a74236e0d103" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 21.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
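
The "held 21.761s" accounting comes from oslo.concurrency: confirm_resize serializes on the instance UUID, and lockutils logs wait and hold times as its wrapper exits. The same primitives in isolation, mirroring the lock names seen above:

    from oslo_concurrency import lockutils

    # Context-manager form, as used around refresh_cache-<uuid> earlier.
    with lockutils.lock("018b5c4d-2e0a-428b-ac8b-a74236e0d103"):
        pass  # critical section

    # Decorator form, as used for "compute_resources".
    @lockutils.synchronized("compute_resources")
    def resize_claim():
        pass
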
Dec 06 07:29:33 compute-1 sshd-session[267380]: Accepted publickey for nova from 192.168.122.100 port 51352 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:29:33 compute-1 systemd-logind[788]: New session 54 of user nova.
Dec 06 07:29:33 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Dec 06 07:29:33 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec 06 07:29:33 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec 06 07:29:33 compute-1 systemd[1]: Starting User Manager for UID 42436...
Dec 06 07:29:33 compute-1 systemd[267386]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:29:33 compute-1 ceph-mon[81689]: osdmap e272: 3 total, 3 up, 3 in
Dec 06 07:29:33 compute-1 ceph-mon[81689]: pgmap v2164: 305 pgs: 305 active+clean; 525 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 612 KiB/s rd, 1017 KiB/s wr, 87 op/s
Dec 06 07:29:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1251781797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:33 compute-1 systemd[267386]: Queued start job for default target Main User Target.
Dec 06 07:29:33 compute-1 systemd[267386]: Created slice User Application Slice.
Dec 06 07:29:33 compute-1 systemd[267386]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:29:33 compute-1 systemd[267386]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 07:29:33 compute-1 systemd[267386]: Reached target Paths.
Dec 06 07:29:33 compute-1 systemd[267386]: Reached target Timers.
Dec 06 07:29:33 compute-1 systemd[267386]: Starting D-Bus User Message Bus Socket...
Dec 06 07:29:33 compute-1 systemd[267386]: Starting Create User's Volatile Files and Directories...
Dec 06 07:29:33 compute-1 systemd[267386]: Finished Create User's Volatile Files and Directories.
Dec 06 07:29:33 compute-1 systemd[267386]: Listening on D-Bus User Message Bus Socket.
Dec 06 07:29:33 compute-1 systemd[267386]: Reached target Sockets.
Dec 06 07:29:33 compute-1 systemd[267386]: Reached target Basic System.
Dec 06 07:29:33 compute-1 systemd[267386]: Reached target Main User Target.
Dec 06 07:29:33 compute-1 systemd[267386]: Startup finished in 134ms.
Dec 06 07:29:33 compute-1 systemd[1]: Started User Manager for UID 42436.
Dec 06 07:29:33 compute-1 systemd[1]: Started Session 54 of User nova.
Dec 06 07:29:33 compute-1 sshd-session[267380]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:29:33 compute-1 sshd-session[267442]: Received disconnect from 192.168.122.100 port 51352:11: disconnected by user
Dec 06 07:29:33 compute-1 sshd-session[267442]: Disconnected from user nova 192.168.122.100 port 51352
Dec 06 07:29:33 compute-1 sshd-session[267380]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:29:33 compute-1 systemd[1]: session-54.scope: Deactivated successfully.
Dec 06 07:29:33 compute-1 systemd-logind[788]: Session 54 logged out. Waiting for processes to exit.
Dec 06 07:29:33 compute-1 systemd-logind[788]: Removed session 54.
Dec 06 07:29:33 compute-1 sshd-session[267460]: Accepted publickey for nova from 192.168.122.100 port 51368 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:29:33 compute-1 systemd-logind[788]: New session 56 of user nova.
Dec 06 07:29:33 compute-1 systemd[1]: Started Session 56 of User nova.
Dec 06 07:29:33 compute-1 sshd-session[267460]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:29:33 compute-1 sudo[267230]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:33 compute-1 sshd-session[267495]: Received disconnect from 192.168.122.100 port 51368:11: disconnected by user
Dec 06 07:29:33 compute-1 sshd-session[267495]: Disconnected from user nova 192.168.122.100 port 51368
Dec 06 07:29:33 compute-1 sshd-session[267460]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:29:33 compute-1 systemd[1]: session-56.scope: Deactivated successfully.
Dec 06 07:29:33 compute-1 systemd-logind[788]: Session 56 logged out. Waiting for processes to exit.
Dec 06 07:29:33 compute-1 systemd-logind[788]: Removed session 56.
Dec 06 07:29:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:34.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:29:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:34 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:34.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:34 compute-1 nova_compute[226101]: 2025-12-06 07:29:34.554 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:35 compute-1 sudo[267497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:35 compute-1 sudo[267497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:35 compute-1 sudo[267497]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:35 compute-1 sudo[267522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:29:35 compute-1 sudo[267522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:35 compute-1 sudo[267522]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:35 compute-1 sudo[267547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:35 compute-1 sudo[267547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:35 compute-1 sudo[267547]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:35 compute-1 sudo[267572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:29:35 compute-1 sudo[267572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:36 compute-1 sudo[267572]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:36 compute-1 sudo[267627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:36 compute-1 sudo[267627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:36 compute-1 sudo[267627]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:36 compute-1 sudo[267652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:29:36 compute-1 sudo[267652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:36 compute-1 sudo[267652]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:36 compute-1 sudo[267677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:36 compute-1 sudo[267677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:36 compute-1 sudo[267677]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:36 compute-1 ceph-mon[81689]: pgmap v2165: 305 pgs: 305 active+clean; 525 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 612 KiB/s rd, 1017 KiB/s wr, 87 op/s
Dec 06 07:29:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:36 compute-1 ceph-mon[81689]: pgmap v2166: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 191 KiB/s wr, 157 op/s
Dec 06 07:29:36 compute-1 sudo[267702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 07:29:36 compute-1 sudo[267702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:36 compute-1 nova_compute[226101]: 2025-12-06 07:29:36.272 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:29:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:36.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:36 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:36.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:36 compute-1 podman[267765]: 2025-12-06 07:29:36.558812573 +0000 UTC m=+0.022565142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:29:36 compute-1 podman[267765]: 2025-12-06 07:29:36.804137955 +0000 UTC m=+0.267890504 container create 62c68f5458b0d430b11ac8af4a164a7daac9f71edb8eb434b90af0e897b36985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:29:37 compute-1 systemd[1]: Started libpod-conmon-62c68f5458b0d430b11ac8af4a164a7daac9f71edb8eb434b90af0e897b36985.scope.
Dec 06 07:29:37 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:29:37 compute-1 podman[267765]: 2025-12-06 07:29:37.128041152 +0000 UTC m=+0.591793721 container init 62c68f5458b0d430b11ac8af4a164a7daac9f71edb8eb434b90af0e897b36985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:29:37 compute-1 podman[267765]: 2025-12-06 07:29:37.136273722 +0000 UTC m=+0.600026271 container start 62c68f5458b0d430b11ac8af4a164a7daac9f71edb8eb434b90af0e897b36985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:29:37 compute-1 jolly_ritchie[267781]: 167 167
Dec 06 07:29:37 compute-1 systemd[1]: libpod-62c68f5458b0d430b11ac8af4a164a7daac9f71edb8eb434b90af0e897b36985.scope: Deactivated successfully.
Dec 06 07:29:37 compute-1 podman[267765]: 2025-12-06 07:29:37.246128692 +0000 UTC m=+0.709881261 container attach 62c68f5458b0d430b11ac8af4a164a7daac9f71edb8eb434b90af0e897b36985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 07:29:37 compute-1 podman[267765]: 2025-12-06 07:29:37.246600724 +0000 UTC m=+0.710353273 container died 62c68f5458b0d430b11ac8af4a164a7daac9f71edb8eb434b90af0e897b36985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 07:29:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-cdfebbd2fa7c57b5597f104a2980bae8f4d3377952c58879c3bbc25b7d25a3ca-merged.mount: Deactivated successfully.
Dec 06 07:29:37 compute-1 nova_compute[226101]: 2025-12-06 07:29:37.884 226109 DEBUG nova.compute.manager [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-unplugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:37 compute-1 nova_compute[226101]: 2025-12-06 07:29:37.885 226109 DEBUG oslo_concurrency.lockutils [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:37 compute-1 nova_compute[226101]: 2025-12-06 07:29:37.887 226109 DEBUG oslo_concurrency.lockutils [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:37 compute-1 nova_compute[226101]: 2025-12-06 07:29:37.887 226109 DEBUG oslo_concurrency.lockutils [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:37 compute-1 nova_compute[226101]: 2025-12-06 07:29:37.887 226109 DEBUG nova.compute.manager [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] No waiting events found dispatching network-vif-unplugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:37 compute-1 nova_compute[226101]: 2025-12-06 07:29:37.887 226109 WARNING nova.compute.manager [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received unexpected event network-vif-unplugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for instance with vm_state active and task_state resize_migrating.
Dec 06 07:29:37 compute-1 podman[267765]: 2025-12-06 07:29:37.968995479 +0000 UTC m=+1.432748028 container remove 62c68f5458b0d430b11ac8af4a164a7daac9f71edb8eb434b90af0e897b36985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:29:38 compute-1 systemd[1]: libpod-conmon-62c68f5458b0d430b11ac8af4a164a7daac9f71edb8eb434b90af0e897b36985.scope: Deactivated successfully.
Dec 06 07:29:38 compute-1 podman[267805]: 2025-12-06 07:29:38.132798047 +0000 UTC m=+0.027264709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:29:38 compute-1 podman[267805]: 2025-12-06 07:29:38.296499842 +0000 UTC m=+0.190966484 container create 20151ceacfe88b884f29db8960b855a4d4e24aa88173c04ecaafa571840f4c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 07:29:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:29:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:38.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:38 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:38.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:38 compute-1 systemd[1]: Started libpod-conmon-20151ceacfe88b884f29db8960b855a4d4e24aa88173c04ecaafa571840f4c62.scope.
Dec 06 07:29:38 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:29:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c9cf23c015d4287b96184fac08edc948d38290aecec810274d015da2236c6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c9cf23c015d4287b96184fac08edc948d38290aecec810274d015da2236c6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c9cf23c015d4287b96184fac08edc948d38290aecec810274d015da2236c6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c9cf23c015d4287b96184fac08edc948d38290aecec810274d015da2236c6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:38 compute-1 podman[267805]: 2025-12-06 07:29:38.501894809 +0000 UTC m=+0.396361481 container init 20151ceacfe88b884f29db8960b855a4d4e24aa88173c04ecaafa571840f4c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:29:38 compute-1 podman[267805]: 2025-12-06 07:29:38.509467861 +0000 UTC m=+0.403934493 container start 20151ceacfe88b884f29db8960b855a4d4e24aa88173c04ecaafa571840f4c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_saha, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 07:29:38 compute-1 podman[267805]: 2025-12-06 07:29:38.521688597 +0000 UTC m=+0.416155239 container attach 20151ceacfe88b884f29db8960b855a4d4e24aa88173c04ecaafa571840f4c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_saha, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:29:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:38 compute-1 ceph-mon[81689]: pgmap v2167: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 191 KiB/s wr, 157 op/s
Dec 06 07:29:39 compute-1 nova_compute[226101]: 2025-12-06 07:29:39.557 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:39 compute-1 infallible_saha[267821]: [
Dec 06 07:29:39 compute-1 infallible_saha[267821]:     {
Dec 06 07:29:39 compute-1 infallible_saha[267821]:         "available": false,
Dec 06 07:29:39 compute-1 infallible_saha[267821]:         "ceph_device": false,
Dec 06 07:29:39 compute-1 infallible_saha[267821]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:         "lsm_data": {},
Dec 06 07:29:39 compute-1 infallible_saha[267821]:         "lvs": [],
Dec 06 07:29:39 compute-1 infallible_saha[267821]:         "path": "/dev/sr0",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:         "rejected_reasons": [
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "Has a FileSystem",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "Insufficient space (<5GB)"
Dec 06 07:29:39 compute-1 infallible_saha[267821]:         ],
Dec 06 07:29:39 compute-1 infallible_saha[267821]:         "sys_api": {
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "actuators": null,
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "device_nodes": "sr0",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "devname": "sr0",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "human_readable_size": "482.00 KB",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "id_bus": "ata",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "model": "QEMU DVD-ROM",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "nr_requests": "2",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "parent": "/dev/sr0",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "partitions": {},
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "path": "/dev/sr0",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "removable": "1",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "rev": "2.5+",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "ro": "0",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "rotational": "1",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "sas_address": "",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "sas_device_handle": "",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "scheduler_mode": "mq-deadline",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "sectors": 0,
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "sectorsize": "2048",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "size": 493568.0,
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "support_discard": "2048",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "type": "disk",
Dec 06 07:29:39 compute-1 infallible_saha[267821]:             "vendor": "QEMU"
Dec 06 07:29:39 compute-1 infallible_saha[267821]:         }
Dec 06 07:29:39 compute-1 infallible_saha[267821]:     }
Dec 06 07:29:39 compute-1 infallible_saha[267821]: ]
Dec 06 07:29:39 compute-1 systemd[1]: libpod-20151ceacfe88b884f29db8960b855a4d4e24aa88173c04ecaafa571840f4c62.scope: Deactivated successfully.
Dec 06 07:29:39 compute-1 podman[267805]: 2025-12-06 07:29:39.902880639 +0000 UTC m=+1.797347281 container died 20151ceacfe88b884f29db8960b855a4d4e24aa88173c04ecaafa571840f4c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:29:39 compute-1 systemd[1]: libpod-20151ceacfe88b884f29db8960b855a4d4e24aa88173c04ecaafa571840f4c62.scope: Consumed 1.344s CPU time.
Dec 06 07:29:39 compute-1 systemd[1]: var-lib-containers-storage-overlay-c9c9cf23c015d4287b96184fac08edc948d38290aecec810274d015da2236c6b-merged.mount: Deactivated successfully.
Dec 06 07:29:40 compute-1 podman[267805]: 2025-12-06 07:29:40.044112146 +0000 UTC m=+1.938578798 container remove 20151ceacfe88b884f29db8960b855a4d4e24aa88173c04ecaafa571840f4c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_saha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 07:29:40 compute-1 systemd[1]: libpod-conmon-20151ceacfe88b884f29db8960b855a4d4e24aa88173c04ecaafa571840f4c62.scope: Deactivated successfully.
Dec 06 07:29:40 compute-1 sudo[267702]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:40 compute-1 nova_compute[226101]: 2025-12-06 07:29:40.192 226109 INFO nova.network.neutron [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 06 07:29:40 compute-1 nova_compute[226101]: 2025-12-06 07:29:40.284 226109 DEBUG nova.compute.manager [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:40 compute-1 nova_compute[226101]: 2025-12-06 07:29:40.284 226109 DEBUG oslo_concurrency.lockutils [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:40 compute-1 nova_compute[226101]: 2025-12-06 07:29:40.284 226109 DEBUG oslo_concurrency.lockutils [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:40 compute-1 nova_compute[226101]: 2025-12-06 07:29:40.284 226109 DEBUG oslo_concurrency.lockutils [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:40 compute-1 nova_compute[226101]: 2025-12-06 07:29:40.285 226109 DEBUG nova.compute.manager [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] No waiting events found dispatching network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:40 compute-1 nova_compute[226101]: 2025-12-06 07:29:40.285 226109 WARNING nova.compute.manager [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received unexpected event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for instance with vm_state active and task_state resize_migrated.
Dec 06 07:29:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:40.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:40.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:40 compute-1 ceph-mon[81689]: pgmap v2168: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 191 KiB/s wr, 157 op/s
Dec 06 07:29:41 compute-1 podman[269045]: 2025-12-06 07:29:41.163476096 +0000 UTC m=+0.132470713 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 07:29:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 e273: 3 total, 3 up, 3 in
Dec 06 07:29:41 compute-1 nova_compute[226101]: 2025-12-06 07:29:41.273 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:41 compute-1 nova_compute[226101]: 2025-12-06 07:29:41.639 226109 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:29:41 compute-1 nova_compute[226101]: 2025-12-06 07:29:41.639 226109 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquired lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:29:41 compute-1 nova_compute[226101]: 2025-12-06 07:29:41.639 226109 DEBUG nova.network.neutron [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:29:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:29:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:29:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:29:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:29:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:29:41 compute-1 ceph-mon[81689]: osdmap e273: 3 total, 3 up, 3 in
Dec 06 07:29:41 compute-1 nova_compute[226101]: 2025-12-06 07:29:41.775 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "11e39188-38c3-4fc8-99a9-769e536f649d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:41 compute-1 nova_compute[226101]: 2025-12-06 07:29:41.776 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:41 compute-1 nova_compute[226101]: 2025-12-06 07:29:41.811 226109 DEBUG nova.compute.manager [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:29:41 compute-1 nova_compute[226101]: 2025-12-06 07:29:41.932 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:41 compute-1 nova_compute[226101]: 2025-12-06 07:29:41.933 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:41 compute-1 nova_compute[226101]: 2025-12-06 07:29:41.939 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:29:41 compute-1 nova_compute[226101]: 2025-12-06 07:29:41.940 226109 INFO nova.compute.claims [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:29:42 compute-1 nova_compute[226101]: 2025-12-06 07:29:42.079 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:42.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:42.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:29:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4188191624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:42 compute-1 nova_compute[226101]: 2025-12-06 07:29:42.602 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:42 compute-1 nova_compute[226101]: 2025-12-06 07:29:42.609 226109 DEBUG nova.compute.provider_tree [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:29:42 compute-1 nova_compute[226101]: 2025-12-06 07:29:42.631 226109 DEBUG nova.scheduler.client.report [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:29:42 compute-1 nova_compute[226101]: 2025-12-06 07:29:42.669 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:42 compute-1 nova_compute[226101]: 2025-12-06 07:29:42.670 226109 DEBUG nova.compute.manager [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:29:42 compute-1 nova_compute[226101]: 2025-12-06 07:29:42.756 226109 DEBUG nova.compute.manager [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:29:42 compute-1 nova_compute[226101]: 2025-12-06 07:29:42.757 226109 DEBUG nova.network.neutron [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:29:42 compute-1 nova_compute[226101]: 2025-12-06 07:29:42.901 226109 INFO nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.028 226109 DEBUG nova.compute.manager [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-changed-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.029 226109 DEBUG nova.compute.manager [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Refreshing instance network info cache due to event network-changed-06cbbb2f-cba8-4b99-b2dd-778c71df2d23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.029 226109 DEBUG oslo_concurrency.lockutils [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.033 226109 DEBUG nova.compute.manager [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:29:43 compute-1 podman[269094]: 2025-12-06 07:29:43.072270258 +0000 UTC m=+0.055906652 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 07:29:43 compute-1 podman[269093]: 2025-12-06 07:29:43.078237907 +0000 UTC m=+0.063040392 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.218 226109 DEBUG nova.compute.manager [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.219 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.220 226109 INFO nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Creating image(s)
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.248 226109 DEBUG nova.storage.rbd_utils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 11e39188-38c3-4fc8-99a9-769e536f649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.278 226109 DEBUG nova.storage.rbd_utils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 11e39188-38c3-4fc8-99a9-769e536f649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.301 226109 DEBUG nova.storage.rbd_utils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 11e39188-38c3-4fc8-99a9-769e536f649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.305 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.372 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.373 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.374 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.374 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.400 226109 DEBUG nova.storage.rbd_utils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 11e39188-38c3-4fc8-99a9-769e536f649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.404 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 11e39188-38c3-4fc8-99a9-769e536f649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:43 compute-1 nova_compute[226101]: 2025-12-06 07:29:43.436 226109 DEBUG nova.policy [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd67c136e82ad4001b000848d75eef50d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88f5b34244614321a9b6e902eaba0ece', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
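The policy DEBUG line is expected on a tenant boot: Nova probes network:attach_external_network with the requester's member/reader token and, when the rule (admin-only by default) denies it, simply refrains from attaching external networks. A sketch of that check with oslo.policy; the check string here is illustrative, not Nova's exact registered default:

```python
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(policy.RuleDefault(
    'network:attach_external_network', 'role:admin'))  # illustrative rule

creds = {'roles': ['member', 'reader'],
         'project_id': '88f5b34244614321a9b6e902eaba0ece'}
# False for a member/reader token -> the "Policy check ... failed" line.
print(enforcer.enforce('network:attach_external_network', {}, creds))
```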
Dec 06 07:29:43 compute-1 ceph-mon[81689]: pgmap v2170: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 85 KiB/s wr, 142 op/s
Dec 06 07:29:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/54487037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4188191624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
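The two `"prefix": "df"` dispatches are OpenStack services on the controllers polling cluster capacity; the pgmap line above carries the same numbers. With the rados binding the query is a single call, assuming the client.openstack keyring shown in the audit lines:

```python
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    stats = cluster.get_cluster_stats()
    # kb/kb_used/kb_avail correspond to the "1.0 GiB used, 20 GiB / 21 GiB
    # avail" figures in the pgmap lines.
    print(stats['kb'], stats['kb_used'], stats['kb_avail'])
finally:
    cluster.shutdown()
```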
Dec 06 07:29:43 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Dec 06 07:29:43 compute-1 systemd[267386]: Activating special unit Exit the Session...
Dec 06 07:29:43 compute-1 systemd[267386]: Stopped target Main User Target.
Dec 06 07:29:43 compute-1 systemd[267386]: Stopped target Basic System.
Dec 06 07:29:43 compute-1 systemd[267386]: Stopped target Paths.
Dec 06 07:29:43 compute-1 systemd[267386]: Stopped target Sockets.
Dec 06 07:29:43 compute-1 systemd[267386]: Stopped target Timers.
Dec 06 07:29:43 compute-1 systemd[267386]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:29:43 compute-1 systemd[267386]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 07:29:43 compute-1 systemd[267386]: Closed D-Bus User Message Bus Socket.
Dec 06 07:29:43 compute-1 systemd[267386]: Stopped Create User's Volatile Files and Directories.
Dec 06 07:29:43 compute-1 systemd[267386]: Removed slice User Application Slice.
Dec 06 07:29:43 compute-1 systemd[267386]: Reached target Shutdown.
Dec 06 07:29:43 compute-1 systemd[267386]: Finished Exit the Session.
Dec 06 07:29:43 compute-1 systemd[267386]: Reached target Exit the Session.
Dec 06 07:29:43 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Dec 06 07:29:43 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Dec 06 07:29:43 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec 06 07:29:43 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec 06 07:29:43 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec 06 07:29:43 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec 06 07:29:43 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Dec 06 07:29:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:44.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:44 compute-1 nova_compute[226101]: 2025-12-06 07:29:44.399 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 11e39188-38c3-4fc8-99a9-769e536f649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.996s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:44 compute-1 nova_compute[226101]: 2025-12-06 07:29:44.470 226109 DEBUG nova.storage.rbd_utils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] resizing rbd image 11e39188-38c3-4fc8-99a9-769e536f649d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
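The rbd import that started at 07:29:43 finished in ~1 s, and Nova immediately grows the new image to the flavor's 1 GiB root disk. The resize can be reproduced with the rbd/rados Python bindings that ship with Ceph, using the same image name, pool, and client identity the log shows:

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, '11e39188-38c3-4fc8-99a9-769e536f649d_disk') as image:
            image.resize(1073741824)  # bytes; matches "resizing ... to 1073741824"
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```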
Dec 06 07:29:44 compute-1 nova_compute[226101]: 2025-12-06 07:29:44.567 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:44 compute-1 nova_compute[226101]: 2025-12-06 07:29:44.658 226109 DEBUG nova.network.neutron [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Successfully created port: 5c48cbf0-42a6-4bcd-97d6-a16936750d94 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:29:44 compute-1 nova_compute[226101]: 2025-12-06 07:29:44.881 226109 DEBUG nova.network.neutron [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating instance_info_cache with network_info: [{"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:29:44 compute-1 nova_compute[226101]: 2025-12-06 07:29:44.916 226109 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Releasing lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:29:44 compute-1 nova_compute[226101]: 2025-12-06 07:29:44.920 226109 DEBUG oslo_concurrency.lockutils [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:29:44 compute-1 nova_compute[226101]: 2025-12-06 07:29:44.921 226109 DEBUG nova.network.neutron [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Refreshing network info cache for port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.009 226109 DEBUG os_brick.utils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.011 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.021 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.022 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8e5f69-b147-4429-88d4-9eee36b76160]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.023 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.032 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.032 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[56ce5084-388e-495d-bde2-6d20996b7d79]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.034 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.041 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.042 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[dee00ff0-0b6d-4cf4-9fa3-7121eb4254e5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.043 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[eefba127-3680-46c5-ba0e-87daebde3e47]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.044 226109 DEBUG oslo_concurrency.processutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:45 compute-1 ceph-mon[81689]: pgmap v2171: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 85 KiB/s wr, 142 op/s
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.071 226109 DEBUG oslo_concurrency.processutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.074 226109 DEBUG os_brick.initiator.connectors.lightos [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.075 226109 DEBUG os_brick.initiator.connectors.lightos [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.075 226109 DEBUG os_brick.initiator.connectors.lightos [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.076 226109 DEBUG os_brick.utils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
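The 07:29:45.009-45.076 run is os-brick assembling the connector dictionary: multipathd status, the iSCSI initiator IQN, the NVMe host NQN (the LightOS probe fails with ECONNREFUSED and is tolerated), and the system UUID. That dict is what Nova passes to Cinder so the volume can be exported back to this host. The same call, with arguments taken verbatim from the trace line above (it needs the privsep/rootwrap machinery to probe the host):

```python
from os_brick.initiator import connector

props = connector.get_connector_properties(
    root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
    my_ip='192.168.122.101',
    multipath=True,
    enforce_multipath=True,
    host='compute-1.ctlplane.example.com',
)
print(props['initiator'], props['nqn'])  # IQN and host NQN from the log
```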
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.147 226109 DEBUG nova.objects.instance [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'migration_context' on Instance uuid 11e39188-38c3-4fc8-99a9-769e536f649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.165 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.166 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Ensure instance console log exists: /var/lib/nova/instances/11e39188-38c3-4fc8-99a9-769e536f649d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.166 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.167 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.167 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.819 226109 DEBUG nova.network.neutron [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Successfully updated port: 5c48cbf0-42a6-4bcd-97d6-a16936750d94 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.845 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "refresh_cache-11e39188-38c3-4fc8-99a9-769e536f649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.845 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquired lock "refresh_cache-11e39188-38c3-4fc8-99a9-769e536f649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:29:45 compute-1 nova_compute[226101]: 2025-12-06 07:29:45.845 226109 DEBUG nova.network.neutron [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:29:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.179 226109 DEBUG nova.network.neutron [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.276 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:46.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:46.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
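The anonymous HEAD / requests arriving in pairs every two seconds from 192.168.122.100 and .102 look like load-balancer health probes against the RGW beast frontend (an inference; the log shows only the requests themselves). Such a probe is trivially reproduced with the stdlib; the port is an assumption, as it never appears in these lines:

```python
import http.client

conn = http.client.HTTPConnection('192.168.122.101', 8080, timeout=2)  # port assumed
conn.request('HEAD', '/')
print(conn.getresponse().status)  # 200 while RGW is up, as in the beast lines
conn.close()
```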
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.357 226109 DEBUG nova.compute.manager [req-cbe615a2-28d5-469f-9087-1367ad3b106c req-ca29ea28-efec-4bb3-babf-9b321e61f906 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Received event network-changed-5c48cbf0-42a6-4bcd-97d6-a16936750d94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.358 226109 DEBUG nova.compute.manager [req-cbe615a2-28d5-469f-9087-1367ad3b106c req-ca29ea28-efec-4bb3-babf-9b321e61f906 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Refreshing instance network info cache due to event network-changed-5c48cbf0-42a6-4bcd-97d6-a16936750d94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.358 226109 DEBUG oslo_concurrency.lockutils [req-cbe615a2-28d5-469f-9087-1367ad3b106c req-ca29ea28-efec-4bb3-babf-9b321e61f906 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-11e39188-38c3-4fc8-99a9-769e536f649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.482 226109 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.483 226109 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.483 226109 INFO nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Creating image(s)
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.484 226109 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.484 226109 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Ensure instance console log exists: /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.484 226109 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.485 226109 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.485 226109 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.488 226109 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Start _get_guest_xml network_info=[{"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1643604044-network", "vif_mac": "fa:16:3e:14:2c:72"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f889bfdf-c600-4260-90f0-d332e8e3d79b', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f889bfdf-c600-4260-90f0-d332e8e3d79b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '6ee4f2f5-3303-4c84-b708-eb35a65082b6', 'attached_at': '2025-12-06T07:29:46.000000', 'detached_at': '', 'volume_id': 'f889bfdf-c600-4260-90f0-d332e8e3d79b', 'serial': 'f889bfdf-c600-4260-90f0-d332e8e3d79b'}, 'mount_device': '/dev/vda', 'boot_index': 0, 'attachment_id': '21ca6e03-67ed-4dbc-bbaf-49d8d79cf1b2', 'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.492 226109 WARNING nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.500 226109 DEBUG nova.virt.libvirt.host [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.502 226109 DEBUG nova.virt.libvirt.host [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.506 226109 DEBUG nova.virt.libvirt.host [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.506 226109 DEBUG nova.virt.libvirt.host [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
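The two probes above are Nova deciding whether CPU bandwidth controls are usable: the cgroups v1 check comes up empty and the v2 (unified hierarchy) check succeeds, the normal result on an EL9 host. On v2 the test reduces to reading the controller list at the cgroup root; a stdlib sketch, assuming the standard mount point:

```python
from pathlib import Path

def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
    # On a unified (v2) hierarchy the available controllers are listed,
    # space-separated, in cgroup.controllers at the hierarchy root.
    controllers = Path(root, 'cgroup.controllers')
    return controllers.exists() and 'cpu' in controllers.read_text().split()

print(has_cgroupsv2_cpu_controller())  # True on this host, per the log
```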
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.507 226109 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.507 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.508 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.508 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.508 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.508 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.508 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.508 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.509 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.509 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.509 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.509 226109 DEBUG nova.virt.hardware [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
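With every flavor and image constraint at 0:0:0, Nova enumerates the sockets x cores x threads factorizations of the vCPU count under the hypervisor maxima and sorts them by preference; for 1 vCPU the only candidate is 1:1:1, exactly as logged. A toy version of the enumeration (illustrative only, without Nova's full preference ordering):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Yield (sockets, cores, threads) triples whose product equals vcpus."""
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        per_socket = vcpus // sockets
        for cores in range(1, min(per_socket, max_cores) + 1):
            if per_socket % cores:
                continue
            threads = per_socket // cores
            if threads <= max_threads:
                yield (sockets, cores, threads)

print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log
```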
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.509 226109 DEBUG nova.objects.instance [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6ee4f2f5-3303-4c84-b708-eb35a65082b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:29:46 compute-1 nova_compute[226101]: 2025-12-06 07:29:46.559 226109 DEBUG oslo_concurrency.processutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1839537884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:46 compute-1 ceph-mon[81689]: pgmap v2172: 305 pgs: 305 active+clean; 498 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 535 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 06 07:29:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:29:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2507357678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.062 226109 DEBUG oslo_concurrency.processutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
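Before wiring the RBD volume into the guest, Nova shells out to `ceph mon dump` to learn the live monitor addresses; these become the hosts/ports lists in the connection_info and the <host> elements of the domain XML further down. The same query, assuming the ceph CLI and the openstack keyring from the log:

```python
import json
import subprocess

out = subprocess.run(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    capture_output=True, check=True,
)
monmap = json.loads(out.stdout)
for mon in monmap['mons']:
    # Each entry names a monitor and its public address(es).
    print(mon['name'], mon.get('public_addr') or mon.get('addr'))
```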
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.095 226109 DEBUG nova.virt.libvirt.vif [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1330317525',display_name='tempest-ServerActionsTestOtherA-server-1330317525',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1330317525',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2zgDxxtT0nLqH8UsyYi0lN8OWWrrFEA5pyLz04zJISRImczknO8hVkmNR6jGCiWeaXsQGs+JSkIuJDu8PO8wxSR1MWFJiUPcyPRnxYT8pR/R9bXgGDk3j+Ho5fOrAeLw==',key_name='tempest-keypair-402640413',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:29:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-ul2d1zzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:29:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=6ee4f2f5-3303-4c84-b708-eb35a65082b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1643604044-network", "vif_mac": "fa:16:3e:14:2c:72"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.095 226109 DEBUG nova.network.os_vif_util [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1643604044-network", "vif_mac": "fa:16:3e:14:2c:72"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.096 226109 DEBUG nova.network.os_vif_util [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.098 226109 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:29:47 compute-1 nova_compute[226101]:   <uuid>6ee4f2f5-3303-4c84-b708-eb35a65082b6</uuid>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   <name>instance-00000070</name>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   <memory>196608</memory>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerActionsTestOtherA-server-1330317525</nova:name>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:29:46</nova:creationTime>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <nova:flavor name="m1.micro">
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <nova:memory>192</nova:memory>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <nova:user uuid="baddb65c90da47a58d026b0db966f6c8">tempest-ServerActionsTestOtherA-1949739102-project-member</nova:user>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <nova:project uuid="001e2256cb8b430d93c1ff613010d199">tempest-ServerActionsTestOtherA-1949739102</nova:project>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <nova:port uuid="06cbbb2f-cba8-4b99-b2dd-778c71df2d23">
Dec 06 07:29:47 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <system>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <entry name="serial">6ee4f2f5-3303-4c84-b708-eb35a65082b6</entry>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <entry name="uuid">6ee4f2f5-3303-4c84-b708-eb35a65082b6</entry>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     </system>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   <os>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   </os>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   <features>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   </features>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/6ee4f2f5-3303-4c84-b708-eb35a65082b6_disk.config">
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       </source>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <source protocol="rbd" name="volumes/volume-f889bfdf-c600-4260-90f0-d332e8e3d79b">
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       </source>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:29:47 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <serial>f889bfdf-c600-4260-90f0-d332e8e3d79b</serial>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:14:2c:72"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <target dev="tap06cbbb2f-cb"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/console.log" append="off"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <video>
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     </video>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:29:47 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:29:47 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:29:47 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:29:47 compute-1 nova_compute[226101]: </domain>
Dec 06 07:29:47 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
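The XML above is the complete domain definition Nova hands to libvirt for the resized guest: q35 machine type, a custom Nehalem CPU model, the config-drive CDROM and the Cinder volume both as RBD network disks, and the tap device that os-vif is about to plug. With the libvirt Python bindings, turning such a document into a running domain takes two calls; `domain_xml` below stands in for the text printed above, and Nova itself goes through createWithFlags rather than plain create:

```python
import libvirt

conn = libvirt.open('qemu:///system')
try:
    dom = conn.defineXML(domain_xml)  # persist the definition shown above
    dom.create()                      # boot the defined domain
    print(dom.name(), dom.UUIDString())
finally:
    conn.close()
```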
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.100 226109 DEBUG nova.virt.libvirt.vif [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1330317525',display_name='tempest-ServerActionsTestOtherA-server-1330317525',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1330317525',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2zgDxxtT0nLqH8UsyYi0lN8OWWrrFEA5pyLz04zJISRImczknO8hVkmNR6jGCiWeaXsQGs+JSkIuJDu8PO8wxSR1MWFJiUPcyPRnxYT8pR/R9bXgGDk3j+Ho5fOrAeLw==',key_name='tempest-keypair-402640413',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:29:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-ul2d1zzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:29:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=6ee4f2f5-3303-4c84-b708-eb35a65082b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1643604044-network", "vif_mac": "fa:16:3e:14:2c:72"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
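
The user_data field in the Instance record above is base64-encoded. Decoding it with the standard library (the payload below is copied verbatim from that record) recovers the boot script tempest injected:

    import base64

    user_data = (
        "IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIK"
        "Y2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo="
    )
    print(base64.b64decode(user_data).decode())
    # #!/bin/sh
    # echo "Printing cirros user authorized keys"
    # cat ~cirros/.ssh/authorized_keys || true
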
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.100 226109 DEBUG nova.network.os_vif_util [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1643604044-network", "vif_mac": "fa:16:3e:14:2c:72"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.101 226109 DEBUG nova.network.os_vif_util [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.101 226109 DEBUG os_vif [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
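
The converted VIFOpenVSwitch object is handed to os_vif.plug(), which dispatches to the 'ovs' plugin. A minimal sketch of the same call, using only fields visible in the log line above and omitting everything the real driver also populates (network, MTU, profile); running it needs the same privileges nova-compute has:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()

    port = vif.VIFOpenVSwitch(
        id='06cbbb2f-cba8-4b99-b2dd-778c71df2d23',
        address='fa:16:3e:14:2c:72',
        vif_name='tap06cbbb2f-cb',
        bridge_name='br-int',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='06cbbb2f-cba8-4b99-b2dd-778c71df2d23'),
    )
    instance = instance_info.InstanceInfo(
        uuid='6ee4f2f5-3303-4c84-b708-eb35a65082b6',
        name='instance-00000070')
    os_vif.plug(port, instance)

On success the plugin issues the ovsdb transactions logged immediately below.
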
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.102 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.102 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.103 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.108 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.108 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06cbbb2f-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.109 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06cbbb2f-cb, col_values=(('external_ids', {'iface-id': '06cbbb2f-cba8-4b99-b2dd-778c71df2d23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:2c:72', 'vm-uuid': '6ee4f2f5-3303-4c84-b708-eb35a65082b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
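
The idl commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) map one-to-one onto ovsdbapp's Open vSwitch API; note the bridge add "caused no change" because br-int already exists. A rough standalone equivalent, with an illustrative connection string for the local ovsdb-server socket:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # may_exist=True makes each command a no-op if already applied.
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap06cbbb2f-cb', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap06cbbb2f-cb',
            ('external_ids', {
                'iface-id': '06cbbb2f-cba8-4b99-b2dd-778c71df2d23',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:14:2c:72',
                'vm-uuid': '6ee4f2f5-3303-4c84-b708-eb35a65082b6'})))

The iface-id in external_ids is what ovn-controller matches against logical ports, which triggers the "Claiming lport" lines further down.
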
Dec 06 07:29:47 compute-1 NetworkManager[49031]: <info>  [1765006187.1110] manager: (tap06cbbb2f-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.110 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.113 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.118 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.118 226109 INFO os_vif [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb')
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.378 226109 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.379 226109 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.379 226109 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No VIF found with MAC fa:16:3e:14:2c:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.379 226109 INFO nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Using config drive
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.440 226109 DEBUG nova.network.neutron [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updated VIF entry in instance network info cache for port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.440 226109 DEBUG nova.network.neutron [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating instance_info_cache with network_info: [{"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:29:47 compute-1 kernel: tap06cbbb2f-cb: entered promiscuous mode
Dec 06 07:29:47 compute-1 NetworkManager[49031]: <info>  [1765006187.4656] manager: (tap06cbbb2f-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/209)
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.467 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 ovn_controller[130279]: 2025-12-06T07:29:47Z|00460|binding|INFO|Claiming lport 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for this chassis.
Dec 06 07:29:47 compute-1 ovn_controller[130279]: 2025-12-06T07:29:47Z|00461|binding|INFO|06cbbb2f-cba8-4b99-b2dd-778c71df2d23: Claiming fa:16:3e:14:2c:72 10.100.0.7
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.474 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.477 226109 DEBUG oslo_concurrency.lockutils [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.494 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 NetworkManager[49031]: <info>  [1765006187.4946] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Dec 06 07:29:47 compute-1 NetworkManager[49031]: <info>  [1765006187.4952] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.497 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:2c:72 10.100.0.7'], port_security=['fa:16:3e:14:2c:72 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6ee4f2f5-3303-4c84-b708-eb35a65082b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '6', 'neutron:security_group_ids': '56e13d32-a2bf-49aa-a4ac-9182c3684195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=06cbbb2f-cba8-4b99-b2dd-778c71df2d23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
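
The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's row-event machinery: the metadata agent registers RowEvent subclasses against the OVN southbound tables, and a match means a row change passed the subclass's filter. A stripped-down sketch of that pattern (the class body is illustrative, not neutron's actual code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        """Fire when a logical port has just been bound to a chassis."""

        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Old row had no chassis, new row does: the port was claimed.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print('port %s bound on this chassis' % row.logical_port)

Here the handler's reaction is the "bound to our chassis" / "Provisioning metadata" pair logged just below.
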
Dec 06 07:29:47 compute-1 systemd-machined[190302]: New machine qemu-52-instance-00000070.
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.498 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e bound to our chassis
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.499 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.510 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8b07c301-ed5d-469d-b667-04c1b5a3d2ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.510 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6209aab-d1 in ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.513 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6209aab-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.513 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[962830be-ed88-41fb-9f30-03f186c2929d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.513 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[db757457-c36d-4673-8a56-6dd09758bb68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
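
The "Creating VETH tapf6209aab-d1 in ovnmeta-..." step runs through neutron's privsep daemon, which drives pyroute2 under the hood (hence the RTM_NEWLINK/RTM_NEWADDR dumps in the replies that follow). A rough standalone pyroute2 equivalent, skipping the existence checks and error handling the agent layers on top:

    from pyroute2 import IPRoute, NetNS, netns

    NS = 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e'
    netns.create(NS)  # the agent reuses the namespace if it already exists

    ip = IPRoute()
    # -d0 stays in the root namespace (plugged into br-int further down);
    # -d1 is moved into the ovnmeta namespace for the metadata proxy.
    ip.link('add', ifname='tapf6209aab-d0', kind='veth',
            peer='tapf6209aab-d1')
    ip.link('set', index=ip.link_lookup(ifname='tapf6209aab-d1')[0],
            net_ns_fd=NS)
    ip.link('set', index=ip.link_lookup(ifname='tapf6209aab-d0')[0],
            state='up')
    ip.close()

    ns = NetNS(NS)
    ns.link('set', index=ns.link_lookup(ifname='tapf6209aab-d1')[0],
            state='up')
    ns.close()
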
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.525 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[c7696a64-2557-47ba-bc8f-0ff5bbd0a451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 systemd[1]: Started Virtual Machine qemu-52-instance-00000070.
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.552 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5986c5c1-f6d4-4f5e-a7b4-8aedb9d4801d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 systemd-udevd[269382]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:29:47 compute-1 NetworkManager[49031]: <info>  [1765006187.5728] device (tap06cbbb2f-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:29:47 compute-1 NetworkManager[49031]: <info>  [1765006187.5736] device (tap06cbbb2f-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.588 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f52e50-7d40-4b39-8891-bf80c0fbedbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 systemd-udevd[269387]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:29:47 compute-1 NetworkManager[49031]: <info>  [1765006187.6007] manager: (tapf6209aab-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/212)
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.599 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cc74cdbf-d908-4545-9e76-dfc77eb874e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.638 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fda598-3c97-4b3a-898c-c508c2155410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.644 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a44b6b-9c36-44cc-93fd-7365c6ef55c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 NetworkManager[49031]: <info>  [1765006187.6708] device (tapf6209aab-d0): carrier: link connected
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.677 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0842d0-3e30-43a3-9796-fa2fe98d9af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.695 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8f2ad3-0c09-401a-969f-fbc7c453194c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646717, 'reachable_time': 43164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269412, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.697 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.718 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7dfffb7a-8a49-4437-8395-3f1b53215c32]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:c5a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646717, 'tstamp': 646717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269413, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.724 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 ovn_controller[130279]: 2025-12-06T07:29:47Z|00462|binding|INFO|Setting lport 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 ovn-installed in OVS
Dec 06 07:29:47 compute-1 ovn_controller[130279]: 2025-12-06T07:29:47Z|00463|binding|INFO|Setting lport 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 up in Southbound
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.736 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.740 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3b43edca-0c15-47bd-b4b3-c0c02a45c793]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646717, 'reachable_time': 43164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269414, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.775 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8529744d-d2ac-4e1a-a9af-5cd85cebb3ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.834 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[553590c5-1fcd-4f4b-abb5-fdfbd196df4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.834 226109 DEBUG nova.network.neutron [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Updating instance_info_cache with network_info: [{"id": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "address": "fa:16:3e:c4:93:98", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c48cbf0-42", "ovs_interfaceid": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.836 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.836 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.836 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6209aab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:47 compute-1 kernel: tapf6209aab-d0: entered promiscuous mode
Dec 06 07:29:47 compute-1 NetworkManager[49031]: <info>  [1765006187.8563] manager: (tapf6209aab-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.855 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.858 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.859 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6209aab-d0, col_values=(('external_ids', {'iface-id': '1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:47 compute-1 ovn_controller[130279]: 2025-12-06T07:29:47Z|00464|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.860 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.862 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Releasing lock "refresh_cache-11e39188-38c3-4fc8-99a9-769e536f649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.862 226109 DEBUG nova.compute.manager [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Instance network_info: |[{"id": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "address": "fa:16:3e:c4:93:98", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c48cbf0-42", "ovs_interfaceid": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.863 226109 DEBUG oslo_concurrency.lockutils [req-cbe615a2-28d5-469f-9087-1367ad3b106c req-ca29ea28-efec-4bb3-babf-9b321e61f906 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-11e39188-38c3-4fc8-99a9-769e536f649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.863 226109 DEBUG nova.network.neutron [req-cbe615a2-28d5-469f-9087-1367ad3b106c req-ca29ea28-efec-4bb3-babf-9b321e61f906 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Refreshing network info cache for port 5c48cbf0-42a6-4bcd-97d6-a16936750d94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.865 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Start _get_guest_xml network_info=[{"id": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "address": "fa:16:3e:c4:93:98", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c48cbf0-42", "ovs_interfaceid": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.869 226109 WARNING nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.874 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.875 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.875 226109 DEBUG nova.virt.libvirt.host [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.876 226109 DEBUG nova.virt.libvirt.host [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.876 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[74767b70-336d-49c8-8e7a-12d550f731df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.877 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
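
The rendered haproxy config binds 169.254.169.254:80 inside the ovnmeta namespace and relays to the agent's unix socket at /var/lib/neutron/metadata_proxy, stamping X-OVN-Network-ID so the agent can map the request back to network f6209aab-d53f-4d58-9b94-ffb7adc6239e. From the guest's side the exchange is just an HTTP GET against the link-local address; a sketch with the standard OpenStack metadata path:

    import urllib.request

    # Run from inside the guest; the veth/haproxy plumbing above is what
    # makes this link-local address answer. The proxy injects the
    # X-OVN-Network-ID header before relaying to the metadata agent.
    url = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(resp.read().decode())
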
Dec 06 07:29:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:47.878 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'env', 'PROCESS_TAG=haproxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6209aab-d53f-4d58-9b94-ffb7adc6239e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.880 226109 DEBUG nova.virt.libvirt.host [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.880 226109 DEBUG nova.virt.libvirt.host [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.881 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.881 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.882 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.882 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.882 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.882 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.882 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.883 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.883 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.883 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.883 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.884 226109 DEBUG nova.virt.hardware [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
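
The topology lines above show nova.virt.hardware enumerating (sockets, cores, threads) factorizations of the flavor's vCPU count within the 65536-per-dimension limits; with one vCPU the only candidate is 1:1:1. A simplified sketch of that enumeration (nova's real code also applies preference ordering between sockets, cores and threads):

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product == vcpus."""
        for s, c, t in itertools.product(
                range(1, min(vcpus, max_sockets) + 1),
                range(1, min(vcpus, max_cores) + 1),
                range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
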
Dec 06 07:29:47 compute-1 nova_compute[226101]: 2025-12-06 07:29:47.889 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
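
The "ceph mon dump --format=json" call (it returns below after 0.662s) is how nova's rbd_utils discovers monitor addresses when assembling the RBD disk definition. Parsing the same output out-of-band, roughly (whether entries carry 'public_addr' or an addrvec under 'public_addrs' depends on the Ceph release):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for mon in json.loads(out)['mons']:
        # The human-readable "dumped monmap" notice goes to stderr,
        # so stdout is pure JSON.
        print(mon['name'], mon.get('public_addrs') or mon.get('public_addr'))
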
Dec 06 07:29:48 compute-1 podman[269502]: 2025-12-06 07:29:48.247831421 +0000 UTC m=+0.068193040 container create 453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 07:29:48 compute-1 systemd[1]: Started libpod-conmon-453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6.scope.
Dec 06 07:29:48 compute-1 podman[269502]: 2025-12-06 07:29:48.205651616 +0000 UTC m=+0.026013265 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:29:48 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:29:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f79c7265a9bba20fb6af056dcd03dd7592a1f8f3af66afcb41b696c654918c7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:29:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3309389964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:48 compute-1 podman[269502]: 2025-12-06 07:29:48.344199231 +0000 UTC m=+0.164560890 container init 453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:29:48 compute-1 podman[269502]: 2025-12-06 07:29:48.349530743 +0000 UTC m=+0.169892372 container start 453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 07:29:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:48.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:48.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:48 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[269517]: [NOTICE]   (269522) : New worker (269524) forked
Dec 06 07:29:48 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[269517]: [NOTICE]   (269522) : Loading success.
Dec 06 07:29:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2507357678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.551 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.581 226109 DEBUG nova.storage.rbd_utils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 11e39188-38c3-4fc8-99a9-769e536f649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.585 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.610 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006188.6040792, 6ee4f2f5-3303-4c84-b708-eb35a65082b6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.611 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] VM Resumed (Lifecycle Event)
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.614 226109 DEBUG nova.compute.manager [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.617 226109 INFO nova.virt.libvirt.driver [-] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Instance running successfully.
Dec 06 07:29:48 compute-1 virtqemud[225710]: argument unsupported: QEMU guest agent is not configured
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.619 226109 DEBUG nova.virt.libvirt.guest [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.620 226109 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.907 226109 DEBUG nova.compute.manager [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.910 226109 DEBUG oslo_concurrency.lockutils [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.910 226109 DEBUG oslo_concurrency.lockutils [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.910 226109 DEBUG oslo_concurrency.lockutils [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.911 226109 DEBUG nova.compute.manager [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] No waiting events found dispatching network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.911 226109 WARNING nova.compute.manager [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received unexpected event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for instance with vm_state active and task_state resize_finish.
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.946 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.951 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.985 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.986 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006188.605233, 6ee4f2f5-3303-4c84-b708-eb35a65082b6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:29:48 compute-1 nova_compute[226101]: 2025-12-06 07:29:48.986 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] VM Started (Lifecycle Event)
Dec 06 07:29:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:29:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3301973627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.148 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.151 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.221 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.257 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.260 226109 DEBUG nova.virt.libvirt.vif [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:29:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1174437944',display_name='tempest-ServerDiskConfigTestJSON-server-1174437944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1174437944',id=113,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-s7qt4ynu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:29:43Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=11e39188-38c3-4fc8-99a9-769e536f649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "address": "fa:16:3e:c4:93:98", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c48cbf0-42", "ovs_interfaceid": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.261 226109 DEBUG nova.network.os_vif_util [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "address": "fa:16:3e:c4:93:98", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c48cbf0-42", "ovs_interfaceid": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.262 226109 DEBUG nova.network.os_vif_util [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:93:98,bridge_name='br-int',has_traffic_filtering=True,id=5c48cbf0-42a6-4bcd-97d6-a16936750d94,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c48cbf0-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.263 226109 DEBUG nova.objects.instance [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_devices' on Instance uuid 11e39188-38c3-4fc8-99a9-769e536f649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.295 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:29:49 compute-1 nova_compute[226101]:   <uuid>11e39188-38c3-4fc8-99a9-769e536f649d</uuid>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   <name>instance-00000071</name>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1174437944</nova:name>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:29:47</nova:creationTime>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <nova:user uuid="d67c136e82ad4001b000848d75eef50d">tempest-ServerDiskConfigTestJSON-749654875-project-member</nova:user>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <nova:project uuid="88f5b34244614321a9b6e902eaba0ece">tempest-ServerDiskConfigTestJSON-749654875</nova:project>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <nova:port uuid="5c48cbf0-42a6-4bcd-97d6-a16936750d94">
Dec 06 07:29:49 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <system>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <entry name="serial">11e39188-38c3-4fc8-99a9-769e536f649d</entry>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <entry name="uuid">11e39188-38c3-4fc8-99a9-769e536f649d</entry>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     </system>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   <os>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   </os>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   <features>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   </features>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/11e39188-38c3-4fc8-99a9-769e536f649d_disk">
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       </source>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/11e39188-38c3-4fc8-99a9-769e536f649d_disk.config">
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       </source>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:29:49 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:c4:93:98"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <target dev="tap5c48cbf0-42"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/11e39188-38c3-4fc8-99a9-769e536f649d/console.log" append="off"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <video>
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     </video>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:29:49 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:29:49 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:29:49 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:29:49 compute-1 nova_compute[226101]: </domain>
Dec 06 07:29:49 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
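
The XML above is nova's generated guest definition. Once the domain is defined, the same document can be read back through libvirt; a minimal sketch, assuming the libvirt-python bindings and a local qemu:///system connection, that fetches the XML for instance-00000071 and lists the RBD disk sources and their monitor hosts from the dump:

    import libvirt
    from xml.etree import ElementTree as ET

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName("instance-00000071")
        root = ET.fromstring(dom.XMLDesc())
        # Match the <disk><source protocol="rbd"> elements shown above.
        for src in root.findall("./devices/disk/source[@protocol='rbd']"):
            hosts = [h.get("name") for h in src.findall("host")]
            print(src.get("name"), hosts)
    finally:
        conn.close()
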
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.297 226109 DEBUG nova.compute.manager [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Preparing to wait for external event network-vif-plugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.297 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.297 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.297 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.298 226109 DEBUG nova.virt.libvirt.vif [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:29:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1174437944',display_name='tempest-ServerDiskConfigTestJSON-server-1174437944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1174437944',id=113,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-s7qt4ynu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:29:43Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=11e39188-38c3-4fc8-99a9-769e536f649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "address": "fa:16:3e:c4:93:98", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c48cbf0-42", "ovs_interfaceid": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.299 226109 DEBUG nova.network.os_vif_util [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "address": "fa:16:3e:c4:93:98", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c48cbf0-42", "ovs_interfaceid": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.299 226109 DEBUG nova.network.os_vif_util [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:93:98,bridge_name='br-int',has_traffic_filtering=True,id=5c48cbf0-42a6-4bcd-97d6-a16936750d94,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c48cbf0-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.300 226109 DEBUG os_vif [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:93:98,bridge_name='br-int',has_traffic_filtering=True,id=5c48cbf0-42a6-4bcd-97d6-a16936750d94,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c48cbf0-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.300 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.301 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.301 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.305 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.306 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c48cbf0-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.307 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5c48cbf0-42, col_values=(('external_ids', {'iface-id': '5c48cbf0-42a6-4bcd-97d6-a16936750d94', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:93:98', 'vm-uuid': '11e39188-38c3-4fc8-99a9-769e536f649d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.308 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:49 compute-1 NetworkManager[49031]: <info>  [1765006189.3095] manager: (tap5c48cbf0-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.312 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.317 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.318 226109 INFO os_vif [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:93:98,bridge_name='br-int',has_traffic_filtering=True,id=5c48cbf0-42a6-4bcd-97d6-a16936750d94,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c48cbf0-42')
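
The ovsdbapp transactions above (AddBridgeCommand with may_exist=True, then AddPortCommand plus a DbSetCommand on the Interface row) map onto a plain ovs-vsctl sequence. A sketch of the equivalent commands, with every value copied from the log records; this is an illustration of what os-vif committed, not nova's own code path:

    import subprocess

    PORT = "tap5c48cbf0-42"

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-br", "br-int", "--",
         "set", "bridge", "br-int", "datapath_type=system"], check=True)

    # AddPortCommand + DbSetCommand(external_ids on the Interface row)
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT, "--",
         "set", "Interface", PORT,
         "external_ids:iface-id=5c48cbf0-42a6-4bcd-97d6-a16936750d94",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:c4:93:98",
         "external_ids:vm-uuid=11e39188-38c3-4fc8-99a9-769e536f649d"],
        check=True)
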
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.565 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.566 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.566 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No VIF found with MAC fa:16:3e:c4:93:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.567 226109 INFO nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Using config drive
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.596 226109 DEBUG nova.storage.rbd_utils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 11e39188-38c3-4fc8-99a9-769e536f649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.683 226109 DEBUG nova.network.neutron [req-cbe615a2-28d5-469f-9087-1367ad3b106c req-ca29ea28-efec-4bb3-babf-9b321e61f906 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Updated VIF entry in instance network info cache for port 5c48cbf0-42a6-4bcd-97d6-a16936750d94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.684 226109 DEBUG nova.network.neutron [req-cbe615a2-28d5-469f-9087-1367ad3b106c req-ca29ea28-efec-4bb3-babf-9b321e61f906 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Updating instance_info_cache with network_info: [{"id": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "address": "fa:16:3e:c4:93:98", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c48cbf0-42", "ovs_interfaceid": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:29:49 compute-1 nova_compute[226101]: 2025-12-06 07:29:49.700 226109 DEBUG oslo_concurrency.lockutils [req-cbe615a2-28d5-469f-9087-1367ad3b106c req-ca29ea28-efec-4bb3-babf-9b321e61f906 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-11e39188-38c3-4fc8-99a9-769e536f649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:29:49 compute-1 ceph-mon[81689]: pgmap v2173: 305 pgs: 305 active+clean; 498 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 535 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 06 07:29:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3309389964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3301973627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:50 compute-1 nova_compute[226101]: 2025-12-06 07:29:50.189 226109 INFO nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Creating config drive at /var/lib/nova/instances/11e39188-38c3-4fc8-99a9-769e536f649d/disk.config
Dec 06 07:29:50 compute-1 nova_compute[226101]: 2025-12-06 07:29:50.194 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11e39188-38c3-4fc8-99a9-769e536f649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpotldx5ek execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:50 compute-1 nova_compute[226101]: 2025-12-06 07:29:50.331 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11e39188-38c3-4fc8-99a9-769e536f649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpotldx5ek" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:50.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:50 compute-1 nova_compute[226101]: 2025-12-06 07:29:50.364 226109 DEBUG nova.storage.rbd_utils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 11e39188-38c3-4fc8-99a9-769e536f649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:29:50 compute-1 nova_compute[226101]: 2025-12-06 07:29:50.369 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11e39188-38c3-4fc8-99a9-769e536f649d/disk.config 11e39188-38c3-4fc8-99a9-769e536f649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:51 compute-1 nova_compute[226101]: 2025-12-06 07:29:51.280 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:51 compute-1 nova_compute[226101]: 2025-12-06 07:29:51.679 226109 DEBUG oslo_concurrency.processutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11e39188-38c3-4fc8-99a9-769e536f649d/disk.config 11e39188-38c3-4fc8-99a9-769e536f649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:51 compute-1 nova_compute[226101]: 2025-12-06 07:29:51.679 226109 INFO nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Deleting local config drive /var/lib/nova/instances/11e39188-38c3-4fc8-99a9-769e536f649d/disk.config because it was imported into RBD.
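
After the rbd import returns 0 and the local file is deleted, the config drive exists only in the vms pool. A sketch of verifying the imported image, assuming the rados/rbd Python bindings shipped with Ceph and the same client identity seen in the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            # Expect 11e39188-38c3-4fc8-99a9-769e536f649d_disk.config
            # among the images after the import above.
            images = rbd.RBD().list(ioctx)
            print([n for n in images if n.endswith("_disk.config")])
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
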
Dec 06 07:29:51 compute-1 ceph-mon[81689]: pgmap v2174: 305 pgs: 305 active+clean; 498 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 535 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 06 07:29:51 compute-1 NetworkManager[49031]: <info>  [1765006191.7281] manager: (tap5c48cbf0-42): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Dec 06 07:29:51 compute-1 kernel: tap5c48cbf0-42: entered promiscuous mode
Dec 06 07:29:51 compute-1 nova_compute[226101]: 2025-12-06 07:29:51.735 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:51 compute-1 ovn_controller[130279]: 2025-12-06T07:29:51Z|00465|binding|INFO|Claiming lport 5c48cbf0-42a6-4bcd-97d6-a16936750d94 for this chassis.
Dec 06 07:29:51 compute-1 ovn_controller[130279]: 2025-12-06T07:29:51Z|00466|binding|INFO|5c48cbf0-42a6-4bcd-97d6-a16936750d94: Claiming fa:16:3e:c4:93:98 10.100.0.6
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.746 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:93:98 10.100.0.6'], port_security=['fa:16:3e:c4:93:98 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '11e39188-38c3-4fc8-99a9-769e536f649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '2', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=5c48cbf0-42a6-4bcd-97d6-a16936750d94) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.748 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 5c48cbf0-42a6-4bcd-97d6-a16936750d94 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a bound to our chassis
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.750 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:29:51 compute-1 ovn_controller[130279]: 2025-12-06T07:29:51Z|00467|binding|INFO|Setting lport 5c48cbf0-42a6-4bcd-97d6-a16936750d94 ovn-installed in OVS
Dec 06 07:29:51 compute-1 ovn_controller[130279]: 2025-12-06T07:29:51Z|00468|binding|INFO|Setting lport 5c48cbf0-42a6-4bcd-97d6-a16936750d94 up in Southbound
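
The claim/up transitions ovn-controller logs above land in the Southbound Port_Binding table and can be checked from the chassis. A sketch, assuming ovn-sbctl is installed and can reach the Southbound database:

    import subprocess

    LPORT = "5c48cbf0-42a6-4bcd-97d6-a16936750d94"

    # After "Setting lport ... up in Southbound", up should read [true]
    # and chassis should reference compute-1's Chassis record.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True).stdout
    print(out)
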
Dec 06 07:29:51 compute-1 nova_compute[226101]: 2025-12-06 07:29:51.755 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.762 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a88ef125-d976-4f94-a3a2-18a696c0d124]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.763 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c014e4e-a1 in ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
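
Here the metadata agent is building a veth pair with one end inside the per-network ovnmeta namespace. A sketch of inspecting that namespace once provisioning completes, assuming iproute2 and the namespace name from the record above:

    import subprocess

    NS = "ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a"

    # List links and addresses inside the metadata namespace; the inner
    # veth end (tap7c014e4e-a1) should show up there once provisioned.
    subprocess.run(["ip", "netns", "exec", NS, "ip", "addr", "show"],
                   check=True)
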
Dec 06 07:29:51 compute-1 systemd-udevd[269655]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.765 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c014e4e-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.765 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cc18622a-5d98-40ec-8c19-91c21eb12fdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 systemd-machined[190302]: New machine qemu-53-instance-00000071.
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.766 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e5fa57fb-101c-4d3b-a43e-bad43b5b3045]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 systemd[1]: Started Virtual Machine qemu-53-instance-00000071.
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.777 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[dd60479b-27ea-4040-9e31-54b206bdab3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 NetworkManager[49031]: <info>  [1765006191.7810] device (tap5c48cbf0-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:29:51 compute-1 NetworkManager[49031]: <info>  [1765006191.7820] device (tap5c48cbf0-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.805 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dc18dbce-73d2-4f1b-b4ed-cad801887f8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.835 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[aaefe859-6f6e-4301-9652-048657d4ba4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 NetworkManager[49031]: <info>  [1765006191.8472] manager: (tap7c014e4e-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Dec 06 07:29:51 compute-1 systemd-udevd[269658]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.846 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fddc9764-b98f-4faa-9bd8-595f2a9d3432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.879 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf7702e-ea69-4a1e-808e-1361162c255d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.883 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[4aba8347-bd0a-42c5-a373-e05cf2cc3c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 NetworkManager[49031]: <info>  [1765006191.9070] device (tap7c014e4e-a0): carrier: link connected
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.914 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd2e864-4492-4243-ab38-8b7d03ef4613]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.932 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dd662bde-8780-495c-bc2d-a91558504de9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647141, 'reachable_time': 40755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269687, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.946 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[eb595bdc-cbb4-4806-9834-e83cf677ffd6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:141c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647141, 'tstamp': 647141}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269688, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.963 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[316696b9-fcaf-4a52-b3fe-23933e874a97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647141, 'reachable_time': 40755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269689, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:51.991 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ed485d6b-58f8-46b4-aeda-04f684a5a8da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:52.045 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9155d7d9-6cb0-4ddb-b9e5-6a0c9060a556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:52.046 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:52.046 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:52.047 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c014e4e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:52 compute-1 NetworkManager[49031]: <info>  [1765006192.0495] manager: (tap7c014e4e-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.049 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:52 compute-1 kernel: tap7c014e4e-a0: entered promiscuous mode
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:52.053 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c014e4e-a0, col_values=(('external_ids', {'iface-id': 'd8dd1a7d-045a-42a3-8829-567c43985ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:52 compute-1 ovn_controller[130279]: 2025-12-06T07:29:52Z|00469|binding|INFO|Releasing lport d8dd1a7d-045a-42a3-8829-567c43985ae0 from this chassis (sb_readonly=0)
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:52.056 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:52.057 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[99107cb0-fa8e-469c-a68b-f67b896a10b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:52.058 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:29:52 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:29:52.059 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'env', 'PROCESS_TAG=haproxy-7c014e4e-a182-4f60-8285-20525bc99e5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c014e4e-a182-4f60-8285-20525bc99e5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.070 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.358 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006192.358244, 11e39188-38c3-4fc8-99a9-769e536f649d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.359 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] VM Started (Lifecycle Event)
Dec 06 07:29:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:52.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:52.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.389 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.394 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006192.3584394, 11e39188-38c3-4fc8-99a9-769e536f649d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.394 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] VM Paused (Lifecycle Event)
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.421 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.425 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:29:52 compute-1 podman[269762]: 2025-12-06 07:29:52.455931328 +0000 UTC m=+0.050356433 container create 00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.467 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:29:52 compute-1 systemd[1]: Started libpod-conmon-00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256.scope.
Dec 06 07:29:52 compute-1 podman[269762]: 2025-12-06 07:29:52.428573658 +0000 UTC m=+0.022998793 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:29:52 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.531 226109 DEBUG nova.compute.manager [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.532 226109 DEBUG oslo_concurrency.lockutils [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.533 226109 DEBUG oslo_concurrency.lockutils [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f10da0e0f42b8c67b22249d27cdabc5a697d8d95ea6c02f36ee0459a7f8e7d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.533 226109 DEBUG oslo_concurrency.lockutils [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.533 226109 DEBUG nova.compute.manager [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] No waiting events found dispatching network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:52 compute-1 nova_compute[226101]: 2025-12-06 07:29:52.533 226109 WARNING nova.compute.manager [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received unexpected event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for instance with vm_state resized and task_state None.
Dec 06 07:29:52 compute-1 podman[269762]: 2025-12-06 07:29:52.545019354 +0000 UTC m=+0.139444479 container init 00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:29:52 compute-1 podman[269762]: 2025-12-06 07:29:52.552435281 +0000 UTC m=+0.146860386 container start 00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:29:52 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[269777]: [NOTICE]   (269781) : New worker (269783) forked
Dec 06 07:29:52 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[269777]: [NOTICE]   (269781) : Loading success.
Dec 06 07:29:53 compute-1 ceph-mon[81689]: pgmap v2175: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 178 op/s
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.288 226109 DEBUG nova.compute.manager [req-7860e61d-2cde-4bb3-8275-2fb664c27c4c req-8f3d7e5c-6273-4a87-8ecb-b7fc8567e6e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Received event network-vif-plugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.289 226109 DEBUG oslo_concurrency.lockutils [req-7860e61d-2cde-4bb3-8275-2fb664c27c4c req-8f3d7e5c-6273-4a87-8ecb-b7fc8567e6e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.289 226109 DEBUG oslo_concurrency.lockutils [req-7860e61d-2cde-4bb3-8275-2fb664c27c4c req-8f3d7e5c-6273-4a87-8ecb-b7fc8567e6e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.289 226109 DEBUG oslo_concurrency.lockutils [req-7860e61d-2cde-4bb3-8275-2fb664c27c4c req-8f3d7e5c-6273-4a87-8ecb-b7fc8567e6e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.289 226109 DEBUG nova.compute.manager [req-7860e61d-2cde-4bb3-8275-2fb664c27c4c req-8f3d7e5c-6273-4a87-8ecb-b7fc8567e6e5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Processing event network-vif-plugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.290 226109 DEBUG nova.compute.manager [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.294 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006194.2939556, 11e39188-38c3-4fc8-99a9-769e536f649d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.294 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] VM Resumed (Lifecycle Event)
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.298 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.303 226109 INFO nova.virt.libvirt.driver [-] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Instance spawned successfully.
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.303 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.310 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.325 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.326 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.326 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.327 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.327 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.327 226109 DEBUG nova.virt.libvirt.driver [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.331 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.335 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:29:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:54.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:54.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.367 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.428 226109 INFO nova.compute.manager [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Took 11.21 seconds to spawn the instance on the hypervisor.
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.428 226109 DEBUG nova.compute.manager [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.492 226109 INFO nova.compute.manager [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Took 12.60 seconds to build instance.
Dec 06 07:29:54 compute-1 nova_compute[226101]: 2025-12-06 07:29:54.534 226109 DEBUG oslo_concurrency.lockutils [None req-6013be11-1df3-4410-91dd-54b530a84a8d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:54 compute-1 sudo[269792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:54 compute-1 sudo[269792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:54 compute-1 sudo[269792]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:54 compute-1 sudo[269817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:29:54 compute-1 sudo[269817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:54 compute-1 sudo[269817]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:55 compute-1 ceph-mon[81689]: pgmap v2176: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Dec 06 07:29:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:56 compute-1 nova_compute[226101]: 2025-12-06 07:29:56.282 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:29:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:56.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:56 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:56.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:56 compute-1 nova_compute[226101]: 2025-12-06 07:29:56.775 226109 DEBUG nova.compute.manager [req-16c8e848-7156-4655-b767-9e53c879390d req-ccd03c3c-0a07-4134-b7ac-6472f293edd8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Received event network-vif-plugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:56 compute-1 nova_compute[226101]: 2025-12-06 07:29:56.775 226109 DEBUG oslo_concurrency.lockutils [req-16c8e848-7156-4655-b767-9e53c879390d req-ccd03c3c-0a07-4134-b7ac-6472f293edd8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:56 compute-1 nova_compute[226101]: 2025-12-06 07:29:56.776 226109 DEBUG oslo_concurrency.lockutils [req-16c8e848-7156-4655-b767-9e53c879390d req-ccd03c3c-0a07-4134-b7ac-6472f293edd8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:56 compute-1 nova_compute[226101]: 2025-12-06 07:29:56.776 226109 DEBUG oslo_concurrency.lockutils [req-16c8e848-7156-4655-b767-9e53c879390d req-ccd03c3c-0a07-4134-b7ac-6472f293edd8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:56 compute-1 nova_compute[226101]: 2025-12-06 07:29:56.776 226109 DEBUG nova.compute.manager [req-16c8e848-7156-4655-b767-9e53c879390d req-ccd03c3c-0a07-4134-b7ac-6472f293edd8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] No waiting events found dispatching network-vif-plugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:56 compute-1 nova_compute[226101]: 2025-12-06 07:29:56.776 226109 WARNING nova.compute.manager [req-16c8e848-7156-4655-b767-9e53c879390d req-ccd03c3c-0a07-4134-b7ac-6472f293edd8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Received unexpected event network-vif-plugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 for instance with vm_state active and task_state None.
Dec 06 07:29:57 compute-1 ceph-mon[81689]: pgmap v2177: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Dec 06 07:29:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:29:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:29:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:58 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:58.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1937180653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:58 compute-1 ceph-mon[81689]: pgmap v2178: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 129 op/s
Dec 06 07:29:59 compute-1 nova_compute[226101]: 2025-12-06 07:29:59.313 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:00 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:00.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:00.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:00 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 07:30:00 compute-1 ceph-mon[81689]: pgmap v2179: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 129 op/s
Dec 06 07:30:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:01 compute-1 nova_compute[226101]: 2025-12-06 07:30:01.285 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:01.647 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:01.648 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:01.649 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:01 compute-1 nova_compute[226101]: 2025-12-06 07:30:01.956 226109 INFO nova.compute.manager [None req-47294d97-345f-4bf0-b1f4-c59f4281d9cb baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Get console output
Dec 06 07:30:01 compute-1 nova_compute[226101]: 2025-12-06 07:30:01.961 226109 INFO oslo.privsep.daemon [None req-47294d97-345f-4bf0-b1f4-c59f4281d9cb baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpsxlr09hr/privsep.sock']
Dec 06 07:30:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:02 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:02.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:02.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:02 compute-1 nova_compute[226101]: 2025-12-06 07:30:02.688 226109 INFO oslo.privsep.daemon [None req-47294d97-345f-4bf0-b1f4-c59f4281d9cb baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Spawned new privsep daemon via rootwrap
Dec 06 07:30:02 compute-1 nova_compute[226101]: 2025-12-06 07:30:02.555 269846 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 07:30:02 compute-1 nova_compute[226101]: 2025-12-06 07:30:02.560 269846 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 07:30:02 compute-1 nova_compute[226101]: 2025-12-06 07:30:02.563 269846 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 06 07:30:02 compute-1 nova_compute[226101]: 2025-12-06 07:30:02.563 269846 INFO oslo.privsep.daemon [-] privsep daemon running as pid 269846
Dec 06 07:30:02 compute-1 ceph-mon[81689]: pgmap v2180: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 24 KiB/s wr, 167 op/s
Dec 06 07:30:03 compute-1 ovn_controller[130279]: 2025-12-06T07:30:03Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:2c:72 10.100.0.7
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.316 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:30:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:30:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:04.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:30:04 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:04.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:30:04 compute-1 ceph-mon[81689]: pgmap v2181: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.584 226109 DEBUG oslo_concurrency.lockutils [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "11e39188-38c3-4fc8-99a9-769e536f649d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.585 226109 DEBUG oslo_concurrency.lockutils [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.585 226109 DEBUG oslo_concurrency.lockutils [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.585 226109 DEBUG oslo_concurrency.lockutils [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.586 226109 DEBUG oslo_concurrency.lockutils [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.587 226109 INFO nova.compute.manager [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Terminating instance
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.588 226109 DEBUG nova.compute.manager [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:30:04 compute-1 kernel: tap5c48cbf0-42 (unregistering): left promiscuous mode
Dec 06 07:30:04 compute-1 NetworkManager[49031]: <info>  [1765006204.6349] device (tap5c48cbf0-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.642 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:04 compute-1 ovn_controller[130279]: 2025-12-06T07:30:04Z|00470|binding|INFO|Releasing lport 5c48cbf0-42a6-4bcd-97d6-a16936750d94 from this chassis (sb_readonly=0)
Dec 06 07:30:04 compute-1 ovn_controller[130279]: 2025-12-06T07:30:04Z|00471|binding|INFO|Setting lport 5c48cbf0-42a6-4bcd-97d6-a16936750d94 down in Southbound
Dec 06 07:30:04 compute-1 ovn_controller[130279]: 2025-12-06T07:30:04Z|00472|binding|INFO|Removing iface tap5c48cbf0-42 ovn-installed in OVS
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.659 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:04 compute-1 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000071.scope: Deactivated successfully.
Dec 06 07:30:04 compute-1 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000071.scope: Consumed 11.012s CPU time.
Dec 06 07:30:04 compute-1 systemd-machined[190302]: Machine qemu-53-instance-00000071 terminated.
Dec 06 07:30:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:04.709 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:93:98 10.100.0.6'], port_security=['fa:16:3e:c4:93:98 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '11e39188-38c3-4fc8-99a9-769e536f649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '4', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=5c48cbf0-42a6-4bcd-97d6-a16936750d94) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:30:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:04.711 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 5c48cbf0-42a6-4bcd-97d6-a16936750d94 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a unbound from our chassis
Dec 06 07:30:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:04.713 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c014e4e-a182-4f60-8285-20525bc99e5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:30:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:04.714 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5a55fc-0334-46b1-9f04-bd331c860971]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:04.715 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace which is not needed anymore
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.829 226109 INFO nova.virt.libvirt.driver [-] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Instance destroyed successfully.
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.829 226109 DEBUG nova.objects.instance [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'resources' on Instance uuid 11e39188-38c3-4fc8-99a9-769e536f649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:04 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[269777]: [NOTICE]   (269781) : haproxy version is 2.8.14-c23fe91
Dec 06 07:30:04 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[269777]: [NOTICE]   (269781) : path to executable is /usr/sbin/haproxy
Dec 06 07:30:04 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[269777]: [WARNING]  (269781) : Exiting Master process...
Dec 06 07:30:04 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[269777]: [ALERT]    (269781) : Current worker (269783) exited with code 143 (Terminated)
Dec 06 07:30:04 compute-1 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[269777]: [WARNING]  (269781) : All workers exited. Exiting... (0)
Dec 06 07:30:04 compute-1 systemd[1]: libpod-00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256.scope: Deactivated successfully.
Dec 06 07:30:04 compute-1 podman[269871]: 2025-12-06 07:30:04.860231664 +0000 UTC m=+0.048164976 container died 00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:30:04 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256-userdata-shm.mount: Deactivated successfully.
Dec 06 07:30:04 compute-1 systemd[1]: var-lib-containers-storage-overlay-1f10da0e0f42b8c67b22249d27cdabc5a697d8d95ea6c02f36ee0459a7f8e7d5-merged.mount: Deactivated successfully.
Dec 06 07:30:04 compute-1 podman[269871]: 2025-12-06 07:30:04.908725717 +0000 UTC m=+0.096659029 container cleanup 00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:30:04 compute-1 systemd[1]: libpod-conmon-00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256.scope: Deactivated successfully.
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.963 226109 DEBUG nova.virt.libvirt.vif [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:29:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1174437944',display_name='tempest-ServerDiskConfigTestJSON-server-1174437944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1174437944',id=113,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:29:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-s7qt4ynu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:30:02Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=11e39188-38c3-4fc8-99a9-769e536f649d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "address": "fa:16:3e:c4:93:98", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c48cbf0-42", "ovs_interfaceid": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.964 226109 DEBUG nova.network.os_vif_util [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "address": "fa:16:3e:c4:93:98", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c48cbf0-42", "ovs_interfaceid": "5c48cbf0-42a6-4bcd-97d6-a16936750d94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.965 226109 DEBUG nova.network.os_vif_util [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:93:98,bridge_name='br-int',has_traffic_filtering=True,id=5c48cbf0-42a6-4bcd-97d6-a16936750d94,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c48cbf0-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.965 226109 DEBUG os_vif [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:93:98,bridge_name='br-int',has_traffic_filtering=True,id=5c48cbf0-42a6-4bcd-97d6-a16936750d94,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c48cbf0-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.966 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.967 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c48cbf0-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.968 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.970 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:04 compute-1 nova_compute[226101]: 2025-12-06 07:30:04.972 226109 INFO os_vif [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:93:98,bridge_name='br-int',has_traffic_filtering=True,id=5c48cbf0-42a6-4bcd-97d6-a16936750d94,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c48cbf0-42')
Dec 06 07:30:05 compute-1 podman[269914]: 2025-12-06 07:30:05.047756594 +0000 UTC m=+0.120229127 container remove 00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:30:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:05.055 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c75bf904-0a5d-4261-8038-8e12154fd403]: (4, ('Sat Dec  6 07:30:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256)\n00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256\nSat Dec  6 07:30:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256)\n00b1d4ccbadfc99a3fa6729d8e54154eb2bcce55bdd82796e484cb7eb6a94256\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:05.057 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9ca4139c-9f74-4b4b-8fac-44664f554d1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:05.058 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.060 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:05 compute-1 kernel: tap7c014e4e-a0: left promiscuous mode
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.073 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:05.076 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[eadf6edf-b49e-4a97-8e7e-2ff6f7541f78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:05.093 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[46260336-30dd-42b4-a698-217405742abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:05.095 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[db9cc221-06f5-46cd-87ae-f6962ab5870d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:05.115 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[352c0e29-571e-486a-8498-85782f1dfa5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647133, 'reachable_time': 23532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269945, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:05.119 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:30:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:05.119 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc48de4-78ab-4667-9d1b-d57fa002066f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:05 compute-1 systemd[1]: run-netns-ovnmeta\x2d7c014e4e\x2da182\x2d4f60\x2d8285\x2d20525bc99e5a.mount: Deactivated successfully.
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:30:05 compute-1 ceph-mgr[82049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.801 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.986 226109 DEBUG nova.compute.manager [req-db9272a5-f71e-47f0-aec9-01c85ba2fd00 req-6d98af0b-33e9-4342-be44-15633b290188 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Received event network-vif-unplugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.987 226109 DEBUG oslo_concurrency.lockutils [req-db9272a5-f71e-47f0-aec9-01c85ba2fd00 req-6d98af0b-33e9-4342-be44-15633b290188 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.987 226109 DEBUG oslo_concurrency.lockutils [req-db9272a5-f71e-47f0-aec9-01c85ba2fd00 req-6d98af0b-33e9-4342-be44-15633b290188 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.987 226109 DEBUG oslo_concurrency.lockutils [req-db9272a5-f71e-47f0-aec9-01c85ba2fd00 req-6d98af0b-33e9-4342-be44-15633b290188 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.988 226109 DEBUG nova.compute.manager [req-db9272a5-f71e-47f0-aec9-01c85ba2fd00 req-6d98af0b-33e9-4342-be44-15633b290188 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] No waiting events found dispatching network-vif-unplugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:30:05 compute-1 nova_compute[226101]: 2025-12-06 07:30:05.988 226109 DEBUG nova.compute.manager [req-db9272a5-f71e-47f0-aec9-01c85ba2fd00 req-6d98af0b-33e9-4342-be44-15633b290188 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Received event network-vif-unplugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:30:06 compute-1 nova_compute[226101]: 2025-12-06 07:30:06.011 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:30:06 compute-1 nova_compute[226101]: 2025-12-06 07:30:06.012 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:30:06 compute-1 nova_compute[226101]: 2025-12-06 07:30:06.012 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:30:06 compute-1 nova_compute[226101]: 2025-12-06 07:30:06.012 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6ee4f2f5-3303-4c84-b708-eb35a65082b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:06 compute-1 nova_compute[226101]: 2025-12-06 07:30:06.287 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:06 compute-1 nova_compute[226101]: 2025-12-06 07:30:06.375 226109 INFO nova.virt.libvirt.driver [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Deleting instance files /var/lib/nova/instances/11e39188-38c3-4fc8-99a9-769e536f649d_del
Dec 06 07:30:06 compute-1 nova_compute[226101]: 2025-12-06 07:30:06.376 226109 INFO nova.virt.libvirt.driver [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Deletion of /var/lib/nova/instances/11e39188-38c3-4fc8-99a9-769e536f649d_del complete
Dec 06 07:30:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:06.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:06 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:06.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:06 compute-1 nova_compute[226101]: 2025-12-06 07:30:06.443 226109 INFO nova.compute.manager [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Took 1.85 seconds to destroy the instance on the hypervisor.
Dec 06 07:30:06 compute-1 nova_compute[226101]: 2025-12-06 07:30:06.443 226109 DEBUG oslo.service.loopingcall [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:30:06 compute-1 nova_compute[226101]: 2025-12-06 07:30:06.444 226109 DEBUG nova.compute.manager [-] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:30:06 compute-1 nova_compute[226101]: 2025-12-06 07:30:06.444 226109 DEBUG nova.network.neutron [-] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:30:07 compute-1 nova_compute[226101]: 2025-12-06 07:30:07.320 226109 DEBUG nova.network.neutron [-] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:30:07 compute-1 nova_compute[226101]: 2025-12-06 07:30:07.350 226109 INFO nova.compute.manager [-] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Took 0.91 seconds to deallocate network for instance.
Dec 06 07:30:07 compute-1 nova_compute[226101]: 2025-12-06 07:30:07.429 226109 DEBUG oslo_concurrency.lockutils [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:07 compute-1 nova_compute[226101]: 2025-12-06 07:30:07.429 226109 DEBUG oslo_concurrency.lockutils [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:07 compute-1 nova_compute[226101]: 2025-12-06 07:30:07.544 226109 DEBUG oslo_concurrency.processutils [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:07 compute-1 ceph-mon[81689]: pgmap v2182: 305 pgs: 305 active+clean; 390 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 28 KiB/s wr, 161 op/s
Dec 06 07:30:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/881476132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:30:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3067774594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.055 226109 DEBUG oslo_concurrency.processutils [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.063 226109 DEBUG nova.compute.provider_tree [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.093 226109 DEBUG nova.scheduler.client.report [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.125 226109 DEBUG oslo_concurrency.lockutils [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.131 226109 DEBUG nova.compute.manager [req-fc6632d2-f67c-46dc-aa62-21d6ab972dcb req-ff8d940e-7a46-47c4-8896-d9412d9d06c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Received event network-vif-plugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.133 226109 DEBUG oslo_concurrency.lockutils [req-fc6632d2-f67c-46dc-aa62-21d6ab972dcb req-ff8d940e-7a46-47c4-8896-d9412d9d06c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.134 226109 DEBUG oslo_concurrency.lockutils [req-fc6632d2-f67c-46dc-aa62-21d6ab972dcb req-ff8d940e-7a46-47c4-8896-d9412d9d06c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.134 226109 DEBUG oslo_concurrency.lockutils [req-fc6632d2-f67c-46dc-aa62-21d6ab972dcb req-ff8d940e-7a46-47c4-8896-d9412d9d06c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.134 226109 DEBUG nova.compute.manager [req-fc6632d2-f67c-46dc-aa62-21d6ab972dcb req-ff8d940e-7a46-47c4-8896-d9412d9d06c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] No waiting events found dispatching network-vif-plugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.134 226109 WARNING nova.compute.manager [req-fc6632d2-f67c-46dc-aa62-21d6ab972dcb req-ff8d940e-7a46-47c4-8896-d9412d9d06c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Received unexpected event network-vif-plugged-5c48cbf0-42a6-4bcd-97d6-a16936750d94 for instance with vm_state deleted and task_state None.
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.134 226109 DEBUG nova.compute.manager [req-fc6632d2-f67c-46dc-aa62-21d6ab972dcb req-ff8d940e-7a46-47c4-8896-d9412d9d06c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Received event network-vif-deleted-5c48cbf0-42a6-4bcd-97d6-a16936750d94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.172 226109 INFO nova.scheduler.client.report [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Deleted allocations for instance 11e39188-38c3-4fc8-99a9-769e536f649d
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.217 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating instance_info_cache with network_info: [{"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.236 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.236 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.236 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.250 226109 DEBUG oslo_concurrency.lockutils [None req-b6eac664-dc66-4157-aae2-714c1956619a d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "11e39188-38c3-4fc8-99a9-769e536f649d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.254 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.255 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.255 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.255 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.255 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:08.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/434332849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1666525427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3067774594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:08 compute-1 ceph-mon[81689]: pgmap v2183: 305 pgs: 305 active+clean; 390 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 16 KiB/s wr, 123 op/s
Dec 06 07:30:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:30:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3057662245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.825 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.916 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:30:08 compute-1 nova_compute[226101]: 2025-12-06 07:30:08.916 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.088 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.089 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4275MB free_disk=20.863555908203125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.089 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.090 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.160 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 6ee4f2f5-3303-4c84-b708-eb35a65082b6 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.160 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.160 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=704MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.194 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/712143605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/815563372' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:30:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/815563372' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:30:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3057662245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/20788799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.663 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.669 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.688 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.710 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.710 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:09 compute-1 nova_compute[226101]: 2025-12-06 07:30:09.969 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:10 compute-1 nova_compute[226101]: 2025-12-06 07:30:10.063 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:10 compute-1 nova_compute[226101]: 2025-12-06 07:30:10.064 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:10 compute-1 nova_compute[226101]: 2025-12-06 07:30:10.064 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:10 compute-1 nova_compute[226101]: 2025-12-06 07:30:10.064 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:10 compute-1 nova_compute[226101]: 2025-12-06 07:30:10.065 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:30:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:10.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:10.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:10.855 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:30:10 compute-1 nova_compute[226101]: 2025-12-06 07:30:10.856 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:10.857 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:30:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:30:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3433212711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.291 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:11 compute-1 ovn_controller[130279]: 2025-12-06T07:30:11Z|00473|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.314 226109 DEBUG oslo_concurrency.lockutils [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.315 226109 DEBUG oslo_concurrency.lockutils [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.315 226109 DEBUG oslo_concurrency.lockutils [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.316 226109 DEBUG oslo_concurrency.lockutils [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.316 226109 DEBUG oslo_concurrency.lockutils [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.317 226109 INFO nova.compute.manager [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Terminating instance
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.318 226109 DEBUG nova.compute.manager [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.352 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/712143605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:11 compute-1 ceph-mon[81689]: pgmap v2184: 305 pgs: 305 active+clean; 390 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 16 KiB/s wr, 123 op/s
Dec 06 07:30:11 compute-1 kernel: tap06cbbb2f-cb (unregistering): left promiscuous mode
Dec 06 07:30:11 compute-1 NetworkManager[49031]: <info>  [1765006211.8499] device (tap06cbbb2f-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:30:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:11.859 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:11 compute-1 ovn_controller[130279]: 2025-12-06T07:30:11Z|00474|binding|INFO|Releasing lport 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 from this chassis (sb_readonly=0)
Dec 06 07:30:11 compute-1 ovn_controller[130279]: 2025-12-06T07:30:11Z|00475|binding|INFO|Setting lport 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 down in Southbound
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.859 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:11 compute-1 ovn_controller[130279]: 2025-12-06T07:30:11Z|00476|binding|INFO|Removing iface tap06cbbb2f-cb ovn-installed in OVS
Dec 06 07:30:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:11.875 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:2c:72 10.100.0.7'], port_security=['fa:16:3e:14:2c:72 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6ee4f2f5-3303-4c84-b708-eb35a65082b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '8', 'neutron:security_group_ids': '56e13d32-a2bf-49aa-a4ac-9182c3684195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.239', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=06cbbb2f-cba8-4b99-b2dd-778c71df2d23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:30:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:11.876 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e unbound from our chassis
Dec 06 07:30:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:11.878 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6209aab-d53f-4d58-9b94-ffb7adc6239e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:30:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:11.879 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[067ca249-ca00-4d20-b4b8-55d1098e4009]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:11.880 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace which is not needed anymore
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.894 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:11 compute-1 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000070.scope: Deactivated successfully.
Dec 06 07:30:11 compute-1 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000070.scope: Consumed 13.805s CPU time.
Dec 06 07:30:11 compute-1 systemd-machined[190302]: Machine qemu-52-instance-00000070 terminated.
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.955 226109 INFO nova.virt.libvirt.driver [-] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Instance destroyed successfully.
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.958 226109 DEBUG nova.objects.instance [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'resources' on Instance uuid 6ee4f2f5-3303-4c84-b708-eb35a65082b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.972 226109 DEBUG nova.virt.libvirt.vif [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1330317525',display_name='tempest-ServerActionsTestOtherA-server-1330317525',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1330317525',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2zgDxxtT0nLqH8UsyYi0lN8OWWrrFEA5pyLz04zJISRImczknO8hVkmNR6jGCiWeaXsQGs+JSkIuJDu8PO8wxSR1MWFJiUPcyPRnxYT8pR/R9bXgGDk3j+Ho5fOrAeLw==',key_name='tempest-keypair-402640413',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-ul2d1zzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:29:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=6ee4f2f5-3303-4c84-b708-eb35a65082b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.973 226109 DEBUG nova.network.os_vif_util [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.974 226109 DEBUG nova.network.os_vif_util [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.975 226109 DEBUG os_vif [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.977 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.977 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06cbbb2f-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.978 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.980 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:11 compute-1 podman[270014]: 2025-12-06 07:30:11.983544941 +0000 UTC m=+0.096774101 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:30:11 compute-1 nova_compute[226101]: 2025-12-06 07:30:11.983 226109 INFO os_vif [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb')
Dec 06 07:30:12 compute-1 nova_compute[226101]: 2025-12-06 07:30:12.226 226109 DEBUG nova.compute.manager [req-7dab69be-c016-4773-af76-d06a554b4620 req-d437d26f-2080-42d2-8f78-4cffbc093310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-unplugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:12 compute-1 nova_compute[226101]: 2025-12-06 07:30:12.226 226109 DEBUG oslo_concurrency.lockutils [req-7dab69be-c016-4773-af76-d06a554b4620 req-d437d26f-2080-42d2-8f78-4cffbc093310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:12 compute-1 nova_compute[226101]: 2025-12-06 07:30:12.226 226109 DEBUG oslo_concurrency.lockutils [req-7dab69be-c016-4773-af76-d06a554b4620 req-d437d26f-2080-42d2-8f78-4cffbc093310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:12 compute-1 nova_compute[226101]: 2025-12-06 07:30:12.226 226109 DEBUG oslo_concurrency.lockutils [req-7dab69be-c016-4773-af76-d06a554b4620 req-d437d26f-2080-42d2-8f78-4cffbc093310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:12 compute-1 nova_compute[226101]: 2025-12-06 07:30:12.226 226109 DEBUG nova.compute.manager [req-7dab69be-c016-4773-af76-d06a554b4620 req-d437d26f-2080-42d2-8f78-4cffbc093310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] No waiting events found dispatching network-vif-unplugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:30:12 compute-1 nova_compute[226101]: 2025-12-06 07:30:12.227 226109 DEBUG nova.compute.manager [req-7dab69be-c016-4773-af76-d06a554b4620 req-d437d26f-2080-42d2-8f78-4cffbc093310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-unplugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:30:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:12.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:30:12 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:12.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:30:12 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[269517]: [NOTICE]   (269522) : haproxy version is 2.8.14-c23fe91
Dec 06 07:30:12 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[269517]: [NOTICE]   (269522) : path to executable is /usr/sbin/haproxy
Dec 06 07:30:12 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[269517]: [WARNING]  (269522) : Exiting Master process...
Dec 06 07:30:12 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[269517]: [ALERT]    (269522) : Current worker (269524) exited with code 143 (Terminated)
Dec 06 07:30:12 compute-1 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[269517]: [WARNING]  (269522) : All workers exited. Exiting... (0)
Dec 06 07:30:12 compute-1 systemd[1]: libpod-453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6.scope: Deactivated successfully.
Dec 06 07:30:12 compute-1 podman[270070]: 2025-12-06 07:30:12.776365884 +0000 UTC m=+0.809181039 container died 453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:30:12 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6-userdata-shm.mount: Deactivated successfully.
Dec 06 07:30:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-2f79c7265a9bba20fb6af056dcd03dd7592a1f8f3af66afcb41b696c654918c7-merged.mount: Deactivated successfully.
Dec 06 07:30:12 compute-1 podman[270070]: 2025-12-06 07:30:12.902130147 +0000 UTC m=+0.934945302 container cleanup 453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 06 07:30:12 compute-1 systemd[1]: libpod-conmon-453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6.scope: Deactivated successfully.
Dec 06 07:30:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3433212711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/878054034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:13 compute-1 ceph-mon[81689]: pgmap v2185: 305 pgs: 305 active+clean; 328 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 28 KiB/s wr, 161 op/s
Dec 06 07:30:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/87351210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2865460060' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:13 compute-1 podman[270124]: 2025-12-06 07:30:13.543837439 +0000 UTC m=+0.618209586 container remove 453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:30:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:13.551 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6344b5-787a-46a4-ae5c-758832a401fb]: (4, ('Sat Dec  6 07:30:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6)\n453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6\nSat Dec  6 07:30:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6)\n453fb26d1a19248911c2d66d5d55057bc7afe2e46dccf25607824e4b475cf0e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:13.553 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[93ccaaee-d740-4d96-8f8d-787e42775431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:13.554 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:13 compute-1 kernel: tapf6209aab-d0: left promiscuous mode
Dec 06 07:30:13 compute-1 nova_compute[226101]: 2025-12-06 07:30:13.556 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:13.560 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[74304266-6b3f-4614-a12b-f2d9be8b342f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:13 compute-1 nova_compute[226101]: 2025-12-06 07:30:13.571 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:13.579 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3a80c1fd-e9f9-42ef-ad1e-87ce59c5817a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:13.580 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e08144a2-35c5-4926-bd20-7ec705ce2abf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:13.597 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9d5499-fea2-469a-9307-5dff9dbc51e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646708, 'reachable_time': 24863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270142, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:13 compute-1 systemd[1]: run-netns-ovnmeta\x2df6209aab\x2dd53f\x2d4d58\x2d9b94\x2dffb7adc6239e.mount: Deactivated successfully.
Dec 06 07:30:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:13.601 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:30:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:30:13.601 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[7c89c3d5-de51-48d9-a3d7-0c47cc1a11d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:13 compute-1 podman[270141]: 2025-12-06 07:30:13.642000037 +0000 UTC m=+0.050874078 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 06 07:30:13 compute-1 podman[270138]: 2025-12-06 07:30:13.642392117 +0000 UTC m=+0.054083363 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:30:14 compute-1 nova_compute[226101]: 2025-12-06 07:30:14.370 226109 DEBUG nova.compute.manager [req-243344ff-cf96-41a6-aa7a-e0c1abc489e9 req-e36f74a9-4ba5-48a3-bcbd-bcca8dd83ac2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:14 compute-1 nova_compute[226101]: 2025-12-06 07:30:14.370 226109 DEBUG oslo_concurrency.lockutils [req-243344ff-cf96-41a6-aa7a-e0c1abc489e9 req-e36f74a9-4ba5-48a3-bcbd-bcca8dd83ac2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:14 compute-1 nova_compute[226101]: 2025-12-06 07:30:14.371 226109 DEBUG oslo_concurrency.lockutils [req-243344ff-cf96-41a6-aa7a-e0c1abc489e9 req-e36f74a9-4ba5-48a3-bcbd-bcca8dd83ac2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:14 compute-1 nova_compute[226101]: 2025-12-06 07:30:14.371 226109 DEBUG oslo_concurrency.lockutils [req-243344ff-cf96-41a6-aa7a-e0c1abc489e9 req-e36f74a9-4ba5-48a3-bcbd-bcca8dd83ac2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:14 compute-1 nova_compute[226101]: 2025-12-06 07:30:14.371 226109 DEBUG nova.compute.manager [req-243344ff-cf96-41a6-aa7a-e0c1abc489e9 req-e36f74a9-4ba5-48a3-bcbd-bcca8dd83ac2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] No waiting events found dispatching network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:30:14 compute-1 nova_compute[226101]: 2025-12-06 07:30:14.371 226109 WARNING nova.compute.manager [req-243344ff-cf96-41a6-aa7a-e0c1abc489e9 req-e36f74a9-4ba5-48a3-bcbd-bcca8dd83ac2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received unexpected event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for instance with vm_state active and task_state deleting.
Dec 06 07:30:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:14.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:14.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:30:14 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/693340743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:15 compute-1 nova_compute[226101]: 2025-12-06 07:30:15.425 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:15 compute-1 nova_compute[226101]: 2025-12-06 07:30:15.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:15 compute-1 nova_compute[226101]: 2025-12-06 07:30:15.628 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:16 compute-1 nova_compute[226101]: 2025-12-06 07:30:16.293 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:16.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:16 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:16.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:16 compute-1 nova_compute[226101]: 2025-12-06 07:30:16.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:16 compute-1 nova_compute[226101]: 2025-12-06 07:30:16.980 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:17 compute-1 nova_compute[226101]: 2025-12-06 07:30:17.268 226109 INFO nova.virt.libvirt.driver [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Deleting instance files /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6_del
Dec 06 07:30:17 compute-1 nova_compute[226101]: 2025-12-06 07:30:17.269 226109 INFO nova.virt.libvirt.driver [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Deletion of /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6_del complete
Dec 06 07:30:17 compute-1 nova_compute[226101]: 2025-12-06 07:30:17.525 226109 INFO nova.compute.manager [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Took 6.21 seconds to destroy the instance on the hypervisor.
Dec 06 07:30:17 compute-1 nova_compute[226101]: 2025-12-06 07:30:17.525 226109 DEBUG oslo.service.loopingcall [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:30:17 compute-1 nova_compute[226101]: 2025-12-06 07:30:17.525 226109 DEBUG nova.compute.manager [-] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:30:17 compute-1 nova_compute[226101]: 2025-12-06 07:30:17.526 226109 DEBUG nova.network.neutron [-] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:30:17 compute-1 nova_compute[226101]: 2025-12-06 07:30:17.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:17 compute-1 ceph-mon[81689]: pgmap v2186: 305 pgs: 305 active+clean; 328 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 579 KiB/s rd, 26 KiB/s wr, 122 op/s
Dec 06 07:30:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/693340743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:17 compute-1 ceph-mon[81689]: pgmap v2187: 305 pgs: 305 active+clean; 328 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 26 KiB/s wr, 134 op/s
Dec 06 07:30:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:18.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:18 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:18.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:18 compute-1 nova_compute[226101]: 2025-12-06 07:30:18.677 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquiring lock "1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:18 compute-1 nova_compute[226101]: 2025-12-06 07:30:18.677 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:18 compute-1 nova_compute[226101]: 2025-12-06 07:30:18.783 226109 DEBUG nova.compute.manager [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:30:18 compute-1 nova_compute[226101]: 2025-12-06 07:30:18.904 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:18 compute-1 nova_compute[226101]: 2025-12-06 07:30:18.904 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:18 compute-1 nova_compute[226101]: 2025-12-06 07:30:18.910 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:30:18 compute-1 nova_compute[226101]: 2025-12-06 07:30:18.911 226109 INFO nova.compute.claims [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:30:19 compute-1 ceph-mon[81689]: pgmap v2188: 305 pgs: 305 active+clean; 328 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 13 KiB/s wr, 49 op/s
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.042 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:30:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3715423026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.520 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.524 226109 DEBUG nova.network.neutron [-] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.527 226109 DEBUG nova.compute.provider_tree [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.545 226109 DEBUG nova.scheduler.client.report [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.549 226109 INFO nova.compute.manager [-] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Took 2.02 seconds to deallocate network for instance.
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.568 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.569 226109 DEBUG nova.compute.manager [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.626 226109 DEBUG nova.compute.manager [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.641 226109 INFO nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.661 226109 DEBUG nova.compute.manager [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.803 226109 DEBUG nova.compute.manager [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.804 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.805 226109 INFO nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Creating image(s)
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.830 226109 DEBUG nova.storage.rbd_utils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.859 226109 DEBUG nova.storage.rbd_utils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.885 226109 DEBUG nova.storage.rbd_utils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.889 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.913 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006204.825555, 11e39188-38c3-4fc8-99a9-769e536f649d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.913 226109 INFO nova.compute.manager [-] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] VM Stopped (Lifecycle Event)
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.937 226109 DEBUG nova.compute.manager [None req-c5c8bfab-697b-4b60-ba0e-6c247d40f0ee - - - - - -] [instance: 11e39188-38c3-4fc8-99a9-769e536f649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.949 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.949 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.950 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.950 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.976 226109 DEBUG nova.storage.rbd_utils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:19 compute-1 nova_compute[226101]: 2025-12-06 07:30:19.982 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.071 226109 INFO nova.compute.manager [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Took 0.52 seconds to detach 1 volumes for instance.
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.073 226109 DEBUG nova.compute.manager [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Deleting volume: f889bfdf-c600-4260-90f0-d332e8e3d79b _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Dec 06 07:30:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3715423026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.286 226109 DEBUG oslo_concurrency.lockutils [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.287 226109 DEBUG oslo_concurrency.lockutils [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.372 226109 DEBUG oslo_concurrency.processutils [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:20.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:20.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.487 226109 DEBUG nova.compute.manager [req-9f015eb3-09f4-4632-96e5-86df041ff142 req-d15abd06-e88c-489e-895c-e0b93b445459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-deleted-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.621 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.685 226109 DEBUG nova.storage.rbd_utils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] resizing rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.806 226109 DEBUG nova.objects.instance [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:30:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2634629622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.846 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.847 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Ensure instance console log exists: /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.847 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.848 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.848 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.849 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.853 226109 WARNING nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.856 226109 DEBUG oslo_concurrency.processutils [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.861 226109 DEBUG nova.compute.provider_tree [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.864 226109 DEBUG nova.virt.libvirt.host [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.865 226109 DEBUG nova.virt.libvirt.host [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.871 226109 DEBUG nova.virt.libvirt.host [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.872 226109 DEBUG nova.virt.libvirt.host [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.873 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.873 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.874 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.874 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.874 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.875 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.875 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.875 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.876 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.876 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.876 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.876 226109 DEBUG nova.virt.hardware [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.879 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.908 226109 DEBUG nova.scheduler.client.report [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.933 226109 DEBUG oslo_concurrency.lockutils [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:20 compute-1 nova_compute[226101]: 2025-12-06 07:30:20.985 226109 INFO nova.scheduler.client.report [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Deleted allocations for instance 6ee4f2f5-3303-4c84-b708-eb35a65082b6
Dec 06 07:30:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:30:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/872411589' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:30:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:30:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/872411589' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:30:21 compute-1 ceph-mon[81689]: pgmap v2189: 305 pgs: 305 active+clean; 328 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 13 KiB/s wr, 49 op/s
Dec 06 07:30:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2634629622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:21 compute-1 nova_compute[226101]: 2025-12-06 07:30:21.067 226109 DEBUG oslo_concurrency.lockutils [None req-a2551002-9a58-42ed-a2c3-20993c4ba0b4 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:21 compute-1 nova_compute[226101]: 2025-12-06 07:30:21.295 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:30:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2043406885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:21 compute-1 nova_compute[226101]: 2025-12-06 07:30:21.331 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:21 compute-1 nova_compute[226101]: 2025-12-06 07:30:21.359 226109 DEBUG nova.storage.rbd_utils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:21 compute-1 nova_compute[226101]: 2025-12-06 07:30:21.364 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:30:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1557114716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:21 compute-1 nova_compute[226101]: 2025-12-06 07:30:21.823 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:21 compute-1 nova_compute[226101]: 2025-12-06 07:30:21.826 226109 DEBUG nova.objects.instance [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:21 compute-1 nova_compute[226101]: 2025-12-06 07:30:21.842 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:30:21 compute-1 nova_compute[226101]:   <uuid>1ac46512-4df2-4f9b-b8f7-03f2d5a473c8</uuid>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   <name>instance-00000073</name>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerShowV254Test-server-2058420121</nova:name>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:30:20</nova:creationTime>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <nova:user uuid="a4f3cac602f942718452b29d8ede4536">tempest-ServerShowV254Test-1516172779-project-member</nova:user>
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <nova:project uuid="a739117bee9940b0a047ef0dfc826f46">tempest-ServerShowV254Test-1516172779</nova:project>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <system>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <entry name="serial">1ac46512-4df2-4f9b-b8f7-03f2d5a473c8</entry>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <entry name="uuid">1ac46512-4df2-4f9b-b8f7-03f2d5a473c8</entry>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     </system>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   <os>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   </os>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   <features>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   </features>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk">
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       </source>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config">
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       </source>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:30:21 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/console.log" append="off"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <video>
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     </video>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:30:21 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:30:21 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:30:21 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:30:21 compute-1 nova_compute[226101]: </domain>
Dec 06 07:30:21 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:30:21 compute-1 nova_compute[226101]: 2025-12-06 07:30:21.982 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:22 compute-1 nova_compute[226101]: 2025-12-06 07:30:22.000 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:30:22 compute-1 nova_compute[226101]: 2025-12-06 07:30:22.000 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:30:22 compute-1 nova_compute[226101]: 2025-12-06 07:30:22.001 226109 INFO nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Using config drive
Dec 06 07:30:22 compute-1 nova_compute[226101]: 2025-12-06 07:30:22.025 226109 DEBUG nova.storage.rbd_utils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/872411589' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:30:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/872411589' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:30:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2043406885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1557114716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:22.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:22 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:22.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:22 compute-1 nova_compute[226101]: 2025-12-06 07:30:22.414 226109 INFO nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Creating config drive at /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config
Dec 06 07:30:22 compute-1 nova_compute[226101]: 2025-12-06 07:30:22.419 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjxzj3gp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:22 compute-1 nova_compute[226101]: 2025-12-06 07:30:22.552 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjxzj3gp" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:22 compute-1 nova_compute[226101]: 2025-12-06 07:30:22.579 226109 DEBUG nova.storage.rbd_utils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:22 compute-1 nova_compute[226101]: 2025-12-06 07:30:22.583 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:22 compute-1 nova_compute[226101]: 2025-12-06 07:30:22.752 226109 DEBUG oslo_concurrency.processutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:22 compute-1 nova_compute[226101]: 2025-12-06 07:30:22.754 226109 INFO nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Deleting local config drive /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config because it was imported into RBD.
Dec 06 07:30:22 compute-1 systemd-machined[190302]: New machine qemu-54-instance-00000073.
Dec 06 07:30:22 compute-1 systemd[1]: Started Virtual Machine qemu-54-instance-00000073.
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.482 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006223.4823742, 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.484 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] VM Resumed (Lifecycle Event)
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.486 226109 DEBUG nova.compute.manager [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.486 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.490 226109 INFO nova.virt.libvirt.driver [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance spawned successfully.
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.490 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:30:23 compute-1 ceph-mon[81689]: pgmap v2190: 305 pgs: 305 active+clean; 327 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 13 KiB/s wr, 57 op/s
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.512 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.518 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.521 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.521 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.522 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.522 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.523 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.523 226109 DEBUG nova.virt.libvirt.driver [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.543 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.544 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006223.483399, 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.544 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] VM Started (Lifecycle Event)
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.574 226109 INFO nova.compute.manager [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Took 3.77 seconds to spawn the instance on the hypervisor.
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.575 226109 DEBUG nova.compute.manager [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.582 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.585 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.614 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.631 226109 INFO nova.compute.manager [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Took 4.79 seconds to build instance.
Dec 06 07:30:23 compute-1 nova_compute[226101]: 2025-12-06 07:30:23.650 226109 DEBUG oslo_concurrency.lockutils [None req-8d4abf02-d08a-4e91-9b5a-0080adb8336a a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:24.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:24 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:24.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:24 compute-1 ceph-mon[81689]: pgmap v2191: 305 pgs: 305 active+clean; 327 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 682 B/s wr, 20 op/s
Dec 06 07:30:25 compute-1 nova_compute[226101]: 2025-12-06 07:30:25.265 226109 INFO nova.compute.manager [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Rebuilding instance
Dec 06 07:30:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/946517001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:25 compute-1 nova_compute[226101]: 2025-12-06 07:30:25.553 226109 DEBUG nova.objects.instance [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:25 compute-1 nova_compute[226101]: 2025-12-06 07:30:25.567 226109 DEBUG nova.compute.manager [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:25 compute-1 nova_compute[226101]: 2025-12-06 07:30:25.608 226109 DEBUG nova.objects.instance [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lazy-loading 'pci_requests' on Instance uuid 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:25 compute-1 nova_compute[226101]: 2025-12-06 07:30:25.618 226109 DEBUG nova.objects.instance [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:25 compute-1 nova_compute[226101]: 2025-12-06 07:30:25.627 226109 DEBUG nova.objects.instance [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lazy-loading 'resources' on Instance uuid 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:25 compute-1 nova_compute[226101]: 2025-12-06 07:30:25.640 226109 DEBUG nova.objects.instance [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:25 compute-1 nova_compute[226101]: 2025-12-06 07:30:25.654 226109 DEBUG nova.objects.instance [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:30:25 compute-1 nova_compute[226101]: 2025-12-06 07:30:25.660 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:30:26 compute-1 nova_compute[226101]: 2025-12-06 07:30:26.296 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:30:26 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767874342' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:30:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:30:26 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767874342' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:30:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:26 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:26.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:26.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:26 compute-1 ceph-mon[81689]: pgmap v2192: 305 pgs: 305 active+clean; 213 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Dec 06 07:30:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2767874342' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:30:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2767874342' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:30:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:26 compute-1 nova_compute[226101]: 2025-12-06 07:30:26.955 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006211.9539337, 6ee4f2f5-3303-4c84-b708-eb35a65082b6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:30:26 compute-1 nova_compute[226101]: 2025-12-06 07:30:26.956 226109 INFO nova.compute.manager [-] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] VM Stopped (Lifecycle Event)
Dec 06 07:30:26 compute-1 nova_compute[226101]: 2025-12-06 07:30:26.983 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:26 compute-1 nova_compute[226101]: 2025-12-06 07:30:26.999 226109 DEBUG nova.compute.manager [None req-7cdf4034-9311-4505-a017-3c629ae40f78 - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:28 compute-1 ceph-mon[81689]: pgmap v2193: 305 pgs: 305 active+clean; 213 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 07:30:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:28.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:28 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:28.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:30.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:30.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:30 compute-1 ceph-mon[81689]: pgmap v2194: 305 pgs: 305 active+clean; 213 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 07:30:31 compute-1 nova_compute[226101]: 2025-12-06 07:30:31.297 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:31 compute-1 nova_compute[226101]: 2025-12-06 07:30:31.985 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:32.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:32.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:32 compute-1 ceph-mon[81689]: pgmap v2195: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 177 op/s
Dec 06 07:30:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:34.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:34.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:34 compute-1 ceph-mon[81689]: pgmap v2196: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Dec 06 07:30:35 compute-1 nova_compute[226101]: 2025-12-06 07:30:35.706 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:30:36 compute-1 nova_compute[226101]: 2025-12-06 07:30:36.299 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:36.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:36.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:36 compute-1 nova_compute[226101]: 2025-12-06 07:30:36.987 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4228057039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:38 compute-1 ceph-mon[81689]: pgmap v2197: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Dec 06 07:30:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:38.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:38.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:39 compute-1 ceph-mon[81689]: pgmap v2198: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Dec 06 07:30:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:40.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:40 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:40.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/428252605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:40 compute-1 ceph-mon[81689]: pgmap v2199: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Dec 06 07:30:41 compute-1 nova_compute[226101]: 2025-12-06 07:30:41.302 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:41 compute-1 nova_compute[226101]: 2025-12-06 07:30:41.989 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:42.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:42 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:42.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:42 compute-1 ceph-mon[81689]: pgmap v2200: 305 pgs: 305 active+clean; 167 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec 06 07:30:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4097164889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:43 compute-1 podman[270571]: 2025-12-06 07:30:43.09902832 +0000 UTC m=+0.086706872 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:30:44 compute-1 podman[270598]: 2025-12-06 07:30:44.066454308 +0000 UTC m=+0.054909265 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:30:44 compute-1 podman[270599]: 2025-12-06 07:30:44.067220059 +0000 UTC m=+0.051753151 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:30:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:44.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:44 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:44.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:45 compute-1 ceph-mon[81689]: pgmap v2201: 305 pgs: 305 active+clean; 167 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 07:30:46 compute-1 nova_compute[226101]: 2025-12-06 07:30:46.304 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:46.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:46.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:46 compute-1 ceph-mon[81689]: pgmap v2202: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 112 op/s
Dec 06 07:30:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:46 compute-1 nova_compute[226101]: 2025-12-06 07:30:46.748 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:30:46 compute-1 nova_compute[226101]: 2025-12-06 07:30:46.992 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3395693374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:48 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:48.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:48.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:49 compute-1 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000073.scope: Deactivated successfully.
Dec 06 07:30:49 compute-1 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000073.scope: Consumed 14.077s CPU time.
Dec 06 07:30:49 compute-1 systemd-machined[190302]: Machine qemu-54-instance-00000073 terminated.
Dec 06 07:30:49 compute-1 nova_compute[226101]: 2025-12-06 07:30:49.763 226109 INFO nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance shutdown successfully after 24 seconds.
Dec 06 07:30:49 compute-1 nova_compute[226101]: 2025-12-06 07:30:49.769 226109 INFO nova.virt.libvirt.driver [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance destroyed successfully.
Dec 06 07:30:49 compute-1 nova_compute[226101]: 2025-12-06 07:30:49.773 226109 INFO nova.virt.libvirt.driver [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance destroyed successfully.
Dec 06 07:30:50 compute-1 ceph-mon[81689]: pgmap v2203: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Dec 06 07:30:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1817414365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:50.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:50.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:51 compute-1 nova_compute[226101]: 2025-12-06 07:30:51.306 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:51 compute-1 nova_compute[226101]: 2025-12-06 07:30:51.995 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:52.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:52.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:52 compute-1 ceph-mon[81689]: pgmap v2204: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 133 op/s
Dec 06 07:30:54 compute-1 ceph-mon[81689]: pgmap v2205: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 174 op/s
Dec 06 07:30:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:54.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:30:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:54 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:54.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:54 compute-1 sudo[270659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:55 compute-1 sudo[270659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:55 compute-1 sudo[270659]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:55 compute-1 sudo[270684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:30:55 compute-1 sudo[270684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:55 compute-1 sudo[270684]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:55 compute-1 sudo[270709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:55 compute-1 sudo[270709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:55 compute-1 sudo[270709]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:55 compute-1 sudo[270734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 07:30:55 compute-1 sudo[270734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:55 compute-1 ceph-mon[81689]: pgmap v2206: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Dec 06 07:30:55 compute-1 sudo[270734]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:56 compute-1 nova_compute[226101]: 2025-12-06 07:30:56.307 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:56.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:56.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:56 compute-1 ceph-mon[81689]: pgmap v2207: 305 pgs: 305 active+clean; 214 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Dec 06 07:30:56 compute-1 nova_compute[226101]: 2025-12-06 07:30:56.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:57 compute-1 sudo[270779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:57 compute-1 sudo[270779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:57 compute-1 sudo[270779]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:57 compute-1 sudo[270804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:30:57 compute-1 sudo[270804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:57 compute-1 sudo[270804]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:57 compute-1 sudo[270829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:57 compute-1 sudo[270829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:57 compute-1 sudo[270829]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:57 compute-1 sudo[270854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:30:57 compute-1 sudo[270854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:57 compute-1 sudo[270854]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:30:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:30:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:30:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:30:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:30:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:30:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:30:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.038 226109 INFO nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Deleting instance files /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_del
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.039 226109 INFO nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Deletion of /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_del complete
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.442 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.443 226109 INFO nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Creating image(s)
Dec 06 07:30:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:58.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:30:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:58.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.467 226109 DEBUG nova.storage.rbd_utils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.492 226109 DEBUG nova.storage.rbd_utils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.517 226109 DEBUG nova.storage.rbd_utils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.521 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.584 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.585 226109 DEBUG oslo_concurrency.lockutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.585 226109 DEBUG oslo_concurrency.lockutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.586 226109 DEBUG oslo_concurrency.lockutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.610 226109 DEBUG nova.storage.rbd_utils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:58 compute-1 nova_compute[226101]: 2025-12-06 07:30:58.614 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:59 compute-1 ceph-mon[81689]: pgmap v2208: 305 pgs: 305 active+clean; 214 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 87 op/s
Dec 06 07:30:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1648050588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:00.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:00.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:01 compute-1 nova_compute[226101]: 2025-12-06 07:31:01.308 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:31:01.649 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:31:01.649 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:31:01.649 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:02 compute-1 nova_compute[226101]: 2025-12-06 07:31:01.999 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:02.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:02 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:02.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:04.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:04 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:04.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:04 compute-1 nova_compute[226101]: 2025-12-06 07:31:04.611 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006249.6107662, 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:31:04 compute-1 nova_compute[226101]: 2025-12-06 07:31:04.612 226109 INFO nova.compute.manager [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] VM Stopped (Lifecycle Event)
Dec 06 07:31:04 compute-1 nova_compute[226101]: 2025-12-06 07:31:04.629 226109 DEBUG nova.compute.manager [None req-37054712-9d1e-4dae-8dcc-1dc384dbe5d9 - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:31:04 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 6.060383320s
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 6.060251236s
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 6.060530186s
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 6.060400009s
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 6.318799973s
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 6.317599297s
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 6.336466789s
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 6.332360744s
Dec 06 07:31:06 compute-1 nova_compute[226101]: 2025-12-06 07:31:06.310 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:06 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:06.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:06.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 3766..4493) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 1.858166575s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec 06 07:31:06 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-1[81685]: 2025-12-06T07:31:06.508+0000 7fc98ce5b640 -1 mon.compute-1@2(peon).paxos(paxos updating c 3766..4493) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 1.858166575s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec 06 07:31:06 compute-1 nova_compute[226101]: 2025-12-06 07:31:06.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:06 compute-1 nova_compute[226101]: 2025-12-06 07:31:06.630 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:06 compute-1 nova_compute[226101]: 2025-12-06 07:31:06.630 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:06 compute-1 nova_compute[226101]: 2025-12-06 07:31:06.631 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:06 compute-1 nova_compute[226101]: 2025-12-06 07:31:06.631 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:31:06 compute-1 nova_compute[226101]: 2025-12-06 07:31:06.631 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_flush, latency = 6.090322495s
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 6.852011204s
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.187519073s, txc = 0x55b553cce000
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.168101311s, txc = 0x55b552b1e600
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.167879105s, txc = 0x55b552b92c00
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.167515755s, txc = 0x55b552b92900
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.162447929s, txc = 0x55b552b1e300
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.162765503s, txc = 0x55b5522cef00
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.160801888s, txc = 0x55b5534a8f00
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.155763626s, txc = 0x55b5522ce300
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.155444622s, txc = 0x55b552c0ac00
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.154779434s, txc = 0x55b552be4c00
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.154381275s, txc = 0x55b5533c1500
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.152678967s, txc = 0x55b553cccc00
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.153306484s, txc = 0x55b552efec00
Dec 06 07:31:06 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.140814304s, txc = 0x55b55279c300
Dec 06 07:31:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:07 compute-1 nova_compute[226101]: 2025-12-06 07:31:07.001 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:31:07 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1699218702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:07 compute-1 nova_compute[226101]: 2025-12-06 07:31:07.180 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
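
The resource audit shells out to the Ceph CLI: the cmd logged at 07:31:06.631 returns JSON pool and cluster usage 0.549s later. A sketch of the same call with the stock subprocess module instead of oslo_concurrency.processutils (the command is copied from the log; the "stats" key layout is the standard `ceph df --format=json` output):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    df = json.loads(out)
    # Cluster-wide totals; per-pool figures live under df["pools"].
    print(df["stats"]["total_avail_bytes"])
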
Dec 06 07:31:07 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.159048557s, txc = 0x55b5539c2900
Dec 06 07:31:07 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.159070015s, txc = 0x55b5535c5800
Dec 06 07:31:07 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.416985512s, txc = 0x55b5533c0600
Dec 06 07:31:07 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.434696674s, txc = 0x55b55351ef00
Dec 06 07:31:07 compute-1 nova_compute[226101]: 2025-12-06 07:31:07.352 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:31:07 compute-1 nova_compute[226101]: 2025-12-06 07:31:07.353 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4465MB free_disk=20.921672821044922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:31:07 compute-1 nova_compute[226101]: 2025-12-06 07:31:07.354 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:07 compute-1 nova_compute[226101]: 2025-12-06 07:31:07.354 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:07 compute-1 nova_compute[226101]: 2025-12-06 07:31:07.498 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:31:07 compute-1 nova_compute[226101]: 2025-12-06 07:31:07.498 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:31:07 compute-1 nova_compute[226101]: 2025-12-06 07:31:07.498 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:31:07 compute-1 nova_compute[226101]: 2025-12-06 07:31:07.534 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:08.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:08 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:08.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:31:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4135206798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:08 compute-1 nova_compute[226101]: 2025-12-06 07:31:08.867 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:08 compute-1 nova_compute[226101]: 2025-12-06 07:31:08.873 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:31:08 compute-1 nova_compute[226101]: 2025-12-06 07:31:08.905 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
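
Placement inventory is a flat mapping of resource class to integer fields plus an allocation ratio, so "has not changed" reduces to a dict equality test against the cached copy. A sketch using the exact inventory from the log line above:

    # Inventory reported for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83.
    current = {
        'VCPU':      {'total': 8,    'reserved': 0,   'min_unit': 1,
                      'max_unit': 8,    'step_size': 1, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1,
                      'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'min_unit': 1,
                      'max_unit': 20,   'step_size': 1, 'allocation_ratio': 0.9},
    }

    def inventory_changed(cached, new):
        # Plain equality is sufficient: same classes, same fields.
        return cached != new

    assert not inventory_changed(current, {k: dict(v) for k, v in current.items()})
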
Dec 06 07:31:08 compute-1 nova_compute[226101]: 2025-12-06 07:31:08.942 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:31:08 compute-1 nova_compute[226101]: 2025-12-06 07:31:08.942 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:09 compute-1 ceph-mon[81689]: pgmap v2209: 305 pgs: 305 active+clean; 202 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 789 KiB/s wr, 144 op/s
Dec 06 07:31:09 compute-1 nova_compute[226101]: 2025-12-06 07:31:09.943 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:09 compute-1 nova_compute[226101]: 2025-12-06 07:31:09.943 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:31:09 compute-1 nova_compute[226101]: 2025-12-06 07:31:09.944 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:31:10 compute-1 nova_compute[226101]: 2025-12-06 07:31:10.003 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:31:10 compute-1 nova_compute[226101]: 2025-12-06 07:31:10.003 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:31:10 compute-1 nova_compute[226101]: 2025-12-06 07:31:10.004 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:31:10 compute-1 nova_compute[226101]: 2025-12-06 07:31:10.004 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:31:10 compute-1 nova_compute[226101]: 2025-12-06 07:31:10.354 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:31:10 compute-1 nova_compute[226101]: 2025-12-06 07:31:10.434 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 11.820s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
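
That 11.8s rbd import is the guest root disk being copied from the local image cache into the vms pool, and it overlaps the slow _txc_committed_kv commits the OSD reports above. The equivalent standalone call, with all paths and names copied from the log line:

    import subprocess

    base = "/var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737"
    subprocess.run(
        ["rbd", "import", "--pool", "vms", base,
         "1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
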
Dec 06 07:31:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:12.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:12.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
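
The paired beast lines from 192.168.122.100 and .102 every two seconds are load-balancer health probes: an anonymous HEAD / that expects HTTP 200. An equivalent probe in Python (the RGW frontend port is an assumption here; it is not visible in these lines):

    import http.client

    conn = http.client.HTTPConnection("compute-1", 8080, timeout=2)  # port assumed
    conn.request("HEAD", "/")
    status = conn.getresponse().status
    conn.close()
    print(status)  # 200 while the radosgw frontend is healthy
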
Dec 06 07:31:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:14 compute-1 podman[271064]: 2025-12-06 07:31:14.114530338 +0000 UTC m=+0.100556382 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:31:14 compute-1 podman[271090]: 2025-12-06 07:31:14.206866071 +0000 UTC m=+0.055630305 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:31:14 compute-1 podman[271091]: 2025-12-06 07:31:14.206877481 +0000 UTC m=+0.054241318 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 07:31:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:14.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:14.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:15 compute-1 sudo[271129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:15 compute-1 sudo[271129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:15 compute-1 sudo[271129]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:15 compute-1 sudo[271154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:31:15 compute-1 sudo[271154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:15 compute-1 sudo[271154]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.409267426s, txc = 0x55b552c0a300
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.400412560s, txc = 0x55b5522ce600
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.399877548s, txc = 0x55b552765b00
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.399731159s, txc = 0x55b552794000
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.399587154s, txc = 0x55b55391a300
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.399403572s, txc = 0x55b5535c4900
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.399134159s, txc = 0x55b553367b00
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.398846149s, txc = 0x55b553859b00
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.398417950s, txc = 0x55b5522cec00
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.397811890s, txc = 0x55b553858f00
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.397518158s, txc = 0x55b553cce600
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.393152237s, txc = 0x55b5538f9800
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.387685299s, txc = 0x55b552be4000
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.385972023s, txc = 0x55b552c94f00
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.385612965s, txc = 0x55b553ccd200
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.385448933s, txc = 0x55b5538c4000
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.384831429s, txc = 0x55b554619500
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.380526543s, txc = 0x55b553963b00
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.379758358s, txc = 0x55b55279d200
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.378574371s, txc = 0x55b554a8a000
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.378003120s, txc = 0x55b554749200
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.375781536s, txc = 0x55b553624900
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.368191719s, txc = 0x55b5535d0300
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.368964672s, txc = 0x55b551fc7800
Dec 06 07:31:15 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.366592884s, txc = 0x55b5526f4f00
Dec 06 07:31:15 compute-1 ceph-mon[81689]: pgmap v2210: 305 pgs: 305 active+clean; 194 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Dec 06 07:31:15 compute-1 ceph-mon[81689]: pgmap v2211: 305 pgs: 305 active+clean; 194 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 1006 KiB/s rd, 1.1 MiB/s wr, 90 op/s
Dec 06 07:31:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1541494324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:15 compute-1 ceph-mon[81689]: pgmap v2212: 305 pgs: 305 active+clean; 144 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 135 op/s
Dec 06 07:31:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1699218702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4247156057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:15 compute-1 ceph-mon[81689]: pgmap v2213: 305 pgs: 305 active+clean; 144 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 122 op/s
Dec 06 07:31:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1080974136' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:31:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1080974136' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:31:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4135206798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:15 compute-1 ceph-mon[81689]: pgmap v2214: 305 pgs: 305 active+clean; 172 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 131 op/s
Dec 06 07:31:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:31:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.846284866s, txc = 0x55b552b41200
Dec 06 07:31:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.770007610s, txc = 0x55b553ccc900
Dec 06 07:31:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.865587711s, txc = 0x55b55391a600
Dec 06 07:31:16 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.867330074s, txc = 0x55b552de0300
Dec 06 07:31:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:16.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:16 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:16.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:16 compute-1 nova_compute[226101]: 2025-12-06 07:31:16.489 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:16 compute-1 nova_compute[226101]: 2025-12-06 07:31:16.495 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:31:16 compute-1 nova_compute[226101]: 2025-12-06 07:31:16.532 226109 DEBUG nova.storage.rbd_utils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] resizing rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
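
The freshly imported image is then grown to the flavor's 1 GiB root disk (1073741824 = 1 << 30 bytes). A sketch of the same resize through the librbd Python bindings rather than Nova's rbd_utils wrapper (pool, image, and client names are copied from the log; the rados/rbd modules are the stock bindings shipped with Ceph):

    import rados
    import rbd

    NEW_SIZE = 1 << 30  # 1073741824 bytes, matching the log line

    with rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx, "1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk") as image:
                image.resize(NEW_SIZE)
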
Dec 06 07:31:17 compute-1 nova_compute[226101]: 2025-12-06 07:31:17.342 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:31:17 compute-1 nova_compute[226101]: 2025-12-06 07:31:17.357 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:31:17 compute-1 nova_compute[226101]: 2025-12-06 07:31:17.358 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:31:17 compute-1 nova_compute[226101]: 2025-12-06 07:31:17.359 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:17 compute-1 nova_compute[226101]: 2025-12-06 07:31:17.359 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:17 compute-1 nova_compute[226101]: 2025-12-06 07:31:17.359 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:17 compute-1 nova_compute[226101]: 2025-12-06 07:31:17.360 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:17 compute-1 nova_compute[226101]: 2025-12-06 07:31:17.360 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:17 compute-1 nova_compute[226101]: 2025-12-06 07:31:17.360 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:31:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:31:17.795 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:31:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:31:17.795 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:31:17 compute-1 nova_compute[226101]: 2025-12-06 07:31:17.852 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:18 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:18.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:18.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:18 compute-1 ceph-mon[81689]: pgmap v2215: 305 pgs: 305 active+clean; 177 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 89 op/s
Dec 06 07:31:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1200503693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:18 compute-1 ceph-mon[81689]: pgmap v2216: 305 pgs: 305 active+clean; 185 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.4 MiB/s wr, 86 op/s
Dec 06 07:31:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2339104030' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:31:19 compute-1 ovn_controller[130279]: 2025-12-06T07:31:19Z|00477|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:31:19 compute-1 nova_compute[226101]: 2025-12-06 07:31:19.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:19 compute-1 nova_compute[226101]: 2025-12-06 07:31:19.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:20.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:20.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:21 compute-1 nova_compute[226101]: 2025-12-06 07:31:21.315 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:21 compute-1 nova_compute[226101]: 2025-12-06 07:31:21.493 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:22.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:22 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:22.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:22 compute-1 ceph-mon[81689]: pgmap v2217: 305 pgs: 305 active+clean; 185 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.4 MiB/s wr, 97 op/s
Dec 06 07:31:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/920108480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:22 compute-1 ceph-mon[81689]: pgmap v2218: 305 pgs: 305 active+clean; 185 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 3.4 MiB/s wr, 52 op/s
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.104 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.104 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Ensure instance console log exists: /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.105 226109 DEBUG oslo_concurrency.lockutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.105 226109 DEBUG oslo_concurrency.lockutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.105 226109 DEBUG oslo_concurrency.lockutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.107 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.111 226109 WARNING nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.118 226109 DEBUG nova.virt.libvirt.host [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.119 226109 DEBUG nova.virt.libvirt.host [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.123 226109 DEBUG nova.virt.libvirt.host [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.123 226109 DEBUG nova.virt.libvirt.host [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.124 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.124 226109 DEBUG nova.virt.hardware [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.125 226109 DEBUG nova.virt.hardware [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.125 226109 DEBUG nova.virt.hardware [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.125 226109 DEBUG nova.virt.hardware [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.125 226109 DEBUG nova.virt.hardware [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.126 226109 DEBUG nova.virt.hardware [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.126 226109 DEBUG nova.virt.hardware [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.126 226109 DEBUG nova.virt.hardware [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.127 226109 DEBUG nova.virt.hardware [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.127 226109 DEBUG nova.virt.hardware [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.127 226109 DEBUG nova.virt.hardware [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
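
The topology walk above factors the vCPU count into sockets:cores:threads triples under the 65536-per-dimension ceiling; with a single vCPU the only factorization is 1:1:1, which is why exactly one topology is possible and chosen. A toy version of that enumeration (Nova's real logic in nova/virt/hardware.py is more involved; this only reproduces the counting):

    from itertools import product

    def possible_topologies(vcpus, limit=65536):
        # Yield every (sockets, cores, threads) triple whose product is the
        # vCPU count, within the per-dimension limit.
        dims = range(1, min(vcpus, limit) + 1)
        for s, c, t in product(dims, dims, dims):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
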
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.127 226109 DEBUG nova.objects.instance [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.176 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:31:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1817503598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:23 compute-1 nova_compute[226101]: 2025-12-06 07:31:23.907 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.731s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:24 compute-1 nova_compute[226101]: 2025-12-06 07:31:24.067 226109 DEBUG nova.storage.rbd_utils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:24 compute-1 nova_compute[226101]: 2025-12-06 07:31:24.071 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/759483955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:24 compute-1 ceph-mon[81689]: pgmap v2219: 305 pgs: 305 active+clean; 196 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 4.0 MiB/s wr, 74 op/s
Dec 06 07:31:24 compute-1 ceph-mon[81689]: pgmap v2220: 305 pgs: 305 active+clean; 199 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 2.6 MiB/s wr, 82 op/s
Dec 06 07:31:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:24.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:24 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:24.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:31:24 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4164864406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:24 compute-1 nova_compute[226101]: 2025-12-06 07:31:24.803 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.732s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:24 compute-1 nova_compute[226101]: 2025-12-06 07:31:24.806 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:31:24 compute-1 nova_compute[226101]:   <uuid>1ac46512-4df2-4f9b-b8f7-03f2d5a473c8</uuid>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   <name>instance-00000073</name>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerShowV254Test-server-2058420121</nova:name>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:31:23</nova:creationTime>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <nova:user uuid="a4f3cac602f942718452b29d8ede4536">tempest-ServerShowV254Test-1516172779-project-member</nova:user>
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <nova:project uuid="a739117bee9940b0a047ef0dfc826f46">tempest-ServerShowV254Test-1516172779</nova:project>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <system>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <entry name="serial">1ac46512-4df2-4f9b-b8f7-03f2d5a473c8</entry>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <entry name="uuid">1ac46512-4df2-4f9b-b8f7-03f2d5a473c8</entry>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     </system>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   <os>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   </os>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   <features>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   </features>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk">
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       </source>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config">
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       </source>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:31:24 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/console.log" append="off"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <video>
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     </video>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:31:24 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:31:24 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:31:24 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:31:24 compute-1 nova_compute[226101]: </domain>
Dec 06 07:31:24 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
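
The XML dumped above is what the driver hands to libvirt: both disks are type="network" RBD volumes in the vms pool, each listing the three monitors discovered by the mon dump calls and authenticating through the ceph secret 40a1bae4-cf76-5610-8dab-c75116dfe0bb. A small standard-library sketch that recovers the RBD image names and monitor endpoints from such a dump (element and attribute names follow the XML as logged):

    import xml.etree.ElementTree as ET

    def rbd_sources(domain_xml: str):
        """Yield (target_dev, rbd_image, [host:port, ...]) per network disk."""
        root = ET.fromstring(domain_xml)
        for disk in root.findall('./devices/disk[@type="network"]'):
            src = disk.find('source')
            if src is None or src.get('protocol') != 'rbd':
                continue
            hosts = ['%s:%s' % (h.get('name'), h.get('port'))
                     for h in src.findall('host')]
            yield disk.find('target').get('dev'), src.get('name'), hosts

Fed the <domain> above, it yields vda -> vms/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk and sda -> vms/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config, each with the three mons on port 6789.
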
Dec 06 07:31:24 compute-1 nova_compute[226101]: 2025-12-06 07:31:24.977 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:31:24 compute-1 nova_compute[226101]: 2025-12-06 07:31:24.978 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:31:24 compute-1 nova_compute[226101]: 2025-12-06 07:31:24.978 226109 INFO nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Using config drive
Dec 06 07:31:25 compute-1 nova_compute[226101]: 2025-12-06 07:31:25.020 226109 DEBUG nova.storage.rbd_utils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:25 compute-1 nova_compute[226101]: 2025-12-06 07:31:25.058 226109 DEBUG nova.objects.instance [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:31:25 compute-1 nova_compute[226101]: 2025-12-06 07:31:25.349 226109 INFO nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Creating config drive at /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config
Dec 06 07:31:25 compute-1 nova_compute[226101]: 2025-12-06 07:31:25.354 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph5ihhszy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:25 compute-1 nova_compute[226101]: 2025-12-06 07:31:25.495 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph5ihhszy" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:25 compute-1 nova_compute[226101]: 2025-12-06 07:31:25.521 226109 DEBUG nova.storage.rbd_utils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] rbd image 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:25 compute-1 nova_compute[226101]: 2025-12-06 07:31:25.524 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:25 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:31:25.797 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:31:25 compute-1 nova_compute[226101]: 2025-12-06 07:31:25.818 226109 DEBUG oslo_concurrency.processutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:25 compute-1 nova_compute[226101]: 2025-12-06 07:31:25.819 226109 INFO nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Deleting local config drive /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8/disk.config because it was imported into RBD.
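
The 07:31:25.354 through 07:31:25.819 entries are the whole config-drive path for an RBD-backed instance: build an ISO9660 image locally with mkisofs (volume label config-2, the label cloud-init scans for), rbd-import it into the vms pool as <uuid>_disk.config — the image the earlier "does not exist" probes were checking — then delete the local copy. A rough reproduction of that sequence with subprocess, assuming a staged metadata tree at the hypothetical /tmp/config_drive (the -publisher string from the logged command is omitted for brevity):

    import subprocess

    uuid = '1ac46512-4df2-4f9b-b8f7-03f2d5a473c8'
    iso = f'/var/lib/nova/instances/{uuid}/disk.config'

    # 1. Build the ISO much as logged; '-V config-2' sets the volume
    #    label that marks the image as a config drive.
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                    '-allow-lowercase', '-allow-multidot', '-l', '-quiet',
                    '-J', '-r', '-V', 'config-2', '/tmp/config_drive'],
                   check=True)

    # 2. Import it into Ceph so the SATA cdrom in the domain XML above
    #    can attach it; afterwards the local file is redundant.
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{uuid}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
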
Dec 06 07:31:25 compute-1 systemd-machined[190302]: New machine qemu-55-instance-00000073.
Dec 06 07:31:25 compute-1 systemd[1]: Started Virtual Machine qemu-55-instance-00000073.
Dec 06 07:31:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1817503598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:26 compute-1 ceph-mon[81689]: pgmap v2221: 305 pgs: 305 active+clean; 202 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 2.4 MiB/s wr, 103 op/s
Dec 06 07:31:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4164864406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/249822656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.317 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.430 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006286.4298174, 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.431 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] VM Resumed (Lifecycle Event)
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.433 226109 DEBUG nova.compute.manager [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.433 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.436 226109 INFO nova.virt.libvirt.driver [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance spawned successfully.
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.436 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.455 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.459 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.463 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.463 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.464 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.464 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.465 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.465 226109 DEBUG nova.virt.libvirt.driver [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.487 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.488 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006286.4308937, 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.488 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] VM Started (Lifecycle Event)
Dec 06 07:31:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:26 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:26.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:26.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.494 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.527 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.532 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.539 226109 DEBUG nova.compute.manager [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.584 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.617 226109 DEBUG oslo_concurrency.lockutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.618 226109 DEBUG oslo_concurrency.lockutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.618 226109 DEBUG nova.objects.instance [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:31:26 compute-1 nova_compute[226101]: 2025-12-06 07:31:26.684 226109 DEBUG oslo_concurrency.lockutils [None req-8ac46d6e-7eee-43fe-9aaf-979b363c4d33 a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
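
The three lockutils lines above are the standard acquire/acquired/released trace that oslo.concurrency emits around a decorated critical section; here the resource tracker holds "compute_resources" for 0.066s to record the finished evacuation. A minimal sketch of the pattern that produces such a trace, with an illustrative function name:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_tracker():
        # Body runs with the named semaphore held; the inner wrapper in
        # lockutils logs the "Acquiring" / "acquired" / "released" lines
        # seen above, including how long the lock was waited on and held.
        pass

    update_tracker()
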
Dec 06 07:31:27 compute-1 ceph-mon[81689]: pgmap v2222: 305 pgs: 305 active+clean; 202 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 174 op/s
Dec 06 07:31:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.287 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquiring lock "1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.289 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.289 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquiring lock "1ac46512-4df2-4f9b-b8f7-03f2d5a473c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.289 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "1ac46512-4df2-4f9b-b8f7-03f2d5a473c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.290 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "1ac46512-4df2-4f9b-b8f7-03f2d5a473c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.291 226109 INFO nova.compute.manager [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Terminating instance
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.292 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquiring lock "refresh_cache-1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.292 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquired lock "refresh_cache-1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.292 226109 DEBUG nova.network.neutron [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.480 226109 DEBUG nova.network.neutron [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:31:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:28.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:28 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:28.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.807 226109 DEBUG nova.network.neutron [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.832 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Releasing lock "refresh_cache-1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:31:28 compute-1 nova_compute[226101]: 2025-12-06 07:31:28.833 226109 DEBUG nova.compute.manager [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:31:28 compute-1 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000073.scope: Deactivated successfully.
Dec 06 07:31:28 compute-1 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000073.scope: Consumed 2.959s CPU time.
Dec 06 07:31:28 compute-1 systemd-machined[190302]: Machine qemu-55-instance-00000073 terminated.
Dec 06 07:31:29 compute-1 nova_compute[226101]: 2025-12-06 07:31:29.049 226109 INFO nova.virt.libvirt.driver [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance destroyed successfully.
Dec 06 07:31:29 compute-1 nova_compute[226101]: 2025-12-06 07:31:29.050 226109 DEBUG nova.objects.instance [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lazy-loading 'resources' on Instance uuid 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:31:29 compute-1 ceph-mon[81689]: pgmap v2223: 305 pgs: 305 active+clean; 202 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 162 op/s
Dec 06 07:31:30 compute-1 ceph-mon[81689]: pgmap v2224: 305 pgs: 305 active+clean; 181 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.2 MiB/s wr, 278 op/s
Dec 06 07:31:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:30.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:30 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:30.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:31 compute-1 nova_compute[226101]: 2025-12-06 07:31:31.319 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:31 compute-1 nova_compute[226101]: 2025-12-06 07:31:31.496 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:32.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:32 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:32.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:33 compute-1 ceph-mon[81689]: pgmap v2225: 305 pgs: 305 active+clean; 170 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 508 KiB/s wr, 283 op/s
Dec 06 07:31:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:34.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:34 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:34.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:35 compute-1 ceph-mon[81689]: pgmap v2226: 305 pgs: 305 active+clean; 162 MiB data, 910 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 365 KiB/s wr, 277 op/s
Dec 06 07:31:36 compute-1 nova_compute[226101]: 2025-12-06 07:31:36.320 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:36 compute-1 nova_compute[226101]: 2025-12-06 07:31:36.498 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:36.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:36 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:36.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/449106417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:36 compute-1 ceph-mon[81689]: pgmap v2227: 305 pgs: 305 active+clean; 139 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 117 KiB/s wr, 287 op/s
Dec 06 07:31:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:38 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:38.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:38.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:39.340737) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006299340822, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2521, "num_deletes": 253, "total_data_size": 6388205, "memory_usage": 6483840, "flush_reason": "Manual Compaction"}
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006299521962, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 4120130, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45049, "largest_seqno": 47565, "table_properties": {"data_size": 4109543, "index_size": 6761, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22728, "raw_average_key_size": 21, "raw_value_size": 4088292, "raw_average_value_size": 3796, "num_data_blocks": 293, "num_entries": 1077, "num_filter_entries": 1077, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006024, "oldest_key_time": 1765006024, "file_creation_time": 1765006299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 181259 microseconds, and 10824 cpu microseconds.
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:39.522010) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 4120130 bytes OK
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:39.522030) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:39.831518) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:39.831572) EVENT_LOG_v1 {"time_micros": 1765006299831560, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:39.831598) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 6376977, prev total WAL file size 6376977, number of live WAL files 2.
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:39.833380) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(4023KB)], [87(10MB)]
Dec 06 07:31:39 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006299833490, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 15230536, "oldest_snapshot_seqno": -1}
Dec 06 07:31:39 compute-1 nova_compute[226101]: 2025-12-06 07:31:39.914 226109 INFO nova.virt.libvirt.driver [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Deleting instance files /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_del
Dec 06 07:31:39 compute-1 nova_compute[226101]: 2025-12-06 07:31:39.915 226109 INFO nova.virt.libvirt.driver [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Deletion of /var/lib/nova/instances/1ac46512-4df2-4f9b-b8f7-03f2d5a473c8_del complete
Dec 06 07:31:40 compute-1 nova_compute[226101]: 2025-12-06 07:31:40.066 226109 INFO nova.compute.manager [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Took 11.23 seconds to destroy the instance on the hypervisor.
Dec 06 07:31:40 compute-1 nova_compute[226101]: 2025-12-06 07:31:40.067 226109 DEBUG oslo.service.loopingcall [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:31:40 compute-1 nova_compute[226101]: 2025-12-06 07:31:40.067 226109 DEBUG nova.compute.manager [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:31:40 compute-1 nova_compute[226101]: 2025-12-06 07:31:40.068 226109 DEBUG nova.network.neutron [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:31:40 compute-1 nova_compute[226101]: 2025-12-06 07:31:40.351 226109 DEBUG nova.network.neutron [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:31:40 compute-1 nova_compute[226101]: 2025-12-06 07:31:40.388 226109 DEBUG nova.network.neutron [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:31:40 compute-1 nova_compute[226101]: 2025-12-06 07:31:40.411 226109 INFO nova.compute.manager [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Took 0.34 seconds to deallocate network for instance.
Dec 06 07:31:40 compute-1 nova_compute[226101]: 2025-12-06 07:31:40.482 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:40 compute-1 nova_compute[226101]: 2025-12-06 07:31:40.482 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:40.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:40 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:40.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:40 compute-1 nova_compute[226101]: 2025-12-06 07:31:40.562 226109 DEBUG oslo_concurrency.processutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7900 keys, 13150652 bytes, temperature: kUnknown
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006301127043, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 13150652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13096245, "index_size": 33559, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19781, "raw_key_size": 204338, "raw_average_key_size": 25, "raw_value_size": 12953797, "raw_average_value_size": 1639, "num_data_blocks": 1327, "num_entries": 7900, "num_filter_entries": 7900, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765006299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:31:41 compute-1 nova_compute[226101]: 2025-12-06 07:31:41.327 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:41 compute-1 nova_compute[226101]: 2025-12-06 07:31:41.499 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:41.127454) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 13150652 bytes
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:41.640976) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 11.8 rd, 10.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 10.6 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 8427, records dropped: 527 output_compression: NoCompression
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:41.641019) EVENT_LOG_v1 {"time_micros": 1765006301641005, "job": 54, "event": "compaction_finished", "compaction_time_micros": 1293713, "compaction_time_cpu_micros": 30144, "output_level": 6, "num_output_files": 1, "total_output_size": 13150652, "num_input_records": 8427, "num_output_records": 7900, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006301642007, "job": 54, "event": "table_file_deletion", "file_number": 89}
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006301644220, "job": 54, "event": "table_file_deletion", "file_number": 87}
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:39.833294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:41.644336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:41.644341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:41.644343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:41.644344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:41 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:31:41.644346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:42.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:42 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:42.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:42 compute-1 ceph-mon[81689]: pgmap v2228: 305 pgs: 305 active+clean; 139 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 114 KiB/s wr, 199 op/s
Dec 06 07:31:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:31:43 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1659011349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:43 compute-1 nova_compute[226101]: 2025-12-06 07:31:43.248 226109 DEBUG oslo_concurrency.processutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:43 compute-1 nova_compute[226101]: 2025-12-06 07:31:43.254 226109 DEBUG nova.compute.provider_tree [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:31:43 compute-1 nova_compute[226101]: 2025-12-06 07:31:43.271 226109 DEBUG nova.scheduler.client.report [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:31:43 compute-1 nova_compute[226101]: 2025-12-06 07:31:43.292 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:43 compute-1 nova_compute[226101]: 2025-12-06 07:31:43.320 226109 INFO nova.scheduler.client.report [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Deleted allocations for instance 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8
Dec 06 07:31:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:43 compute-1 nova_compute[226101]: 2025-12-06 07:31:43.420 226109 DEBUG oslo_concurrency.lockutils [None req-470fe6c7-a787-422b-9166-334c703e5fcb a4f3cac602f942718452b29d8ede4536 a739117bee9940b0a047ef0dfc826f46 - - default default] Lock "1ac46512-4df2-4f9b-b8f7-03f2d5a473c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:44 compute-1 nova_compute[226101]: 2025-12-06 07:31:44.048 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006289.0465734, 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:31:44 compute-1 nova_compute[226101]: 2025-12-06 07:31:44.048 226109 INFO nova.compute.manager [-] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] VM Stopped (Lifecycle Event)
Dec 06 07:31:44 compute-1 nova_compute[226101]: 2025-12-06 07:31:44.112 226109 DEBUG nova.compute.manager [None req-cb2300b2-2390-4efa-ab65-0b81137b9161 - - - - - -] [instance: 1ac46512-4df2-4f9b-b8f7-03f2d5a473c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:31:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:44.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:44 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:44.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:45 compute-1 podman[271459]: 2025-12-06 07:31:45.078177295 +0000 UTC m=+0.058679245 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:31:45 compute-1 podman[271458]: 2025-12-06 07:31:45.081255947 +0000 UTC m=+0.062631511 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 06 07:31:45 compute-1 podman[271460]: 2025-12-06 07:31:45.103804209 +0000 UTC m=+0.081888015 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:31:45 compute-1 ceph-mon[81689]: pgmap v2229: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 124 KiB/s wr, 222 op/s
Dec 06 07:31:45 compute-1 ceph-mon[81689]: pgmap v2230: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 212 KiB/s rd, 65 KiB/s wr, 109 op/s
Dec 06 07:31:46 compute-1 nova_compute[226101]: 2025-12-06 07:31:46.329 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:46 compute-1 nova_compute[226101]: 2025-12-06 07:31:46.501 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:46.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:46.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1659011349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:47 compute-1 ceph-mon[81689]: pgmap v2231: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 52 KiB/s wr, 83 op/s
Dec 06 07:31:47 compute-1 ceph-mon[81689]: pgmap v2232: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 21 KiB/s wr, 77 op/s
Dec 06 07:31:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:48.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:48.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:50.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:50.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:51 compute-1 ceph-mon[81689]: pgmap v2233: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 33 op/s
Dec 06 07:31:51 compute-1 nova_compute[226101]: 2025-12-06 07:31:51.333 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:51 compute-1 nova_compute[226101]: 2025-12-06 07:31:51.503 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:52.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:31:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:52.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:53 compute-1 ceph-mon[81689]: pgmap v2234: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 13 KiB/s wr, 40 op/s
Dec 06 07:31:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:54.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:54.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:54 compute-1 nova_compute[226101]: 2025-12-06 07:31:54.756 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:54 compute-1 nova_compute[226101]: 2025-12-06 07:31:54.756 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:54 compute-1 nova_compute[226101]: 2025-12-06 07:31:54.770 226109 DEBUG nova.compute.manager [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:31:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:54 compute-1 nova_compute[226101]: 2025-12-06 07:31:54.854 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:54 compute-1 nova_compute[226101]: 2025-12-06 07:31:54.854 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:54 compute-1 nova_compute[226101]: 2025-12-06 07:31:54.861 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:31:54 compute-1 nova_compute[226101]: 2025-12-06 07:31:54.861 226109 INFO nova.compute.claims [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:31:54 compute-1 nova_compute[226101]: 2025-12-06 07:31:54.964 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:56 compute-1 nova_compute[226101]: 2025-12-06 07:31:56.337 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:56 compute-1 nova_compute[226101]: 2025-12-06 07:31:56.505 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:56.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:56.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:56 compute-1 ceph-mon[81689]: pgmap v2235: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 3.6 KiB/s wr, 18 op/s
Dec 06 07:31:57 compute-1 nova_compute[226101]: 2025-12-06 07:31:57.089 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Acquiring lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:57 compute-1 nova_compute[226101]: 2025-12-06 07:31:57.089 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:57 compute-1 nova_compute[226101]: 2025-12-06 07:31:57.106 226109 DEBUG nova.compute.manager [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:31:57 compute-1 nova_compute[226101]: 2025-12-06 07:31:57.183 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:31:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3610760591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.344 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.351 226109 DEBUG nova.compute.provider_tree [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.369 226109 DEBUG nova.scheduler.client.report [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.392 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.393 226109 DEBUG nova.compute.manager [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.396 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.402 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.402 226109 INFO nova.compute.claims [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:31:58 compute-1 ceph-mon[81689]: pgmap v2236: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.4 KiB/s wr, 19 op/s
Dec 06 07:31:58 compute-1 ceph-mon[81689]: pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 978 B/s wr, 22 op/s
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.462 226109 DEBUG nova.compute.manager [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.463 226109 DEBUG nova.network.neutron [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.488 226109 INFO nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.520 226109 DEBUG nova.compute.manager [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:31:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:58.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:31:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:58.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.580 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.611 226109 DEBUG nova.compute.manager [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.613 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.613 226109 INFO nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Creating image(s)
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.732 226109 DEBUG nova.storage.rbd_utils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.768 226109 DEBUG nova.storage.rbd_utils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.803 226109 DEBUG nova.storage.rbd_utils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.807 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.841 226109 DEBUG nova.policy [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a70f6c3c5e2c402bb6fa0e0507e9b6dc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b10aa03d68eb4d4799d53538521cc364', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.882 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.882 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.883 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.883 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.911 226109 DEBUG nova.storage.rbd_utils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:58 compute-1 nova_compute[226101]: 2025-12-06 07:31:58.915 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:31:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/53891712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.044 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.050 226109 DEBUG nova.compute.provider_tree [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.072 226109 DEBUG nova.scheduler.client.report [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.101 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.102 226109 DEBUG nova.compute.manager [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.167 226109 DEBUG nova.compute.manager [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.167 226109 DEBUG nova.network.neutron [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.192 226109 INFO nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.216 226109 DEBUG nova.compute.manager [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.346 226109 DEBUG nova.compute.manager [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.348 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.349 226109 INFO nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Creating image(s)
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.384 226109 DEBUG nova.storage.rbd_utils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] rbd image 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.416 226109 DEBUG nova.storage.rbd_utils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] rbd image 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.444 226109 DEBUG nova.storage.rbd_utils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] rbd image 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.448 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.476 226109 DEBUG nova.policy [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3f95d5fc3ae84153badf13157ceaf23e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39801d5920c640488d08116f4121fc25', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.510 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.511 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.511 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.512 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.536 226109 DEBUG nova.storage.rbd_utils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] rbd image 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:59 compute-1 nova_compute[226101]: 2025-12-06 07:31:59.539 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3610760591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:59 compute-1 ceph-mon[81689]: pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.0 KiB/s wr, 19 op/s
Dec 06 07:31:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/53891712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:00.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:00.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:01 compute-1 nova_compute[226101]: 2025-12-06 07:32:01.273 226109 DEBUG nova.network.neutron [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Successfully created port: 835de58d-7832-467f-9464-dbbc619e219a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:32:01 compute-1 nova_compute[226101]: 2025-12-06 07:32:01.339 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:01 compute-1 nova_compute[226101]: 2025-12-06 07:32:01.507 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:01 compute-1 nova_compute[226101]: 2025-12-06 07:32:01.554 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:01.650 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:01.650 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:01.650 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:02 compute-1 nova_compute[226101]: 2025-12-06 07:32:02.463 226109 DEBUG nova.network.neutron [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Successfully updated port: 835de58d-7832-467f-9464-dbbc619e219a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:32:02 compute-1 nova_compute[226101]: 2025-12-06 07:32:02.465 226109 DEBUG nova.network.neutron [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Successfully created port: 844228dd-a09d-4ac5-bca7-4a4f664afd31 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:32:02 compute-1 nova_compute[226101]: 2025-12-06 07:32:02.511 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Acquiring lock "refresh_cache-2364e72a-c2a9-494b-a0f0-ee3bf62444af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:32:02 compute-1 nova_compute[226101]: 2025-12-06 07:32:02.512 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Acquired lock "refresh_cache-2364e72a-c2a9-494b-a0f0-ee3bf62444af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:32:02 compute-1 nova_compute[226101]: 2025-12-06 07:32:02.512 226109 DEBUG nova.network.neutron [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:32:02 compute-1 nova_compute[226101]: 2025-12-06 07:32:02.519 226109 DEBUG nova.storage.rbd_utils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] resizing rbd image a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
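The resize target of 1073741824 bytes is exactly 1 GiB, matching the root_gb=1 of the m1.nano flavor shown further down: the import creates the RBD image at the base file's size, so Nova grows it to the flavor's disk size afterwards.

    # 1 GiB in bytes, the m1.nano root disk size (root_gb=1):
    assert 1 * 1024 ** 3 == 1073741824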
Dec 06 07:32:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:02.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:02.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
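The anonymous "HEAD /" pairs from 192.168.122.100 and .102 recur roughly every two seconds throughout the log, which is the shape of load-balancer or monitoring health checks from the other two nodes rather than real S3 traffic. A hypothetical probe of the same kind; both the address and the port are guesses, since the log shows neither the beast listening endpoint nor what sends the checks:

    # Hypothetical health probe; 192.168.122.101:8080 is an assumption.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.101', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)   # the logged probes all return 200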
Dec 06 07:32:02 compute-1 nova_compute[226101]: 2025-12-06 07:32:02.606 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:02 compute-1 nova_compute[226101]: 2025-12-06 07:32:02.698 226109 DEBUG nova.storage.rbd_utils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] resizing rbd image 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:32:03 compute-1 ceph-mon[81689]: pgmap v2239: 305 pgs: 305 active+clean; 134 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 572 KiB/s wr, 47 op/s
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.352 226109 DEBUG nova.objects.instance [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'migration_context' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.376 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.377 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Ensure instance console log exists: /var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.377 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.377 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.378 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.396 226109 DEBUG nova.compute.manager [req-369039cf-f23f-49f8-8b9c-c9ad27b1fd90 req-33b27d94-20c2-44ae-ab11-652f6523244e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Received event network-changed-835de58d-7832-467f-9464-dbbc619e219a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.396 226109 DEBUG nova.compute.manager [req-369039cf-f23f-49f8-8b9c-c9ad27b1fd90 req-33b27d94-20c2-44ae-ab11-652f6523244e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Refreshing instance network info cache due to event network-changed-835de58d-7832-467f-9464-dbbc619e219a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.397 226109 DEBUG oslo_concurrency.lockutils [req-369039cf-f23f-49f8-8b9c-c9ad27b1fd90 req-33b27d94-20c2-44ae-ab11-652f6523244e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2364e72a-c2a9-494b-a0f0-ee3bf62444af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.452 226109 DEBUG nova.network.neutron [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.844 226109 DEBUG nova.objects.instance [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lazy-loading 'migration_context' on Instance uuid 2364e72a-c2a9-494b-a0f0-ee3bf62444af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.881 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.882 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Ensure instance console log exists: /var/lib/nova/instances/2364e72a-c2a9-494b-a0f0-ee3bf62444af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.882 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.883 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:03 compute-1 nova_compute[226101]: 2025-12-06 07:32:03.883 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:04 compute-1 ceph-mon[81689]: pgmap v2240: 305 pgs: 305 active+clean; 134 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 572 KiB/s wr, 40 op/s
Dec 06 07:32:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:04.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:04.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.836 226109 DEBUG nova.network.neutron [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Updating instance_info_cache with network_info: [{"id": "835de58d-7832-467f-9464-dbbc619e219a", "address": "fa:16:3e:fb:f6:ff", "network": {"id": "4eedc8db-4948-45cd-825b-cc2f01de39f7", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1649873010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39801d5920c640488d08116f4121fc25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap835de58d-78", "ovs_interfaceid": "835de58d-7832-467f-9464-dbbc619e219a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
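The instance_info_cache payload above is plain JSON, so the useful facts (port ID, MAC, fixed IP, MTU) can be pulled out mechanically when debugging. Illustrative only, with the payload trimmed down to the fields actually read:

    # Illustrative: extract the interesting fields from the cache
    # payload logged above (trimmed copy, still valid JSON).
    import json

    vifs = json.loads('''[{"id": "835de58d-7832-467f-9464-dbbc619e219a",
        "address": "fa:16:3e:fb:f6:ff",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.8"}]}],
                    "meta": {"mtu": 1442}}}]''')

    for vif in vifs:
        print(vif['id'], vif['address'],
              vif['network']['subnets'][0]['ips'][0]['address'],
              vif['network']['meta']['mtu'])
    # -> 835de58d-...  fa:16:3e:fb:f6:ff  10.100.0.8  1442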
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.900 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Releasing lock "refresh_cache-2364e72a-c2a9-494b-a0f0-ee3bf62444af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.900 226109 DEBUG nova.compute.manager [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Instance network_info: |[{"id": "835de58d-7832-467f-9464-dbbc619e219a", "address": "fa:16:3e:fb:f6:ff", "network": {"id": "4eedc8db-4948-45cd-825b-cc2f01de39f7", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1649873010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39801d5920c640488d08116f4121fc25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap835de58d-78", "ovs_interfaceid": "835de58d-7832-467f-9464-dbbc619e219a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.901 226109 DEBUG oslo_concurrency.lockutils [req-369039cf-f23f-49f8-8b9c-c9ad27b1fd90 req-33b27d94-20c2-44ae-ab11-652f6523244e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2364e72a-c2a9-494b-a0f0-ee3bf62444af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.901 226109 DEBUG nova.network.neutron [req-369039cf-f23f-49f8-8b9c-c9ad27b1fd90 req-33b27d94-20c2-44ae-ab11-652f6523244e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Refreshing network info cache for port 835de58d-7832-467f-9464-dbbc619e219a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.905 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Start _get_guest_xml network_info=[{"id": "835de58d-7832-467f-9464-dbbc619e219a", "address": "fa:16:3e:fb:f6:ff", "network": {"id": "4eedc8db-4948-45cd-825b-cc2f01de39f7", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1649873010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39801d5920c640488d08116f4121fc25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap835de58d-78", "ovs_interfaceid": "835de58d-7832-467f-9464-dbbc619e219a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.909 226109 WARNING nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.915 226109 DEBUG nova.virt.libvirt.host [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.916 226109 DEBUG nova.virt.libvirt.host [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.921 226109 DEBUG nova.virt.libvirt.host [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.923 226109 DEBUG nova.virt.libvirt.host [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
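The two probes above show the driver first missing a cgroups-v1 cpu controller and then finding one on the unified v2 hierarchy, as expected on an EL9 host. On a pure cgroups-v2 system that check reduces to a single file read; a simplified stand-in, not Nova's code:

    # Simplified stand-in for the check logged above: the cgroups-v2
    # root lists its available controllers in one file.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        print('cpu' in f.read().split())   # True on this host, per the log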
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.924 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.924 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.925 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.925 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.925 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.926 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.926 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.926 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.926 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.927 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.927 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.927 226109 DEBUG nova.virt.hardware [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
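The topology walk above starts from no constraints at all (flavor and image preferences 0:0:0 against ceiling limits of 65536 per axis), so the driver enumerates every (sockets, cores, threads) factorization of the vCPU count; for a single vCPU that leaves exactly the one 1:1:1 topology the log reports. A simplified illustration of the enumeration, not Nova's implementation:

    # Simplified illustration: all topologies whose product equals
    # the vCPU count, within the (here irrelevant) per-axis limits.
    def possible_topologies(vcpus, limit=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, limit) + 1)
                for c in range(1, min(vcpus, limit) + 1)
                for t in range(1, min(vcpus, limit) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)], matching the log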
Dec 06 07:32:04 compute-1 nova_compute[226101]: 2025-12-06 07:32:04.930 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:32:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3743405159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.393 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
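"ceph mon dump --format=json" is how Nova discovers the monitor addresses that reappear as the <host> elements of the RBD disks in the guest XML below. A sketch of reading them the same way; it needs the same client.openstack credentials the compute host holds:

    # Sketch: list the monitors the way the guest XML below consumes
    # them. Requires the client.openstack keyring used by nova.
    import json
    import subprocess

    dump = json.loads(subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']))
    for mon in dump['mons']:
        print(mon['name'], mon['addr'])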
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.416 226109 DEBUG nova.storage.rbd_utils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] rbd image 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.419 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:32:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1789558873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.851 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.853 226109 DEBUG nova.virt.libvirt.vif [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-323574989',display_name='tempest-ServerPasswordTestJSON-server-323574989',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-323574989',id=119,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39801d5920c640488d08116f4121fc25',ramdisk_id='',reservation_id='r-860ivol8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-804103674',owner_user_name='tempest-ServerPasswordTestJSON-804103674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:31:59Z,user_data=None,user_id='3f95d5fc3ae84153badf13157ceaf23e',uuid=2364e72a-c2a9-494b-a0f0-ee3bf62444af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "835de58d-7832-467f-9464-dbbc619e219a", "address": "fa:16:3e:fb:f6:ff", "network": {"id": "4eedc8db-4948-45cd-825b-cc2f01de39f7", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1649873010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39801d5920c640488d08116f4121fc25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap835de58d-78", "ovs_interfaceid": "835de58d-7832-467f-9464-dbbc619e219a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.853 226109 DEBUG nova.network.os_vif_util [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Converting VIF {"id": "835de58d-7832-467f-9464-dbbc619e219a", "address": "fa:16:3e:fb:f6:ff", "network": {"id": "4eedc8db-4948-45cd-825b-cc2f01de39f7", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1649873010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39801d5920c640488d08116f4121fc25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap835de58d-78", "ovs_interfaceid": "835de58d-7832-467f-9464-dbbc619e219a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.854 226109 DEBUG nova.network.os_vif_util [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:f6:ff,bridge_name='br-int',has_traffic_filtering=True,id=835de58d-7832-467f-9464-dbbc619e219a,network=Network(4eedc8db-4948-45cd-825b-cc2f01de39f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap835de58d-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.855 226109 DEBUG nova.objects.instance [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2364e72a-c2a9-494b-a0f0-ee3bf62444af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.886 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:32:05 compute-1 nova_compute[226101]:   <uuid>2364e72a-c2a9-494b-a0f0-ee3bf62444af</uuid>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   <name>instance-00000077</name>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerPasswordTestJSON-server-323574989</nova:name>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:32:04</nova:creationTime>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <nova:user uuid="3f95d5fc3ae84153badf13157ceaf23e">tempest-ServerPasswordTestJSON-804103674-project-member</nova:user>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <nova:project uuid="39801d5920c640488d08116f4121fc25">tempest-ServerPasswordTestJSON-804103674</nova:project>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <nova:port uuid="835de58d-7832-467f-9464-dbbc619e219a">
Dec 06 07:32:05 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <system>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <entry name="serial">2364e72a-c2a9-494b-a0f0-ee3bf62444af</entry>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <entry name="uuid">2364e72a-c2a9-494b-a0f0-ee3bf62444af</entry>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     </system>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   <os>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   </os>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   <features>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   </features>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk">
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       </source>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk.config">
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       </source>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:32:05 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:fb:f6:ff"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <target dev="tap835de58d-78"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/2364e72a-c2a9-494b-a0f0-ee3bf62444af/console.log" append="off"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <video>
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     </video>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:32:05 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:32:05 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:32:05 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:32:05 compute-1 nova_compute[226101]: </domain>
Dec 06 07:32:05 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
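The domain XML above ties together everything logged so far: the custom Nehalem CPU model chosen earlier, the 1:1:1 topology, two RBD-backed disks pointing at the three monitors from the mon dump, and the tap835de58d-78 interface about to be plugged. When working from a saved copy of such XML, the disk wiring can be listed mechanically; guest.xml here is a hypothetical file name:

    # Illustrative: list each disk's RBD image and monitor hosts from
    # a saved copy of the XML above (guest.xml is a hypothetical name).
    import xml.etree.ElementTree as ET

    root = ET.parse('guest.xml').getroot()
    for disk in root.findall('./devices/disk'):
        src = disk.find('source')
        print(src.get('name'),
              [h.get('name') for h in src.findall('host')])
    # -> vms/2364e72a-..._disk ['192.168.122.100', '192.168.122.102',
    #    '192.168.122.101'], and likewise for the config-drive cdrom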
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.888 226109 DEBUG nova.compute.manager [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Preparing to wait for external event network-vif-plugged-835de58d-7832-467f-9464-dbbc619e219a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.888 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Acquiring lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.888 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.888 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
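The "-events" lock above guards Nova's per-instance table of expected external events: before plugging the VIF and starting the guest, the manager registers a waiter for network-vif-plugged-835de58d-... so that the event Neutron sends once the port goes active can unblock the spawn. A much-simplified illustration of that register-then-signal pattern, in plain threading rather than Nova's eventlet implementation, with illustrative names:

    # Much-simplified register/signal pattern behind
    # prepare_for_instance_event; prepare/deliver are illustrative names.
    import threading

    _events = {}

    def prepare(tag):
        _events[tag] = threading.Event()
        return _events[tag]

    def deliver(tag):
        _events[tag].set()

    waiter = prepare('network-vif-plugged-835de58d')
    deliver('network-vif-plugged-835de58d')  # normally sent by neutron
    waiter.wait(timeout=300)  # nova's vif_plugging_timeout defaults to 300s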
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.889 226109 DEBUG nova.virt.libvirt.vif [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-323574989',display_name='tempest-ServerPasswordTestJSON-server-323574989',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-323574989',id=119,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39801d5920c640488d08116f4121fc25',ramdisk_id='',reservation_id='r-860ivol8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-804103674',owner_user_name='tempest-ServerPasswordTestJSON-804103674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:31:59Z,user_data=None,user_id='3f95d5fc3ae84153badf13157ceaf23e',uuid=2364e72a-c2a9-494b-a0f0-ee3bf62444af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "835de58d-7832-467f-9464-dbbc619e219a", "address": "fa:16:3e:fb:f6:ff", "network": {"id": "4eedc8db-4948-45cd-825b-cc2f01de39f7", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1649873010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39801d5920c640488d08116f4121fc25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap835de58d-78", "ovs_interfaceid": "835de58d-7832-467f-9464-dbbc619e219a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.889 226109 DEBUG nova.network.os_vif_util [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Converting VIF {"id": "835de58d-7832-467f-9464-dbbc619e219a", "address": "fa:16:3e:fb:f6:ff", "network": {"id": "4eedc8db-4948-45cd-825b-cc2f01de39f7", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1649873010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39801d5920c640488d08116f4121fc25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap835de58d-78", "ovs_interfaceid": "835de58d-7832-467f-9464-dbbc619e219a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.890 226109 DEBUG nova.network.os_vif_util [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:f6:ff,bridge_name='br-int',has_traffic_filtering=True,id=835de58d-7832-467f-9464-dbbc619e219a,network=Network(4eedc8db-4948-45cd-825b-cc2f01de39f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap835de58d-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.890 226109 DEBUG os_vif [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:f6:ff,bridge_name='br-int',has_traffic_filtering=True,id=835de58d-7832-467f-9464-dbbc619e219a,network=Network(4eedc8db-4948-45cd-825b-cc2f01de39f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap835de58d-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.890 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.891 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.891 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.895 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.895 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap835de58d-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.895 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap835de58d-78, col_values=(('external_ids', {'iface-id': '835de58d-7832-467f-9464-dbbc619e219a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:f6:ff', 'vm-uuid': '2364e72a-c2a9-494b-a0f0-ee3bf62444af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
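The two ovsdbapp commands above are the programmatic form of a single ovs-vsctl transaction: add the tap device to br-int (may_exist=True maps to --may-exist) and stamp the Interface row with the external_ids that let OVN bind the port to its logical switch port. An illustrative equivalent, values copied from the logged col_values:

    # Illustrative ovs-vsctl equivalent of the AddPortCommand and
    # DbSetCommand transaction logged above.
    import subprocess

    subprocess.check_call([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap835de58d-78',
        '--', 'set', 'Interface', 'tap835de58d-78',
        'external_ids:iface-id=835de58d-7832-467f-9464-dbbc619e219a',
        'external_ids:iface-status=active',
        'external_ids:attached-mac="fa:16:3e:fb:f6:ff"',
        'external_ids:vm-uuid=2364e72a-c2a9-494b-a0f0-ee3bf62444af'])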
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.896 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:05 compute-1 NetworkManager[49031]: <info>  [1765006325.8976] manager: (tap835de58d-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.900 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.904 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.905 226109 INFO os_vif [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:f6:ff,bridge_name='br-int',has_traffic_filtering=True,id=835de58d-7832-467f-9464-dbbc619e219a,network=Network(4eedc8db-4948-45cd-825b-cc2f01de39f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap835de58d-78')
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.913 226109 DEBUG nova.network.neutron [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Successfully updated port: 844228dd-a09d-4ac5-bca7-4a4f664afd31 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.929 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.929 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:32:05 compute-1 nova_compute[226101]: 2025-12-06 07:32:05.929 226109 DEBUG nova.network.neutron [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:32:06 compute-1 nova_compute[226101]: 2025-12-06 07:32:06.066 226109 DEBUG nova.compute.manager [req-f01d207d-5f2c-460b-8843-d7acc5708e26 req-05c4e4dc-2bc5-48ca-bd7f-8b61976b2bf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-changed-844228dd-a09d-4ac5-bca7-4a4f664afd31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:32:06 compute-1 nova_compute[226101]: 2025-12-06 07:32:06.067 226109 DEBUG nova.compute.manager [req-f01d207d-5f2c-460b-8843-d7acc5708e26 req-05c4e4dc-2bc5-48ca-bd7f-8b61976b2bf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Refreshing instance network info cache due to event network-changed-844228dd-a09d-4ac5-bca7-4a4f664afd31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:32:06 compute-1 nova_compute[226101]: 2025-12-06 07:32:06.067 226109 DEBUG oslo_concurrency.lockutils [req-f01d207d-5f2c-460b-8843-d7acc5708e26 req-05c4e4dc-2bc5-48ca-bd7f-8b61976b2bf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:32:06 compute-1 nova_compute[226101]: 2025-12-06 07:32:06.228 226109 DEBUG nova.network.neutron [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:32:06 compute-1 nova_compute[226101]: 2025-12-06 07:32:06.252 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:32:06 compute-1 nova_compute[226101]: 2025-12-06 07:32:06.253 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:32:06 compute-1 nova_compute[226101]: 2025-12-06 07:32:06.254 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] No VIF found with MAC fa:16:3e:fb:f6:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:32:06 compute-1 nova_compute[226101]: 2025-12-06 07:32:06.255 226109 INFO nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Using config drive
Dec 06 07:32:06 compute-1 nova_compute[226101]: 2025-12-06 07:32:06.292 226109 DEBUG nova.storage.rbd_utils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] rbd image 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:32:06 compute-1 nova_compute[226101]: 2025-12-06 07:32:06.341 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:06.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:06.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:06 compute-1 ceph-mon[81689]: pgmap v2241: 305 pgs: 305 active+clean; 150 MiB data, 880 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.2 MiB/s wr, 60 op/s
Dec 06 07:32:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3743405159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1789558873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:07 compute-1 nova_compute[226101]: 2025-12-06 07:32:07.439 226109 DEBUG nova.network.neutron [req-369039cf-f23f-49f8-8b9c-c9ad27b1fd90 req-33b27d94-20c2-44ae-ab11-652f6523244e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Updated VIF entry in instance network info cache for port 835de58d-7832-467f-9464-dbbc619e219a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:32:07 compute-1 nova_compute[226101]: 2025-12-06 07:32:07.440 226109 DEBUG nova.network.neutron [req-369039cf-f23f-49f8-8b9c-c9ad27b1fd90 req-33b27d94-20c2-44ae-ab11-652f6523244e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Updating instance_info_cache with network_info: [{"id": "835de58d-7832-467f-9464-dbbc619e219a", "address": "fa:16:3e:fb:f6:ff", "network": {"id": "4eedc8db-4948-45cd-825b-cc2f01de39f7", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1649873010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39801d5920c640488d08116f4121fc25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap835de58d-78", "ovs_interfaceid": "835de58d-7832-467f-9464-dbbc619e219a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:32:07 compute-1 nova_compute[226101]: 2025-12-06 07:32:07.460 226109 DEBUG oslo_concurrency.lockutils [req-369039cf-f23f-49f8-8b9c-c9ad27b1fd90 req-33b27d94-20c2-44ae-ab11-652f6523244e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2364e72a-c2a9-494b-a0f0-ee3bf62444af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:32:08 compute-1 nova_compute[226101]: 2025-12-06 07:32:08.410 226109 INFO nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Creating config drive at /var/lib/nova/instances/2364e72a-c2a9-494b-a0f0-ee3bf62444af/disk.config
Dec 06 07:32:08 compute-1 nova_compute[226101]: 2025-12-06 07:32:08.417 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2364e72a-c2a9-494b-a0f0-ee3bf62444af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprngkjbpi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:08.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:08.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:08 compute-1 nova_compute[226101]: 2025-12-06 07:32:08.552 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2364e72a-c2a9-494b-a0f0-ee3bf62444af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprngkjbpi" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:08 compute-1 ceph-mon[81689]: pgmap v2242: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 3.5 MiB/s wr, 95 op/s
Dec 06 07:32:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:10.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:10.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1179823086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:11 compute-1 ceph-mon[81689]: pgmap v2243: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Dec 06 07:32:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3113212600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.926 226109 DEBUG nova.storage.rbd_utils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] rbd image 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.930 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2364e72a-c2a9-494b-a0f0-ee3bf62444af/disk.config 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.963 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.965 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.965 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.966 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.969 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.989 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.990 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.990 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.990 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.990 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.991 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:32:11 compute-1 nova_compute[226101]: 2025-12-06 07:32:11.991 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.021 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.021 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.021 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.021 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.022 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:12.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:12.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.744 226109 DEBUG nova.network.neutron [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating instance_info_cache with network_info: [{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.767 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.767 226109 DEBUG nova.compute.manager [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance network_info: |[{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.768 226109 DEBUG oslo_concurrency.lockutils [req-f01d207d-5f2c-460b-8843-d7acc5708e26 req-05c4e4dc-2bc5-48ca-bd7f-8b61976b2bf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.768 226109 DEBUG nova.network.neutron [req-f01d207d-5f2c-460b-8843-d7acc5708e26 req-05c4e4dc-2bc5-48ca-bd7f-8b61976b2bf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Refreshing network info cache for port 844228dd-a09d-4ac5-bca7-4a4f664afd31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.770 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Start _get_guest_xml network_info=[{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.775 226109 WARNING nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.779 226109 DEBUG nova.virt.libvirt.host [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.780 226109 DEBUG nova.virt.libvirt.host [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.786 226109 DEBUG nova.virt.libvirt.host [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.786 226109 DEBUG nova.virt.libvirt.host [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.787 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.788 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.788 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.788 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.789 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.789 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.789 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.789 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.789 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.790 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.790 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.790 226109 DEBUG nova.virt.hardware [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:32:12 compute-1 nova_compute[226101]: 2025-12-06 07:32:12.793 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1774354065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:32:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1774354065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:32:12 compute-1 ceph-mon[81689]: pgmap v2244: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Dec 06 07:32:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:14.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:14.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:14 compute-1 nova_compute[226101]: 2025-12-06 07:32:14.698 226109 DEBUG nova.network.neutron [req-f01d207d-5f2c-460b-8843-d7acc5708e26 req-05c4e4dc-2bc5-48ca-bd7f-8b61976b2bf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updated VIF entry in instance network info cache for port 844228dd-a09d-4ac5-bca7-4a4f664afd31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:32:14 compute-1 nova_compute[226101]: 2025-12-06 07:32:14.699 226109 DEBUG nova.network.neutron [req-f01d207d-5f2c-460b-8843-d7acc5708e26 req-05c4e4dc-2bc5-48ca-bd7f-8b61976b2bf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating instance_info_cache with network_info: [{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:32:14 compute-1 nova_compute[226101]: 2025-12-06 07:32:14.718 226109 DEBUG oslo_concurrency.lockutils [req-f01d207d-5f2c-460b-8843-d7acc5708e26 req-05c4e4dc-2bc5-48ca-bd7f-8b61976b2bf4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.466 226109 DEBUG oslo_concurrency.processutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2364e72a-c2a9-494b-a0f0-ee3bf62444af/disk.config 2364e72a-c2a9-494b-a0f0-ee3bf62444af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.466 226109 INFO nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Deleting local config drive /var/lib/nova/instances/2364e72a-c2a9-494b-a0f0-ee3bf62444af/disk.config because it was imported into RBD.
Dec 06 07:32:15 compute-1 kernel: tap835de58d-78: entered promiscuous mode
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.516 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:15 compute-1 ovn_controller[130279]: 2025-12-06T07:32:15Z|00478|binding|INFO|Claiming lport 835de58d-7832-467f-9464-dbbc619e219a for this chassis.
Dec 06 07:32:15 compute-1 ovn_controller[130279]: 2025-12-06T07:32:15Z|00479|binding|INFO|835de58d-7832-467f-9464-dbbc619e219a: Claiming fa:16:3e:fb:f6:ff 10.100.0.8
Dec 06 07:32:15 compute-1 NetworkManager[49031]: <info>  [1765006335.5188] manager: (tap835de58d-78): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.521 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.524 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.539 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:f6:ff 10.100.0.8'], port_security=['fa:16:3e:fb:f6:ff 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2364e72a-c2a9-494b-a0f0-ee3bf62444af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4eedc8db-4948-45cd-825b-cc2f01de39f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39801d5920c640488d08116f4121fc25', 'neutron:revision_number': '2', 'neutron:security_group_ids': '69588819-45bd-4a0e-979e-39523a0d833b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=234fd4a2-2ef7-42bc-8d0a-4b3c8a10835e, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=835de58d-7832-467f-9464-dbbc619e219a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.540 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 835de58d-7832-467f-9464-dbbc619e219a in datapath 4eedc8db-4948-45cd-825b-cc2f01de39f7 bound to our chassis
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.541 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4eedc8db-4948-45cd-825b-cc2f01de39f7
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.554 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[84e1582d-b744-4e60-ab34-ac492d6dff1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.555 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4eedc8db-41 in ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.556 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4eedc8db-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.557 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb568d7-cb6e-485e-9646-25dfccc315ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.558 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[db5054fd-7b24-4453-a703-1343192ddcd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.569 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc98c45-b8ab-4336-8981-992996ffe8e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.589 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:15 compute-1 systemd-machined[190302]: New machine qemu-56-instance-00000077.
Dec 06 07:32:15 compute-1 ovn_controller[130279]: 2025-12-06T07:32:15Z|00480|binding|INFO|Setting lport 835de58d-7832-467f-9464-dbbc619e219a ovn-installed in OVS
Dec 06 07:32:15 compute-1 ovn_controller[130279]: 2025-12-06T07:32:15Z|00481|binding|INFO|Setting lport 835de58d-7832-467f-9464-dbbc619e219a up in Southbound
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.595 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.597 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d2493b44-8723-49c4-8536-5a855c714044]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 systemd[1]: Started Virtual Machine qemu-56-instance-00000077.
Dec 06 07:32:15 compute-1 systemd-udevd[272134]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:32:15 compute-1 sudo[272076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.634 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0f7ef4-1295-4529-b3cd-ad839a871f46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 sudo[272076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:15 compute-1 sudo[272076]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:15 compute-1 NetworkManager[49031]: <info>  [1765006335.6436] device (tap835de58d-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:32:15 compute-1 NetworkManager[49031]: <info>  [1765006335.6446] device (tap835de58d-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:32:15 compute-1 NetworkManager[49031]: <info>  [1765006335.6478] manager: (tap4eedc8db-40): new Veth device (/org/freedesktop/NetworkManager/Devices/220)
Dec 06 07:32:15 compute-1 systemd-udevd[272145]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.646 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d66d729c-350f-4da8-bdc2-5cfd40ff4428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 podman[272061]: 2025-12-06 07:32:15.654220854 +0000 UTC m=+0.087885985 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 07:32:15 compute-1 podman[272059]: 2025-12-06 07:32:15.655731404 +0000 UTC m=+0.091155292 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.688 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f3553a09-0f44-4d33-b109-33ad5724766f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.691 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[72e3a0dc-48f1-48c4-b3ce-cd9a6df6567b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 sudo[272155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:32:15 compute-1 NetworkManager[49031]: <info>  [1765006335.7196] device (tap4eedc8db-40): carrier: link connected
Dec 06 07:32:15 compute-1 sudo[272155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:15 compute-1 podman[272062]: 2025-12-06 07:32:15.723127141 +0000 UTC m=+0.146203331 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:32:15 compute-1 sudo[272155]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.724 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa489b1-b5da-433e-8078-71be6e87b3c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.743 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[278baa67-babc-4b14-ba17-cc7bec07e453]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4eedc8db-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:fd:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661522, 'reachable_time': 39658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272204, 'error': None, 'target': 'ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.759 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1cad8d49-4992-49da-b22d-35b5189ac571]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:fdfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661522, 'tstamp': 661522}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272214, 'error': None, 'target': 'ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 sudo[272205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:15 compute-1 sudo[272205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.778 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[129e5c7c-6df8-443f-b6a3-9891b3af04f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4eedc8db-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:fd:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661522, 'reachable_time': 39658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272229, 'error': None, 'target': 'ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 sudo[272205]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.810 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[867b5c12-3cd8-4518-9d66-e9bc7ff799ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 sudo[272233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:32:15 compute-1 sudo[272233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:32:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1222227724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.869 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4e4018-99e3-4abb-b4b8-062d7fb8be7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.871 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4eedc8db-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.871 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.873 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4eedc8db-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:15 compute-1 kernel: tap4eedc8db-40: entered promiscuous mode
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.875 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:15 compute-1 NetworkManager[49031]: <info>  [1765006335.8761] manager: (tap4eedc8db-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.883 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4eedc8db-40, col_values=(('external_ids', {'iface-id': '3ef132b4-d5b1-445b-90f9-d6827506a664'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.885 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:15 compute-1 ovn_controller[130279]: 2025-12-06T07:32:15Z|00482|binding|INFO|Releasing lport 3ef132b4-d5b1-445b-90f9-d6827506a664 from this chassis (sb_readonly=0)
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.888 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4eedc8db-4948-45cd-825b-cc2f01de39f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4eedc8db-4948-45cd-825b-cc2f01de39f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.889 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8a926155-1fd4-4a04-8041-ce89fffa900d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.890 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-4eedc8db-4948-45cd-825b-cc2f01de39f7
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/4eedc8db-4948-45cd-825b-cc2f01de39f7.pid.haproxy
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 4eedc8db-4948-45cd-825b-cc2f01de39f7
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:32:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:15.891 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7', 'env', 'PROCESS_TAG=haproxy-4eedc8db-4948-45cd-825b-cc2f01de39f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4eedc8db-4948-45cd-825b-cc2f01de39f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.898 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.907 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.886s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.931 226109 DEBUG nova.compute.manager [req-e2821854-93d9-4d59-b1b8-40acff942255 req-887b32d2-9894-4735-a5bf-64af2934a8cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Received event network-vif-plugged-835de58d-7832-467f-9464-dbbc619e219a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.931 226109 DEBUG oslo_concurrency.lockutils [req-e2821854-93d9-4d59-b1b8-40acff942255 req-887b32d2-9894-4735-a5bf-64af2934a8cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.932 226109 DEBUG oslo_concurrency.lockutils [req-e2821854-93d9-4d59-b1b8-40acff942255 req-887b32d2-9894-4735-a5bf-64af2934a8cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.932 226109 DEBUG oslo_concurrency.lockutils [req-e2821854-93d9-4d59-b1b8-40acff942255 req-887b32d2-9894-4735-a5bf-64af2934a8cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.932 226109 DEBUG nova.compute.manager [req-e2821854-93d9-4d59-b1b8-40acff942255 req-887b32d2-9894-4735-a5bf-64af2934a8cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Processing event network-vif-plugged-835de58d-7832-467f-9464-dbbc619e219a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.979 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:32:15 compute-1 nova_compute[226101]: 2025-12-06 07:32:15.979 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:32:16 compute-1 nova_compute[226101]: 2025-12-06 07:32:16.281 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:32:16 compute-1 nova_compute[226101]: 2025-12-06 07:32:16.284 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4458MB free_disk=20.94660186767578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:32:16 compute-1 nova_compute[226101]: 2025-12-06 07:32:16.284 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:16 compute-1 nova_compute[226101]: 2025-12-06 07:32:16.285 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:16 compute-1 sudo[272233]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:16 compute-1 nova_compute[226101]: 2025-12-06 07:32:16.344 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:16 compute-1 podman[272320]: 2025-12-06 07:32:16.250251707 +0000 UTC m=+0.024546905 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:32:16 compute-1 nova_compute[226101]: 2025-12-06 07:32:16.388 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance a01650e7-2f06-400b-82aa-0ad2c8b84e6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:32:16 compute-1 nova_compute[226101]: 2025-12-06 07:32:16.388 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 2364e72a-c2a9-494b-a0f0-ee3bf62444af actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:32:16 compute-1 nova_compute[226101]: 2025-12-06 07:32:16.388 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:32:16 compute-1 nova_compute[226101]: 2025-12-06 07:32:16.389 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:32:16 compute-1 ceph-mon[81689]: pgmap v2245: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.0 MiB/s wr, 71 op/s
Dec 06 07:32:16 compute-1 nova_compute[226101]: 2025-12-06 07:32:16.472 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.003000078s ======
Dec 06 07:32:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:16.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Dec 06 07:32:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:16.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:16 compute-1 nova_compute[226101]: 2025-12-06 07:32:16.966 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:17 compute-1 podman[272320]: 2025-12-06 07:32:17.508596053 +0000 UTC m=+1.282891221 container create be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:32:17 compute-1 systemd[1]: Started libpod-conmon-be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a.scope.
Dec 06 07:32:17 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:32:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b76d8ad055b36180df623f48ad64adfa8979bff52885c61a06b7112f1cb6d037/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:17 compute-1 podman[272320]: 2025-12-06 07:32:17.995156058 +0000 UTC m=+1.769451226 container init be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 07:32:18 compute-1 podman[272320]: 2025-12-06 07:32:18.001461107 +0000 UTC m=+1.775756275 container start be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.018 226109 DEBUG nova.compute.manager [req-9da8640c-fec0-4673-a029-a569cc3f18fd req-205825a6-6658-4d36-8983-f0614c065dfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Received event network-vif-plugged-835de58d-7832-467f-9464-dbbc619e219a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.019 226109 DEBUG oslo_concurrency.lockutils [req-9da8640c-fec0-4673-a029-a569cc3f18fd req-205825a6-6658-4d36-8983-f0614c065dfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.019 226109 DEBUG oslo_concurrency.lockutils [req-9da8640c-fec0-4673-a029-a569cc3f18fd req-205825a6-6658-4d36-8983-f0614c065dfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.020 226109 DEBUG oslo_concurrency.lockutils [req-9da8640c-fec0-4673-a029-a569cc3f18fd req-205825a6-6658-4d36-8983-f0614c065dfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.020 226109 DEBUG nova.compute.manager [req-9da8640c-fec0-4673-a029-a569cc3f18fd req-205825a6-6658-4d36-8983-f0614c065dfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] No waiting events found dispatching network-vif-plugged-835de58d-7832-467f-9464-dbbc619e219a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.020 226109 WARNING nova.compute.manager [req-9da8640c-fec0-4673-a029-a569cc3f18fd req-205825a6-6658-4d36-8983-f0614c065dfc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Received unexpected event network-vif-plugged-835de58d-7832-467f-9464-dbbc619e219a for instance with vm_state building and task_state spawning.
Dec 06 07:32:18 compute-1 neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7[272368]: [NOTICE]   (272390) : New worker (272392) forked
Dec 06 07:32:18 compute-1 neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7[272368]: [NOTICE]   (272390) : Loading success.
Dec 06 07:32:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3515848525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:18 compute-1 ceph-mon[81689]: pgmap v2246: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.0 MiB/s wr, 73 op/s
Dec 06 07:32:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3910440295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1222227724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:32:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:32:18 compute-1 ceph-mon[81689]: pgmap v2247: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Dec 06 07:32:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:18.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:18.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:32:18 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3435441625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.602 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.809s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.625 226109 DEBUG nova.storage.rbd_utils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.630 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.732 226109 DEBUG nova.compute.manager [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.733 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006338.7311711, 2364e72a-c2a9-494b-a0f0-ee3bf62444af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.734 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] VM Started (Lifecycle Event)
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.737 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.750 226109 INFO nova.virt.libvirt.driver [-] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Instance spawned successfully.
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.751 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.756 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.770 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.775 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.775 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.776 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.776 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.777 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.777 226109 DEBUG nova.virt.libvirt.driver [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.804 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.805 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006338.7328792, 2364e72a-c2a9-494b-a0f0-ee3bf62444af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.805 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] VM Paused (Lifecycle Event)
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.860 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.864 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006338.7367496, 2364e72a-c2a9-494b-a0f0-ee3bf62444af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.865 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] VM Resumed (Lifecycle Event)
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.878 226109 INFO nova.compute.manager [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Took 19.53 seconds to spawn the instance on the hypervisor.
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.879 226109 DEBUG nova.compute.manager [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.888 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.892 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:32:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:32:18 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/941915969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.923 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.931 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.933 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.954 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.969 226109 INFO nova.compute.manager [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Took 21.81 seconds to build instance.
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.985 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.986 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:18 compute-1 nova_compute[226101]: 2025-12-06 07:32:18.991 226109 DEBUG oslo_concurrency.lockutils [None req-b90dc954-6294-45e8-ba41-e7b574a09d7d 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:32:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/720906342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.553 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.923s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.555 226109 DEBUG nova.virt.libvirt.vif [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460317127',display_name='tempest-ServerActionsTestOtherB-server-1460317127',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460317127',id=118,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-50zw6ys5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:31:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=a01650e7-2f06-400b-82aa-0ad2c8b84e6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.555 226109 DEBUG nova.network.os_vif_util [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.556 226109 DEBUG nova.network.os_vif_util [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.557 226109 DEBUG nova.objects.instance [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_devices' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.581 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:32:19 compute-1 nova_compute[226101]:   <uuid>a01650e7-2f06-400b-82aa-0ad2c8b84e6c</uuid>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   <name>instance-00000076</name>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerActionsTestOtherB-server-1460317127</nova:name>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:32:12</nova:creationTime>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <nova:user uuid="a70f6c3c5e2c402bb6fa0e0507e9b6dc">tempest-ServerActionsTestOtherB-874907570-project-member</nova:user>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <nova:project uuid="b10aa03d68eb4d4799d53538521cc364">tempest-ServerActionsTestOtherB-874907570</nova:project>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <nova:port uuid="844228dd-a09d-4ac5-bca7-4a4f664afd31">
Dec 06 07:32:19 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <system>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <entry name="serial">a01650e7-2f06-400b-82aa-0ad2c8b84e6c</entry>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <entry name="uuid">a01650e7-2f06-400b-82aa-0ad2c8b84e6c</entry>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     </system>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   <os>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   </os>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   <features>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   </features>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk">
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       </source>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk.config">
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       </source>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:32:19 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:5e:74:4a"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <target dev="tap844228dd-a0"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c/console.log" append="off"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <video>
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     </video>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:32:19 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:32:19 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:32:19 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:32:19 compute-1 nova_compute[226101]: </domain>
Dec 06 07:32:19 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.587 226109 DEBUG nova.compute.manager [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Preparing to wait for external event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.587 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.587 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.587 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.588 226109 DEBUG nova.virt.libvirt.vif [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460317127',display_name='tempest-ServerActionsTestOtherB-server-1460317127',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460317127',id=118,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-50zw6ys5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:31:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=a01650e7-2f06-400b-82aa-0ad2c8b84e6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.588 226109 DEBUG nova.network.os_vif_util [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.589 226109 DEBUG nova.network.os_vif_util [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.589 226109 DEBUG os_vif [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.590 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.591 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.591 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.592 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.592 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.593 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.593 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.595 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.595 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844228dd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.596 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap844228dd-a0, col_values=(('external_ids', {'iface-id': '844228dd-a09d-4ac5-bca7-4a4f664afd31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:74:4a', 'vm-uuid': 'a01650e7-2f06-400b-82aa-0ad2c8b84e6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.631 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:19 compute-1 NetworkManager[49031]: <info>  [1765006339.6334] manager: (tap844228dd-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.634 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.638 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:19 compute-1 nova_compute[226101]: 2025-12-06 07:32:19.640 226109 INFO os_vif [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0')
Dec 06 07:32:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:20.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:20 compute-1 nova_compute[226101]: 2025-12-06 07:32:20.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.348 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.912 226109 DEBUG oslo_concurrency.lockutils [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Acquiring lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.913 226109 DEBUG oslo_concurrency.lockutils [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.913 226109 DEBUG oslo_concurrency.lockutils [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Acquiring lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.914 226109 DEBUG oslo_concurrency.lockutils [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.914 226109 DEBUG oslo_concurrency.lockutils [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.916 226109 INFO nova.compute.manager [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Terminating instance
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.917 226109 DEBUG nova.compute.manager [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:32:21 compute-1 kernel: tap835de58d-78 (unregistering): left promiscuous mode
Dec 06 07:32:21 compute-1 NetworkManager[49031]: <info>  [1765006341.9596] device (tap835de58d-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.961 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:21 compute-1 ovn_controller[130279]: 2025-12-06T07:32:21Z|00483|binding|INFO|Releasing lport 835de58d-7832-467f-9464-dbbc619e219a from this chassis (sb_readonly=0)
Dec 06 07:32:21 compute-1 ovn_controller[130279]: 2025-12-06T07:32:21Z|00484|binding|INFO|Setting lport 835de58d-7832-467f-9464-dbbc619e219a down in Southbound
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:21 compute-1 ovn_controller[130279]: 2025-12-06T07:32:21Z|00485|binding|INFO|Removing iface tap835de58d-78 ovn-installed in OVS
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.976 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:21.985 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:f6:ff 10.100.0.8'], port_security=['fa:16:3e:fb:f6:ff 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2364e72a-c2a9-494b-a0f0-ee3bf62444af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4eedc8db-4948-45cd-825b-cc2f01de39f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39801d5920c640488d08116f4121fc25', 'neutron:revision_number': '4', 'neutron:security_group_ids': '69588819-45bd-4a0e-979e-39523a0d833b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=234fd4a2-2ef7-42bc-8d0a-4b3c8a10835e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=835de58d-7832-467f-9464-dbbc619e219a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:32:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:21.987 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 835de58d-7832-467f-9464-dbbc619e219a in datapath 4eedc8db-4948-45cd-825b-cc2f01de39f7 unbound from our chassis
Dec 06 07:32:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:21.988 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4eedc8db-4948-45cd-825b-cc2f01de39f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:32:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:21.989 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cbebf42c-b167-4900-bacb-a8876d1e38c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:21.990 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7 namespace which is not needed anymore
Dec 06 07:32:21 compute-1 nova_compute[226101]: 2025-12-06 07:32:21.994 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:22 compute-1 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000077.scope: Deactivated successfully.
Dec 06 07:32:22 compute-1 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000077.scope: Consumed 4.013s CPU time.
Dec 06 07:32:22 compute-1 systemd-machined[190302]: Machine qemu-56-instance-00000077 terminated.
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.153 226109 INFO nova.virt.libvirt.driver [-] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Instance destroyed successfully.
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.154 226109 DEBUG nova.objects.instance [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lazy-loading 'resources' on Instance uuid 2364e72a-c2a9-494b-a0f0-ee3bf62444af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.169 226109 DEBUG nova.virt.libvirt.vif [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-323574989',display_name='tempest-ServerPasswordTestJSON-server-323574989',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-323574989',id=119,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:32:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39801d5920c640488d08116f4121fc25',ramdisk_id='',reservation_id='r-860ivol8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-804103674',owner_user_name='tempest-ServerPasswordTestJSON-804103674-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:32:21Z,user_data=None,user_id='3f95d5fc3ae84153badf13157ceaf23e',uuid=2364e72a-c2a9-494b-a0f0-ee3bf62444af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "835de58d-7832-467f-9464-dbbc619e219a", "address": "fa:16:3e:fb:f6:ff", "network": {"id": "4eedc8db-4948-45cd-825b-cc2f01de39f7", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1649873010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39801d5920c640488d08116f4121fc25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap835de58d-78", "ovs_interfaceid": "835de58d-7832-467f-9464-dbbc619e219a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.170 226109 DEBUG nova.network.os_vif_util [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Converting VIF {"id": "835de58d-7832-467f-9464-dbbc619e219a", "address": "fa:16:3e:fb:f6:ff", "network": {"id": "4eedc8db-4948-45cd-825b-cc2f01de39f7", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1649873010-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39801d5920c640488d08116f4121fc25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap835de58d-78", "ovs_interfaceid": "835de58d-7832-467f-9464-dbbc619e219a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.171 226109 DEBUG nova.network.os_vif_util [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:f6:ff,bridge_name='br-int',has_traffic_filtering=True,id=835de58d-7832-467f-9464-dbbc619e219a,network=Network(4eedc8db-4948-45cd-825b-cc2f01de39f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap835de58d-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.171 226109 DEBUG os_vif [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:f6:ff,bridge_name='br-int',has_traffic_filtering=True,id=835de58d-7832-467f-9464-dbbc619e219a,network=Network(4eedc8db-4948-45cd-825b-cc2f01de39f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap835de58d-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.173 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.173 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap835de58d-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.175 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.177 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.178 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:22 compute-1 nova_compute[226101]: 2025-12-06 07:32:22.181 226109 INFO os_vif [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:f6:ff,bridge_name='br-int',has_traffic_filtering=True,id=835de58d-7832-467f-9464-dbbc619e219a,network=Network(4eedc8db-4948-45cd-825b-cc2f01de39f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap835de58d-78')
Dec 06 07:32:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:22.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:22.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:22.782 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:32:24 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:32:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:24.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:24.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:26.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:26.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:28 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:32:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:28.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:28.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:30.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:30.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:32 compute-1 ovsdb-server[47252]: ovs|00005|reconnect|ERR|tcp:127.0.0.1:35566: no response to inactivity probe after 5 seconds, disconnecting
Dec 06 07:32:32 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 13.344520569s
Dec 06 07:32:32 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 13.344520569s
Dec 06 07:32:32 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.344745636s, txc = 0x55b5535c5200
Dec 06 07:32:32 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:32:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:32.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:32.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 4017..4552) lease_timeout -- calling new election
Dec 06 07:32:32 compute-1 ceph-mon[81689]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Dec 06 07:32:32 compute-1 ceph-mon[81689]: paxos.2).electionLogic(52) init, last seen epoch 52
Dec 06 07:32:32 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:32 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:32 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:32 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:32:32 compute-1 neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7[272368]: [NOTICE]   (272390) : haproxy version is 2.8.14-c23fe91
Dec 06 07:32:32 compute-1 neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7[272368]: [NOTICE]   (272390) : path to executable is /usr/sbin/haproxy
Dec 06 07:32:32 compute-1 neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7[272368]: [WARNING]  (272390) : Exiting Master process...
Dec 06 07:32:32 compute-1 neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7[272368]: [ALERT]    (272390) : Current worker (272392) exited with code 143 (Terminated)
Dec 06 07:32:32 compute-1 neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7[272368]: [WARNING]  (272390) : All workers exited. Exiting... (0)
Dec 06 07:32:32 compute-1 systemd[1]: libpod-be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a.scope: Deactivated successfully.
Dec 06 07:32:32 compute-1 podman[272492]: 2025-12-06 07:32:32.828792384 +0000 UTC m=+10.748999363 container died be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:32:32 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:32 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:32 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:33 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:33 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:33 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:33 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a-userdata-shm.mount: Deactivated successfully.
Dec 06 07:32:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-b76d8ad055b36180df623f48ad64adfa8979bff52885c61a06b7112f1cb6d037-merged.mount: Deactivated successfully.
Dec 06 07:32:34 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:34 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:34 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:34.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:34.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:35 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:35 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:35 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:35 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:35 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:35 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:36 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:36 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:36 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:36 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:32:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:32:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:36 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:37 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:37 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:37 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:38 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:32:38 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:32:38 compute-1 podman[272492]: 2025-12-06 07:32:38.097149904 +0000 UTC m=+16.017356893 container cleanup be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:32:38 compute-1 systemd[1]: libpod-conmon-be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a.scope: Deactivated successfully.
Dec 06 07:32:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3435441625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/941915969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:32:38 compute-1 podman[272550]: 2025-12-06 07:32:38.25753372 +0000 UTC m=+0.132797882 container remove be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:32:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:38.267 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[86250c9c-66d4-48ac-99f0-b62bf7d6cb17]: (4, ('Sat Dec  6 07:32:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7 (be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a)\nbe93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a\nSat Dec  6 07:32:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7 (be93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a)\nbe93efac4c37a4f84ce7008e37726f881dd8792288ae17e05dfeb7fd2197b96a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:38.269 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[91f4fc04-b755-418c-8d8a-4bc35bc728e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:38.270 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4eedc8db-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:38 compute-1 kernel: tap4eedc8db-40: left promiscuous mode
Dec 06 07:32:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:38.295 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd938c2-3abf-4ab8-81a5-f9045a863ce1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:38.316 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b065abec-c5e5-4f6d-bc90-164dbd267ccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:38.317 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5d461119-9140-4dc8-b122-f6d68c3592bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:38.335 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa389c0-cecc-41c8-b7b5-2cbc99ac4a95]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661513, 'reachable_time': 25516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272565, 'error': None, 'target': 'ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:38 compute-1 systemd[1]: run-netns-ovnmeta\x2d4eedc8db\x2d4948\x2d45cd\x2d825b\x2dcc2f01de39f7.mount: Deactivated successfully.
Dec 06 07:32:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:38.339 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4eedc8db-4948-45cd-825b-cc2f01de39f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:32:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:38.340 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[85356c9c-9d8f-4235-9a00-05d2c6798979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:38.342 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:32:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:38.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:38.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.639 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.645 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.645 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.646 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.646 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.646 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.647 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.648 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.649 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006342.1508656, 2364e72a-c2a9-494b-a0f0-ee3bf62444af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.649 226109 INFO nova.compute.manager [-] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] VM Stopped (Lifecycle Event)
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.653 226109 DEBUG nova.compute.manager [req-fe6feeb7-fe00-447c-a1df-34a5d74fff65 req-260895d0-8bf2-4606-b767-f95e413dacc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Received event network-vif-unplugged-835de58d-7832-467f-9464-dbbc619e219a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.653 226109 DEBUG oslo_concurrency.lockutils [req-fe6feeb7-fe00-447c-a1df-34a5d74fff65 req-260895d0-8bf2-4606-b767-f95e413dacc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.654 226109 DEBUG oslo_concurrency.lockutils [req-fe6feeb7-fe00-447c-a1df-34a5d74fff65 req-260895d0-8bf2-4606-b767-f95e413dacc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.654 226109 DEBUG oslo_concurrency.lockutils [req-fe6feeb7-fe00-447c-a1df-34a5d74fff65 req-260895d0-8bf2-4606-b767-f95e413dacc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.654 226109 DEBUG nova.compute.manager [req-fe6feeb7-fe00-447c-a1df-34a5d74fff65 req-260895d0-8bf2-4606-b767-f95e413dacc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] No waiting events found dispatching network-vif-unplugged-835de58d-7832-467f-9464-dbbc619e219a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.655 226109 DEBUG nova.compute.manager [req-fe6feeb7-fe00-447c-a1df-34a5d74fff65 req-260895d0-8bf2-4606-b767-f95e413dacc7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Received event network-vif-unplugged-835de58d-7832-467f-9464-dbbc619e219a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.657 226109 DEBUG nova.compute.manager [req-913d9c80-a82f-40d6-9906-1805f23a6d4a req-6f5f0c71-49a7-44d0-b124-ce013b0b5672 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Received event network-vif-plugged-835de58d-7832-467f-9464-dbbc619e219a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.657 226109 DEBUG oslo_concurrency.lockutils [req-913d9c80-a82f-40d6-9906-1805f23a6d4a req-6f5f0c71-49a7-44d0-b124-ce013b0b5672 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.657 226109 DEBUG oslo_concurrency.lockutils [req-913d9c80-a82f-40d6-9906-1805f23a6d4a req-6f5f0c71-49a7-44d0-b124-ce013b0b5672 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.658 226109 DEBUG oslo_concurrency.lockutils [req-913d9c80-a82f-40d6-9906-1805f23a6d4a req-6f5f0c71-49a7-44d0-b124-ce013b0b5672 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.658 226109 DEBUG nova.compute.manager [req-913d9c80-a82f-40d6-9906-1805f23a6d4a req-6f5f0c71-49a7-44d0-b124-ce013b0b5672 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] No waiting events found dispatching network-vif-plugged-835de58d-7832-467f-9464-dbbc619e219a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.658 226109 WARNING nova.compute.manager [req-913d9c80-a82f-40d6-9906-1805f23a6d4a req-6f5f0c71-49a7-44d0-b124-ce013b0b5672 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Received unexpected event network-vif-plugged-835de58d-7832-467f-9464-dbbc619e219a for instance with vm_state active and task_state deleting.
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.660 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.673 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.673 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering BACKOFF _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.729 226109 DEBUG nova.compute.manager [None req-ae8ddb44-8b6e-4b4f-8418-6a61b26fe65b - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.735 226109 DEBUG nova.compute.manager [None req-ae8ddb44-8b6e-4b4f-8418-6a61b26fe65b - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.760 226109 INFO nova.compute.manager [None req-ae8ddb44-8b6e-4b4f-8418-6a61b26fe65b - - - - - -] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] During sync_power_state the instance has a pending task (deleting). Skip.
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.761 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.761 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.761 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No VIF found with MAC fa:16:3e:5e:74:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.761 226109 INFO nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Using config drive
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.780 226109 DEBUG nova.storage.rbd_utils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.787 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.788 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Image id 6efab05d-c7cf-4770-a5c3-c806a2739063 yields fingerprint 890368a5690a3dbdbb6650dcb9de9e2c9dc5acef _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.788 226109 INFO nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] image 6efab05d-c7cf-4770-a5c3-c806a2739063 at (/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef): checking
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.788 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] image 6efab05d-c7cf-4770-a5c3-c806a2739063 at (/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.790 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.790 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] a01650e7-2f06-400b-82aa-0ad2c8b84e6c is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.790 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] 2364e72a-c2a9-494b-a0f0-ee3bf62444af is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.791 226109 WARNING nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.791 226109 INFO nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Active base files: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.791 226109 INFO nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Removable base files: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.791 226109 INFO nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.791 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.792 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec 06 07:32:38 compute-1 nova_compute[226101]: 2025-12-06 07:32:38.792 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec 06 07:32:39 compute-1 ceph-mon[81689]: pgmap v2248: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 19 op/s
Dec 06 07:32:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/720906342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:39 compute-1 ceph-mon[81689]: pgmap v2249: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 49 op/s
Dec 06 07:32:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/72302301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:39 compute-1 ceph-mon[81689]: pgmap v2250: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 47 op/s
Dec 06 07:32:39 compute-1 ceph-mon[81689]: pgmap v2251: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 64 op/s
Dec 06 07:32:39 compute-1 ceph-mon[81689]: pgmap v2252: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Dec 06 07:32:39 compute-1 ceph-mon[81689]: pgmap v2253: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Dec 06 07:32:39 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 07:32:39 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 07:32:39 compute-1 ceph-mon[81689]: pgmap v2254: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Dec 06 07:32:39 compute-1 ceph-mon[81689]: mon.compute-1 calling monitor election
Dec 06 07:32:39 compute-1 ceph-mon[81689]: pgmap v2255: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 29 op/s
Dec 06 07:32:39 compute-1 ceph-mon[81689]: pgmap v2256: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 29 op/s
Dec 06 07:32:39 compute-1 ceph-mon[81689]: pgmap v2257: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 403 KiB/s rd, 13 op/s
Dec 06 07:32:39 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 07:32:39 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:32:39 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:32:39 compute-1 ceph-mon[81689]: osdmap e273: 3 total, 3 up, 3 in
Dec 06 07:32:39 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 66m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:32:39 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 07:32:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:32:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:32:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:39.345 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.674 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.675 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.676 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLOUT] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.676 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.676 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.678 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.680 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.703 226109 INFO nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Creating config drive at /var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c/disk.config
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.709 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6z8ol7bv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.840 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6z8ol7bv" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.865 226109 DEBUG nova.storage.rbd_utils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:32:39 compute-1 nova_compute[226101]: 2025-12-06 07:32:39.868 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c/disk.config a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:40.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:32:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:40 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:40.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:41 compute-1 nova_compute[226101]: 2025-12-06 07:32:41.356 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:41 compute-1 ceph-mon[81689]: pgmap v2258: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail
Dec 06 07:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:32:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:32:41 compute-1 nova_compute[226101]: 2025-12-06 07:32:41.976 226109 DEBUG oslo_concurrency.processutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c/disk.config a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:41 compute-1 nova_compute[226101]: 2025-12-06 07:32:41.977 226109 INFO nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Deleting local config drive /var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c/disk.config because it was imported into RBD.
Dec 06 07:32:42 compute-1 kernel: tap844228dd-a0: entered promiscuous mode
Dec 06 07:32:42 compute-1 NetworkManager[49031]: <info>  [1765006362.0400] manager: (tap844228dd-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Dec 06 07:32:42 compute-1 ovn_controller[130279]: 2025-12-06T07:32:42Z|00486|binding|INFO|Claiming lport 844228dd-a09d-4ac5-bca7-4a4f664afd31 for this chassis.
Dec 06 07:32:42 compute-1 ovn_controller[130279]: 2025-12-06T07:32:42Z|00487|binding|INFO|844228dd-a09d-4ac5-bca7-4a4f664afd31: Claiming fa:16:3e:5e:74:4a 10.100.0.12
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.081 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.086 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.100 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:74:4a 10.100.0.12'], port_security=['fa:16:3e:5e:74:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a01650e7-2f06-400b-82aa-0ad2c8b84e6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=844228dd-a09d-4ac5-bca7-4a4f664afd31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.102 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 844228dd-a09d-4ac5-bca7-4a4f664afd31 in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 bound to our chassis
Dec 06 07:32:42 compute-1 systemd-udevd[272644]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.104 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:32:42 compute-1 NetworkManager[49031]: <info>  [1765006362.1201] device (tap844228dd-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:32:42 compute-1 NetworkManager[49031]: <info>  [1765006362.1208] device (tap844228dd-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.119 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b4314e0a-7b1d-4881-b3bd-ccb847a5dc0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.120 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3beede49-11 in ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.122 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3beede49-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.122 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[31cca62c-7b4a-4e9f-a419-23ef7e73fa6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.123 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5deb3b6d-5918-484c-90d2-f5d5f716ba5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 systemd-machined[190302]: New machine qemu-57-instance-00000076.
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.133 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[5319de74-5b3a-43a3-b371-5bfca4c1545b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.149 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:42 compute-1 systemd[1]: Started Virtual Machine qemu-57-instance-00000076.
Dec 06 07:32:42 compute-1 ovn_controller[130279]: 2025-12-06T07:32:42Z|00488|binding|INFO|Setting lport 844228dd-a09d-4ac5-bca7-4a4f664afd31 ovn-installed in OVS
Dec 06 07:32:42 compute-1 ovn_controller[130279]: 2025-12-06T07:32:42Z|00489|binding|INFO|Setting lport 844228dd-a09d-4ac5-bca7-4a4f664afd31 up in Southbound
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.157 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.156 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc13023-c16a-44ef-85c8-bc2c75f749cd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.186 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[aaac84a1-621c-4803-88b6-e599b5e2db3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.191 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[73c2cbda-fd5b-49f2-ab14-a37bd986db9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 NetworkManager[49031]: <info>  [1765006362.1930] manager: (tap3beede49-10): new Veth device (/org/freedesktop/NetworkManager/Devices/224)
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.220 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a9eb65-f779-41c2-b92f-75f70fd6456f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.222 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[99d28fc6-f749-45e0-b0ca-1980ae327082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 NetworkManager[49031]: <info>  [1765006362.2432] device (tap3beede49-10): carrier: link connected
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.252 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[28ae7ac4-0242-49c3-939f-af3be12bf02f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.272 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[227dfa3d-5d6b-4e13-b1d6-394657e8b610]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664174, 'reachable_time': 38539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272679, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.287 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ff602a-ed15-4d9b-95f9-7b8021b67b02]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:c755'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664174, 'tstamp': 664174}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272680, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.308 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fea26ba5-5627-43d8-952b-03379f604335]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664174, 'reachable_time': 38539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272681, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.348 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e18408e7-e732-4dc6-ba83-88caf01f618f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.417 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3d868a80-3ffe-4cff-b1ed-b03bc92878c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.418 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.418 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.419 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.420 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:42 compute-1 NetworkManager[49031]: <info>  [1765006362.4213] manager: (tap3beede49-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Dec 06 07:32:42 compute-1 kernel: tap3beede49-10: entered promiscuous mode
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.423 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:42 compute-1 ovn_controller[130279]: 2025-12-06T07:32:42Z|00490|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.424 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.437 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.439 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.439 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bf6c37-1ddc-4ac0-8587-1643d34bdb89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.440 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:32:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:32:42.441 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'env', 'PROCESS_TAG=haproxy-3beede49-1cbb-425c-b1af-82f43dc57163', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3beede49-1cbb-425c-b1af-82f43dc57163.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
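[annotation] The two entries above capture the whole metadata-proxy bring-up for this network: create_config_file renders the haproxy configuration, and the agent then launches haproxy inside the ovnmeta- namespace through rootwrap. A minimal Python sketch of that sequence, assuming nothing beyond the paths and arguments the log itself records (the sudo/neutron-rootwrap wrapper is elided):

    import subprocess

    NETWORK_ID = "3beede49-1cbb-425c-b1af-82f43dc57163"
    CFG = f"/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf"

    def spawn_metadata_proxy(cfg_text: str) -> None:
        # Persist the rendered haproxy config (the block logged above).
        with open(CFG, "w") as f:
            f.write(cfg_text)
        # Launch haproxy inside the network namespace, mirroring the
        # "Running command" arguments from the log line above.
        subprocess.check_call([
            "ip", "netns", "exec", f"ovnmeta-{NETWORK_ID}",
            "env", f"PROCESS_TAG=haproxy-{NETWORK_ID}",
            "haproxy", "-f", CFG,
        ])

haproxy then daemonizes itself (the config's daemon directive) and writes the pidfile that the earlier get_value_from_file probe looked for and did not yet find.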
Dec 06 07:32:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:42.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:32:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:42 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:42.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:42 compute-1 ceph-mon[81689]: pgmap v2259: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.835 226109 DEBUG nova.compute.manager [req-41617923-3a95-4623-ac82-18947a8429eb req-a6681d1b-5b3f-481d-bf05-84acb6c7c6b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.836 226109 DEBUG oslo_concurrency.lockutils [req-41617923-3a95-4623-ac82-18947a8429eb req-a6681d1b-5b3f-481d-bf05-84acb6c7c6b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.836 226109 DEBUG oslo_concurrency.lockutils [req-41617923-3a95-4623-ac82-18947a8429eb req-a6681d1b-5b3f-481d-bf05-84acb6c7c6b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.837 226109 DEBUG oslo_concurrency.lockutils [req-41617923-3a95-4623-ac82-18947a8429eb req-a6681d1b-5b3f-481d-bf05-84acb6c7c6b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:42 compute-1 nova_compute[226101]: 2025-12-06 07:32:42.837 226109 DEBUG nova.compute.manager [req-41617923-3a95-4623-ac82-18947a8429eb req-a6681d1b-5b3f-481d-bf05-84acb6c7c6b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Processing event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:32:42 compute-1 podman[272749]: 2025-12-06 07:32:42.785014734 +0000 UTC m=+0.021047752 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:32:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.083 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006363.0827184, a01650e7-2f06-400b-82aa-0ad2c8b84e6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.083 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] VM Started (Lifecycle Event)
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.086 226109 DEBUG nova.compute.manager [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.088 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.091 226109 INFO nova.virt.libvirt.driver [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance spawned successfully.
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.091 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.140 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.140 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.141 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.141 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.142 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.142 226109 DEBUG nova.virt.libvirt.driver [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.458 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.462 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.558 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.559 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006363.0828576, a01650e7-2f06-400b-82aa-0ad2c8b84e6c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.559 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] VM Paused (Lifecycle Event)
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.582 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.585 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006363.087887, a01650e7-2f06-400b-82aa-0ad2c8b84e6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.586 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] VM Resumed (Lifecycle Event)
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.594 226109 INFO nova.compute.manager [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Took 44.98 seconds to spawn the instance on the hypervisor.
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.594 226109 DEBUG nova.compute.manager [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.625 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.629 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.659 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.685 226109 INFO nova.compute.manager [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Took 48.86 seconds to build instance.
Dec 06 07:32:43 compute-1 nova_compute[226101]: 2025-12-06 07:32:43.711 226109 DEBUG oslo_concurrency.lockutils [None req-2ce31eb1-72bf-47e8-b245-83b2022785dd a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 48.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
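[annotation] The spawn sequence for instance a01650e7 closes out here: 44.98 s on the hypervisor, 48.86 s end to end, and the build lock held for 48.955 s. A small log-parsing sketch that recovers those durations from the lines above (a grep-style helper, not nova code):

    import re

    TOOK = re.compile(r"Took (\d+\.\d+) seconds to "
                      r"(spawn the instance on the hypervisor|build instance)")

    def build_durations(lines):
        # Map each "Took N seconds to ..." phrase to its duration.
        return {m.group(2): float(m.group(1))
                for m in map(TOOK.search, lines) if m}

    # Applied to this section: spawn = 44.98 s, build = 48.86 s, so roughly
    # 3.9 s went to everything before the libvirt spawn (scheduling, network
    # and volume setup), and the rest to the hypervisor itself.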
Dec 06 07:32:44 compute-1 podman[272749]: 2025-12-06 07:32:44.553879335 +0000 UTC m=+1.789912343 container create 23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:32:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:32:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:44.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:44 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:44.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
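[annotation] The paired anonymous HEAD / requests from 192.168.122.100 and 192.168.122.102 recur throughout this section at 07:32:42.595, :44.598, :46.601, and so on, a steady ~2 s cadence consistent with external health probes against radosgw. A sketch that extracts the probe timestamps from the beast access lines (only the parsing is shown; the probe interpretation follows from the timing alone):

    import re
    from datetime import datetime

    STAMP = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2}\.\d+ [+-]\d{4})\]")

    def probe_times(lines):
        # Pull the bracketed access-log timestamp out of each "beast:" line;
        # the leading radosgw[82404] bracket never matches this pattern.
        for line in lines:
            m = STAMP.search(line)
            if m:
                yield datetime.strptime(m.group(1), "%d/%b/%Y:%H:%M:%S.%f %z")

Differencing consecutive timestamps per client IP gives ~2.003 s between probes.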
Dec 06 07:32:44 compute-1 nova_compute[226101]: 2025-12-06 07:32:44.681 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:45 compute-1 nova_compute[226101]: 2025-12-06 07:32:45.004 226109 DEBUG nova.compute.manager [req-968c14d8-34ce-4f2a-a77f-d617dcb03fb6 req-f30a4f1f-a356-44a4-8951-6c7a739ece3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:32:45 compute-1 nova_compute[226101]: 2025-12-06 07:32:45.004 226109 DEBUG oslo_concurrency.lockutils [req-968c14d8-34ce-4f2a-a77f-d617dcb03fb6 req-f30a4f1f-a356-44a4-8951-6c7a739ece3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:45 compute-1 nova_compute[226101]: 2025-12-06 07:32:45.005 226109 DEBUG oslo_concurrency.lockutils [req-968c14d8-34ce-4f2a-a77f-d617dcb03fb6 req-f30a4f1f-a356-44a4-8951-6c7a739ece3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:45 compute-1 nova_compute[226101]: 2025-12-06 07:32:45.005 226109 DEBUG oslo_concurrency.lockutils [req-968c14d8-34ce-4f2a-a77f-d617dcb03fb6 req-f30a4f1f-a356-44a4-8951-6c7a739ece3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:45 compute-1 nova_compute[226101]: 2025-12-06 07:32:45.005 226109 DEBUG nova.compute.manager [req-968c14d8-34ce-4f2a-a77f-d617dcb03fb6 req-f30a4f1f-a356-44a4-8951-6c7a739ece3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] No waiting events found dispatching network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:32:45 compute-1 nova_compute[226101]: 2025-12-06 07:32:45.005 226109 WARNING nova.compute.manager [req-968c14d8-34ce-4f2a-a77f-d617dcb03fb6 req-f30a4f1f-a356-44a4-8951-6c7a739ece3f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received unexpected event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 for instance with vm_state active and task_state None.
Dec 06 07:32:45 compute-1 systemd[1]: Started libpod-conmon-23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4.scope.
Dec 06 07:32:45 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:32:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e4a5c82973c5aaf7b5c78428d1817c47a7d6786f9d969ce8c50b7d79541048/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:45 compute-1 ceph-mon[81689]: pgmap v2260: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:32:45 compute-1 podman[272749]: 2025-12-06 07:32:45.498039152 +0000 UTC m=+2.734072180 container init 23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 07:32:45 compute-1 podman[272749]: 2025-12-06 07:32:45.505746238 +0000 UTC m=+2.741779236 container start 23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 07:32:45 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[272770]: [NOTICE]   (272774) : New worker (272776) forked
Dec 06 07:32:45 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[272770]: [NOTICE]   (272774) : Loading success.
Dec 06 07:32:46 compute-1 podman[272786]: 2025-12-06 07:32:46.271276593 +0000 UTC m=+0.248353445 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec 06 07:32:46 compute-1 podman[272787]: 2025-12-06 07:32:46.271990251 +0000 UTC m=+0.245972840 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 06 07:32:46 compute-1 podman[272788]: 2025-12-06 07:32:46.307944231 +0000 UTC m=+0.276069123 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
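[annotation] The three health_status lines above each embed the container's full config_data as a Python-style dict literal inside the label list. When the volumes or healthcheck settings need inspecting, the payload can be cut out by brace matching and parsed with ast.literal_eval; a minimal sketch, assuming only that the literal is well formed (as it is in these lines, where no string value contains a brace):

    import ast

    def extract_config_data(line: str) -> dict:
        # Locate the config_data={...} payload and match braces to find
        # its end, since the dict nests further dicts and lists.
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i in range(start, len(line)):
            if line[i] == "{":
                depth += 1
            elif line[i] == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced config_data literal")

    # e.g. extract_config_data(multipathd_line)["volumes"] lists the
    # eighteen bind mounts recorded in the first health_status entry.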
Dec 06 07:32:46 compute-1 ceph-mon[81689]: pgmap v2261: 305 pgs: 305 active+clean; 206 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 22 op/s
Dec 06 07:32:46 compute-1 nova_compute[226101]: 2025-12-06 07:32:46.357 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:32:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:46 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:46.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:46.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:46 compute-1 sudo[272848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:46 compute-1 sudo[272848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:46 compute-1 sudo[272848]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:46 compute-1 sudo[272873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:32:46 compute-1 sudo[272873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:46 compute-1 sudo[272873]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:48 compute-1 ceph-mon[81689]: pgmap v2262: 305 pgs: 305 active+clean; 167 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 15 KiB/s wr, 92 op/s
Dec 06 07:32:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:32:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:48 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:49 compute-1 nova_compute[226101]: 2025-12-06 07:32:49.134 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:49 compute-1 NetworkManager[49031]: <info>  [1765006369.1349] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Dec 06 07:32:49 compute-1 NetworkManager[49031]: <info>  [1765006369.1359] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Dec 06 07:32:49 compute-1 nova_compute[226101]: 2025-12-06 07:32:49.333 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:49 compute-1 ovn_controller[130279]: 2025-12-06T07:32:49Z|00491|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:32:49 compute-1 nova_compute[226101]: 2025-12-06 07:32:49.347 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:49 compute-1 nova_compute[226101]: 2025-12-06 07:32:49.683 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:49 compute-1 nova_compute[226101]: 2025-12-06 07:32:49.972 226109 DEBUG nova.compute.manager [req-24d1d440-a8cd-4dd4-a806-d9eb0fd54698 req-07a9e784-6dd9-4e1a-8f10-56de12ebaf14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-changed-844228dd-a09d-4ac5-bca7-4a4f664afd31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:32:49 compute-1 nova_compute[226101]: 2025-12-06 07:32:49.973 226109 DEBUG nova.compute.manager [req-24d1d440-a8cd-4dd4-a806-d9eb0fd54698 req-07a9e784-6dd9-4e1a-8f10-56de12ebaf14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Refreshing instance network info cache due to event network-changed-844228dd-a09d-4ac5-bca7-4a4f664afd31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:32:49 compute-1 nova_compute[226101]: 2025-12-06 07:32:49.973 226109 DEBUG oslo_concurrency.lockutils [req-24d1d440-a8cd-4dd4-a806-d9eb0fd54698 req-07a9e784-6dd9-4e1a-8f10-56de12ebaf14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:32:49 compute-1 nova_compute[226101]: 2025-12-06 07:32:49.973 226109 DEBUG oslo_concurrency.lockutils [req-24d1d440-a8cd-4dd4-a806-d9eb0fd54698 req-07a9e784-6dd9-4e1a-8f10-56de12ebaf14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:32:49 compute-1 nova_compute[226101]: 2025-12-06 07:32:49.974 226109 DEBUG nova.network.neutron [req-24d1d440-a8cd-4dd4-a806-d9eb0fd54698 req-07a9e784-6dd9-4e1a-8f10-56de12ebaf14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Refreshing network info cache for port 844228dd-a09d-4ac5-bca7-4a4f664afd31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:32:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:32:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:50.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:50 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:50.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:51 compute-1 nova_compute[226101]: 2025-12-06 07:32:51.359 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:52.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:32:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:52.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:53 compute-1 ceph-mon[81689]: pgmap v2263: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 16 KiB/s wr, 108 op/s
Dec 06 07:32:53 compute-1 ceph-mon[81689]: pgmap v2264: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 16 KiB/s wr, 122 op/s
Dec 06 07:32:53 compute-1 nova_compute[226101]: 2025-12-06 07:32:53.519 226109 DEBUG nova.network.neutron [req-24d1d440-a8cd-4dd4-a806-d9eb0fd54698 req-07a9e784-6dd9-4e1a-8f10-56de12ebaf14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updated VIF entry in instance network info cache for port 844228dd-a09d-4ac5-bca7-4a4f664afd31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:32:53 compute-1 nova_compute[226101]: 2025-12-06 07:32:53.521 226109 DEBUG nova.network.neutron [req-24d1d440-a8cd-4dd4-a806-d9eb0fd54698 req-07a9e784-6dd9-4e1a-8f10-56de12ebaf14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating instance_info_cache with network_info: [{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:32:53 compute-1 nova_compute[226101]: 2025-12-06 07:32:53.553 226109 DEBUG oslo_concurrency.lockutils [req-24d1d440-a8cd-4dd4-a806-d9eb0fd54698 req-07a9e784-6dd9-4e1a-8f10-56de12ebaf14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
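[annotation] The instance_info_cache update above carries the port's full addressing as a JSON list: fixed IP 10.100.0.12 on 10.100.0.0/28, floating IP 192.168.122.206, MAC fa:16:3e:5e:74:4a, MTU 1442 over a tunneled network. A sketch that slices that JSON out of the log line and lists the addresses (parsing only; the field names are exactly those in the logged payload):

    import json

    def vif_addresses(line: str):
        # Everything between "network_info: " and the last "]" is the JSON
        # list nova logged; the trailing "update_instance_cache_with_nw_info
        # ..." suffix contains no bracket, so rindex finds the list's end.
        payload = line.split("network_info: ", 1)[1]
        payload = payload[: payload.rindex("]") + 1]
        for vif in json.loads(payload):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip.get("floating_ips", [])]
                    yield vif["id"], ip["address"], floats

    # Yields ("844228dd-...", "10.100.0.12", ["192.168.122.206"]) for the
    # cache update above.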
Dec 06 07:32:53 compute-1 nova_compute[226101]: 2025-12-06 07:32:53.898 226109 INFO nova.virt.libvirt.driver [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Deleting instance files /var/lib/nova/instances/2364e72a-c2a9-494b-a0f0-ee3bf62444af_del
Dec 06 07:32:53 compute-1 nova_compute[226101]: 2025-12-06 07:32:53.900 226109 INFO nova.virt.libvirt.driver [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Deletion of /var/lib/nova/instances/2364e72a-c2a9-494b-a0f0-ee3bf62444af_del complete
Dec 06 07:32:53 compute-1 nova_compute[226101]: 2025-12-06 07:32:53.961 226109 INFO nova.compute.manager [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Took 32.04 seconds to destroy the instance on the hypervisor.
Dec 06 07:32:53 compute-1 nova_compute[226101]: 2025-12-06 07:32:53.962 226109 DEBUG oslo.service.loopingcall [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:32:53 compute-1 nova_compute[226101]: 2025-12-06 07:32:53.962 226109 DEBUG nova.compute.manager [-] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:32:53 compute-1 nova_compute[226101]: 2025-12-06 07:32:53.963 226109 DEBUG nova.network.neutron [-] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:32:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:54.304063) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374304116, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 757, "num_deletes": 250, "total_data_size": 1502491, "memory_usage": 1535664, "flush_reason": "Manual Compaction"}
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374491402, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 700523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47571, "largest_seqno": 48322, "table_properties": {"data_size": 697117, "index_size": 1186, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9389, "raw_average_key_size": 21, "raw_value_size": 689914, "raw_average_value_size": 1582, "num_data_blocks": 51, "num_entries": 436, "num_filter_entries": 436, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006301, "oldest_key_time": 1765006301, "file_creation_time": 1765006374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 187445 microseconds, and 2915 cpu microseconds.
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:32:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:32:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:54 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:54.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:54.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:54 compute-1 nova_compute[226101]: 2025-12-06 07:32:54.687 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:54.491512) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 700523 bytes OK
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:54.491536) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:54.830235) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:54.830277) EVENT_LOG_v1 {"time_micros": 1765006374830268, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:54.830298) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 1498382, prev total WAL file size 1545527, number of live WAL files 2.
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:54.830902) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353235' seq:72057594037927935, type:22 .. '6D6772737461740031373736' seq:0, type:0; will stop at (end)
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(684KB)], [90(12MB)]
Dec 06 07:32:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374830936, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 13851175, "oldest_snapshot_seqno": -1}
Dec 06 07:32:55 compute-1 nova_compute[226101]: 2025-12-06 07:32:55.466 226109 DEBUG nova.network.neutron [-] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:32:55 compute-1 nova_compute[226101]: 2025-12-06 07:32:55.492 226109 INFO nova.compute.manager [-] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Took 1.53 seconds to deallocate network for instance.
Dec 06 07:32:55 compute-1 nova_compute[226101]: 2025-12-06 07:32:55.566 226109 DEBUG oslo_concurrency.lockutils [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:55 compute-1 nova_compute[226101]: 2025-12-06 07:32:55.567 226109 DEBUG oslo_concurrency.lockutils [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 7836 keys, 10260004 bytes, temperature: kUnknown
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006375628570, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 10260004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10210196, "index_size": 29088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 203461, "raw_average_key_size": 25, "raw_value_size": 10072935, "raw_average_value_size": 1285, "num_data_blocks": 1141, "num_entries": 7836, "num_filter_entries": 7836, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765006374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:32:55 compute-1 nova_compute[226101]: 2025-12-06 07:32:55.670 226109 DEBUG oslo_concurrency.processutils [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:55 compute-1 nova_compute[226101]: 2025-12-06 07:32:55.697 226109 DEBUG nova.compute.manager [req-b076c7b4-2502-4a73-8e94-002452349739 req-d59f1cc6-9457-466b-a602-6dfdd433b5df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2364e72a-c2a9-494b-a0f0-ee3bf62444af] Received event network-vif-deleted-835de58d-7832-467f-9464-dbbc619e219a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:55.628967) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10260004 bytes
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:55.839536) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 17.4 rd, 12.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(34.4) write-amplify(14.6) OK, records in: 8336, records dropped: 500 output_compression: NoCompression
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:55.839572) EVENT_LOG_v1 {"time_micros": 1765006375839558, "job": 56, "event": "compaction_finished", "compaction_time_micros": 797855, "compaction_time_cpu_micros": 28564, "output_level": 6, "num_output_files": 1, "total_output_size": 10260004, "num_input_records": 8336, "num_output_records": 7836, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006375840021, "job": 56, "event": "table_file_deletion", "file_number": 92}
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006375842129, "job": 56, "event": "table_file_deletion", "file_number": 90}
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:54.830802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:55.842243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:55.842249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:55.842252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:55.842254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:55 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:32:55.842256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:55 compute-1 ceph-mon[81689]: pgmap v2265: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 16 KiB/s wr, 115 op/s
Dec 06 07:32:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1505189984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:32:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1505189984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:32:56 compute-1 nova_compute[226101]: 2025-12-06 07:32:56.364 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:32:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:56.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:56 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:56.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:56 compute-1 ceph-mon[81689]: pgmap v2266: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 16 KiB/s wr, 116 op/s
Dec 06 07:32:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:32:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3433138790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:56 compute-1 nova_compute[226101]: 2025-12-06 07:32:56.946 226109 DEBUG oslo_concurrency.processutils [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:56 compute-1 nova_compute[226101]: 2025-12-06 07:32:56.951 226109 DEBUG nova.compute.provider_tree [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:32:56 compute-1 nova_compute[226101]: 2025-12-06 07:32:56.975 226109 DEBUG nova.scheduler.client.report [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:32:56 compute-1 nova_compute[226101]: 2025-12-06 07:32:56.995 226109 DEBUG oslo_concurrency.lockutils [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:57 compute-1 nova_compute[226101]: 2025-12-06 07:32:57.036 226109 INFO nova.scheduler.client.report [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Deleted allocations for instance 2364e72a-c2a9-494b-a0f0-ee3bf62444af
Dec 06 07:32:57 compute-1 nova_compute[226101]: 2025-12-06 07:32:57.170 226109 DEBUG oslo_concurrency.lockutils [None req-4bfaf5f1-cd61-4515-aab1-abb745b2bb45 3f95d5fc3ae84153badf13157ceaf23e 39801d5920c640488d08116f4121fc25 - - default default] Lock "2364e72a-c2a9-494b-a0f0-ee3bf62444af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 35.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:32:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:58.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:32:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:58 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:58.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:59 compute-1 nova_compute[226101]: 2025-12-06 07:32:59.693 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:00.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:00 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:00.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:01 compute-1 ceph-mon[81689]: pgmap v2267: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 KiB/s wr, 108 op/s
Dec 06 07:33:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3433138790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:01 compute-1 nova_compute[226101]: 2025-12-06 07:33:01.366 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:33:01.650 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:33:01.651 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:33:01.651 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:01 compute-1 ovn_controller[130279]: 2025-12-06T07:33:01Z|00492|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:33:01 compute-1 nova_compute[226101]: 2025-12-06 07:33:01.867 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:02 compute-1 ceph-mon[81689]: pgmap v2268: 305 pgs: 305 active+clean; 172 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 360 KiB/s wr, 47 op/s
Dec 06 07:33:02 compute-1 ceph-mon[81689]: pgmap v2269: 305 pgs: 305 active+clean; 189 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.0 MiB/s wr, 58 op/s
Dec 06 07:33:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:02.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:02.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:03 compute-1 ovn_controller[130279]: 2025-12-06T07:33:03Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:74:4a 10.100.0.12
Dec 06 07:33:03 compute-1 ovn_controller[130279]: 2025-12-06T07:33:03Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:74:4a 10.100.0.12
Dec 06 07:33:04 compute-1 ceph-mon[81689]: pgmap v2270: 305 pgs: 305 active+clean; 189 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Dec 06 07:33:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:33:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:33:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:04.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:33:04 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:04.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:33:04 compute-1 nova_compute[226101]: 2025-12-06 07:33:04.709 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3258911594' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:33:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3258911594' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:33:05 compute-1 ceph-mon[81689]: pgmap v2271: 305 pgs: 305 active+clean; 189 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Dec 06 07:33:06 compute-1 nova_compute[226101]: 2025-12-06 07:33:06.368 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:06.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:06.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:08 compute-1 ceph-mon[81689]: pgmap v2272: 305 pgs: 305 active+clean; 231 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 3.4 MiB/s wr, 87 op/s
Dec 06 07:33:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:08.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:08 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:08.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:08 compute-1 nova_compute[226101]: 2025-12-06 07:33:08.736 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:08 compute-1 nova_compute[226101]: 2025-12-06 07:33:08.785 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:08 compute-1 nova_compute[226101]: 2025-12-06 07:33:08.786 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:08 compute-1 nova_compute[226101]: 2025-12-06 07:33:08.786 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:08 compute-1 nova_compute[226101]: 2025-12-06 07:33:08.786 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:33:08 compute-1 nova_compute[226101]: 2025-12-06 07:33:08.786 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:08 compute-1 ceph-osd[79002]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Dec 06 07:33:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2456836380' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:33:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2456836380' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:33:09 compute-1 nova_compute[226101]: 2025-12-06 07:33:09.714 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:33:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2134285174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:09 compute-1 nova_compute[226101]: 2025-12-06 07:33:09.897 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:10 compute-1 nova_compute[226101]: 2025-12-06 07:33:10.015 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:33:10 compute-1 nova_compute[226101]: 2025-12-06 07:33:10.016 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:33:10 compute-1 nova_compute[226101]: 2025-12-06 07:33:10.227 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:33:10 compute-1 nova_compute[226101]: 2025-12-06 07:33:10.229 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4324MB free_disk=20.94310760498047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:33:10 compute-1 nova_compute[226101]: 2025-12-06 07:33:10.231 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:10 compute-1 nova_compute[226101]: 2025-12-06 07:33:10.231 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:10 compute-1 ceph-mon[81689]: pgmap v2273: 305 pgs: 305 active+clean; 242 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 06 07:33:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1667020030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2134285174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/192691524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:10.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:10.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:10 compute-1 nova_compute[226101]: 2025-12-06 07:33:10.734 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance a01650e7-2f06-400b-82aa-0ad2c8b84e6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:33:10 compute-1 nova_compute[226101]: 2025-12-06 07:33:10.735 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:33:10 compute-1 nova_compute[226101]: 2025-12-06 07:33:10.735 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:33:10 compute-1 nova_compute[226101]: 2025-12-06 07:33:10.811 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:33:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4150334743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:11 compute-1 nova_compute[226101]: 2025-12-06 07:33:11.305 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:11 compute-1 nova_compute[226101]: 2025-12-06 07:33:11.310 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:33:11 compute-1 nova_compute[226101]: 2025-12-06 07:33:11.331 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:33:11 compute-1 nova_compute[226101]: 2025-12-06 07:33:11.360 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:33:11 compute-1 nova_compute[226101]: 2025-12-06 07:33:11.361 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:11 compute-1 nova_compute[226101]: 2025-12-06 07:33:11.411 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:11 compute-1 nova_compute[226101]: 2025-12-06 07:33:11.550 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:11 compute-1 nova_compute[226101]: 2025-12-06 07:33:11.550 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:33:11 compute-1 nova_compute[226101]: 2025-12-06 07:33:11.551 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:33:11 compute-1 ceph-mon[81689]: pgmap v2274: 305 pgs: 305 active+clean; 221 MiB data, 947 MiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 3.6 MiB/s wr, 103 op/s
Dec 06 07:33:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4278511991' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:33:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4278511991' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:33:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4150334743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:12 compute-1 nova_compute[226101]: 2025-12-06 07:33:12.172 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:33:12 compute-1 nova_compute[226101]: 2025-12-06 07:33:12.172 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:33:12 compute-1 nova_compute[226101]: 2025-12-06 07:33:12.172 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:33:12 compute-1 nova_compute[226101]: 2025-12-06 07:33:12.173 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:33:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:12.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:12.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:14 compute-1 nova_compute[226101]: 2025-12-06 07:33:14.288 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:14.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:14 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:14.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:14 compute-1 nova_compute[226101]: 2025-12-06 07:33:14.719 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:15 compute-1 ceph-mon[81689]: pgmap v2275: 305 pgs: 305 active+clean; 221 MiB data, 947 MiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 1.9 MiB/s wr, 77 op/s
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.416 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating instance_info_cache with network_info: [{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.447 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.448 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.448 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.449 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.449 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.449 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.474 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Triggering sync for uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.475 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.475 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.475 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.476 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:33:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:15 compute-1 nova_compute[226101]: 2025-12-06 07:33:15.502 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:16 compute-1 nova_compute[226101]: 2025-12-06 07:33:16.413 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:16.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:16 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:16.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:16 compute-1 ceph-mon[81689]: pgmap v2276: 305 pgs: 305 active+clean; 194 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 1.9 MiB/s wr, 85 op/s
Dec 06 07:33:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/527801579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:17 compute-1 podman[272968]: 2025-12-06 07:33:17.074355881 +0000 UTC m=+0.050707484 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:33:17 compute-1 podman[272967]: 2025-12-06 07:33:17.082268751 +0000 UTC m=+0.059912169 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:33:17 compute-1 podman[272969]: 2025-12-06 07:33:17.118747054 +0000 UTC m=+0.088719807 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 07:33:18 compute-1 ceph-mon[81689]: pgmap v2277: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 274 KiB/s rd, 1.9 MiB/s wr, 73 op/s
Dec 06 07:33:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2460644784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1407645777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:18 compute-1 nova_compute[226101]: 2025-12-06 07:33:18.617 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:18.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:18 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:18.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:19 compute-1 nova_compute[226101]: 2025-12-06 07:33:19.584 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:19 compute-1 ceph-mon[81689]: pgmap v2278: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 551 KiB/s wr, 42 op/s
Dec 06 07:33:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/107927179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:33:19 compute-1 ovn_controller[130279]: 2025-12-06T07:33:19Z|00493|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:33:19 compute-1 nova_compute[226101]: 2025-12-06 07:33:19.721 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:19 compute-1 nova_compute[226101]: 2025-12-06 07:33:19.755 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:20.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:20.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:21 compute-1 nova_compute[226101]: 2025-12-06 07:33:21.415 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:21 compute-1 nova_compute[226101]: 2025-12-06 07:33:21.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:21 compute-1 ceph-mon[81689]: pgmap v2279: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 38 KiB/s wr, 24 op/s
Dec 06 07:33:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:22.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:22.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:22 compute-1 nova_compute[226101]: 2025-12-06 07:33:22.900 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:23 compute-1 ceph-mon[81689]: pgmap v2280: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 9 op/s
Dec 06 07:33:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:33:24.177 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:33:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:33:24.178 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:33:24 compute-1 nova_compute[226101]: 2025-12-06 07:33:24.178 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:24 compute-1 nova_compute[226101]: 2025-12-06 07:33:24.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:24 compute-1 nova_compute[226101]: 2025-12-06 07:33:24.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:33:24 compute-1 nova_compute[226101]: 2025-12-06 07:33:24.616 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:33:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:24 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:24.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:24.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:24 compute-1 nova_compute[226101]: 2025-12-06 07:33:24.764 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:25 compute-1 nova_compute[226101]: 2025-12-06 07:33:25.313 226109 DEBUG nova.compute.manager [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:33:25 compute-1 nova_compute[226101]: 2025-12-06 07:33:25.405 226109 INFO nova.compute.manager [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] instance snapshotting
Dec 06 07:33:25 compute-1 nova_compute[226101]: 2025-12-06 07:33:25.405 226109 DEBUG nova.objects.instance [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'flavor' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:33:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:25 compute-1 nova_compute[226101]: 2025-12-06 07:33:25.863 226109 INFO nova.virt.libvirt.driver [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Beginning live snapshot process
Dec 06 07:33:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:33:26.180 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
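
That commit is the delayed chassis update announced at 07:33:24: the agent acknowledges nb_cfg 47 by writing neutron:ovn-metadata-sb-cfg into Chassis_Private. A sketch of the same kind of transaction through ovsdbapp; the southbound endpoint below is a placeholder, and the chassis UUID is copied from the log line above:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB_ENDPOINT = 'tcp:192.0.2.1:6642'  # placeholder, not from this log
    idl = connection.OvsdbIdl.from_server(SB_ENDPOINT, 'OVN_Southbound')
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # Equivalent of the DbSetCommand above: set one external_ids key.
    with api.transaction(check_error=True) as txn:
        txn.add(api.db_set(
            'Chassis_Private', '03fe054d-d727-4af3-9c5e-92e57505f242',
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'})))
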
Dec 06 07:33:26 compute-1 nova_compute[226101]: 2025-12-06 07:33:26.457 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:26 compute-1 nova_compute[226101]: 2025-12-06 07:33:26.465 226109 DEBUG nova.virt.libvirt.imagebackend [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:33:26 compute-1 ceph-mon[81689]: pgmap v2281: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 9 op/s
Dec 06 07:33:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3403146162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:33:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:26.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:26 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:26.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:27 compute-1 nova_compute[226101]: 2025-12-06 07:33:27.017 226109 DEBUG nova.storage.rbd_utils [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(65cd5338eaac40b9898a3daf92381b52) on rbd image(a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:33:27 compute-1 nova_compute[226101]: 2025-12-06 07:33:27.609 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:27 compute-1 ceph-mon[81689]: pgmap v2282: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 14 KiB/s wr, 1 op/s
Dec 06 07:33:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e274 e274: 3 total, 3 up, 3 in
Dec 06 07:33:28 compute-1 nova_compute[226101]: 2025-12-06 07:33:28.583 226109 DEBUG nova.storage.rbd_utils [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] cloning vms/a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk@65cd5338eaac40b9898a3daf92381b52 to images/24540ea9-2655-4ff6-a852-08458e75db90 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:33:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:28 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:28.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:28.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:29 compute-1 nova_compute[226101]: 2025-12-06 07:33:29.017 226109 DEBUG nova.storage.rbd_utils [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] flattening images/24540ea9-2655-4ff6-a852-08458e75db90 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:33:29 compute-1 ceph-mon[81689]: osdmap e274: 3 total, 3 up, 3 in
Dec 06 07:33:29 compute-1 nova_compute[226101]: 2025-12-06 07:33:29.768 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:29 compute-1 ovn_controller[130279]: 2025-12-06T07:33:29Z|00494|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:33:29 compute-1 nova_compute[226101]: 2025-12-06 07:33:29.870 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:30.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:30 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:30.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:31 compute-1 ceph-mon[81689]: pgmap v2284: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.4 KiB/s rd, 16 KiB/s wr, 3 op/s
Dec 06 07:33:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:32.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:32 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:32.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:32 compute-1 nova_compute[226101]: 2025-12-06 07:33:32.690 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2309148104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:34 compute-1 nova_compute[226101]: 2025-12-06 07:33:34.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:34 compute-1 nova_compute[226101]: 2025-12-06 07:33:34.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:33:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:34.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:34 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:34.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:34 compute-1 nova_compute[226101]: 2025-12-06 07:33:34.771 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:35 compute-1 nova_compute[226101]: 2025-12-06 07:33:35.577 226109 DEBUG nova.storage.rbd_utils [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] removing snapshot(65cd5338eaac40b9898a3daf92381b52) on rbd image(a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:33:36 compute-1 nova_compute[226101]: 2025-12-06 07:33:36.422 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:36 compute-1 ceph-mon[81689]: pgmap v2285: 305 pgs: 305 active+clean; 205 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 06 07:33:36 compute-1 ceph-mon[81689]: pgmap v2286: 305 pgs: 305 active+clean; 205 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 06 07:33:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:36 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:36.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:36.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:37 compute-1 ceph-mon[81689]: pgmap v2287: 305 pgs: 305 active+clean; 254 MiB data, 967 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 5.0 MiB/s wr, 161 op/s
Dec 06 07:33:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e275 e275: 3 total, 3 up, 3 in
Dec 06 07:33:37 compute-1 nova_compute[226101]: 2025-12-06 07:33:37.892 226109 DEBUG nova.storage.rbd_utils [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(snap) on rbd image(24540ea9-2655-4ff6-a852-08458e75db90) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:33:38 compute-1 ceph-mon[81689]: pgmap v2288: 305 pgs: 305 active+clean; 286 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.6 MiB/s wr, 189 op/s
Dec 06 07:33:38 compute-1 ceph-mon[81689]: osdmap e275: 3 total, 3 up, 3 in
Dec 06 07:33:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2378557285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:33:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:38.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:38.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e276 e276: 3 total, 3 up, 3 in
Dec 06 07:33:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2246063812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:33:39 compute-1 ceph-mon[81689]: osdmap e276: 3 total, 3 up, 3 in
Dec 06 07:33:39 compute-1 nova_compute[226101]: 2025-12-06 07:33:39.773 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:40 compute-1 nova_compute[226101]: 2025-12-06 07:33:40.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:40.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:40.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:41 compute-1 ceph-mon[81689]: pgmap v2290: 305 pgs: 305 active+clean; 293 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.8 MiB/s wr, 188 op/s
Dec 06 07:33:41 compute-1 nova_compute[226101]: 2025-12-06 07:33:41.425 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:42.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:42 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:42.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:43 compute-1 nova_compute[226101]: 2025-12-06 07:33:43.024 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "829161c5-19b5-459e-88a3-58512aaa5fc7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:43 compute-1 nova_compute[226101]: 2025-12-06 07:33:43.024 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:43 compute-1 nova_compute[226101]: 2025-12-06 07:33:43.068 226109 DEBUG nova.compute.manager [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:33:43 compute-1 nova_compute[226101]: 2025-12-06 07:33:43.231 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:43 compute-1 nova_compute[226101]: 2025-12-06 07:33:43.232 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:43 compute-1 nova_compute[226101]: 2025-12-06 07:33:43.240 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:33:43 compute-1 nova_compute[226101]: 2025-12-06 07:33:43.241 226109 INFO nova.compute.claims [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:33:43 compute-1 nova_compute[226101]: 2025-12-06 07:33:43.435 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:44 compute-1 nova_compute[226101]: 2025-12-06 07:33:44.558 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:44.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:44.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:44 compute-1 nova_compute[226101]: 2025-12-06 07:33:44.774 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:45 compute-1 ceph-mon[81689]: pgmap v2292: 305 pgs: 305 active+clean; 293 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.0 MiB/s wr, 148 op/s
Dec 06 07:33:46 compute-1 nova_compute[226101]: 2025-12-06 07:33:46.426 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e277 e277: 3 total, 3 up, 3 in
Dec 06 07:33:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:46.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:46.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:47 compute-1 sudo[273190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:47 compute-1 sudo[273190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:47 compute-1 sudo[273190]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:47 compute-1 podman[273215]: 2025-12-06 07:33:47.228404163 +0000 UTC m=+0.049390858 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 06 07:33:47 compute-1 podman[273214]: 2025-12-06 07:33:47.235962354 +0000 UTC m=+0.059529558 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:33:47 compute-1 sudo[273233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:33:47 compute-1 sudo[273233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:47 compute-1 sudo[273233]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:33:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2904693144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.278 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.843s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
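
The 3.8 s subprocess that just returned is nova's Ceph capacity probe. Re-running it by hand and reading the cluster totals, assuming the standard "ceph df" JSON layout:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    print('total %.1f GiB, avail %.1f GiB'
          % (stats['total_bytes'] / gib, stats['total_avail_bytes'] / gib))
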
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.284 226109 DEBUG nova.compute.provider_tree [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:33:47 compute-1 podman[273216]: 2025-12-06 07:33:47.288522076 +0000 UTC m=+0.104618351 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:33:47 compute-1 sudo[273296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:47 compute-1 sudo[273296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:47 compute-1 sudo[273296]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.314 226109 DEBUG nova.scheduler.client.report [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
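
Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio. Worked through for the numbers above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, int(capacity))  # VCPU 32, MEMORY_MB 7168, DISK_GB 17
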
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.348 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.349 226109 DEBUG nova.compute.manager [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:33:47 compute-1 sudo[273325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:33:47 compute-1 sudo[273325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.436 226109 DEBUG nova.compute.manager [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.436 226109 DEBUG nova.network.neutron [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.498 226109 INFO nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.526 226109 DEBUG nova.compute.manager [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:33:47 compute-1 ceph-mon[81689]: pgmap v2293: 305 pgs: 305 active+clean; 293 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 263 KiB/s rd, 2.3 MiB/s wr, 80 op/s
Dec 06 07:33:47 compute-1 ceph-mon[81689]: pgmap v2294: 305 pgs: 305 active+clean; 293 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 731 KiB/s rd, 321 KiB/s wr, 74 op/s
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.660 226109 DEBUG nova.compute.manager [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.662 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:33:47 compute-1 nova_compute[226101]: 2025-12-06 07:33:47.663 226109 INFO nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Creating image(s)
Dec 06 07:33:47 compute-1 sudo[273325]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:48.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:48 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:48.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e278 e278: 3 total, 3 up, 3 in
Dec 06 07:33:50 compute-1 ceph-mon[81689]: osdmap e277: 3 total, 3 up, 3 in
Dec 06 07:33:50 compute-1 ceph-mon[81689]: pgmap v2296: 305 pgs: 305 active+clean; 299 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.6 MiB/s wr, 184 op/s
Dec 06 07:33:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2904693144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:33:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:50 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:50.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:50.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:50 compute-1 nova_compute[226101]: 2025-12-06 07:33:50.738 226109 DEBUG nova.storage.rbd_utils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 829161c5-19b5-459e-88a3-58512aaa5fc7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.073 226109 DEBUG nova.storage.rbd_utils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 829161c5-19b5-459e-88a3-58512aaa5fc7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.215 226109 DEBUG nova.storage.rbd_utils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 829161c5-19b5-459e-88a3-58512aaa5fc7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.219 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.242 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.248 226109 INFO nova.virt.libvirt.driver [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Snapshot image upload complete
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.249 226109 INFO nova.compute.manager [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Took 25.80 seconds to snapshot the instance on the hypervisor.
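
The 25.80 s reported here covers the create_snap / clone / flatten / remove_snap sequence logged between 07:33:27 and 07:33:37, plus the image upload. A condensed sketch of that direct-to-Ceph path with the librbd Python bindings, using the pool, image, and snapshot names from the log (error handling omitted; protecting the snapshot before cloning is a librbd requirement):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    vms = cluster.open_ioctx('vms')
    images = cluster.open_ioctx('images')

    src = 'a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk'
    snap = '65cd5338eaac40b9898a3daf92381b52'
    dst = '24540ea9-2655-4ff6-a852-08458e75db90'

    with rbd.Image(vms, src) as img:
        img.create_snap(snap)                       # 07:33:27
        img.protect_snap(snap)
    rbd.RBD().clone(vms, src, snap, images, dst)    # 07:33:28
    with rbd.Image(images, dst) as img:
        img.flatten()                               # 07:33:29
    with rbd.Image(vms, src) as img:
        img.unprotect_snap(snap)
        img.remove_snap(snap)                       # 07:33:35
    with rbd.Image(images, dst) as img:
        img.create_snap('snap')                     # 07:33:37

    images.close()
    vms.close()
    cluster.shutdown()
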
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.254 226109 DEBUG nova.policy [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '605b5481e0c944048e6a67046c30d693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.285 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
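
The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping the child at a 1 GiB address space and 30 s of CPU so a malformed base image cannot hang or balloon the compute service. The same call expressed through processutils:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/'
        '890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
    print(out)
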
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.285 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.286 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.286 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:51 compute-1 ceph-mon[81689]: pgmap v2297: 305 pgs: 305 active+clean; 305 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 153 op/s
Dec 06 07:33:51 compute-1 ceph-mon[81689]: osdmap e278: 3 total, 3 up, 3 in
Dec 06 07:33:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:33:51 compute-1 ceph-mon[81689]: pgmap v2299: 305 pgs: 305 active+clean; 313 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 158 op/s
Dec 06 07:33:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:33:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:33:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:33:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:33:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:33:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:33:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.449 226109 DEBUG nova.storage.rbd_utils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 829161c5-19b5-459e-88a3-58512aaa5fc7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.453 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 829161c5-19b5-459e-88a3-58512aaa5fc7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.479 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.774 226109 DEBUG nova.compute.manager [None req-56f2fdc3-ec91-4c58-b79c-7a14845a73fb a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.875 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 829161c5-19b5-459e-88a3-58512aaa5fc7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:51 compute-1 nova_compute[226101]: 2025-12-06 07:33:51.962 226109 DEBUG nova.storage.rbd_utils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] resizing rbd image 829161c5-19b5-459e-88a3-58512aaa5fc7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
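
For boot-from-image, the Rbd backend imports the cached base file and then grows the new disk to the flavor's root size (1073741824 bytes, i.e. 1 GiB, here). The resize step through librbd, reusing the image name from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx,
                       '829161c5-19b5-459e-88a3-58512aaa5fc7_disk') as img:
            img.resize(1073741824)  # 1 GiB, matching the log line above
    finally:
        ioctx.close()
        cluster.shutdown()
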
Dec 06 07:33:52 compute-1 nova_compute[226101]: 2025-12-06 07:33:52.083 226109 DEBUG nova.objects.instance [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'migration_context' on Instance uuid 829161c5-19b5-459e-88a3-58512aaa5fc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:33:52 compute-1 nova_compute[226101]: 2025-12-06 07:33:52.104 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:33:52 compute-1 nova_compute[226101]: 2025-12-06 07:33:52.105 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Ensure instance console log exists: /var/lib/nova/instances/829161c5-19b5-459e-88a3-58512aaa5fc7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:33:52 compute-1 nova_compute[226101]: 2025-12-06 07:33:52.106 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:52 compute-1 nova_compute[226101]: 2025-12-06 07:33:52.106 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:52 compute-1 nova_compute[226101]: 2025-12-06 07:33:52.107 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
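
The vgpu_resources acquire/release pair (waited 0.000s, held 0.000s) is oslo.concurrency's synchronized decorator; the DEBUG lines are emitted by its "inner" wrapper. The pattern in miniature, with a stand-in body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs():
        # Runs with the named in-process lock held; lockutils logs the
        # acquire/release pair at DEBUG, as seen above.
        return []

    allocate_mdevs()
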
Dec 06 07:33:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:52.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:33:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:52.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:33:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e279 e279: 3 total, 3 up, 3 in
Dec 06 07:33:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3573876812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:52 compute-1 nova_compute[226101]: 2025-12-06 07:33:52.992 226109 DEBUG nova.network.neutron [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Successfully created port: 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:33:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:53 compute-1 nova_compute[226101]: 2025-12-06 07:33:53.666 226109 DEBUG nova.compute.manager [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:33:53 compute-1 nova_compute[226101]: 2025-12-06 07:33:53.707 226109 INFO nova.compute.manager [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] instance snapshotting
Dec 06 07:33:53 compute-1 nova_compute[226101]: 2025-12-06 07:33:53.708 226109 DEBUG nova.objects.instance [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'flavor' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:33:53 compute-1 ceph-mon[81689]: pgmap v2300: 305 pgs: 305 active+clean; 313 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 129 op/s
Dec 06 07:33:53 compute-1 ceph-mon[81689]: osdmap e279: 3 total, 3 up, 3 in
Dec 06 07:33:53 compute-1 nova_compute[226101]: 2025-12-06 07:33:53.821 226109 DEBUG nova.network.neutron [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Successfully updated port: 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:33:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e280 e280: 3 total, 3 up, 3 in
Dec 06 07:33:53 compute-1 nova_compute[226101]: 2025-12-06 07:33:53.842 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:33:53 compute-1 nova_compute[226101]: 2025-12-06 07:33:53.842 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:33:53 compute-1 nova_compute[226101]: 2025-12-06 07:33:53.843 226109 DEBUG nova.network.neutron [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:33:53 compute-1 nova_compute[226101]: 2025-12-06 07:33:53.989 226109 INFO nova.virt.libvirt.driver [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Beginning live snapshot process
Dec 06 07:33:54 compute-1 nova_compute[226101]: 2025-12-06 07:33:54.074 226109 DEBUG nova.network.neutron [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:33:54 compute-1 nova_compute[226101]: 2025-12-06 07:33:54.166 226109 DEBUG nova.virt.libvirt.imagebackend [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:33:54 compute-1 nova_compute[226101]: 2025-12-06 07:33:54.258 226109 DEBUG nova.compute.manager [req-818f7a02-1585-45a3-a42d-71e849ba6d7d req-89d6490f-755f-4de8-a5c5-7fc92f551d03 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Received event network-changed-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:33:54 compute-1 nova_compute[226101]: 2025-12-06 07:33:54.259 226109 DEBUG nova.compute.manager [req-818f7a02-1585-45a3-a42d-71e849ba6d7d req-89d6490f-755f-4de8-a5c5-7fc92f551d03 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Refreshing instance network info cache due to event network-changed-1f39ead9-93ea-42d4-bb3c-64034b85c0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:33:54 compute-1 nova_compute[226101]: 2025-12-06 07:33:54.259 226109 DEBUG oslo_concurrency.lockutils [req-818f7a02-1585-45a3-a42d-71e849ba6d7d req-89d6490f-755f-4de8-a5c5-7fc92f551d03 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:33:54 compute-1 nova_compute[226101]: 2025-12-06 07:33:54.373 226109 DEBUG nova.storage.rbd_utils [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(86339c57bb3348f9b419369364f086d1) on rbd image(a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:33:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:54.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:54.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:54 compute-1 ceph-mon[81689]: osdmap e280: 3 total, 3 up, 3 in
Dec 06 07:33:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e281 e281: 3 total, 3 up, 3 in
Dec 06 07:33:55 compute-1 nova_compute[226101]: 2025-12-06 07:33:55.293 226109 DEBUG nova.storage.rbd_utils [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] cloning vms/a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk@86339c57bb3348f9b419369364f086d1 to images/c4654208-04b4-4132-87e5-c9f3bf472e83 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:33:55 compute-1 nova_compute[226101]: 2025-12-06 07:33:55.585 226109 DEBUG nova.network.neutron [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updating instance_info_cache with network_info: [{"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.118 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.118 226109 DEBUG nova.compute.manager [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Instance network_info: |[{"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.119 226109 DEBUG oslo_concurrency.lockutils [req-818f7a02-1585-45a3-a42d-71e849ba6d7d req-89d6490f-755f-4de8-a5c5-7fc92f551d03 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.119 226109 DEBUG nova.network.neutron [req-818f7a02-1585-45a3-a42d-71e849ba6d7d req-89d6490f-755f-4de8-a5c5-7fc92f551d03 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Refreshing network info cache for port 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.124 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Start _get_guest_xml network_info=[{"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.130 226109 WARNING nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.137 226109 DEBUG nova.virt.libvirt.host [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.139 226109 DEBUG nova.virt.libvirt.host [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.143 226109 DEBUG nova.virt.libvirt.host [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.144 226109 DEBUG nova.virt.libvirt.host [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.146 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.146 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.147 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.147 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.147 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.147 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.148 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.148 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.148 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.148 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.149 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.149 226109 DEBUG nova.virt.hardware [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.152 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.245 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:56 compute-1 nova_compute[226101]: 2025-12-06 07:33:56.430 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:56.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:56.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:57 compute-1 ceph-mon[81689]: pgmap v2303: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.8 MiB/s wr, 178 op/s
Dec 06 07:33:57 compute-1 ceph-mon[81689]: osdmap e281: 3 total, 3 up, 3 in
Dec 06 07:33:58 compute-1 nova_compute[226101]: 2025-12-06 07:33:58.530 226109 DEBUG nova.network.neutron [req-818f7a02-1585-45a3-a42d-71e849ba6d7d req-89d6490f-755f-4de8-a5c5-7fc92f551d03 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updated VIF entry in instance network info cache for port 1f39ead9-93ea-42d4-bb3c-64034b85c0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:33:58 compute-1 nova_compute[226101]: 2025-12-06 07:33:58.531 226109 DEBUG nova.network.neutron [req-818f7a02-1585-45a3-a42d-71e849ba6d7d req-89d6490f-755f-4de8-a5c5-7fc92f551d03 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updating instance_info_cache with network_info: [{"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:33:58 compute-1 nova_compute[226101]: 2025-12-06 07:33:58.551 226109 DEBUG oslo_concurrency.lockutils [req-818f7a02-1585-45a3-a42d-71e849ba6d7d req-89d6490f-755f-4de8-a5c5-7fc92f551d03 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:33:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:58.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:33:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:58.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:59 compute-1 ceph-mon[81689]: pgmap v2305: 305 pgs: 305 active+clean; 490 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 15 MiB/s wr, 356 op/s
Dec 06 07:33:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:33:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3797525378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:33:59 compute-1 nova_compute[226101]: 2025-12-06 07:33:59.859 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.707s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:59 compute-1 nova_compute[226101]: 2025-12-06 07:33:59.887 226109 DEBUG nova.storage.rbd_utils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 829161c5-19b5-459e-88a3-58512aaa5fc7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:59 compute-1 nova_compute[226101]: 2025-12-06 07:33:59.891 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:59 compute-1 sudo[273675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:59 compute-1 sudo[273675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:59 compute-1 sudo[273675]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:59 compute-1 sudo[273701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:34:00 compute-1 sudo[273701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:34:00 compute-1 sudo[273701]: pam_unix(sudo:session): session closed for user root
Dec 06 07:34:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:34:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2964872426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.600 226109 DEBUG nova.storage.rbd_utils [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] flattening images/c4654208-04b4-4132-87e5-c9f3bf472e83 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:34:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/789745650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:00 compute-1 ceph-mon[81689]: pgmap v2306: 305 pgs: 305 active+clean; 490 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 MiB/s wr, 405 op/s
Dec 06 07:34:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:34:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/494249757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3797525378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:34:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:00.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:00 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:00.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.759 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.868s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.761 226109 DEBUG nova.virt.libvirt.vif [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=122,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-ayw3ijlp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:33:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=829161c5-19b5-459e-88a3-58512aaa5fc7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.762 226109 DEBUG nova.network.os_vif_util [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.763 226109 DEBUG nova.network.os_vif_util [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:39:62,bridge_name='br-int',has_traffic_filtering=True,id=1f39ead9-93ea-42d4-bb3c-64034b85c0a4,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f39ead9-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.765 226109 DEBUG nova.objects.instance [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 829161c5-19b5-459e-88a3-58512aaa5fc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.791 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:34:00 compute-1 nova_compute[226101]:   <uuid>829161c5-19b5-459e-88a3-58512aaa5fc7</uuid>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   <name>instance-0000007a</name>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <nova:name>multiattach-server-0</nova:name>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:33:56</nova:creationTime>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <nova:user uuid="605b5481e0c944048e6a67046c30d693">tempest-AttachVolumeMultiAttachTest-690984293-project-member</nova:user>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <nova:project uuid="833f4cf9f5a64b2ab94c3bf330353a31">tempest-AttachVolumeMultiAttachTest-690984293</nova:project>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <nova:port uuid="1f39ead9-93ea-42d4-bb3c-64034b85c0a4">
Dec 06 07:34:00 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <system>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <entry name="serial">829161c5-19b5-459e-88a3-58512aaa5fc7</entry>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <entry name="uuid">829161c5-19b5-459e-88a3-58512aaa5fc7</entry>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     </system>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   <os>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   </os>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   <features>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   </features>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/829161c5-19b5-459e-88a3-58512aaa5fc7_disk">
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       </source>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/829161c5-19b5-459e-88a3-58512aaa5fc7_disk.config">
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       </source>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:34:00 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:d3:39:62"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <target dev="tap1f39ead9-93"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/829161c5-19b5-459e-88a3-58512aaa5fc7/console.log" append="off"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <video>
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     </video>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:34:00 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:34:00 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:34:00 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:34:00 compute-1 nova_compute[226101]: </domain>
Dec 06 07:34:00 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.793 226109 DEBUG nova.compute.manager [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Preparing to wait for external event network-vif-plugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.793 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.794 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.794 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.795 226109 DEBUG nova.virt.libvirt.vif [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=122,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-ayw3ijlp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:33:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=829161c5-19b5-459e-88a3-58512aaa5fc7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.795 226109 DEBUG nova.network.os_vif_util [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.796 226109 DEBUG nova.network.os_vif_util [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:39:62,bridge_name='br-int',has_traffic_filtering=True,id=1f39ead9-93ea-42d4-bb3c-64034b85c0a4,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f39ead9-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.796 226109 DEBUG os_vif [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:39:62,bridge_name='br-int',has_traffic_filtering=True,id=1f39ead9-93ea-42d4-bb3c-64034b85c0a4,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f39ead9-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.797 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.797 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.798 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.801 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.801 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f39ead9-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.802 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f39ead9-93, col_values=(('external_ids', {'iface-id': '1f39ead9-93ea-42d4-bb3c-64034b85c0a4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:39:62', 'vm-uuid': '829161c5-19b5-459e-88a3-58512aaa5fc7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:00 compute-1 NetworkManager[49031]: <info>  [1765006440.8446] manager: (tap1f39ead9-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.844 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.848 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.851 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:00 compute-1 nova_compute[226101]: 2025-12-06 07:34:00.853 226109 INFO os_vif [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:39:62,bridge_name='br-int',has_traffic_filtering=True,id=1f39ead9-93ea-42d4-bb3c-64034b85c0a4,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f39ead9-93')
Dec 06 07:34:01 compute-1 nova_compute[226101]: 2025-12-06 07:34:01.031 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:01 compute-1 nova_compute[226101]: 2025-12-06 07:34:01.032 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:01 compute-1 nova_compute[226101]: 2025-12-06 07:34:01.032 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:d3:39:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:34:01 compute-1 nova_compute[226101]: 2025-12-06 07:34:01.032 226109 INFO nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Using config drive
Dec 06 07:34:01 compute-1 nova_compute[226101]: 2025-12-06 07:34:01.061 226109 DEBUG nova.storage.rbd_utils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 829161c5-19b5-459e-88a3-58512aaa5fc7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:01 compute-1 nova_compute[226101]: 2025-12-06 07:34:01.433 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:01.651 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:01.651 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:01.652 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:01 compute-1 nova_compute[226101]: 2025-12-06 07:34:01.748 226109 INFO nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Creating config drive at /var/lib/nova/instances/829161c5-19b5-459e-88a3-58512aaa5fc7/disk.config
Dec 06 07:34:01 compute-1 nova_compute[226101]: 2025-12-06 07:34:01.754 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/829161c5-19b5-459e-88a3-58512aaa5fc7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpusaemcu9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:01 compute-1 nova_compute[226101]: 2025-12-06 07:34:01.887 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/829161c5-19b5-459e-88a3-58512aaa5fc7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpusaemcu9" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2964872426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:02 compute-1 ceph-mon[81689]: pgmap v2307: 305 pgs: 305 active+clean; 492 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 12 MiB/s wr, 325 op/s
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.173 226109 DEBUG nova.storage.rbd_utils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 829161c5-19b5-459e-88a3-58512aaa5fc7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.178 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/829161c5-19b5-459e-88a3-58512aaa5fc7/disk.config 829161c5-19b5-459e-88a3-58512aaa5fc7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.346 226109 DEBUG oslo_concurrency.processutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/829161c5-19b5-459e-88a3-58512aaa5fc7/disk.config 829161c5-19b5-459e-88a3-58512aaa5fc7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.347 226109 INFO nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Deleting local config drive /var/lib/nova/instances/829161c5-19b5-459e-88a3-58512aaa5fc7/disk.config because it was imported into RBD.
Dec 06 07:34:02 compute-1 kernel: tap1f39ead9-93: entered promiscuous mode
Dec 06 07:34:02 compute-1 NetworkManager[49031]: <info>  [1765006442.3936] manager: (tap1f39ead9-93): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Dec 06 07:34:02 compute-1 ovn_controller[130279]: 2025-12-06T07:34:02Z|00495|binding|INFO|Claiming lport 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 for this chassis.
Dec 06 07:34:02 compute-1 ovn_controller[130279]: 2025-12-06T07:34:02Z|00496|binding|INFO|1f39ead9-93ea-42d4-bb3c-64034b85c0a4: Claiming fa:16:3e:d3:39:62 10.100.0.4
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.395 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.408 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:39:62 10.100.0.4'], port_security=['fa:16:3e:d3:39:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '829161c5-19b5-459e-88a3-58512aaa5fc7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=1f39ead9-93ea-42d4-bb3c-64034b85c0a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.410 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 bound to our chassis
Dec 06 07:34:02 compute-1 ovn_controller[130279]: 2025-12-06T07:34:02Z|00497|binding|INFO|Setting lport 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 ovn-installed in OVS
Dec 06 07:34:02 compute-1 ovn_controller[130279]: 2025-12-06T07:34:02Z|00498|binding|INFO|Setting lport 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 up in Southbound
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.412 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.413 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.419 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.423 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2025ddb4-6a5f-46cc-9230-44bde0e219c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.425 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5bb49e8a-b1 in ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.428 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5bb49e8a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.428 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0670e5e5-5d53-4ec0-bafb-2b961dade701]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.428 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[343bfb90-2424-4297-888c-d31676c6b678]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 systemd-udevd[273839]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:34:02 compute-1 systemd-machined[190302]: New machine qemu-58-instance-0000007a.
Dec 06 07:34:02 compute-1 NetworkManager[49031]: <info>  [1765006442.4432] device (tap1f39ead9-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:34:02 compute-1 systemd[1]: Started Virtual Machine qemu-58-instance-0000007a.
Dec 06 07:34:02 compute-1 NetworkManager[49031]: <info>  [1765006442.4448] device (tap1f39ead9-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.443 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ecc71e-77a7-4f5d-901f-b39e156202eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.459 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7a979b9d-74bf-4d71-868a-ef497608648c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.489 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[697f2e84-ea62-48e2-9631-b801658c2f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.494 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d75eb338-9d82-4914-af17-167432e0c009]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 NetworkManager[49031]: <info>  [1765006442.4959] manager: (tap5bb49e8a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/230)
Dec 06 07:34:02 compute-1 systemd-udevd[273842]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.532 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7413e1d3-6b10-4c40-a639-9adf4a4884d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.537 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce37b35-b034-46b9-b1d1-4ffa38805601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 NetworkManager[49031]: <info>  [1765006442.5617] device (tap5bb49e8a-b0): carrier: link connected
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.568 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e3010051-643e-41d7-970a-be0c38ab8ccb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.585 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bfaf9122-e83d-4854-b37e-13f027c369df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672206, 'reachable_time': 41951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273871, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.599 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e43352b7-da19-46b1-b874-9f8c78b46f38]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:bf8b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672206, 'tstamp': 672206}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273872, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.616 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad72d74-460b-4b89-a389-2ff992c2c3a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672206, 'reachable_time': 41951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273873, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.648 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d389418f-7684-41c0-be8d-82217c485221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:02.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:02.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.717 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[87dd7f7e-1c48-490f-a62f-4882300da248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.718 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.719 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.719 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.721 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:02 compute-1 NetworkManager[49031]: <info>  [1765006442.7219] manager: (tap5bb49e8a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Dec 06 07:34:02 compute-1 kernel: tap5bb49e8a-b0: entered promiscuous mode
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.724 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.725 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:02 compute-1 ovn_controller[130279]: 2025-12-06T07:34:02Z|00499|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.740 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.742 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5bb49e8a-b939-4c79-851c-62c634be0272.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5bb49e8a-b939-4c79-851c-62c634be0272.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.744 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1375901c-7456-4911-8c9d-b062a08382d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.746 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-5bb49e8a-b939-4c79-851c-62c634be0272
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/5bb49e8a-b939-4c79-851c-62c634be0272.pid.haproxy
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 5bb49e8a-b939-4c79-851c-62c634be0272
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.747 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'env', 'PROCESS_TAG=haproxy-5bb49e8a-b939-4c79-851c-62c634be0272', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5bb49e8a-b939-4c79-851c-62c634be0272.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.783 226109 DEBUG nova.compute.manager [req-dee4cbf5-bf70-4466-9eec-0bde42f96a36 req-c8f344e1-1414-49e4-b0f4-13aade8e0274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Received event network-vif-plugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.784 226109 DEBUG oslo_concurrency.lockutils [req-dee4cbf5-bf70-4466-9eec-0bde42f96a36 req-c8f344e1-1414-49e4-b0f4-13aade8e0274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.784 226109 DEBUG oslo_concurrency.lockutils [req-dee4cbf5-bf70-4466-9eec-0bde42f96a36 req-c8f344e1-1414-49e4-b0f4-13aade8e0274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.784 226109 DEBUG oslo_concurrency.lockutils [req-dee4cbf5-bf70-4466-9eec-0bde42f96a36 req-c8f344e1-1414-49e4-b0f4-13aade8e0274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.785 226109 DEBUG nova.compute.manager [req-dee4cbf5-bf70-4466-9eec-0bde42f96a36 req-c8f344e1-1414-49e4-b0f4-13aade8e0274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Processing event network-vif-plugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:34:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:02.915 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:34:02 compute-1 nova_compute[226101]: 2025-12-06 07:34:02.917 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.018 226109 DEBUG nova.storage.rbd_utils [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] removing snapshot(86339c57bb3348f9b419369364f086d1) on rbd image(a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:34:03 compute-1 podman[273960]: 2025-12-06 07:34:03.172602343 +0000 UTC m=+0.091566012 container create 0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 07:34:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e282 e282: 3 total, 3 up, 3 in
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.185 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006443.185245, 829161c5-19b5-459e-88a3-58512aaa5fc7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.186 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] VM Started (Lifecycle Event)
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.189 226109 DEBUG nova.compute.manager [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.193 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.199 226109 INFO nova.virt.libvirt.driver [-] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Instance spawned successfully.
Dec 06 07:34:03 compute-1 podman[273960]: 2025-12-06 07:34:03.10459939 +0000 UTC m=+0.023563079 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.199 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.208 226109 DEBUG nova.storage.rbd_utils [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(snap) on rbd image(c4654208-04b4-4132-87e5-c9f3bf472e83) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:34:03 compute-1 systemd[1]: Started libpod-conmon-0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8.scope.
Dec 06 07:34:03 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:34:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9794ce3684fa58419244269e9b9d799fe215b012cdf2f1929f868edae3c47df5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:34:03 compute-1 podman[273960]: 2025-12-06 07:34:03.345471034 +0000 UTC m=+0.264434733 container init 0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.353 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:03 compute-1 podman[273960]: 2025-12-06 07:34:03.36410287 +0000 UTC m=+0.283066539 container start 0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.364 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.367 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.368 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.368 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.369 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.369 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.370 226109 DEBUG nova.virt.libvirt.driver [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:03 compute-1 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[273995]: [NOTICE]   (274002) : New worker (274004) forked
Dec 06 07:34:03 compute-1 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[273995]: [NOTICE]   (274002) : Loading success.
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.427 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.428 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006443.1863966, 829161c5-19b5-459e-88a3-58512aaa5fc7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.429 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] VM Paused (Lifecycle Event)
Dec 06 07:34:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:03.447 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.455 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.460 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006443.1930752, 829161c5-19b5-459e-88a3-58512aaa5fc7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.460 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] VM Resumed (Lifecycle Event)
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.475 226109 INFO nova.compute.manager [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Took 15.81 seconds to spawn the instance on the hypervisor.
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.476 226109 DEBUG nova.compute.manager [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.483 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.486 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.511 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.527 226109 INFO nova.compute.manager [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Took 20.32 seconds to build instance.
Dec 06 07:34:03 compute-1 nova_compute[226101]: 2025-12-06 07:34:03.545 226109 DEBUG oslo_concurrency.lockutils [None req-68770983-590c-45d8-81ae-e8d3950bd105 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e283 e283: 3 total, 3 up, 3 in
Dec 06 07:34:04 compute-1 ceph-mon[81689]: pgmap v2308: 305 pgs: 305 active+clean; 492 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 7.1 MiB/s wr, 181 op/s
Dec 06 07:34:04 compute-1 ceph-mon[81689]: osdmap e282: 3 total, 3 up, 3 in
Dec 06 07:34:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3598755728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:04.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:04 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:04.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:04 compute-1 nova_compute[226101]: 2025-12-06 07:34:04.879 226109 DEBUG nova.compute.manager [req-29a54894-e176-43dc-be59-7ecece1ed234 req-4c8e05da-5562-4104-a30f-1e5607466537 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Received event network-vif-plugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:34:04 compute-1 nova_compute[226101]: 2025-12-06 07:34:04.880 226109 DEBUG oslo_concurrency.lockutils [req-29a54894-e176-43dc-be59-7ecece1ed234 req-4c8e05da-5562-4104-a30f-1e5607466537 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:04 compute-1 nova_compute[226101]: 2025-12-06 07:34:04.881 226109 DEBUG oslo_concurrency.lockutils [req-29a54894-e176-43dc-be59-7ecece1ed234 req-4c8e05da-5562-4104-a30f-1e5607466537 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:04 compute-1 nova_compute[226101]: 2025-12-06 07:34:04.881 226109 DEBUG oslo_concurrency.lockutils [req-29a54894-e176-43dc-be59-7ecece1ed234 req-4c8e05da-5562-4104-a30f-1e5607466537 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:04 compute-1 nova_compute[226101]: 2025-12-06 07:34:04.881 226109 DEBUG nova.compute.manager [req-29a54894-e176-43dc-be59-7ecece1ed234 req-4c8e05da-5562-4104-a30f-1e5607466537 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] No waiting events found dispatching network-vif-plugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:34:04 compute-1 nova_compute[226101]: 2025-12-06 07:34:04.882 226109 WARNING nova.compute.manager [req-29a54894-e176-43dc-be59-7ecece1ed234 req-4c8e05da-5562-4104-a30f-1e5607466537 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Received unexpected event network-vif-plugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 for instance with vm_state active and task_state None.
Dec 06 07:34:05 compute-1 ceph-mon[81689]: osdmap e283: 3 total, 3 up, 3 in
Dec 06 07:34:05 compute-1 ceph-mon[81689]: pgmap v2311: 305 pgs: 305 active+clean; 525 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Dec 06 07:34:05 compute-1 nova_compute[226101]: 2025-12-06 07:34:05.844 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:06 compute-1 nova_compute[226101]: 2025-12-06 07:34:06.434 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:06.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:06 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:06.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3739392517' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:08 compute-1 ceph-mon[81689]: pgmap v2312: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 5.9 MiB/s wr, 254 op/s
Dec 06 07:34:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:08.450 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.451 226109 DEBUG nova.compute.manager [req-3a1fc9a6-7fbb-4527-8f83-97a9bee290f4 req-d1354e6f-9183-4f77-ada3-4e70003dd251 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Received event network-changed-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.452 226109 DEBUG nova.compute.manager [req-3a1fc9a6-7fbb-4527-8f83-97a9bee290f4 req-d1354e6f-9183-4f77-ada3-4e70003dd251 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Refreshing instance network info cache due to event network-changed-1f39ead9-93ea-42d4-bb3c-64034b85c0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.453 226109 DEBUG oslo_concurrency.lockutils [req-3a1fc9a6-7fbb-4527-8f83-97a9bee290f4 req-d1354e6f-9183-4f77-ada3-4e70003dd251 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.454 226109 DEBUG oslo_concurrency.lockutils [req-3a1fc9a6-7fbb-4527-8f83-97a9bee290f4 req-d1354e6f-9183-4f77-ada3-4e70003dd251 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.454 226109 DEBUG nova.network.neutron [req-3a1fc9a6-7fbb-4527-8f83-97a9bee290f4 req-d1354e6f-9183-4f77-ada3-4e70003dd251 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Refreshing network info cache for port 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.510 226109 INFO nova.virt.libvirt.driver [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Snapshot image upload complete
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.511 226109 INFO nova.compute.manager [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Took 14.78 seconds to snapshot the instance on the hypervisor.
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.611 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:08.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:08 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:08.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.783 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.784 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.784 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.784 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:34:08 compute-1 nova_compute[226101]: 2025-12-06 07:34:08.784 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:09 compute-1 nova_compute[226101]: 2025-12-06 07:34:09.175 226109 DEBUG nova.compute.manager [None req-ab14cb18-9536-4502-bd62-869671682c8f a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Dec 06 07:34:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:34:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1991038349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:09 compute-1 nova_compute[226101]: 2025-12-06 07:34:09.405 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:09 compute-1 nova_compute[226101]: 2025-12-06 07:34:09.490 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:34:09 compute-1 nova_compute[226101]: 2025-12-06 07:34:09.490 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:34:09 compute-1 nova_compute[226101]: 2025-12-06 07:34:09.494 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:34:09 compute-1 nova_compute[226101]: 2025-12-06 07:34:09.494 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:34:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e284 e284: 3 total, 3 up, 3 in
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.322 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.324 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4138MB free_disk=20.855270385742188GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.324 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.324 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4141776775' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:34:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4141776775' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.554 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance a01650e7-2f06-400b-82aa-0ad2c8b84e6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.555 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 829161c5-19b5-459e-88a3-58512aaa5fc7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.555 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.555 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.620 226109 DEBUG nova.compute.manager [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.651 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.696 226109 INFO nova.compute.manager [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] instance snapshotting
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.699 226109 DEBUG nova.objects.instance [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'flavor' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:10.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:10 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:10.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.846 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.886 226109 DEBUG nova.network.neutron [req-3a1fc9a6-7fbb-4527-8f83-97a9bee290f4 req-d1354e6f-9183-4f77-ada3-4e70003dd251 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updated VIF entry in instance network info cache for port 1f39ead9-93ea-42d4-bb3c-64034b85c0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.887 226109 DEBUG nova.network.neutron [req-3a1fc9a6-7fbb-4527-8f83-97a9bee290f4 req-d1354e6f-9183-4f77-ada3-4e70003dd251 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updating instance_info_cache with network_info: [{"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.900 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.900 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.927 226109 DEBUG oslo_concurrency.lockutils [req-3a1fc9a6-7fbb-4527-8f83-97a9bee290f4 req-d1354e6f-9183-4f77-ada3-4e70003dd251 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:34:10 compute-1 nova_compute[226101]: 2025-12-06 07:34:10.935 226109 DEBUG nova.compute.manager [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:34:10 compute-1 ceph-mon[81689]: pgmap v2313: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 5.9 MiB/s wr, 302 op/s
Dec 06 07:34:10 compute-1 ceph-mon[81689]: osdmap e284: 3 total, 3 up, 3 in
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.063 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.143 226109 INFO nova.virt.libvirt.driver [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Beginning live snapshot process
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.365 226109 DEBUG nova.virt.libvirt.imagebackend [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.437 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:34:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3782614172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.627 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.976s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.632 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.664 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.700 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.701 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.701 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.708 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.709 226109 INFO nova.compute.claims [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.762 226109 DEBUG nova.storage.rbd_utils [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(21ef4f2a96a145a9ba8cfed53d29c1c7) on rbd image(a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.856 226109 DEBUG nova.scheduler.client.report [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.944 226109 DEBUG nova.scheduler.client.report [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.945 226109 DEBUG nova.compute.provider_tree [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:34:11 compute-1 nova_compute[226101]: 2025-12-06 07:34:11.979 226109 DEBUG nova.scheduler.client.report [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.009 226109 DEBUG nova.scheduler.client.report [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.075 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1991038349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:12 compute-1 ceph-mon[81689]: pgmap v2315: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 6.2 MiB/s wr, 363 op/s
Dec 06 07:34:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/30104927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1171411006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3782614172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.681 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.682 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.682 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.701 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.702 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.702 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.703 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.703 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:12.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:12 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:12.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e285 e285: 3 total, 3 up, 3 in
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.951 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.876s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.957 226109 DEBUG nova.compute.provider_tree [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:34:12 compute-1 nova_compute[226101]: 2025-12-06 07:34:12.981 226109 DEBUG nova.scheduler.client.report [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.014 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.015 226109 DEBUG nova.compute.manager [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.068 226109 DEBUG nova.compute.manager [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.068 226109 DEBUG nova.network.neutron [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.107 226109 INFO nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.137 226109 DEBUG nova.compute.manager [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.216 226109 DEBUG nova.compute.manager [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.218 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.219 226109 INFO nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Creating image(s)
Dec 06 07:34:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1849849855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1153200742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3016613992' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:13 compute-1 ceph-mon[81689]: osdmap e285: 3 total, 3 up, 3 in
Dec 06 07:34:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/853046914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.805 226109 DEBUG nova.storage.rbd_utils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.919 226109 DEBUG nova.storage.rbd_utils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.946 226109 DEBUG nova.storage.rbd_utils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.950 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:13 compute-1 nova_compute[226101]: 2025-12-06 07:34:13.979 226109 DEBUG nova.policy [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '605b5481e0c944048e6a67046c30d693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.006 226109 DEBUG nova.storage.rbd_utils [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] cloning vms/a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk@21ef4f2a96a145a9ba8cfed53d29c1c7 to images/a40e0372-ba8c-4000-a161-8b667b00eb86 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.044 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.045 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.046 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.046 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.082 226109 DEBUG nova.storage.rbd_utils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.088 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.233 226109 DEBUG nova.storage.rbd_utils [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] flattening images/a40e0372-ba8c-4000-a161-8b667b00eb86 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.697 226109 DEBUG nova.network.neutron [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Successfully created port: 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:34:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:14.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:14.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.791 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating instance_info_cache with network_info: [{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.808 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.808 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.809 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.809 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.809 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:14 compute-1 nova_compute[226101]: 2025-12-06 07:34:14.809 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:34:15 compute-1 ceph-mon[81689]: pgmap v2317: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 3.9 MiB/s wr, 274 op/s
Dec 06 07:34:15 compute-1 nova_compute[226101]: 2025-12-06 07:34:15.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:15 compute-1 nova_compute[226101]: 2025-12-06 07:34:15.719 226109 DEBUG nova.network.neutron [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Successfully updated port: 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:34:15 compute-1 nova_compute[226101]: 2025-12-06 07:34:15.825 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:34:15 compute-1 nova_compute[226101]: 2025-12-06 07:34:15.825 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:34:15 compute-1 nova_compute[226101]: 2025-12-06 07:34:15.826 226109 DEBUG nova.network.neutron [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:34:15 compute-1 nova_compute[226101]: 2025-12-06 07:34:15.849 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:16 compute-1 nova_compute[226101]: 2025-12-06 07:34:16.037 226109 DEBUG nova.network.neutron [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:34:16 compute-1 nova_compute[226101]: 2025-12-06 07:34:16.168 226109 DEBUG nova.compute.manager [req-89deaede-c00a-4e8d-bccc-52608e76e369 req-4a218770-08f1-4426-a1a9-4cadfcd732a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Received event network-changed-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:34:16 compute-1 nova_compute[226101]: 2025-12-06 07:34:16.169 226109 DEBUG nova.compute.manager [req-89deaede-c00a-4e8d-bccc-52608e76e369 req-4a218770-08f1-4426-a1a9-4cadfcd732a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Refreshing instance network info cache due to event network-changed-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:34:16 compute-1 nova_compute[226101]: 2025-12-06 07:34:16.169 226109 DEBUG oslo_concurrency.lockutils [req-89deaede-c00a-4e8d-bccc-52608e76e369 req-4a218770-08f1-4426-a1a9-4cadfcd732a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:34:16 compute-1 nova_compute[226101]: 2025-12-06 07:34:16.439 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:16.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:16.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
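The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 recur every two seconds and are load-balancer-style liveness probes against the RGW beast frontend. A comparable probe, sketched with the standard library (the endpoint port is an assumption, not taken from the log):

    # Sketch of the kind of liveness probe producing the beast access lines
    # above. Host is from the log; port 8080 is an assumption. radosgw
    # answers HEAD / with 200 without credentials, hence "anonymous".
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # expect 200 when the gateway is healthy
    conn.close()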
Dec 06 07:34:17 compute-1 nova_compute[226101]: 2025-12-06 07:34:17.575 226109 DEBUG nova.network.neutron [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updating instance_info_cache with network_info: [{"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:34:17 compute-1 nova_compute[226101]: 2025-12-06 07:34:17.595 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:34:17 compute-1 nova_compute[226101]: 2025-12-06 07:34:17.596 226109 DEBUG nova.compute.manager [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Instance network_info: |[{"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:34:17 compute-1 nova_compute[226101]: 2025-12-06 07:34:17.596 226109 DEBUG oslo_concurrency.lockutils [req-89deaede-c00a-4e8d-bccc-52608e76e369 req-4a218770-08f1-4426-a1a9-4cadfcd732a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:34:17 compute-1 nova_compute[226101]: 2025-12-06 07:34:17.596 226109 DEBUG nova.network.neutron [req-89deaede-c00a-4e8d-bccc-52608e76e369 req-4a218770-08f1-4426-a1a9-4cadfcd732a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Refreshing network info cache for port 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:34:18 compute-1 podman[274280]: 2025-12-06 07:34:18.097843242 +0000 UTC m=+0.056032455 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 06 07:34:18 compute-1 podman[274279]: 2025-12-06 07:34:18.10038398 +0000 UTC m=+0.058531142 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 06 07:34:18 compute-1 podman[274281]: 2025-12-06 07:34:18.126453205 +0000 UTC m=+0.085232884 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
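Each container health_status record above is podman running the healthcheck configured in config_data ('test': '/openstack/healthcheck') and caching the result. Reading that cached state by hand, sketched with subprocess and the container names from the log:

    # Sketch: query the health state podman records in the events above.
    # "podman healthcheck run <name>" would execute the test command itself;
    # "podman inspect" reads the cached status instead.
    import subprocess

    for name in ("ovn_metadata_agent", "multipathd", "ovn_controller"):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True,
        )
        print(name, out.stdout.strip() or out.stderr.strip())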
Dec 06 07:34:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:18.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:18.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:18 compute-1 ceph-mon[81689]: pgmap v2318: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 18 KiB/s wr, 121 op/s
Dec 06 07:34:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:19 compute-1 nova_compute[226101]: 2025-12-06 07:34:19.133 226109 DEBUG nova.network.neutron [req-89deaede-c00a-4e8d-bccc-52608e76e369 req-4a218770-08f1-4426-a1a9-4cadfcd732a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updated VIF entry in instance network info cache for port 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:34:19 compute-1 nova_compute[226101]: 2025-12-06 07:34:19.133 226109 DEBUG nova.network.neutron [req-89deaede-c00a-4e8d-bccc-52608e76e369 req-4a218770-08f1-4426-a1a9-4cadfcd732a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updating instance_info_cache with network_info: [{"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:34:19 compute-1 nova_compute[226101]: 2025-12-06 07:34:19.152 226109 DEBUG oslo_concurrency.lockutils [req-89deaede-c00a-4e8d-bccc-52608e76e369 req-4a218770-08f1-4426-a1a9-4cadfcd732a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:34:19 compute-1 nova_compute[226101]: 2025-12-06 07:34:19.411 226109 DEBUG nova.storage.rbd_utils [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] removing snapshot(21ef4f2a96a145a9ba8cfed53d29c1c7) on rbd image(a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:34:19 compute-1 nova_compute[226101]: 2025-12-06 07:34:19.454 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:19 compute-1 nova_compute[226101]: 2025-12-06 07:34:19.531 226109 DEBUG nova.storage.rbd_utils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] resizing rbd image 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
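The import-then-resize pair above is how the RBD image backend seeds the instance disk from the cached base image and grows it to the flavor's root_gb (1073741824 bytes, i.e. 1 GiB, matching m1.nano's root_gb=1). The same two steps, sketched with the rbd CLI flags shown in the log:

    # Sketch of the import + resize pair from the log, reusing its paths,
    # pool, and credentials. 1073741824 bytes == 1 GiB; rbd's --size is in
    # megabytes by default, so 1024 here.
    import subprocess

    base = "/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef"
    image = "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *auth], check=True)
    subprocess.run(["rbd", "resize", "--pool", "vms", image,
                    "--size", "1024", *auth], check=True)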
Dec 06 07:34:19 compute-1 nova_compute[226101]: 2025-12-06 07:34:19.781 226109 DEBUG nova.objects.instance [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'migration_context' on Instance uuid 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:20 compute-1 ceph-mon[81689]: pgmap v2319: 305 pgs: 305 active+clean; 613 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.3 MiB/s wr, 122 op/s
Dec 06 07:34:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/420931447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:20 compute-1 ceph-mon[81689]: pgmap v2320: 305 pgs: 305 active+clean; 613 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.8 MiB/s wr, 104 op/s
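The audit lines show client.openstack polling "df" through the mon, which is where the pool-usage numbers summarized in the pgmap lines come from. The same query from a script, sketched with the ceph CLI and the credentials seen in the log (the JSON key names reflect ceph df's usual layout and are an assumption):

    # Sketch of the "df" query the audit log attributes to client.openstack.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    )
    stats = json.loads(out.stdout)
    # Cluster-wide totals, comparable to the pgmap "used / avail" summary.
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])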
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.077 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.077 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Ensure instance console log exists: /var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.078 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.078 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.078 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.080 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Start _get_guest_xml network_info=[{"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.084 226109 WARNING nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.088 226109 DEBUG nova.virt.libvirt.host [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.089 226109 DEBUG nova.virt.libvirt.host [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.092 226109 DEBUG nova.virt.libvirt.host [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.092 226109 DEBUG nova.virt.libvirt.host [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.093 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.093 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.094 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.094 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.094 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.095 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.095 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.095 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.095 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.096 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.096 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.096 226109 DEBUG nova.virt.hardware [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
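With no flavor or image constraints (all limits and preferences 0:0:0), the hardware.py lines above fall back to enumerating every topology whose dimensions multiply out to the vCPU count; for 1 vCPU that leaves exactly sockets=1,cores=1,threads=1. A toy reimplementation of that enumeration (not Nova's code, which also applies preference sorting):

    # Toy version of the topology enumeration the debug lines describe:
    # every (sockets, cores, threads) whose product equals the vCPU count,
    # capped by the 65536 per-dimension limit from the log.
    import itertools

    def possible_topologies(vcpus, limit=65536):
        for s, c, t in itertools.product(
                range(1, min(vcpus, limit) + 1), repeat=3):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as in the log
    print(list(possible_topologies(4))[:3])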
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.099 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e286 e286: 3 total, 3 up, 3 in
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.263 226109 DEBUG nova.storage.rbd_utils [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(snap) on rbd image(a40e0372-ba8c-4000-a161-8b667b00eb86) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:34:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:34:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/717285692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.606 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
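These "ceph mon dump --format=json" calls are what supply the monitor addresses that later appear as <host name=... port="6789"/> elements in the guest XML's RBD disk sources. Extracting those addresses, sketched against the mon dump JSON layout (the "mons"/"public_addr" keys are assumptions about that layout):

    # Sketch: derive the mon host list that ends up in the libvirt disk XML.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    )
    dump = json.loads(out.stdout)
    for mon in dump["mons"]:
        # public_addr is typically "ip:port/nonce"; keep the ip:port part.
        print(mon["public_addr"].split("/")[0])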
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.635 226109 DEBUG nova.storage.rbd_utils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.640 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:20.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:20.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:20 compute-1 nova_compute[226101]: 2025-12-06 07:34:20.851 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:34:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1044629662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:21 compute-1 ceph-mon[81689]: osdmap e286: 3 total, 3 up, 3 in
Dec 06 07:34:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/717285692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1013349408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1044629662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.201 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.203 226109 DEBUG nova.virt.libvirt.vif [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:34:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=125,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-rhw08b3b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:34:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=0c3768cf-031f-4b14-a8b4-9ff73a9cfa72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.204 226109 DEBUG nova.network.os_vif_util [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.205 226109 DEBUG nova.network.os_vif_util [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:88:de,bridge_name='br-int',has_traffic_filtering=True,id=588bab74-7ca6-4e9a-a06b-0b7fbcf29c63,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap588bab74-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.206 226109 DEBUG nova.objects.instance [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e287 e287: 3 total, 3 up, 3 in
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.227 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:34:21 compute-1 nova_compute[226101]:   <uuid>0c3768cf-031f-4b14-a8b4-9ff73a9cfa72</uuid>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   <name>instance-0000007d</name>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <nova:name>multiattach-server-1</nova:name>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:34:20</nova:creationTime>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <nova:user uuid="605b5481e0c944048e6a67046c30d693">tempest-AttachVolumeMultiAttachTest-690984293-project-member</nova:user>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <nova:project uuid="833f4cf9f5a64b2ab94c3bf330353a31">tempest-AttachVolumeMultiAttachTest-690984293</nova:project>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <nova:port uuid="588bab74-7ca6-4e9a-a06b-0b7fbcf29c63">
Dec 06 07:34:21 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <system>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:34:21 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <entry name="serial">0c3768cf-031f-4b14-a8b4-9ff73a9cfa72</entry>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <entry name="uuid">0c3768cf-031f-4b14-a8b4-9ff73a9cfa72</entry>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     </system>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   <os>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   </os>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   <features>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   </features>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk">
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       </source>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk.config">
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       </source>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:34:21 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:7f:88:de"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <target dev="tap588bab74-7c"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72/console.log" append="off"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <video>
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     </video>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:34:21 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:34:21 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:34:21 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:34:21 compute-1 nova_compute[226101]: </domain>
Dec 06 07:34:21 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
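Once _get_guest_xml returns, the <domain> definition above is handed to libvirt to define and launch the guest. Pulling the RBD-backed disks back out of XML like this, sketched with the standard library (xml_text is assumed to hold the domain string):

    # Sketch: extract the rbd disks from a domain XML like the one above.
    import xml.etree.ElementTree as ET

    def rbd_disks(xml_text):
        root = ET.fromstring(xml_text)
        for disk in root.findall("./devices/disk"):
            src = disk.find("source")
            if src is not None and src.get("protocol") == "rbd":
                hosts = [(h.get("name"), h.get("port"))
                         for h in src.findall("host")]
                yield src.get("name"), hosts

    # Yields e.g. ("vms/0c3768cf-..._disk", [("192.168.122.100", "6789"), ...])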
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.228 226109 DEBUG nova.compute.manager [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Preparing to wait for external event network-vif-plugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.229 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.229 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.229 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.230 226109 DEBUG nova.virt.libvirt.vif [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:34:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=125,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-rhw08b3b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:34:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=0c3768cf-031f-4b14-a8b4-9ff73a9cfa72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.230 226109 DEBUG nova.network.os_vif_util [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.231 226109 DEBUG nova.network.os_vif_util [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:88:de,bridge_name='br-int',has_traffic_filtering=True,id=588bab74-7ca6-4e9a-a06b-0b7fbcf29c63,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap588bab74-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.231 226109 DEBUG os_vif [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:88:de,bridge_name='br-int',has_traffic_filtering=True,id=588bab74-7ca6-4e9a-a06b-0b7fbcf29c63,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap588bab74-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.232 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.232 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.233 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.236 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.236 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap588bab74-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.237 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap588bab74-7c, col_values=(('external_ids', {'iface-id': '588bab74-7ca6-4e9a-a06b-0b7fbcf29c63', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:88:de', 'vm-uuid': '0c3768cf-031f-4b14-a8b4-9ff73a9cfa72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
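The AddBridgeCommand, AddPortCommand, and DbSetCommand entries above can be reproduced with the public ovsdbapp API; a sketch, compressed into a single transaction for brevity (the txn n=1 markers above show os-vif actually issuing them in separate transactions), with the OVSDB socket path being an assumption:

    # Sketch of the ovsdbapp calls behind the logged commands; the
    # db.sock path is an assumption for illustration.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap588bab74-7c', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap588bab74-7c',
            ('external_ids',
             {'iface-id': '588bab74-7ca6-4e9a-a06b-0b7fbcf29c63'})))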
Dec 06 07:34:21 compute-1 NetworkManager[49031]: <info>  [1765006461.2395] manager: (tap588bab74-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.238 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.241 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.244 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.246 226109 INFO os_vif [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:88:de,bridge_name='br-int',has_traffic_filtering=True,id=588bab74-7ca6-4e9a-a06b-0b7fbcf29c63,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap588bab74-7c')
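The "Successfully plugged vif" message is emitted by os-vif's public plug() entry point (os_vif/__init__.py above); roughly, the driver's call looks like this sketch, with the VIF and instance objects abbreviated to a few of the fields shown in the log:

    # Sketch of the os-vif entry point logged above; only some of the
    # VIFOpenVSwitch fields from the log line are reproduced here.
    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # registers the os-vif plugins (ovs, linux_bridge, ...)
    my_vif = vif.VIFOpenVSwitch(
        id='588bab74-7ca6-4e9a-a06b-0b7fbcf29c63',
        address='fa:16:3e:7f:88:de',
        plugin='ovs',
        bridge_name='br-int',
        vif_name='tap588bab74-7c')
    info = instance_info.InstanceInfo(
        uuid='0c3768cf-031f-4b14-a8b4-9ff73a9cfa72',
        name='multiattach-server-1')
    os_vif.plug(my_vif, info)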
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.312 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.313 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.313 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:7f:88:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.314 226109 INFO nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Using config drive
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.349 226109 DEBUG nova.storage.rbd_utils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.442 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.984 226109 INFO nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Creating config drive at /var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72/disk.config
Dec 06 07:34:21 compute-1 nova_compute[226101]: 2025-12-06 07:34:21.992 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ir7ub5i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:22 compute-1 ovn_controller[130279]: 2025-12-06T07:34:22Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:39:62 10.100.0.4
Dec 06 07:34:22 compute-1 ovn_controller[130279]: 2025-12-06T07:34:22Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:39:62 10.100.0.4
Dec 06 07:34:22 compute-1 nova_compute[226101]: 2025-12-06 07:34:22.132 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ir7ub5i" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
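The config drive is built by shelling out to mkisofs via oslo.concurrency, exactly as the CMD line above records (rc 0 in 0.140s). The equivalent direct call; '/tmp/metadata-dir' stands in for the tmp1ir7ub5i staging directory Nova created:

    # Equivalent of the logged mkisofs invocation; the staging-directory
    # path is a stand-in for Nova's temporary directory.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/metadata-dir')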
Dec 06 07:34:22 compute-1 nova_compute[226101]: 2025-12-06 07:34:22.164 226109 DEBUG nova.storage.rbd_utils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] rbd image 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:22 compute-1 nova_compute[226101]: 2025-12-06 07:34:22.168 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72/disk.config 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:22 compute-1 nova_compute[226101]: 2025-12-06 07:34:22.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:22.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:22.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:22 compute-1 nova_compute[226101]: 2025-12-06 07:34:22.923 226109 DEBUG oslo_concurrency.processutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72/disk.config 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.755s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:22 compute-1 nova_compute[226101]: 2025-12-06 07:34:22.924 226109 INFO nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Deleting local config drive /var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72/disk.config because it was imported into RBD.
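Because the image backend is RBD, the freshly built ISO is pushed into the vms pool and the local copy deleted; the import is another subprocess call (rc 0 in 0.755s above), with every argument taken from the log line:

    # Equivalent of the logged "rbd import" call.
    from oslo_concurrency import processutils

    processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72/disk.config',
        '0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')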
Dec 06 07:34:22 compute-1 kernel: tap588bab74-7c: entered promiscuous mode
Dec 06 07:34:22 compute-1 NetworkManager[49031]: <info>  [1765006462.9712] manager: (tap588bab74-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Dec 06 07:34:22 compute-1 ovn_controller[130279]: 2025-12-06T07:34:22Z|00500|binding|INFO|Claiming lport 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 for this chassis.
Dec 06 07:34:22 compute-1 ovn_controller[130279]: 2025-12-06T07:34:22Z|00501|binding|INFO|588bab74-7ca6-4e9a-a06b-0b7fbcf29c63: Claiming fa:16:3e:7f:88:de 10.100.0.12
Dec 06 07:34:22 compute-1 nova_compute[226101]: 2025-12-06 07:34:22.973 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:22.980 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:88:de 10.100.0.12'], port_security=['fa:16:3e:7f:88:de 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0c3768cf-031f-4b14-a8b4-9ff73a9cfa72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=588bab74-7ca6-4e9a-a06b-0b7fbcf29c63) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:34:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:22.982 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 bound to our chassis
Dec 06 07:34:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:22.983 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272
Dec 06 07:34:22 compute-1 ovn_controller[130279]: 2025-12-06T07:34:22Z|00502|binding|INFO|Setting lport 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 ovn-installed in OVS
Dec 06 07:34:22 compute-1 ovn_controller[130279]: 2025-12-06T07:34:22Z|00503|binding|INFO|Setting lport 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 up in Southbound
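Once ovn-controller claims the lport and marks it up in the Southbound DB, the binding can be confirmed from the SB Port_Binding table; a sketch, assuming ovn-sbctl is on PATH, can reach the Southbound DB, and accepts the logical-port name as the record key:

    # Sketch: confirm the chassis/up transition ovn-controller just logged.
    import subprocess

    subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'list', 'Port_Binding',
         '588bab74-7ca6-4e9a-a06b-0b7fbcf29c63'],
        check=True)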
Dec 06 07:34:22 compute-1 nova_compute[226101]: 2025-12-06 07:34:22.991 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:22 compute-1 nova_compute[226101]: 2025-12-06 07:34:22.994 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:23.001 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4c80ce-e0a8-4168-a4cb-8be5b27d7827]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:23 compute-1 systemd-machined[190302]: New machine qemu-59-instance-0000007d.
Dec 06 07:34:23 compute-1 systemd[1]: Started Virtual Machine qemu-59-instance-0000007d.
Dec 06 07:34:23 compute-1 systemd-udevd[274592]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:34:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:23.030 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e8982a-3ff2-43be-b25f-9581fa48537c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:23.033 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2a2358-588e-4edd-a9ec-c2004536a8b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:23 compute-1 NetworkManager[49031]: <info>  [1765006463.0404] device (tap588bab74-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:34:23 compute-1 NetworkManager[49031]: <info>  [1765006463.0413] device (tap588bab74-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:34:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:23.060 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[74bc061e-b4e2-4aa4-949e-93b0a2644e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:23.078 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e2297499-bab8-4a75-8496-9b8e54ccdf91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672206, 'reachable_time': 41951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274602, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:23.094 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4bc62a-6348-467f-95b9-c10f557ec4ea]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672218, 'tstamp': 672218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274603, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672221, 'tstamp': 672221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274603, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
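The RTM_NEWLINK/RTM_NEWADDR replies above are netlink dumps taken inside the ovnmeta-5bb49e8a-... namespace, showing the metadata interface holding 10.100.0.2/28 plus the 169.254.169.254/32 metadata address. A pyroute2 sketch of the same query, assuming pyroute2 is available and using the namespace name from the log:

    # Sketch of the address dump the privsep daemon returned above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272') as ns:
        for msg in ns.get_addr():
            print(msg.get_attr('IFA_ADDRESS'))  # 10.100.0.2, 169.254.169.254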
Dec 06 07:34:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:23.095 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:23 compute-1 nova_compute[226101]: 2025-12-06 07:34:23.098 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:23.098 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:23.098 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:34:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:23.099 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:34:23.099 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:34:23 compute-1 ceph-mon[81689]: pgmap v2322: 305 pgs: 305 active+clean; 695 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 9.7 MiB/s wr, 181 op/s
Dec 06 07:34:23 compute-1 ceph-mon[81689]: osdmap e287: 3 total, 3 up, 3 in
Dec 06 07:34:23 compute-1 nova_compute[226101]: 2025-12-06 07:34:23.277 226109 INFO nova.virt.libvirt.driver [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Snapshot image upload complete
Dec 06 07:34:23 compute-1 nova_compute[226101]: 2025-12-06 07:34:23.277 226109 INFO nova.compute.manager [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Took 12.52 seconds to snapshot the instance on the hypervisor.
Dec 06 07:34:23 compute-1 nova_compute[226101]: 2025-12-06 07:34:23.675 226109 DEBUG nova.compute.manager [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Dec 06 07:34:23 compute-1 nova_compute[226101]: 2025-12-06 07:34:23.676 226109 DEBUG nova.compute.manager [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Dec 06 07:34:23 compute-1 nova_compute[226101]: 2025-12-06 07:34:23.676 226109 DEBUG nova.compute.manager [None req-28c68685-51b3-4dfb-8b79-e32b43f02118 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Deleting image 24540ea9-2655-4ff6-a852-08458e75db90 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Dec 06 07:34:24 compute-1 ceph-mon[81689]: pgmap v2324: 305 pgs: 305 active+clean; 695 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 9.9 MiB/s wr, 173 op/s
Dec 06 07:34:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3848048345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.377 226109 DEBUG nova.compute.manager [req-6cf2f676-40eb-4d42-add5-9ca618384dd3 req-61b35268-d6e6-4630-8efa-c16f13feb853 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Received event network-vif-plugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.377 226109 DEBUG oslo_concurrency.lockutils [req-6cf2f676-40eb-4d42-add5-9ca618384dd3 req-61b35268-d6e6-4630-8efa-c16f13feb853 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.378 226109 DEBUG oslo_concurrency.lockutils [req-6cf2f676-40eb-4d42-add5-9ca618384dd3 req-61b35268-d6e6-4630-8efa-c16f13feb853 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.378 226109 DEBUG oslo_concurrency.lockutils [req-6cf2f676-40eb-4d42-add5-9ca618384dd3 req-61b35268-d6e6-4630-8efa-c16f13feb853 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.379 226109 DEBUG nova.compute.manager [req-6cf2f676-40eb-4d42-add5-9ca618384dd3 req-61b35268-d6e6-4630-8efa-c16f13feb853 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Processing event network-vif-plugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.379 226109 DEBUG nova.compute.manager [req-6cf2f676-40eb-4d42-add5-9ca618384dd3 req-61b35268-d6e6-4630-8efa-c16f13feb853 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Received event network-vif-plugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.379 226109 DEBUG oslo_concurrency.lockutils [req-6cf2f676-40eb-4d42-add5-9ca618384dd3 req-61b35268-d6e6-4630-8efa-c16f13feb853 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.379 226109 DEBUG oslo_concurrency.lockutils [req-6cf2f676-40eb-4d42-add5-9ca618384dd3 req-61b35268-d6e6-4630-8efa-c16f13feb853 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.379 226109 DEBUG oslo_concurrency.lockutils [req-6cf2f676-40eb-4d42-add5-9ca618384dd3 req-61b35268-d6e6-4630-8efa-c16f13feb853 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.379 226109 DEBUG nova.compute.manager [req-6cf2f676-40eb-4d42-add5-9ca618384dd3 req-61b35268-d6e6-4630-8efa-c16f13feb853 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] No waiting events found dispatching network-vif-plugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.380 226109 WARNING nova.compute.manager [req-6cf2f676-40eb-4d42-add5-9ca618384dd3 req-61b35268-d6e6-4630-8efa-c16f13feb853 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Received unexpected event network-vif-plugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 for instance with vm_state building and task_state spawning.
Dec 06 07:34:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.562 226109 DEBUG nova.compute.manager [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
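This "wait completed in 0 seconds" closes the loop opened at 07:34:21: the spawn path registered interest in network-vif-plugged before plugging, and the external event from Neutron satisfied it immediately. A simplified model of that handshake (illustrative stdlib code, not Nova's actual classes):

    # Simplified model of prepare_for_instance_event / pop_instance_event.
    import threading

    events = {}

    def prepare(tag):
        events[tag] = threading.Event()
        return events[tag]

    def pop(tag):
        evt = events.pop(tag, None)
        if evt:
            evt.set()  # a waiter existed: wake it
        # else: "No waiting events found" -> logged as an unexpected event

    waiter = prepare('network-vif-plugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63')
    pop('network-vif-plugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63')
    waiter.wait(timeout=300)  # nova's vif_plugging_timeout defaults to 300s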
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.564 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006464.5635722, 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.564 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] VM Started (Lifecycle Event)
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.567 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.570 226109 INFO nova.virt.libvirt.driver [-] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Instance spawned successfully.
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.570 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.584 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.587 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.594 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.594 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.595 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.595 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.595 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.596 226109 DEBUG nova.virt.libvirt.driver [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.609 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.609 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006464.5668204, 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.609 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] VM Paused (Lifecycle Event)
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.654 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.658 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006464.567052, 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.659 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] VM Resumed (Lifecycle Event)
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.691 226109 INFO nova.compute.manager [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Took 11.47 seconds to spawn the instance on the hypervisor.
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.692 226109 DEBUG nova.compute.manager [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.693 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.701 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.731 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] During sync_power_state the instance has a pending task (spawning). Skip.
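The numeric states in these sync messages ("DB power_state: 0, VM power_state: 1") are nova.compute.power_state constants; the values relevant to this sequence:

    # nova.compute.power_state constants seen in the sync messages above.
    from nova.compute import power_state

    assert power_state.NOSTATE == 0   # DB value before the first sync
    assert power_state.RUNNING == 1   # what libvirt reports once the guest runs
    assert power_state.PAUSED == 3
    assert power_state.SHUTDOWN == 4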
Dec 06 07:34:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:24.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:24.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.759 226109 INFO nova.compute.manager [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Took 13.73 seconds to build instance.
Dec 06 07:34:24 compute-1 nova_compute[226101]: 2025-12-06 07:34:24.777 226109 DEBUG oslo_concurrency.lockutils [None req-1036cd74-43bd-4b9c-9bf8-eb17fec05a50 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:25 compute-1 ceph-mon[81689]: pgmap v2325: 305 pgs: 305 active+clean; 705 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 7.5 MiB/s wr, 187 op/s
Dec 06 07:34:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e288 e288: 3 total, 3 up, 3 in
Dec 06 07:34:26 compute-1 nova_compute[226101]: 2025-12-06 07:34:26.239 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:26 compute-1 nova_compute[226101]: 2025-12-06 07:34:26.443 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:26.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:26.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:26 compute-1 ceph-mon[81689]: osdmap e288: 3 total, 3 up, 3 in
Dec 06 07:34:27 compute-1 nova_compute[226101]: 2025-12-06 07:34:27.416 226109 DEBUG nova.compute.manager [req-13bea287-157e-404a-b688-152204156878 req-caa5048b-6f11-41c2-bb13-bfa897cbb563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Received event network-changed-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:34:27 compute-1 nova_compute[226101]: 2025-12-06 07:34:27.417 226109 DEBUG nova.compute.manager [req-13bea287-157e-404a-b688-152204156878 req-caa5048b-6f11-41c2-bb13-bfa897cbb563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Refreshing instance network info cache due to event network-changed-1f39ead9-93ea-42d4-bb3c-64034b85c0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:34:27 compute-1 nova_compute[226101]: 2025-12-06 07:34:27.417 226109 DEBUG oslo_concurrency.lockutils [req-13bea287-157e-404a-b688-152204156878 req-caa5048b-6f11-41c2-bb13-bfa897cbb563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:34:27 compute-1 nova_compute[226101]: 2025-12-06 07:34:27.417 226109 DEBUG oslo_concurrency.lockutils [req-13bea287-157e-404a-b688-152204156878 req-caa5048b-6f11-41c2-bb13-bfa897cbb563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:34:27 compute-1 nova_compute[226101]: 2025-12-06 07:34:27.418 226109 DEBUG nova.network.neutron [req-13bea287-157e-404a-b688-152204156878 req-caa5048b-6f11-41c2-bb13-bfa897cbb563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Refreshing network info cache for port 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:34:27 compute-1 ovn_controller[130279]: 2025-12-06T07:34:27Z|00504|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec 06 07:34:27 compute-1 ovn_controller[130279]: 2025-12-06T07:34:27Z|00505|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:34:27 compute-1 nova_compute[226101]: 2025-12-06 07:34:27.897 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:27 compute-1 ceph-mon[81689]: pgmap v2327: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 864 KiB/s rd, 3.9 MiB/s wr, 271 op/s
Dec 06 07:34:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:28.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:28.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e289 e289: 3 total, 3 up, 3 in
Dec 06 07:34:28 compute-1 nova_compute[226101]: 2025-12-06 07:34:28.989 226109 DEBUG nova.network.neutron [req-13bea287-157e-404a-b688-152204156878 req-caa5048b-6f11-41c2-bb13-bfa897cbb563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updated VIF entry in instance network info cache for port 1f39ead9-93ea-42d4-bb3c-64034b85c0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:34:28 compute-1 nova_compute[226101]: 2025-12-06 07:34:28.991 226109 DEBUG nova.network.neutron [req-13bea287-157e-404a-b688-152204156878 req-caa5048b-6f11-41c2-bb13-bfa897cbb563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updating instance_info_cache with network_info: [{"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:34:29 compute-1 nova_compute[226101]: 2025-12-06 07:34:29.011 226109 DEBUG oslo_concurrency.lockutils [req-13bea287-157e-404a-b688-152204156878 req-caa5048b-6f11-41c2-bb13-bfa897cbb563 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:34:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e290 e290: 3 total, 3 up, 3 in
Dec 06 07:34:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:29 compute-1 nova_compute[226101]: 2025-12-06 07:34:29.562 226109 DEBUG nova.compute.manager [req-85da9e16-e4e2-42ac-b989-3b393ed0ea75 req-d7f8e8f2-83a5-4cb9-b9c7-601cab53d543 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Received event network-changed-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:34:29 compute-1 nova_compute[226101]: 2025-12-06 07:34:29.563 226109 DEBUG nova.compute.manager [req-85da9e16-e4e2-42ac-b989-3b393ed0ea75 req-d7f8e8f2-83a5-4cb9-b9c7-601cab53d543 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Refreshing instance network info cache due to event network-changed-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:34:29 compute-1 nova_compute[226101]: 2025-12-06 07:34:29.563 226109 DEBUG oslo_concurrency.lockutils [req-85da9e16-e4e2-42ac-b989-3b393ed0ea75 req-d7f8e8f2-83a5-4cb9-b9c7-601cab53d543 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:34:29 compute-1 nova_compute[226101]: 2025-12-06 07:34:29.564 226109 DEBUG oslo_concurrency.lockutils [req-85da9e16-e4e2-42ac-b989-3b393ed0ea75 req-d7f8e8f2-83a5-4cb9-b9c7-601cab53d543 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:34:29 compute-1 nova_compute[226101]: 2025-12-06 07:34:29.564 226109 DEBUG nova.network.neutron [req-85da9e16-e4e2-42ac-b989-3b393ed0ea75 req-d7f8e8f2-83a5-4cb9-b9c7-601cab53d543 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Refreshing network info cache for port 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:34:29 compute-1 ceph-mon[81689]: pgmap v2328: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 1.8 MiB/s wr, 180 op/s
Dec 06 07:34:29 compute-1 ceph-mon[81689]: osdmap e289: 3 total, 3 up, 3 in
Dec 06 07:34:29 compute-1 ceph-mon[81689]: osdmap e290: 3 total, 3 up, 3 in
Dec 06 07:34:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e291 e291: 3 total, 3 up, 3 in
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.284 226109 DEBUG oslo_concurrency.lockutils [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "829161c5-19b5-459e-88a3-58512aaa5fc7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.285 226109 DEBUG oslo_concurrency.lockutils [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.300 226109 DEBUG nova.objects.instance [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid 829161c5-19b5-459e-88a3-58512aaa5fc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.335 226109 DEBUG oslo_concurrency.lockutils [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.601 226109 DEBUG oslo_concurrency.lockutils [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "829161c5-19b5-459e-88a3-58512aaa5fc7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.602 226109 DEBUG oslo_concurrency.lockutils [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.602 226109 INFO nova.compute.manager [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Attaching volume 783c2bdc-ead3-4bc6-bfc6-f7f7388312c6 to /dev/vdb
Dec 06 07:34:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:30.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:30.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
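[annotation] The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, arriving roughly every two seconds, are typical load-balancer liveness probes against the radosgw beast frontend. An equivalent probe for comparison; the URL is an assumption, since the beast log records client IPs but not the port the frontend listens on:

    import requests

    # Host/port are assumptions, not taken from the log.
    resp = requests.head('http://192.168.122.101:8080/', timeout=2)
    print(resp.status_code)  # the probes above all returned 200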
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.852 226109 DEBUG os_brick.utils [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.854 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.888 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.888 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb089e5-f1ea-42a0-a1d3-25a92a132b0a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.891 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.900 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.900 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[50760db6-dc9d-450b-b881-efb02f90f87a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.902 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.911 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.911 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[1caa0951-1a75-4795-954c-13f2e32c5aa2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.913 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[e4213366-d500-4bcf-b107-1447322edf2c]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.914 226109 DEBUG oslo_concurrency.processutils [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.946 226109 DEBUG oslo_concurrency.processutils [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.950 226109 DEBUG os_brick.initiator.connectors.lightos [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.950 226109 DEBUG os_brick.initiator.connectors.lightos [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.951 226109 DEBUG os_brick.initiator.connectors.lightos [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.951 226109 DEBUG os_brick.utils [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
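[annotation] The ==> / <== trace lines bracket os-brick's connector-property scan: multipathd status, the iSCSI initiator name, the root filesystem source via findmnt, the nvme CLI version, and the LightOS discovery client (not running here, hence ECONNREFUSED, which os-brick tolerates and logs as "continuing anyway"). The same dictionary can be produced directly with os-brick's public helper; the arguments below mirror the traced call and must run on the compute host itself:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.101',
        multipath=True,
        enforce_multipath=True,
        host='compute-1.ctlplane.example.com')
    # Same keys as the return dict logged above: 'initiator', 'nqn', ...
    print(props['initiator'], props['nqn'])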
Dec 06 07:34:30 compute-1 nova_compute[226101]: 2025-12-06 07:34:30.952 226109 DEBUG nova.virt.block_device [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updating existing volume attachment record: c773479c-7f2e-48d2-b912-dd798ce25bc8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:34:31 compute-1 nova_compute[226101]: 2025-12-06 07:34:31.242 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:31 compute-1 nova_compute[226101]: 2025-12-06 07:34:31.449 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:31 compute-1 nova_compute[226101]: 2025-12-06 07:34:31.815 226109 DEBUG nova.network.neutron [req-85da9e16-e4e2-42ac-b989-3b393ed0ea75 req-d7f8e8f2-83a5-4cb9-b9c7-601cab53d543 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updated VIF entry in instance network info cache for port 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:34:31 compute-1 nova_compute[226101]: 2025-12-06 07:34:31.816 226109 DEBUG nova.network.neutron [req-85da9e16-e4e2-42ac-b989-3b393ed0ea75 req-d7f8e8f2-83a5-4cb9-b9c7-601cab53d543 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updating instance_info_cache with network_info: [{"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
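[annotation] The cache entry above is a JSON list of VIFs; the fixed IP sits under network.subnets[].ips[] and any floating IPs are nested beneath the fixed IP they map to. A short sketch pulling both out of such a structure; cached_blob below is a trimmed, valid subset of the JSON logged above:

    import json

    cached_blob = ('[{"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", '
                   '"network": {"subnets": [{"ips": [{"address": "10.100.0.12", '
                   '"floating_ips": [{"address": "192.168.122.187"}]}]}]}}]')
    for vif in json.loads(cached_blob):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats)
    # Prints: 588bab74-... 10.100.0.12 ['192.168.122.187'], as in the entry above.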
Dec 06 07:34:31 compute-1 nova_compute[226101]: 2025-12-06 07:34:31.824 226109 DEBUG nova.objects.instance [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid 829161c5-19b5-459e-88a3-58512aaa5fc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:31 compute-1 nova_compute[226101]: 2025-12-06 07:34:31.844 226109 DEBUG nova.virt.libvirt.driver [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Attempting to attach volume 783c2bdc-ead3-4bc6-bfc6-f7f7388312c6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:34:31 compute-1 nova_compute[226101]: 2025-12-06 07:34:31.846 226109 DEBUG nova.virt.libvirt.guest [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:34:31 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:34:31 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-783c2bdc-ead3-4bc6-bfc6-f7f7388312c6">
Dec 06 07:34:31 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:31 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:31 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:31 compute-1 nova_compute[226101]:   </source>
Dec 06 07:34:31 compute-1 nova_compute[226101]:   <auth username="openstack">
Dec 06 07:34:31 compute-1 nova_compute[226101]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:34:31 compute-1 nova_compute[226101]:   </auth>
Dec 06 07:34:31 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:34:31 compute-1 nova_compute[226101]:   <serial>783c2bdc-ead3-4bc6-bfc6-f7f7388312c6</serial>
Dec 06 07:34:31 compute-1 nova_compute[226101]:   <shareable/>
Dec 06 07:34:31 compute-1 nova_compute[226101]: </disk>
Dec 06 07:34:31 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
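[annotation] attach_device ultimately hands the <disk> XML above to libvirt. A minimal equivalent with libvirt-python, assuming the XML has been saved to disk.xml; nova applies the change to both the running guest and its persistent definition, which is why the two flags are OR-ed together:

    import libvirt

    disk_xml = open('disk.xml').read()  # the <disk> element logged above
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('829161c5-19b5-459e-88a3-58512aaa5fc7')
    dom.attachDeviceFlags(disk_xml,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)

Note the <shareable/> element in the XML: this volume is being attached in multiattach mode, which matters again at detach time further down.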
Dec 06 07:34:32 compute-1 nova_compute[226101]: 2025-12-06 07:34:32.048 226109 DEBUG oslo_concurrency.lockutils [req-85da9e16-e4e2-42ac-b989-3b393ed0ea75 req-d7f8e8f2-83a5-4cb9-b9c7-601cab53d543 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:34:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:32.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:32.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:33 compute-1 ceph-mon[81689]: osdmap e291: 3 total, 3 up, 3 in
Dec 06 07:34:33 compute-1 ceph-mon[81689]: pgmap v2332: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 37 KiB/s wr, 257 op/s
Dec 06 07:34:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:34.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:34.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:36 compute-1 nova_compute[226101]: 2025-12-06 07:34:36.276 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:36 compute-1 nova_compute[226101]: 2025-12-06 07:34:36.449 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:36.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:36.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:38 compute-1 nova_compute[226101]: 2025-12-06 07:34:38.041 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:38 compute-1 nova_compute[226101]: 2025-12-06 07:34:38.181 226109 DEBUG nova.virt.libvirt.driver [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:38 compute-1 nova_compute[226101]: 2025-12-06 07:34:38.182 226109 DEBUG nova.virt.libvirt.driver [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:38 compute-1 nova_compute[226101]: 2025-12-06 07:34:38.182 226109 DEBUG nova.virt.libvirt.driver [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:38 compute-1 nova_compute[226101]: 2025-12-06 07:34:38.183 226109 DEBUG nova.virt.libvirt.driver [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:d3:39:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:34:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:38 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.124450684s, txc = 0x55b553367b00
Dec 06 07:34:38 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 5.124416351s
Dec 06 07:34:38 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.124416351s
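[annotation] The three ceph-osd lines report a ~5.12 s stall in BlueStore's kv_sync/kv_commit path (the RocksDB transaction commit), which on a small lab cluster like this usually points at slow or contended backing media. One way to surface such stalls is to scan the journal for log_latency lines; a hedged sketch that reads journal text on stdin (e.g. `journalctl -u ceph-osd@1 | python slow_ops.py`, unit name assumed):

    import re
    import sys

    # Matches ceph-osd lines like the three above.
    pat = re.compile(r'log_latency(?:_fn)? slow operation observed for '
                     r'(\S+), latency = ([\d.]+)s')
    for line in sys.stdin:
        m = pat.search(line)
        if m and float(m.group(2)) > 1.0:
            print(m.group(1), m.group(2))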
Dec 06 07:34:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1262995560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:38 compute-1 ceph-mon[81689]: pgmap v2333: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 30 KiB/s wr, 208 op/s
Dec 06 07:34:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:38.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:38 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:38.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:39 compute-1 nova_compute[226101]: 2025-12-06 07:34:39.585 226109 DEBUG oslo_concurrency.lockutils [None req-d490e099-bcb3-4385-b1eb-cef95cd31881 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 8.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:39 compute-1 nova_compute[226101]: 2025-12-06 07:34:39.884 226109 DEBUG oslo_concurrency.lockutils [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:39 compute-1 nova_compute[226101]: 2025-12-06 07:34:39.885 226109 DEBUG oslo_concurrency.lockutils [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:39 compute-1 nova_compute[226101]: 2025-12-06 07:34:39.949 226109 DEBUG nova.objects.instance [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:40 compute-1 nova_compute[226101]: 2025-12-06 07:34:40.233 226109 DEBUG oslo_concurrency.lockutils [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:40.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:40.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:40 compute-1 nova_compute[226101]: 2025-12-06 07:34:40.887 226109 DEBUG oslo_concurrency.lockutils [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:40 compute-1 nova_compute[226101]: 2025-12-06 07:34:40.888 226109 DEBUG oslo_concurrency.lockutils [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:40 compute-1 nova_compute[226101]: 2025-12-06 07:34:40.888 226109 INFO nova.compute.manager [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Attaching volume 783c2bdc-ead3-4bc6-bfc6-f7f7388312c6 to /dev/vdb
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.017 226109 DEBUG os_brick.utils [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.019 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.035 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.036 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[562676d9-165d-431e-9789-db95c7cadce4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.037 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.047 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.048 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab657fa-660e-41de-8602-665269e7cf23]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.050 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.060 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.060 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[083529e8-bd54-45b8-a12e-cf4046525084]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.062 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe0462c-8265-4d7d-b306-ad04a555814f]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.063 226109 DEBUG oslo_concurrency.processutils [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.096 226109 DEBUG oslo_concurrency.processutils [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.098 226109 DEBUG os_brick.initiator.connectors.lightos [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.099 226109 DEBUG os_brick.initiator.connectors.lightos [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.099 226109 DEBUG os_brick.initiator.connectors.lightos [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.099 226109 DEBUG os_brick.utils [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.099 226109 DEBUG nova.virt.block_device [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updating existing volume attachment record: 7374f7da-58b8-4b31-ba66-95645456f3e3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.282 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:41 compute-1 nova_compute[226101]: 2025-12-06 07:34:41.451 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:42 compute-1 nova_compute[226101]: 2025-12-06 07:34:42.424 226109 DEBUG nova.objects.instance [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:42 compute-1 nova_compute[226101]: 2025-12-06 07:34:42.464 226109 DEBUG nova.virt.libvirt.driver [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Attempting to attach volume 783c2bdc-ead3-4bc6-bfc6-f7f7388312c6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:34:42 compute-1 nova_compute[226101]: 2025-12-06 07:34:42.466 226109 DEBUG nova.virt.libvirt.guest [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:34:42 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:34:42 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-783c2bdc-ead3-4bc6-bfc6-f7f7388312c6">
Dec 06 07:34:42 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:42 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:42 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:42 compute-1 nova_compute[226101]:   </source>
Dec 06 07:34:42 compute-1 nova_compute[226101]:   <auth username="openstack">
Dec 06 07:34:42 compute-1 nova_compute[226101]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:34:42 compute-1 nova_compute[226101]:   </auth>
Dec 06 07:34:42 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:34:42 compute-1 nova_compute[226101]:   <serial>783c2bdc-ead3-4bc6-bfc6-f7f7388312c6</serial>
Dec 06 07:34:42 compute-1 nova_compute[226101]:   <shareable/>
Dec 06 07:34:42 compute-1 nova_compute[226101]: </disk>
Dec 06 07:34:42 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
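[annotation] This is the same volume 783c2bdc-ead3 that was attached to instance 829161c5 a few seconds earlier, now attached to instance 0c3768cf as well: together with <shareable/> and the tempest-AttachVolumeMultiAttachTest network label, this identifies a Cinder multiattach scenario, one volume attached read-write to two guests on the same host. From the API side the second attach is just another volume-attachment call; a sketch with openstacksdk (cloud name and the volume_id attribute spelling are assumptions to verify against your SDK release):

    import openstack

    conn = openstack.connect(cloud='mycloud')  # cloud name is an assumption
    conn.compute.create_volume_attachment(
        '0c3768cf-031f-4b14-a8b4-9ff73a9cfa72',
        volume_id='783c2bdc-ead3-4bc6-bfc6-f7f7388312c6')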
Dec 06 07:34:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:42.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:42.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e292 e292: 3 total, 3 up, 3 in
Dec 06 07:34:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:44 compute-1 ceph-mon[81689]: pgmap v2334: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 32 KiB/s wr, 233 op/s
Dec 06 07:34:44 compute-1 ceph-mon[81689]: pgmap v2335: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 25 KiB/s wr, 164 op/s
Dec 06 07:34:44 compute-1 ceph-mon[81689]: pgmap v2336: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 134 op/s
Dec 06 07:34:44 compute-1 nova_compute[226101]: 2025-12-06 07:34:44.262 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:44.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:45 compute-1 ceph-mon[81689]: osdmap e292: 3 total, 3 up, 3 in
Dec 06 07:34:45 compute-1 ceph-mon[81689]: pgmap v2338: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec 06 07:34:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/461198439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:45 compute-1 ceph-mon[81689]: pgmap v2339: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec 06 07:34:46 compute-1 nova_compute[226101]: 2025-12-06 07:34:46.285 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:46 compute-1 nova_compute[226101]: 2025-12-06 07:34:46.453 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:46 compute-1 nova_compute[226101]: 2025-12-06 07:34:46.495 226109 DEBUG nova.virt.libvirt.driver [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:46 compute-1 nova_compute[226101]: 2025-12-06 07:34:46.496 226109 DEBUG nova.virt.libvirt.driver [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:46 compute-1 nova_compute[226101]: 2025-12-06 07:34:46.496 226109 DEBUG nova.virt.libvirt.driver [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:46 compute-1 nova_compute[226101]: 2025-12-06 07:34:46.496 226109 DEBUG nova.virt.libvirt.driver [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:7f:88:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:34:46 compute-1 nova_compute[226101]: 2025-12-06 07:34:46.674 226109 DEBUG oslo_concurrency.lockutils [None req-8a9f39cd-5801-45f9-9a83-432d451976e2 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:46 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:46.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.003000078s ======
Dec 06 07:34:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:46.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Dec 06 07:34:47 compute-1 ceph-mon[81689]: pgmap v2340: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.4 MiB/s wr, 28 op/s
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.078 226109 DEBUG oslo_concurrency.lockutils [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "829161c5-19b5-459e-88a3-58512aaa5fc7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.079 226109 DEBUG oslo_concurrency.lockutils [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.105 226109 INFO nova.compute.manager [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Detaching volume 783c2bdc-ead3-4bc6-bfc6-f7f7388312c6
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.263 226109 INFO nova.virt.block_device [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Attempting to driver detach volume 783c2bdc-ead3-4bc6-bfc6-f7f7388312c6 from mountpoint /dev/vdb
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.275 226109 DEBUG nova.virt.libvirt.driver [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Attempting to detach device vdb from instance 829161c5-19b5-459e-88a3-58512aaa5fc7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.275 226109 DEBUG nova.virt.libvirt.guest [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-783c2bdc-ead3-4bc6-bfc6-f7f7388312c6">
Dec 06 07:34:48 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   </source>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <serial>783c2bdc-ead3-4bc6-bfc6-f7f7388312c6</serial>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <shareable/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]: </disk>
Dec 06 07:34:48 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.480 226109 INFO nova.virt.libvirt.driver [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully detached device vdb from instance 829161c5-19b5-459e-88a3-58512aaa5fc7 from the persistent domain config.
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.480 226109 DEBUG nova.virt.libvirt.driver [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 829161c5-19b5-459e-88a3-58512aaa5fc7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.482 226109 DEBUG nova.virt.libvirt.guest [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-783c2bdc-ead3-4bc6-bfc6-f7f7388312c6">
Dec 06 07:34:48 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   </source>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <serial>783c2bdc-ead3-4bc6-bfc6-f7f7388312c6</serial>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <shareable/>
Dec 06 07:34:48 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:34:48 compute-1 nova_compute[226101]: </disk>
Dec 06 07:34:48 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.602 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765006488.601658, 829161c5-19b5-459e-88a3-58512aaa5fc7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.603 226109 DEBUG nova.virt.libvirt.driver [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 829161c5-19b5-459e-88a3-58512aaa5fc7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.606 226109 INFO nova.virt.libvirt.driver [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully detached device vdb from instance 829161c5-19b5-459e-88a3-58512aaa5fc7 from the live domain config.
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.759 226109 INFO nova.virt.libvirt.driver [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Detected multiple connections on this host for volume: 783c2bdc-ead3-4bc6-bfc6-f7f7388312c6, skipping target disconnect.
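[annotation] The detach mirrors the attach in reverse: nova first removes the device from the persistent domain config, then (attempt "(1/8)" of its retry loop) from the live config, and only declares success after libvirt delivers the DeviceRemovedEvent for alias virtio-disk1. Because the multiattach volume is still connected to the other guest on this host, the final target disconnect is skipped. A minimal libvirt sketch of the two detach calls, again assuming the <disk> XML above is saved to disk.xml:

    import libvirt

    disk_xml = open('disk.xml').read()  # the <disk> element logged above
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('829161c5-19b5-459e-88a3-58512aaa5fc7')
    # Persistent config first, then live, matching the order in the log;
    # nova then waits for VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED.
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)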
Dec 06 07:34:48 compute-1 ceph-mon[81689]: pgmap v2341: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 06 07:34:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2908565525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:48.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:48.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:48 compute-1 nova_compute[226101]: 2025-12-06 07:34:48.965 226109 DEBUG nova.objects.instance [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid 829161c5-19b5-459e-88a3-58512aaa5fc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:49 compute-1 nova_compute[226101]: 2025-12-06 07:34:49.003 226109 DEBUG oslo_concurrency.lockutils [None req-20a60741-a5ec-4a0c-93fe-e874ffaea973 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:49 compute-1 podman[274709]: 2025-12-06 07:34:49.078381799 +0000 UTC m=+0.058025028 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 07:34:49 compute-1 podman[274708]: 2025-12-06 07:34:49.10765725 +0000 UTC m=+0.087054162 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:34:49 compute-1 podman[274710]: 2025-12-06 07:34:49.109722945 +0000 UTC m=+0.088273005 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
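[annotation] The three podman lines are periodic healthcheck results for the ovn_metadata_agent, multipathd, and ovn_controller containers, each running the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks; all report health_status=healthy with a zero failing streak. The current status can be read back with podman inspect; a sketch via subprocess (the Go-template path is an assumption, and older podman releases expose the same data under .State.Healthcheck.Status instead):

    import subprocess

    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}',
         'ovn_metadata_agent'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # "healthy" would match the health_status above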
Dec 06 07:34:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 e293: 3 total, 3 up, 3 in
Dec 06 07:34:50 compute-1 sshd-session[274706]: Invalid user admin from 78.128.112.74 port 54426
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.180 226109 DEBUG oslo_concurrency.lockutils [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.180 226109 DEBUG oslo_concurrency.lockutils [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.195 226109 INFO nova.compute.manager [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Detaching volume 783c2bdc-ead3-4bc6-bfc6-f7f7388312c6
Dec 06 07:34:50 compute-1 sshd-session[274706]: Connection closed by invalid user admin 78.128.112.74 port 54426 [preauth]
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.324 226109 INFO nova.virt.block_device [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Attempting to driver detach volume 783c2bdc-ead3-4bc6-bfc6-f7f7388312c6 from mountpoint /dev/vdb
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.333 226109 DEBUG nova.virt.libvirt.driver [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Attempting to detach device vdb from instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.334 226109 DEBUG nova.virt.libvirt.guest [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-783c2bdc-ead3-4bc6-bfc6-f7f7388312c6">
Dec 06 07:34:50 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   </source>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <serial>783c2bdc-ead3-4bc6-bfc6-f7f7388312c6</serial>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <shareable/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]: </disk>
Dec 06 07:34:50 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.458 226109 INFO nova.virt.libvirt.driver [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully detached device vdb from instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 from the persistent domain config.
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.458 226109 DEBUG nova.virt.libvirt.driver [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.459 226109 DEBUG nova.virt.libvirt.guest [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-783c2bdc-ead3-4bc6-bfc6-f7f7388312c6">
Dec 06 07:34:50 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   </source>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <serial>783c2bdc-ead3-4bc6-bfc6-f7f7388312c6</serial>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <shareable/>
Dec 06 07:34:50 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:34:50 compute-1 nova_compute[226101]: </disk>
Dec 06 07:34:50 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:34:50 compute-1 ceph-mon[81689]: pgmap v2342: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.722 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765006490.7218628, 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.723 226109 DEBUG nova.virt.libvirt.driver [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:34:50 compute-1 nova_compute[226101]: 2025-12-06 07:34:50.725 226109 INFO nova.virt.libvirt.driver [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully detached device vdb from instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 from the live domain config.
Dec 06 07:34:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:50.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:50.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:51 compute-1 ovn_controller[130279]: 2025-12-06T07:34:51Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:88:de 10.100.0.12
Dec 06 07:34:51 compute-1 ovn_controller[130279]: 2025-12-06T07:34:51Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:88:de 10.100.0.12
Dec 06 07:34:51 compute-1 nova_compute[226101]: 2025-12-06 07:34:51.154 226109 DEBUG nova.objects.instance [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:51 compute-1 nova_compute[226101]: 2025-12-06 07:34:51.188 226109 DEBUG oslo_concurrency.lockutils [None req-cc40aecf-c4cf-43ba-a081-4c82e03bcc1c 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:51 compute-1 nova_compute[226101]: 2025-12-06 07:34:51.287 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:51 compute-1 nova_compute[226101]: 2025-12-06 07:34:51.456 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:51 compute-1 ceph-mon[81689]: osdmap e293: 3 total, 3 up, 3 in
Dec 06 07:34:51 compute-1 ceph-mon[81689]: pgmap v2344: 305 pgs: 305 active+clean; 479 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Dec 06 07:34:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:52.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:52.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:54 compute-1 ceph-mon[81689]: pgmap v2345: 305 pgs: 305 active+clean; 479 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Dec 06 07:34:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:54.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:54 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:54.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:55 compute-1 ceph-mon[81689]: pgmap v2346: 305 pgs: 305 active+clean; 491 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.5 MiB/s wr, 46 op/s
Dec 06 07:34:56 compute-1 nova_compute[226101]: 2025-12-06 07:34:56.290 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:56 compute-1 nova_compute[226101]: 2025-12-06 07:34:56.496 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:56.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:56 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:56.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:34:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:58.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:34:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:58 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:58.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3450741167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3678721108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:59 compute-1 ceph-mon[81689]: pgmap v2347: 305 pgs: 305 active+clean; 524 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 3.2 MiB/s wr, 84 op/s
Dec 06 07:34:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4270521250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:00 compute-1 sudo[274769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:00 compute-1 sudo[274769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:00 compute-1 sudo[274769]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:00 compute-1 sudo[274794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:35:00 compute-1 sudo[274794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:00 compute-1 sudo[274794]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:00 compute-1 sudo[274819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:00 compute-1 sudo[274819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:00 compute-1 sudo[274819]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:00 compute-1 sudo[274844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:35:00 compute-1 sudo[274844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:35:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:00.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:00 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:00.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:00 compute-1 sudo[274844]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:01 compute-1 nova_compute[226101]: 2025-12-06 07:35:01.357 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:01 compute-1 nova_compute[226101]: 2025-12-06 07:35:01.498 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:01.652 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:01.653 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:01.654 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:02 compute-1 ceph-mon[81689]: pgmap v2348: 305 pgs: 305 active+clean; 524 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 3.2 MiB/s wr, 84 op/s
Dec 06 07:35:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:35:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:02.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:02 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:02.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:35:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:04.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:04 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:04.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1378460008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:05 compute-1 ceph-mon[81689]: pgmap v2349: 305 pgs: 305 active+clean; 555 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 307 KiB/s rd, 3.8 MiB/s wr, 90 op/s
Dec 06 07:35:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:35:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:35:06 compute-1 sshd-session[274899]: Connection reset by authenticating user root 91.202.233.33 port 22850 [preauth]
Dec 06 07:35:06 compute-1 nova_compute[226101]: 2025-12-06 07:35:06.360 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:06 compute-1 nova_compute[226101]: 2025-12-06 07:35:06.500 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:35:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:06.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:06 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:06.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:08 compute-1 ceph-mon[81689]: pgmap v2350: 305 pgs: 305 active+clean; 555 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 280 KiB/s rd, 2.8 MiB/s wr, 71 op/s
Dec 06 07:35:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:35:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:35:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:35:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:35:08 compute-1 ceph-mon[81689]: pgmap v2351: 305 pgs: 305 active+clean; 570 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 3.6 MiB/s wr, 79 op/s
Dec 06 07:35:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:35:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:08.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:08 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:08.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:08 compute-1 sshd-session[274901]: Connection reset by authenticating user root 91.202.233.33 port 22856 [preauth]
Dec 06 07:35:09 compute-1 nova_compute[226101]: 2025-12-06 07:35:09.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:09 compute-1 nova_compute[226101]: 2025-12-06 07:35:09.591 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:35:10 compute-1 nova_compute[226101]: 2025-12-06 07:35:10.549 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:35:10 compute-1 nova_compute[226101]: 2025-12-06 07:35:10.550 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:35:10 compute-1 nova_compute[226101]: 2025-12-06 07:35:10.550 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:35:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:10.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:10.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:11 compute-1 nova_compute[226101]: 2025-12-06 07:35:11.397 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:11 compute-1 nova_compute[226101]: 2025-12-06 07:35:11.503 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:11 compute-1 sshd-session[274903]: Connection reset by authenticating user root 91.202.233.33 port 22860 [preauth]
Dec 06 07:35:11 compute-1 nova_compute[226101]: 2025-12-06 07:35:11.999 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updating instance_info_cache with network_info: [{"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:35:12 compute-1 nova_compute[226101]: 2025-12-06 07:35:12.015 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:35:12 compute-1 nova_compute[226101]: 2025-12-06 07:35:12.016 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:35:12 compute-1 nova_compute[226101]: 2025-12-06 07:35:12.017 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:12 compute-1 nova_compute[226101]: 2025-12-06 07:35:12.045 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:12 compute-1 nova_compute[226101]: 2025-12-06 07:35:12.046 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:12 compute-1 nova_compute[226101]: 2025-12-06 07:35:12.046 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:12 compute-1 nova_compute[226101]: 2025-12-06 07:35:12.046 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:35:12 compute-1 nova_compute[226101]: 2025-12-06 07:35:12.046 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:12 compute-1 ceph-mon[81689]: pgmap v2352: 305 pgs: 305 active+clean; 576 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 256 KiB/s rd, 3.3 MiB/s wr, 74 op/s
Dec 06 07:35:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:12.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:12.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:13 compute-1 ceph-mon[81689]: pgmap v2353: 305 pgs: 305 active+clean; 576 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.9 MiB/s wr, 42 op/s
Dec 06 07:35:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3238803652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:13 compute-1 sudo[274927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:13 compute-1 sudo[274927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:13 compute-1 sudo[274927]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.435 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:13 compute-1 sudo[274952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:35:13 compute-1 sudo[274952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:13 compute-1 sudo[274952]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.511 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.512 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.515 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.515 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.518 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.519 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:35:13 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/743687950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.702 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.703 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3915MB free_disk=20.764301300048828GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.704 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.704 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.799 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance a01650e7-2f06-400b-82aa-0ad2c8b84e6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.800 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 829161c5-19b5-459e-88a3-58512aaa5fc7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.800 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.800 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:35:13 compute-1 nova_compute[226101]: 2025-12-06 07:35:13.800 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:35:14 compute-1 nova_compute[226101]: 2025-12-06 07:35:14.063 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3009168779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:35:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3009168779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:35:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1980977133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:14 compute-1 ceph-mon[81689]: pgmap v2354: 305 pgs: 305 active+clean; 623 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Dec 06 07:35:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:35:14 compute-1 ceph-mon[81689]: pgmap v2355: 305 pgs: 305 active+clean; 623 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.6 MiB/s wr, 38 op/s
Dec 06 07:35:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:35:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1926352857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/743687950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:14 compute-1 sshd-session[274905]: Connection reset by authenticating user root 91.202.233.33 port 33754 [preauth]
Dec 06 07:35:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:35:14 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/524363761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:14 compute-1 nova_compute[226101]: 2025-12-06 07:35:14.551 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:14 compute-1 nova_compute[226101]: 2025-12-06 07:35:14.557 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:35:14 compute-1 nova_compute[226101]: 2025-12-06 07:35:14.627 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:35:14 compute-1 nova_compute[226101]: 2025-12-06 07:35:14.810 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:35:14 compute-1 nova_compute[226101]: 2025-12-06 07:35:14.811 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:14.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:14.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/288153005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/524363761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3588510582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2354002293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:16 compute-1 nova_compute[226101]: 2025-12-06 07:35:16.384 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:16 compute-1 nova_compute[226101]: 2025-12-06 07:35:16.385 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:16 compute-1 nova_compute[226101]: 2025-12-06 07:35:16.385 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:16 compute-1 nova_compute[226101]: 2025-12-06 07:35:16.386 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:35:16 compute-1 nova_compute[226101]: 2025-12-06 07:35:16.399 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:16 compute-1 nova_compute[226101]: 2025-12-06 07:35:16.504 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:16 compute-1 ceph-mon[81689]: pgmap v2356: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.7 MiB/s wr, 57 op/s
Dec 06 07:35:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:16.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:16.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:16 compute-1 sshd-session[275000]: Connection reset by authenticating user root 91.202.233.33 port 33776 [preauth]
Dec 06 07:35:17 compute-1 nova_compute[226101]: 2025-12-06 07:35:17.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:17 compute-1 ceph-mon[81689]: pgmap v2357: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 782 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Dec 06 07:35:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:18.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:18.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:18 compute-1 nova_compute[226101]: 2025-12-06 07:35:18.842 226109 DEBUG nova.compute.manager [req-c10d19ac-a91b-47a4-9b34-0f7fd2897015 req-9940bbef-6ade-4183-9d1b-6c11d1794bc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Received event network-changed-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:18 compute-1 nova_compute[226101]: 2025-12-06 07:35:18.843 226109 DEBUG nova.compute.manager [req-c10d19ac-a91b-47a4-9b34-0f7fd2897015 req-9940bbef-6ade-4183-9d1b-6c11d1794bc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Refreshing instance network info cache due to event network-changed-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:35:18 compute-1 nova_compute[226101]: 2025-12-06 07:35:18.843 226109 DEBUG oslo_concurrency.lockutils [req-c10d19ac-a91b-47a4-9b34-0f7fd2897015 req-9940bbef-6ade-4183-9d1b-6c11d1794bc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:35:18 compute-1 nova_compute[226101]: 2025-12-06 07:35:18.843 226109 DEBUG oslo_concurrency.lockutils [req-c10d19ac-a91b-47a4-9b34-0f7fd2897015 req-9940bbef-6ade-4183-9d1b-6c11d1794bc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:35:18 compute-1 nova_compute[226101]: 2025-12-06 07:35:18.844 226109 DEBUG nova.network.neutron [req-c10d19ac-a91b-47a4-9b34-0f7fd2897015 req-9940bbef-6ade-4183-9d1b-6c11d1794bc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Refreshing network info cache for port 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:35:20 compute-1 podman[275005]: 2025-12-06 07:35:20.083227648 +0000 UTC m=+0.061046519 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:35:20 compute-1 podman[275004]: 2025-12-06 07:35:20.08401142 +0000 UTC m=+0.061394319 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:35:20 compute-1 podman[275006]: 2025-12-06 07:35:20.103323814 +0000 UTC m=+0.077452416 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
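The three podman health_status events above carry the whole healthcheck contract inline: the check script is bind-mounted from /var/lib/openstack/healthchecks/<service> and executed as /openstack/healthcheck, and podman reports health_status plus the current health_failing_streak. A minimal sketch (not part of the deployment) of extracting those fields from such journal lines; the field order is an assumption taken from the three events above, podman does not guarantee it:

```python
import re

# Field order assumed as in the events above:
# image, name, health_status, health_failing_streak.
HEALTH_RE = re.compile(
    r"container health_status \S+ \("
    r"image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
    r"health_status=(?P<status>[^,]+), health_failing_streak=(?P<streak>\d+)"
)

def parse_health_event(line):
    """Return (container_name, status, failing_streak) or None."""
    m = HEALTH_RE.search(line)
    return (m["name"], m["status"], int(m["streak"])) if m else None
```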
Dec 06 07:35:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:20.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:20.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
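The anonymous "HEAD / HTTP/1.0" requests landing every two seconds from 192.168.122.100 and 192.168.122.102 are external health probes against the RGW beast frontend, each answered 200 with sub-millisecond latency. A stdlib sketch that issues the same probe; the hostname is an assumption, and 7480 is only beast's default port, this deployment may listen elsewhere:

```python
import http.client

def probe_rgw(host, port=7480, timeout=5):
    """Send the same anonymous 'HEAD /' probe seen in the beast log."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().status  # 200 in every probe above
    finally:
        conn.close()
```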
Dec 06 07:35:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:21 compute-1 nova_compute[226101]: 2025-12-06 07:35:21.402 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:21 compute-1 nova_compute[226101]: 2025-12-06 07:35:21.506 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:21 compute-1 nova_compute[226101]: 2025-12-06 07:35:21.832 226109 DEBUG nova.network.neutron [req-c10d19ac-a91b-47a4-9b34-0f7fd2897015 req-9940bbef-6ade-4183-9d1b-6c11d1794bc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updated VIF entry in instance network info cache for port 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:35:21 compute-1 nova_compute[226101]: 2025-12-06 07:35:21.833 226109 DEBUG nova.network.neutron [req-c10d19ac-a91b-47a4-9b34-0f7fd2897015 req-9940bbef-6ade-4183-9d1b-6c11d1794bc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updating instance_info_cache with network_info: [{"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:35:21 compute-1 nova_compute[226101]: 2025-12-06 07:35:21.855 226109 DEBUG oslo_concurrency.lockutils [req-c10d19ac-a91b-47a4-9b34-0f7fd2897015 req-9940bbef-6ade-4183-9d1b-6c11d1794bc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:35:22 compute-1 nova_compute[226101]: 2025-12-06 07:35:22.584 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:22 compute-1 nova_compute[226101]: 2025-12-06 07:35:22.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
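_check_instance_build_time and _instance_usage_audit are methods registered through oslo.service's periodic-task decorator; the run_periodic_tasks frames in these DEBUG lines are the dispatcher walking every registered task whose interval has elapsed. A minimal sketch of that pattern, with an illustrative task body and spacing:

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)  # seconds; illustrative value
    def _check_instance_build_time(self, context):
        pass  # real task: fail builds stuck past their deadline

manager = Manager()
# Called in a loop by the service; each pass logs
# "Running periodic task ..." as in the lines above.
manager.run_periodic_tasks(None)
```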
Dec 06 07:35:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:22.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:22.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:22 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.076285839s, txc = 0x55b55391a900
Dec 06 07:35:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1096218273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/661832997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:23 compute-1 nova_compute[226101]: 2025-12-06 07:35:23.712 226109 DEBUG nova.compute.manager [None req-142ca0dd-96b3-4f00-a459-b4b8787aa459 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196
Dec 06 07:35:24 compute-1 nova_compute[226101]: 2025-12-06 07:35:24.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:24 compute-1 ceph-mon[81689]: pgmap v2358: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 773 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 06 07:35:24 compute-1 ceph-mon[81689]: pgmap v2359: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 256 op/s
Dec 06 07:35:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:24.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:24.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:25 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:25.602 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:35:25 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:25.603 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
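The matched SbGlobalUpdateEvent is an ovsdbapp RowEvent watching the SB_Global table: when northd bumps nb_cfg (48 to 49 in the row above), the agent schedules a delayed acknowledgement back into its own Chassis_Private row. A sketch of such an event class; the constructor arguments mirror what the log prints, the body is illustrative:

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    """Fire whenever SB_Global is updated, i.e. when nb_cfg moves."""

    def __init__(self):
        # Matches the log: events=('update',), table='SB_Global'
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

    def run(self, event, row, old):
        # The real agent defers this write (8 s above) to spread load
        # across chassis, then acks row.nb_cfg in Chassis_Private.
        print('nb_cfg is now', row.nb_cfg)
```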
Dec 06 07:35:25 compute-1 nova_compute[226101]: 2025-12-06 07:35:25.603 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:25 compute-1 ceph-mon[81689]: pgmap v2360: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 40 KiB/s wr, 239 op/s
Dec 06 07:35:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3294554558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:25 compute-1 ceph-mon[81689]: pgmap v2361: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 40 KiB/s wr, 239 op/s
Dec 06 07:35:25 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Dec 06 07:35:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:26 compute-1 nova_compute[226101]: 2025-12-06 07:35:26.404 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:26 compute-1 nova_compute[226101]: 2025-12-06 07:35:26.508 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:26.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:26.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:27 compute-1 nova_compute[226101]: 2025-12-06 07:35:27.770 226109 DEBUG oslo_concurrency.lockutils [None req-9682fd6a-1623-425c-8fdc-3c4a19ab61fe a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:27 compute-1 nova_compute[226101]: 2025-12-06 07:35:27.770 226109 DEBUG oslo_concurrency.lockutils [None req-9682fd6a-1623-425c-8fdc-3c4a19ab61fe a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:27 compute-1 nova_compute[226101]: 2025-12-06 07:35:27.770 226109 DEBUG nova.compute.manager [None req-9682fd6a-1623-425c-8fdc-3c4a19ab61fe a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:27 compute-1 nova_compute[226101]: 2025-12-06 07:35:27.774 226109 DEBUG nova.compute.manager [None req-9682fd6a-1623-425c-8fdc-3c4a19ab61fe a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 06 07:35:27 compute-1 nova_compute[226101]: 2025-12-06 07:35:27.775 226109 DEBUG nova.objects.instance [None req-9682fd6a-1623-425c-8fdc-3c4a19ab61fe a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'flavor' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:27 compute-1 nova_compute[226101]: 2025-12-06 07:35:27.801 226109 DEBUG nova.virt.libvirt.driver [None req-9682fd6a-1623-425c-8fdc-3c4a19ab61fe a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
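The Acquiring/acquired lines around do_stop_instance are oslo.concurrency's named-semaphore pattern: every power operation on an instance serializes on its UUID, which is why the matching release later in this stop reports "held 9.155s". A minimal sketch, with an illustrative function body:

```python
from oslo_concurrency import lockutils

INSTANCE = 'a01650e7-2f06-400b-82aa-0ad2c8b84e6c'

@lockutils.synchronized(INSTANCE)
def do_stop_instance():
    # Holding the per-instance lock: check power state, clean-shutdown
    # the domain, then persist vm_state=stopped.
    pass

# lockutils.lock(INSTANCE) is the context-manager form of the same lock.
```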
Dec 06 07:35:28 compute-1 nova_compute[226101]: 2025-12-06 07:35:28.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:28.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:28.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:29 compute-1 ceph-mon[81689]: pgmap v2362: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 37 KiB/s wr, 225 op/s
Dec 06 07:35:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:30.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:30.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:31 compute-1 nova_compute[226101]: 2025-12-06 07:35:31.406 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:31 compute-1 nova_compute[226101]: 2025-12-06 07:35:31.511 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:32 compute-1 ceph-mon[81689]: pgmap v2363: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 22 KiB/s wr, 192 op/s
Dec 06 07:35:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:32.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:32.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:33.605 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:34 compute-1 ceph-mon[81689]: pgmap v2364: 305 pgs: 305 active+clean; 670 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.8 MiB/s wr, 231 op/s
Dec 06 07:35:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:34.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:34.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:35 compute-1 ceph-mon[81689]: pgmap v2365: 305 pgs: 305 active+clean; 670 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 06 07:35:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2666969850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/839873704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:36 compute-1 kernel: tap844228dd-a0 (unregistering): left promiscuous mode
Dec 06 07:35:36 compute-1 NetworkManager[49031]: <info>  [1765006536.2260] device (tap844228dd-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:35:36 compute-1 ovn_controller[130279]: 2025-12-06T07:35:36Z|00506|binding|INFO|Releasing lport 844228dd-a09d-4ac5-bca7-4a4f664afd31 from this chassis (sb_readonly=0)
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.233 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:36 compute-1 ovn_controller[130279]: 2025-12-06T07:35:36Z|00507|binding|INFO|Setting lport 844228dd-a09d-4ac5-bca7-4a4f664afd31 down in Southbound
Dec 06 07:35:36 compute-1 ovn_controller[130279]: 2025-12-06T07:35:36Z|00508|binding|INFO|Removing iface tap844228dd-a0 ovn-installed in OVS
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.237 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.249 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:36 compute-1 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000076.scope: Deactivated successfully.
Dec 06 07:35:36 compute-1 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000076.scope: Consumed 21.238s CPU time.
Dec 06 07:35:36 compute-1 systemd-machined[190302]: Machine qemu-57-instance-00000076 terminated.
Dec 06 07:35:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:36.379 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:74:4a 10.100.0.12'], port_security=['fa:16:3e:5e:74:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a01650e7-2f06-400b-82aa-0ad2c8b84e6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=844228dd-a09d-4ac5-bca7-4a4f664afd31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:35:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:36.381 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 844228dd-a09d-4ac5-bca7-4a4f664afd31 in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 unbound from our chassis
Dec 06 07:35:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:36.382 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3beede49-1cbb-425c-b1af-82f43dc57163, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:35:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:36.383 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4e002aa4-c04e-4568-82cf-2a8b2d68b267]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:36.385 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace which is not needed anymore
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.408 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.455 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.462 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.513 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:36 compute-1 ceph-mon[81689]: pgmap v2366: 305 pgs: 305 active+clean; 657 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 2.3 MiB/s wr, 65 op/s
Dec 06 07:35:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4096406409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2344886864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.842 226109 INFO nova.virt.libvirt.driver [None req-9682fd6a-1623-425c-8fdc-3c4a19ab61fe a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance shutdown successfully after 9 seconds.
Dec 06 07:35:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:36.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:35:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:36 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:36.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.847 226109 INFO nova.virt.libvirt.driver [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance destroyed successfully.
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.848 226109 DEBUG nova.objects.instance [None req-9682fd6a-1623-425c-8fdc-3c4a19ab61fe a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'numa_topology' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.851 226109 DEBUG nova.compute.manager [req-8a9b3b32-15a5-4573-b4ff-1366dde38493 req-d0909557-7d8f-4fba-86ae-03fe935c20f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-vif-unplugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.851 226109 DEBUG oslo_concurrency.lockutils [req-8a9b3b32-15a5-4573-b4ff-1366dde38493 req-d0909557-7d8f-4fba-86ae-03fe935c20f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.852 226109 DEBUG oslo_concurrency.lockutils [req-8a9b3b32-15a5-4573-b4ff-1366dde38493 req-d0909557-7d8f-4fba-86ae-03fe935c20f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.852 226109 DEBUG oslo_concurrency.lockutils [req-8a9b3b32-15a5-4573-b4ff-1366dde38493 req-d0909557-7d8f-4fba-86ae-03fe935c20f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.852 226109 DEBUG nova.compute.manager [req-8a9b3b32-15a5-4573-b4ff-1366dde38493 req-d0909557-7d8f-4fba-86ae-03fe935c20f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] No waiting events found dispatching network-vif-unplugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.852 226109 WARNING nova.compute.manager [req-8a9b3b32-15a5-4573-b4ff-1366dde38493 req-d0909557-7d8f-4fba-86ae-03fe935c20f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received unexpected event network-vif-unplugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 for instance with vm_state active and task_state powering-off.
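The "No waiting events found" / "Received unexpected event" pair shows the receiving side of nova's external-event protocol: Neutron delivered network-vif-unplugged, but no thread had registered to wait for it, so it is warned about and dropped. Not nova's code, just the shape of that dispatch pattern in plain Python:

```python
import threading

class InstanceEvents:
    """Map expected event names to waiters; warn on unexpected ones."""

    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = {}            # event name -> threading.Event

    def prepare(self, name):
        ev = threading.Event()
        with self._lock:
            self._waiters[name] = ev
        return ev                     # caller blocks on ev.wait(timeout)

    def pop(self, name):
        with self._lock:
            ev = self._waiters.pop(name, None)
        if ev is None:                # nobody was waiting for this one
            print(f'WARNING: unexpected event {name}')
        else:
            ev.set()                  # wake the waiting thread
```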
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.868 226109 DEBUG nova.compute.manager [None req-9682fd6a-1623-425c-8fdc-3c4a19ab61fe a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:36 compute-1 nova_compute[226101]: 2025-12-06 07:35:36.925 226109 DEBUG oslo_concurrency.lockutils [None req-9682fd6a-1623-425c-8fdc-3c4a19ab61fe a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 9.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:37 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[272770]: [NOTICE]   (272774) : haproxy version is 2.8.14-c23fe91
Dec 06 07:35:37 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[272770]: [NOTICE]   (272774) : path to executable is /usr/sbin/haproxy
Dec 06 07:35:37 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[272770]: [WARNING]  (272774) : Exiting Master process...
Dec 06 07:35:37 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[272770]: [ALERT]    (272774) : Current worker (272776) exited with code 143 (Terminated)
Dec 06 07:35:37 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[272770]: [WARNING]  (272774) : All workers exited. Exiting... (0)
Dec 06 07:35:37 compute-1 systemd[1]: libpod-23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4.scope: Deactivated successfully.
Dec 06 07:35:37 compute-1 podman[275100]: 2025-12-06 07:35:37.129834517 +0000 UTC m=+0.632722913 container died 23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 07:35:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:35:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:38 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:38 compute-1 nova_compute[226101]: 2025-12-06 07:35:38.935 226109 DEBUG nova.compute.manager [req-f877de48-5aa8-4282-9a75-fc6bc0069169 req-44fc7091-03e2-4293-8c9d-4d1f544f8988 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:38 compute-1 nova_compute[226101]: 2025-12-06 07:35:38.935 226109 DEBUG oslo_concurrency.lockutils [req-f877de48-5aa8-4282-9a75-fc6bc0069169 req-44fc7091-03e2-4293-8c9d-4d1f544f8988 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:38 compute-1 nova_compute[226101]: 2025-12-06 07:35:38.936 226109 DEBUG oslo_concurrency.lockutils [req-f877de48-5aa8-4282-9a75-fc6bc0069169 req-44fc7091-03e2-4293-8c9d-4d1f544f8988 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:38 compute-1 nova_compute[226101]: 2025-12-06 07:35:38.936 226109 DEBUG oslo_concurrency.lockutils [req-f877de48-5aa8-4282-9a75-fc6bc0069169 req-44fc7091-03e2-4293-8c9d-4d1f544f8988 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:38 compute-1 nova_compute[226101]: 2025-12-06 07:35:38.936 226109 DEBUG nova.compute.manager [req-f877de48-5aa8-4282-9a75-fc6bc0069169 req-44fc7091-03e2-4293-8c9d-4d1f544f8988 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] No waiting events found dispatching network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:35:38 compute-1 nova_compute[226101]: 2025-12-06 07:35:38.936 226109 WARNING nova.compute.manager [req-f877de48-5aa8-4282-9a75-fc6bc0069169 req-44fc7091-03e2-4293-8c9d-4d1f544f8988 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received unexpected event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 for instance with vm_state stopped and task_state resize_prep.
Dec 06 07:35:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:35:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:40.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:40 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:40.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.171 226109 DEBUG nova.compute.manager [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Stashing vm_state: stopped _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.443 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.491 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.492 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.514 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.516 226109 DEBUG nova.objects.instance [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_requests' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.581 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.582 226109 INFO nova.compute.claims [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.582 226109 DEBUG nova.objects.instance [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'resources' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.621 226109 DEBUG nova.objects.instance [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_devices' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.782 226109 INFO nova.compute.resource_tracker [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating resource usage from migration 8b259a51-a4d2-43cd-a395-2985c92913ad
Dec 06 07:35:41 compute-1 nova_compute[226101]: 2025-12-06 07:35:41.861 226109 DEBUG oslo_concurrency.processutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
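For disk stats the resource tracker shells out to the ceph CLI through oslo.concurrency's processutils; the command is the one quoted verbatim in the DEBUG line. A sketch of the same call, reading the standard `ceph df --format=json` keys:

```python
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

stats = json.loads(out)['stats']
print('total GiB:', stats['total_bytes'] / 2**30,
      'avail GiB:', stats['total_avail_bytes'] / 2**30)
```

Later in the log the real call returns 0 only after 5.588 s, consistent with the slow _txc_committed_kv commits the local OSD keeps reporting.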
Dec 06 07:35:41 compute-1 ceph-mon[81689]: pgmap v2367: 305 pgs: 305 active+clean; 656 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 5.1 MiB/s wr, 103 op/s
Dec 06 07:35:42 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4-userdata-shm.mount: Deactivated successfully.
Dec 06 07:35:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-46e4a5c82973c5aaf7b5c78428d1817c47a7d6786f9d969ce8c50b7d79541048-merged.mount: Deactivated successfully.
Dec 06 07:35:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:42.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:42.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:43 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.696361065s, txc = 0x55b552a40600
Dec 06 07:35:43 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.513537884s, txc = 0x55b552b40c00
Dec 06 07:35:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:44 compute-1 podman[275100]: 2025-12-06 07:35:44.354508505 +0000 UTC m=+7.857396901 container cleanup 23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:35:44 compute-1 systemd[1]: libpod-conmon-23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4.scope: Deactivated successfully.
Dec 06 07:35:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:35:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:44.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:44 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:44.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:45 compute-1 ceph-mon[81689]: pgmap v2368: 305 pgs: 305 active+clean; 656 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 5.1 MiB/s wr, 98 op/s
Dec 06 07:35:46 compute-1 nova_compute[226101]: 2025-12-06 07:35:46.446 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:46 compute-1 nova_compute[226101]: 2025-12-06 07:35:46.516 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:46 compute-1 podman[275149]: 2025-12-06 07:35:46.613146894 +0000 UTC m=+2.237966509 container remove 23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:35:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:46.618 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8e01d3-d788-492e-9b4d-731e757da4d7]: (4, ('Sat Dec  6 07:35:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4)\n23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4\nSat Dec  6 07:35:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4)\n23732d6c9b0ad1837433d58cf90e784db31c22a6e01933e408f2d163205da5a4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:46.620 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[909f925e-9aea-4d92-b5cc-3efa2c0c93bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:46.621 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:46 compute-1 kernel: tap3beede49-10: left promiscuous mode
Dec 06 07:35:46 compute-1 nova_compute[226101]: 2025-12-06 07:35:46.623 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:46 compute-1 nova_compute[226101]: 2025-12-06 07:35:46.639 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:46.642 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[839f4cec-889c-4355-8070-107fb8028ce7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:46.660 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b93dc999-a23c-48a7-8f22-8fc473fcc143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:46.661 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[618b25f7-446a-4dc9-8bc9-d5d6ea081bfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:46.674 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b6d707-6a63-4845-a8c1-9200a33ea0cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664168, 'reachable_time': 30072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275169, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:46.677 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:35:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:35:46.677 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[b40db044-5a0d-464c-9b08-3905119b4c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:46 compute-1 systemd[1]: run-netns-ovnmeta\x2d3beede49\x2d1cbb\x2d425c\x2db1af\x2d82f43dc57163.mount: Deactivated successfully.
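With the last VIF on network 3beede49 gone, the agent tears down its ovnmeta- namespace through the privsep daemon; the netlink link dump and the remove_netns confirmation above are that teardown. Sketch only, using pyroute2 (the library neutron's ip_lib drives) and run directly as root, unlike the agent's delegated privsep call:

```python
from pyroute2 import netns

NS = 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163'

if NS in netns.listnetns():  # needs CAP_SYS_ADMIN, as the privsep daemon has
    netns.remove(NS)         # same end state as the remove_netns line above
```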
Dec 06 07:35:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:35:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:46 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:46.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:46.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:35:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1498299116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:47 compute-1 nova_compute[226101]: 2025-12-06 07:35:47.448 226109 DEBUG oslo_concurrency.processutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:47 compute-1 nova_compute[226101]: 2025-12-06 07:35:47.453 226109 DEBUG nova.compute.provider_tree [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:35:47 compute-1 nova_compute[226101]: 2025-12-06 07:35:47.519 226109 DEBUG nova.scheduler.client.report [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:35:47 compute-1 nova_compute[226101]: 2025-12-06 07:35:47.592 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 6.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:47 compute-1 nova_compute[226101]: 2025-12-06 07:35:47.593 226109 INFO nova.compute.manager [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Migrating
Dec 06 07:35:47 compute-1 nova_compute[226101]: 2025-12-06 07:35:47.834 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:35:47 compute-1 nova_compute[226101]: 2025-12-06 07:35:47.834 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:35:47 compute-1 nova_compute[226101]: 2025-12-06 07:35:47.835 226109 DEBUG nova.network.neutron [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:35:47 compute-1 ceph-mon[81689]: pgmap v2369: 305 pgs: 305 active+clean; 667 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 5.9 MiB/s wr, 138 op/s
Dec 06 07:35:47 compute-1 ceph-mon[81689]: pgmap v2370: 305 pgs: 305 active+clean; 667 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 4.1 MiB/s wr, 99 op/s
Dec 06 07:35:47 compute-1 ceph-mon[81689]: pgmap v2371: 305 pgs: 305 active+clean; 667 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 4.1 MiB/s wr, 104 op/s
Dec 06 07:35:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:35:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:35:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:48.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:35:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:35:48 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:48.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.336 226109 DEBUG nova.network.neutron [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating instance_info_cache with network_info: [{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.431 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:35:49 compute-1 ceph-mon[81689]: pgmap v2372: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.7 MiB/s wr, 127 op/s
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.691 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.694 226109 INFO nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance already shutdown.
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.700 226109 INFO nova.virt.libvirt.driver [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance destroyed successfully.
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.701 226109 DEBUG nova.virt.libvirt.vif [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460317127',display_name='tempest-ServerActionsTestOtherB-server-1460317127',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460317127',id=118,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:32:43Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-50zw6ys5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:35:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=a01650e7-2f06-400b-82aa-0ad2c8b84e6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:5e:74:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.701 226109 DEBUG nova.network.os_vif_util [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:5e:74:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.702 226109 DEBUG nova.network.os_vif_util [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.702 226109 DEBUG os_vif [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.704 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.704 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844228dd-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.706 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.708 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.711 226109 INFO os_vif [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0')
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.715 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.715 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:49 compute-1 nova_compute[226101]: 2025-12-06 07:35:49.918 226109 DEBUG nova.network.neutron [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Port 844228dd-a09d-4ac5-bca7-4a4f664afd31 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Dec 06 07:35:50 compute-1 nova_compute[226101]: 2025-12-06 07:35:50.180 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:50 compute-1 nova_compute[226101]: 2025-12-06 07:35:50.181 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:50 compute-1 nova_compute[226101]: 2025-12-06 07:35:50.181 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:50 compute-1 nova_compute[226101]: 2025-12-06 07:35:50.602 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:35:50 compute-1 nova_compute[226101]: 2025-12-06 07:35:50.603 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:35:50 compute-1 nova_compute[226101]: 2025-12-06 07:35:50.604 226109 DEBUG nova.network.neutron [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:35:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:50.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:50.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:51 compute-1 podman[275175]: 2025-12-06 07:35:51.076346173 +0000 UTC m=+0.058392398 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:35:51 compute-1 podman[275176]: 2025-12-06 07:35:51.076448646 +0000 UTC m=+0.051060033 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:35:51 compute-1 podman[275177]: 2025-12-06 07:35:51.11037623 +0000 UTC m=+0.085863590 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:35:51 compute-1 nova_compute[226101]: 2025-12-06 07:35:51.469 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006536.4685895, a01650e7-2f06-400b-82aa-0ad2c8b84e6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:35:51 compute-1 nova_compute[226101]: 2025-12-06 07:35:51.470 226109 INFO nova.compute.manager [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] VM Stopped (Lifecycle Event)
Dec 06 07:35:51 compute-1 nova_compute[226101]: 2025-12-06 07:35:51.489 226109 DEBUG nova.compute.manager [None req-d2c388a9-090c-4ce8-a950-97d945d9826e - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:51 compute-1 nova_compute[226101]: 2025-12-06 07:35:51.492 226109 DEBUG nova.compute.manager [None req-d2c388a9-090c-4ce8-a950-97d945d9826e - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: resize_migrated, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:35:51 compute-1 nova_compute[226101]: 2025-12-06 07:35:51.514 226109 INFO nova.compute.manager [None req-d2c388a9-090c-4ce8-a950-97d945d9826e - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] During sync_power_state the instance has a pending task (resize_migrated). Skip.
Dec 06 07:35:51 compute-1 nova_compute[226101]: 2025-12-06 07:35:51.620 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1498299116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:51 compute-1 ceph-mon[81689]: pgmap v2373: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 891 KiB/s wr, 89 op/s
Dec 06 07:35:52 compute-1 nova_compute[226101]: 2025-12-06 07:35:52.069 226109 DEBUG nova.network.neutron [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating instance_info_cache with network_info: [{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:35:52 compute-1 nova_compute[226101]: 2025-12-06 07:35:52.106 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:35:52 compute-1 nova_compute[226101]: 2025-12-06 07:35:52.299 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Dec 06 07:35:52 compute-1 nova_compute[226101]: 2025-12-06 07:35:52.300 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:35:52 compute-1 nova_compute[226101]: 2025-12-06 07:35:52.300 226109 INFO nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Creating image(s)
Dec 06 07:35:52 compute-1 nova_compute[226101]: 2025-12-06 07:35:52.470 226109 DEBUG nova.storage.rbd_utils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(nova-resize) on rbd image(a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:35:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:52.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:52.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:53 compute-1 ceph-mon[81689]: pgmap v2374: 305 pgs: 305 active+clean; 675 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 944 KiB/s wr, 159 op/s
Dec 06 07:35:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e294 e294: 3 total, 3 up, 3 in
Dec 06 07:35:54 compute-1 ceph-mon[81689]: pgmap v2375: 305 pgs: 305 active+clean; 676 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 126 KiB/s wr, 153 op/s
Dec 06 07:35:54 compute-1 nova_compute[226101]: 2025-12-06 07:35:54.708 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:54.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:54.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:54 compute-1 nova_compute[226101]: 2025-12-06 07:35:54.946 226109 DEBUG nova.objects.instance [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.099 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.099 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Ensure instance console log exists: /var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.100 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.100 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.100 226109 DEBUG oslo_concurrency.lockutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.103 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Start _get_guest_xml network_info=[{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:5e:74:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.107 226109 WARNING nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.111 226109 DEBUG nova.virt.libvirt.host [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.112 226109 DEBUG nova.virt.libvirt.host [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.115 226109 DEBUG nova.virt.libvirt.host [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.115 226109 DEBUG nova.virt.libvirt.host [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.116 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.117 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.117 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.117 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.118 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.118 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.118 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.118 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.118 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.119 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.119 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.119 226109 DEBUG nova.virt.hardware [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.119 226109 DEBUG nova.objects.instance [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:55 compute-1 nova_compute[226101]: 2025-12-06 07:35:55.135 226109 DEBUG oslo_concurrency.processutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:56 compute-1 nova_compute[226101]: 2025-12-06 07:35:56.675 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:56.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:56.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:57 compute-1 ceph-mon[81689]: osdmap e294: 3 total, 3 up, 3 in
Dec 06 07:35:57 compute-1 ceph-mon[81689]: pgmap v2377: 305 pgs: 305 active+clean; 684 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 165 KiB/s wr, 227 op/s
Dec 06 07:35:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:35:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4059959374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:58 compute-1 nova_compute[226101]: 2025-12-06 07:35:58.323 226109 DEBUG oslo_concurrency.processutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:35:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:58.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:35:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:35:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:35:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:58.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:35:59 compute-1 ceph-osd[79002]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Dec 06 07:35:59 compute-1 ceph-mon[81689]: pgmap v2378: 305 pgs: 305 active+clean; 690 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 183 KiB/s wr, 204 op/s
Dec 06 07:36:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:00.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:00.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:01.653 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:01.654 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:01.654 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4059959374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:02 compute-1 ceph-mon[81689]: pgmap v2379: 305 pgs: 305 active+clean; 690 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 183 KiB/s wr, 204 op/s
Dec 06 07:36:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:02.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:02.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:02 compute-1 nova_compute[226101]: 2025-12-06 07:36:02.953 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:02 compute-1 nova_compute[226101]: 2025-12-06 07:36:02.983 226109 DEBUG oslo_concurrency.processutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.011 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:36:03 compute-1 ceph-mon[81689]: pgmap v2380: 305 pgs: 305 active+clean; 694 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 807 KiB/s wr, 152 op/s
Dec 06 07:36:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:36:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3407513672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.500 226109 DEBUG oslo_concurrency.processutils [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.501 226109 DEBUG nova.virt.libvirt.vif [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460317127',display_name='tempest-ServerActionsTestOtherB-server-1460317127',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460317127',id=118,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:32:43Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-50zw6ys5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:35:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=a01650e7-2f06-400b-82aa-0ad2c8b84e6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:5e:74:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.501 226109 DEBUG nova.network.os_vif_util [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:5e:74:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.502 226109 DEBUG nova.network.os_vif_util [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.505 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:36:03 compute-1 nova_compute[226101]:   <uuid>a01650e7-2f06-400b-82aa-0ad2c8b84e6c</uuid>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   <name>instance-00000076</name>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   <memory>196608</memory>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerActionsTestOtherB-server-1460317127</nova:name>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:35:55</nova:creationTime>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <nova:flavor name="m1.micro">
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <nova:memory>192</nova:memory>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <nova:user uuid="a70f6c3c5e2c402bb6fa0e0507e9b6dc">tempest-ServerActionsTestOtherB-874907570-project-member</nova:user>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <nova:project uuid="b10aa03d68eb4d4799d53538521cc364">tempest-ServerActionsTestOtherB-874907570</nova:project>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <nova:port uuid="844228dd-a09d-4ac5-bca7-4a4f664afd31">
Dec 06 07:36:03 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <system>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <entry name="serial">a01650e7-2f06-400b-82aa-0ad2c8b84e6c</entry>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <entry name="uuid">a01650e7-2f06-400b-82aa-0ad2c8b84e6c</entry>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     </system>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   <os>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   </os>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   <features>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   </features>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk">
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       </source>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk.config">
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       </source>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:36:03 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:5e:74:4a"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <target dev="tap844228dd-a0"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c/console.log" append="off"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <video>
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     </video>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:36:03 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:36:03 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:36:03 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:36:03 compute-1 nova_compute[226101]: </domain>
Dec 06 07:36:03 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.507 226109 DEBUG nova.virt.libvirt.vif [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460317127',display_name='tempest-ServerActionsTestOtherB-server-1460317127',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460317127',id=118,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:32:43Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-50zw6ys5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:35:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=a01650e7-2f06-400b-82aa-0ad2c8b84e6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:5e:74:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.507 226109 DEBUG nova.network.os_vif_util [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:5e:74:4a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.508 226109 DEBUG nova.network.os_vif_util [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.508 226109 DEBUG os_vif [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.509 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.509 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.510 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.512 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.512 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844228dd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.513 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap844228dd-a0, col_values=(('external_ids', {'iface-id': '844228dd-a09d-4ac5-bca7-4a4f664afd31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:74:4a', 'vm-uuid': 'a01650e7-2f06-400b-82aa-0ad2c8b84e6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.514 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:03 compute-1 NetworkManager[49031]: <info>  [1765006563.5153] manager: (tap844228dd-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.516 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.519 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.519 226109 INFO os_vif [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0')
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.633 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.633 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.633 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No VIF found with MAC fa:16:3e:5e:74:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.634 226109 INFO nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Using config drive
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.673 226109 DEBUG nova.compute.manager [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:36:03 compute-1 nova_compute[226101]: 2025-12-06 07:36:03.673 226109 DEBUG nova.virt.libvirt.driver [None req-87c39ffd-d874-470a-bcc6-290a8336f17d a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Dec 06 07:36:04 compute-1 ceph-mon[81689]: pgmap v2381: 305 pgs: 305 active+clean; 694 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 795 KiB/s wr, 111 op/s
Dec 06 07:36:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3407513672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:04.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:04.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:06 compute-1 ceph-mon[81689]: pgmap v2382: 305 pgs: 305 active+clean; 703 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 2.0 MiB/s wr, 88 op/s
Dec 06 07:36:06 compute-1 nova_compute[226101]: 2025-12-06 07:36:06.561 226109 DEBUG oslo_concurrency.lockutils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:06 compute-1 nova_compute[226101]: 2025-12-06 07:36:06.561 226109 DEBUG oslo_concurrency.lockutils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:06 compute-1 nova_compute[226101]: 2025-12-06 07:36:06.562 226109 DEBUG nova.compute.manager [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Going to confirm migration 17 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Dec 06 07:36:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:06 compute-1 nova_compute[226101]: 2025-12-06 07:36:06.751 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:06.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:06.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:06 compute-1 nova_compute[226101]: 2025-12-06 07:36:06.945 226109 DEBUG oslo_concurrency.lockutils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:36:06 compute-1 nova_compute[226101]: 2025-12-06 07:36:06.945 226109 DEBUG oslo_concurrency.lockutils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:36:06 compute-1 nova_compute[226101]: 2025-12-06 07:36:06.946 226109 DEBUG nova.network.neutron [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:36:06 compute-1 nova_compute[226101]: 2025-12-06 07:36:06.946 226109 DEBUG nova.objects.instance [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'info_cache' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2192358059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1808890436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:07 compute-1 ceph-mon[81689]: pgmap v2383: 305 pgs: 305 active+clean; 711 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 116 op/s
Dec 06 07:36:08 compute-1 nova_compute[226101]: 2025-12-06 07:36:08.515 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:08.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:08.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:09 compute-1 nova_compute[226101]: 2025-12-06 07:36:09.339 226109 DEBUG nova.network.neutron [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating instance_info_cache with network_info: [{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:36:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/690703462' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:36:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/690703462' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:36:09 compute-1 nova_compute[226101]: 2025-12-06 07:36:09.692 226109 DEBUG oslo_concurrency.lockutils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:36:09 compute-1 nova_compute[226101]: 2025-12-06 07:36:09.693 226109 DEBUG nova.objects.instance [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'migration_context' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:10 compute-1 ceph-mon[81689]: pgmap v2384: 305 pgs: 305 active+clean; 711 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.455 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "7bca344a-3af1-4217-b97d-3f288712b57d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.455 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.499 226109 DEBUG nova.storage.rbd_utils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] removing snapshot(nova-resize) on rbd image(a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.529 226109 DEBUG nova.compute.manager [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.635 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.636 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.636 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.637 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.637 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.670 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.671 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.679 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:36:10 compute-1 nova_compute[226101]: 2025-12-06 07:36:10.680 226109 INFO nova.compute.claims [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:36:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:10.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:10.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.026 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:36:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1025593138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.110 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.191 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.192 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.197 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.198 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.201 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.202 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:11 compute-1 ovn_controller[130279]: 2025-12-06T07:36:11Z|00509|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.440574) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571440637, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 2129, "num_deletes": 258, "total_data_size": 5001573, "memory_usage": 5066048, "flush_reason": "Manual Compaction"}
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Dec 06 07:36:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e295 e295: 3 total, 3 up, 3 in
Dec 06 07:36:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3904011872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:11 compute-1 ceph-mon[81689]: pgmap v2385: 305 pgs: 305 active+clean; 723 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 06 07:36:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1025593138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.451 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.452 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4045MB free_disk=20.669841766357422GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.452 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.471 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571481118, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 3284966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48327, "largest_seqno": 50451, "table_properties": {"data_size": 3275826, "index_size": 5698, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19961, "raw_average_key_size": 21, "raw_value_size": 3257343, "raw_average_value_size": 3472, "num_data_blocks": 245, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006374, "oldest_key_time": 1765006374, "file_creation_time": 1765006571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 40604 microseconds, and 7862 cpu microseconds.
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.481183) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 3284966 bytes OK
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.481206) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.483522) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.483537) EVENT_LOG_v1 {"time_micros": 1765006571483532, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.483553) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 4991848, prev total WAL file size 4991889, number of live WAL files 2.
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.484872) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(3207KB)], [93(10019KB)]
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571484951, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 13544970, "oldest_snapshot_seqno": -1}
Dec 06 07:36:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:36:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2445399869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.554 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.560 226109 DEBUG nova.compute.provider_tree [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.577 226109 DEBUG oslo_concurrency.lockutils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.582 226109 DEBUG nova.scheduler.client.report [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 8244 keys, 11524957 bytes, temperature: kUnknown
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571598254, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 11524957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11471188, "index_size": 32025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 213321, "raw_average_key_size": 25, "raw_value_size": 11325504, "raw_average_value_size": 1373, "num_data_blocks": 1258, "num_entries": 8244, "num_filter_entries": 8244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765006571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:36:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.598790) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 11524957 bytes
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.607319) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.5 rd, 101.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 8774, records dropped: 530 output_compression: NoCompression
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.607357) EVENT_LOG_v1 {"time_micros": 1765006571607343, "job": 58, "event": "compaction_finished", "compaction_time_micros": 113377, "compaction_time_cpu_micros": 27332, "output_level": 6, "num_output_files": 1, "total_output_size": 11524957, "num_input_records": 8774, "num_output_records": 8244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.607 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571608207, "job": 58, "event": "table_file_deletion", "file_number": 95}
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.608 226109 DEBUG nova.compute.manager [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571609904, "job": 58, "event": "table_file_deletion", "file_number": 93}
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.484730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.609980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.609984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.609986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.609988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:11.609990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.612 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.706 226109 DEBUG nova.compute.manager [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.706 226109 DEBUG nova.network.neutron [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.716 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Applying migration context for instance a01650e7-2f06-400b-82aa-0ad2c8b84e6c as it has an incoming, in-progress migration 8b259a51-a4d2-43cd-a395-2985c92913ad. Migration status is confirming _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.717 226109 INFO nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating resource usage from migration 8b259a51-a4d2-43cd-a395-2985c92913ad
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.724 226109 INFO nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.747 226109 DEBUG nova.compute.manager [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.753 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 829161c5-19b5-459e-88a3-58512aaa5fc7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.754 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.754 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Migration 8b259a51-a4d2-43cd-a395-2985c92913ad is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.754 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance a01650e7-2f06-400b-82aa-0ad2c8b84e6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.754 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 7bca344a-3af1-4217-b97d-3f288712b57d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.754 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.754 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=1216MB phys_disk=20GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.756 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.857 226109 DEBUG nova.compute.manager [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.858 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.859 226109 INFO nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Creating image(s)
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.882 226109 DEBUG nova.storage.rbd_utils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 7bca344a-3af1-4217-b97d-3f288712b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.907 226109 DEBUG nova.storage.rbd_utils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 7bca344a-3af1-4217-b97d-3f288712b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.936 226109 DEBUG nova.storage.rbd_utils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 7bca344a-3af1-4217-b97d-3f288712b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.940 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:11 compute-1 nova_compute[226101]: 2025-12-06 07:36:11.970 226109 DEBUG nova.policy [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2aa5b15c15f84a8cb24776d5c781eb09', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.006 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.007 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.007 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.007 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:12.010 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:36:12 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:12.011 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.033 226109 DEBUG nova.storage.rbd_utils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 7bca344a-3af1-4217-b97d-3f288712b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.036 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 7bca344a-3af1-4217-b97d-3f288712b57d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.066 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.080 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:12 compute-1 ceph-mon[81689]: osdmap e295: 3 total, 3 up, 3 in
Dec 06 07:36:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2445399869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:36:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4100487112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.588 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.595 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.617 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.658 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.658 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.658 226109 DEBUG oslo_concurrency.lockutils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 1.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.673 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 7bca344a-3af1-4217-b97d-3f288712b57d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.746 226109 DEBUG nova.storage.rbd_utils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] resizing rbd image 7bca344a-3af1-4217-b97d-3f288712b57d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.857 226109 DEBUG nova.objects.instance [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'migration_context' on Instance uuid 7bca344a-3af1-4217-b97d-3f288712b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.882 226109 DEBUG oslo_concurrency.processutils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:12.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:12.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.909 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.910 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Ensure instance console log exists: /var/lib/nova/instances/7bca344a-3af1-4217-b97d-3f288712b57d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.911 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.911 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:12 compute-1 nova_compute[226101]: 2025-12-06 07:36:12.911 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:36:13 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1337473103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.319 226109 DEBUG oslo_concurrency.processutils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.327 226109 DEBUG nova.compute.provider_tree [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.342 226109 DEBUG nova.scheduler.client.report [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.443 226109 DEBUG oslo_concurrency.lockutils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.443 226109 DEBUG nova.compute.manager [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Resized/migrated instance is powered off. Setting vm_state to 'stopped'. _confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4805
Dec 06 07:36:13 compute-1 sudo[275684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.648 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:13 compute-1 sudo[275684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:13 compute-1 sudo[275684]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.660 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.661 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.661 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.694 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:36:13 compute-1 sudo[275709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:36:13 compute-1 sudo[275709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:13 compute-1 sudo[275709]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.718 226109 INFO nova.scheduler.client.report [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Deleted allocation for migration 8b259a51-a4d2-43cd-a395-2985c92913ad
Dec 06 07:36:13 compute-1 sudo[275734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:13 compute-1 sudo[275734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:13 compute-1 sudo[275734]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.803 226109 DEBUG oslo_concurrency.lockutils [None req-a2339f70-58da-418d-bf27-a60e66694cb6 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 7.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:13 compute-1 sudo[275759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:36:13 compute-1 sudo[275759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4100487112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:13 compute-1 ceph-mon[81689]: pgmap v2387: 305 pgs: 305 active+clean; 723 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 1.9 MiB/s wr, 101 op/s
Dec 06 07:36:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1337473103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.905 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.906 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.907 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:36:13 compute-1 nova_compute[226101]: 2025-12-06 07:36:13.907 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:14 compute-1 nova_compute[226101]: 2025-12-06 07:36:14.224 226109 DEBUG nova.network.neutron [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Successfully created port: 66ec4ebb-3c12-40ec-8b46-858eeb166d64 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.242263) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574242300, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 296, "num_deletes": 261, "total_data_size": 77029, "memory_usage": 83864, "flush_reason": "Manual Compaction"}
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Dec 06 07:36:14 compute-1 sudo[275759]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574303498, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 50360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50452, "largest_seqno": 50747, "table_properties": {"data_size": 48413, "index_size": 112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4970, "raw_average_key_size": 17, "raw_value_size": 44515, "raw_average_value_size": 157, "num_data_blocks": 5, "num_entries": 282, "num_filter_entries": 282, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006571, "oldest_key_time": 1765006571, "file_creation_time": 1765006574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 61297 microseconds, and 893 cpu microseconds.
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.303561) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 50360 bytes OK
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.303580) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.311688) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.311732) EVENT_LOG_v1 {"time_micros": 1765006574311720, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.311751) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 74816, prev total WAL file size 74816, number of live WAL files 2.
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.312231) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353132' seq:72057594037927935, type:22 .. '6C6F676D0031373639' seq:0, type:0; will stop at (end)
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(49KB)], [96(10MB)]
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574312286, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 11575317, "oldest_snapshot_seqno": -1}
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 7995 keys, 11428429 bytes, temperature: kUnknown
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574463138, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 11428429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11375903, "index_size": 31422, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20037, "raw_key_size": 209124, "raw_average_key_size": 26, "raw_value_size": 11234262, "raw_average_value_size": 1405, "num_data_blocks": 1228, "num_entries": 7995, "num_filter_entries": 7995, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765006574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.463337) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 11428429 bytes
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.514285) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.7 rd, 75.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 11.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(456.8) write-amplify(226.9) OK, records in: 8526, records dropped: 531 output_compression: NoCompression
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.514323) EVENT_LOG_v1 {"time_micros": 1765006574514309, "job": 60, "event": "compaction_finished", "compaction_time_micros": 150906, "compaction_time_cpu_micros": 25586, "output_level": 6, "num_output_files": 1, "total_output_size": 11428429, "num_input_records": 8526, "num_output_records": 7995, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574514491, "job": 60, "event": "table_file_deletion", "file_number": 98}
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574516014, "job": 60, "event": "table_file_deletion", "file_number": 96}
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.312137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.516086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.516092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.516094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.516095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:36:14.516097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:14.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:14.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1884224956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:36:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:36:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:36:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:36:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:36:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:36:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.220 226109 DEBUG nova.network.neutron [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Successfully updated port: 66ec4ebb-3c12-40ec-8b46-858eeb166d64 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.237 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "refresh_cache-7bca344a-3af1-4217-b97d-3f288712b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.237 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquired lock "refresh_cache-7bca344a-3af1-4217-b97d-3f288712b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.238 226109 DEBUG nova.network.neutron [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.358 226109 DEBUG nova.compute.manager [req-364ac0b5-e33c-451e-81cd-64d6fa857037 req-e21a362e-fb14-433e-bcc2-44bde7f0a4a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Received event network-changed-66ec4ebb-3c12-40ec-8b46-858eeb166d64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.359 226109 DEBUG nova.compute.manager [req-364ac0b5-e33c-451e-81cd-64d6fa857037 req-e21a362e-fb14-433e-bcc2-44bde7f0a4a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Refreshing instance network info cache due to event network-changed-66ec4ebb-3c12-40ec-8b46-858eeb166d64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.359 226109 DEBUG oslo_concurrency.lockutils [req-364ac0b5-e33c-451e-81cd-64d6fa857037 req-e21a362e-fb14-433e-bcc2-44bde7f0a4a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-7bca344a-3af1-4217-b97d-3f288712b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.411 226109 DEBUG nova.network.neutron [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.523 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating instance_info_cache with network_info: [{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.540 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.540 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.540 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:15 compute-1 nova_compute[226101]: 2025-12-06 07:36:15.541 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:36:16 compute-1 nova_compute[226101]: 2025-12-06 07:36:16.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:16 compute-1 nova_compute[226101]: 2025-12-06 07:36:16.756 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:16 compute-1 ceph-mon[81689]: pgmap v2388: 305 pgs: 305 active+clean; 740 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 1.6 MiB/s wr, 88 op/s
Dec 06 07:36:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3988076846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2477259455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:16.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:16.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.117 226109 DEBUG nova.network.neutron [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Updating instance_info_cache with network_info: [{"id": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "address": "fa:16:3e:db:bc:74", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66ec4ebb-3c", "ovs_interfaceid": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.138 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Releasing lock "refresh_cache-7bca344a-3af1-4217-b97d-3f288712b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.139 226109 DEBUG nova.compute.manager [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Instance network_info: |[{"id": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "address": "fa:16:3e:db:bc:74", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66ec4ebb-3c", "ovs_interfaceid": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.139 226109 DEBUG oslo_concurrency.lockutils [req-364ac0b5-e33c-451e-81cd-64d6fa857037 req-e21a362e-fb14-433e-bcc2-44bde7f0a4a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-7bca344a-3af1-4217-b97d-3f288712b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.139 226109 DEBUG nova.network.neutron [req-364ac0b5-e33c-451e-81cd-64d6fa857037 req-e21a362e-fb14-433e-bcc2-44bde7f0a4a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Refreshing network info cache for port 66ec4ebb-3c12-40ec-8b46-858eeb166d64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.141 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Start _get_guest_xml network_info=[{"id": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "address": "fa:16:3e:db:bc:74", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66ec4ebb-3c", "ovs_interfaceid": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
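
This Start _get_guest_xml record carries everything the driver renders from: network_info for the interface element, disk_info for device naming, image_meta and block_device_info for the disks. The disk_info 'mapping' dict is what pins the guest-visible names, root on virtio as vda and the config drive on sata as sda. A toy rendering of such a mapping into the <target> elements that appear in the XML dump below, a simplification rather than Nova's actual code:

    # Hypothetical simplification: render <target> elements from a
    # disk mapping shaped like the one in the record above.
    mapping = {
        'root':        {'bus': 'virtio', 'dev': 'vda', 'type': 'disk'},
        'disk.config': {'bus': 'sata',   'dev': 'sda', 'type': 'cdrom'},
    }
    for name, info in mapping.items():
        print(f'{name}: <target dev="{info["dev"]}" bus="{info["bus"]}"/>')
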
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.147 226109 WARNING nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.155 226109 DEBUG nova.virt.libvirt.host [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.156 226109 DEBUG nova.virt.libvirt.host [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.163 226109 DEBUG nova.virt.libvirt.host [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.163 226109 DEBUG nova.virt.libvirt.host [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
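
The pair of probes above is the driver checking where the host's cpu controller lives: absent from cgroups v1, present in the unified v2 hierarchy, as expected on an EL9 host. The v2 check amounts to reading the unified hierarchy's controller list; the sketch below assumes the standard mount point and is not Nova's exact code:

    # Sketch: detect a cgroup-v2 "cpu" controller; the path is the
    # standard location of the unified hierarchy on an EL9 host.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        controllers = Path('/sys/fs/cgroup/cgroup.controllers')
        try:
            return 'cpu' in controllers.read_text().split()
        except FileNotFoundError:
            return False  # host is not running the unified hierarchy

    print(has_cgroupsv2_cpu_controller())
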
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.165 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.165 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.166 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.166 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.167 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.167 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.167 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.168 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.168 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.168 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.169 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.169 226109 DEBUG nova.virt.hardware [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
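
Taken together, the hardware.py lines above show the whole topology negotiation: flavor and image declare neither limits nor preferences (0:0:0), the ceiling therefore defaults to 65536 per dimension, and for a single vCPU the only factorization is sockets=1, cores=1, threads=1, which is what lands in the <topology> element further down. A condensed, hypothetical version of that enumeration:

    # Hypothetical simplification: enumerate (sockets, cores, threads)
    # triples whose product equals the flavor's vCPU count.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
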
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.172 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:17 compute-1 ceph-mon[81689]: pgmap v2389: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 06 07:36:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:36:17 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2282150798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.600 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.623 226109 DEBUG nova.storage.rbd_utils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 7bca344a-3af1-4217-b97d-3f288712b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:36:17 compute-1 nova_compute[226101]: 2025-12-06 07:36:17.626 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:18.013 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:36:18 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1026922230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.173 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
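
The two "ceph mon dump --format=json" subprocesses are how the RBD image backend resolves monitor addresses; those addresses reappear below as the <host> elements of both network disks. The same lookup as a standalone sketch; the JSON layout ("mons", "public_addr") is an assumption about this Ceph release's output:

    # Sketch: resolve Ceph monitor addresses the way the logged
    # subprocess does. The key layout is an assumption and may vary
    # across Ceph releases.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    mons = json.loads(out)['mons']
    hosts = [m['public_addr'].split('/')[0] for m in mons]
    print(hosts)  # e.g. ['192.168.122.100:6789', ...]
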
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.175 226109 DEBUG nova.virt.libvirt.vif [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:36:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1059207158',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1059207158',id=130,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-goa0p5ar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:36:11Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=7bca344a-3af1-4217-b97d-3f288712b57d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "address": "fa:16:3e:db:bc:74", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66ec4ebb-3c", "ovs_interfaceid": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.176 226109 DEBUG nova.network.os_vif_util [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "address": "fa:16:3e:db:bc:74", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66ec4ebb-3c", "ovs_interfaceid": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.177 226109 DEBUG nova.network.os_vif_util [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:bc:74,bridge_name='br-int',has_traffic_filtering=True,id=66ec4ebb-3c12-40ec-8b46-858eeb166d64,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66ec4ebb-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.178 226109 DEBUG nova.objects.instance [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7bca344a-3af1-4217-b97d-3f288712b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.320 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:36:18 compute-1 nova_compute[226101]:   <uuid>7bca344a-3af1-4217-b97d-3f288712b57d</uuid>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   <name>instance-00000082</name>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1059207158</nova:name>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:36:17</nova:creationTime>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <nova:user uuid="2aa5b15c15f84a8cb24776d5c781eb09">tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member</nova:user>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <nova:project uuid="17cdfa63c4424ec7a0eb4bb3d7372c14">tempest-ServerBootFromVolumeStableRescueTest-344238221</nova:project>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <nova:port uuid="66ec4ebb-3c12-40ec-8b46-858eeb166d64">
Dec 06 07:36:18 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <system>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <entry name="serial">7bca344a-3af1-4217-b97d-3f288712b57d</entry>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <entry name="uuid">7bca344a-3af1-4217-b97d-3f288712b57d</entry>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     </system>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   <os>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   </os>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   <features>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   </features>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/7bca344a-3af1-4217-b97d-3f288712b57d_disk">
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       </source>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/7bca344a-3af1-4217-b97d-3f288712b57d_disk.config">
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       </source>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:36:18 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:db:bc:74"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <target dev="tap66ec4ebb-3c"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/7bca344a-3af1-4217-b97d-3f288712b57d/console.log" append="off"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <video>
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     </video>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:36:18 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:36:18 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:36:18 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:36:18 compute-1 nova_compute[226101]: </domain>
Dec 06 07:36:18 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
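
The document dumped above is the complete guest definition: an RBD-backed virtio root disk plus a sata cdrom for the config drive, both pointing at the three monitors, a vhost-backed tap interface at MTU 1442, and the q35 machine type requested by the image's hw_machine_type property. Handing such XML to libvirt looks roughly like the sketch below, using the libvirt-python bindings directly; Nova reaches the same calls through its own Host/Guest wrappers:

    # Sketch: define and boot a guest from XML with libvirt-python;
    # `xml` stands in for the <domain> document logged above.
    import libvirt

    xml = '<domain type="kvm">...</domain>'  # elided; see the dump above
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the definition
        dom.create()               # power it on
    finally:
        conn.close()
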
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.321 226109 DEBUG nova.compute.manager [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Preparing to wait for external event network-vif-plugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.321 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.321 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.321 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
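
Before plugging the VIF, the manager registers a waiter for the network-vif-plugged event under the per-instance "-events" lock seen above; Neutron posts that event back through the API once OVN binds the port, and only then does the driver let the guest power on. A stripped-down analogue of that keyed, one-shot registry (an illustration, not Nova's implementation):

    # Stripped-down analogue of prepare_for_instance_event: register a
    # waiter keyed by (instance, event-name) before starting the work
    # that will eventually trigger it, then block with a timeout.
    import threading

    _events: dict[tuple[str, str], threading.Event] = {}
    _lock = threading.Lock()

    def prepare(instance_uuid: str, name: str) -> threading.Event:
        with _lock:  # mirrors the "<uuid>-events" lock in the log
            return _events.setdefault((instance_uuid, name),
                                      threading.Event())

    def deliver(instance_uuid: str, name: str) -> None:
        with _lock:
            ev = _events.pop((instance_uuid, name), None)
        if ev:
            ev.set()

    waiter = prepare('7bca344a-...', 'network-vif-plugged-66ec4ebb-...')
    # ... plug the VIF; the event handler elsewhere calls deliver(...)
    waiter.wait(timeout=300)
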
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.322 226109 DEBUG nova.virt.libvirt.vif [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:36:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1059207158',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1059207158',id=130,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-goa0p5ar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:36:11Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=7bca344a-3af1-4217-b97d-3f288712b57d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "address": "fa:16:3e:db:bc:74", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66ec4ebb-3c", "ovs_interfaceid": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.322 226109 DEBUG nova.network.os_vif_util [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "address": "fa:16:3e:db:bc:74", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66ec4ebb-3c", "ovs_interfaceid": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.323 226109 DEBUG nova.network.os_vif_util [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:bc:74,bridge_name='br-int',has_traffic_filtering=True,id=66ec4ebb-3c12-40ec-8b46-858eeb166d64,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66ec4ebb-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.324 226109 DEBUG os_vif [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:bc:74,bridge_name='br-int',has_traffic_filtering=True,id=66ec4ebb-3c12-40ec-8b46-858eeb166d64,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66ec4ebb-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.325 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.325 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.326 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.327 226109 DEBUG nova.objects.instance [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'flavor' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.330 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.330 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66ec4ebb-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.330 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66ec4ebb-3c, col_values=(('external_ids', {'iface-id': '66ec4ebb-3c12-40ec-8b46-858eeb166d64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:bc:74', 'vm-uuid': '7bca344a-3af1-4217-b97d-3f288712b57d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.332 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:18 compute-1 NetworkManager[49031]: <info>  [1765006578.3328] manager: (tap66ec4ebb-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.333 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.339 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.340 226109 INFO os_vif [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:bc:74,bridge_name='br-int',has_traffic_filtering=True,id=66ec4ebb-3c12-40ec-8b46-858eeb166d64,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66ec4ebb-3c')
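
The plug itself is three idempotent OVSDB operations: ensure br-int exists, add the tap port, and stamp the Interface row's external_ids so ovn-controller can match iface-id to the Neutron port. Reproduced through ovsdbapp, the library emitting the transaction records above; the socket path is an assumption for a stock Open vSwitch install:

    # Sketch: the AddBridgeCommand/AddPortCommand/DbSetCommand sequence
    # from the log, issued through ovsdbapp. The socket path is an
    # assumption for a default OVS deployment.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True,
                           datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap66ec4ebb-3c', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap66ec4ebb-3c',
            ('external_ids', {
                'iface-id': '66ec4ebb-3c12-40ec-8b46-858eeb166d64',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:db:bc:74',
                'vm-uuid': '7bca344a-3af1-4217-b97d-3f288712b57d'})))

Both add_br and add_port pass may_exist=True, which is why the first transaction above commits as "Transaction caused no change" when br-int is already present.
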
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.353 226109 DEBUG oslo_concurrency.lockutils [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.354 226109 DEBUG oslo_concurrency.lockutils [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.354 226109 DEBUG nova.network.neutron [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.354 226109 DEBUG nova.objects.instance [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'info_cache' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.682 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.682 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.683 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No VIF found with MAC fa:16:3e:db:bc:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.683 226109 INFO nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Using config drive
Dec 06 07:36:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1306820683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2282150798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1026922230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:18 compute-1 nova_compute[226101]: 2025-12-06 07:36:18.747 226109 DEBUG nova.storage.rbd_utils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 7bca344a-3af1-4217-b97d-3f288712b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:36:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:18.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:18.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:19 compute-1 nova_compute[226101]: 2025-12-06 07:36:19.102 226109 DEBUG nova.network.neutron [req-364ac0b5-e33c-451e-81cd-64d6fa857037 req-e21a362e-fb14-433e-bcc2-44bde7f0a4a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Updated VIF entry in instance network info cache for port 66ec4ebb-3c12-40ec-8b46-858eeb166d64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:36:19 compute-1 nova_compute[226101]: 2025-12-06 07:36:19.103 226109 DEBUG nova.network.neutron [req-364ac0b5-e33c-451e-81cd-64d6fa857037 req-e21a362e-fb14-433e-bcc2-44bde7f0a4a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Updating instance_info_cache with network_info: [{"id": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "address": "fa:16:3e:db:bc:74", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66ec4ebb-3c", "ovs_interfaceid": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:36:19 compute-1 nova_compute[226101]: 2025-12-06 07:36:19.121 226109 DEBUG oslo_concurrency.lockutils [req-364ac0b5-e33c-451e-81cd-64d6fa857037 req-e21a362e-fb14-433e-bcc2-44bde7f0a4a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-7bca344a-3af1-4217-b97d-3f288712b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:36:19 compute-1 nova_compute[226101]: 2025-12-06 07:36:19.269 226109 INFO nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Creating config drive at /var/lib/nova/instances/7bca344a-3af1-4217-b97d-3f288712b57d/disk.config
Dec 06 07:36:19 compute-1 nova_compute[226101]: 2025-12-06 07:36:19.275 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7bca344a-3af1-4217-b97d-3f288712b57d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoi99qxyc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e296 e296: 3 total, 3 up, 3 in
Dec 06 07:36:19 compute-1 nova_compute[226101]: 2025-12-06 07:36:19.413 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7bca344a-3af1-4217-b97d-3f288712b57d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoi99qxyc" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:19 compute-1 nova_compute[226101]: 2025-12-06 07:36:19.440 226109 DEBUG nova.storage.rbd_utils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 7bca344a-3af1-4217-b97d-3f288712b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:36:19 compute-1 nova_compute[226101]: 2025-12-06 07:36:19.444 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7bca344a-3af1-4217-b97d-3f288712b57d/disk.config 7bca344a-3af1-4217-b97d-3f288712b57d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:19 compute-1 ceph-mon[81689]: pgmap v2390: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 06 07:36:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1629657261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:19 compute-1 ceph-mon[81689]: osdmap e296: 3 total, 3 up, 3 in
Dec 06 07:36:19 compute-1 nova_compute[226101]: 2025-12-06 07:36:19.929 226109 DEBUG nova.network.neutron [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating instance_info_cache with network_info: [{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:36:19 compute-1 nova_compute[226101]: 2025-12-06 07:36:19.989 226109 DEBUG oslo_concurrency.lockutils [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-a01650e7-2f06-400b-82aa-0ad2c8b84e6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.033 226109 DEBUG oslo_concurrency.processutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7bca344a-3af1-4217-b97d-3f288712b57d/disk.config 7bca344a-3af1-4217-b97d-3f288712b57d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.034 226109 INFO nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Deleting local config drive /var/lib/nova/instances/7bca344a-3af1-4217-b97d-3f288712b57d/disk.config because it was imported into RBD.
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.039 226109 INFO nova.virt.libvirt.driver [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance destroyed successfully.
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.040 226109 DEBUG nova.objects.instance [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'numa_topology' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.070 226109 DEBUG nova.objects.instance [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'resources' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.085 226109 DEBUG nova.virt.libvirt.vif [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460317127',display_name='tempest-ServerActionsTestOtherB-server-1460317127',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460317127',id=118,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:36:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-50zw6ys5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:36:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=a01650e7-2f06-400b-82aa-0ad2c8b84e6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.086 226109 DEBUG nova.network.os_vif_util [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.086 226109 DEBUG nova.network.os_vif_util [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.087 226109 DEBUG os_vif [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.089 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.089 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844228dd-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.092 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:20 compute-1 kernel: tap66ec4ebb-3c: entered promiscuous mode
Dec 06 07:36:20 compute-1 NetworkManager[49031]: <info>  [1765006580.0960] manager: (tap66ec4ebb-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.096 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:36:20 compute-1 ovn_controller[130279]: 2025-12-06T07:36:20Z|00510|binding|INFO|Claiming lport 66ec4ebb-3c12-40ec-8b46-858eeb166d64 for this chassis.
Dec 06 07:36:20 compute-1 ovn_controller[130279]: 2025-12-06T07:36:20Z|00511|binding|INFO|66ec4ebb-3c12-40ec-8b46-858eeb166d64: Claiming fa:16:3e:db:bc:74 10.100.0.3
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.099 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.109 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:bc:74 10.100.0.3'], port_security=['fa:16:3e:db:bc:74 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7bca344a-3af1-4217-b97d-3f288712b57d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=66ec4ebb-3c12-40ec-8b46-858eeb166d64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.110 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 66ec4ebb-3c12-40ec-8b46-858eeb166d64 in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 bound to our chassis
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.111 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:36:20 compute-1 ovn_controller[130279]: 2025-12-06T07:36:20Z|00512|binding|INFO|Setting lport 66ec4ebb-3c12-40ec-8b46-858eeb166d64 ovn-installed in OVS
Dec 06 07:36:20 compute-1 ovn_controller[130279]: 2025-12-06T07:36:20Z|00513|binding|INFO|Setting lport 66ec4ebb-3c12-40ec-8b46-858eeb166d64 up in Southbound
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.118 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.120 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.122 226109 INFO os_vif [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0')
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.127 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1d516c6b-4d04-4118-a17a-2b2017474ae2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.128 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap40bc9d32-81 in ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.130 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap40bc9d32-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.130 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[240e225a-8a48-4320-8476-316894497430]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.131 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[08629261-2a1f-4392-83be-200ab930a11c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 systemd-machined[190302]: New machine qemu-60-instance-00000082.
Dec 06 07:36:20 compute-1 systemd-udevd[275953]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.134 226109 DEBUG nova.virt.libvirt.driver [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Start _get_guest_xml network_info=[{"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.139 226109 WARNING nova.virt.libvirt.driver [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:36:20 compute-1 systemd[1]: Started Virtual Machine qemu-60-instance-00000082.
Dec 06 07:36:20 compute-1 NetworkManager[49031]: <info>  [1765006580.1470] device (tap66ec4ebb-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.145 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc4397b-e5f2-49cd-ba9d-6a58b787ce9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 NetworkManager[49031]: <info>  [1765006580.1487] device (tap66ec4ebb-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.148 226109 DEBUG nova.virt.libvirt.host [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.149 226109 DEBUG nova.virt.libvirt.host [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.153 226109 DEBUG nova.virt.libvirt.host [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.154 226109 DEBUG nova.virt.libvirt.host [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.156 226109 DEBUG nova.virt.libvirt.driver [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.156 226109 DEBUG nova.virt.hardware [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.156 226109 DEBUG nova.virt.hardware [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.157 226109 DEBUG nova.virt.hardware [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.157 226109 DEBUG nova.virt.hardware [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.157 226109 DEBUG nova.virt.hardware [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.157 226109 DEBUG nova.virt.hardware [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.158 226109 DEBUG nova.virt.hardware [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.158 226109 DEBUG nova.virt.hardware [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.159 226109 DEBUG nova.virt.hardware [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.159 226109 DEBUG nova.virt.hardware [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.159 226109 DEBUG nova.virt.hardware [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.159 226109 DEBUG nova.objects.instance [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.162 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[32276fde-6949-4b2d-9a7a-03e410317fa9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.181 226109 DEBUG oslo_concurrency.processutils [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.196 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[a92d1406-2de2-4ae6-b6bf-d0a1c41ab254]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 systemd-udevd[275956]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:36:20 compute-1 NetworkManager[49031]: <info>  [1765006580.2048] manager: (tap40bc9d32-80): new Veth device (/org/freedesktop/NetworkManager/Devices/237)
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.205 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2da7600d-5ed4-4c40-9f9d-f4118aa31a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.238 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed388ce-591e-4db3-94c2-54f07027c3d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.242 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[820b5bf9-0003-4edb-8546-4b995e6f9e13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 NetworkManager[49031]: <info>  [1765006580.2698] device (tap40bc9d32-80): carrier: link connected
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.279 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[845e7c2f-1c22-4a40-9f7d-09247d0d4675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.302 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[48d03c40-5f70-4e2e-ad66-2a6c85a56360]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685977, 'reachable_time': 40183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275986, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.319 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[69e5f4da-a8ef-409b-a348-bbae37635f69]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:6673'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685977, 'tstamp': 685977}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275987, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.339 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[24142d3c-b871-43ce-81c9-33086145336b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685977, 'reachable_time': 40183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275993, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.373 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ba16a6-23b3-4c09-ac8c-77f097d0c5dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.424 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac17429-9e35-43c2-bc86-c0db201b422b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.425 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.425 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.426 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:20 compute-1 NetworkManager[49031]: <info>  [1765006580.4281] manager: (tap40bc9d32-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.427 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:20 compute-1 kernel: tap40bc9d32-80: entered promiscuous mode
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.431 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:20 compute-1 ovn_controller[130279]: 2025-12-06T07:36:20Z|00514|binding|INFO|Releasing lport 0d2044a5-87cb-4c28-912c-9a2682bb94de from this chassis (sb_readonly=0)
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.435 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/40bc9d32-839b-4591-acbc-c5d535123ff1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/40bc9d32-839b-4591-acbc-c5d535123ff1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.437 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7732750a-c0aa-4249-ba65-a6eb19ee7ba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.437 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/40bc9d32-839b-4591-acbc-c5d535123ff1.pid.haproxy
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:36:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:20.438 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'env', 'PROCESS_TAG=haproxy-40bc9d32-839b-4591-acbc-c5d535123ff1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/40bc9d32-839b-4591-acbc-c5d535123ff1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.448 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.475 226109 DEBUG nova.compute.manager [req-468fb40d-46b5-45bc-b411-69a56fe94cbc req-2773a574-84cd-4532-92f9-cba5a96f311b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Received event network-vif-plugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.476 226109 DEBUG oslo_concurrency.lockutils [req-468fb40d-46b5-45bc-b411-69a56fe94cbc req-2773a574-84cd-4532-92f9-cba5a96f311b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.476 226109 DEBUG oslo_concurrency.lockutils [req-468fb40d-46b5-45bc-b411-69a56fe94cbc req-2773a574-84cd-4532-92f9-cba5a96f311b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.476 226109 DEBUG oslo_concurrency.lockutils [req-468fb40d-46b5-45bc-b411-69a56fe94cbc req-2773a574-84cd-4532-92f9-cba5a96f311b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.476 226109 DEBUG nova.compute.manager [req-468fb40d-46b5-45bc-b411-69a56fe94cbc req-2773a574-84cd-4532-92f9-cba5a96f311b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Processing event network-vif-plugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:36:20 compute-1 sudo[276058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:20 compute-1 sudo[276058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:20 compute-1 sudo[276058]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:20 compute-1 podman[276057]: 2025-12-06 07:36:20.796350232 +0000 UTC m=+0.024139114 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:36:20 compute-1 sudo[276092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:36:20 compute-1 sudo[276092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:20 compute-1 sudo[276092]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:20.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:20.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:20 compute-1 nova_compute[226101]: 2025-12-06 07:36:20.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:36:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/754207191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.075 226109 DEBUG oslo_concurrency.processutils [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.894s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.119 226109 DEBUG oslo_concurrency.processutils [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:21 compute-1 podman[276057]: 2025-12-06 07:36:21.274121753 +0000 UTC m=+0.501910625 container create 41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.294 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006581.2936666, 7bca344a-3af1-4217-b97d-3f288712b57d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.296 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] VM Started (Lifecycle Event)
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.301 226109 DEBUG nova.compute.manager [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.309 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.319 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.322 226109 INFO nova.virt.libvirt.driver [-] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Instance spawned successfully.
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.324 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.339 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.368 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.369 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.370 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.371 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.371 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.372 226109 DEBUG nova.virt.libvirt.driver [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.376 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.376 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006581.2941508, 7bca344a-3af1-4217-b97d-3f288712b57d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.376 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] VM Paused (Lifecycle Event)
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.423 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.427 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006581.3065271, 7bca344a-3af1-4217-b97d-3f288712b57d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.427 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] VM Resumed (Lifecycle Event)
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.450 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.459 226109 INFO nova.compute.manager [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Took 9.60 seconds to spawn the instance on the hypervisor.
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.460 226109 DEBUG nova.compute.manager [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.462 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.484 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:36:21 compute-1 systemd[1]: Started libpod-conmon-41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0.scope.
Dec 06 07:36:21 compute-1 podman[276187]: 2025-12-06 07:36:21.510451585 +0000 UTC m=+0.191227381 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:36:21 compute-1 podman[276186]: 2025-12-06 07:36:21.514152384 +0000 UTC m=+0.195314600 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:36:21 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:36:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68691bb501f12fc19d2d24b23b6f6325ed83f1fa9ebbad0ae4bd40bcc870ccf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.580 226109 INFO nova.compute.manager [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Took 10.99 seconds to build instance.
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.603 226109 DEBUG oslo_concurrency.lockutils [None req-b646587e-1316-4be9-8314-1f1eb32fde31 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:21 compute-1 nova_compute[226101]: 2025-12-06 07:36:21.758 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:36:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1505987283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:36:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:36:22 compute-1 ceph-mon[81689]: pgmap v2392: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.3 MiB/s wr, 54 op/s
Dec 06 07:36:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/754207191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:22 compute-1 podman[276188]: 2025-12-06 07:36:22.1436287 +0000 UTC m=+0.823506031 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:36:22 compute-1 podman[276057]: 2025-12-06 07:36:22.154007397 +0000 UTC m=+1.381796259 container init 41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:36:22 compute-1 podman[276057]: 2025-12-06 07:36:22.159811692 +0000 UTC m=+1.387600554 container start 41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:36:22 compute-1 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[276234]: [NOTICE]   (276254) : New worker (276256) forked
Dec 06 07:36:22 compute-1 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[276234]: [NOTICE]   (276254) : Loading success.
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.234 226109 DEBUG oslo_concurrency.processutils [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.235 226109 DEBUG nova.virt.libvirt.vif [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460317127',display_name='tempest-ServerActionsTestOtherB-server-1460317127',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460317127',id=118,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:36:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-50zw6ys5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:36:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=a01650e7-2f06-400b-82aa-0ad2c8b84e6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.236 226109 DEBUG nova.network.os_vif_util [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.236 226109 DEBUG nova.network.os_vif_util [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.237 226109 DEBUG nova.objects.instance [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_devices' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.640 226109 DEBUG nova.virt.libvirt.driver [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:36:22 compute-1 nova_compute[226101]:   <uuid>a01650e7-2f06-400b-82aa-0ad2c8b84e6c</uuid>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   <name>instance-00000076</name>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   <memory>196608</memory>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerActionsTestOtherB-server-1460317127</nova:name>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:36:20</nova:creationTime>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <nova:flavor name="m1.micro">
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <nova:memory>192</nova:memory>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <nova:user uuid="a70f6c3c5e2c402bb6fa0e0507e9b6dc">tempest-ServerActionsTestOtherB-874907570-project-member</nova:user>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <nova:project uuid="b10aa03d68eb4d4799d53538521cc364">tempest-ServerActionsTestOtherB-874907570</nova:project>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <nova:port uuid="844228dd-a09d-4ac5-bca7-4a4f664afd31">
Dec 06 07:36:22 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <system>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <entry name="serial">a01650e7-2f06-400b-82aa-0ad2c8b84e6c</entry>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <entry name="uuid">a01650e7-2f06-400b-82aa-0ad2c8b84e6c</entry>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     </system>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   <os>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   </os>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   <features>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   </features>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk">
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       </source>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/a01650e7-2f06-400b-82aa-0ad2c8b84e6c_disk.config">
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       </source>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:36:22 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:5e:74:4a"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <target dev="tap844228dd-a0"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c/console.log" append="off"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <video>
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     </video>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <input type="keyboard" bus="usb"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:36:22 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:36:22 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:36:22 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:36:22 compute-1 nova_compute[226101]: </domain>
Dec 06 07:36:22 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.641 226109 DEBUG nova.virt.libvirt.driver [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.642 226109 DEBUG nova.virt.libvirt.driver [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.643 226109 DEBUG nova.virt.libvirt.vif [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460317127',display_name='tempest-ServerActionsTestOtherB-server-1460317127',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460317127',id=118,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:36:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-50zw6ys5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:36:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=a01650e7-2f06-400b-82aa-0ad2c8b84e6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.643 226109 DEBUG nova.network.os_vif_util [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.644 226109 DEBUG nova.network.os_vif_util [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.644 226109 DEBUG os_vif [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.645 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.645 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.646 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.650 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.650 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844228dd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.651 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap844228dd-a0, col_values=(('external_ids', {'iface-id': '844228dd-a09d-4ac5-bca7-4a4f664afd31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:74:4a', 'vm-uuid': 'a01650e7-2f06-400b-82aa-0ad2c8b84e6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:22 compute-1 NetworkManager[49031]: <info>  [1765006582.6952] manager: (tap844228dd-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.694 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.696 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.701 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.703 226109 INFO os_vif [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0')
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.749 226109 DEBUG nova.compute.manager [req-22879504-e714-4872-82db-f1d566b8189a req-fec0eb60-16c0-4e10-8d95-5379a64c3ed2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Received event network-vif-plugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.750 226109 DEBUG oslo_concurrency.lockutils [req-22879504-e714-4872-82db-f1d566b8189a req-fec0eb60-16c0-4e10-8d95-5379a64c3ed2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.750 226109 DEBUG oslo_concurrency.lockutils [req-22879504-e714-4872-82db-f1d566b8189a req-fec0eb60-16c0-4e10-8d95-5379a64c3ed2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.750 226109 DEBUG oslo_concurrency.lockutils [req-22879504-e714-4872-82db-f1d566b8189a req-fec0eb60-16c0-4e10-8d95-5379a64c3ed2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.751 226109 DEBUG nova.compute.manager [req-22879504-e714-4872-82db-f1d566b8189a req-fec0eb60-16c0-4e10-8d95-5379a64c3ed2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] No waiting events found dispatching network-vif-plugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:36:22 compute-1 nova_compute[226101]: 2025-12-06 07:36:22.751 226109 WARNING nova.compute.manager [req-22879504-e714-4872-82db-f1d566b8189a req-fec0eb60-16c0-4e10-8d95-5379a64c3ed2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Received unexpected event network-vif-plugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 for instance with vm_state active and task_state None.
Dec 06 07:36:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:22.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:22.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:23 compute-1 nova_compute[226101]: 2025-12-06 07:36:23.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:24 compute-1 nova_compute[226101]: 2025-12-06 07:36:24.191 226109 DEBUG nova.compute.manager [None req-44943114-1cbb-4625-b7e0-15d2fd66b413 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:36:24 compute-1 nova_compute[226101]: 2025-12-06 07:36:24.245 226109 INFO nova.compute.manager [None req-44943114-1cbb-4625-b7e0-15d2fd66b413 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] instance snapshotting
Dec 06 07:36:24 compute-1 kernel: tap844228dd-a0: entered promiscuous mode
Dec 06 07:36:24 compute-1 NetworkManager[49031]: <info>  [1765006584.4510] manager: (tap844228dd-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Dec 06 07:36:24 compute-1 ovn_controller[130279]: 2025-12-06T07:36:24Z|00515|binding|INFO|Claiming lport 844228dd-a09d-4ac5-bca7-4a4f664afd31 for this chassis.
Dec 06 07:36:24 compute-1 ovn_controller[130279]: 2025-12-06T07:36:24Z|00516|binding|INFO|844228dd-a09d-4ac5-bca7-4a4f664afd31: Claiming fa:16:3e:5e:74:4a 10.100.0.12
Dec 06 07:36:24 compute-1 nova_compute[226101]: 2025-12-06 07:36:24.452 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:24 compute-1 nova_compute[226101]: 2025-12-06 07:36:24.460 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.474 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:74:4a 10.100.0.12'], port_security=['fa:16:3e:5e:74:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a01650e7-2f06-400b-82aa-0ad2c8b84e6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=844228dd-a09d-4ac5-bca7-4a4f664afd31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.476 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 844228dd-a09d-4ac5-bca7-4a4f664afd31 in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 bound to our chassis
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.477 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:36:24 compute-1 ovn_controller[130279]: 2025-12-06T07:36:24Z|00517|binding|INFO|Setting lport 844228dd-a09d-4ac5-bca7-4a4f664afd31 ovn-installed in OVS
Dec 06 07:36:24 compute-1 ovn_controller[130279]: 2025-12-06T07:36:24Z|00518|binding|INFO|Setting lport 844228dd-a09d-4ac5-bca7-4a4f664afd31 up in Southbound
Dec 06 07:36:24 compute-1 systemd-udevd[276282]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.492 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4242fde9-72c6-40c4-99f3-182add2952b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.493 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3beede49-11 in ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:36:24 compute-1 nova_compute[226101]: 2025-12-06 07:36:24.492 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.495 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3beede49-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.495 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[674c091d-a7af-4dbc-aed0-c97afc2eda18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 systemd-machined[190302]: New machine qemu-61-instance-00000076.
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.498 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed7cf11-d9c2-4457-a7fe-14e9ae631844]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 NetworkManager[49031]: <info>  [1765006584.5037] device (tap844228dd-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:36:24 compute-1 NetworkManager[49031]: <info>  [1765006584.5048] device (tap844228dd-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:36:24 compute-1 systemd[1]: Started Virtual Machine qemu-61-instance-00000076.
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.517 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[463eb1d2-7422-4980-aa76-6ec6e0d767b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.530 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[904580de-92fa-4e18-a4f8-8547b18740f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.560 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[0b197062-91c5-40c1-965f-070eab93f9cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 systemd-udevd[276285]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:36:24 compute-1 NetworkManager[49031]: <info>  [1765006584.5663] manager: (tap3beede49-10): new Veth device (/org/freedesktop/NetworkManager/Devices/241)
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.567 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0acb5d8c-7ad8-4d49-a898-c96b44c12423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 nova_compute[226101]: 2025-12-06 07:36:24.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:24 compute-1 nova_compute[226101]: 2025-12-06 07:36:24.591 226109 INFO nova.virt.libvirt.driver [None req-44943114-1cbb-4625-b7e0-15d2fd66b413 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Beginning live snapshot process
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.602 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[94fe1979-ed92-4254-b88a-3e8670cb22ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.606 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5c0ba4-36fa-4f85-a5bc-e477ad74d187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 NetworkManager[49031]: <info>  [1765006584.6291] device (tap3beede49-10): carrier: link connected
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.634 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[86d20eac-6a5a-45cb-94ed-968762064ea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.651 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[33c27765-52d9-4595-915a-a6057900a396]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686413, 'reachable_time': 42048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276315, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.667 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4c5b18-f90a-4def-ab54-00ba538cb702]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:c755'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686413, 'tstamp': 686413}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276316, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.689 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[88d562ac-d64e-4641-bf62-ed7313bcd1b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686413, 'reachable_time': 42048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276317, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
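The two RTM_NEWLINK replies above (and the RTM_NEWADDR reply between them) are netlink queries executed by the privsep daemon for the new veth end inside the metadata namespace; neutron's privileged ip_lib wraps pyroute2 for this. A rough equivalent with plain pyroute2, assuming it is installed and run as root (namespace and interface names are the ones printed in the replies):

    # Sketch with plain pyroute2; neutron's actual helpers add privsep and
    # error handling around calls like these.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163') as ns:
        idx = ns.link_lookup(ifname='tap3beede49-11')[0]
        msg = ns.get_links(idx)[0]
        # MAC and operational state, as seen in the RTM_NEWLINK dump above.
        print(msg.get_attr('IFLA_ADDRESS'), msg['state'])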
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.718 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[17869a80-615d-459a-852a-2a4541f05955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.772 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7771ed2e-0a01-4bb5-83b1-be52085a3fbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.774 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.774 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.775 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:24 compute-1 kernel: tap3beede49-10: entered promiscuous mode
Dec 06 07:36:24 compute-1 NetworkManager[49031]: <info>  [1765006584.7777] manager: (tap3beede49-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.782 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
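The transactions above remove any stale tap3beede49-10 from br-ex, add it to br-int, and point the interface at the Neutron metadata port by setting external_ids:iface-id, which is what lets ovn-controller bind it. A rough CLI equivalent of the add-port plus set pair (the agent itself goes through ovsdbapp, not this sketch):

    # Assumed operator-side equivalent using the ovs-vsctl CLI; bridge,
    # port name and iface-id are taken from the transaction log above.
    import subprocess

    subprocess.run(
        ['ovs-vsctl',
         '--', '--may-exist', 'add-port', 'br-int', 'tap3beede49-10',
         '--', 'set', 'Interface', 'tap3beede49-10',
         'external_ids:iface-id=058fee39-af19-4b00-b556-fb88bc823747'],
        check=True)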
Dec 06 07:36:24 compute-1 ovn_controller[130279]: 2025-12-06T07:36:24Z|00519|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.799 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.800 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a212a987-25e0-40bb-92bd-5c952d3b3c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.800 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:36:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:24.801 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'env', 'PROCESS_TAG=haproxy-3beede49-1cbb-425c-b1af-82f43dc57163', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3beede49-1cbb-425c-b1af-82f43dc57163.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
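With the configuration above written out, the agent spawns haproxy inside the ovnmeta namespace, listening on 169.254.169.254:80 and forwarding to the /var/lib/neutron/metadata_proxy socket with the X-OVN-Network-ID header added. A hypothetical spot check of the listener, not part of the log, assuming root and curl on the host:

    # Hypothetical verification; namespace and bind address come from the
    # rendered haproxy configuration above.
    import subprocess

    ns = 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163'
    res = subprocess.run(
        ['ip', 'netns', 'exec', ns, 'curl', '-s', '-o', '/dev/null',
         '-w', '%{http_code}', 'http://169.254.169.254/'],
        capture_output=True, text=True)
    print(res.stdout)  # any HTTP status here means the proxy is answering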
Dec 06 07:36:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:24.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:24.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1505987283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:25 compute-1 ceph-mon[81689]: pgmap v2393: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 MiB/s wr, 50 op/s
Dec 06 07:36:25 compute-1 podman[276378]: 2025-12-06 07:36:25.342841113 +0000 UTC m=+0.026975910 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:36:26 compute-1 nova_compute[226101]: 2025-12-06 07:36:26.635 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:26 compute-1 nova_compute[226101]: 2025-12-06 07:36:26.644 226109 DEBUG nova.virt.libvirt.imagebackend [None req-44943114-1cbb-4625-b7e0-15d2fd66b413 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:36:26 compute-1 nova_compute[226101]: 2025-12-06 07:36:26.760 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:26 compute-1 nova_compute[226101]: 2025-12-06 07:36:26.910 226109 DEBUG nova.storage.rbd_utils [None req-44943114-1cbb-4625-b7e0-15d2fd66b413 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] creating snapshot(ddf12e1260d34870aab35bdc16824754) on rbd image(7bca344a-3af1-4217-b97d-3f288712b57d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
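The create_snap call above is the live-snapshot path taking an RBD snapshot of the instance disk via librbd. Roughly the same operation with the rados/rbd Python bindings; the 'vms' pool and conffile path are assumptions based on common nova defaults, while the image and snapshot names are from the log line and client.openstack appears in the ceph-mon audit entries:

    # Sketch with the librbd Python bindings; pool and conffile are assumed.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:  # assumed images_rbd_pool
            with rbd.Image(ioctx, '7bca344a-3af1-4217-b97d-3f288712b57d_disk') as image:
                image.create_snap('ddf12e1260d34870aab35bdc16824754')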
Dec 06 07:36:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:26.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:26.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:27 compute-1 ceph-mon[81689]: pgmap v2394: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 MiB/s wr, 77 op/s
Dec 06 07:36:27 compute-1 podman[276378]: 2025-12-06 07:36:27.424161606 +0000 UTC m=+2.108296353 container create 13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:36:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:36:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4090472694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:27 compute-1 systemd[1]: Started libpod-conmon-13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159.scope.
Dec 06 07:36:27 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:36:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23583fd3766ec754d3671d4d9220199f6fc4b9c832b8798a266c738cb80aa708/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:28.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:28.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:29 compute-1 ceph-mon[81689]: pgmap v2395: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 96 op/s
Dec 06 07:36:29 compute-1 podman[276378]: 2025-12-06 07:36:29.751507339 +0000 UTC m=+4.435642096 container init 13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 07:36:29 compute-1 podman[276378]: 2025-12-06 07:36:29.763981542 +0000 UTC m=+4.448116279 container start 13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:36:29 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[276444]: [NOTICE]   (276463) : New worker (276465) forked
Dec 06 07:36:29 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[276444]: [NOTICE]   (276463) : Loading success.
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.810 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.811 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006587.918029, a01650e7-2f06-400b-82aa-0ad2c8b84e6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.811 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] VM Resumed (Lifecycle Event)
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.812 226109 DEBUG nova.compute.manager [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.818 226109 INFO nova.virt.libvirt.driver [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance rebooted successfully.
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.818 226109 DEBUG nova.compute.manager [None req-e785de0d-d1ee-4331-822d-854f6489bbb2 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.853 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.856 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.888 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] During sync_power_state the instance has a pending task (powering-on). Skip.
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.888 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006587.918135, a01650e7-2f06-400b-82aa-0ad2c8b84e6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.889 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] VM Started (Lifecycle Event)
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.909 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:36:29 compute-1 nova_compute[226101]: 2025-12-06 07:36:29.915 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
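The two "Synchronizing instance power state" messages above compare the database's power_state 4 against the hypervisor's 1. Decoded with nova's power-state constants (nova/compute/power_state.py), that is SHUTDOWN recorded in the DB versus RUNNING on the host, consistent with the powering-on task still in flight:

    # Decoder for the numeric states in the sync messages above; the mapping
    # follows nova/compute/power_state.py.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    print(POWER_STATES[4], '->', POWER_STATES[1])  # SHUTDOWN -> RUNNING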
Dec 06 07:36:30 compute-1 nova_compute[226101]: 2025-12-06 07:36:30.469 226109 DEBUG nova.compute.manager [req-4c09f404-8af4-4942-a2c1-c85f9df84723 req-801e2881-d594-4213-b4ed-d1edf2d63e0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:36:30 compute-1 nova_compute[226101]: 2025-12-06 07:36:30.470 226109 DEBUG oslo_concurrency.lockutils [req-4c09f404-8af4-4942-a2c1-c85f9df84723 req-801e2881-d594-4213-b4ed-d1edf2d63e0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:30 compute-1 nova_compute[226101]: 2025-12-06 07:36:30.470 226109 DEBUG oslo_concurrency.lockutils [req-4c09f404-8af4-4942-a2c1-c85f9df84723 req-801e2881-d594-4213-b4ed-d1edf2d63e0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:30 compute-1 nova_compute[226101]: 2025-12-06 07:36:30.470 226109 DEBUG oslo_concurrency.lockutils [req-4c09f404-8af4-4942-a2c1-c85f9df84723 req-801e2881-d594-4213-b4ed-d1edf2d63e0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:30 compute-1 nova_compute[226101]: 2025-12-06 07:36:30.470 226109 DEBUG nova.compute.manager [req-4c09f404-8af4-4942-a2c1-c85f9df84723 req-801e2881-d594-4213-b4ed-d1edf2d63e0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] No waiting events found dispatching network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:36:30 compute-1 nova_compute[226101]: 2025-12-06 07:36:30.471 226109 WARNING nova.compute.manager [req-4c09f404-8af4-4942-a2c1-c85f9df84723 req-801e2881-d594-4213-b4ed-d1edf2d63e0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received unexpected event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 for instance with vm_state active and task_state None.
Dec 06 07:36:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:30.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:30.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:31 compute-1 nova_compute[226101]: 2025-12-06 07:36:31.811 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.466 226109 DEBUG oslo_concurrency.lockutils [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.466 226109 DEBUG oslo_concurrency.lockutils [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.466 226109 DEBUG oslo_concurrency.lockutils [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.466 226109 DEBUG oslo_concurrency.lockutils [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.467 226109 DEBUG oslo_concurrency.lockutils [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.468 226109 INFO nova.compute.manager [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Terminating instance
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.468 226109 DEBUG nova.compute.manager [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:36:32 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.703 226109 DEBUG nova.compute.manager [req-49b50480-7385-4c4f-ac39-5f7cba5fb3a4 req-555b1d25-bf9d-456d-84c7-adcb4d8e9ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.704 226109 DEBUG oslo_concurrency.lockutils [req-49b50480-7385-4c4f-ac39-5f7cba5fb3a4 req-555b1d25-bf9d-456d-84c7-adcb4d8e9ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.704 226109 DEBUG oslo_concurrency.lockutils [req-49b50480-7385-4c4f-ac39-5f7cba5fb3a4 req-555b1d25-bf9d-456d-84c7-adcb4d8e9ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.704 226109 DEBUG oslo_concurrency.lockutils [req-49b50480-7385-4c4f-ac39-5f7cba5fb3a4 req-555b1d25-bf9d-456d-84c7-adcb4d8e9ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.704 226109 DEBUG nova.compute.manager [req-49b50480-7385-4c4f-ac39-5f7cba5fb3a4 req-555b1d25-bf9d-456d-84c7-adcb4d8e9ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] No waiting events found dispatching network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.705 226109 WARNING nova.compute.manager [req-49b50480-7385-4c4f-ac39-5f7cba5fb3a4 req-555b1d25-bf9d-456d-84c7-adcb4d8e9ef8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received unexpected event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 for instance with vm_state active and task_state deleting.
Dec 06 07:36:32 compute-1 kernel: tap844228dd-a0 (unregistering): left promiscuous mode
Dec 06 07:36:32 compute-1 NetworkManager[49031]: <info>  [1765006592.7269] device (tap844228dd-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:36:32 compute-1 ovn_controller[130279]: 2025-12-06T07:36:32Z|00520|binding|INFO|Releasing lport 844228dd-a09d-4ac5-bca7-4a4f664afd31 from this chassis (sb_readonly=0)
Dec 06 07:36:32 compute-1 ovn_controller[130279]: 2025-12-06T07:36:32Z|00521|binding|INFO|Setting lport 844228dd-a09d-4ac5-bca7-4a4f664afd31 down in Southbound
Dec 06 07:36:32 compute-1 ovn_controller[130279]: 2025-12-06T07:36:32Z|00522|binding|INFO|Removing iface tap844228dd-a0 ovn-installed in OVS
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.745 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.765 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:32 compute-1 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000076.scope: Deactivated successfully.
Dec 06 07:36:32 compute-1 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000076.scope: Consumed 5.244s CPU time.
Dec 06 07:36:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:32.799 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:74:4a 10.100.0.12'], port_security=['fa:16:3e:5e:74:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a01650e7-2f06-400b-82aa-0ad2c8b84e6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=844228dd-a09d-4ac5-bca7-4a4f664afd31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:36:32 compute-1 systemd-machined[190302]: Machine qemu-61-instance-00000076 terminated.
Dec 06 07:36:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:32.804 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 844228dd-a09d-4ac5-bca7-4a4f664afd31 in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 unbound from our chassis
Dec 06 07:36:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:32.806 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3beede49-1cbb-425c-b1af-82f43dc57163, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:36:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:32.807 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[751532a7-b6a1-48b7-9ccc-53e8aac54edf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:32.808 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace which is not needed anymore
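Once the last VIF in the datapath is unbound, the agent deletes the ovnmeta namespace, which also takes down the haproxy seen exiting a little further below. A hypothetical follow-up check, not in the log, that the namespace is gone:

    # Hypothetical check, assuming iproute2 on the host; the namespace name
    # is the one the agent reports cleaning up above.
    import subprocess

    out = subprocess.run(['ip', 'netns', 'list'],
                         capture_output=True, text=True).stdout
    ns = 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163'
    print('still present' if ns in out else 'removed')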
Dec 06 07:36:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:32.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:32.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.935 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.942 226109 INFO nova.virt.libvirt.driver [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Instance destroyed successfully.
Dec 06 07:36:32 compute-1 nova_compute[226101]: 2025-12-06 07:36:32.942 226109 DEBUG nova.objects.instance [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'resources' on Instance uuid a01650e7-2f06-400b-82aa-0ad2c8b84e6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:33 compute-1 nova_compute[226101]: 2025-12-06 07:36:33.032 226109 DEBUG nova.virt.libvirt.vif [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:31:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1460317127',display_name='tempest-ServerActionsTestOtherB-server-1460317127',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1460317127',id=118,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:36:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-50zw6ys5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:36:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=a01650e7-2f06-400b-82aa-0ad2c8b84e6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:36:33 compute-1 nova_compute[226101]: 2025-12-06 07:36:33.032 226109 DEBUG nova.network.os_vif_util [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "address": "fa:16:3e:5e:74:4a", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap844228dd-a0", "ovs_interfaceid": "844228dd-a09d-4ac5-bca7-4a4f664afd31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:36:33 compute-1 nova_compute[226101]: 2025-12-06 07:36:33.033 226109 DEBUG nova.network.os_vif_util [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:36:33 compute-1 nova_compute[226101]: 2025-12-06 07:36:33.033 226109 DEBUG os_vif [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:36:33 compute-1 nova_compute[226101]: 2025-12-06 07:36:33.035 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:33 compute-1 nova_compute[226101]: 2025-12-06 07:36:33.035 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844228dd-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:33 compute-1 nova_compute[226101]: 2025-12-06 07:36:33.036 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:33 compute-1 nova_compute[226101]: 2025-12-06 07:36:33.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:33 compute-1 nova_compute[226101]: 2025-12-06 07:36:33.040 226109 INFO os_vif [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:74:4a,bridge_name='br-int',has_traffic_filtering=True,id=844228dd-a09d-4ac5-bca7-4a4f664afd31,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap844228dd-a0')
Dec 06 07:36:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4090472694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:33 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[276444]: [NOTICE]   (276463) : haproxy version is 2.8.14-c23fe91
Dec 06 07:36:33 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[276444]: [NOTICE]   (276463) : path to executable is /usr/sbin/haproxy
Dec 06 07:36:33 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[276444]: [WARNING]  (276463) : Exiting Master process...
Dec 06 07:36:33 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[276444]: [ALERT]    (276463) : Current worker (276465) exited with code 143 (Terminated)
Dec 06 07:36:33 compute-1 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[276444]: [WARNING]  (276463) : All workers exited. Exiting... (0)
Dec 06 07:36:33 compute-1 systemd[1]: libpod-13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159.scope: Deactivated successfully.
Dec 06 07:36:33 compute-1 podman[276501]: 2025-12-06 07:36:33.313988308 +0000 UTC m=+0.368639830 container died 13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:36:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e297 e297: 3 total, 3 up, 3 in
Dec 06 07:36:34 compute-1 nova_compute[226101]: 2025-12-06 07:36:34.883 226109 DEBUG nova.compute.manager [req-1e203ac8-4abb-464d-8da0-4bdf0ea15699 req-d5fa976a-6677-47df-9ff1-ea462f7222f8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-vif-unplugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:36:34 compute-1 nova_compute[226101]: 2025-12-06 07:36:34.883 226109 DEBUG oslo_concurrency.lockutils [req-1e203ac8-4abb-464d-8da0-4bdf0ea15699 req-d5fa976a-6677-47df-9ff1-ea462f7222f8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:34 compute-1 nova_compute[226101]: 2025-12-06 07:36:34.884 226109 DEBUG oslo_concurrency.lockutils [req-1e203ac8-4abb-464d-8da0-4bdf0ea15699 req-d5fa976a-6677-47df-9ff1-ea462f7222f8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:34 compute-1 nova_compute[226101]: 2025-12-06 07:36:34.884 226109 DEBUG oslo_concurrency.lockutils [req-1e203ac8-4abb-464d-8da0-4bdf0ea15699 req-d5fa976a-6677-47df-9ff1-ea462f7222f8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:34 compute-1 nova_compute[226101]: 2025-12-06 07:36:34.884 226109 DEBUG nova.compute.manager [req-1e203ac8-4abb-464d-8da0-4bdf0ea15699 req-d5fa976a-6677-47df-9ff1-ea462f7222f8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] No waiting events found dispatching network-vif-unplugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:36:34 compute-1 nova_compute[226101]: 2025-12-06 07:36:34.885 226109 DEBUG nova.compute.manager [req-1e203ac8-4abb-464d-8da0-4bdf0ea15699 req-d5fa976a-6677-47df-9ff1-ea462f7222f8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-vif-unplugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:36:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:34.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:34.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:35 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159-userdata-shm.mount: Deactivated successfully.
Dec 06 07:36:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-23583fd3766ec754d3671d4d9220199f6fc4b9c832b8798a266c738cb80aa708-merged.mount: Deactivated successfully.
Dec 06 07:36:36 compute-1 ceph-mon[81689]: pgmap v2396: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 96 op/s
Dec 06 07:36:36 compute-1 ceph-mon[81689]: pgmap v2397: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 33 KiB/s wr, 164 op/s
Dec 06 07:36:36 compute-1 ceph-mon[81689]: pgmap v2398: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.8 KiB/s wr, 153 op/s
Dec 06 07:36:36 compute-1 podman[276501]: 2025-12-06 07:36:36.15347872 +0000 UTC m=+3.208130252 container cleanup 13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:36:36 compute-1 systemd[1]: libpod-conmon-13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159.scope: Deactivated successfully.
Dec 06 07:36:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e298 e298: 3 total, 3 up, 3 in
Dec 06 07:36:36 compute-1 ceph-mon[81689]: osdmap e297: 3 total, 3 up, 3 in
Dec 06 07:36:36 compute-1 ceph-mon[81689]: pgmap v2400: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 KiB/s wr, 141 op/s
Dec 06 07:36:36 compute-1 podman[276558]: 2025-12-06 07:36:36.737110813 +0000 UTC m=+0.554772823 container remove 13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:36:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:36.744 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e80dfb6e-c949-4c01-affe-6ad3a78b020a]: (4, ('Sat Dec  6 07:36:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159)\n13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159\nSat Dec  6 07:36:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159)\n13e44c302589b3b651197e43d058646a4cbf77e537b8956eac3ae6b15cf5b159\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:36.746 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[62f02ab6-4c6e-4b84-ae69-24e802cd9c86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:36.749 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:36 compute-1 kernel: tap3beede49-10: left promiscuous mode
Dec 06 07:36:36 compute-1 nova_compute[226101]: 2025-12-06 07:36:36.751 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:36 compute-1 nova_compute[226101]: 2025-12-06 07:36:36.770 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:36.774 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d955c7-7807-42d4-9aa7-fd729324ff85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:36.792 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[69e2bb81-8b06-4c79-b5de-258c2413769c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:36.793 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[59509b03-a4ea-43c0-8f5c-7abf40877c79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:36.809 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb1e987-da3f-414c-ac47-1068cc7ce376]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686405, 'reachable_time': 22169, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276574, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:36.815 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:36:36 compute-1 systemd[1]: run-netns-ovnmeta\x2d3beede49\x2d1cbb\x2d425c\x2db1af\x2d82f43dc57163.mount: Deactivated successfully.
Dec 06 07:36:36 compute-1 nova_compute[226101]: 2025-12-06 07:36:36.816 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:36.815 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[03463673-b471-4e69-a48a-b89f63ad8b5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:36:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:36.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:36.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:37 compute-1 nova_compute[226101]: 2025-12-06 07:36:37.100 226109 DEBUG nova.compute.manager [req-d56dcac7-ffed-4c28-896a-33c338019b03 req-ce42b669-52b9-4041-8551-f27e43b9c23f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:36:37 compute-1 nova_compute[226101]: 2025-12-06 07:36:37.101 226109 DEBUG oslo_concurrency.lockutils [req-d56dcac7-ffed-4c28-896a-33c338019b03 req-ce42b669-52b9-4041-8551-f27e43b9c23f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:37 compute-1 nova_compute[226101]: 2025-12-06 07:36:37.101 226109 DEBUG oslo_concurrency.lockutils [req-d56dcac7-ffed-4c28-896a-33c338019b03 req-ce42b669-52b9-4041-8551-f27e43b9c23f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:37 compute-1 nova_compute[226101]: 2025-12-06 07:36:37.101 226109 DEBUG oslo_concurrency.lockutils [req-d56dcac7-ffed-4c28-896a-33c338019b03 req-ce42b669-52b9-4041-8551-f27e43b9c23f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:37 compute-1 nova_compute[226101]: 2025-12-06 07:36:37.102 226109 DEBUG nova.compute.manager [req-d56dcac7-ffed-4c28-896a-33c338019b03 req-ce42b669-52b9-4041-8551-f27e43b9c23f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] No waiting events found dispatching network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:36:37 compute-1 nova_compute[226101]: 2025-12-06 07:36:37.102 226109 WARNING nova.compute.manager [req-d56dcac7-ffed-4c28-896a-33c338019b03 req-ce42b669-52b9-4041-8551-f27e43b9c23f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received unexpected event network-vif-plugged-844228dd-a09d-4ac5-bca7-4a4f664afd31 for instance with vm_state active and task_state deleting.
Dec 06 07:36:37 compute-1 ceph-mon[81689]: osdmap e298: 3 total, 3 up, 3 in
Dec 06 07:36:37 compute-1 ceph-mon[81689]: pgmap v2402: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 14 KiB/s wr, 133 op/s
Dec 06 07:36:37 compute-1 nova_compute[226101]: 2025-12-06 07:36:37.662 226109 DEBUG nova.storage.rbd_utils [None req-44943114-1cbb-4625-b7e0-15d2fd66b413 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] cloning vms/7bca344a-3af1-4217-b97d-3f288712b57d_disk@ddf12e1260d34870aab35bdc16824754 to images/441cc72d-7648-41a4-9dbf-54930196cf56 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:36:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:38 compute-1 nova_compute[226101]: 2025-12-06 07:36:38.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:38 compute-1 nova_compute[226101]: 2025-12-06 07:36:38.530 226109 DEBUG nova.storage.rbd_utils [None req-44943114-1cbb-4625-b7e0-15d2fd66b413 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] flattening images/441cc72d-7648-41a4-9dbf-54930196cf56 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:36:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/607710496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:38.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:38.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:39 compute-1 nova_compute[226101]: 2025-12-06 07:36:39.141 226109 DEBUG nova.storage.rbd_utils [None req-44943114-1cbb-4625-b7e0-15d2fd66b413 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] removing snapshot(ddf12e1260d34870aab35bdc16824754) on rbd image(7bca344a-3af1-4217-b97d-3f288712b57d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:36:40 compute-1 ovn_controller[130279]: 2025-12-06T07:36:40Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:bc:74 10.100.0.3
Dec 06 07:36:40 compute-1 ovn_controller[130279]: 2025-12-06T07:36:40Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:bc:74 10.100.0.3
Dec 06 07:36:40 compute-1 ceph-mon[81689]: pgmap v2403: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 11 KiB/s wr, 16 op/s
Dec 06 07:36:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3620983311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:40.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:40.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e299 e299: 3 total, 3 up, 3 in
Dec 06 07:36:41 compute-1 nova_compute[226101]: 2025-12-06 07:36:41.819 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:42.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:42.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:43 compute-1 nova_compute[226101]: 2025-12-06 07:36:43.040 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:43 compute-1 ceph-mon[81689]: pgmap v2404: 305 pgs: 305 active+clean; 766 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.7 MiB/s wr, 202 op/s
Dec 06 07:36:43 compute-1 nova_compute[226101]: 2025-12-06 07:36:43.897 226109 INFO nova.virt.libvirt.driver [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Deleting instance files /var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c_del
Dec 06 07:36:43 compute-1 nova_compute[226101]: 2025-12-06 07:36:43.897 226109 INFO nova.virt.libvirt.driver [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Deletion of /var/lib/nova/instances/a01650e7-2f06-400b-82aa-0ad2c8b84e6c_del complete
Dec 06 07:36:44 compute-1 nova_compute[226101]: 2025-12-06 07:36:44.038 226109 INFO nova.compute.manager [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Took 11.57 seconds to destroy the instance on the hypervisor.
Dec 06 07:36:44 compute-1 nova_compute[226101]: 2025-12-06 07:36:44.039 226109 DEBUG oslo.service.loopingcall [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:36:44 compute-1 nova_compute[226101]: 2025-12-06 07:36:44.039 226109 DEBUG nova.compute.manager [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:36:44 compute-1 nova_compute[226101]: 2025-12-06 07:36:44.039 226109 DEBUG nova.network.neutron [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:36:44 compute-1 nova_compute[226101]: 2025-12-06 07:36:44.066 226109 DEBUG nova.storage.rbd_utils [None req-44943114-1cbb-4625-b7e0-15d2fd66b413 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] creating snapshot(snap) on rbd image(441cc72d-7648-41a4-9dbf-54930196cf56) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:36:44 compute-1 ceph-mon[81689]: osdmap e299: 3 total, 3 up, 3 in
Dec 06 07:36:44 compute-1 ceph-mon[81689]: pgmap v2406: 305 pgs: 305 active+clean; 766 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.7 MiB/s wr, 199 op/s
Dec 06 07:36:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:44.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:44.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e300 e300: 3 total, 3 up, 3 in
Dec 06 07:36:46 compute-1 ceph-mon[81689]: pgmap v2407: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 765 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.6 MiB/s wr, 241 op/s
Dec 06 07:36:46 compute-1 nova_compute[226101]: 2025-12-06 07:36:46.822 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:46.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:46.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:47 compute-1 nova_compute[226101]: 2025-12-06 07:36:47.044 226109 DEBUG nova.network.neutron [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:36:47 compute-1 nova_compute[226101]: 2025-12-06 07:36:47.072 226109 INFO nova.compute.manager [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Took 3.03 seconds to deallocate network for instance.
Dec 06 07:36:47 compute-1 nova_compute[226101]: 2025-12-06 07:36:47.122 226109 DEBUG oslo_concurrency.lockutils [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:47 compute-1 nova_compute[226101]: 2025-12-06 07:36:47.123 226109 DEBUG oslo_concurrency.lockutils [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:47 compute-1 nova_compute[226101]: 2025-12-06 07:36:47.126 226109 DEBUG nova.compute.manager [req-c19a288c-8a8f-41a0-930d-db466fe6273a req-27ab6604-5f48-4007-903d-9b52165f8b47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Received event network-vif-deleted-844228dd-a09d-4ac5-bca7-4a4f664afd31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:36:47 compute-1 nova_compute[226101]: 2025-12-06 07:36:47.128 226109 DEBUG oslo_concurrency.lockutils [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:47 compute-1 nova_compute[226101]: 2025-12-06 07:36:47.193 226109 INFO nova.scheduler.client.report [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Deleted allocations for instance a01650e7-2f06-400b-82aa-0ad2c8b84e6c
Dec 06 07:36:47 compute-1 sshd-session[276666]: Connection reset by authenticating user root 45.140.17.124 port 65088 [preauth]
Dec 06 07:36:47 compute-1 nova_compute[226101]: 2025-12-06 07:36:47.408 226109 DEBUG oslo_concurrency.lockutils [None req-23b633b3-e654-402e-b794-b55291e903ad a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "a01650e7-2f06-400b-82aa-0ad2c8b84e6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:47 compute-1 ceph-mon[81689]: osdmap e300: 3 total, 3 up, 3 in
Dec 06 07:36:47 compute-1 ceph-mon[81689]: pgmap v2409: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 763 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.8 MiB/s wr, 341 op/s
Dec 06 07:36:47 compute-1 nova_compute[226101]: 2025-12-06 07:36:47.942 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006592.9407585, a01650e7-2f06-400b-82aa-0ad2c8b84e6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:36:47 compute-1 nova_compute[226101]: 2025-12-06 07:36:47.942 226109 INFO nova.compute.manager [-] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] VM Stopped (Lifecycle Event)
Dec 06 07:36:47 compute-1 nova_compute[226101]: 2025-12-06 07:36:47.997 226109 DEBUG nova.compute.manager [None req-a0124cd2-854f-458c-9b11-b41c518d8f8d - - - - - -] [instance: a01650e7-2f06-400b-82aa-0ad2c8b84e6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:36:48 compute-1 nova_compute[226101]: 2025-12-06 07:36:48.087 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:48 compute-1 nova_compute[226101]: 2025-12-06 07:36:48.447 226109 INFO nova.virt.libvirt.driver [None req-44943114-1cbb-4625-b7e0-15d2fd66b413 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Snapshot image upload complete
Dec 06 07:36:48 compute-1 nova_compute[226101]: 2025-12-06 07:36:48.447 226109 INFO nova.compute.manager [None req-44943114-1cbb-4625-b7e0-15d2fd66b413 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Took 24.20 seconds to snapshot the instance on the hypervisor.
Dec 06 07:36:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:48.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:48.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:50 compute-1 sshd-session[276672]: Connection reset by authenticating user root 45.140.17.124 port 65104 [preauth]
Dec 06 07:36:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:50.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:50.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:51 compute-1 nova_compute[226101]: 2025-12-06 07:36:51.829 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:51 compute-1 ceph-mon[81689]: pgmap v2410: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 763 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 55 KiB/s wr, 155 op/s
Dec 06 07:36:52 compute-1 podman[276677]: 2025-12-06 07:36:52.09799857 +0000 UTC m=+0.066198345 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:36:52 compute-1 podman[276678]: 2025-12-06 07:36:52.118036748 +0000 UTC m=+0.086112510 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true)
Dec 06 07:36:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:52.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:52.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:53 compute-1 sshd-session[276675]: Connection reset by authenticating user root 45.140.17.124 port 65114 [preauth]
Dec 06 07:36:53 compute-1 nova_compute[226101]: 2025-12-06 07:36:53.088 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:53 compute-1 podman[276717]: 2025-12-06 07:36:53.117225475 +0000 UTC m=+0.099088253 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:36:53 compute-1 ceph-mon[81689]: pgmap v2411: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 126 KiB/s wr, 180 op/s
Dec 06 07:36:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e301 e301: 3 total, 3 up, 3 in
Dec 06 07:36:54 compute-1 ceph-mon[81689]: pgmap v2412: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 117 KiB/s wr, 167 op/s
Dec 06 07:36:54 compute-1 ceph-mon[81689]: osdmap e301: 3 total, 3 up, 3 in
Dec 06 07:36:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4217498047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1062886133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:54.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:54.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e302 e302: 3 total, 3 up, 3 in
Dec 06 07:36:55 compute-1 sshd-session[276743]: Connection reset by authenticating user root 45.140.17.124 port 61858 [preauth]
Dec 06 07:36:56 compute-1 nova_compute[226101]: 2025-12-06 07:36:56.246 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:56.246 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:36:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:56.248 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:36:56 compute-1 nova_compute[226101]: 2025-12-06 07:36:56.870 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:36:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:56.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:36:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:56.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:57 compute-1 ceph-mon[81689]: pgmap v2414: 305 pgs: 305 active+clean; 786 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 188 KiB/s rd, 880 KiB/s wr, 79 op/s
Dec 06 07:36:57 compute-1 ceph-mon[81689]: osdmap e302: 3 total, 3 up, 3 in
Dec 06 07:36:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2392313173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:58 compute-1 nova_compute[226101]: 2025-12-06 07:36:58.091 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:58 compute-1 ceph-mon[81689]: pgmap v2416: 305 pgs: 305 active+clean; 830 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 711 KiB/s rd, 3.1 MiB/s wr, 157 op/s
Dec 06 07:36:58 compute-1 sshd-session[276745]: Connection reset by authenticating user root 45.140.17.124 port 61876 [preauth]
Dec 06 07:36:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:58.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:36:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:58.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:36:59.250 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e303 e303: 3 total, 3 up, 3 in
Dec 06 07:37:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:37:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:00.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:37:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:00.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:01 compute-1 ceph-mon[81689]: pgmap v2417: 305 pgs: 305 active+clean; 830 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 3.1 MiB/s wr, 103 op/s
Dec 06 07:37:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/825412902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:01.654 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:01.655 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:01.656 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:01 compute-1 ceph-mon[81689]: osdmap e303: 3 total, 3 up, 3 in
Dec 06 07:37:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1614414098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:01 compute-1 ceph-mon[81689]: pgmap v2419: 305 pgs: 305 active+clean; 861 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 959 KiB/s rd, 5.7 MiB/s wr, 160 op/s
Dec 06 07:37:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3009061304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/923916124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:01 compute-1 nova_compute[226101]: 2025-12-06 07:37:01.873 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:02.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:02.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:03 compute-1 nova_compute[226101]: 2025-12-06 07:37:03.092 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:04 compute-1 ceph-mon[81689]: pgmap v2420: 305 pgs: 305 active+clean; 861 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 742 KiB/s rd, 4.5 MiB/s wr, 114 op/s
Dec 06 07:37:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:04.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:04.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:06 compute-1 nova_compute[226101]: 2025-12-06 07:37:06.950 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:06.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:06.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:08 compute-1 nova_compute[226101]: 2025-12-06 07:37:08.144 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:08 compute-1 ceph-mon[81689]: pgmap v2421: 305 pgs: 305 active+clean; 861 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 65 op/s
Dec 06 07:37:08 compute-1 nova_compute[226101]: 2025-12-06 07:37:08.454 226109 DEBUG nova.compute.manager [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Dec 06 07:37:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:08.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:08.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:09 compute-1 ceph-mon[81689]: pgmap v2422: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 59 op/s
Dec 06 07:37:09 compute-1 nova_compute[226101]: 2025-12-06 07:37:09.561 226109 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:09 compute-1 nova_compute[226101]: 2025-12-06 07:37:09.562 226109 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:09 compute-1 nova_compute[226101]: 2025-12-06 07:37:09.669 226109 DEBUG nova.objects.instance [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:37:09 compute-1 nova_compute[226101]: 2025-12-06 07:37:09.746 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:37:09 compute-1 nova_compute[226101]: 2025-12-06 07:37:09.747 226109 INFO nova.compute.claims [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:37:09 compute-1 nova_compute[226101]: 2025-12-06 07:37:09.747 226109 DEBUG nova.objects.instance [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'resources' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:37:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:09 compute-1 nova_compute[226101]: 2025-12-06 07:37:09.767 226109 DEBUG nova.objects.instance [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.016 226109 INFO nova.compute.resource_tracker [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating resource usage from migration e793cfc3-2f7a-47a4-9e94-7497b4835c62
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.016 226109 DEBUG nova.compute.resource_tracker [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Starting to track incoming migration e793cfc3-2f7a-47a4-9e94-7497b4835c62 with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
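The "Stashing vm_state" / resize_claim / incoming-migration lines are the compute-side trace of a cold resize of instance 3d4b97f9-... to flavor fb97f55a-.... A hedged sketch of the client-side call that produces this trace, using openstacksdk (the cloud name is hypothetical; the server and flavor IDs are copied from the log):

    import openstack

    # Hypothetical entry in clouds.yaml.
    conn = openstack.connect(cloud='overcloud')
    server = conn.compute.get_server('3d4b97f9-b8c3-4764-87f9-4006968fecd4')
    # Flavor id taken from the "Starting to track incoming migration" line.
    conn.compute.resize_server(server, 'fb97f55a-36c0-42f2-8156-c1b04eb23dd0')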
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.146 226109 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/225869628' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:37:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/225869628' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:37:10 compute-1 ceph-mon[81689]: pgmap v2423: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 59 op/s
Dec 06 07:37:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1594552602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:37:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/407077210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.649 226109 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
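The resource tracker sizes its RBD-backed disk pool by shelling out to ceph df, as logged by processutils before and after the 0.503 s call. The same query can be reproduced and parsed (client id and conf path copied from the command line above):

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])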
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.655 226109 DEBUG nova.compute.provider_tree [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.667 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.791 226109 DEBUG nova.scheduler.client.report [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
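The inventory reported to placement determines schedulable capacity as (total - reserved) * allocation_ratio per resource class. Plugging in the values from the line above:

    # Effective capacity = (total - reserved) * allocation_ratio,
    # using the inventory reported for provider 466e0fbd-....
    vcpu   = (8    - 0)   * 4.0   # 32 schedulable VCPUs
    ram_mb = (7680 - 512) * 1.0   # 7168 MB of schedulable RAM
    disk   = (20   - 1)   * 0.9   # ~17 GB of schedulable disk
    print(vcpu, ram_mb, disk)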
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.909 226109 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.909 226109 INFO nova.compute.manager [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Migrating
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.916 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.917 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.917 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:37:10 compute-1 nova_compute[226101]: 2025-12-06 07:37:10.918 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:10.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:37:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3571071003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:11 compute-1 nova_compute[226101]: 2025-12-06 07:37:11.780 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.862s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:11 compute-1 nova_compute[226101]: 2025-12-06 07:37:11.952 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.015 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.015 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.019 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.019 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.023 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.023 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.214 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.215 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3809MB free_disk=20.648136138916016GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.215 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.215 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.400 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Migration for instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.430 226109 INFO nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating resource usage from migration e793cfc3-2f7a-47a4-9e94-7497b4835c62
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.430 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Starting to track incoming migration e793cfc3-2f7a-47a4-9e94-7497b4835c62 with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.448 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 829161c5-19b5-459e-88a3-58512aaa5fc7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.448 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.449 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 7bca344a-3af1-4217-b97d-3f288712b57d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:37:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/407077210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:12 compute-1 ceph-mon[81689]: pgmap v2424: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.512 226109 WARNING nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.512 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.512 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=1088MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:37:12 compute-1 nova_compute[226101]: 2025-12-06 07:37:12.615 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:37:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:12.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:37:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:12.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3571071003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:13 compute-1 nova_compute[226101]: 2025-12-06 07:37:13.146 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:37:13 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/679455447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:13 compute-1 nova_compute[226101]: 2025-12-06 07:37:13.392 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.778s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:13 compute-1 nova_compute[226101]: 2025-12-06 07:37:13.398 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:37:13 compute-1 nova_compute[226101]: 2025-12-06 07:37:13.585 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:37:13 compute-1 sshd-session[276814]: Accepted publickey for nova from 192.168.122.102 port 60722 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:37:13 compute-1 nova_compute[226101]: 2025-12-06 07:37:13.716 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:37:13 compute-1 nova_compute[226101]: 2025-12-06 07:37:13.717 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:13 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Dec 06 07:37:13 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec 06 07:37:13 compute-1 systemd-logind[788]: New session 57 of user nova.
Dec 06 07:37:13 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec 06 07:37:13 compute-1 systemd[1]: Starting User Manager for UID 42436...
Dec 06 07:37:13 compute-1 systemd[276818]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:37:13 compute-1 systemd[276818]: Queued start job for default target Main User Target.
Dec 06 07:37:13 compute-1 systemd[276818]: Created slice User Application Slice.
Dec 06 07:37:13 compute-1 systemd[276818]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:37:13 compute-1 systemd[276818]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 07:37:13 compute-1 systemd[276818]: Reached target Paths.
Dec 06 07:37:13 compute-1 systemd[276818]: Reached target Timers.
Dec 06 07:37:13 compute-1 systemd[276818]: Starting D-Bus User Message Bus Socket...
Dec 06 07:37:13 compute-1 systemd[276818]: Starting Create User's Volatile Files and Directories...
Dec 06 07:37:13 compute-1 systemd[276818]: Finished Create User's Volatile Files and Directories.
Dec 06 07:37:13 compute-1 systemd[276818]: Listening on D-Bus User Message Bus Socket.
Dec 06 07:37:13 compute-1 systemd[276818]: Reached target Sockets.
Dec 06 07:37:13 compute-1 systemd[276818]: Reached target Basic System.
Dec 06 07:37:13 compute-1 systemd[276818]: Reached target Main User Target.
Dec 06 07:37:13 compute-1 systemd[276818]: Startup finished in 165ms.
Dec 06 07:37:13 compute-1 systemd[1]: Started User Manager for UID 42436.
Dec 06 07:37:13 compute-1 systemd[1]: Started Session 57 of User nova.
Dec 06 07:37:13 compute-1 sshd-session[276814]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:37:14 compute-1 ceph-mon[81689]: pgmap v2425: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 38 KiB/s wr, 83 op/s
Dec 06 07:37:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/679455447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:14 compute-1 sshd-session[276833]: Received disconnect from 192.168.122.102 port 60722:11: disconnected by user
Dec 06 07:37:14 compute-1 sshd-session[276833]: Disconnected from user nova 192.168.122.102 port 60722
Dec 06 07:37:14 compute-1 sshd-session[276814]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:37:14 compute-1 systemd[1]: session-57.scope: Deactivated successfully.
Dec 06 07:37:14 compute-1 systemd-logind[788]: Session 57 logged out. Waiting for processes to exit.
Dec 06 07:37:14 compute-1 systemd-logind[788]: Removed session 57.
Dec 06 07:37:14 compute-1 sshd-session[276835]: Accepted publickey for nova from 192.168.122.102 port 60732 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:37:14 compute-1 systemd-logind[788]: New session 59 of user nova.
Dec 06 07:37:14 compute-1 systemd[1]: Started Session 59 of User nova.
Dec 06 07:37:14 compute-1 sshd-session[276835]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:37:14 compute-1 sshd-session[276838]: Received disconnect from 192.168.122.102 port 60732:11: disconnected by user
Dec 06 07:37:14 compute-1 sshd-session[276838]: Disconnected from user nova 192.168.122.102 port 60732
Dec 06 07:37:14 compute-1 sshd-session[276835]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:37:14 compute-1 systemd[1]: session-59.scope: Deactivated successfully.
Dec 06 07:37:14 compute-1 systemd-logind[788]: Session 59 logged out. Waiting for processes to exit.
Dec 06 07:37:14 compute-1 systemd-logind[788]: Removed session 59.
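Sessions 57 and 59 are the resize's brief key-based SSH hops for the nova user from 192.168.122.102 (presumably the peer compute host); each opens, runs its command, and disconnects within a second. That path can be checked by hand with a sketch like the following (user and target host taken from the log; the probe command itself is an illustration):

    import subprocess

    # Verify the nova user's key-based SSH path between compute hosts,
    # mirroring the sessions logged above; BatchMode fails fast instead
    # of prompting if the key is not accepted.
    subprocess.run(
        ["ssh", "-o", "BatchMode=yes", "nova@compute-1", "true"],
        check=True,
    )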
Dec 06 07:37:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:14.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:14.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:15 compute-1 ceph-mon[81689]: pgmap v2426: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 39 KiB/s wr, 88 op/s
Dec 06 07:37:15 compute-1 nova_compute[226101]: 2025-12-06 07:37:15.717 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:15 compute-1 nova_compute[226101]: 2025-12-06 07:37:15.717 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:37:15 compute-1 nova_compute[226101]: 2025-12-06 07:37:15.986 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:37:15 compute-1 nova_compute[226101]: 2025-12-06 07:37:15.986 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:37:15 compute-1 nova_compute[226101]: 2025-12-06 07:37:15.987 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:37:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1918043786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2798193347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:16 compute-1 nova_compute[226101]: 2025-12-06 07:37:16.955 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:16.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:17.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:17 compute-1 ceph-mon[81689]: pgmap v2427: 305 pgs: 305 active+clean; 864 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 67 KiB/s wr, 119 op/s
Dec 06 07:37:17 compute-1 nova_compute[226101]: 2025-12-06 07:37:17.975 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updating instance_info_cache with network_info: [{"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:37:17 compute-1 nova_compute[226101]: 2025-12-06 07:37:17.992 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:37:17 compute-1 nova_compute[226101]: 2025-12-06 07:37:17.993 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
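The info-cache heal writes the full network_info JSON for instance 829161c5-... into the log; the port ID and fixed IP can be recovered from that structure directly. A sketch, assuming the logged list has been saved to network_info.json (a hypothetical file name):

    import json

    # network_info.json: the list logged in the
    # "Updating instance_info_cache with network_info" line above.
    with open("network_info.json") as f:
        vifs = json.load(f)

    for vif in vifs:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"])   # 1f39ead9-... 10.100.0.4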
Dec 06 07:37:17 compute-1 nova_compute[226101]: 2025-12-06 07:37:17.993 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:17 compute-1 nova_compute[226101]: 2025-12-06 07:37:17.993 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:37:18 compute-1 nova_compute[226101]: 2025-12-06 07:37:18.149 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:18 compute-1 nova_compute[226101]: 2025-12-06 07:37:18.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:18 compute-1 nova_compute[226101]: 2025-12-06 07:37:18.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:18 compute-1 nova_compute[226101]: 2025-12-06 07:37:18.754 226109 DEBUG nova.compute.manager [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-unplugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:37:18 compute-1 nova_compute[226101]: 2025-12-06 07:37:18.754 226109 DEBUG oslo_concurrency.lockutils [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:18 compute-1 nova_compute[226101]: 2025-12-06 07:37:18.754 226109 DEBUG oslo_concurrency.lockutils [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:18 compute-1 nova_compute[226101]: 2025-12-06 07:37:18.755 226109 DEBUG oslo_concurrency.lockutils [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:18 compute-1 nova_compute[226101]: 2025-12-06 07:37:18.755 226109 DEBUG nova.compute.manager [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] No waiting events found dispatching network-vif-unplugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:37:18 compute-1 nova_compute[226101]: 2025-12-06 07:37:18.755 226109 WARNING nova.compute.manager [req-2bbc746b-1a73-4bd7-a626-6815a7b042b1 req-5d573582-2bf1-462c-9e4a-8762fb8d188e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received unexpected event network-vif-unplugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for instance with vm_state active and task_state resize_migrating.
Dec 06 07:37:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:18.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:19.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:19 compute-1 nova_compute[226101]: 2025-12-06 07:37:19.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:19 compute-1 ceph-mon[81689]: pgmap v2428: 305 pgs: 305 active+clean; 864 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 41 KiB/s wr, 112 op/s
Dec 06 07:37:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:20 compute-1 nova_compute[226101]: 2025-12-06 07:37:20.292 226109 INFO nova.network.neutron [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 06 07:37:20 compute-1 nova_compute[226101]: 2025-12-06 07:37:20.912 226109 DEBUG nova.compute.manager [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:37:20 compute-1 nova_compute[226101]: 2025-12-06 07:37:20.912 226109 DEBUG oslo_concurrency.lockutils [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:20 compute-1 nova_compute[226101]: 2025-12-06 07:37:20.912 226109 DEBUG oslo_concurrency.lockutils [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:20 compute-1 nova_compute[226101]: 2025-12-06 07:37:20.912 226109 DEBUG oslo_concurrency.lockutils [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:20 compute-1 nova_compute[226101]: 2025-12-06 07:37:20.913 226109 DEBUG nova.compute.manager [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] No waiting events found dispatching network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:37:20 compute-1 nova_compute[226101]: 2025-12-06 07:37:20.913 226109 WARNING nova.compute.manager [req-ede48b2d-b231-485a-aa10-ccf0f741c045 req-c8a45aa8-39f1-46ae-b38e-1bb10d8aba21 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received unexpected event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for instance with vm_state active and task_state resize_migrated.
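Both WARNINGs are benign during a resize: neutron emits network-vif-unplugged/plugged for port bfac9ac2-... while nova has no waiter registered for them, so they are logged and dropped as the task_state advances from resize_migrating to resize_migrated. The same progress is visible from the API side; a hedged polling sketch with openstacksdk (cloud name hypothetical):

    import time
    import openstack

    conn = openstack.connect(cloud='overcloud')
    while True:
        s = conn.compute.get_server('3d4b97f9-b8c3-4764-87f9-4006968fecd4')
        print(s.status, s.task_state)   # e.g. RESIZE / resize_migrated
        if s.status == 'VERIFY_RESIZE':
            break
        time.sleep(2)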
Dec 06 07:37:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:20.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:21.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:21 compute-1 sudo[276840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:21 compute-1 sudo[276840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:21 compute-1 sudo[276840]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:21 compute-1 sudo[276865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:37:21 compute-1 sudo[276865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:21 compute-1 sudo[276865]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:21 compute-1 sudo[276890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:21 compute-1 sudo[276890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:21 compute-1 sudo[276890]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:21 compute-1 sudo[276915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:37:21 compute-1 sudo[276915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3578164270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2304411446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:21 compute-1 ceph-mon[81689]: pgmap v2429: 305 pgs: 305 active+clean; 926 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.3 MiB/s wr, 203 op/s
Dec 06 07:37:21 compute-1 sudo[276915]: pam_unix(sudo:session): session closed for user root
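The ceph-admin sudo burst is cephadm's periodic host check driven by mgr.compute-0.sfzyix: /bin/true as a sudo probe, /bin/which python3 to locate the interpreter, then the deployed cephadm binary's gather-facts. The same collection can be run by hand (binary path, fsid, and timeout copied from the sudo line; it prints host facts as JSON):

    import subprocess

    # Run the cephadm host-facts collection as the mgr does
    # (path and fsid copied from the sudo line above).
    subprocess.run([
        "sudo", "/bin/python3",
        "/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--timeout", "895", "gather-facts",
    ], check=True)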
Dec 06 07:37:21 compute-1 nova_compute[226101]: 2025-12-06 07:37:21.844 226109 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:37:21 compute-1 nova_compute[226101]: 2025-12-06 07:37:21.846 226109 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquired lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:37:21 compute-1 nova_compute[226101]: 2025-12-06 07:37:21.846 226109 DEBUG nova.network.neutron [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:37:21 compute-1 nova_compute[226101]: 2025-12-06 07:37:21.987 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:23.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:23 compute-1 nova_compute[226101]: 2025-12-06 07:37:23.031 226109 DEBUG nova.compute.manager [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:37:23 compute-1 nova_compute[226101]: 2025-12-06 07:37:23.031 226109 DEBUG nova.compute.manager [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing instance network info cache due to event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:37:23 compute-1 nova_compute[226101]: 2025-12-06 07:37:23.031 226109 DEBUG oslo_concurrency.lockutils [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:37:23 compute-1 podman[276969]: 2025-12-06 07:37:23.072536959 +0000 UTC m=+0.055750080 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:37:23 compute-1 podman[276970]: 2025-12-06 07:37:23.078315512 +0000 UTC m=+0.061275956 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:37:23 compute-1 nova_compute[226101]: 2025-12-06 07:37:23.191 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:37:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:37:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:37:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:37:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:37:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:37:23 compute-1 nova_compute[226101]: 2025-12-06 07:37:23.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:23 compute-1 nova_compute[226101]: 2025-12-06 07:37:23.846 226109 DEBUG nova.network.neutron [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating instance_info_cache with network_info: [{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:37:23 compute-1 nova_compute[226101]: 2025-12-06 07:37:23.873 226109 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Releasing lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:37:23 compute-1 nova_compute[226101]: 2025-12-06 07:37:23.883 226109 DEBUG oslo_concurrency.lockutils [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:37:23 compute-1 nova_compute[226101]: 2025-12-06 07:37:23.884 226109 DEBUG nova.network.neutron [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.007 226109 DEBUG os_brick.utils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.009 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.029 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.030 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[a0304f9e-71cf-4557-b96f-1249c68a1083]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.031 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.041 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.041 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe22024-cb82-4cbc-b6ad-306b93f2333d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.044 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.051 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.052 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[6b964aa0-09d0-4bec-848a-1831b957a0af]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.053 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[279890a0-a069-486f-9333-003f47718651]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.055 226109 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.086 226109 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.088 226109 DEBUG os_brick.initiator.connectors.lightos [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.088 226109 DEBUG os_brick.initiator.connectors.lightos [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.089 226109 DEBUG os_brick.initiator.connectors.lightos [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.089 226109 DEBUG os_brick.utils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
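[annotation] The os-brick trace above records the full connector-property probe for this host: "multipathd show status", the iSCSI initiator name from /etc/iscsi/initiatorname.iscsi, the root filesystem source, the NVMe NQN, and a LightOS discovery-client check that fails harmlessly (ECONNREFUSED is logged and skipped). The same call can be reproduced with os-brick's public helper; a minimal Python sketch, using the argument values shown in the trace line and assuming the caller is allowed to invoke the logged rootwrap helper:

    # Reproduce the logged get_connector_properties call; the argument
    # values below are taken from the trace line above, not invented.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.101',
        multipath=True,
        enforce_multipath=True,
        host='compute-1.ctlplane.example.com',
    )
    # Fields matching the logged return payload: iSCSI IQN, NVMe NQN,
    # multipath flag.
    print(props['initiator'], props['nqn'], props['multipath'])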
Dec 06 07:37:24 compute-1 podman[277008]: 2025-12-06 07:37:24.108492764 +0000 UTC m=+0.086637134 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:37:24 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Dec 06 07:37:24 compute-1 systemd[276818]: Activating special unit Exit the Session...
Dec 06 07:37:24 compute-1 systemd[276818]: Stopped target Main User Target.
Dec 06 07:37:24 compute-1 systemd[276818]: Stopped target Basic System.
Dec 06 07:37:24 compute-1 systemd[276818]: Stopped target Paths.
Dec 06 07:37:24 compute-1 systemd[276818]: Stopped target Sockets.
Dec 06 07:37:24 compute-1 systemd[276818]: Stopped target Timers.
Dec 06 07:37:24 compute-1 systemd[276818]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:37:24 compute-1 systemd[276818]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 07:37:24 compute-1 systemd[276818]: Closed D-Bus User Message Bus Socket.
Dec 06 07:37:24 compute-1 systemd[276818]: Stopped Create User's Volatile Files and Directories.
Dec 06 07:37:24 compute-1 systemd[276818]: Removed slice User Application Slice.
Dec 06 07:37:24 compute-1 systemd[276818]: Reached target Shutdown.
Dec 06 07:37:24 compute-1 systemd[276818]: Finished Exit the Session.
Dec 06 07:37:24 compute-1 systemd[276818]: Reached target Exit the Session.
Dec 06 07:37:24 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Dec 06 07:37:24 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Dec 06 07:37:24 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec 06 07:37:24 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec 06 07:37:24 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec 06 07:37:24 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec 06 07:37:24 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Dec 06 07:37:24 compute-1 ceph-mon[81689]: pgmap v2430: 305 pgs: 305 active+clean; 926 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 137 op/s
Dec 06 07:37:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/873242481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:24 compute-1 nova_compute[226101]: 2025-12-06 07:37:24.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:24.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:25.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:25 compute-1 nova_compute[226101]: 2025-12-06 07:37:25.209 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Dec 06 07:37:25 compute-1 nova_compute[226101]: 2025-12-06 07:37:25.211 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:37:25 compute-1 nova_compute[226101]: 2025-12-06 07:37:25.211 226109 INFO nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Creating image(s)
Dec 06 07:37:25 compute-1 nova_compute[226101]: 2025-12-06 07:37:25.249 226109 DEBUG nova.storage.rbd_utils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] creating snapshot(nova-resize) on rbd image(3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
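[annotation] The rbd_utils line above shows finish_migration creating a "nova-resize" snapshot on the instance's root disk image so the resize can be reverted. A standalone equivalent through the librbd Python bindings would look roughly like the sketch below; the pool name 'vms' is an assumption (nova's images_rbd_pool default), since the log names only the image and the snapshot:

    # Hypothetical standalone equivalent of the logged create_snap step.
    # Pool name 'vms' is assumed; image and snapshot names come from the
    # log line above.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # assumed pool
        try:
            with rbd.Image(ioctx,
                           '3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk') as image:
                image.create_snap('nova-resize')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()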
Dec 06 07:37:25 compute-1 nova_compute[226101]: 2025-12-06 07:37:25.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:25 compute-1 ceph-mon[81689]: pgmap v2431: 305 pgs: 305 active+clean; 934 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 141 op/s
Dec 06 07:37:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/671171046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1351833603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:25 compute-1 nova_compute[226101]: 2025-12-06 07:37:25.951 226109 DEBUG nova.network.neutron [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updated VIF entry in instance network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:37:25 compute-1 nova_compute[226101]: 2025-12-06 07:37:25.952 226109 DEBUG nova.network.neutron [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating instance_info_cache with network_info: [{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:37:25 compute-1 nova_compute[226101]: 2025-12-06 07:37:25.970 226109 DEBUG oslo_concurrency.lockutils [req-3807eb01-6d05-4d1e-a54e-c5a3cba407ac req-195b8230-ece3-4e72-b7d7-6f42a019df84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
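[annotation] The lockutils lines bracketing this refresh ("Acquired" at 07:37:23.883, "Releasing" here) show how nova serializes work on one instance's network-info cache: every writer takes the same "refresh_cache-<uuid>" lock, so the external network-changed event handler and the resize path cannot interleave their cache updates. An illustrative sketch of that pattern with oslo.concurrency, not nova's actual code:

    # Illustrative only: the per-instance lock pattern seen in the log.
    # The refresh body is stubbed; nova's real logic lives in
    # nova.network.neutron.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = '3d4b97f9-b8c3-4764-87f9-4006968fecd4'  # from the log

    @lockutils.synchronized('refresh_cache-%s' % INSTANCE_UUID)
    def refresh_network_info_cache():
        # query Neutron for the port, then write instance_info_cache
        pass

    refresh_network_info_cache()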
Dec 06 07:37:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:37:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:27 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:27.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:37:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:27.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:37:27 compute-1 nova_compute[226101]: 2025-12-06 07:37:27.076 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:28 compute-1 ceph-mon[81689]: pgmap v2432: 305 pgs: 305 active+clean; 943 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Dec 06 07:37:28 compute-1 nova_compute[226101]: 2025-12-06 07:37:28.193 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:29.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:37:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:29 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:29.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:31.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:31.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:31 compute-1 nova_compute[226101]: 2025-12-06 07:37:31.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:32 compute-1 nova_compute[226101]: 2025-12-06 07:37:32.078 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:32 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:37:32 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.534893513s, txc = 0x55b5539c3200
Dec 06 07:37:32 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.502654552s, txc = 0x55b552b92300
Dec 06 07:37:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:33.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:33.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:33 compute-1 nova_compute[226101]: 2025-12-06 07:37:33.235 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 4268..4853) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 1.053527236s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec 06 07:37:33 compute-1 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-1[81685]: 2025-12-06T07:37:33.790+0000 7fc98ce5b640 -1 mon.compute-1@2(peon).paxos(paxos updating c 4268..4853) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 1.053527236s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Dec 06 07:37:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e304 e304: 3 total, 3 up, 3 in
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.832510471s, txc = 0x55b552795b00
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831513405s, txc = 0x55b553858300
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.830894947s, txc = 0x55b552b1e900
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.830541611s, txc = 0x55b552ce7800
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.830168724s, txc = 0x55b552b9bb00
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.829474449s, txc = 0x55b553a64600
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.829098701s, txc = 0x55b55335e000
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.828953266s, txc = 0x55b5533c0f00
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.828828335s, txc = 0x55b55387c900
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.827291965s, txc = 0x55b5534ffb00
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.815388680s, txc = 0x55b55391b500
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.791286945s, txc = 0x55b551fc7800
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.743538857s, txc = 0x55b553962300
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.742968082s, txc = 0x55b5526f5500
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.742217064s, txc = 0x55b552b9b200
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.741637230s, txc = 0x55b5539c2f00
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.741484165s, txc = 0x55b5539cc600
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.739983082s, txc = 0x55b553ccc900
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.737415791s, txc = 0x55b552b40000
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.736876965s, txc = 0x55b552ce6000
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.736741543s, txc = 0x55b55391ac00
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.736557484s, txc = 0x55b55279cc00
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.736157417s, txc = 0x55b5538c4c00
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.736034870s, txc = 0x55b55387d200
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.735741138s, txc = 0x55b5535d0300
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.734623432s, txc = 0x55b553a65500
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.734423637s, txc = 0x55b553a64c00
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.732347488s, txc = 0x55b5534b1500
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.730948448s, txc = 0x55b552de1800
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.730608940s, txc = 0x55b552ce7500
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.767218113s, txc = 0x55b5535c5500
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.775246620s, txc = 0x55b552ce7b00
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.767367840s, txc = 0x55b55391a300
Dec 06 07:37:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.768092155s, txc = 0x55b553858f00
Dec 06 07:37:34 compute-1 ceph-mon[81689]: pgmap v2433: 305 pgs: 305 active+clean; 943 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 135 op/s
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.213 226109 DEBUG nova.objects.instance [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:37:34 compute-1 sudo[277080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:34 compute-1 sudo[277080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:34 compute-1 sudo[277080]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.373 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.375 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Ensure instance console log exists: /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.376 226109 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.376 226109 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.376 226109 DEBUG oslo_concurrency.lockutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.379 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Start _get_guest_xml network_info=[{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:c3:31:81"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-fb115316-35b8-4af3-9847-c6315737a158', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'fb115316-35b8-4af3-9847-c6315737a158', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '3d4b97f9-b8c3-4764-87f9-4006968fecd4', 'attached_at': '2025-12-06T07:37:24.000000', 'detached_at': '', 'volume_id': 'fb115316-35b8-4af3-9847-c6315737a158', 'multiattach': True, 'serial': 'fb115316-35b8-4af3-9847-c6315737a158'}, 'mount_device': '/dev/vdb', 'boot_index': None, 'attachment_id': 'ff7c18f6-2b04-47ef-839a-eeab9de58e5a', 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.384 226109 WARNING nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:37:34 compute-1 sudo[277141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:37:34 compute-1 sudo[277141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:34 compute-1 sudo[277141]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.394 226109 DEBUG nova.virt.libvirt.host [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.396 226109 DEBUG nova.virt.libvirt.host [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.405 226109 DEBUG nova.virt.libvirt.host [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.406 226109 DEBUG nova.virt.libvirt.host [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.407 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.408 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.408 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.409 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.409 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.409 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.409 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.410 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.410 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.410 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.411 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.411 226109 DEBUG nova.virt.hardware [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
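[annotation] The nova.virt.hardware lines above walk the CPU-topology search for this 1-vCPU flavor: with no constraints from flavor or image (limits and preferences all 0:0:0), the limits default to 65536 per dimension, and the only sockets x cores x threads factorization of one vCPU is 1:1:1. A toy re-derivation of that enumeration, not nova's implementation:

    # Toy enumeration of sockets*cores*threads factorizations; mirrors
    # the logged result for vcpus=1 but is not nova's algorithm.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [
            (s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus
        ]

    print(possible_topologies(1))  # [(1, 1, 1)], matching "Got 1 possible topologies"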
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.411 226109 DEBUG nova.objects.instance [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.470 226109 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:37:34 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4211913717' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.912 226109 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
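[annotation] Each RBD volume attach in this flow shells out to "ceph mon dump --format=json" (about 0.44 s here, and visible on the mon side as the client.openstack dispatch entries); the monitor addresses it returns are evidently what the 'hosts'/'ports' lists in the rbd connection_info logged at 07:37:34.379 are built from. A sketch of the same query, assuming the logged /etc/ceph/ceph.conf and the client.openstack keyring are readable:

    # Run the logged monmap query and pull the monitor names/addresses.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(out)
    print([mon['name'] for mon in monmap['mons']])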
Dec 06 07:37:34 compute-1 nova_compute[226101]: 2025-12-06 07:37:34.947 226109 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:34 compute-1 ceph-mon[81689]: pgmap v2434: 305 pgs: 305 active+clean; 947 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.4 MiB/s wr, 146 op/s
Dec 06 07:37:34 compute-1 ceph-mon[81689]: pgmap v2435: 305 pgs: 305 active+clean; 948 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 261 KiB/s rd, 1.2 MiB/s wr, 65 op/s
Dec 06 07:37:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:37:34 compute-1 ceph-mon[81689]: osdmap e304: 3 total, 3 up, 3 in
Dec 06 07:37:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:37:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4211913717' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:35.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:37:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:35.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:37:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:37:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3011747925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.439 226109 DEBUG oslo_concurrency.processutils [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.465 226109 DEBUG nova.virt.libvirt.vif [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=129,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:35:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-pbz3nfcu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:37:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=3d4b97f9-b8c3-4764-87f9-4006968fecd4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:c3:31:81"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.466 226109 DEBUG nova.network.os_vif_util [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:c3:31:81"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.466 226109 DEBUG nova.network.os_vif_util [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
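The three DEBUG messages above are nova handing the legacy VIF dict to os-vif: get_config renders the libvirt <interface> config, and nova_to_osvif_vif converts the dict into a typed VIFOpenVSwitch object. A minimal sketch of building the equivalent object with the os-vif object model, using only the field values visible in the "Converted object" line (the import/initialize pattern follows os-vif's documented usage; treat it as illustrative, not nova's code path):

    import os_vif
    from os_vif import objects

    os_vif.initialize()  # registers the versioned object model and loads plugins

    # field values copied from the "Converted object" log line above
    vif = objects.vif.VIFOpenVSwitch(
        id='bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1',
        address='fa:16:3e:c3:31:81',
        bridge_name='br-int',
        has_traffic_filtering=True,
        plugin='ovs',
        vif_name='tapbfac9ac2-c1',
        preserve_on_delete=False,
        network=objects.network.Network(id='5bb49e8a-b939-4c79-851c-62c634be0272'),
        port_profile=objects.vif.VIFPortProfileOpenVSwitch(
            interface_id='bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1'),
    )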
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.469 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:37:35 compute-1 nova_compute[226101]:   <uuid>3d4b97f9-b8c3-4764-87f9-4006968fecd4</uuid>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   <name>instance-00000081</name>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   <memory>196608</memory>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <nova:name>multiattach-server-1</nova:name>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:37:34</nova:creationTime>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <nova:flavor name="m1.micro">
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <nova:memory>192</nova:memory>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <nova:user uuid="605b5481e0c944048e6a67046c30d693">tempest-AttachVolumeMultiAttachTest-690984293-project-member</nova:user>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <nova:project uuid="833f4cf9f5a64b2ab94c3bf330353a31">tempest-AttachVolumeMultiAttachTest-690984293</nova:project>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <nova:port uuid="bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1">
Dec 06 07:37:35 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <system>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <entry name="serial">3d4b97f9-b8c3-4764-87f9-4006968fecd4</entry>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <entry name="uuid">3d4b97f9-b8c3-4764-87f9-4006968fecd4</entry>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     </system>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   <os>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   </os>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   <features>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   </features>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk">
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       </source>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/3d4b97f9-b8c3-4764-87f9-4006968fecd4_disk.config">
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       </source>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <source protocol="rbd" name="volumes/volume-fb115316-35b8-4af3-9847-c6315737a158">
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       </source>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:37:35 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <target dev="vdb" bus="virtio"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <serial>fb115316-35b8-4af3-9847-c6315737a158</serial>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <shareable/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:c3:31:81"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <target dev="tapbfac9ac2-c1"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4/console.log" append="off"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <video>
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     </video>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:37:35 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:37:35 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:37:35 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:37:35 compute-1 nova_compute[226101]: </domain>
Dec 06 07:37:35 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
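End of the rendered domain XML. Once systemd reports "Started Virtual Machine qemu-62-instance-00000081" further down, the same definition, as expanded by libvirt, can be read back with libvirt-python; a small sketch, assuming local socket access to virtqemud:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000081')  # <name> element from the XML above
    print(dom.XMLDesc(0))                         # live definition, PCI addresses filled in
    conn.close()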
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.472 226109 DEBUG nova.virt.libvirt.vif [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=129,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:35:43Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-pbz3nfcu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:37:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=3d4b97f9-b8c3-4764-87f9-4006968fecd4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:c3:31:81"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.472 226109 DEBUG nova.network.os_vif_util [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "vif_mac": "fa:16:3e:c3:31:81"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.473 226109 DEBUG nova.network.os_vif_util [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.474 226109 DEBUG os_vif [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.475 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.475 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.476 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.480 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.480 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfac9ac2-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.480 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbfac9ac2-c1, col_values=(('external_ids', {'iface-id': 'bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:31:81', 'vm-uuid': '3d4b97f9-b8c3-4764-87f9-4006968fecd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.519 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:35 compute-1 NetworkManager[49031]: <info>  [1765006655.5202] manager: (tapbfac9ac2-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.522 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.526 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.527 226109 INFO os_vif [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1')
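The plug sequence above is os-vif's ovs plugin driving the local switch through ovsdbapp: an idempotent AddBridgeCommand (already satisfied, hence "Transaction caused no change"), then AddPortCommand plus DbSetCommand in one transaction to create the tap port and stamp the Interface with the external_ids (iface-id, attached-mac, vm-uuid) that ovn-controller keys on. A rough standalone equivalent with ovsdbapp; the OVSDB socket path and timeout are assumptions, not values from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapbfac9ac2-c1', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapbfac9ac2-c1',
            ('external_ids', {
                'iface-id': 'bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:c3:31:81',
                'vm-uuid': '3d4b97f9-b8c3-4764-87f9-4006968fecd4',
            })))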
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.591 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.592 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.592 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.592 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] No VIF found with MAC fa:16:3e:c3:31:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.592 226109 INFO nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Using config drive
Dec 06 07:37:35 compute-1 kernel: tapbfac9ac2-c1: entered promiscuous mode
Dec 06 07:37:35 compute-1 NetworkManager[49031]: <info>  [1765006655.6779] manager: (tapbfac9ac2-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Dec 06 07:37:35 compute-1 ovn_controller[130279]: 2025-12-06T07:37:35Z|00523|binding|INFO|Claiming lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for this chassis.
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.678 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:35 compute-1 ovn_controller[130279]: 2025-12-06T07:37:35Z|00524|binding|INFO|bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1: Claiming fa:16:3e:c3:31:81 10.100.0.6
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.686 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:31:81 10.100.0.6'], port_security=['fa:16:3e:c3:31:81 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3d4b97f9-b8c3-4764-87f9-4006968fecd4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '6', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.688 139580 INFO neutron.agent.ovn.metadata.agent [-] Port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 bound to our chassis
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.690 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272
Dec 06 07:37:35 compute-1 ovn_controller[130279]: 2025-12-06T07:37:35Z|00525|binding|INFO|Setting lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 ovn-installed in OVS
Dec 06 07:37:35 compute-1 ovn_controller[130279]: 2025-12-06T07:37:35Z|00526|binding|INFO|Setting lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 up in Southbound
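ovn-controller reacts to that iface-id: it matches the new OVS Interface against the Southbound Port_Binding table, claims the lport and its MAC/IP for this chassis, marks it ovn-installed in OVS, and flips the binding up in the Southbound DB. The binding state can be inspected with ovsdbapp's OVN_Southbound schema; a sketch where the tcp endpoint is an assumption (the SB address is not shown in this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:192.168.122.100:6642',
                                          'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    rows = sb.db_find(
        'Port_Binding',
        ('logical_port', '=', 'bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1'),
    ).execute(check_error=True)
    print(rows[0]['up'], rows[0]['chassis'])  # [True] plus the claiming chassis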
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.698 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.702 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:35 compute-1 systemd-udevd[277260]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.710 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[abcc15cb-098b-4e8f-884b-064a76edf907]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:35 compute-1 systemd-machined[190302]: New machine qemu-62-instance-00000081.
Dec 06 07:37:35 compute-1 NetworkManager[49031]: <info>  [1765006655.7241] device (tapbfac9ac2-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:37:35 compute-1 NetworkManager[49031]: <info>  [1765006655.7250] device (tapbfac9ac2-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.743 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[44270b9d-387d-4a69-8c90-b1568c553a28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:35 compute-1 systemd[1]: Started Virtual Machine qemu-62-instance-00000081.
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.747 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ce871b02-917e-44c3-bddc-43d606385f23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.773 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ed97d7-919c-487b-bc59-c4461cc51a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.790 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b6b2a2-f092-44e2-a2d4-db1b35494e12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672206, 'reachable_time': 34158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277268, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.806 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6c1f98-80e6-4dd9-bfcb-65ef65be1981]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672218, 'tstamp': 672218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277272, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672221, 'tstamp': 672221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277272, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.808 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.809 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:35 compute-1 nova_compute[226101]: 2025-12-06 07:37:35.811 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.811 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.811 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.812 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:37:35.812 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
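With the port bound to this chassis, the metadata agent provisions the network: the privsep netlink replies above show the ovnmeta-<network-uuid> namespace already carrying tap5bb49e8a-b1 with 10.100.0.2/28 and the metadata address 169.254.169.254/32, and the follow-up OVS transactions that would re-plug the namespace interface into br-int are no-ops. Those privsep calls wrap pyroute2, so the same addressing check can be done directly (requires root; the namespace name is taken from the replies above):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272') as ns:
        for msg in ns.get_addr():
            print(msg.get_attr('IFA_LABEL'),
                  '%s/%s' % (msg.get_attr('IFA_ADDRESS'), msg['prefixlen']))
    # expected: tap5bb49e8a-b1 10.100.0.2/28 and tap5bb49e8a-b1 169.254.169.254/32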
Dec 06 07:37:35 compute-1 ceph-mon[81689]: pgmap v2437: 305 pgs: 305 active+clean; 948 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 873 KiB/s wr, 73 op/s
Dec 06 07:37:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3011747925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
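The "mon dump" dispatched as client.openstack is most likely an OpenStack RBD client refreshing its monitor map. The same query through the python-rados bindings, assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable on this host:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
    cluster.shutdown()
    print([m['name'] for m in json.loads(out)['mons']])  # the three mons in the XML above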
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.148 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006656.1479554, 3d4b97f9-b8c3-4764-87f9-4006968fecd4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.148 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] VM Resumed (Lifecycle Event)
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.151 226109 DEBUG nova.compute.manager [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.154 226109 INFO nova.virt.libvirt.driver [-] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Instance running successfully.
Dec 06 07:37:36 compute-1 virtqemud[225710]: argument unsupported: QEMU guest agent is not configured
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.156 226109 DEBUG nova.virt.libvirt.guest [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
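After the resume, nova tries to resync the guest clock through the QEMU guest agent; the image here (the user_data above references cirros) carries no agent, so virDomainSetTime fails and nova downgrades it to a DEBUG. Reproducing the call with libvirt-python yields the same "argument unsupported" error seen from virtqemud above (a sketch; it only succeeds with qemu-guest-agent running in the guest):

    import time
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000081')
    now = time.time()
    try:
        dom.setTime({'seconds': int(now), 'nseconds': int((now % 1) * 1e9)})
    except libvirt.libvirtError as exc:
        print(exc)  # "argument unsupported: QEMU guest agent is not configured"
    conn.close()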
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.156 226109 DEBUG nova.virt.libvirt.driver [None req-a49e81ba-ab4d-4f15-a6a4-53e1cd348675 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.171 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.175 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.193 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.193 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006656.150466, 3d4b97f9-b8c3-4764-87f9-4006968fecd4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.194 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] VM Started (Lifecycle Event)
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.209 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.212 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.238 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.740 226109 DEBUG nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.740 226109 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.741 226109 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.741 226109 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.741 226109 DEBUG nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] No waiting events found dispatching network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.741 226109 WARNING nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received unexpected event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for instance with vm_state resized and task_state None.
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.741 226109 DEBUG nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.742 226109 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.742 226109 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.742 226109 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.742 226109 DEBUG nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] No waiting events found dispatching network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:37:36 compute-1 nova_compute[226101]: 2025-12-06 07:37:36.742 226109 WARNING nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received unexpected event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for instance with vm_state resized and task_state None.
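Both copies of network-vif-plugged arrive after finish_migration has completed (vm_state resized, task_state None), so no waiter is registered and nova drops them with a WARNING; during a resize this is benign. The lock/pop messages above implement a pop-or-warn scheme; a minimal generic sketch of the idea (class and method names are illustrative, not nova's actual code):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()   # the "<uuid>-events" lock in the log
            self._waiters = {}              # (instance_uuid, event_name) -> Event

        def prepare(self, instance_uuid, event_name):
            with self._lock:
                waiter = threading.Event()
                self._waiters[(instance_uuid, event_name)] = waiter
                return waiter               # caller blocks on waiter.wait(timeout)

        def pop_instance_event(self, instance_uuid, event_name):
            with self._lock:
                waiter = self._waiters.pop((instance_uuid, event_name), None)
            if waiter is None:              # "No waiting events found" -> WARNING path
                return False
            waiter.set()                    # completes the pending wait
            return True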
Dec 06 07:37:37 compute-1 ceph-mon[81689]: pgmap v2438: 305 pgs: 305 active+clean; 961 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 1.9 MiB/s wr, 79 op/s
Dec 06 07:37:37 compute-1 nova_compute[226101]: 2025-12-06 07:37:37.081 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:37.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:37.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
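The paired anonymous "HEAD / HTTP/1.0" requests landing every two seconds from 192.168.122.100 and 192.168.122.102 have the shape of load-balancer health probes against this radosgw; each is answered 200 with an empty body. An equivalent probe from Python, run on the gateway host; the listen port (8080) is an assumption, as it is not recorded in these lines:

    import http.client

    conn = http.client.HTTPConnection('localhost', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 while the gateway is healthy
    conn.close()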
Dec 06 07:37:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:39.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:39.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:40 compute-1 ceph-mon[81689]: pgmap v2439: 305 pgs: 305 active+clean; 972 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 169 op/s
Dec 06 07:37:40 compute-1 nova_compute[226101]: 2025-12-06 07:37:40.521 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:41 compute-1 ceph-mon[81689]: pgmap v2440: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Dec 06 07:37:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:41.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:41.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e305 e305: 3 total, 3 up, 3 in
Dec 06 07:37:42 compute-1 nova_compute[226101]: 2025-12-06 07:37:42.082 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:43 compute-1 ceph-mon[81689]: osdmap e305: 3 total, 3 up, 3 in
Dec 06 07:37:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3416436855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:43 compute-1 ceph-mon[81689]: pgmap v2442: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.1 MiB/s wr, 282 op/s
Dec 06 07:37:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:43.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:37:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:43.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:37:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:45.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:37:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:45.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:37:45 compute-1 nova_compute[226101]: 2025-12-06 07:37:45.523 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:46 compute-1 ceph-mon[81689]: pgmap v2443: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.9 MiB/s wr, 252 op/s
Dec 06 07:37:47 compute-1 nova_compute[226101]: 2025-12-06 07:37:47.087 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:47.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:47.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:47 compute-1 ceph-mon[81689]: pgmap v2444: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 771 KiB/s wr, 207 op/s
Dec 06 07:37:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:49.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:49.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e306 e306: 3 total, 3 up, 3 in
Dec 06 07:37:49 compute-1 ceph-mon[81689]: pgmap v2445: 305 pgs: 305 active+clean; 977 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 631 KiB/s wr, 129 op/s
Dec 06 07:37:50 compute-1 nova_compute[226101]: 2025-12-06 07:37:50.528 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:51.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:51.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:51 compute-1 ovn_controller[130279]: 2025-12-06T07:37:51Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:31:81 10.100.0.6
Dec 06 07:37:52 compute-1 nova_compute[226101]: 2025-12-06 07:37:52.088 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:52 compute-1 ceph-mon[81689]: osdmap e306: 3 total, 3 up, 3 in
Dec 06 07:37:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:53.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:53.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:54 compute-1 podman[277337]: 2025-12-06 07:37:54.077731885 +0000 UTC m=+0.056427129 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 07:37:54 compute-1 podman[277336]: 2025-12-06 07:37:54.109085171 +0000 UTC m=+0.088149435 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 07:37:54 compute-1 podman[277375]: 2025-12-06 07:37:54.241653195 +0000 UTC m=+0.096558396 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:37:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:54 compute-1 ceph-mon[81689]: pgmap v2447: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 412 KiB/s rd, 2.7 MiB/s wr, 88 op/s
Dec 06 07:37:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/474200040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:55.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:55.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:55 compute-1 ceph-mon[81689]: pgmap v2448: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 865 KiB/s rd, 2.5 MiB/s wr, 108 op/s
Dec 06 07:37:55 compute-1 ceph-mon[81689]: pgmap v2449: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 865 KiB/s rd, 2.5 MiB/s wr, 108 op/s
Dec 06 07:37:55 compute-1 nova_compute[226101]: 2025-12-06 07:37:55.530 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:56 compute-1 nova_compute[226101]: 2025-12-06 07:37:56.353 226109 DEBUG nova.compute.manager [req-25607c5b-c91a-49be-829a-9e441c0708cc req-2db3ad60-cf9f-49ac-955c-7b489f5078b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:37:56 compute-1 nova_compute[226101]: 2025-12-06 07:37:56.354 226109 DEBUG nova.compute.manager [req-25607c5b-c91a-49be-829a-9e441c0708cc req-2db3ad60-cf9f-49ac-955c-7b489f5078b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing instance network info cache due to event network-changed-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:37:56 compute-1 nova_compute[226101]: 2025-12-06 07:37:56.354 226109 DEBUG oslo_concurrency.lockutils [req-25607c5b-c91a-49be-829a-9e441c0708cc req-2db3ad60-cf9f-49ac-955c-7b489f5078b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:37:56 compute-1 nova_compute[226101]: 2025-12-06 07:37:56.355 226109 DEBUG oslo_concurrency.lockutils [req-25607c5b-c91a-49be-829a-9e441c0708cc req-2db3ad60-cf9f-49ac-955c-7b489f5078b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:37:56 compute-1 nova_compute[226101]: 2025-12-06 07:37:56.355 226109 DEBUG nova.network.neutron [req-25607c5b-c91a-49be-829a-9e441c0708cc req-2db3ad60-cf9f-49ac-955c-7b489f5078b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Refreshing network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:37:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:37:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:57 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:57.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:57.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:57 compute-1 nova_compute[226101]: 2025-12-06 07:37:57.162 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:57 compute-1 ceph-mon[81689]: pgmap v2450: 305 pgs: 305 active+clean; 1006 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 130 op/s
Dec 06 07:37:58 compute-1 nova_compute[226101]: 2025-12-06 07:37:58.543 226109 DEBUG nova.network.neutron [req-25607c5b-c91a-49be-829a-9e441c0708cc req-2db3ad60-cf9f-49ac-955c-7b489f5078b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updated VIF entry in instance network info cache for port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:37:58 compute-1 nova_compute[226101]: 2025-12-06 07:37:58.544 226109 DEBUG nova.network.neutron [req-25607c5b-c91a-49be-829a-9e441c0708cc req-2db3ad60-cf9f-49ac-955c-7b489f5078b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating instance_info_cache with network_info: [{"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:37:58 compute-1 nova_compute[226101]: 2025-12-06 07:37:58.812 226109 DEBUG oslo_concurrency.lockutils [req-25607c5b-c91a-49be-829a-9e441c0708cc req-2db3ad60-cf9f-49ac-955c-7b489f5078b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-3d4b97f9-b8c3-4764-87f9-4006968fecd4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:37:59 compute-1 ceph-mon[81689]: pgmap v2451: 305 pgs: 305 active+clean; 1009 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 133 op/s
Dec 06 07:37:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:59.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:37:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:59.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:59 compute-1 nova_compute[226101]: 2025-12-06 07:37:59.752 226109 DEBUG oslo_concurrency.lockutils [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:59 compute-1 nova_compute[226101]: 2025-12-06 07:37:59.752 226109 DEBUG oslo_concurrency.lockutils [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:59 compute-1 nova_compute[226101]: 2025-12-06 07:37:59.771 226109 INFO nova.compute.manager [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Detaching volume fb115316-35b8-4af3-9847-c6315737a158
Dec 06 07:38:00 compute-1 nova_compute[226101]: 2025-12-06 07:38:00.009 226109 INFO nova.virt.block_device [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Attempting to driver detach volume fb115316-35b8-4af3-9847-c6315737a158 from mountpoint /dev/vdb
Dec 06 07:38:00 compute-1 nova_compute[226101]: 2025-12-06 07:38:00.018 226109 DEBUG nova.virt.libvirt.driver [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Attempting to detach device vdb from instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:38:00 compute-1 nova_compute[226101]: 2025-12-06 07:38:00.019 226109 DEBUG nova.virt.libvirt.guest [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-fb115316-35b8-4af3-9847-c6315737a158">
Dec 06 07:38:00 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   </source>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <serial>fb115316-35b8-4af3-9847-c6315737a158</serial>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <shareable/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]: </disk>
Dec 06 07:38:00 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:38:00 compute-1 nova_compute[226101]: 2025-12-06 07:38:00.181 226109 INFO nova.virt.libvirt.driver [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully detached device vdb from instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4 from the persistent domain config.
Dec 06 07:38:00 compute-1 nova_compute[226101]: 2025-12-06 07:38:00.181 226109 DEBUG nova.virt.libvirt.driver [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:38:00 compute-1 nova_compute[226101]: 2025-12-06 07:38:00.182 226109 DEBUG nova.virt.libvirt.guest [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-fb115316-35b8-4af3-9847-c6315737a158">
Dec 06 07:38:00 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   </source>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <serial>fb115316-35b8-4af3-9847-c6315737a158</serial>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <shareable/>
Dec 06 07:38:00 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec 06 07:38:00 compute-1 nova_compute[226101]: </disk>
Dec 06 07:38:00 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:38:00 compute-1 nova_compute[226101]: 2025-12-06 07:38:00.311 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765006680.3109353, 3d4b97f9-b8c3-4764-87f9-4006968fecd4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:38:00 compute-1 nova_compute[226101]: 2025-12-06 07:38:00.314 226109 DEBUG nova.virt.libvirt.driver [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:38:00 compute-1 nova_compute[226101]: 2025-12-06 07:38:00.315 226109 INFO nova.virt.libvirt.driver [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully detached device vdb from instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4 from the live domain config.
Dec 06 07:38:00 compute-1 nova_compute[226101]: 2025-12-06 07:38:00.538 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:01 compute-1 nova_compute[226101]: 2025-12-06 07:38:01.147 226109 DEBUG nova.objects.instance [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'flavor' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:38:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:01.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:38:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:01.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:38:01 compute-1 nova_compute[226101]: 2025-12-06 07:38:01.296 226109 DEBUG oslo_concurrency.lockutils [None req-10f8a09a-20cd-459e-a98d-a4a002f94e92 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
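
The req-10f8a09a entries from 07:37:59 to 07:38:01 record a complete multiattach volume detach: nova takes the per-instance lock, removes disk vdb from the persistent domain definition, then detaches it from the live domain and waits for libvirt's DEVICE_REMOVED event for alias virtio-disk1 (which in this run arrived at 07:38:00.311, before the wait even began). A minimal sketch of that two-phase detach against the libvirt Python bindings, assuming a local qemu:///system connection; the instance UUID and disk XML are copied from the log lines above:

    import libvirt

    DISK_XML = """<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-fb115316-35b8-4af3-9847-c6315737a158">
        <host name="192.168.122.100" port="6789"/>
        <host name="192.168.122.102" port="6789"/>
        <host name="192.168.122.101" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
    </disk>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("3d4b97f9-b8c3-4764-87f9-4006968fecd4")

    # Phase 1: drop the disk from the persistent config so it does not
    # reappear on the next domain start ("persistent domain config" above).
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

    # Phase 2: detach from the running guest. Completion is asynchronous,
    # signalled by the VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED event the log
    # shows as <DeviceRemovedEvent: ... virtio-disk1>.
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
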
Dec 06 07:38:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:38:01.655 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:38:01.656 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:38:01.657 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:01 compute-1 ceph-mon[81689]: pgmap v2452: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 1.6 MiB/s wr, 136 op/s
Dec 06 07:38:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1196958827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:02 compute-1 nova_compute[226101]: 2025-12-06 07:38:02.163 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:03.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:03.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:03 compute-1 ceph-mon[81689]: pgmap v2453: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 194 KiB/s wr, 91 op/s
Dec 06 07:38:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:38:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:05.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:05 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:05.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:05 compute-1 ceph-mon[81689]: pgmap v2454: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 925 KiB/s rd, 131 KiB/s wr, 64 op/s
Dec 06 07:38:05 compute-1 nova_compute[226101]: 2025-12-06 07:38:05.541 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:07 compute-1 nova_compute[226101]: 2025-12-06 07:38:07.165 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:38:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:07 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:07.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:07.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:08 compute-1 ceph-mon[81689]: pgmap v2455: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 156 KiB/s wr, 68 op/s
Dec 06 07:38:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:38:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3978816651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:38:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:38:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3978816651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:38:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:38:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:38:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:38:09 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:09.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:38:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:09.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:38:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3978816651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:38:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3978816651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:38:09 compute-1 ceph-mon[81689]: pgmap v2456: 305 pgs: 305 active+clean; 982 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 76 KiB/s wr, 57 op/s
Dec 06 07:38:10 compute-1 nova_compute[226101]: 2025-12-06 07:38:10.544 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:10 compute-1 nova_compute[226101]: 2025-12-06 07:38:10.944 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:38:10.944 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:38:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:38:10.946 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:38:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:11.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:11.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:11 compute-1 ceph-mon[81689]: pgmap v2457: 305 pgs: 305 active+clean; 931 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 52 KiB/s wr, 52 op/s
Dec 06 07:38:11 compute-1 nova_compute[226101]: 2025-12-06 07:38:11.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:11 compute-1 nova_compute[226101]: 2025-12-06 07:38:11.748 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:11 compute-1 nova_compute[226101]: 2025-12-06 07:38:11.748 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:11 compute-1 nova_compute[226101]: 2025-12-06 07:38:11.748 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:11 compute-1 nova_compute[226101]: 2025-12-06 07:38:11.749 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:38:11 compute-1 nova_compute[226101]: 2025-12-06 07:38:11.749 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:38:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:38:11.948 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.166 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:38:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1245743315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.401 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.582 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.583 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.586 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.586 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.590 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.590 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.594 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.594 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.793 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.794 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3531MB free_disk=20.623065948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.794 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:12 compute-1 nova_compute[226101]: 2025-12-06 07:38:12.795 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:13 compute-1 nova_compute[226101]: 2025-12-06 07:38:13.096 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 829161c5-19b5-459e-88a3-58512aaa5fc7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:38:13 compute-1 nova_compute[226101]: 2025-12-06 07:38:13.096 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:38:13 compute-1 nova_compute[226101]: 2025-12-06 07:38:13.096 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 7bca344a-3af1-4217-b97d-3f288712b57d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:38:13 compute-1 nova_compute[226101]: 2025-12-06 07:38:13.097 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:38:13 compute-1 nova_compute[226101]: 2025-12-06 07:38:13.097 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:38:13 compute-1 nova_compute[226101]: 2025-12-06 07:38:13.097 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=1088MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:38:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:38:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:13.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:13 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:13.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:13 compute-1 nova_compute[226101]: 2025-12-06 07:38:13.190 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:38:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1245743315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:14 compute-1 nova_compute[226101]: 2025-12-06 07:38:14.227 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:38:14 compute-1 nova_compute[226101]: 2025-12-06 07:38:14.233 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:38:14 compute-1 nova_compute[226101]: 2025-12-06 07:38:14.250 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:38:14 compute-1 nova_compute[226101]: 2025-12-06 07:38:14.272 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:38:14 compute-1 nova_compute[226101]: 2025-12-06 07:38:14.272 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
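
The update_available_resource audit above sizes the RBD-backed disk inventory by shelling out to ceph df, the exact command oslo_concurrency logs twice (0.652s and 1.037s wall time). A minimal reproduction of that probe, assuming the same /etc/ceph/ceph.conf and client.openstack keyring are readable; the stats field names follow the standard ceph df --format=json output:

    import json
    import subprocess

    # Same command nova-compute runs via oslo_concurrency.processutils.
    cmd = [
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)["stats"]
    # Cluster-wide byte totals; total_avail_bytes / 2**30 corresponds to the
    # free_disk=20.62GB figure in the hypervisor resource view above.
    print(stats["total_bytes"], stats["total_avail_bytes"])
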
Dec 06 07:38:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:14 compute-1 ceph-mon[81689]: pgmap v2458: 305 pgs: 305 active+clean; 950 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 803 KiB/s wr, 48 op/s
Dec 06 07:38:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1172078197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:38:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:15.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:15 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:15.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:15 compute-1 nova_compute[226101]: 2025-12-06 07:38:15.272 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:15 compute-1 nova_compute[226101]: 2025-12-06 07:38:15.273 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:38:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:38:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1309206469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:15 compute-1 nova_compute[226101]: 2025-12-06 07:38:15.483 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:38:15 compute-1 nova_compute[226101]: 2025-12-06 07:38:15.483 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:38:15 compute-1 nova_compute[226101]: 2025-12-06 07:38:15.483 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:38:15 compute-1 nova_compute[226101]: 2025-12-06 07:38:15.546 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:16 compute-1 ceph-mon[81689]: pgmap v2459: 305 pgs: 305 active+clean; 950 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 796 KiB/s wr, 42 op/s
Dec 06 07:38:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1309206469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:17 compute-1 nova_compute[226101]: 2025-12-06 07:38:17.168 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:38:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:17.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:17 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:17.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:38:17 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2276773940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:17 compute-1 nova_compute[226101]: 2025-12-06 07:38:17.528 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updating instance_info_cache with network_info: [{"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:38:17 compute-1 nova_compute[226101]: 2025-12-06 07:38:17.547 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:38:17 compute-1 nova_compute[226101]: 2025-12-06 07:38:17.548 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:38:17 compute-1 nova_compute[226101]: 2025-12-06 07:38:17.548 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:17 compute-1 nova_compute[226101]: 2025-12-06 07:38:17.548 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:38:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1297615286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:18 compute-1 ceph-mon[81689]: pgmap v2460: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 07:38:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3489610966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2276773940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:18 compute-1 nova_compute[226101]: 2025-12-06 07:38:18.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:19.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:38:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:19 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:19.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1056692813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:19 compute-1 nova_compute[226101]: 2025-12-06 07:38:19.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e307 e307: 3 total, 3 up, 3 in
Dec 06 07:38:20 compute-1 nova_compute[226101]: 2025-12-06 07:38:20.550 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:20 compute-1 ceph-mon[81689]: pgmap v2461: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 55 op/s
Dec 06 07:38:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2196679724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:20 compute-1 ceph-mon[81689]: osdmap e307: 3 total, 3 up, 3 in
Dec 06 07:38:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:38:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:21.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:21 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:21.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:21 compute-1 nova_compute[226101]: 2025-12-06 07:38:21.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:22 compute-1 nova_compute[226101]: 2025-12-06 07:38:22.171 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:22 compute-1 ovn_controller[130279]: 2025-12-06T07:38:22Z|00527|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec 06 07:38:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:38:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4205.4 total, 600.0 interval
                                           Cumulative writes: 39K writes, 156K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 39K writes, 13K syncs, 2.95 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5948 writes, 23K keys, 5948 commit groups, 1.0 writes per commit group, ingest: 25.56 MB, 0.04 MB/s
                                           Interval WAL: 5947 writes, 1942 syncs, 3.06 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:38:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2244263686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:22 compute-1 ceph-mon[81689]: pgmap v2463: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 886 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 07:38:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2972126965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:38:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:23.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:23 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:23.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:23 compute-1 nova_compute[226101]: 2025-12-06 07:38:23.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:25 compute-1 podman[277449]: 2025-12-06 07:38:25.092127325 +0000 UTC m=+0.074572458 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:38:25 compute-1 podman[277450]: 2025-12-06 07:38:25.093046688 +0000 UTC m=+0.066260807 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:38:25 compute-1 podman[277451]: 2025-12-06 07:38:25.12536239 +0000 UTC m=+0.096073033 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:38:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:25.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:25.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:25 compute-1 nova_compute[226101]: 2025-12-06 07:38:25.551 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:26 compute-1 nova_compute[226101]: 2025-12-06 07:38:26.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:26 compute-1 nova_compute[226101]: 2025-12-06 07:38:26.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:27 compute-1 nova_compute[226101]: 2025-12-06 07:38:27.174 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:38:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:27.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:38:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:27.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:28 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:38:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:29.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:29.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:30 compute-1 nova_compute[226101]: 2025-12-06 07:38:30.553 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:31.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:31.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:32 compute-1 nova_compute[226101]: 2025-12-06 07:38:32.176 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:32 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:38:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:33.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:33.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:34 compute-1 sudo[277513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:34 compute-1 sudo[277513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:34 compute-1 sudo[277513]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:34 compute-1 sudo[277538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:38:34 compute-1 sudo[277538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:34 compute-1 sudo[277538]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:34 compute-1 sudo[277563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:34 compute-1 sudo[277563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:34 compute-1 sudo[277563]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:34 compute-1 sudo[277588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:38:34 compute-1 sudo[277588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:35.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:35.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:35 compute-1 sudo[277588]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:35 compute-1 nova_compute[226101]: 2025-12-06 07:38:35.555 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:35 compute-1 nova_compute[226101]: 2025-12-06 07:38:35.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:35 compute-1 nova_compute[226101]: 2025-12-06 07:38:35.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:38:36 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:38:37 compute-1 nova_compute[226101]: 2025-12-06 07:38:37.178 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:37.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:37.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:37 compute-1 nova_compute[226101]: 2025-12-06 07:38:37.785 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:38:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:39.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:39.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:40 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:38:40 compute-1 nova_compute[226101]: 2025-12-06 07:38:40.560 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:41.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:41.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:41 compute-1 nova_compute[226101]: 2025-12-06 07:38:41.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:41 compute-1 nova_compute[226101]: 2025-12-06 07:38:41.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:38:42 compute-1 nova_compute[226101]: 2025-12-06 07:38:42.180 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:43.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:43.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:44 compute-1 sudo[277646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:44 compute-1 sudo[277646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:44 compute-1 sudo[277646]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:44 compute-1 sudo[277671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:38:44 compute-1 sudo[277671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:44 compute-1 sudo[277671]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:44 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:38:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:45.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:45.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:45 compute-1 nova_compute[226101]: 2025-12-06 07:38:45.563 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:45 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 22.442642212s
Dec 06 07:38:45 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 22.442642212s
Dec 06 07:38:45 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 22.822004318s, txc = 0x55b553ccc300
Dec 06 07:38:45 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 22.588148117s, txc = 0x55b553ccc600
Dec 06 07:38:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 4268..4908) lease_timeout -- calling new election
Dec 06 07:38:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 22.834920883s, txc = 0x55b552b41800
Dec 06 07:38:46 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.737027168s, txc = 0x55b552b1f500
Dec 06 07:38:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1646551191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2464: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2465: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2466: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 24 KiB/s wr, 116 op/s
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2467: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 120 op/s
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2468: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 7.1 KiB/s wr, 78 op/s
Dec 06 07:38:46 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 07:38:46 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2469: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.3 KiB/s wr, 67 op/s
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2470: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 3.8 KiB/s wr, 33 op/s
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2471: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 3.8 KiB/s wr, 36 op/s
Dec 06 07:38:46 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 07:38:46 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:38:46 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:38:46 compute-1 ceph-mon[81689]: osdmap e307: 3 total, 3 up, 3 in
Dec 06 07:38:46 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 72m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:38:46 compute-1 ceph-mon[81689]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 07:38:46 compute-1 ceph-mon[81689]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:38:46 compute-1 ceph-mon[81689]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:38:46 compute-1 ceph-mon[81689]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 07:38:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:38:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:38:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:38:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:38:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2472: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 853 B/s wr, 11 op/s
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2473: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2474: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:38:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:46 compute-1 ceph-mon[81689]: pgmap v2475: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 07:38:47 compute-1 nova_compute[226101]: 2025-12-06 07:38:47.182 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:47.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:47.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:47 compute-1 ceph-mon[81689]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Dec 06 07:38:47 compute-1 ceph-mon[81689]: paxos.2).electionLogic(56) init, last seen epoch 56
Dec 06 07:38:47 compute-1 nova_compute[226101]: 2025-12-06 07:38:47.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:48 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:38:49 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:38:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:49.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:49.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:49 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:38:49 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:38:49 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:38:50 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:38:50 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:38:50 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:38:50 compute-1 nova_compute[226101]: 2025-12-06 07:38:50.567 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:51 compute-1 ceph-mon[81689]: pgmap v2477: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 426 B/s wr, 12 op/s
Dec 06 07:38:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:51.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:38:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:51.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:38:52 compute-1 nova_compute[226101]: 2025-12-06 07:38:52.184 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:53 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 07:38:53 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 07:38:53 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 07:38:53 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:38:53 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:38:53 compute-1 ceph-mon[81689]: osdmap e307: 3 total, 3 up, 3 in
Dec 06 07:38:53 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 72m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:38:53 compute-1 ceph-mon[81689]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 07:38:53 compute-1 ceph-mon[81689]: Cluster is now healthy
Dec 06 07:38:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:53.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:53.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:54 compute-1 ceph-mon[81689]: mon.compute-1 calling monitor election
Dec 06 07:38:54 compute-1 ceph-mon[81689]: pgmap v2478: 305 pgs: 305 active+clean; 982 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 743 KiB/s wr, 77 op/s
Dec 06 07:38:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1314860632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1306523317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:54 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 07:38:54 compute-1 ceph-mon[81689]: pgmap v2479: 305 pgs: 305 active+clean; 987 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 103 op/s
Dec 06 07:38:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:55.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:55.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:55 compute-1 nova_compute[226101]: 2025-12-06 07:38:55.569 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:55 compute-1 ceph-mon[81689]: pgmap v2480: 305 pgs: 305 active+clean; 987 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 103 op/s
Dec 06 07:38:56 compute-1 podman[277697]: 2025-12-06 07:38:56.073789993 +0000 UTC m=+0.056198782 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:38:56 compute-1 podman[277696]: 2025-12-06 07:38:56.077372198 +0000 UTC m=+0.059762047 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:38:56 compute-1 podman[277698]: 2025-12-06 07:38:56.105020726 +0000 UTC m=+0.084671163 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:38:56 compute-1 ceph-osd[79002]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Dec 06 07:38:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:57 compute-1 nova_compute[226101]: 2025-12-06 07:38:57.186 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:57.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:38:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:57.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
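The anonymous "HEAD / HTTP/1.0" requests landing every ~2 seconds from 192.168.122.100 and 192.168.122.102 have the shape of load-balancer liveness probes against the radosgw beast frontend. A sketch of an equivalent probe; the port is an assumption, since the log does not record which one beast listens on:

    import http.client

    def rgw_alive(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
        # Mirror the probes in the log: HEAD / and accept an HTTP 200.
        try:
            conn = http.client.HTTPConnection(host, port, timeout=timeout)
            conn.request("HEAD", "/")
            ok = conn.getresponse().status == 200
            conn.close()
            return ok
        except OSError:
            return False

    print(rgw_alive("192.168.122.101"))  # placeholder radosgw address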
Dec 06 07:38:58 compute-1 ceph-mon[81689]: pgmap v2481: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 118 op/s
Dec 06 07:38:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:59.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:38:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:59.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1863463196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:59 compute-1 ceph-mon[81689]: pgmap v2482: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Dec 06 07:39:00 compute-1 nova_compute[226101]: 2025-12-06 07:39:00.574 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:01.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:01.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:01.657 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:01.658 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:01.658 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
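The Acquiring / acquired :: waited / "released" :: held triple is the standard trace that oslo_concurrency.lockutils emits from the "inner" wrapper of its synchronized decorator. The pattern the metadata agent is exercising reduces to the following, assuming oslo.concurrency is installed (the lock name is copied from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named in-process lock held; lockutils logs the
        # acquire/wait/hold lines seen above around this call.
        pass

    check_child_processes()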
Dec 06 07:39:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:02 compute-1 ceph-mon[81689]: pgmap v2483: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Dec 06 07:39:02 compute-1 nova_compute[226101]: 2025-12-06 07:39:02.189 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:03 compute-1 ceph-mon[81689]: pgmap v2484: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 169 op/s
Dec 06 07:39:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:03.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:03.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:05.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:05 compute-1 nova_compute[226101]: 2025-12-06 07:39:05.609 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:06 compute-1 ceph-mon[81689]: pgmap v2485: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 143 op/s
Dec 06 07:39:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:07 compute-1 nova_compute[226101]: 2025-12-06 07:39:07.191 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:07 compute-1 ceph-mon[81689]: pgmap v2486: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 144 op/s
Dec 06 07:39:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:07.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:07.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:09.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:09.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:10 compute-1 ceph-mon[81689]: pgmap v2487: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 544 KiB/s wr, 129 op/s
Dec 06 07:39:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/518087762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:39:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/518087762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:39:10 compute-1 nova_compute[226101]: 2025-12-06 07:39:10.654 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:39:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:39:11 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:11.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:39:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:11.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:39:11 compute-1 nova_compute[226101]: 2025-12-06 07:39:11.609 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:11 compute-1 ceph-mon[81689]: pgmap v2488: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 37 KiB/s wr, 86 op/s
Dec 06 07:39:11 compute-1 nova_compute[226101]: 2025-12-06 07:39:11.717 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:11 compute-1 nova_compute[226101]: 2025-12-06 07:39:11.717 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:11 compute-1 nova_compute[226101]: 2025-12-06 07:39:11.718 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:11 compute-1 nova_compute[226101]: 2025-12-06 07:39:11.718 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:39:11 compute-1 nova_compute[226101]: 2025-12-06 07:39:11.718 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:39:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1505260623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.156 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
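update_available_resource sizes the RBD backend by shelling out to ceph df (via oslo_concurrency.processutils, per the two nova_compute lines above). The same call can be reproduced with the standard library alone; this sketch assumes the client.openstack keyring and /etc/ceph/ceph.conf from the log are readable:

    import json
    import subprocess

    def ceph_df() -> dict:
        out = subprocess.run(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    stats = ceph_df()["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])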
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.192 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.281 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.281 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.284 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.284 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.288 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.288 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.291 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.292 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.470 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.471 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3520MB free_disk=20.622718811035156GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.471 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.472 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.853 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 829161c5-19b5-459e-88a3-58512aaa5fc7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.854 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.854 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 7bca344a-3af1-4217-b97d-3f288712b57d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.854 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.855 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:39:12 compute-1 nova_compute[226101]: 2025-12-06 07:39:12.855 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=1088MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:39:13 compute-1 nova_compute[226101]: 2025-12-06 07:39:13.007 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:39:13 compute-1 nova_compute[226101]: 2025-12-06 07:39:13.029 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:39:13 compute-1 nova_compute[226101]: 2025-12-06 07:39:13.030 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
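The inventory payload above is the basis Placement uses to derive schedulable capacity: per resource class, capacity = (total - reserved) * allocation_ratio. A worked check against the logged numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1 schedulable units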
Dec 06 07:39:13 compute-1 nova_compute[226101]: 2025-12-06 07:39:13.044 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:39:13 compute-1 nova_compute[226101]: 2025-12-06 07:39:13.106 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:39:13 compute-1 nova_compute[226101]: 2025-12-06 07:39:13.365 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:39:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:13.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:39:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:13 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:13.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1505260623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/896338344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:13.749 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:13.751 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:39:13 compute-1 nova_compute[226101]: 2025-12-06 07:39:13.750 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:15.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:15 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:15.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:15 compute-1 nova_compute[226101]: 2025-12-06 07:39:15.657 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/846822878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:15 compute-1 ceph-mon[81689]: pgmap v2489: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 189 KiB/s rd, 14 KiB/s wr, 8 op/s
Dec 06 07:39:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:39:16 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2832921854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:16 compute-1 nova_compute[226101]: 2025-12-06 07:39:16.291 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.926s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:16 compute-1 nova_compute[226101]: 2025-12-06 07:39:16.296 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:39:16 compute-1 nova_compute[226101]: 2025-12-06 07:39:16.465 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:39:16 compute-1 nova_compute[226101]: 2025-12-06 07:39:16.467 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:39:16 compute-1 nova_compute[226101]: 2025-12-06 07:39:16.467 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:17 compute-1 nova_compute[226101]: 2025-12-06 07:39:17.193 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:17 compute-1 nova_compute[226101]: 2025-12-06 07:39:17.447 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:17 compute-1 nova_compute[226101]: 2025-12-06 07:39:17.447 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:39:17 compute-1 nova_compute[226101]: 2025-12-06 07:39:17.447 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:39:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:17.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:17.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:17 compute-1 ceph-mon[81689]: pgmap v2490: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s rd, 14 KiB/s wr, 2 op/s
Dec 06 07:39:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2832921854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:18 compute-1 nova_compute[226101]: 2025-12-06 07:39:18.166 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:39:18 compute-1 nova_compute[226101]: 2025-12-06 07:39:18.167 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:39:18 compute-1 nova_compute[226101]: 2025-12-06 07:39:18.167 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:39:18 compute-1 nova_compute[226101]: 2025-12-06 07:39:18.167 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 829161c5-19b5-459e-88a3-58512aaa5fc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e308 e308: 3 total, 3 up, 3 in
Dec 06 07:39:19 compute-1 ceph-mon[81689]: pgmap v2491: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.8 KiB/s rd, 14 KiB/s wr, 4 op/s
Dec 06 07:39:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4080526882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:19.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:19 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:19.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:19.752 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
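The DbSetCommand above is the metadata agent acknowledging nb_cfg=53: it writes the processed sequence number into its own Chassis_Private row so OVN can tell the agent is alive. In ovsdbapp terms the transaction reduces to a db_set; a sketch only, since it presumes an already-connected southbound API object (here called sb_idl, an assumption), with the chassis UUID copied from the log:

    # sb_idl: assumed pre-connected ovsdbapp southbound API connection.
    CHASSIS = "03fe054d-d727-4af3-9c5e-92e57505f242"

    sb_idl.db_set(
        "Chassis_Private", CHASSIS,
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "53"}),
        if_exists=True,
    ).execute(check_error=True)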
Dec 06 07:39:20 compute-1 ceph-mon[81689]: pgmap v2492: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1023 KiB/s rd, 13 KiB/s wr, 40 op/s
Dec 06 07:39:20 compute-1 ceph-mon[81689]: osdmap e308: 3 total, 3 up, 3 in
Dec 06 07:39:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2166284350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1132695538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:20 compute-1 nova_compute[226101]: 2025-12-06 07:39:20.680 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:21.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:21 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:21.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3987226000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/694726960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:21 compute-1 ceph-mon[81689]: pgmap v2494: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 KiB/s wr, 110 op/s
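pgmap lines encode the placement-group state histogram as "count state" pairs, so a quick parse makes transient transitions like the snaptrim states here easy to watch alongside the steady active+clean baseline:

    import re

    line = ("pgmap v2494: 305 pgs: 3 active+clean+snaptrim_wait, "
            "2 active+clean+snaptrim, 300 active+clean; 1.0 GiB data")
    histogram = {
        state: int(count)
        for count, state in re.findall(r"(\d+) ([a-z+_]+)", line.split(":", 2)[2])
    }
    print(histogram)
    # {'active+clean+snaptrim_wait': 3, 'active+clean+snaptrim': 2, 'active+clean': 300}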
Dec 06 07:39:21 compute-1 nova_compute[226101]: 2025-12-06 07:39:21.851 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updating instance_info_cache with network_info: [{"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
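The instance_info_cache payload is plain JSON once past the logger prefix; consumers walk network -> subnets -> ips per VIF. A sketch extracting the fixed addresses from a blob shaped like the one above:

    import json

    def fixed_ips(network_info_json: str):
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        yield vif["id"], ip["address"]

    # For the VIF logged above this yields
    # ("1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "10.100.0.4").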
Dec 06 07:39:21 compute-1 nova_compute[226101]: 2025-12-06 07:39:21.888 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-829161c5-19b5-459e-88a3-58512aaa5fc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:39:21 compute-1 nova_compute[226101]: 2025-12-06 07:39:21.889 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:39:21 compute-1 nova_compute[226101]: 2025-12-06 07:39:21.890 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:21 compute-1 nova_compute[226101]: 2025-12-06 07:39:21.890 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:21 compute-1 nova_compute[226101]: 2025-12-06 07:39:21.891 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:21 compute-1 nova_compute[226101]: 2025-12-06 07:39:21.892 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:21 compute-1 nova_compute[226101]: 2025-12-06 07:39:21.892 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:39:22 compute-1 nova_compute[226101]: 2025-12-06 07:39:22.194 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:23.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:23 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:23.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:23 compute-1 nova_compute[226101]: 2025-12-06 07:39:23.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3875607001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3527279665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:25.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:25.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:25 compute-1 nova_compute[226101]: 2025-12-06 07:39:25.684 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:25 compute-1 ceph-mon[81689]: pgmap v2495: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 122 op/s
Dec 06 07:39:26 compute-1 nova_compute[226101]: 2025-12-06 07:39:26.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:27 compute-1 podman[277803]: 2025-12-06 07:39:27.110859168 +0000 UTC m=+0.081365896 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Dec 06 07:39:27 compute-1 podman[277804]: 2025-12-06 07:39:27.121248052 +0000 UTC m=+0.095872108 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 07:39:27 compute-1 podman[277805]: 2025-12-06 07:39:27.1644441 +0000 UTC m=+0.136498688 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Dec 06 07:39:27 compute-1 nova_compute[226101]: 2025-12-06 07:39:27.197 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3445992299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:27 compute-1 ceph-mon[81689]: pgmap v2496: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 122 op/s
Dec 06 07:39:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:27.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:27.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:28 compute-1 ceph-mon[81689]: pgmap v2497: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.2 KiB/s wr, 195 op/s
Dec 06 07:39:28 compute-1 nova_compute[226101]: 2025-12-06 07:39:28.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:29 compute-1 ceph-mon[81689]: pgmap v2498: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.4 KiB/s wr, 175 op/s
Dec 06 07:39:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:29.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:39:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:29.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:39:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e309 e309: 3 total, 3 up, 3 in
Dec 06 07:39:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:39:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.5 total, 600.0 interval
                                           Cumulative writes: 10K writes, 52K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.11 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1285 writes, 6423 keys, 1285 commit groups, 1.0 writes per commit group, ingest: 13.98 MB, 0.02 MB/s
                                           Interval WAL: 1285 writes, 1285 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     14.7      4.32              0.18        30    0.144       0      0       0.0       0.0
                                             L6      1/0   10.90 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4     39.3     33.1      8.51              0.77        29    0.293    187K    16K       0.0       0.0
                                            Sum      1/0   10.90 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     26.1     26.9     12.84              0.95        59    0.218    187K    16K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.7     18.3     18.4      2.83              0.13         8    0.353     34K   2088       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     39.3     33.1      8.51              0.77        29    0.293    187K    16K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     14.9      4.28              0.18        29    0.147       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.062, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.34 GB write, 0.08 MB/s write, 0.33 GB read, 0.08 MB/s read, 12.8 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 2.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 37.41 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000269 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2149,36.03 MB,11.8512%) FilterBlock(59,527.92 KB,0.169588%) IndexBlock(59,889.84 KB,0.285851%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 07:39:30 compute-1 ceph-mon[81689]: osdmap e309: 3 total, 3 up, 3 in
Dec 06 07:39:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/704595085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:30 compute-1 nova_compute[226101]: 2025-12-06 07:39:30.686 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:31.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:39:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:31.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:39:31 compute-1 ceph-mon[81689]: pgmap v2500: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 161 op/s
Dec 06 07:39:32 compute-1 nova_compute[226101]: 2025-12-06 07:39:32.198 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:33 compute-1 ceph-mon[81689]: pgmap v2501: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 28 KiB/s wr, 203 op/s
Dec 06 07:39:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:33.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:33.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:39:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:35.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:39:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:35.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:35 compute-1 ceph-mon[81689]: pgmap v2502: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 28 KiB/s wr, 203 op/s
Dec 06 07:39:35 compute-1 nova_compute[226101]: 2025-12-06 07:39:35.738 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:36 compute-1 nova_compute[226101]: 2025-12-06 07:39:36.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:37 compute-1 nova_compute[226101]: 2025-12-06 07:39:37.222 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:37.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:39:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:37.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:39:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2089923814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3332702504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:39:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3332702504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:39:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e310 e310: 3 total, 3 up, 3 in
Dec 06 07:39:38 compute-1 ceph-mon[81689]: pgmap v2503: 305 pgs: 305 active+clean; 925 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 28 KiB/s wr, 201 op/s
Dec 06 07:39:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2424324093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:39.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:39.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e311 e311: 3 total, 3 up, 3 in
Dec 06 07:39:39 compute-1 ceph-mon[81689]: osdmap e310: 3 total, 3 up, 3 in
Dec 06 07:39:39 compute-1 ceph-mon[81689]: pgmap v2505: 305 pgs: 305 active+clean; 898 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 30 KiB/s wr, 193 op/s
Dec 06 07:39:39 compute-1 nova_compute[226101]: 2025-12-06 07:39:39.838 226109 DEBUG oslo_concurrency.lockutils [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "7bca344a-3af1-4217-b97d-3f288712b57d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:39 compute-1 nova_compute[226101]: 2025-12-06 07:39:39.838 226109 DEBUG oslo_concurrency.lockutils [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:39 compute-1 nova_compute[226101]: 2025-12-06 07:39:39.839 226109 DEBUG oslo_concurrency.lockutils [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:39 compute-1 nova_compute[226101]: 2025-12-06 07:39:39.839 226109 DEBUG oslo_concurrency.lockutils [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:39 compute-1 nova_compute[226101]: 2025-12-06 07:39:39.839 226109 DEBUG oslo_concurrency.lockutils [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:39 compute-1 nova_compute[226101]: 2025-12-06 07:39:39.840 226109 INFO nova.compute.manager [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Terminating instance
Dec 06 07:39:39 compute-1 nova_compute[226101]: 2025-12-06 07:39:39.841 226109 DEBUG nova.compute.manager [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:39:39 compute-1 kernel: tap66ec4ebb-3c (unregistering): left promiscuous mode
Dec 06 07:39:39 compute-1 NetworkManager[49031]: <info>  [1765006779.9415] device (tap66ec4ebb-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:39:39 compute-1 ovn_controller[130279]: 2025-12-06T07:39:39Z|00528|binding|INFO|Releasing lport 66ec4ebb-3c12-40ec-8b46-858eeb166d64 from this chassis (sb_readonly=0)
Dec 06 07:39:39 compute-1 ovn_controller[130279]: 2025-12-06T07:39:39Z|00529|binding|INFO|Setting lport 66ec4ebb-3c12-40ec-8b46-858eeb166d64 down in Southbound
Dec 06 07:39:39 compute-1 nova_compute[226101]: 2025-12-06 07:39:39.954 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:39 compute-1 ovn_controller[130279]: 2025-12-06T07:39:39Z|00530|binding|INFO|Removing iface tap66ec4ebb-3c ovn-installed in OVS
Dec 06 07:39:39 compute-1 nova_compute[226101]: 2025-12-06 07:39:39.956 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:39.966 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:bc:74 10.100.0.3'], port_security=['fa:16:3e:db:bc:74 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7bca344a-3af1-4217-b97d-3f288712b57d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=66ec4ebb-3c12-40ec-8b46-858eeb166d64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:39.967 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 66ec4ebb-3c12-40ec-8b46-858eeb166d64 in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 unbound from our chassis
Dec 06 07:39:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:39.969 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40bc9d32-839b-4591-acbc-c5d535123ff1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:39:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:39.971 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[36a65b08-c6e1-4461-b7cd-67ca397a33b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:39.971 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1 namespace which is not needed anymore
Dec 06 07:39:39 compute-1 nova_compute[226101]: 2025-12-06 07:39:39.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:39 compute-1 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000082.scope: Deactivated successfully.
Dec 06 07:39:39 compute-1 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000082.scope: Consumed 22.115s CPU time.
Dec 06 07:39:39 compute-1 systemd-machined[190302]: Machine qemu-60-instance-00000082 terminated.
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.062 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.067 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.077 226109 INFO nova.virt.libvirt.driver [-] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Instance destroyed successfully.
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.077 226109 DEBUG nova.objects.instance [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'resources' on Instance uuid 7bca344a-3af1-4217-b97d-3f288712b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.103 226109 DEBUG nova.virt.libvirt.vif [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:36:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1059207158',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1059207158',id=130,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:36:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-goa0p5ar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:36:48Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=7bca344a-3af1-4217-b97d-3f288712b57d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "address": "fa:16:3e:db:bc:74", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66ec4ebb-3c", "ovs_interfaceid": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.104 226109 DEBUG nova.network.os_vif_util [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "address": "fa:16:3e:db:bc:74", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66ec4ebb-3c", "ovs_interfaceid": "66ec4ebb-3c12-40ec-8b46-858eeb166d64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.105 226109 DEBUG nova.network.os_vif_util [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:bc:74,bridge_name='br-int',has_traffic_filtering=True,id=66ec4ebb-3c12-40ec-8b46-858eeb166d64,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66ec4ebb-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.105 226109 DEBUG os_vif [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:bc:74,bridge_name='br-int',has_traffic_filtering=True,id=66ec4ebb-3c12-40ec-8b46-858eeb166d64,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66ec4ebb-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.107 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.107 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66ec4ebb-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.108 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.109 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.112 226109 INFO os_vif [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:bc:74,bridge_name='br-int',has_traffic_filtering=True,id=66ec4ebb-3c12-40ec-8b46-858eeb166d64,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66ec4ebb-3c')
Dec 06 07:39:40 compute-1 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[276234]: [NOTICE]   (276254) : haproxy version is 2.8.14-c23fe91
Dec 06 07:39:40 compute-1 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[276234]: [NOTICE]   (276254) : path to executable is /usr/sbin/haproxy
Dec 06 07:39:40 compute-1 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[276234]: [WARNING]  (276254) : Exiting Master process...
Dec 06 07:39:40 compute-1 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[276234]: [ALERT]    (276254) : Current worker (276256) exited with code 143 (Terminated)
Dec 06 07:39:40 compute-1 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[276234]: [WARNING]  (276254) : All workers exited. Exiting... (0)
Dec 06 07:39:40 compute-1 systemd[1]: libpod-41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0.scope: Deactivated successfully.
Dec 06 07:39:40 compute-1 conmon[276234]: conmon 41ba7019ca14abadaba7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0.scope/container/memory.events
Dec 06 07:39:40 compute-1 podman[277893]: 2025-12-06 07:39:40.134238906 +0000 UTC m=+0.070867160 container died 41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:39:40 compute-1 ovn_controller[130279]: 2025-12-06T07:39:40Z|00531|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec 06 07:39:40 compute-1 ovn_controller[130279]: 2025-12-06T07:39:40Z|00532|binding|INFO|Releasing lport 0d2044a5-87cb-4c28-912c-9a2682bb94de from this chassis (sb_readonly=0)
Dec 06 07:39:40 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0-userdata-shm.mount: Deactivated successfully.
Dec 06 07:39:40 compute-1 systemd[1]: var-lib-containers-storage-overlay-d68691bb501f12fc19d2d24b23b6f6325ed83f1fa9ebbad0ae4bd40bcc870ccf-merged.mount: Deactivated successfully.
Dec 06 07:39:40 compute-1 podman[277893]: 2025-12-06 07:39:40.267071157 +0000 UTC m=+0.203699411 container cleanup 41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.283 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:40 compute-1 systemd[1]: libpod-conmon-41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0.scope: Deactivated successfully.
Dec 06 07:39:40 compute-1 podman[277949]: 2025-12-06 07:39:40.343743727 +0000 UTC m=+0.052854833 container remove 41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:39:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:40.351 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ccdc2c47-5b39-4f79-8924-7a36cf01031b]: (4, ('Sat Dec  6 07:39:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1 (41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0)\n41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0\nSat Dec  6 07:39:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1 (41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0)\n41ba7019ca14abadaba7735781a12452fa529219ed862fdadff9c5e0bbd3f3a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:40.352 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d5328140-0441-4f10-858e-520f3192b65a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:40.353 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.355 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:40 compute-1 kernel: tap40bc9d32-80: left promiscuous mode
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.369 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.373 226109 DEBUG nova.compute.manager [req-cb185e1f-33c9-4a0f-ab33-3283cbdc506b req-6a8267c6-ec40-4500-86f9-215aed4a8c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Received event network-vif-unplugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:40.373 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8b15a1f8-e245-4cae-8048-d78b4a68c90e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.374 226109 DEBUG oslo_concurrency.lockutils [req-cb185e1f-33c9-4a0f-ab33-3283cbdc506b req-6a8267c6-ec40-4500-86f9-215aed4a8c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.374 226109 DEBUG oslo_concurrency.lockutils [req-cb185e1f-33c9-4a0f-ab33-3283cbdc506b req-6a8267c6-ec40-4500-86f9-215aed4a8c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.374 226109 DEBUG oslo_concurrency.lockutils [req-cb185e1f-33c9-4a0f-ab33-3283cbdc506b req-6a8267c6-ec40-4500-86f9-215aed4a8c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.375 226109 DEBUG nova.compute.manager [req-cb185e1f-33c9-4a0f-ab33-3283cbdc506b req-6a8267c6-ec40-4500-86f9-215aed4a8c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] No waiting events found dispatching network-vif-unplugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.375 226109 DEBUG nova.compute.manager [req-cb185e1f-33c9-4a0f-ab33-3283cbdc506b req-6a8267c6-ec40-4500-86f9-215aed4a8c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Received event network-vif-unplugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:39:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:40.395 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[efc9036b-1a90-40db-b0dc-677647f5bdf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:40.396 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e2cbed96-2a2c-4d87-a8af-76319f7167a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:40.412 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e7414754-0bb6-4ab4-8dc1-ae35d662499f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685969, 'reachable_time': 44957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277965, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:40.414 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:39:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:40.414 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[179d0eea-6c9e-40b1-9786-f06b7e3d3e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:40 compute-1 systemd[1]: run-netns-ovnmeta\x2d40bc9d32\x2d839b\x2d4591\x2dacbc\x2dc5d535123ff1.mount: Deactivated successfully.
Dec 06 07:39:40 compute-1 ceph-mon[81689]: osdmap e311: 3 total, 3 up, 3 in
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.626 226109 INFO nova.virt.libvirt.driver [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Deleting instance files /var/lib/nova/instances/7bca344a-3af1-4217-b97d-3f288712b57d_del
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.627 226109 INFO nova.virt.libvirt.driver [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Deletion of /var/lib/nova/instances/7bca344a-3af1-4217-b97d-3f288712b57d_del complete
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.727 226109 INFO nova.compute.manager [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Took 0.89 seconds to destroy the instance on the hypervisor.
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.727 226109 DEBUG oslo.service.loopingcall [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.727 226109 DEBUG nova.compute.manager [-] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:39:40 compute-1 nova_compute[226101]: 2025-12-06 07:39:40.728 226109 DEBUG nova.network.neutron [-] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:39:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:41 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:41.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:41.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:41 compute-1 nova_compute[226101]: 2025-12-06 07:39:41.829 226109 DEBUG nova.network.neutron [-] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:39:41 compute-1 nova_compute[226101]: 2025-12-06 07:39:41.864 226109 INFO nova.compute.manager [-] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Took 1.14 seconds to deallocate network for instance.
Dec 06 07:39:41 compute-1 ceph-mon[81689]: pgmap v2507: 305 pgs: 305 active+clean; 859 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.2 KiB/s wr, 140 op/s
Dec 06 07:39:41 compute-1 nova_compute[226101]: 2025-12-06 07:39:41.964 226109 DEBUG oslo_concurrency.lockutils [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:41 compute-1 nova_compute[226101]: 2025-12-06 07:39:41.964 226109 DEBUG oslo_concurrency.lockutils [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.223 226109 DEBUG oslo_concurrency.processutils [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.250 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.595 226109 DEBUG nova.compute.manager [req-2d237f4f-6cfd-405b-ba4f-b3f7983b782b req-69ae2be9-1b96-4627-ac59-8027cc2595fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Received event network-vif-plugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.595 226109 DEBUG oslo_concurrency.lockutils [req-2d237f4f-6cfd-405b-ba4f-b3f7983b782b req-69ae2be9-1b96-4627-ac59-8027cc2595fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.596 226109 DEBUG oslo_concurrency.lockutils [req-2d237f4f-6cfd-405b-ba4f-b3f7983b782b req-69ae2be9-1b96-4627-ac59-8027cc2595fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.596 226109 DEBUG oslo_concurrency.lockutils [req-2d237f4f-6cfd-405b-ba4f-b3f7983b782b req-69ae2be9-1b96-4627-ac59-8027cc2595fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.596 226109 DEBUG nova.compute.manager [req-2d237f4f-6cfd-405b-ba4f-b3f7983b782b req-69ae2be9-1b96-4627-ac59-8027cc2595fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] No waiting events found dispatching network-vif-plugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.596 226109 WARNING nova.compute.manager [req-2d237f4f-6cfd-405b-ba4f-b3f7983b782b req-69ae2be9-1b96-4627-ac59-8027cc2595fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Received unexpected event network-vif-plugged-66ec4ebb-3c12-40ec-8b46-858eeb166d64 for instance with vm_state deleted and task_state None.
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.597 226109 DEBUG nova.compute.manager [req-2d237f4f-6cfd-405b-ba4f-b3f7983b782b req-69ae2be9-1b96-4627-ac59-8027cc2595fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Received event network-vif-deleted-66ec4ebb-3c12-40ec-8b46-858eeb166d64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:39:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3522121454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.699 226109 DEBUG oslo_concurrency.processutils [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.705 226109 DEBUG nova.compute.provider_tree [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:39:42 compute-1 nova_compute[226101]: 2025-12-06 07:39:42.876 226109 DEBUG nova.scheduler.client.report [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:39:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3522121454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:43.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:43 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:43.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:43 compute-1 nova_compute[226101]: 2025-12-06 07:39:43.925 226109 DEBUG oslo_concurrency.lockutils [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:44 compute-1 nova_compute[226101]: 2025-12-06 07:39:44.026 226109 INFO nova.scheduler.client.report [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Deleted allocations for instance 7bca344a-3af1-4217-b97d-3f288712b57d
Dec 06 07:39:44 compute-1 nova_compute[226101]: 2025-12-06 07:39:44.196 226109 DEBUG oslo_concurrency.lockutils [None req-33e2100c-4902-482f-8ee0-7a4db162fe5b 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "7bca344a-3af1-4217-b97d-3f288712b57d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:44 compute-1 ceph-mon[81689]: pgmap v2508: 305 pgs: 305 active+clean; 816 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.6 KiB/s wr, 180 op/s
Dec 06 07:39:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e312 e312: 3 total, 3 up, 3 in
Dec 06 07:39:44 compute-1 sudo[277989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:44 compute-1 sudo[277989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:44 compute-1 sudo[277989]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:44 compute-1 sudo[278014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:39:44 compute-1 sudo[278014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:44 compute-1 sudo[278014]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:44 compute-1 sudo[278039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:44 compute-1 sudo[278039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:44 compute-1 sudo[278039]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:44 compute-1 sudo[278064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 07:39:44 compute-1 sudo[278064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:45 compute-1 nova_compute[226101]: 2025-12-06 07:39:45.108 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:45 compute-1 podman[278161]: 2025-12-06 07:39:45.30253398 +0000 UTC m=+0.065191540 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:39:45 compute-1 ceph-mon[81689]: osdmap e312: 3 total, 3 up, 3 in
Dec 06 07:39:45 compute-1 ceph-mon[81689]: pgmap v2510: 305 pgs: 305 active+clean; 816 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 5.0 KiB/s wr, 104 op/s
Dec 06 07:39:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:45 compute-1 podman[278161]: 2025-12-06 07:39:45.401909919 +0000 UTC m=+0.164567459 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 07:39:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:39:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:39:45 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:45.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:39:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:45.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:39:45 compute-1 sudo[278064]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:45 compute-1 sudo[278283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:45 compute-1 sudo[278283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:45 compute-1 sudo[278283]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:45 compute-1 sudo[278308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:39:46 compute-1 sudo[278308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:46 compute-1 sudo[278308]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:46 compute-1 sudo[278333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:46 compute-1 sudo[278333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:46 compute-1 sudo[278333]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:46 compute-1 sudo[278358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:39:46 compute-1 sudo[278358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:46 compute-1 sudo[278358]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:47 compute-1 nova_compute[226101]: 2025-12-06 07:39:47.226 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:39:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:39:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:47 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:47.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:47.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:48 compute-1 ceph-mon[81689]: pgmap v2511: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 5.0 KiB/s wr, 109 op/s
Dec 06 07:39:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2183177135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:39:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:39:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:39:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4058252904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:49 compute-1 ceph-mon[81689]: pgmap v2512: 305 pgs: 305 active+clean; 776 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 37 KiB/s wr, 114 op/s
Dec 06 07:39:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:49.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:49.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:50 compute-1 nova_compute[226101]: 2025-12-06 07:39:50.112 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2086763005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e313 e313: 3 total, 3 up, 3 in
Dec 06 07:39:51 compute-1 ceph-mon[81689]: osdmap e313: 3 total, 3 up, 3 in
Dec 06 07:39:51 compute-1 ceph-mon[81689]: pgmap v2514: 305 pgs: 305 active+clean; 806 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 2.3 MiB/s wr, 89 op/s
Dec 06 07:39:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:39:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:39:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:51 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:51.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:52 compute-1 nova_compute[226101]: 2025-12-06 07:39:52.228 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e314 e314: 3 total, 3 up, 3 in
Dec 06 07:39:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2420898333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:39:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2420898333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:39:52 compute-1 sudo[278415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:52 compute-1 sudo[278415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:52 compute-1 sudo[278415]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:52 compute-1 sudo[278440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:39:52 compute-1 sudo[278440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:52 compute-1 sudo[278440]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:53 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:53.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:53.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:53 compute-1 ceph-mon[81689]: osdmap e314: 3 total, 3 up, 3 in
Dec 06 07:39:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3637923805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:53 compute-1 ceph-mon[81689]: pgmap v2516: 305 pgs: 305 active+clean; 793 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 2.7 MiB/s wr, 146 op/s
Dec 06 07:39:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4204668804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e315 e315: 3 total, 3 up, 3 in
Dec 06 07:39:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:55 compute-1 ceph-mon[81689]: osdmap e315: 3 total, 3 up, 3 in
Dec 06 07:39:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1604656726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:55 compute-1 nova_compute[226101]: 2025-12-06 07:39:55.076 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006780.0745192, 7bca344a-3af1-4217-b97d-3f288712b57d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:39:55 compute-1 nova_compute[226101]: 2025-12-06 07:39:55.076 226109 INFO nova.compute.manager [-] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] VM Stopped (Lifecycle Event)
Dec 06 07:39:55 compute-1 nova_compute[226101]: 2025-12-06 07:39:55.116 226109 DEBUG nova.compute.manager [None req-53371eb5-4513-4dd1-8512-cbc9f21182c1 - - - - - -] [instance: 7bca344a-3af1-4217-b97d-3f288712b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:55 compute-1 nova_compute[226101]: 2025-12-06 07:39:55.125 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:55.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:55 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:55.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:56 compute-1 ceph-mon[81689]: pgmap v2518: 305 pgs: 305 active+clean; 793 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 3.5 MiB/s wr, 130 op/s
Dec 06 07:39:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2926417048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4267829070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:57 compute-1 ceph-mon[81689]: pgmap v2519: 305 pgs: 305 active+clean; 746 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 124 KiB/s rd, 3.9 MiB/s wr, 178 op/s
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.231 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:39:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2787199876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:39:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:39:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2787199876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:39:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.679 226109 DEBUG oslo_concurrency.lockutils [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.680 226109 DEBUG oslo_concurrency.lockutils [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.680 226109 DEBUG oslo_concurrency.lockutils [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.680 226109 DEBUG oslo_concurrency.lockutils [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.680 226109 DEBUG oslo_concurrency.lockutils [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.681 226109 INFO nova.compute.manager [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Terminating instance
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.682 226109 DEBUG nova.compute.manager [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:39:57 compute-1 kernel: tapbfac9ac2-c1 (unregistering): left promiscuous mode
Dec 06 07:39:57 compute-1 NetworkManager[49031]: <info>  [1765006797.7331] device (tapbfac9ac2-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:39:57 compute-1 ovn_controller[130279]: 2025-12-06T07:39:57Z|00533|binding|INFO|Releasing lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 from this chassis (sb_readonly=0)
Dec 06 07:39:57 compute-1 ovn_controller[130279]: 2025-12-06T07:39:57Z|00534|binding|INFO|Setting lport bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 down in Southbound
Dec 06 07:39:57 compute-1 ovn_controller[130279]: 2025-12-06T07:39:57Z|00535|binding|INFO|Removing iface tapbfac9ac2-c1 ovn-installed in OVS
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.793 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.796 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.809 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:57 compute-1 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000081.scope: Deactivated successfully.
Dec 06 07:39:57 compute-1 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000081.scope: Consumed 18.889s CPU time.
Dec 06 07:39:57 compute-1 systemd-machined[190302]: Machine qemu-62-instance-00000081 terminated.
Dec 06 07:39:57 compute-1 podman[278468]: 2025-12-06 07:39:57.870280436 +0000 UTC m=+0.062161049 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.871 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:31:81 10.100.0.6'], port_security=['fa:16:3e:c3:31:81 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3d4b97f9-b8c3-4764-87f9-4006968fecd4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '8', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.873 139580 INFO neutron.agent.ovn.metadata.agent [-] Port bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.874 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272
Dec 06 07:39:57 compute-1 podman[278465]: 2025-12-06 07:39:57.877177827 +0000 UTC m=+0.069041800 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.887 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1be3f8-296e-4e62-b2af-e7adb5688589]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:57 compute-1 podman[278469]: 2025-12-06 07:39:57.906595443 +0000 UTC m=+0.095835648 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.912 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[40fe31a6-c8aa-45aa-b363-2f04d8dba3a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.918 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ea89b1c8-2698-4ba2-bd3f-6cb4f56602b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.921 226109 INFO nova.virt.libvirt.driver [-] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Instance destroyed successfully.
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.921 226109 DEBUG nova.objects.instance [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'resources' on Instance uuid 3d4b97f9-b8c3-4764-87f9-4006968fecd4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.945 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[118604d1-5c09-426e-ad20-cf7d1317f6d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.963 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cd394e37-87ac-4cf8-88df-ca9813383911]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672206, 'reachable_time': 34158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278550, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.977 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c71dba96-b80f-4cc4-990e-a185d5d7949e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672218, 'tstamp': 672218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278551, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672221, 'tstamp': 672221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278551, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.978 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.980 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:57 compute-1 nova_compute[226101]: 2025-12-06 07:39:57.983 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.983 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.984 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.984 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:57 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:39:57.984 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2787199876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:39:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2787199876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.069890) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798069990, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2486, "num_deletes": 261, "total_data_size": 5808722, "memory_usage": 5898640, "flush_reason": "Manual Compaction"}
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.074 226109 DEBUG nova.virt.libvirt.vif [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=129,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:37:36Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-pbz3nfcu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:37:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=3d4b97f9-b8c3-4764-87f9-4006968fecd4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.075 226109 DEBUG nova.network.os_vif_util [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "address": "fa:16:3e:c3:31:81", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfac9ac2-c1", "ovs_interfaceid": "bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.076 226109 DEBUG nova.network.os_vif_util [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.076 226109 DEBUG os_vif [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.078 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.078 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfac9ac2-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.079 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.080 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.083 226109 INFO os_vif [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:31:81,bridge_name='br-int',has_traffic_filtering=True,id=bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfac9ac2-c1')
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798094744, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 3887745, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50752, "largest_seqno": 53233, "table_properties": {"data_size": 3877010, "index_size": 6845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23602, "raw_average_key_size": 21, "raw_value_size": 3855123, "raw_average_value_size": 3495, "num_data_blocks": 296, "num_entries": 1103, "num_filter_entries": 1103, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006574, "oldest_key_time": 1765006574, "file_creation_time": 1765006798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 24873 microseconds, and 8546 cpu microseconds.
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.094780) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 3887745 bytes OK
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.094798) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.096545) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.096559) EVENT_LOG_v1 {"time_micros": 1765006798096555, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.096577) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 5797263, prev total WAL file size 5797263, number of live WAL files 2.
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.097837) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(3796KB)], [99(10MB)]
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798097884, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 15316174, "oldest_snapshot_seqno": -1}
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 8556 keys, 13157254 bytes, temperature: kUnknown
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798178390, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 13157254, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13099356, "index_size": 35386, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21445, "raw_key_size": 222288, "raw_average_key_size": 25, "raw_value_size": 12946254, "raw_average_value_size": 1513, "num_data_blocks": 1390, "num_entries": 8556, "num_filter_entries": 8556, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765006798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.178708) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 13157254 bytes
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.180779) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.9 rd, 163.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 10.9 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 9098, records dropped: 542 output_compression: NoCompression
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.180802) EVENT_LOG_v1 {"time_micros": 1765006798180793, "job": 62, "event": "compaction_finished", "compaction_time_micros": 80651, "compaction_time_cpu_micros": 28588, "output_level": 6, "num_output_files": 1, "total_output_size": 13157254, "num_input_records": 9098, "num_output_records": 8556, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798181474, "job": 62, "event": "table_file_deletion", "file_number": 101}
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798183492, "job": 62, "event": "table_file_deletion", "file_number": 99}
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.097757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.183523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.183529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.183534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.183536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:39:58.183538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.574 226109 INFO nova.virt.libvirt.driver [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Deleting instance files /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4_del
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.575 226109 INFO nova.virt.libvirt.driver [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Deletion of /var/lib/nova/instances/3d4b97f9-b8c3-4764-87f9-4006968fecd4_del complete
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.774 226109 INFO nova.compute.manager [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Took 1.09 seconds to destroy the instance on the hypervisor.
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.775 226109 DEBUG oslo.service.loopingcall [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.775 226109 DEBUG nova.compute.manager [-] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:39:58 compute-1 nova_compute[226101]: 2025-12-06 07:39:58.775 226109 DEBUG nova.network.neutron [-] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:39:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:59 compute-1 ceph-mon[81689]: pgmap v2520: 305 pgs: 305 active+clean; 735 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 126 KiB/s rd, 3.1 MiB/s wr, 177 op/s
Dec 06 07:39:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e316 e316: 3 total, 3 up, 3 in
Dec 06 07:39:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:39:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:39:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:59.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:39:59 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:59.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.925 226109 DEBUG nova.compute.manager [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-unplugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.926 226109 DEBUG oslo_concurrency.lockutils [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.926 226109 DEBUG oslo_concurrency.lockutils [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.926 226109 DEBUG oslo_concurrency.lockutils [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.927 226109 DEBUG nova.compute.manager [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] No waiting events found dispatching network-vif-unplugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.927 226109 DEBUG nova.compute.manager [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-unplugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.927 226109 DEBUG nova.compute.manager [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.928 226109 DEBUG oslo_concurrency.lockutils [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.928 226109 DEBUG oslo_concurrency.lockutils [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.928 226109 DEBUG oslo_concurrency.lockutils [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.928 226109 DEBUG nova.compute.manager [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] No waiting events found dispatching network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:59 compute-1 nova_compute[226101]: 2025-12-06 07:39:59.929 226109 WARNING nova.compute.manager [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received unexpected event network-vif-plugged-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 for instance with vm_state active and task_state deleting.
Dec 06 07:40:00 compute-1 ceph-mon[81689]: osdmap e316: 3 total, 3 up, 3 in
Dec 06 07:40:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/351957969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:00 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 07:40:00 compute-1 nova_compute[226101]: 2025-12-06 07:40:00.910 226109 DEBUG nova.network.neutron [-] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:40:00 compute-1 nova_compute[226101]: 2025-12-06 07:40:00.963 226109 INFO nova.compute.manager [-] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Took 2.19 seconds to deallocate network for instance.
Dec 06 07:40:01 compute-1 nova_compute[226101]: 2025-12-06 07:40:01.052 226109 DEBUG oslo_concurrency.lockutils [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:01 compute-1 nova_compute[226101]: 2025-12-06 07:40:01.053 226109 DEBUG oslo_concurrency.lockutils [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:01 compute-1 nova_compute[226101]: 2025-12-06 07:40:01.112 226109 DEBUG nova.compute.manager [req-d4a6c1b5-08bf-45d1-a510-2d6b7be7e69b req-e1524869-a6b3-431c-a166-ed203ce8b361 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Received event network-vif-deleted-bfac9ac2-c1cc-4dc1-830c-44e5990fb8e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:40:01 compute-1 nova_compute[226101]: 2025-12-06 07:40:01.180 226109 DEBUG oslo_concurrency.processutils [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:01 compute-1 ceph-mon[81689]: pgmap v2522: 305 pgs: 305 active+clean; 645 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.7 MiB/s wr, 346 op/s
Dec 06 07:40:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:40:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:01.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:01 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:01.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:40:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/87181181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:01 compute-1 nova_compute[226101]: 2025-12-06 07:40:01.617 226109 DEBUG oslo_concurrency.processutils [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:01 compute-1 nova_compute[226101]: 2025-12-06 07:40:01.624 226109 DEBUG nova.compute.provider_tree [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:40:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:01.658 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:01.658 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:01.658 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:01 compute-1 nova_compute[226101]: 2025-12-06 07:40:01.674 226109 DEBUG nova.scheduler.client.report [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:40:01 compute-1 nova_compute[226101]: 2025-12-06 07:40:01.807 226109 DEBUG oslo_concurrency.lockutils [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:01 compute-1 nova_compute[226101]: 2025-12-06 07:40:01.892 226109 INFO nova.scheduler.client.report [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Deleted allocations for instance 3d4b97f9-b8c3-4764-87f9-4006968fecd4
Dec 06 07:40:02 compute-1 nova_compute[226101]: 2025-12-06 07:40:02.234 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:02 compute-1 nova_compute[226101]: 2025-12-06 07:40:02.619 226109 DEBUG oslo_concurrency.lockutils [None req-18738929-72de-4026-846a-47ddddb83aa6 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "3d4b97f9-b8c3-4764-87f9-4006968fecd4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/87181181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:03 compute-1 nova_compute[226101]: 2025-12-06 07:40:03.081 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:40:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:03.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:03 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:03.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:03 compute-1 ceph-mon[81689]: pgmap v2523: 305 pgs: 305 active+clean; 612 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.4 MiB/s wr, 348 op/s
Dec 06 07:40:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:40:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:05.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:40:05 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:05.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:40:05 compute-1 ceph-mon[81689]: pgmap v2524: 305 pgs: 305 active+clean; 612 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.2 MiB/s wr, 316 op/s
Dec 06 07:40:07 compute-1 nova_compute[226101]: 2025-12-06 07:40:07.238 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:07 compute-1 ceph-mon[81689]: pgmap v2525: 305 pgs: 305 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 304 KiB/s wr, 273 op/s
Dec 06 07:40:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:07.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:40:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:40:07 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:07.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:40:08 compute-1 nova_compute[226101]: 2025-12-06 07:40:08.116 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2376389813' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:40:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2376389813' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:40:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:40:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:40:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:40:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:09 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:10 compute-1 ceph-mon[81689]: pgmap v2526: 305 pgs: 305 active+clean; 556 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 21 KiB/s wr, 258 op/s
Dec 06 07:40:11 compute-1 ceph-mon[81689]: pgmap v2527: 305 pgs: 305 active+clean; 492 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 547 KiB/s wr, 258 op/s
Dec 06 07:40:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:40:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:11.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:11 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:11.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:12 compute-1 nova_compute[226101]: 2025-12-06 07:40:12.239 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4164932125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:12 compute-1 nova_compute[226101]: 2025-12-06 07:40:12.920 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006797.9182522, 3d4b97f9-b8c3-4764-87f9-4006968fecd4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:40:12 compute-1 nova_compute[226101]: 2025-12-06 07:40:12.921 226109 INFO nova.compute.manager [-] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] VM Stopped (Lifecycle Event)
Dec 06 07:40:13 compute-1 nova_compute[226101]: 2025-12-06 07:40:13.118 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:13 compute-1 nova_compute[226101]: 2025-12-06 07:40:13.173 226109 DEBUG nova.compute.manager [None req-c325db99-32ac-483c-b959-328456d645ee - - - - - -] [instance: 3d4b97f9-b8c3-4764-87f9-4006968fecd4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:40:13 compute-1 nova_compute[226101]: 2025-12-06 07:40:13.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:13.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:13.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:13 compute-1 ceph-mon[81689]: pgmap v2528: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 893 KiB/s rd, 1.2 MiB/s wr, 115 op/s
Dec 06 07:40:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/384442653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:15 compute-1 nova_compute[226101]: 2025-12-06 07:40:15.293 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:15 compute-1 nova_compute[226101]: 2025-12-06 07:40:15.293 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:15 compute-1 nova_compute[226101]: 2025-12-06 07:40:15.294 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:15 compute-1 nova_compute[226101]: 2025-12-06 07:40:15.294 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:40:15 compute-1 nova_compute[226101]: 2025-12-06 07:40:15.294 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:15.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:15.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:15 compute-1 ovn_controller[130279]: 2025-12-06T07:40:15Z|00536|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec 06 07:40:15 compute-1 nova_compute[226101]: 2025-12-06 07:40:15.693 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:40:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1248014117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:15 compute-1 nova_compute[226101]: 2025-12-06 07:40:15.824 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:15 compute-1 ovn_controller[130279]: 2025-12-06T07:40:15Z|00537|binding|INFO|Releasing lport e4d89947-8fab-4c13-b2db-4eed875f77a0 from this chassis (sb_readonly=0)
Dec 06 07:40:15 compute-1 nova_compute[226101]: 2025-12-06 07:40:15.834 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:16 compute-1 ceph-mon[81689]: pgmap v2529: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 136 KiB/s rd, 1.2 MiB/s wr, 82 op/s
Dec 06 07:40:17 compute-1 nova_compute[226101]: 2025-12-06 07:40:17.241 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1248014117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:17 compute-1 nova_compute[226101]: 2025-12-06 07:40:17.546 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:40:17 compute-1 nova_compute[226101]: 2025-12-06 07:40:17.546 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:40:17 compute-1 nova_compute[226101]: 2025-12-06 07:40:17.549 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:40:17 compute-1 nova_compute[226101]: 2025-12-06 07:40:17.549 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:40:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:40:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:17.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:40:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:17.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:17 compute-1 nova_compute[226101]: 2025-12-06 07:40:17.703 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:40:17 compute-1 nova_compute[226101]: 2025-12-06 07:40:17.704 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3950MB free_disk=20.816905975341797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:40:17 compute-1 nova_compute[226101]: 2025-12-06 07:40:17.704 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:17 compute-1 nova_compute[226101]: 2025-12-06 07:40:17.705 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:18 compute-1 nova_compute[226101]: 2025-12-06 07:40:18.119 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:18 compute-1 ceph-mon[81689]: pgmap v2530: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 118 op/s
Dec 06 07:40:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3362342613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:18 compute-1 nova_compute[226101]: 2025-12-06 07:40:18.579 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 829161c5-19b5-459e-88a3-58512aaa5fc7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:40:18 compute-1 nova_compute[226101]: 2025-12-06 07:40:18.580 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:40:18 compute-1 nova_compute[226101]: 2025-12-06 07:40:18.580 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:40:18 compute-1 nova_compute[226101]: 2025-12-06 07:40:18.580 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:40:18 compute-1 nova_compute[226101]: 2025-12-06 07:40:18.843 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:19.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:19.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:40:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3096176524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:19 compute-1 nova_compute[226101]: 2025-12-06 07:40:19.721 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.878s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:19 compute-1 nova_compute[226101]: 2025-12-06 07:40:19.728 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:40:19 compute-1 nova_compute[226101]: 2025-12-06 07:40:19.762 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:40:20 compute-1 nova_compute[226101]: 2025-12-06 07:40:20.143 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:40:20 compute-1 nova_compute[226101]: 2025-12-06 07:40:20.144 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:21 compute-1 nova_compute[226101]: 2025-12-06 07:40:21.144 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:21 compute-1 nova_compute[226101]: 2025-12-06 07:40:21.145 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:40:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:21.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:21.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:21 compute-1 nova_compute[226101]: 2025-12-06 07:40:21.866 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:40:21 compute-1 nova_compute[226101]: 2025-12-06 07:40:21.867 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:40:21 compute-1 nova_compute[226101]: 2025-12-06 07:40:21.867 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:40:21 compute-1 ceph-mon[81689]: pgmap v2531: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec 06 07:40:22 compute-1 nova_compute[226101]: 2025-12-06 07:40:22.280 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3750253994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3096176524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:22 compute-1 ceph-mon[81689]: pgmap v2532: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Dec 06 07:40:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/666352619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:23 compute-1 nova_compute[226101]: 2025-12-06 07:40:23.122 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:40:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3706009052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:40:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:40:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3706009052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:40:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:40:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:23.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:40:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:40:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:23.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:40:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3163851333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:23 compute-1 ceph-mon[81689]: pgmap v2533: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 1.6 MiB/s wr, 64 op/s
Dec 06 07:40:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3706009052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:40:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3706009052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:40:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.426 226109 DEBUG oslo_concurrency.lockutils [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.427 226109 DEBUG oslo_concurrency.lockutils [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.427 226109 DEBUG oslo_concurrency.lockutils [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.427 226109 DEBUG oslo_concurrency.lockutils [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.428 226109 DEBUG oslo_concurrency.lockutils [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.429 226109 INFO nova.compute.manager [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Terminating instance
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.430 226109 DEBUG nova.compute.manager [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:40:24 compute-1 kernel: tap588bab74-7c (unregistering): left promiscuous mode
Dec 06 07:40:24 compute-1 NetworkManager[49031]: <info>  [1765006824.4945] device (tap588bab74-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:40:24 compute-1 ovn_controller[130279]: 2025-12-06T07:40:24Z|00538|binding|INFO|Releasing lport 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 from this chassis (sb_readonly=0)
Dec 06 07:40:24 compute-1 ovn_controller[130279]: 2025-12-06T07:40:24Z|00539|binding|INFO|Setting lport 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 down in Southbound
Dec 06 07:40:24 compute-1 ovn_controller[130279]: 2025-12-06T07:40:24Z|00540|binding|INFO|Removing iface tap588bab74-7c ovn-installed in OVS
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.541 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.557 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:24 compute-1 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Dec 06 07:40:24 compute-1 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007d.scope: Consumed 29.388s CPU time.
Dec 06 07:40:24 compute-1 systemd-machined[190302]: Machine qemu-59-instance-0000007d terminated.
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.650 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.657 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.671 226109 INFO nova.virt.libvirt.driver [-] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Instance destroyed successfully.
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.672 226109 DEBUG nova.objects.instance [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'resources' on Instance uuid 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.687 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:88:de 10.100.0.12'], port_security=['fa:16:3e:7f:88:de 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0c3768cf-031f-4b14-a8b4-9ff73a9cfa72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=588bab74-7ca6-4e9a-a06b-0b7fbcf29c63) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.688 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.689 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb49e8a-b939-4c79-851c-62c634be0272
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.705 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a9cd04-fa4a-4ce6-b76c-42043dbc3bc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.733 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7d9905-4b26-4e50-87b0-a06773668ee0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.735 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[a815dc7a-b818-4e24-ade8-752f4817ec60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.768 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ad7dc6-2ea2-475b-929e-9f364af7a4b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.783 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[114eedf0-07ea-49e0-8d57-0e9b6b9e8598]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb49e8a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:bf:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672206, 'reachable_time': 34158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278661, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.796 226109 DEBUG nova.virt.libvirt.vif [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:34:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=125,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:34:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-rhw08b3b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:34:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=0c3768cf-031f-4b14-a8b4-9ff73a9cfa72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": 
"588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.797 226109 DEBUG nova.network.os_vif_util [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.799 226109 DEBUG nova.network.os_vif_util [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:88:de,bridge_name='br-int',has_traffic_filtering=True,id=588bab74-7ca6-4e9a-a06b-0b7fbcf29c63,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap588bab74-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.800 226109 DEBUG os_vif [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:88:de,bridge_name='br-int',has_traffic_filtering=True,id=588bab74-7ca6-4e9a-a06b-0b7fbcf29c63,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap588bab74-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.803 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.804 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap588bab74-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.804 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ee728a-7275-4ccb-bb50-cc4153aefa1e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672218, 'tstamp': 672218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278662, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5bb49e8a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672221, 'tstamp': 672221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278662, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.807 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.807 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.809 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.812 226109 INFO os_vif [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:88:de,bridge_name='br-int',has_traffic_filtering=True,id=588bab74-7ca6-4e9a-a06b-0b7fbcf29c63,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap588bab74-7c')
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.813 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb49e8a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.814 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.815 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb49e8a-b0, col_values=(('external_ids', {'iface-id': 'e4d89947-8fab-4c13-b2db-4eed875f77a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:24.815 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.831 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.859 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updating instance_info_cache with network_info: [{"id": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "address": "fa:16:3e:7f:88:de", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap588bab74-7c", "ovs_interfaceid": "588bab74-7ca6-4e9a-a06b-0b7fbcf29c63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.993 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.994 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.994 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.995 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.995 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.996 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:24 compute-1 nova_compute[226101]: 2025-12-06 07:40:24.996 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.308 226109 DEBUG nova.compute.manager [req-08cae6d8-75bf-46a1-ae0f-f4f66bc3a343 req-2bf0053f-4e59-4226-aa3d-ce8bdb5a7f88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Received event network-vif-unplugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.309 226109 DEBUG oslo_concurrency.lockutils [req-08cae6d8-75bf-46a1-ae0f-f4f66bc3a343 req-2bf0053f-4e59-4226-aa3d-ce8bdb5a7f88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.309 226109 DEBUG oslo_concurrency.lockutils [req-08cae6d8-75bf-46a1-ae0f-f4f66bc3a343 req-2bf0053f-4e59-4226-aa3d-ce8bdb5a7f88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.309 226109 DEBUG oslo_concurrency.lockutils [req-08cae6d8-75bf-46a1-ae0f-f4f66bc3a343 req-2bf0053f-4e59-4226-aa3d-ce8bdb5a7f88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.310 226109 DEBUG nova.compute.manager [req-08cae6d8-75bf-46a1-ae0f-f4f66bc3a343 req-2bf0053f-4e59-4226-aa3d-ce8bdb5a7f88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] No waiting events found dispatching network-vif-unplugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.310 226109 DEBUG nova.compute.manager [req-08cae6d8-75bf-46a1-ae0f-f4f66bc3a343 req-2bf0053f-4e59-4226-aa3d-ce8bdb5a7f88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Received event network-vif-unplugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:25.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:40:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:25.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.677 226109 INFO nova.virt.libvirt.driver [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Deleting instance files /var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_del
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.677 226109 INFO nova.virt.libvirt.driver [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Deletion of /var/lib/nova/instances/0c3768cf-031f-4b14-a8b4-9ff73a9cfa72_del complete
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.807 226109 INFO nova.compute.manager [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Took 1.38 seconds to destroy the instance on the hypervisor.
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.808 226109 DEBUG oslo.service.loopingcall [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.808 226109 DEBUG nova.compute.manager [-] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:40:25 compute-1 nova_compute[226101]: 2025-12-06 07:40:25.809 226109 DEBUG nova.network.neutron [-] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:40:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:26.270 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:40:26 compute-1 nova_compute[226101]: 2025-12-06 07:40:26.271 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:26.272 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:40:26 compute-1 ceph-mon[81689]: pgmap v2534: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 238 KiB/s rd, 979 KiB/s wr, 46 op/s
Dec 06 07:40:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2930511714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:27 compute-1 ceph-mon[81689]: pgmap v2535: 305 pgs: 305 active+clean; 500 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 980 KiB/s wr, 54 op/s
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.282 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.461 226109 DEBUG nova.compute.manager [req-43c40c22-c21d-4a43-8ca9-0f60623b7313 req-62d6f703-ff34-443f-9718-05e1d022a63c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Received event network-vif-plugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.462 226109 DEBUG oslo_concurrency.lockutils [req-43c40c22-c21d-4a43-8ca9-0f60623b7313 req-62d6f703-ff34-443f-9718-05e1d022a63c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.462 226109 DEBUG oslo_concurrency.lockutils [req-43c40c22-c21d-4a43-8ca9-0f60623b7313 req-62d6f703-ff34-443f-9718-05e1d022a63c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.463 226109 DEBUG oslo_concurrency.lockutils [req-43c40c22-c21d-4a43-8ca9-0f60623b7313 req-62d6f703-ff34-443f-9718-05e1d022a63c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.463 226109 DEBUG nova.compute.manager [req-43c40c22-c21d-4a43-8ca9-0f60623b7313 req-62d6f703-ff34-443f-9718-05e1d022a63c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] No waiting events found dispatching network-vif-plugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.463 226109 WARNING nova.compute.manager [req-43c40c22-c21d-4a43-8ca9-0f60623b7313 req-62d6f703-ff34-443f-9718-05e1d022a63c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Received unexpected event network-vif-plugged-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 for instance with vm_state active and task_state deleting.
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.593 226109 DEBUG nova.network.neutron [-] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:40:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:40:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:27.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:40:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:27.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.623 226109 INFO nova.compute.manager [-] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Took 1.81 seconds to deallocate network for instance.
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.716 226109 DEBUG oslo_concurrency.lockutils [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.717 226109 DEBUG oslo_concurrency.lockutils [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.751 226109 DEBUG nova.compute.manager [req-7b3caf72-c434-47c4-aa12-a51fd7b655ef req-e7053bfc-e865-4fd5-a399-f2f4d4f97c3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Received event network-vif-deleted-588bab74-7ca6-4e9a-a06b-0b7fbcf29c63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:40:27 compute-1 nova_compute[226101]: 2025-12-06 07:40:27.820 226109 DEBUG oslo_concurrency.processutils [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:28 compute-1 podman[278703]: 2025-12-06 07:40:28.092406032 +0000 UTC m=+0.064177612 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 06 07:40:28 compute-1 podman[278702]: 2025-12-06 07:40:28.117776091 +0000 UTC m=+0.090106806 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:40:28 compute-1 podman[278704]: 2025-12-06 07:40:28.117968696 +0000 UTC m=+0.085599417 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:40:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:40:28 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2321178459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:28 compute-1 nova_compute[226101]: 2025-12-06 07:40:28.243 226109 DEBUG oslo_concurrency.processutils [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:28 compute-1 nova_compute[226101]: 2025-12-06 07:40:28.248 226109 DEBUG nova.compute.provider_tree [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:40:28 compute-1 nova_compute[226101]: 2025-12-06 07:40:28.271 226109 DEBUG nova.scheduler.client.report [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:40:28 compute-1 nova_compute[226101]: 2025-12-06 07:40:28.305 226109 DEBUG oslo_concurrency.lockutils [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2321178459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:28 compute-1 nova_compute[226101]: 2025-12-06 07:40:28.384 226109 INFO nova.scheduler.client.report [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Deleted allocations for instance 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72
Dec 06 07:40:28 compute-1 nova_compute[226101]: 2025-12-06 07:40:28.485 226109 DEBUG oslo_concurrency.lockutils [None req-53c89d8f-f102-4201-ba78-83b8b195a618 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "0c3768cf-031f-4b14-a8b4-9ff73a9cfa72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:28 compute-1 nova_compute[226101]: 2025-12-06 07:40:28.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:29.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:40:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:29.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:40:29 compute-1 ceph-mon[81689]: pgmap v2536: 305 pgs: 305 active+clean; 491 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 615 KiB/s wr, 39 op/s
Dec 06 07:40:29 compute-1 nova_compute[226101]: 2025-12-06 07:40:29.807 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.092 226109 DEBUG oslo_concurrency.lockutils [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "829161c5-19b5-459e-88a3-58512aaa5fc7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.093 226109 DEBUG oslo_concurrency.lockutils [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.094 226109 DEBUG oslo_concurrency.lockutils [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.094 226109 DEBUG oslo_concurrency.lockutils [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.095 226109 DEBUG oslo_concurrency.lockutils [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.096 226109 INFO nova.compute.manager [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Terminating instance
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.098 226109 DEBUG nova.compute.manager [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:40:30 compute-1 kernel: tap1f39ead9-93 (unregistering): left promiscuous mode
Dec 06 07:40:30 compute-1 NetworkManager[49031]: <info>  [1765006830.4903] device (tap1f39ead9-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:40:30 compute-1 ovn_controller[130279]: 2025-12-06T07:40:30Z|00541|binding|INFO|Releasing lport 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 from this chassis (sb_readonly=0)
Dec 06 07:40:30 compute-1 ovn_controller[130279]: 2025-12-06T07:40:30Z|00542|binding|INFO|Setting lport 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 down in Southbound
Dec 06 07:40:30 compute-1 ovn_controller[130279]: 2025-12-06T07:40:30Z|00543|binding|INFO|Removing iface tap1f39ead9-93 ovn-installed in OVS
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.499 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.500 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:30.512 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:39:62 10.100.0.4'], port_security=['fa:16:3e:d3:39:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '829161c5-19b5-459e-88a3-58512aaa5fc7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb49e8a-b939-4c79-851c-62c634be0272', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '833f4cf9f5a64b2ab94c3bf330353a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44fc0f8f-27e8-483c-be93-2bb490e3ff3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56d5d28a-0d18-4549-b1d7-8420194c6348, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=1f39ead9-93ea-42d4-bb3c-64034b85c0a4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:40:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:30.513 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 1f39ead9-93ea-42d4-bb3c-64034b85c0a4 in datapath 5bb49e8a-b939-4c79-851c-62c634be0272 unbound from our chassis
Dec 06 07:40:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:30.514 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bb49e8a-b939-4c79-851c-62c634be0272, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:40:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:30.515 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e28bc3ee-6038-4a5d-95c2-2b08040ea056]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:30.517 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272 namespace which is not needed anymore
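
That southbound update is what the agent's ovsdbapp row event matched a few lines up. A rough sketch of such an event class, assuming ovsdbapp's RowEvent/match_fn interface; the class name and callback are made up here, and neutron's real PortBindingUpdatedEvent carries more logic:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundEvent(row_event.RowEvent):
        """Fires when a Port_Binding row flips up=[True] -> up=[False]."""

        def __init__(self, callback):
            self.callback = callback
            # Same shape as the matched event above: UPDATEs on Port_Binding.
            super().__init__(('update',), 'Port_Binding', None)

        def match_fn(self, event, row, old=None):
            # 'up' is a list column; [True] -> [False] marks the unbind.
            old_up = getattr(old, 'up', None)
            return bool(old_up and old_up[0]) and not (row.up and row.up[0])

        def run(self, event, row, old):
            self.callback(str(row.logical_port))
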
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.517 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:30 compute-1 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Dec 06 07:40:30 compute-1 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007a.scope: Consumed 30.222s CPU time.
Dec 06 07:40:30 compute-1 systemd-machined[190302]: Machine qemu-58-instance-0000007a terminated.
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.720 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.724 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.736 226109 INFO nova.virt.libvirt.driver [-] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Instance destroyed successfully.
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.736 226109 DEBUG nova.objects.instance [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lazy-loading 'resources' on Instance uuid 829161c5-19b5-459e-88a3-58512aaa5fc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.750 226109 DEBUG nova.virt.libvirt.vif [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=122,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvvXUJX6GT0XQikpBKT/pWa9/7+F48wLkdnGcJYsqojmErT+oUc0gEHpXW8ulxQp5/Qun0IejqspzMhFiBiIMspmuXC7WiBfiNlH7z/XH9UjP9DXKhc6lZtmV9q0VvTnQ==',key_name='tempest-keypair-1449411209',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:34:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='833f4cf9f5a64b2ab94c3bf330353a31',ramdisk_id='',reservation_id='r-ayw3ijlp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-690984293',owner_user_name='tempest-AttachVolumeMultiAttachTest-690984293-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:34:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='605b5481e0c944048e6a67046c30d693',uuid=829161c5-19b5-459e-88a3-58512aaa5fc7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.750 226109 DEBUG nova.network.os_vif_util [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converting VIF {"id": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "address": "fa:16:3e:d3:39:62", "network": {"id": "5bb49e8a-b939-4c79-851c-62c634be0272", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-50482264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "833f4cf9f5a64b2ab94c3bf330353a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f39ead9-93", "ovs_interfaceid": "1f39ead9-93ea-42d4-bb3c-64034b85c0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.751 226109 DEBUG nova.network.os_vif_util [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:39:62,bridge_name='br-int',has_traffic_filtering=True,id=1f39ead9-93ea-42d4-bb3c-64034b85c0a4,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f39ead9-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.751 226109 DEBUG os_vif [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:39:62,bridge_name='br-int',has_traffic_filtering=True,id=1f39ead9-93ea-42d4-bb3c-64034b85c0a4,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f39ead9-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.753 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.753 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f39ead9-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.754 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.755 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:30 compute-1 nova_compute[226101]: 2025-12-06 07:40:30.757 226109 INFO os_vif [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:39:62,bridge_name='br-int',has_traffic_filtering=True,id=1f39ead9-93ea-42d4-bb3c-64034b85c0a4,network=Network(5bb49e8a-b939-4c79-851c-62c634be0272),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f39ead9-93')
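
os-vif's role in this teardown is narrow: nova converts its VIF dict into the versioned VIFOpenVSwitch object (the Converted object line above) and hands it to the ovs plugin, which issues the DelPortCommand transaction a few lines up. A sketch of the same unplug through os-vif's public API, filling in only the fields visible in the log; real nova passes a much richer InstanceInfo:

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # loads the 'ovs' plugin named in bound_drivers above
    vif = vif_obj.VIFOpenVSwitch(
        id='1f39ead9-93ea-42d4-bb3c-64034b85c0a4',
        address='fa:16:3e:d3:39:62',
        bridge_name='br-int',
        vif_name='tap1f39ead9-93')
    inst = instance_info.InstanceInfo(
        uuid='829161c5-19b5-459e-88a3-58512aaa5fc7',
        name='instance-0000007a')
    os_vif.unplug(vif, inst)  # ends in DelPortCommand(port=tap1f39ead9-93, ...)
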
Dec 06 07:40:30 compute-1 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[273995]: [NOTICE]   (274002) : haproxy version is 2.8.14-c23fe91
Dec 06 07:40:30 compute-1 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[273995]: [NOTICE]   (274002) : path to executable is /usr/sbin/haproxy
Dec 06 07:40:30 compute-1 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[273995]: [WARNING]  (274002) : Exiting Master process...
Dec 06 07:40:30 compute-1 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[273995]: [WARNING]  (274002) : Exiting Master process...
Dec 06 07:40:30 compute-1 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[273995]: [ALERT]    (274002) : Current worker (274004) exited with code 143 (Terminated)
Dec 06 07:40:30 compute-1 neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272[273995]: [WARNING]  (274002) : All workers exited. Exiting... (0)
Dec 06 07:40:30 compute-1 systemd[1]: libpod-0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8.scope: Deactivated successfully.
Dec 06 07:40:31 compute-1 podman[278790]: 2025-12-06 07:40:31.001546881 +0000 UTC m=+0.397826437 container died 0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:40:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:31.273 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:31 compute-1 nova_compute[226101]: 2025-12-06 07:40:31.448 226109 DEBUG nova.compute.manager [req-a673a376-dd32-45a0-8442-b1307bbcd499 req-2d795025-fa75-41bf-a469-8dd5482ab999 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Received event network-vif-unplugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:40:31 compute-1 nova_compute[226101]: 2025-12-06 07:40:31.449 226109 DEBUG oslo_concurrency.lockutils [req-a673a376-dd32-45a0-8442-b1307bbcd499 req-2d795025-fa75-41bf-a469-8dd5482ab999 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:31 compute-1 nova_compute[226101]: 2025-12-06 07:40:31.450 226109 DEBUG oslo_concurrency.lockutils [req-a673a376-dd32-45a0-8442-b1307bbcd499 req-2d795025-fa75-41bf-a469-8dd5482ab999 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:31 compute-1 nova_compute[226101]: 2025-12-06 07:40:31.450 226109 DEBUG oslo_concurrency.lockutils [req-a673a376-dd32-45a0-8442-b1307bbcd499 req-2d795025-fa75-41bf-a469-8dd5482ab999 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:31 compute-1 nova_compute[226101]: 2025-12-06 07:40:31.451 226109 DEBUG nova.compute.manager [req-a673a376-dd32-45a0-8442-b1307bbcd499 req-2d795025-fa75-41bf-a469-8dd5482ab999 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] No waiting events found dispatching network-vif-unplugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:40:31 compute-1 nova_compute[226101]: 2025-12-06 07:40:31.451 226109 DEBUG nova.compute.manager [req-a673a376-dd32-45a0-8442-b1307bbcd499 req-2d795025-fa75-41bf-a469-8dd5482ab999 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Received event network-vif-unplugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
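
The Acquiring/acquired/released triplet above is oslo_concurrency guarding nova's per-instance event map while the external event is popped. The same pattern reduced to a sketch (the dict and function names here are illustrative):

    from oslo_concurrency import lockutils

    _events = {}

    @lockutils.synchronized('829161c5-19b5-459e-88a3-58512aaa5fc7-events')
    def pop_instance_event(name):
        # The lock serializes access to the shared event dict; that is all
        # the acquired/released pair above is protecting.
        return _events.pop(name, None)
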
Dec 06 07:40:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:40:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:31.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:40:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:31.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:31 compute-1 ceph-mon[81689]: pgmap v2537: 305 pgs: 305 active+clean; 479 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.6 MiB/s wr, 64 op/s
Dec 06 07:40:31 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8-userdata-shm.mount: Deactivated successfully.
Dec 06 07:40:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-9794ce3684fa58419244269e9b9d799fe215b012cdf2f1929f868edae3c47df5-merged.mount: Deactivated successfully.
Dec 06 07:40:31 compute-1 podman[278790]: 2025-12-06 07:40:31.932037606 +0000 UTC m=+1.328317162 container cleanup 0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:40:31 compute-1 systemd[1]: libpod-conmon-0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8.scope: Deactivated successfully.
Dec 06 07:40:32 compute-1 podman[278849]: 2025-12-06 07:40:32.123043491 +0000 UTC m=+0.169391156 container remove 0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:40:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:32.129 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4a761e-9908-457d-b1df-28c45e782120]: (4, ('Sat Dec  6 07:40:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272 (0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8)\n0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8\nSat Dec  6 07:40:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272 (0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8)\n0a57bb3ca9d8d045d2cc64afa4341363c42cda6ebdf889b8737e4d49089247f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
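
The privsep reply above echoes the wrapper that stops and deletes the per-network haproxy container whose exit was logged earlier (worker exit code 143 is SIGTERM). Roughly the same effect from plain podman, assuming the container name taken from the log:

    import subprocess

    name = "neutron-haproxy-ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272"
    subprocess.run(["podman", "stop", name], check=True)  # SIGTERM -> exit 143
    subprocess.run(["podman", "rm", name], check=True)    # "container remove" event
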
Dec 06 07:40:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:32.130 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[29e8b28d-10d0-4804-8dbf-28f0e8b80de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:32.131 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb49e8a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:32 compute-1 nova_compute[226101]: 2025-12-06 07:40:32.132 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:32 compute-1 kernel: tap5bb49e8a-b0: left promiscuous mode
Dec 06 07:40:32 compute-1 nova_compute[226101]: 2025-12-06 07:40:32.145 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:32.147 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[99951508-e590-4676-a506-f721d0eebd7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:32.164 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[859d56a5-f4bd-40aa-925e-f09221d58070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:32.166 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8841625f-06f2-4fde-aa1b-b9f46b2eaba2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:32.182 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[367e561b-db31-415e-be6a-75c08dc8e728]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672198, 'reachable_time': 43761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278865, 'error': None, 'target': 'ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:32 compute-1 systemd[1]: run-netns-ovnmeta\x2d5bb49e8a\x2db939\x2d4c79\x2d851c\x2d62c634be0272.mount: Deactivated successfully.
Dec 06 07:40:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:32.186 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:40:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:32.186 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[f175b0a7-35aa-42ce-9e0e-ac081bb337df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
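
remove_netns in neutron's privileged ip_lib is a thin wrapper over pyroute2, which also explains the run-netns .mount unit deactivating above. A minimal equivalent, assuming the pyroute2 package that neutron itself depends on:

    from pyroute2 import netns

    ns = "ovnmeta-5bb49e8a-b939-4c79-851c-62c634be0272"
    if ns in netns.listnetns():
        netns.remove(ns)  # unpins /run/netns/<ns>, hence the .mount deactivation
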
Dec 06 07:40:32 compute-1 nova_compute[226101]: 2025-12-06 07:40:32.283 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/936565583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3844405613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:32 compute-1 nova_compute[226101]: 2025-12-06 07:40:32.726 226109 INFO nova.virt.libvirt.driver [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Deleting instance files /var/lib/nova/instances/829161c5-19b5-459e-88a3-58512aaa5fc7_del
Dec 06 07:40:32 compute-1 nova_compute[226101]: 2025-12-06 07:40:32.727 226109 INFO nova.virt.libvirt.driver [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Deletion of /var/lib/nova/instances/829161c5-19b5-459e-88a3-58512aaa5fc7_del complete
Dec 06 07:40:32 compute-1 nova_compute[226101]: 2025-12-06 07:40:32.797 226109 INFO nova.compute.manager [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Took 2.70 seconds to destroy the instance on the hypervisor.
Dec 06 07:40:32 compute-1 nova_compute[226101]: 2025-12-06 07:40:32.797 226109 DEBUG oslo.service.loopingcall [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:40:32 compute-1 nova_compute[226101]: 2025-12-06 07:40:32.798 226109 DEBUG nova.compute.manager [-] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:40:32 compute-1 nova_compute[226101]: 2025-12-06 07:40:32.798 226109 DEBUG nova.network.neutron [-] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
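
The "Waiting for function ... _deallocate_network_with_retries" line is oslo.service's looping-call machinery; nova's actual loop is the backoff variant that retries deallocation on failure. A small sketch of the pattern using the fixed-interval variant, with made-up retry logic:

    from oslo_service import loopingcall

    attempts = {"n": 0}

    def _deallocate():
        attempts["n"] += 1
        if attempts["n"] >= 3:                       # pretend the third try works
            raise loopingcall.LoopingCallDone(True)  # ends the wait() below
        # returning normally means "call me again after the interval"

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate)
    result = timer.start(interval=1).wait()  # blocks like the log line above
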
Dec 06 07:40:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:40:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:33.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:33 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:33.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:33 compute-1 ceph-mon[81689]: pgmap v2538: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 06 07:40:33 compute-1 nova_compute[226101]: 2025-12-06 07:40:33.679 226109 DEBUG nova.compute.manager [req-5e29c701-05de-48ca-b53c-05f5e16be97e req-720969ee-36b3-45bc-8b27-bf84783463cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Received event network-vif-plugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:40:33 compute-1 nova_compute[226101]: 2025-12-06 07:40:33.679 226109 DEBUG oslo_concurrency.lockutils [req-5e29c701-05de-48ca-b53c-05f5e16be97e req-720969ee-36b3-45bc-8b27-bf84783463cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:33 compute-1 nova_compute[226101]: 2025-12-06 07:40:33.679 226109 DEBUG oslo_concurrency.lockutils [req-5e29c701-05de-48ca-b53c-05f5e16be97e req-720969ee-36b3-45bc-8b27-bf84783463cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:33 compute-1 nova_compute[226101]: 2025-12-06 07:40:33.680 226109 DEBUG oslo_concurrency.lockutils [req-5e29c701-05de-48ca-b53c-05f5e16be97e req-720969ee-36b3-45bc-8b27-bf84783463cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:33 compute-1 nova_compute[226101]: 2025-12-06 07:40:33.680 226109 DEBUG nova.compute.manager [req-5e29c701-05de-48ca-b53c-05f5e16be97e req-720969ee-36b3-45bc-8b27-bf84783463cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] No waiting events found dispatching network-vif-plugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:40:33 compute-1 nova_compute[226101]: 2025-12-06 07:40:33.680 226109 WARNING nova.compute.manager [req-5e29c701-05de-48ca-b53c-05f5e16be97e req-720969ee-36b3-45bc-8b27-bf84783463cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Received unexpected event network-vif-plugged-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 for instance with vm_state active and task_state deleting.
Dec 06 07:40:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:34 compute-1 nova_compute[226101]: 2025-12-06 07:40:34.407 226109 DEBUG nova.network.neutron [-] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:40:34 compute-1 nova_compute[226101]: 2025-12-06 07:40:34.440 226109 INFO nova.compute.manager [-] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Took 1.64 seconds to deallocate network for instance.
Dec 06 07:40:34 compute-1 nova_compute[226101]: 2025-12-06 07:40:34.531 226109 DEBUG oslo_concurrency.lockutils [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:34 compute-1 nova_compute[226101]: 2025-12-06 07:40:34.531 226109 DEBUG oslo_concurrency.lockutils [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:34 compute-1 nova_compute[226101]: 2025-12-06 07:40:34.627 226109 DEBUG oslo_concurrency.processutils [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:40:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3102066638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:35 compute-1 nova_compute[226101]: 2025-12-06 07:40:35.052 226109 DEBUG oslo_concurrency.processutils [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
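
The Running cmd / "returned: 0" pair above is oslo_concurrency.processutils shelling out to the ceph CLI so the resource tracker can size its RBD-backed disk inventory. The same call reduced to its essentials:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)  # pool usage feeding the DISK_GB inventory below
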
Dec 06 07:40:35 compute-1 nova_compute[226101]: 2025-12-06 07:40:35.060 226109 DEBUG nova.compute.provider_tree [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:40:35 compute-1 nova_compute[226101]: 2025-12-06 07:40:35.086 226109 DEBUG nova.scheduler.client.report [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:40:35 compute-1 nova_compute[226101]: 2025-12-06 07:40:35.121 226109 DEBUG oslo_concurrency.lockutils [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:35 compute-1 ceph-mon[81689]: pgmap v2539: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec 06 07:40:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3102066638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:35 compute-1 nova_compute[226101]: 2025-12-06 07:40:35.160 226109 INFO nova.scheduler.client.report [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Deleted allocations for instance 829161c5-19b5-459e-88a3-58512aaa5fc7
Dec 06 07:40:35 compute-1 nova_compute[226101]: 2025-12-06 07:40:35.246 226109 DEBUG oslo_concurrency.lockutils [None req-ba3b6c87-e9d9-4194-9b19-502a4b6e424d 605b5481e0c944048e6a67046c30d693 833f4cf9f5a64b2ab94c3bf330353a31 - - default default] Lock "829161c5-19b5-459e-88a3-58512aaa5fc7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:40:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:35 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:35.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:35.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:35 compute-1 nova_compute[226101]: 2025-12-06 07:40:35.757 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:35 compute-1 nova_compute[226101]: 2025-12-06 07:40:35.885 226109 DEBUG nova.compute.manager [req-6a0cec4f-b4d5-4870-aef1-aeffd7931047 req-b077aeeb-75cb-48ad-ac94-4f6de3a9483c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Received event network-vif-deleted-1f39ead9-93ea-42d4-bb3c-64034b85c0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:40:37 compute-1 nova_compute[226101]: 2025-12-06 07:40:37.322 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:40:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:37.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:40:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:40:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:37 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:37.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:37 compute-1 ceph-mon[81689]: pgmap v2540: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Dec 06 07:40:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:40:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:39 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:39.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:39.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:39 compute-1 nova_compute[226101]: 2025-12-06 07:40:39.670 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006824.6691372, 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:40:39 compute-1 nova_compute[226101]: 2025-12-06 07:40:39.670 226109 INFO nova.compute.manager [-] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] VM Stopped (Lifecycle Event)
Dec 06 07:40:39 compute-1 nova_compute[226101]: 2025-12-06 07:40:39.705 226109 DEBUG nova.compute.manager [None req-ee0e1783-bb8a-437f-8acd-ce8e78431f9f - - - - - -] [instance: 0c3768cf-031f-4b14-a8b4-9ff73a9cfa72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:40:40 compute-1 ceph-mon[81689]: pgmap v2541: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 856 KiB/s rd, 1.8 MiB/s wr, 110 op/s
Dec 06 07:40:40 compute-1 nova_compute[226101]: 2025-12-06 07:40:40.761 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:41.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:41.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:42 compute-1 ceph-mon[81689]: pgmap v2542: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Dec 06 07:40:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2878607099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:42 compute-1 nova_compute[226101]: 2025-12-06 07:40:42.325 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:43 compute-1 ceph-mon[81689]: pgmap v2543: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 190 KiB/s wr, 116 op/s
Dec 06 07:40:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:43.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:40:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:43.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:40:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.478 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Acquiring lock "0f193a24-e835-4d92-96ff-f743a5abab3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.479 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.497 226109 DEBUG nova.compute.manager [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.510 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "d954d607-525c-4edf-ab9e-56658dd2525a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.510 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.622 226109 DEBUG nova.compute.manager [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.715 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.716 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.722 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.722 226109 INFO nova.compute.claims [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.739 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:44 compute-1 nova_compute[226101]: 2025-12-06 07:40:44.935 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:45 compute-1 ceph-mon[81689]: pgmap v2544: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 108 op/s
Dec 06 07:40:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:45.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:45.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.735 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006830.7340906, 829161c5-19b5-459e-88a3-58512aaa5fc7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.736 226109 INFO nova.compute.manager [-] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] VM Stopped (Lifecycle Event)
Dec 06 07:40:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:40:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1599764996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.761 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.826s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.766 226109 DEBUG nova.compute.manager [None req-3abdc4bd-8a35-41e0-993a-03f551d4b412 - - - - - -] [instance: 829161c5-19b5-459e-88a3-58512aaa5fc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.766 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.769 226109 DEBUG nova.compute.provider_tree [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.792 226109 DEBUG nova.scheduler.client.report [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
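
The inventory dump above is what the claim math runs on: placement treats usable capacity as (total - reserved) * allocation_ratio per resource class. A toy check with the same numbers:

    inv = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9}}

    def capacity(res):
        r = inv[res]
        return (r['total'] - r['reserved']) * r['allocation_ratio']

    print(capacity('VCPU'))       # 32.0 schedulable vCPUs
    print(capacity('MEMORY_MB'))  # 7168.0 MiB
    print(capacity('DISK_GB'))    # 17.1 GiB, so both small claims above fit
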
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.816 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.817 226109 DEBUG nova.compute.manager [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.819 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.826 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.827 226109 INFO nova.compute.claims [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.941 226109 DEBUG nova.compute.manager [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.942 226109 DEBUG nova.network.neutron [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:40:45 compute-1 nova_compute[226101]: 2025-12-06 07:40:45.994 226109 INFO nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.024 226109 DEBUG nova.compute.manager [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.070 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.185 226109 DEBUG nova.compute.manager [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.187 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.188 226109 INFO nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Creating image(s)
Dec 06 07:40:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1599764996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e317 e317: 3 total, 3 up, 3 in
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.543 226109 DEBUG nova.storage.rbd_utils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] rbd image 0f193a24-e835-4d92-96ff-f743a5abab3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.575 226109 DEBUG nova.storage.rbd_utils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] rbd image 0f193a24-e835-4d92-96ff-f743a5abab3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.604 226109 DEBUG nova.storage.rbd_utils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] rbd image 0f193a24-e835-4d92-96ff-f743a5abab3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.607 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.641 226109 DEBUG nova.policy [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '23cdebcd22d647eb808445dbd33d4f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28f1acf1ee4b40f695e30ee36e56b9d0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.693 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.694 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.695 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.695 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.720 226109 DEBUG nova.storage.rbd_utils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] rbd image 0f193a24-e835-4d92-96ff-f743a5abab3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.723 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0f193a24-e835-4d92-96ff-f743a5abab3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:40:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/831947400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.858 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.787s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.864 226109 DEBUG nova.compute.provider_tree [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:40:46 compute-1 nova_compute[226101]: 2025-12-06 07:40:46.945 226109 DEBUG nova.scheduler.client.report [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.016 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.017 226109 DEBUG nova.compute.manager [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.204 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0f193a24-e835-4d92-96ff-f743a5abab3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.287 226109 DEBUG nova.storage.rbd_utils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] resizing rbd image 0f193a24-e835-4d92-96ff-f743a5abab3c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:40:47 compute-1 ceph-mon[81689]: osdmap e317: 3 total, 3 up, 3 in
Dec 06 07:40:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/831947400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:47 compute-1 ceph-mon[81689]: pgmap v2546: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.5 KiB/s wr, 113 op/s
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.352 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.418 226109 DEBUG nova.compute.manager [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.419 226109 DEBUG nova.network.neutron [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.471 226109 DEBUG nova.objects.instance [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f193a24-e835-4d92-96ff-f743a5abab3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.560 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.561 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Ensure instance console log exists: /var/lib/nova/instances/0f193a24-e835-4d92-96ff-f743a5abab3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.561 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.561 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.562 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:40:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:47.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:40:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:47.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.743 226109 INFO nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.814 226109 DEBUG nova.compute.manager [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.965 226109 DEBUG nova.policy [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d8b62a3276f4a8b8349af67b82134c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eff1f6a1654b45079de20eddb830e76d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.979 226109 DEBUG nova.compute.manager [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.981 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:40:47 compute-1 nova_compute[226101]: 2025-12-06 07:40:47.981 226109 INFO nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Creating image(s)
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.006 226109 DEBUG nova.storage.rbd_utils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image d954d607-525c-4edf-ab9e-56658dd2525a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.032 226109 DEBUG nova.storage.rbd_utils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image d954d607-525c-4edf-ab9e-56658dd2525a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.061 226109 DEBUG nova.storage.rbd_utils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image d954d607-525c-4edf-ab9e-56658dd2525a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.065 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.132 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.133 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.134 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.134 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.166 226109 DEBUG nova.storage.rbd_utils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image d954d607-525c-4edf-ab9e-56658dd2525a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.170 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d954d607-525c-4edf-ab9e-56658dd2525a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1497706244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:40:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1497706244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:40:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e318 e318: 3 total, 3 up, 3 in
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.551 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d954d607-525c-4edf-ab9e-56658dd2525a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.643 226109 DEBUG nova.storage.rbd_utils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] resizing rbd image d954d607-525c-4edf-ab9e-56658dd2525a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.777 226109 DEBUG nova.objects.instance [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'migration_context' on Instance uuid d954d607-525c-4edf-ab9e-56658dd2525a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.870 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.870 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Ensure instance console log exists: /var/lib/nova/instances/d954d607-525c-4edf-ab9e-56658dd2525a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.871 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.871 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.871 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:48 compute-1 nova_compute[226101]: 2025-12-06 07:40:48.916 226109 DEBUG nova.network.neutron [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Successfully created port: 89bff377-03f4-4e26-9d25-934e6005d866 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:40:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e319 e319: 3 total, 3 up, 3 in
Dec 06 07:40:49 compute-1 ceph-mon[81689]: osdmap e318: 3 total, 3 up, 3 in
Dec 06 07:40:49 compute-1 ceph-mon[81689]: pgmap v2548: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 46 op/s
Dec 06 07:40:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:49.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:49.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:49 compute-1 nova_compute[226101]: 2025-12-06 07:40:49.851 226109 DEBUG nova.network.neutron [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Successfully created port: 1c911a62-4d45-43ff-bf14-309d464c7c81 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:40:50 compute-1 ceph-mon[81689]: osdmap e319: 3 total, 3 up, 3 in
Dec 06 07:40:50 compute-1 nova_compute[226101]: 2025-12-06 07:40:50.770 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:51 compute-1 ceph-mon[81689]: pgmap v2550: 305 pgs: 305 active+clean; 456 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 10 MiB/s wr, 210 op/s
Dec 06 07:40:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:40:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:51.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:40:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:51.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:52 compute-1 nova_compute[226101]: 2025-12-06 07:40:52.002 226109 DEBUG nova.network.neutron [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Successfully updated port: 89bff377-03f4-4e26-9d25-934e6005d866 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:40:52 compute-1 nova_compute[226101]: 2025-12-06 07:40:52.085 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Acquiring lock "refresh_cache-0f193a24-e835-4d92-96ff-f743a5abab3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:40:52 compute-1 nova_compute[226101]: 2025-12-06 07:40:52.085 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Acquired lock "refresh_cache-0f193a24-e835-4d92-96ff-f743a5abab3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:40:52 compute-1 nova_compute[226101]: 2025-12-06 07:40:52.085 226109 DEBUG nova.network.neutron [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:40:52 compute-1 nova_compute[226101]: 2025-12-06 07:40:52.191 226109 DEBUG nova.compute.manager [req-7330b9d9-2cbd-42fb-a8fc-137cd9ccd586 req-5efeae75-3f5c-4f8d-9a9b-9ee0339aa014 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Received event network-changed-89bff377-03f4-4e26-9d25-934e6005d866 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:40:52 compute-1 nova_compute[226101]: 2025-12-06 07:40:52.191 226109 DEBUG nova.compute.manager [req-7330b9d9-2cbd-42fb-a8fc-137cd9ccd586 req-5efeae75-3f5c-4f8d-9a9b-9ee0339aa014 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Refreshing instance network info cache due to event network-changed-89bff377-03f4-4e26-9d25-934e6005d866. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:40:52 compute-1 nova_compute[226101]: 2025-12-06 07:40:52.191 226109 DEBUG oslo_concurrency.lockutils [req-7330b9d9-2cbd-42fb-a8fc-137cd9ccd586 req-5efeae75-3f5c-4f8d-9a9b-9ee0339aa014 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0f193a24-e835-4d92-96ff-f743a5abab3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:40:52 compute-1 nova_compute[226101]: 2025-12-06 07:40:52.329 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:52 compute-1 nova_compute[226101]: 2025-12-06 07:40:52.947 226109 DEBUG nova.network.neutron [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:40:53 compute-1 nova_compute[226101]: 2025-12-06 07:40:53.091 226109 DEBUG nova.network.neutron [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Successfully updated port: 1c911a62-4d45-43ff-bf14-309d464c7c81 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:40:53 compute-1 sudo[279265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:53 compute-1 sudo[279265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:53 compute-1 sudo[279265]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:53 compute-1 nova_compute[226101]: 2025-12-06 07:40:53.214 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "refresh_cache-d954d607-525c-4edf-ab9e-56658dd2525a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:40:53 compute-1 nova_compute[226101]: 2025-12-06 07:40:53.215 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquired lock "refresh_cache-d954d607-525c-4edf-ab9e-56658dd2525a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:40:53 compute-1 sudo[279290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:40:53 compute-1 nova_compute[226101]: 2025-12-06 07:40:53.215 226109 DEBUG nova.network.neutron [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:40:53 compute-1 sudo[279290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:53 compute-1 sudo[279290]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:53 compute-1 sudo[279315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:53 compute-1 sudo[279315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:53 compute-1 sudo[279315]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:53 compute-1 sudo[279340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:40:53 compute-1 sudo[279340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:40:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:53.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:40:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:53.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:53 compute-1 sudo[279340]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:53 compute-1 nova_compute[226101]: 2025-12-06 07:40:53.941 226109 DEBUG nova.network.neutron [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:40:53 compute-1 ceph-mon[81689]: pgmap v2551: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 9.7 MiB/s wr, 216 op/s
Dec 06 07:40:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:40:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:40:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:40:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:40:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:40:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:40:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:54 compute-1 nova_compute[226101]: 2025-12-06 07:40:54.320 226109 DEBUG nova.compute.manager [req-d24afe89-dfef-44b5-8d54-84b783e8e328 req-df44b619-62db-4d81-b49d-4f4e27ec341d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Received event network-changed-1c911a62-4d45-43ff-bf14-309d464c7c81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:40:54 compute-1 nova_compute[226101]: 2025-12-06 07:40:54.320 226109 DEBUG nova.compute.manager [req-d24afe89-dfef-44b5-8d54-84b783e8e328 req-df44b619-62db-4d81-b49d-4f4e27ec341d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Refreshing instance network info cache due to event network-changed-1c911a62-4d45-43ff-bf14-309d464c7c81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:40:54 compute-1 nova_compute[226101]: 2025-12-06 07:40:54.320 226109 DEBUG oslo_concurrency.lockutils [req-d24afe89-dfef-44b5-8d54-84b783e8e328 req-df44b619-62db-4d81-b49d-4f4e27ec341d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d954d607-525c-4edf-ab9e-56658dd2525a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:40:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e320 e320: 3 total, 3 up, 3 in
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.517 226109 DEBUG nova.network.neutron [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Updating instance_info_cache with network_info: [{"id": "89bff377-03f4-4e26-9d25-934e6005d866", "address": "fa:16:3e:d2:dc:dd", "network": {"id": "fc56be0c-aa06-4810-9ca5-d00406ffcf42", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-397602483-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28f1acf1ee4b40f695e30ee36e56b9d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89bff377-03", "ovs_interfaceid": "89bff377-03f4-4e26-9d25-934e6005d866", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.604 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Releasing lock "refresh_cache-0f193a24-e835-4d92-96ff-f743a5abab3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.604 226109 DEBUG nova.compute.manager [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Instance network_info: |[{"id": "89bff377-03f4-4e26-9d25-934e6005d866", "address": "fa:16:3e:d2:dc:dd", "network": {"id": "fc56be0c-aa06-4810-9ca5-d00406ffcf42", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-397602483-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28f1acf1ee4b40f695e30ee36e56b9d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89bff377-03", "ovs_interfaceid": "89bff377-03f4-4e26-9d25-934e6005d866", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.605 226109 DEBUG oslo_concurrency.lockutils [req-7330b9d9-2cbd-42fb-a8fc-137cd9ccd586 req-5efeae75-3f5c-4f8d-9a9b-9ee0339aa014 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0f193a24-e835-4d92-96ff-f743a5abab3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.605 226109 DEBUG nova.network.neutron [req-7330b9d9-2cbd-42fb-a8fc-137cd9ccd586 req-5efeae75-3f5c-4f8d-9a9b-9ee0339aa014 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Refreshing network info cache for port 89bff377-03f4-4e26-9d25-934e6005d866 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.607 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Start _get_guest_xml network_info=[{"id": "89bff377-03f4-4e26-9d25-934e6005d866", "address": "fa:16:3e:d2:dc:dd", "network": {"id": "fc56be0c-aa06-4810-9ca5-d00406ffcf42", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-397602483-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28f1acf1ee4b40f695e30ee36e56b9d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89bff377-03", "ovs_interfaceid": "89bff377-03f4-4e26-9d25-934e6005d866", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.612 226109 WARNING nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.617 226109 DEBUG nova.virt.libvirt.host [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.618 226109 DEBUG nova.virt.libvirt.host [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.621 226109 DEBUG nova.virt.libvirt.host [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.621 226109 DEBUG nova.virt.libvirt.host [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.622 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.622 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.623 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.623 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.623 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.623 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.624 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.624 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.624 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.624 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.625 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.625 226109 DEBUG nova.virt.hardware [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.627 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:40:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:55.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:40:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:40:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:55.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:40:55 compute-1 ceph-mon[81689]: osdmap e320: 3 total, 3 up, 3 in
Dec 06 07:40:55 compute-1 ceph-mon[81689]: pgmap v2553: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.6 MiB/s wr, 193 op/s
Dec 06 07:40:55 compute-1 nova_compute[226101]: 2025-12-06 07:40:55.772 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.002 226109 DEBUG nova.network.neutron [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Updating instance_info_cache with network_info: [{"id": "1c911a62-4d45-43ff-bf14-309d464c7c81", "address": "fa:16:3e:a0:66:fb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c911a62-4d", "ovs_interfaceid": "1c911a62-4d45-43ff-bf14-309d464c7c81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:40:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:40:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/509348024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.104 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.134 226109 DEBUG nova.storage.rbd_utils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] rbd image 0f193a24-e835-4d92-96ff-f743a5abab3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.138 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.165 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Releasing lock "refresh_cache-d954d607-525c-4edf-ab9e-56658dd2525a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.166 226109 DEBUG nova.compute.manager [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Instance network_info: |[{"id": "1c911a62-4d45-43ff-bf14-309d464c7c81", "address": "fa:16:3e:a0:66:fb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c911a62-4d", "ovs_interfaceid": "1c911a62-4d45-43ff-bf14-309d464c7c81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.167 226109 DEBUG oslo_concurrency.lockutils [req-d24afe89-dfef-44b5-8d54-84b783e8e328 req-df44b619-62db-4d81-b49d-4f4e27ec341d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d954d607-525c-4edf-ab9e-56658dd2525a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.168 226109 DEBUG nova.network.neutron [req-d24afe89-dfef-44b5-8d54-84b783e8e328 req-df44b619-62db-4d81-b49d-4f4e27ec341d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Refreshing network info cache for port 1c911a62-4d45-43ff-bf14-309d464c7c81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.171 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Start _get_guest_xml network_info=[{"id": "1c911a62-4d45-43ff-bf14-309d464c7c81", "address": "fa:16:3e:a0:66:fb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c911a62-4d", "ovs_interfaceid": "1c911a62-4d45-43ff-bf14-309d464c7c81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.177 226109 WARNING nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.181 226109 DEBUG nova.virt.libvirt.host [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.182 226109 DEBUG nova.virt.libvirt.host [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.187 226109 DEBUG nova.virt.libvirt.host [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.188 226109 DEBUG nova.virt.libvirt.host [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.189 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.189 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.190 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.190 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.191 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.191 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.192 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.192 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.192 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.193 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.193 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.194 226109 DEBUG nova.virt.hardware [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.198 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:40:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/74222001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.610 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.612 226109 DEBUG nova.virt.libvirt.vif [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1514680331',display_name='tempest-ServerAddressesNegativeTestJSON-server-1514680331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1514680331',id=138,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28f1acf1ee4b40f695e30ee36e56b9d0',ramdisk_id='',reservation_id='r-hwekr06u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1541917604',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1541917604-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:40:46Z,user_data=None,user_id='23cdebcd22d647eb808445dbd33d4f04',uuid=0f193a24-e835-4d92-96ff-f743a5abab3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89bff377-03f4-4e26-9d25-934e6005d866", "address": "fa:16:3e:d2:dc:dd", "network": {"id": "fc56be0c-aa06-4810-9ca5-d00406ffcf42", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-397602483-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28f1acf1ee4b40f695e30ee36e56b9d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89bff377-03", "ovs_interfaceid": "89bff377-03f4-4e26-9d25-934e6005d866", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.612 226109 DEBUG nova.network.os_vif_util [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Converting VIF {"id": "89bff377-03f4-4e26-9d25-934e6005d866", "address": "fa:16:3e:d2:dc:dd", "network": {"id": "fc56be0c-aa06-4810-9ca5-d00406ffcf42", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-397602483-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28f1acf1ee4b40f695e30ee36e56b9d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89bff377-03", "ovs_interfaceid": "89bff377-03f4-4e26-9d25-934e6005d866", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.613 226109 DEBUG nova.network.os_vif_util [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:dc:dd,bridge_name='br-int',has_traffic_filtering=True,id=89bff377-03f4-4e26-9d25-934e6005d866,network=Network(fc56be0c-aa06-4810-9ca5-d00406ffcf42),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89bff377-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.614 226109 DEBUG nova.objects.instance [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f193a24-e835-4d92-96ff-f743a5abab3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:40:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:40:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1657904244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.677 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.700 226109 DEBUG nova.storage.rbd_utils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image d954d607-525c-4edf-ab9e-56658dd2525a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.703 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/509348024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/74222001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1657904244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.825 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:40:56 compute-1 nova_compute[226101]:   <uuid>0f193a24-e835-4d92-96ff-f743a5abab3c</uuid>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   <name>instance-0000008a</name>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-1514680331</nova:name>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:40:55</nova:creationTime>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <nova:user uuid="23cdebcd22d647eb808445dbd33d4f04">tempest-ServerAddressesNegativeTestJSON-1541917604-project-member</nova:user>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <nova:project uuid="28f1acf1ee4b40f695e30ee36e56b9d0">tempest-ServerAddressesNegativeTestJSON-1541917604</nova:project>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <nova:port uuid="89bff377-03f4-4e26-9d25-934e6005d866">
Dec 06 07:40:56 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <system>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <entry name="serial">0f193a24-e835-4d92-96ff-f743a5abab3c</entry>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <entry name="uuid">0f193a24-e835-4d92-96ff-f743a5abab3c</entry>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     </system>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   <os>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   </os>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   <features>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   </features>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0f193a24-e835-4d92-96ff-f743a5abab3c_disk">
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       </source>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0f193a24-e835-4d92-96ff-f743a5abab3c_disk.config">
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       </source>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:40:56 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:d2:dc:dd"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <target dev="tap89bff377-03"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/0f193a24-e835-4d92-96ff-f743a5abab3c/console.log" append="off"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <video>
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     </video>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:40:56 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:40:56 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:40:56 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:40:56 compute-1 nova_compute[226101]: </domain>
Dec 06 07:40:56 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.827 226109 DEBUG nova.compute.manager [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Preparing to wait for external event network-vif-plugged-89bff377-03f4-4e26-9d25-934e6005d866 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.827 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Acquiring lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.827 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.827 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.828 226109 DEBUG nova.virt.libvirt.vif [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1514680331',display_name='tempest-ServerAddressesNegativeTestJSON-server-1514680331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1514680331',id=138,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28f1acf1ee4b40f695e30ee36e56b9d0',ramdisk_id='',reservation_id='r-hwekr06u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1541917604',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1541917604-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:40:46Z,user_data=None,user_id='23cdebcd22d647eb808445dbd33d4f04',uuid=0f193a24-e835-4d92-96ff-f743a5abab3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89bff377-03f4-4e26-9d25-934e6005d866", "address": "fa:16:3e:d2:dc:dd", "network": {"id": "fc56be0c-aa06-4810-9ca5-d00406ffcf42", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-397602483-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28f1acf1ee4b40f695e30ee36e56b9d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89bff377-03", "ovs_interfaceid": "89bff377-03f4-4e26-9d25-934e6005d866", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.828 226109 DEBUG nova.network.os_vif_util [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Converting VIF {"id": "89bff377-03f4-4e26-9d25-934e6005d866", "address": "fa:16:3e:d2:dc:dd", "network": {"id": "fc56be0c-aa06-4810-9ca5-d00406ffcf42", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-397602483-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28f1acf1ee4b40f695e30ee36e56b9d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89bff377-03", "ovs_interfaceid": "89bff377-03f4-4e26-9d25-934e6005d866", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.829 226109 DEBUG nova.network.os_vif_util [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:dc:dd,bridge_name='br-int',has_traffic_filtering=True,id=89bff377-03f4-4e26-9d25-934e6005d866,network=Network(fc56be0c-aa06-4810-9ca5-d00406ffcf42),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89bff377-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.830 226109 DEBUG os_vif [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:dc:dd,bridge_name='br-int',has_traffic_filtering=True,id=89bff377-03f4-4e26-9d25-934e6005d866,network=Network(fc56be0c-aa06-4810-9ca5-d00406ffcf42),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89bff377-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.830 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.830 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.831 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.833 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.834 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89bff377-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.834 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89bff377-03, col_values=(('external_ids', {'iface-id': '89bff377-03f4-4e26-9d25-934e6005d866', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:dc:dd', 'vm-uuid': '0f193a24-e835-4d92-96ff-f743a5abab3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:56 compute-1 NetworkManager[49031]: <info>  [1765006856.8368] manager: (tap89bff377-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.838 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.841 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:56 compute-1 nova_compute[226101]: 2025-12-06 07:40:56.843 226109 INFO os_vif [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:dc:dd,bridge_name='br-int',has_traffic_filtering=True,id=89bff377-03f4-4e26-9d25-934e6005d866,network=Network(fc56be0c-aa06-4810-9ca5-d00406ffcf42),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89bff377-03')
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.027 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.028 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.028 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] No VIF found with MAC fa:16:3e:d2:dc:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.029 226109 INFO nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Using config drive
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.056 226109 DEBUG nova.storage.rbd_utils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] rbd image 0f193a24-e835-4d92-96ff-f743a5abab3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:40:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1682043614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.144 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.146 226109 DEBUG nova.virt.libvirt.vif [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1486320730',display_name='tempest-₡-1486320730',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--1486320730',id=139,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-xgnkh50b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:40:47Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=d954d607-525c-4edf-ab9e-56658dd2525a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c911a62-4d45-43ff-bf14-309d464c7c81", "address": "fa:16:3e:a0:66:fb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c911a62-4d", "ovs_interfaceid": "1c911a62-4d45-43ff-bf14-309d464c7c81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.146 226109 DEBUG nova.network.os_vif_util [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "1c911a62-4d45-43ff-bf14-309d464c7c81", "address": "fa:16:3e:a0:66:fb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c911a62-4d", "ovs_interfaceid": "1c911a62-4d45-43ff-bf14-309d464c7c81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.147 226109 DEBUG nova.network.os_vif_util [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:66:fb,bridge_name='br-int',has_traffic_filtering=True,id=1c911a62-4d45-43ff-bf14-309d464c7c81,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c911a62-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.148 226109 DEBUG nova.objects.instance [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'pci_devices' on Instance uuid d954d607-525c-4edf-ab9e-56658dd2525a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.330 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.365 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:40:57 compute-1 nova_compute[226101]:   <uuid>d954d607-525c-4edf-ab9e-56658dd2525a</uuid>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   <name>instance-0000008b</name>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <nova:name>tempest-₡-1486320730</nova:name>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:40:56</nova:creationTime>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <nova:user uuid="0d8b62a3276f4a8b8349af67b82134c8">tempest-ServersTestJSON-374151197-project-member</nova:user>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <nova:project uuid="eff1f6a1654b45079de20eddb830e76d">tempest-ServersTestJSON-374151197</nova:project>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <nova:port uuid="1c911a62-4d45-43ff-bf14-309d464c7c81">
Dec 06 07:40:57 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <system>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <entry name="serial">d954d607-525c-4edf-ab9e-56658dd2525a</entry>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <entry name="uuid">d954d607-525c-4edf-ab9e-56658dd2525a</entry>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     </system>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   <os>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   </os>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   <features>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   </features>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/d954d607-525c-4edf-ab9e-56658dd2525a_disk">
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       </source>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/d954d607-525c-4edf-ab9e-56658dd2525a_disk.config">
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       </source>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:40:57 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:a0:66:fb"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <target dev="tap1c911a62-4d"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/d954d607-525c-4edf-ab9e-56658dd2525a/console.log" append="off"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <video>
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     </video>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:40:57 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:40:57 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:40:57 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:40:57 compute-1 nova_compute[226101]: </domain>
Dec 06 07:40:57 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
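_get_guest_xml only renders the domain definition; defining and launching it happens shortly afterwards through virtqemud. As a hedged sketch, not nova's code path verbatim, XML like the document above could be defined and started with the plain libvirt-python bindings; nova itself launches the guest paused and lets it run once the network-vif-plugged event prepared below has arrived.

```python
# Sketch under assumptions: defines and starts a domain from XML like the one
# logged above. The file name is a placeholder for the full <domain> document.
import libvirt

xml = open('instance-0000008b.xml').read()  # hypothetical file holding the XML

conn = libvirt.open('qemu:///system')
try:
    dom = conn.defineXML(xml)  # persistent definition, as virsh define would do
    # Starting paused mirrors nova's pattern of waiting for the VIF plug event
    # before letting the guest run (hedged; not nova's exact call sequence).
    dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)
    dom.resume()  # once networking is ready
finally:
    conn.close()
```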
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.366 226109 DEBUG nova.compute.manager [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Preparing to wait for external event network-vif-plugged-1c911a62-4d45-43ff-bf14-309d464c7c81 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.367 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.367 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.367 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
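The three lockutils lines above are the registration half of nova's external-event handshake: the compute manager records, under the per-instance "<uuid>-events" lock, that it expects a network-vif-plugged event for this port, so a Neutron callback arriving early cannot race the registration. A simplified, hypothetical rendering of that prepare-then-wait pattern, with threading.Event standing in for nova's eventlet machinery:

```python
# Hypothetical simplification of prepare_for_instance_event and the later
# wait; the names and the use of threading.Event are made up, the ordering
# (register first, plug second, wait last) is the point.
import threading

_events = {}
_table_lock = threading.Lock()   # plays the role of the "<uuid>-events" lock

def prepare_for_instance_event(instance_uuid, name, tag):
    key = (instance_uuid, name, tag)
    with _table_lock:
        return _events.setdefault(key, threading.Event())

def deliver_event(instance_uuid, name, tag):
    # what the API callback would invoke when Neutron reports the VIF is up
    with _table_lock:
        ev = _events.get((instance_uuid, name, tag))
    if ev is not None:
        ev.set()

# Compute side: register *before* plugging, then block with a timeout.
ev = prepare_for_instance_event('d954d607-525c-4edf-ab9e-56658dd2525a',
                                'network-vif-plugged',
                                '1c911a62-4d45-43ff-bf14-309d464c7c81')
# ... plug the VIF, define and start the domain ...
if not ev.wait(timeout=300):
    raise TimeoutError('network-vif-plugged never arrived')
```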
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.368 226109 DEBUG nova.virt.libvirt.vif [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1486320730',display_name='tempest-₡-1486320730',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--1486320730',id=139,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-xgnkh50b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:40:47Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=d954d607-525c-4edf-ab9e-56658dd2525a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c911a62-4d45-43ff-bf14-309d464c7c81", "address": "fa:16:3e:a0:66:fb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c911a62-4d", "ovs_interfaceid": "1c911a62-4d45-43ff-bf14-309d464c7c81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.368 226109 DEBUG nova.network.os_vif_util [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "1c911a62-4d45-43ff-bf14-309d464c7c81", "address": "fa:16:3e:a0:66:fb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c911a62-4d", "ovs_interfaceid": "1c911a62-4d45-43ff-bf14-309d464c7c81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.369 226109 DEBUG nova.network.os_vif_util [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:66:fb,bridge_name='br-int',has_traffic_filtering=True,id=1c911a62-4d45-43ff-bf14-309d464c7c81,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c911a62-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.369 226109 DEBUG os_vif [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:66:fb,bridge_name='br-int',has_traffic_filtering=True,id=1c911a62-4d45-43ff-bf14-309d464c7c81,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c911a62-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.369 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.370 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.370 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.372 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.372 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c911a62-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.372 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c911a62-4d, col_values=(('external_ids', {'iface-id': '1c911a62-4d45-43ff-bf14-309d464c7c81', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:66:fb', 'vm-uuid': 'd954d607-525c-4edf-ab9e-56658dd2525a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
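The two transactions above, AddBridgeCommand followed by AddPortCommand plus DbSetCommand, are what the os-vif ovs plugin drives through ovsdbapp. A hedged reconstruction against the public ovsdbapp API; the socket path and timeout are assumptions, while the commands, names, and external_ids are taken from the log:

```python
# Reconstruction of the two logged OVSDB transactions. The connection string
# and timeout are assumptions; the command arguments are from the log.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# txn 1: ensure br-int exists (AddBridgeCommand)
with api.transaction(check_error=True) as txn:
    txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))

# txn 2: add the tap port and tag its Interface row (AddPortCommand + DbSetCommand)
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tap1c911a62-4d', may_exist=True))
    txn.add(api.db_set('Interface', 'tap1c911a62-4d',
                       ('external_ids', {
                           'iface-id': '1c911a62-4d45-43ff-bf14-309d464c7c81',
                           'iface-status': 'active',
                           'attached-mac': 'fa:16:3e:a0:66:fb',
                           'vm-uuid': 'd954d607-525c-4edf-ab9e-56658dd2525a'})))
```

The "Transaction caused no change" line above is the expected outcome of the first transaction: br-int already exists, and may_exist=True turns the command into a no-op.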
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.373 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:57 compute-1 NetworkManager[49031]: <info>  [1765006857.3745] manager: (tap1c911a62-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.376 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.380 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.381 226109 INFO os_vif [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:66:fb,bridge_name='br-int',has_traffic_filtering=True,id=1c911a62-4d45-43ff-bf14-309d464c7c81,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c911a62-4d')
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.542 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.542 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.542 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No VIF found with MAC fa:16:3e:a0:66:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.543 226109 INFO nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Using config drive
Dec 06 07:40:57 compute-1 nova_compute[226101]: 2025-12-06 07:40:57.563 226109 DEBUG nova.storage.rbd_utils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image d954d607-525c-4edf-ab9e-56658dd2525a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:40:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:57.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:57 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:57.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:57 compute-1 ceph-mon[81689]: pgmap v2554: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 159 op/s
Dec 06 07:40:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1682043614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.174 226109 INFO nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Creating config drive at /var/lib/nova/instances/d954d607-525c-4edf-ab9e-56658dd2525a/disk.config
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.178 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d954d607-525c-4edf-ab9e-56658dd2525a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjl50gs1c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.207 226109 INFO nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Creating config drive at /var/lib/nova/instances/0f193a24-e835-4d92-96ff-f743a5abab3c/disk.config
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.212 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f193a24-e835-4d92-96ff-f743a5abab3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw2ug96yo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.310 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d954d607-525c-4edf-ab9e-56658dd2525a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjl50gs1c" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
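The config drive is a plain ISO 9660 image (Joliet plus Rock Ridge, volume label config-2) produced by shelling out to mkisofs over a temporary metadata tree. The same invocation can be replayed standalone; the flags and paths below are copied verbatim from the log, and the /tmp input directory existed only for the duration of the build:

```python
# Standalone re-run of the logged mkisofs command; paths come from the log
# and were only meaningful on that host at that moment.
import subprocess

subprocess.run(
    ['/usr/bin/mkisofs',
     '-o', '/var/lib/nova/instances/d954d607-525c-4edf-ab9e-56658dd2525a/disk.config',
     '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
     '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
     '-quiet', '-J', '-r', '-V', 'config-2',
     '/tmp/tmpjl50gs1c'],
    check=True)
```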
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.337 226109 DEBUG nova.storage.rbd_utils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image d954d607-525c-4edf-ab9e-56658dd2525a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.341 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d954d607-525c-4edf-ab9e-56658dd2525a/disk.config d954d607-525c-4edf-ab9e-56658dd2525a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.367 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f193a24-e835-4d92-96ff-f743a5abab3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw2ug96yo" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.396 226109 DEBUG nova.storage.rbd_utils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] rbd image 0f193a24-e835-4d92-96ff-f743a5abab3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.401 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f193a24-e835-4d92-96ff-f743a5abab3c/disk.config 0f193a24-e835-4d92-96ff-f743a5abab3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.479 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.513 226109 DEBUG oslo_concurrency.processutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d954d607-525c-4edf-ab9e-56658dd2525a/disk.config d954d607-525c-4edf-ab9e-56658dd2525a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.514 226109 INFO nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Deleting local config drive /var/lib/nova/instances/d954d607-525c-4edf-ab9e-56658dd2525a/disk.config because it was imported into RBD.
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.563 226109 DEBUG oslo_concurrency.processutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f193a24-e835-4d92-96ff-f743a5abab3c/disk.config 0f193a24-e835-4d92-96ff-f743a5abab3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.564 226109 INFO nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Deleting local config drive /var/lib/nova/instances/0f193a24-e835-4d92-96ff-f743a5abab3c/disk.config because it was imported into RBD.
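Because this deployment backs instance storage with Ceph, the freshly built ISO is imported into the vms pool and the local file deleted, which is why the cdrom disk in the domain XML above points at rbd rather than at a file. The earlier "rbd image ... does not exist" DEBUG lines are the pre-import existence probe; a hedged sketch of that probe with the ceph python bindings, reusing the pool, user, and conf path from the log:

```python
# Sketch of the existence check behind "rbd image ... does not exist";
# the structure is illustrative, the connection parameters are from the log.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vms')
    try:
        img = rbd.Image(ioctx,
                        'd954d607-525c-4edf-ab9e-56658dd2525a_disk.config')
        img.close()
        print('image exists, reuse it')
    except rbd.ImageNotFound:
        print('image missing, fall back to "rbd import" as logged')
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```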
Dec 06 07:40:58 compute-1 kernel: tap1c911a62-4d: entered promiscuous mode
Dec 06 07:40:58 compute-1 NetworkManager[49031]: <info>  [1765006858.5690] manager: (tap1c911a62-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/247)
Dec 06 07:40:58 compute-1 ovn_controller[130279]: 2025-12-06T07:40:58Z|00544|binding|INFO|Claiming lport 1c911a62-4d45-43ff-bf14-309d464c7c81 for this chassis.
Dec 06 07:40:58 compute-1 ovn_controller[130279]: 2025-12-06T07:40:58Z|00545|binding|INFO|1c911a62-4d45-43ff-bf14-309d464c7c81: Claiming fa:16:3e:a0:66:fb 10.100.0.8
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.572 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:58 compute-1 virtqemud[225710]: End of file while reading data: Input/output error
Dec 06 07:40:58 compute-1 systemd-machined[190302]: New machine qemu-63-instance-0000008b.
Dec 06 07:40:58 compute-1 systemd-udevd[279687]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:40:58 compute-1 systemd[1]: Started Virtual Machine qemu-63-instance-0000008b.
Dec 06 07:40:58 compute-1 NetworkManager[49031]: <info>  [1765006858.6405] manager: (tap89bff377-03): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Dec 06 07:40:58 compute-1 kernel: tap89bff377-03: entered promiscuous mode
Dec 06 07:40:58 compute-1 systemd-udevd[279697]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:40:58 compute-1 NetworkManager[49031]: <info>  [1765006858.6461] device (tap1c911a62-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:40:58 compute-1 NetworkManager[49031]: <info>  [1765006858.6471] device (tap1c911a62-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.649 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:58 compute-1 ovn_controller[130279]: 2025-12-06T07:40:58Z|00546|if_status|INFO|Not updating pb chassis for 89bff377-03f4-4e26-9d25-934e6005d866 now as sb is readonly
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.661 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.664 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:58 compute-1 NetworkManager[49031]: <info>  [1765006858.6667] device (tap89bff377-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:40:58 compute-1 NetworkManager[49031]: <info>  [1765006858.6679] device (tap89bff377-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:40:58 compute-1 systemd-machined[190302]: New machine qemu-64-instance-0000008a.
Dec 06 07:40:58 compute-1 systemd[1]: Started Virtual Machine qemu-64-instance-0000008a.
Dec 06 07:40:58 compute-1 podman[279654]: 2025-12-06 07:40:58.688844854 +0000 UTC m=+0.081601752 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 06 07:40:58 compute-1 podman[279657]: 2025-12-06 07:40:58.691583376 +0000 UTC m=+0.086700346 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:40:58 compute-1 ovn_controller[130279]: 2025-12-06T07:40:58Z|00547|binding|INFO|Claiming lport 89bff377-03f4-4e26-9d25-934e6005d866 for this chassis.
Dec 06 07:40:58 compute-1 ovn_controller[130279]: 2025-12-06T07:40:58Z|00548|binding|INFO|89bff377-03f4-4e26-9d25-934e6005d866: Claiming fa:16:3e:d2:dc:dd 10.100.0.3
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.693 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:66:fb 10.100.0.8'], port_security=['fa:16:3e:a0:66:fb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd954d607-525c-4edf-ab9e-56658dd2525a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=1c911a62-4d45-43ff-bf14-309d464c7c81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.694 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 1c911a62-4d45-43ff-bf14-309d464c7c81 in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 bound to our chassis
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.696 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35a27638-382c-4afb-83b0-edd6d7f4bca8
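The "Matched UPDATE: PortBindingUpdatedEvent" line shows the metadata agent reacting to the Port_Binding row gaining a chassis through ovsdbapp's row-event machinery. A stripped-down sketch of that pattern; the real neutron event also filters on datapath and requested chassis, so treat this as illustrative only:

```python
# Illustrative ovsdbapp row event; the agent's real matching logic is richer.
from ovsdbapp.backend.ovs_idl import event

class PortBindingUpdatedEvent(event.RowEvent):
    def __init__(self):
        # mirrors the logged repr: events=('update',), table='Port_Binding'
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Fire only when the row just gained a chassis, i.e. the port was
        # bound; 'old' carries only the columns that actually changed.
        return hasattr(old, 'chassis') and bool(row.chassis)

    def run(self, event, row, old):
        print('port %s bound, provision metadata for its datapath'
              % row.logical_port)
```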
Dec 06 07:40:58 compute-1 ovn_controller[130279]: 2025-12-06T07:40:58Z|00549|binding|INFO|Setting lport 1c911a62-4d45-43ff-bf14-309d464c7c81 ovn-installed in OVS
Dec 06 07:40:58 compute-1 ovn_controller[130279]: 2025-12-06T07:40:58Z|00550|binding|INFO|Setting lport 1c911a62-4d45-43ff-bf14-309d464c7c81 up in Southbound
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.704 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.709 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b205f8-0d88-4a2e-8b52-d6a403b31b5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.710 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35a27638-31 in ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
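Provisioning builds a veth pair with the -31 end inside the ovnmeta- namespace, where the metadata proxy will listen, and the -30 end left in the root namespace to be plugged into br-int. A rough pyroute2 approximation of what the privsep daemon executes here, with interface and namespace names taken from the log; neutron's real helper sequence behind oslo.privsep differs in detail:

```python
# Approximation with pyroute2; interface and namespace names are from the
# log, the exact sequence of privileged calls in neutron is more involved.
from pyroute2 import IPRoute, netns

ns = 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8'
if ns not in netns.listnetns():
    netns.create(ns)

ipr = IPRoute()
try:
    # -30 stays in the root namespace (later added to br-int as an OVS port);
    # -31 is pushed into the ovnmeta- namespace for the metadata proxy.
    ipr.link('add', ifname='tap35a27638-30', kind='veth',
             peer='tap35a27638-31')
    idx = ipr.link_lookup(ifname='tap35a27638-31')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)
finally:
    ipr.close()
```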
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.714 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35a27638-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.714 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8f2d57-5b50-481d-b272-ad3d2aacd29b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.715 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3dcba2-e439-4ef3-9898-ef62963f7b22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.730 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ba246ecc-7db1-477e-98a5-4208b44a79bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.746 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:dc:dd 10.100.0.3'], port_security=['fa:16:3e:d2:dc:dd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f193a24-e835-4d92-96ff-f743a5abab3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc56be0c-aa06-4810-9ca5-d00406ffcf42', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f1acf1ee4b40f695e30ee36e56b9d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41e083e6-8b57-42b3-a8f0-0deb92b8da5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9e4adf5-a070-4e11-817f-c49536c81a6c, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=89bff377-03f4-4e26-9d25-934e6005d866) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.759 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e7616193-089e-4553-abfe-b8f71ccd5ea9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_controller[130279]: 2025-12-06T07:40:58Z|00551|binding|INFO|Setting lport 89bff377-03f4-4e26-9d25-934e6005d866 ovn-installed in OVS
Dec 06 07:40:58 compute-1 ovn_controller[130279]: 2025-12-06T07:40:58Z|00552|binding|INFO|Setting lport 89bff377-03f4-4e26-9d25-934e6005d866 up in Southbound
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.765 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:58 compute-1 podman[279658]: 2025-12-06 07:40:58.769686785 +0000 UTC m=+0.162592327 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.789 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f21a90c1-626b-4180-a8af-c8bdb39139aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 NetworkManager[49031]: <info>  [1765006858.7960] manager: (tap35a27638-30): new Veth device (/org/freedesktop/NetworkManager/Devices/249)
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.795 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e075a127-8d0a-4775-a3f1-83c9b91e840e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 systemd-udevd[279705]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.831 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7a1657-ad0a-4a67-93cf-3564afc435d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.834 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ee5809-688e-4209-9cc8-e5e340d4936a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 NetworkManager[49031]: <info>  [1765006858.8550] device (tap35a27638-30): carrier: link connected
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.860 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b3ade9-1993-4bdc-b90e-4f96c0353924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.875 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[93dc9eb8-99ef-4c93-8535-28b24331e6bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713835, 'reachable_time': 30087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279767, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.886 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8fb55b-63c1-434d-b793-399f7cf9fc27]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:c527'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713835, 'tstamp': 713835}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279768, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.899 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4446c666-ffe9-432d-ada4-fb2eb936cb88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713835, 'reachable_time': 30087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279769, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.925 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[55dc5485-42cd-437a-b6b1-0f87851aeac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.974 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2b876cd5-da93-41c3-8935-6d4beb58ff6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.975 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.975 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.976 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35a27638-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:58 compute-1 kernel: tap35a27638-30: entered promiscuous mode
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.977 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.980 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35a27638-30, col_values=(('external_ids', {'iface-id': '5e371956-96bf-49df-b4a8-97154044dc54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:58 compute-1 NetworkManager[49031]: <info>  [1765006858.9807] manager: (tap35a27638-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.980 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:58 compute-1 ovn_controller[130279]: 2025-12-06T07:40:58Z|00553|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.981 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.982 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.983 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a5718963-796d-4ead-8f43-e966849b55fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.983 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:40:58 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:58.984 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'env', 'PROCESS_TAG=haproxy-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35a27638-382c-4afb-83b0-edd6d7f4bca8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:40:58 compute-1 nova_compute[226101]: 2025-12-06 07:40:58.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.204 226109 DEBUG nova.network.neutron [req-7330b9d9-2cbd-42fb-a8fc-137cd9ccd586 req-5efeae75-3f5c-4f8d-9a9b-9ee0339aa014 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Updated VIF entry in instance network info cache for port 89bff377-03f4-4e26-9d25-934e6005d866. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.204 226109 DEBUG nova.network.neutron [req-7330b9d9-2cbd-42fb-a8fc-137cd9ccd586 req-5efeae75-3f5c-4f8d-9a9b-9ee0339aa014 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Updating instance_info_cache with network_info: [{"id": "89bff377-03f4-4e26-9d25-934e6005d866", "address": "fa:16:3e:d2:dc:dd", "network": {"id": "fc56be0c-aa06-4810-9ca5-d00406ffcf42", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-397602483-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28f1acf1ee4b40f695e30ee36e56b9d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89bff377-03", "ovs_interfaceid": "89bff377-03f4-4e26-9d25-934e6005d866", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.227 226109 DEBUG oslo_concurrency.lockutils [req-7330b9d9-2cbd-42fb-a8fc-137cd9ccd586 req-5efeae75-3f5c-4f8d-9a9b-9ee0339aa014 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0f193a24-e835-4d92-96ff-f743a5abab3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:40:59 compute-1 podman[279801]: 2025-12-06 07:40:59.338798846 +0000 UTC m=+0.053411510 container create 584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:40:59 compute-1 systemd[1]: Started libpod-conmon-584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6.scope.
Dec 06 07:40:59 compute-1 podman[279801]: 2025-12-06 07:40:59.308107507 +0000 UTC m=+0.022720201 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:40:59 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:40:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c719025635d9ef621e9648ad672d43a88af50b2095f33a7f436dfd8241a5d8eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:59 compute-1 podman[279801]: 2025-12-06 07:40:59.424235738 +0000 UTC m=+0.138848422 container init 584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:40:59 compute-1 podman[279801]: 2025-12-06 07:40:59.430742029 +0000 UTC m=+0.145354693 container start 584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.455 226109 DEBUG nova.network.neutron [req-d24afe89-dfef-44b5-8d54-84b783e8e328 req-df44b619-62db-4d81-b49d-4f4e27ec341d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Updated VIF entry in instance network info cache for port 1c911a62-4d45-43ff-bf14-309d464c7c81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.455 226109 DEBUG nova.network.neutron [req-d24afe89-dfef-44b5-8d54-84b783e8e328 req-df44b619-62db-4d81-b49d-4f4e27ec341d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Updating instance_info_cache with network_info: [{"id": "1c911a62-4d45-43ff-bf14-309d464c7c81", "address": "fa:16:3e:a0:66:fb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c911a62-4d", "ovs_interfaceid": "1c911a62-4d45-43ff-bf14-309d464c7c81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:40:59 compute-1 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[279856]: [NOTICE]   (279861) : New worker (279864) forked
Dec 06 07:40:59 compute-1 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[279856]: [NOTICE]   (279861) : Loading success.
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.468 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006859.4676626, d954d607-525c-4edf-ab9e-56658dd2525a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.468 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] VM Started (Lifecycle Event)
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.477 226109 DEBUG oslo_concurrency.lockutils [req-d24afe89-dfef-44b5-8d54-84b783e8e328 req-df44b619-62db-4d81-b49d-4f4e27ec341d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d954d607-525c-4edf-ab9e-56658dd2525a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.488 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 89bff377-03f4-4e26-9d25-934e6005d866 in datapath fc56be0c-aa06-4810-9ca5-d00406ffcf42 unbound from our chassis
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.489 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc56be0c-aa06-4810-9ca5-d00406ffcf42
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.498 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.500 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e219e4f5-145f-4850-bc07-c28dc1e20b08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.500 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc56be0c-a1 in ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.502 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc56be0c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.502 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef58aa4-1e8c-4bfc-a023-63c5d595d381]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.502 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006859.4678714, d954d607-525c-4edf-ab9e-56658dd2525a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.503 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] VM Paused (Lifecycle Event)
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.503 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5b7310-7172-4e18-8e92-10ae541cae06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.513 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd31044-ced4-4c47-8582-1b5cb2c90a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.533 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.535 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[04d0e5e0-a010-47df-89ab-45a8a6a6760f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.536 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.559 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.560 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b4293083-bd92-44b8-b2a5-fb2cb06f57aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 NetworkManager[49031]: <info>  [1765006859.5685] manager: (tapfc56be0c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/251)
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.567 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb9d9a4-d79f-4fa4-8b4d-870a008ca0f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.596 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf49c27-6cb6-4439-ac86-ecf4a914f033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.599 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b9004518-533d-4d55-a82d-0f94ad31155c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 NetworkManager[49031]: <info>  [1765006859.6207] device (tapfc56be0c-a0): carrier: link connected
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.626 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[36016d49-fe57-49b5-878b-a82c2986c941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.642 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3a8600-a20a-4d44-846f-4cfcaaa11549]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc56be0c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:26:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713912, 'reachable_time': 44146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279885, 'error': None, 'target': 'ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.657 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a8454a1a-6bb6-4aff-9fbb-6dc23bd798b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed4:26c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713912, 'tstamp': 713912}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279886, 'error': None, 'target': 'ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:59.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:40:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:59.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.674 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2d2409-b309-451b-bd53-1715c2fbdb68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc56be0c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:26:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713912, 'reachable_time': 44146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279887, 'error': None, 'target': 'ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.704 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[08feceec-740b-41f9-b528-ee5720fbed13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.770 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ac48dc-9a15-401b-a3a7-dc8c8887c323]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.771 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc56be0c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.772 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.772 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc56be0c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.809 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:59 compute-1 NetworkManager[49031]: <info>  [1765006859.8097] manager: (tapfc56be0c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Dec 06 07:40:59 compute-1 kernel: tapfc56be0c-a0: entered promiscuous mode
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.815 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc56be0c-a0, col_values=(('external_ids', {'iface-id': 'c67fbd29-6691-452d-928c-22cc753e93f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.816 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:59 compute-1 ovn_controller[130279]: 2025-12-06T07:40:59Z|00554|binding|INFO|Releasing lport c67fbd29-6691-452d-928c-22cc753e93f0 from this chassis (sb_readonly=0)
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.816 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.819 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc56be0c-aa06-4810-9ca5-d00406ffcf42.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc56be0c-aa06-4810-9ca5-d00406ffcf42.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.820 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[293bcaa2-98f7-4147-b7d4-b70e0b08c1c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.820 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-fc56be0c-aa06-4810-9ca5-d00406ffcf42
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/fc56be0c-aa06-4810-9ca5-d00406ffcf42.pid.haproxy
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID fc56be0c-aa06-4810-9ca5-d00406ffcf42
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:40:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:40:59.821 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42', 'env', 'PROCESS_TAG=haproxy-fc56be0c-aa06-4810-9ca5-d00406ffcf42', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc56be0c-aa06-4810-9ca5-d00406ffcf42.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:40:59 compute-1 nova_compute[226101]: 2025-12-06 07:40:59.831 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:59 compute-1 sudo[279895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:59 compute-1 sudo[279895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:59 compute-1 sudo[279895]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:59 compute-1 sudo[279922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:40:59 compute-1 sudo[279922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:59 compute-1 sudo[279922]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:59 compute-1 ceph-mon[81689]: pgmap v2555: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.5 MiB/s wr, 134 op/s
Dec 06 07:40:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1098147259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:40:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:41:00 compute-1 podman[279969]: 2025-12-06 07:41:00.156223671 +0000 UTC m=+0.045444319 container create dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:41:00 compute-1 systemd[1]: Started libpod-conmon-dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70.scope.
Dec 06 07:41:00 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:41:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e809d759c35d43dffd9336cdb5635b45367e82834b039188fae42c657a3d2d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:41:00 compute-1 podman[279969]: 2025-12-06 07:41:00.22752884 +0000 UTC m=+0.116749528 container init dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 07:41:00 compute-1 podman[279969]: 2025-12-06 07:41:00.131254153 +0000 UTC m=+0.020474811 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:41:00 compute-1 podman[279969]: 2025-12-06 07:41:00.234915736 +0000 UTC m=+0.124136394 container start dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:41:00 compute-1 neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42[279984]: [NOTICE]   (279988) : New worker (279990) forked
Dec 06 07:41:00 compute-1 neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42[279984]: [NOTICE]   (279988) : Loading success.
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.482 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006860.4821508, 0f193a24-e835-4d92-96ff-f743a5abab3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.482 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] VM Started (Lifecycle Event)
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.508 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.512 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006860.4822452, 0f193a24-e835-4d92-96ff-f743a5abab3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.512 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] VM Paused (Lifecycle Event)
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.535 226109 DEBUG nova.compute.manager [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Received event network-vif-plugged-1c911a62-4d45-43ff-bf14-309d464c7c81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.535 226109 DEBUG oslo_concurrency.lockutils [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.536 226109 DEBUG oslo_concurrency.lockutils [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.536 226109 DEBUG oslo_concurrency.lockutils [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.536 226109 DEBUG nova.compute.manager [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Processing event network-vif-plugged-1c911a62-4d45-43ff-bf14-309d464c7c81 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.536 226109 DEBUG nova.compute.manager [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Received event network-vif-plugged-1c911a62-4d45-43ff-bf14-309d464c7c81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.537 226109 DEBUG oslo_concurrency.lockutils [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.537 226109 DEBUG oslo_concurrency.lockutils [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.537 226109 DEBUG oslo_concurrency.lockutils [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.537 226109 DEBUG nova.compute.manager [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] No waiting events found dispatching network-vif-plugged-1c911a62-4d45-43ff-bf14-309d464c7c81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.537 226109 WARNING nova.compute.manager [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Received unexpected event network-vif-plugged-1c911a62-4d45-43ff-bf14-309d464c7c81 for instance with vm_state building and task_state spawning.
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.538 226109 DEBUG nova.compute.manager [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Received event network-vif-plugged-89bff377-03f4-4e26-9d25-934e6005d866 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.538 226109 DEBUG oslo_concurrency.lockutils [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.538 226109 DEBUG oslo_concurrency.lockutils [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.538 226109 DEBUG oslo_concurrency.lockutils [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.538 226109 DEBUG nova.compute.manager [req-6a63b17b-00a2-4792-be91-8234414dcd3a req-b3d1d67a-e31c-4212-a048-1b13e7165745 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Processing event network-vif-plugged-89bff377-03f4-4e26-9d25-934e6005d866 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.539 226109 DEBUG nova.compute.manager [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.539 226109 DEBUG nova.compute.manager [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.544 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.545 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.547 226109 INFO nova.virt.libvirt.driver [-] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Instance spawned successfully.
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.548 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.550 226109 INFO nova.virt.libvirt.driver [-] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Instance spawned successfully.
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.551 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.598 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.605 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006860.5421906, 0f193a24-e835-4d92-96ff-f743a5abab3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.605 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] VM Resumed (Lifecycle Event)
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.609 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.609 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.609 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.610 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.610 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.611 226109 DEBUG nova.virt.libvirt.driver [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.617 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.617 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.618 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.618 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.619 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.619 226109 DEBUG nova.virt.libvirt.driver [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.657 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.661 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.713 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.713 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006860.5438657, d954d607-525c-4edf-ab9e-56658dd2525a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.713 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] VM Resumed (Lifecycle Event)
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.749 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.752 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.759 226109 INFO nova.compute.manager [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Took 14.57 seconds to spawn the instance on the hypervisor.
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.760 226109 DEBUG nova.compute.manager [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.847 226109 INFO nova.compute.manager [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Took 12.87 seconds to spawn the instance on the hypervisor.
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.848 226109 DEBUG nova.compute.manager [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.851 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.960 226109 INFO nova.compute.manager [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Took 16.28 seconds to build instance.
Dec 06 07:41:00 compute-1 nova_compute[226101]: 2025-12-06 07:41:00.998 226109 DEBUG oslo_concurrency.lockutils [None req-80fe68f9-2fed-4041-9b65-c6c28a9501a8 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:01 compute-1 nova_compute[226101]: 2025-12-06 07:41:01.001 226109 INFO nova.compute.manager [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Took 16.29 seconds to build instance.
Dec 06 07:41:01 compute-1 nova_compute[226101]: 2025-12-06 07:41:01.024 226109 DEBUG oslo_concurrency.lockutils [None req-7d1ddaba-e656-4bf5-8ddc-cc7a8c5bbc7f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:01.659 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:01.660 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:01.660 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:01.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:01.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:02 compute-1 ceph-mon[81689]: pgmap v2556: 305 pgs: 305 active+clean; 432 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 378 KiB/s wr, 57 op/s
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.333 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.374 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.638 226109 DEBUG oslo_concurrency.lockutils [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Acquiring lock "0f193a24-e835-4d92-96ff-f743a5abab3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.638 226109 DEBUG oslo_concurrency.lockutils [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.638 226109 DEBUG oslo_concurrency.lockutils [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Acquiring lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.639 226109 DEBUG oslo_concurrency.lockutils [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.639 226109 DEBUG oslo_concurrency.lockutils [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.640 226109 INFO nova.compute.manager [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Terminating instance
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.641 226109 DEBUG nova.compute.manager [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:41:02 compute-1 kernel: tap89bff377-03 (unregistering): left promiscuous mode
Dec 06 07:41:02 compute-1 NetworkManager[49031]: <info>  [1765006862.6856] device (tap89bff377-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:41:02 compute-1 ovn_controller[130279]: 2025-12-06T07:41:02Z|00555|binding|INFO|Releasing lport 89bff377-03f4-4e26-9d25-934e6005d866 from this chassis (sb_readonly=0)
Dec 06 07:41:02 compute-1 ovn_controller[130279]: 2025-12-06T07:41:02Z|00556|binding|INFO|Setting lport 89bff377-03f4-4e26-9d25-934e6005d866 down in Southbound
Dec 06 07:41:02 compute-1 ovn_controller[130279]: 2025-12-06T07:41:02Z|00557|binding|INFO|Removing iface tap89bff377-03 ovn-installed in OVS
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.700 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:02.707 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:dc:dd 10.100.0.3'], port_security=['fa:16:3e:d2:dc:dd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0f193a24-e835-4d92-96ff-f743a5abab3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc56be0c-aa06-4810-9ca5-d00406ffcf42', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f1acf1ee4b40f695e30ee36e56b9d0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41e083e6-8b57-42b3-a8f0-0deb92b8da5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9e4adf5-a070-4e11-817f-c49536c81a6c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=89bff377-03f4-4e26-9d25-934e6005d866) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:41:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:02.709 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 89bff377-03f4-4e26-9d25-934e6005d866 in datapath fc56be0c-aa06-4810-9ca5-d00406ffcf42 unbound from our chassis
Dec 06 07:41:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:02.710 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc56be0c-aa06-4810-9ca5-d00406ffcf42, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:41:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:02.712 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[92c3c85d-4c7f-498c-b1c6-e622ffe30f06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:02.712 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42 namespace which is not needed anymore
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.720 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:02 compute-1 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Dec 06 07:41:02 compute-1 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000008a.scope: Consumed 3.964s CPU time.
Dec 06 07:41:02 compute-1 systemd-machined[190302]: Machine qemu-64-instance-0000008a terminated.
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.870 226109 DEBUG nova.compute.manager [req-7c3fd475-8d4d-4c4d-9c29-62e3a29670cb req-9ec5c00a-e9e3-47f7-91a0-e0c9c59cedc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Received event network-vif-plugged-89bff377-03f4-4e26-9d25-934e6005d866 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.873 226109 DEBUG oslo_concurrency.lockutils [req-7c3fd475-8d4d-4c4d-9c29-62e3a29670cb req-9ec5c00a-e9e3-47f7-91a0-e0c9c59cedc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.874 226109 DEBUG oslo_concurrency.lockutils [req-7c3fd475-8d4d-4c4d-9c29-62e3a29670cb req-9ec5c00a-e9e3-47f7-91a0-e0c9c59cedc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.874 226109 DEBUG oslo_concurrency.lockutils [req-7c3fd475-8d4d-4c4d-9c29-62e3a29670cb req-9ec5c00a-e9e3-47f7-91a0-e0c9c59cedc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.874 226109 DEBUG nova.compute.manager [req-7c3fd475-8d4d-4c4d-9c29-62e3a29670cb req-9ec5c00a-e9e3-47f7-91a0-e0c9c59cedc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] No waiting events found dispatching network-vif-plugged-89bff377-03f4-4e26-9d25-934e6005d866 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.874 226109 WARNING nova.compute.manager [req-7c3fd475-8d4d-4c4d-9c29-62e3a29670cb req-9ec5c00a-e9e3-47f7-91a0-e0c9c59cedc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Received unexpected event network-vif-plugged-89bff377-03f4-4e26-9d25-934e6005d866 for instance with vm_state active and task_state deleting.
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.887 226109 INFO nova.virt.libvirt.driver [-] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Instance destroyed successfully.
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.887 226109 DEBUG nova.objects.instance [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lazy-loading 'resources' on Instance uuid 0f193a24-e835-4d92-96ff-f743a5abab3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.920 226109 DEBUG nova.virt.libvirt.vif [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1514680331',display_name='tempest-ServerAddressesNegativeTestJSON-server-1514680331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1514680331',id=138,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:41:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28f1acf1ee4b40f695e30ee36e56b9d0',ramdisk_id='',reservation_id='r-hwekr06u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1541917604',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1541917604-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:41:00Z,user_data=None,user_id='23cdebcd22d647eb808445dbd33d4f04',uuid=0f193a24-e835-4d92-96ff-f743a5abab3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89bff377-03f4-4e26-9d25-934e6005d866", "address": "fa:16:3e:d2:dc:dd", "network": {"id": "fc56be0c-aa06-4810-9ca5-d00406ffcf42", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-397602483-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28f1acf1ee4b40f695e30ee36e56b9d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89bff377-03", "ovs_interfaceid": "89bff377-03f4-4e26-9d25-934e6005d866", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.922 226109 DEBUG nova.network.os_vif_util [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Converting VIF {"id": "89bff377-03f4-4e26-9d25-934e6005d866", "address": "fa:16:3e:d2:dc:dd", "network": {"id": "fc56be0c-aa06-4810-9ca5-d00406ffcf42", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-397602483-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28f1acf1ee4b40f695e30ee36e56b9d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89bff377-03", "ovs_interfaceid": "89bff377-03f4-4e26-9d25-934e6005d866", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.923 226109 DEBUG nova.network.os_vif_util [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:dc:dd,bridge_name='br-int',has_traffic_filtering=True,id=89bff377-03f4-4e26-9d25-934e6005d866,network=Network(fc56be0c-aa06-4810-9ca5-d00406ffcf42),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89bff377-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.923 226109 DEBUG os_vif [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:dc:dd,bridge_name='br-int',has_traffic_filtering=True,id=89bff377-03f4-4e26-9d25-934e6005d866,network=Network(fc56be0c-aa06-4810-9ca5-d00406ffcf42),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89bff377-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.925 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.925 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89bff377-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.928 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:02 compute-1 nova_compute[226101]: 2025-12-06 07:41:02.931 226109 INFO os_vif [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:dc:dd,bridge_name='br-int',has_traffic_filtering=True,id=89bff377-03f4-4e26-9d25-934e6005d866,network=Network(fc56be0c-aa06-4810-9ca5-d00406ffcf42),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89bff377-03')
Dec 06 07:41:03 compute-1 neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42[279984]: [NOTICE]   (279988) : haproxy version is 2.8.14-c23fe91
Dec 06 07:41:03 compute-1 neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42[279984]: [NOTICE]   (279988) : path to executable is /usr/sbin/haproxy
Dec 06 07:41:03 compute-1 neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42[279984]: [WARNING]  (279988) : Exiting Master process...
Dec 06 07:41:03 compute-1 neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42[279984]: [ALERT]    (279988) : Current worker (279990) exited with code 143 (Terminated)
Dec 06 07:41:03 compute-1 neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42[279984]: [WARNING]  (279988) : All workers exited. Exiting... (0)
Dec 06 07:41:03 compute-1 systemd[1]: libpod-dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70.scope: Deactivated successfully.
Dec 06 07:41:03 compute-1 ceph-mon[81689]: pgmap v2557: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 107 op/s
Dec 06 07:41:03 compute-1 podman[280063]: 2025-12-06 07:41:03.364472713 +0000 UTC m=+0.548718214 container died dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:41:03 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70-userdata-shm.mount: Deactivated successfully.
Dec 06 07:41:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-37e809d759c35d43dffd9336cdb5635b45367e82834b039188fae42c657a3d2d-merged.mount: Deactivated successfully.
Dec 06 07:41:03 compute-1 podman[280063]: 2025-12-06 07:41:03.416950146 +0000 UTC m=+0.601195627 container cleanup dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:41:03 compute-1 systemd[1]: libpod-conmon-dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70.scope: Deactivated successfully.
Dec 06 07:41:03 compute-1 podman[280122]: 2025-12-06 07:41:03.477704758 +0000 UTC m=+0.040917420 container remove dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:41:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:03.485 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[229392b5-1052-4b2e-b96a-81983b2a048b]: (4, ('Sat Dec  6 07:41:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42 (dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70)\ndbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70\nSat Dec  6 07:41:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42 (dbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70)\ndbc8745c3a4fbb520813e6f19dc1db730f8c221d685eff939eddfb9cde7d0f70\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:03.487 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e296e3-e828-47c5-8f8b-a372b0dd374e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:03.488 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc56be0c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:03 compute-1 nova_compute[226101]: 2025-12-06 07:41:03.490 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:03 compute-1 kernel: tapfc56be0c-a0: left promiscuous mode
Dec 06 07:41:03 compute-1 nova_compute[226101]: 2025-12-06 07:41:03.492 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:03 compute-1 nova_compute[226101]: 2025-12-06 07:41:03.509 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:03.513 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[70a45490-506a-42a5-89c0-e42d1d0fd8aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:03.528 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d82469f0-46a3-4999-8af7-29f52f95960e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:03.529 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a1caa78d-7f27-42e7-807e-80fa8928e566]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:03.543 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f2dbc5ef-788f-4fd7-a9d4-5bd8aa2ac1f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713905, 'reachable_time': 28895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280138, 'error': None, 'target': 'ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:03 compute-1 systemd[1]: run-netns-ovnmeta\x2dfc56be0c\x2daa06\x2d4810\x2d9ca5\x2dd00406ffcf42.mount: Deactivated successfully.
Dec 06 07:41:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:03.548 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc56be0c-aa06-4810-9ca5-d00406ffcf42 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:41:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:03.548 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[c391bf63-f57e-4332-a620-fc4dfc8be575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:41:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:03.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:41:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:03.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:03 compute-1 nova_compute[226101]: 2025-12-06 07:41:03.689 226109 INFO nova.virt.libvirt.driver [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Deleting instance files /var/lib/nova/instances/0f193a24-e835-4d92-96ff-f743a5abab3c_del
Dec 06 07:41:03 compute-1 nova_compute[226101]: 2025-12-06 07:41:03.690 226109 INFO nova.virt.libvirt.driver [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Deletion of /var/lib/nova/instances/0f193a24-e835-4d92-96ff-f743a5abab3c_del complete
Dec 06 07:41:03 compute-1 nova_compute[226101]: 2025-12-06 07:41:03.760 226109 INFO nova.compute.manager [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Took 1.12 seconds to destroy the instance on the hypervisor.
Dec 06 07:41:03 compute-1 nova_compute[226101]: 2025-12-06 07:41:03.760 226109 DEBUG oslo.service.loopingcall [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:41:03 compute-1 nova_compute[226101]: 2025-12-06 07:41:03.761 226109 DEBUG nova.compute.manager [-] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:41:03 compute-1 nova_compute[226101]: 2025-12-06 07:41:03.761 226109 DEBUG nova.network.neutron [-] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:41:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.976 226109 DEBUG nova.compute.manager [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Received event network-vif-unplugged-89bff377-03f4-4e26-9d25-934e6005d866 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.977 226109 DEBUG oslo_concurrency.lockutils [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.978 226109 DEBUG oslo_concurrency.lockutils [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.978 226109 DEBUG oslo_concurrency.lockutils [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.978 226109 DEBUG nova.compute.manager [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] No waiting events found dispatching network-vif-unplugged-89bff377-03f4-4e26-9d25-934e6005d866 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.979 226109 DEBUG nova.compute.manager [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Received event network-vif-unplugged-89bff377-03f4-4e26-9d25-934e6005d866 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.979 226109 DEBUG nova.compute.manager [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Received event network-vif-plugged-89bff377-03f4-4e26-9d25-934e6005d866 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.979 226109 DEBUG oslo_concurrency.lockutils [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.980 226109 DEBUG oslo_concurrency.lockutils [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.980 226109 DEBUG oslo_concurrency.lockutils [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.980 226109 DEBUG nova.compute.manager [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] No waiting events found dispatching network-vif-plugged-89bff377-03f4-4e26-9d25-934e6005d866 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:41:04 compute-1 nova_compute[226101]: 2025-12-06 07:41:04.981 226109 WARNING nova.compute.manager [req-fc4ad833-bdce-4e97-b806-628f81d7f347 req-94a38c24-8a21-4e0c-9709-60eaa27de186 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Received unexpected event network-vif-plugged-89bff377-03f4-4e26-9d25-934e6005d866 for instance with vm_state active and task_state deleting.
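The lockutils lines above are the standard oslo.concurrency pattern nova uses to serialize access to its per-instance event dict: a named lock ("<uuid>-events") is taken just long enough to pop a waiter for the incoming event, which is why every hold time logs as 0.000s. A minimal sketch of that pattern, assuming a plain oslo lock without nova's internal lock-name prefix (_events and pop_instance_event here are illustrative, not nova's real module layout):

    from oslo_concurrency import lockutils

    _events = {}  # {instance_uuid: {event_name: waiter}}

    def pop_instance_event(instance_uuid, event_name):
        # Mirrors the "pop_instance_event.<locals>._pop_event" frames above:
        # the lock is held only for the dict pop, then released immediately.
        @lockutils.synchronized(instance_uuid + '-events')
        def _pop_event():
            return _events.get(instance_uuid, {}).pop(event_name, None)
        return _pop_event()

When _pop_event returns None, nova logs "No waiting events found dispatching ..." as seen above; an event that arrives after the instance has moved on (here, task_state deleting) is then downgraded to the WARNING about an unexpected network-vif-plugged.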
Dec 06 07:41:05 compute-1 ceph-mon[81689]: pgmap v2558: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 30 KiB/s wr, 103 op/s
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.172 226109 DEBUG nova.network.neutron [-] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.192 226109 INFO nova.compute.manager [-] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Took 1.43 seconds to deallocate network for instance.
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.241 226109 DEBUG oslo_concurrency.lockutils [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.242 226109 DEBUG oslo_concurrency.lockutils [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.312 226109 DEBUG oslo_concurrency.processutils [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.344 226109 DEBUG nova.compute.manager [req-9a868a39-e92a-4968-b008-1acc28f140a7 req-5020e6f9-b717-4cd3-96f9-950b4c305bab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Received event network-vif-deleted-89bff377-03f4-4e26-9d25-934e6005d866 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:41:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:05.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:41:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:05.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:41:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3407117804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.737 226109 DEBUG oslo_concurrency.processutils [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.742 226109 DEBUG nova.compute.provider_tree [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.758 226109 DEBUG nova.scheduler.client.report [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.777 226109 DEBUG oslo_concurrency.lockutils [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.801 226109 INFO nova.scheduler.client.report [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Deleted allocations for instance 0f193a24-e835-4d92-96ff-f743a5abab3c
Dec 06 07:41:05 compute-1 nova_compute[226101]: 2025-12-06 07:41:05.868 226109 DEBUG oslo_concurrency.lockutils [None req-45c104fb-bcd4-4473-82a8-10ef5e3a74b5 23cdebcd22d647eb808445dbd33d4f04 28f1acf1ee4b40f695e30ee36e56b9d0 - - default default] Lock "0f193a24-e835-4d92-96ff-f743a5abab3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
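Throughout this window the resource tracker shells out to the same ceph df command visible above to size the RBD-backed DISK_GB inventory. A short sketch of that call and of reading the cluster-wide totals from its JSON output (plain subprocess in place of oslo processutils; the 'stats' field names follow ceph's JSON schema, and the values are bytes):

    import json
    import subprocess

    def ceph_capacity(conf='/etc/ceph/ceph.conf', user='openstack'):
        # Identical command line to the one logged by oslo_concurrency above.
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', user, '--conf', conf])
        stats = json.loads(out)['stats']
        return stats['total_bytes'], stats['total_avail_bytes']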
Dec 06 07:41:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3407117804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:07 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Dec 06 07:41:07 compute-1 nova_compute[226101]: 2025-12-06 07:41:07.334 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:07.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:07.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:07 compute-1 ceph-mon[81689]: pgmap v2559: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 199 op/s
Dec 06 07:41:07 compute-1 nova_compute[226101]: 2025-12-06 07:41:07.928 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3528680710' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:41:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3528680710' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:41:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:09.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:09.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:10 compute-1 ceph-mon[81689]: pgmap v2560: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 37 KiB/s wr, 201 op/s
Dec 06 07:41:10 compute-1 ovn_controller[130279]: 2025-12-06T07:41:10Z|00558|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec 06 07:41:10 compute-1 nova_compute[226101]: 2025-12-06 07:41:10.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:41:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:11.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:11 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:11.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:11 compute-1 ceph-mon[81689]: pgmap v2561: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 37 KiB/s wr, 217 op/s
Dec 06 07:41:12 compute-1 nova_compute[226101]: 2025-12-06 07:41:12.335 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3426479770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:12 compute-1 nova_compute[226101]: 2025-12-06 07:41:12.929 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:13 compute-1 nova_compute[226101]: 2025-12-06 07:41:13.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:41:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:13 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:13.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:13.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:13 compute-1 ceph-mon[81689]: pgmap v2562: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 776 KiB/s wr, 216 op/s
Dec 06 07:41:13 compute-1 nova_compute[226101]: 2025-12-06 07:41:13.935 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:13 compute-1 nova_compute[226101]: 2025-12-06 07:41:13.935 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:13 compute-1 nova_compute[226101]: 2025-12-06 07:41:13.936 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:13 compute-1 nova_compute[226101]: 2025-12-06 07:41:13.936 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:41:13 compute-1 nova_compute[226101]: 2025-12-06 07:41:13.936 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:41:14 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/835378175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:14 compute-1 nova_compute[226101]: 2025-12-06 07:41:14.383 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:14 compute-1 nova_compute[226101]: 2025-12-06 07:41:14.619 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:41:14 compute-1 nova_compute[226101]: 2025-12-06 07:41:14.620 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:41:14 compute-1 nova_compute[226101]: 2025-12-06 07:41:14.773 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:41:14 compute-1 nova_compute[226101]: 2025-12-06 07:41:14.774 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4166MB free_disk=20.867637634277344GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:41:14 compute-1 nova_compute[226101]: 2025-12-06 07:41:14.775 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:14 compute-1 nova_compute[226101]: 2025-12-06 07:41:14.775 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:14 compute-1 nova_compute[226101]: 2025-12-06 07:41:14.860 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance d954d607-525c-4edf-ab9e-56658dd2525a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:41:14 compute-1 nova_compute[226101]: 2025-12-06 07:41:14.860 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:41:14 compute-1 nova_compute[226101]: 2025-12-06 07:41:14.861 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:41:14 compute-1 nova_compute[226101]: 2025-12-06 07:41:14.895 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/835378175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:41:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2710732886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:15 compute-1 nova_compute[226101]: 2025-12-06 07:41:15.326 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:15 compute-1 nova_compute[226101]: 2025-12-06 07:41:15.334 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:41:15 compute-1 nova_compute[226101]: 2025-12-06 07:41:15.358 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:41:15 compute-1 nova_compute[226101]: 2025-12-06 07:41:15.395 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:41:15 compute-1 nova_compute[226101]: 2025-12-06 07:41:15.395 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
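The inventory dict the report client logs above is what placement turns into schedulable capacity, as (total - reserved) * allocation_ratio per resource class. Worked out for the values shown (a plain arithmetic check, not nova code):

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, fields in inventory.items():
        cap = (fields['total'] - fields['reserved']) * fields['allocation_ratio']
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1

That 17.1 GB effective disk ceiling is consistent with the ~20 GiB usable Ceph capacity the pgmap lines report alongside.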
Dec 06 07:41:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:15.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:41:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:15.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:41:15 compute-1 ovn_controller[130279]: 2025-12-06T07:41:15Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a0:66:fb 10.100.0.8
Dec 06 07:41:15 compute-1 ovn_controller[130279]: 2025-12-06T07:41:15Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a0:66:fb 10.100.0.8
Dec 06 07:41:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e321 e321: 3 total, 3 up, 3 in
Dec 06 07:41:16 compute-1 ceph-mon[81689]: pgmap v2563: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 763 KiB/s wr, 149 op/s
Dec 06 07:41:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2710732886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:17 compute-1 ceph-mon[81689]: osdmap e321: 3 total, 3 up, 3 in
Dec 06 07:41:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e322 e322: 3 total, 3 up, 3 in
Dec 06 07:41:17 compute-1 nova_compute[226101]: 2025-12-06 07:41:17.383 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:17 compute-1 nova_compute[226101]: 2025-12-06 07:41:17.395 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:17 compute-1 nova_compute[226101]: 2025-12-06 07:41:17.396 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:41:17 compute-1 nova_compute[226101]: 2025-12-06 07:41:17.432 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:41:17 compute-1 nova_compute[226101]: 2025-12-06 07:41:17.432 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:17 compute-1 nova_compute[226101]: 2025-12-06 07:41:17.432 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:41:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:17.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:17.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:17 compute-1 nova_compute[226101]: 2025-12-06 07:41:17.884 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006862.8830507, 0f193a24-e835-4d92-96ff-f743a5abab3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:41:17 compute-1 nova_compute[226101]: 2025-12-06 07:41:17.885 226109 INFO nova.compute.manager [-] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] VM Stopped (Lifecycle Event)
Dec 06 07:41:17 compute-1 nova_compute[226101]: 2025-12-06 07:41:17.910 226109 DEBUG nova.compute.manager [None req-15e7edaf-712b-429f-9154-6b1e08ed3954 - - - - - -] [instance: 0f193a24-e835-4d92-96ff-f743a5abab3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:17 compute-1 nova_compute[226101]: 2025-12-06 07:41:17.931 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:18 compute-1 ceph-mon[81689]: pgmap v2565: 305 pgs: 305 active+clean; 439 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 392 KiB/s rd, 4.0 MiB/s wr, 201 op/s
Dec 06 07:41:18 compute-1 ceph-mon[81689]: osdmap e322: 3 total, 3 up, 3 in
Dec 06 07:41:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e323 e323: 3 total, 3 up, 3 in
Dec 06 07:41:19 compute-1 ceph-mon[81689]: osdmap e323: 3 total, 3 up, 3 in
Dec 06 07:41:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/673045323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2372975583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2685470459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:19 compute-1 ceph-mon[81689]: pgmap v2568: 305 pgs: 305 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 9.7 MiB/s wr, 391 op/s
Dec 06 07:41:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:19.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:19.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3072884106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:21 compute-1 ceph-mon[81689]: pgmap v2569: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 14 MiB/s wr, 619 op/s
Dec 06 07:41:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:21.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:21.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:22 compute-1 nova_compute[226101]: 2025-12-06 07:41:22.384 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:22 compute-1 nova_compute[226101]: 2025-12-06 07:41:22.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:22 compute-1 nova_compute[226101]: 2025-12-06 07:41:22.733 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:22 compute-1 nova_compute[226101]: 2025-12-06 07:41:22.733 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:22 compute-1 nova_compute[226101]: 2025-12-06 07:41:22.755 226109 DEBUG nova.compute.manager [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:41:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3397156865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:22 compute-1 nova_compute[226101]: 2025-12-06 07:41:22.818 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:22 compute-1 nova_compute[226101]: 2025-12-06 07:41:22.818 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:22 compute-1 nova_compute[226101]: 2025-12-06 07:41:22.824 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:41:22 compute-1 nova_compute[226101]: 2025-12-06 07:41:22.825 226109 INFO nova.compute.claims [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:41:22 compute-1 nova_compute[226101]: 2025-12-06 07:41:22.934 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:22 compute-1 nova_compute[226101]: 2025-12-06 07:41:22.976 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:41:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/845513629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.449 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.458 226109 DEBUG nova.compute.provider_tree [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.517 226109 DEBUG nova.scheduler.client.report [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:23.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:41:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:23.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.725 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.726 226109 DEBUG nova.compute.manager [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.793 226109 DEBUG nova.compute.manager [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.793 226109 DEBUG nova.network.neutron [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.821 226109 INFO nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.842 226109 DEBUG nova.compute.manager [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.947 226109 DEBUG nova.compute.manager [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.949 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:41:23 compute-1 nova_compute[226101]: 2025-12-06 07:41:23.950 226109 INFO nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Creating image(s)
Dec 06 07:41:23 compute-1 ceph-mon[81689]: pgmap v2570: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 352 op/s
Dec 06 07:41:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/845513629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1794384952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:24 compute-1 nova_compute[226101]: 2025-12-06 07:41:24.016 226109 DEBUG nova.storage.rbd_utils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:41:24 compute-1 nova_compute[226101]: 2025-12-06 07:41:24.048 226109 DEBUG nova.storage.rbd_utils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:41:24 compute-1 nova_compute[226101]: 2025-12-06 07:41:24.070 226109 DEBUG nova.storage.rbd_utils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:41:24 compute-1 nova_compute[226101]: 2025-12-06 07:41:24.073 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:24 compute-1 nova_compute[226101]: 2025-12-06 07:41:24.100 226109 DEBUG nova.policy [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '297bc99c242e4fa8aedea4a6367b61c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:41:24 compute-1 nova_compute[226101]: 2025-12-06 07:41:24.139 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:24 compute-1 nova_compute[226101]: 2025-12-06 07:41:24.140 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:24 compute-1 nova_compute[226101]: 2025-12-06 07:41:24.140 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:24 compute-1 nova_compute[226101]: 2025-12-06 07:41:24.141 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:24 compute-1 nova_compute[226101]: 2025-12-06 07:41:24.161 226109 DEBUG nova.storage.rbd_utils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:41:24 compute-1 nova_compute[226101]: 2025-12-06 07:41:24.164 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e324 e324: 3 total, 3 up, 3 in
Dec 06 07:41:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:25 compute-1 nova_compute[226101]: 2025-12-06 07:41:25.181 226109 DEBUG nova.network.neutron [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Successfully created port: 708d0bd8-6c04-48c8-80e4-9ccd10704179 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:41:25 compute-1 nova_compute[226101]: 2025-12-06 07:41:25.198 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:25 compute-1 nova_compute[226101]: 2025-12-06 07:41:25.261 226109 DEBUG nova.storage.rbd_utils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] resizing rbd image f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:41:25 compute-1 nova_compute[226101]: 2025-12-06 07:41:25.360 226109 DEBUG nova.objects.instance [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'migration_context' on Instance uuid f50ab4f6-4d4c-488a-9793-0b9979a2e193 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:41:25 compute-1 nova_compute[226101]: 2025-12-06 07:41:25.380 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
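The import/resize pair just above is nova's RBD image backend materializing the root disk: the cached base image is imported into the vms pool, then grown to the flavor's 1 GiB (1073741824-byte) root disk. An equivalent CLI sketch follows; note nova itself performs the resize through the rbd python bindings, so the rbd resize call is an assumption-level stand-in (rbd resize --size takes megabytes by default):

    import subprocess

    base = '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef'
    disk = 'f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk'
    ceph = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Same import the log shows returning 0 in 1.033s.
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, disk,
                           '--image-format=2'] + ceph)
    # 1073741824 bytes == 1024 MB, matching the resize logged above.
    subprocess.check_call(['rbd', 'resize', '--size', '1024',
                           'vms/' + disk] + ceph)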
Dec 06 07:41:25 compute-1 nova_compute[226101]: 2025-12-06 07:41:25.380 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Ensure instance console log exists: /var/lib/nova/instances/f50ab4f6-4d4c-488a-9793-0b9979a2e193/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:41:25 compute-1 nova_compute[226101]: 2025-12-06 07:41:25.381 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:25 compute-1 nova_compute[226101]: 2025-12-06 07:41:25.381 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:25 compute-1 nova_compute[226101]: 2025-12-06 07:41:25.381 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:25 compute-1 ceph-mon[81689]: osdmap e324: 3 total, 3 up, 3 in
Dec 06 07:41:25 compute-1 ceph-mon[81689]: pgmap v2572: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 6.8 MiB/s wr, 309 op/s
Dec 06 07:41:25 compute-1 nova_compute[226101]: 2025-12-06 07:41:25.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:25.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:41:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:25.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:41:26 compute-1 nova_compute[226101]: 2025-12-06 07:41:26.245 226109 DEBUG nova.network.neutron [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Successfully updated port: 708d0bd8-6c04-48c8-80e4-9ccd10704179 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:41:26 compute-1 nova_compute[226101]: 2025-12-06 07:41:26.259 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:41:26 compute-1 nova_compute[226101]: 2025-12-06 07:41:26.260 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquired lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:41:26 compute-1 nova_compute[226101]: 2025-12-06 07:41:26.260 226109 DEBUG nova.network.neutron [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:41:26 compute-1 nova_compute[226101]: 2025-12-06 07:41:26.464 226109 DEBUG nova.network.neutron [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.388 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.604 226109 DEBUG nova.network.neutron [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Updating instance_info_cache with network_info: [{"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.632 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Releasing lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.633 226109 DEBUG nova.compute.manager [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Instance network_info: |[{"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.635 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Start _get_guest_xml network_info=[{"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.639 226109 WARNING nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.645 226109 DEBUG nova.virt.libvirt.host [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.646 226109 DEBUG nova.virt.libvirt.host [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.650 226109 DEBUG nova.virt.libvirt.host [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.651 226109 DEBUG nova.virt.libvirt.host [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.652 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.652 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.652 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.652 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.653 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.653 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.653 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.653 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.653 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.654 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.654 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.654 226109 DEBUG nova.virt.hardware [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.656 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:27.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:27.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:27 compute-1 nova_compute[226101]: 2025-12-06 07:41:27.935 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:28 compute-1 nova_compute[226101]: 2025-12-06 07:41:28.011 226109 DEBUG nova.compute.manager [req-4a7adac9-524d-41fa-a8fb-930b52f152a2 req-5a4ef9ca-a2f5-41ad-a0e5-e34f74b04fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Received event network-changed-708d0bd8-6c04-48c8-80e4-9ccd10704179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:28 compute-1 nova_compute[226101]: 2025-12-06 07:41:28.011 226109 DEBUG nova.compute.manager [req-4a7adac9-524d-41fa-a8fb-930b52f152a2 req-5a4ef9ca-a2f5-41ad-a0e5-e34f74b04fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Refreshing instance network info cache due to event network-changed-708d0bd8-6c04-48c8-80e4-9ccd10704179. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:41:28 compute-1 nova_compute[226101]: 2025-12-06 07:41:28.012 226109 DEBUG oslo_concurrency.lockutils [req-4a7adac9-524d-41fa-a8fb-930b52f152a2 req-5a4ef9ca-a2f5-41ad-a0e5-e34f74b04fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:41:28 compute-1 nova_compute[226101]: 2025-12-06 07:41:28.012 226109 DEBUG oslo_concurrency.lockutils [req-4a7adac9-524d-41fa-a8fb-930b52f152a2 req-5a4ef9ca-a2f5-41ad-a0e5-e34f74b04fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:41:28 compute-1 nova_compute[226101]: 2025-12-06 07:41:28.012 226109 DEBUG nova.network.neutron [req-4a7adac9-524d-41fa-a8fb-930b52f152a2 req-5a4ef9ca-a2f5-41ad-a0e5-e34f74b04fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Refreshing network info cache for port 708d0bd8-6c04-48c8-80e4-9ccd10704179 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:41:28 compute-1 nova_compute[226101]: 2025-12-06 07:41:28.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:41:28 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3505286723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:29 compute-1 podman[280416]: 2025-12-06 07:41:29.103116403 +0000 UTC m=+0.074649489 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Dec 06 07:41:29 compute-1 podman[280417]: 2025-12-06 07:41:29.112086779 +0000 UTC m=+0.089131831 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:41:29 compute-1 podman[280418]: 2025-12-06 07:41:29.126509419 +0000 UTC m=+0.095484278 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:41:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:29.153 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:41:29 compute-1 nova_compute[226101]: 2025-12-06 07:41:29.153 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:29.154 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:41:29 compute-1 ceph-mon[81689]: pgmap v2573: 305 pgs: 305 active+clean; 547 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.2 MiB/s wr, 283 op/s
Dec 06 07:41:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/96112953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3383122313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:29 compute-1 nova_compute[226101]: 2025-12-06 07:41:29.662 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:29 compute-1 nova_compute[226101]: 2025-12-06 07:41:29.704 226109 DEBUG nova.storage.rbd_utils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:41:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:41:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:29.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:41:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:29.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:29 compute-1 nova_compute[226101]: 2025-12-06 07:41:29.711 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:30 compute-1 nova_compute[226101]: 2025-12-06 07:41:30.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3505286723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:30 compute-1 ceph-mon[81689]: pgmap v2574: 305 pgs: 305 active+clean; 523 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.8 MiB/s wr, 282 op/s
Dec 06 07:41:31 compute-1 nova_compute[226101]: 2025-12-06 07:41:31.027 226109 DEBUG nova.network.neutron [req-4a7adac9-524d-41fa-a8fb-930b52f152a2 req-5a4ef9ca-a2f5-41ad-a0e5-e34f74b04fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Updated VIF entry in instance network info cache for port 708d0bd8-6c04-48c8-80e4-9ccd10704179. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:41:31 compute-1 nova_compute[226101]: 2025-12-06 07:41:31.028 226109 DEBUG nova.network.neutron [req-4a7adac9-524d-41fa-a8fb-930b52f152a2 req-5a4ef9ca-a2f5-41ad-a0e5-e34f74b04fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Updating instance_info_cache with network_info: [{"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:41:31 compute-1 nova_compute[226101]: 2025-12-06 07:41:31.052 226109 DEBUG oslo_concurrency.lockutils [req-4a7adac9-524d-41fa-a8fb-930b52f152a2 req-5a4ef9ca-a2f5-41ad-a0e5-e34f74b04fe2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:41:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:41:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:31.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:41:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:31.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:32 compute-1 nova_compute[226101]: 2025-12-06 07:41:32.390 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:32 compute-1 nova_compute[226101]: 2025-12-06 07:41:32.938 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:41:32 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1684873521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:33 compute-1 ceph-mon[81689]: pgmap v2575: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 197 op/s
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.160 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.162 226109 DEBUG nova.virt.libvirt.vif [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:41:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-838297842',display_name='tempest-AttachVolumeNegativeTest-server-838297842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-838297842',id=141,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGlJuOvECyiA5NWlLIDOjDJWeZ15OCRR+GrIfRd1eb2YqFJmiHzUPd3GlF+M7lFiDx2HKicZQSeYWxrij87um/7Lq1h64lSuGI60htEXEcY7+qwoIwHqQQuRAT9NPhIr3w==',key_name='tempest-keypair-1389259306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-z6898ou0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=f50ab4f6-4d4c-488a-9793-0b9979a2e193,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.163 226109 DEBUG nova.network.os_vif_util [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.164 226109 DEBUG nova.network.os_vif_util [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:83:92,bridge_name='br-int',has_traffic_filtering=True,id=708d0bd8-6c04-48c8-80e4-9ccd10704179,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap708d0bd8-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.166 226109 DEBUG nova.objects.instance [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'pci_devices' on Instance uuid f50ab4f6-4d4c-488a-9793-0b9979a2e193 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.189 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:41:33 compute-1 nova_compute[226101]:   <uuid>f50ab4f6-4d4c-488a-9793-0b9979a2e193</uuid>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   <name>instance-0000008d</name>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <nova:name>tempest-AttachVolumeNegativeTest-server-838297842</nova:name>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:41:27</nova:creationTime>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <nova:user uuid="297bc99c242e4fa8aedea4a6367b61c0">tempest-AttachVolumeNegativeTest-2080911030-project-member</nova:user>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <nova:project uuid="741dc47f9ced423cbd99fd6f9d32904f">tempest-AttachVolumeNegativeTest-2080911030</nova:project>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <nova:port uuid="708d0bd8-6c04-48c8-80e4-9ccd10704179">
Dec 06 07:41:33 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <system>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <entry name="serial">f50ab4f6-4d4c-488a-9793-0b9979a2e193</entry>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <entry name="uuid">f50ab4f6-4d4c-488a-9793-0b9979a2e193</entry>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     </system>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   <os>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   </os>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   <features>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   </features>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk">
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       </source>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk.config">
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       </source>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:41:33 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:c0:83:92"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <target dev="tap708d0bd8-6c"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/f50ab4f6-4d4c-488a-9793-0b9979a2e193/console.log" append="off"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <video>
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     </video>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:41:33 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:41:33 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:41:33 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:41:33 compute-1 nova_compute[226101]: </domain>
Dec 06 07:41:33 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.191 226109 DEBUG nova.compute.manager [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Preparing to wait for external event network-vif-plugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.192 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.192 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.192 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.193 226109 DEBUG nova.virt.libvirt.vif [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:41:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-838297842',display_name='tempest-AttachVolumeNegativeTest-server-838297842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-838297842',id=141,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGlJuOvECyiA5NWlLIDOjDJWeZ15OCRR+GrIfRd1eb2YqFJmiHzUPd3GlF+M7lFiDx2HKicZQSeYWxrij87um/7Lq1h64lSuGI60htEXEcY7+qwoIwHqQQuRAT9NPhIr3w==',key_name='tempest-keypair-1389259306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-z6898ou0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=f50ab4f6-4d4c-488a-9793-0b9979a2e193,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.193 226109 DEBUG nova.network.os_vif_util [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.194 226109 DEBUG nova.network.os_vif_util [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:83:92,bridge_name='br-int',has_traffic_filtering=True,id=708d0bd8-6c04-48c8-80e4-9ccd10704179,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap708d0bd8-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.195 226109 DEBUG os_vif [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:83:92,bridge_name='br-int',has_traffic_filtering=True,id=708d0bd8-6c04-48c8-80e4-9ccd10704179,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap708d0bd8-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.195 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.196 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.196 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.200 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.200 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap708d0bd8-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.201 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap708d0bd8-6c, col_values=(('external_ids', {'iface-id': '708d0bd8-6c04-48c8-80e4-9ccd10704179', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:83:92', 'vm-uuid': 'f50ab4f6-4d4c-488a-9793-0b9979a2e193'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.202 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:33 compute-1 NetworkManager[49031]: <info>  [1765006893.2032] manager: (tap708d0bd8-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.205 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.212 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.214 226109 INFO os_vif [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:83:92,bridge_name='br-int',has_traffic_filtering=True,id=708d0bd8-6c04-48c8-80e4-9ccd10704179,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap708d0bd8-6c')
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.265 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.266 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.266 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No VIF found with MAC fa:16:3e:c0:83:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.267 226109 INFO nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Using config drive
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.404 226109 DEBUG nova.storage.rbd_utils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:41:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:33.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:33.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.817 226109 INFO nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Creating config drive at /var/lib/nova/instances/f50ab4f6-4d4c-488a-9793-0b9979a2e193/disk.config
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.823 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f50ab4f6-4d4c-488a-9793-0b9979a2e193/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps2yhugsl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.956 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f50ab4f6-4d4c-488a-9793-0b9979a2e193/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps2yhugsl" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.984 226109 DEBUG nova.storage.rbd_utils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:41:33 compute-1 nova_compute[226101]: 2025-12-06 07:41:33.988 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f50ab4f6-4d4c-488a-9793-0b9979a2e193/disk.config f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:35.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:35.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:35 compute-1 ceph-mon[81689]: pgmap v2576: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Dec 06 07:41:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1684873521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:37 compute-1 nova_compute[226101]: 2025-12-06 07:41:37.393 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:37.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:41:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:37.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:41:37 compute-1 nova_compute[226101]: 2025-12-06 07:41:37.811 226109 DEBUG oslo_concurrency.processutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f50ab4f6-4d4c-488a-9793-0b9979a2e193/disk.config f50ab4f6-4d4c-488a-9793-0b9979a2e193_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.823s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:37 compute-1 nova_compute[226101]: 2025-12-06 07:41:37.812 226109 INFO nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Deleting local config drive /var/lib/nova/instances/f50ab4f6-4d4c-488a-9793-0b9979a2e193/disk.config because it was imported into RBD.
Dec 06 07:41:37 compute-1 kernel: tap708d0bd8-6c: entered promiscuous mode
Dec 06 07:41:37 compute-1 NetworkManager[49031]: <info>  [1765006897.8578] manager: (tap708d0bd8-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Dec 06 07:41:37 compute-1 ovn_controller[130279]: 2025-12-06T07:41:37Z|00559|binding|INFO|Claiming lport 708d0bd8-6c04-48c8-80e4-9ccd10704179 for this chassis.
Dec 06 07:41:37 compute-1 ovn_controller[130279]: 2025-12-06T07:41:37Z|00560|binding|INFO|708d0bd8-6c04-48c8-80e4-9ccd10704179: Claiming fa:16:3e:c0:83:92 10.100.0.12
Dec 06 07:41:37 compute-1 nova_compute[226101]: 2025-12-06 07:41:37.860 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:37 compute-1 systemd-udevd[280589]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:41:37 compute-1 NetworkManager[49031]: <info>  [1765006897.8964] device (tap708d0bd8-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:41:37 compute-1 NetworkManager[49031]: <info>  [1765006897.8979] device (tap708d0bd8-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:41:37 compute-1 systemd-machined[190302]: New machine qemu-65-instance-0000008d.
Dec 06 07:41:37 compute-1 systemd[1]: Started Virtual Machine qemu-65-instance-0000008d.
Dec 06 07:41:37 compute-1 nova_compute[226101]: 2025-12-06 07:41:37.944 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:37 compute-1 ovn_controller[130279]: 2025-12-06T07:41:37Z|00561|binding|INFO|Setting lport 708d0bd8-6c04-48c8-80e4-9ccd10704179 ovn-installed in OVS
Dec 06 07:41:37 compute-1 nova_compute[226101]: 2025-12-06 07:41:37.948 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:37.958 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:83:92 10.100.0.12'], port_security=['fa:16:3e:c0:83:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f50ab4f6-4d4c-488a-9793-0b9979a2e193', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a82c9b7f-b22a-4f20-92eb-ef47963df970', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a93df87f-d2df-4d3a-b692-98bba32f2fe1, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=708d0bd8-6c04-48c8-80e4-9ccd10704179) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:41:37 compute-1 ovn_controller[130279]: 2025-12-06T07:41:37Z|00562|binding|INFO|Setting lport 708d0bd8-6c04-48c8-80e4-9ccd10704179 up in Southbound
Dec 06 07:41:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:37.959 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 708d0bd8-6c04-48c8-80e4-9ccd10704179 in datapath 3c5d4817-c3d5-45fc-9890-418e779bacb2 bound to our chassis
Dec 06 07:41:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:37.960 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c5d4817-c3d5-45fc-9890-418e779bacb2
Dec 06 07:41:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:37.976 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a29c6556-d7c6-42f3-9b9c-d27d04c6145e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:37.977 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c5d4817-c1 in ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:41:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:37.981 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c5d4817-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:41:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:37.981 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8a66b3-105d-4de7-a804-05c9d6613392]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:37.982 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f873c190-c996-46bb-922d-74207b9aad38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:37.994 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[d4084ec1-013c-4635-88b8-35e9c9ecf72c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.018 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e64d0b77-7be4-4b0a-a3d4-d6200565981f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.050 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[59069016-edb1-44c1-a58d-9cb7a2858d7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.055 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7081ab2b-27ef-4fa2-a527-ff98d4e98f86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 NetworkManager[49031]: <info>  [1765006898.0572] manager: (tap3c5d4817-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/255)
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.090 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ceae60-bed2-476d-835c-e89e2e8e2c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.094 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[45fab166-e28d-4f89-b096-1cfc949fc17d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 NetworkManager[49031]: <info>  [1765006898.1236] device (tap3c5d4817-c0): carrier: link connected
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.132 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[cb14a35c-5afe-43d8-9075-5e02645e3182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/193829452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:38 compute-1 ceph-mon[81689]: pgmap v2577: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Dec 06 07:41:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/257299139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.155 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9b280ebf-9c58-431b-bc59-38a23d3b0821]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c5d4817-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:51:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717762, 'reachable_time': 34260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280632, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.171 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[86009753-086f-4c3a-b00f-a2d2dad29897]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:517c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 717762, 'tstamp': 717762}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280641, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.192 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[78a97453-7310-4116-a012-5429dd551aec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c5d4817-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:51:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717762, 'reachable_time': 34260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280643, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.202 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.221 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6262e135-f04c-4a52-9ff8-f65633a3c8c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.277 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[481f6bbb-a5db-4ccb-bb74-1720f471f4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.279 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c5d4817-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.279 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.280 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c5d4817-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.289 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:38 compute-1 NetworkManager[49031]: <info>  [1765006898.2894] manager: (tap3c5d4817-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Dec 06 07:41:38 compute-1 kernel: tap3c5d4817-c0: entered promiscuous mode
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.293 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c5d4817-c0, col_values=(('external_ids', {'iface-id': 'dc336d05-182d-42ac-ab5e-a73bf30a0662'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:38 compute-1 ovn_controller[130279]: 2025-12-06T07:41:38Z|00563|binding|INFO|Releasing lport dc336d05-182d-42ac-ab5e-a73bf30a0662 from this chassis (sb_readonly=0)
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.295 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.296 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.297 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[272e24ae-239e-4881-9e8e-cae333526c9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.297 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-3c5d4817-c3d5-45fc-9890-418e779bacb2
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 3c5d4817-c3d5-45fc-9890-418e779bacb2
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:41:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:38.298 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'env', 'PROCESS_TAG=haproxy-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c5d4817-c3d5-45fc-9890-418e779bacb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.307 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.394 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006898.3936906, f50ab4f6-4d4c-488a-9793-0b9979a2e193 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.394 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] VM Started (Lifecycle Event)
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.412 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.416 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006898.39659, f50ab4f6-4d4c-488a-9793-0b9979a2e193 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.416 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] VM Paused (Lifecycle Event)
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.449 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.453 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.472 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:41:38 compute-1 podman[280701]: 2025-12-06 07:41:38.662681399 +0000 UTC m=+0.051295652 container create 63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:41:38 compute-1 systemd[1]: Started libpod-conmon-63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4.scope.
Dec 06 07:41:38 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:41:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777a26551b0868ebd958341be08824a954918b440fd2343dd4950b6beac338d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:41:38 compute-1 podman[280701]: 2025-12-06 07:41:38.63534801 +0000 UTC m=+0.023962283 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:41:38 compute-1 podman[280701]: 2025-12-06 07:41:38.737831571 +0000 UTC m=+0.126445844 container init 63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:41:38 compute-1 podman[280701]: 2025-12-06 07:41:38.742846582 +0000 UTC m=+0.131460835 container start 63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:41:38 compute-1 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[280716]: [NOTICE]   (280720) : New worker (280722) forked
Dec 06 07:41:38 compute-1 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[280716]: [NOTICE]   (280720) : Loading success.
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.787 226109 DEBUG nova.compute.manager [req-167b52cf-b3a5-474c-8a80-6b1db665c415 req-1c7280e4-4c0e-4acd-9696-e007041ed849 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Received event network-vif-plugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.787 226109 DEBUG oslo_concurrency.lockutils [req-167b52cf-b3a5-474c-8a80-6b1db665c415 req-1c7280e4-4c0e-4acd-9696-e007041ed849 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.787 226109 DEBUG oslo_concurrency.lockutils [req-167b52cf-b3a5-474c-8a80-6b1db665c415 req-1c7280e4-4c0e-4acd-9696-e007041ed849 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.788 226109 DEBUG oslo_concurrency.lockutils [req-167b52cf-b3a5-474c-8a80-6b1db665c415 req-1c7280e4-4c0e-4acd-9696-e007041ed849 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.788 226109 DEBUG nova.compute.manager [req-167b52cf-b3a5-474c-8a80-6b1db665c415 req-1c7280e4-4c0e-4acd-9696-e007041ed849 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Processing event network-vif-plugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.788 226109 DEBUG nova.compute.manager [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.791 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006898.791629, f50ab4f6-4d4c-488a-9793-0b9979a2e193 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.792 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] VM Resumed (Lifecycle Event)
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.793 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.795 226109 INFO nova.virt.libvirt.driver [-] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Instance spawned successfully.
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.796 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.813 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.817 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.817 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.818 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.818 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.818 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.819 226109 DEBUG nova.virt.libvirt.driver [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.824 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.869 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.896 226109 INFO nova.compute.manager [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Took 14.95 seconds to spawn the instance on the hypervisor.
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.897 226109 DEBUG nova.compute.manager [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.967 226109 INFO nova.compute.manager [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Took 16.17 seconds to build instance.
Dec 06 07:41:38 compute-1 nova_compute[226101]: 2025-12-06 07:41:38.989 226109 DEBUG oslo_concurrency.lockutils [None req-5386fe60-1390-4616-ad09-302c78ba5b4e 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:39 compute-1 ceph-mon[81689]: pgmap v2578: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Dec 06 07:41:39 compute-1 ceph-mon[81689]: pgmap v2579: 305 pgs: 305 active+clean; 467 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 301 KiB/s rd, 1.7 MiB/s wr, 75 op/s
Dec 06 07:41:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:41:39.156 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:39.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:39.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2344964929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/523603762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:40 compute-1 nova_compute[226101]: 2025-12-06 07:41:40.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:40 compute-1 nova_compute[226101]: 2025-12-06 07:41:40.892 226109 DEBUG nova.compute.manager [req-7dd6bdd3-b314-4dc1-b084-0ec5dcdeb550 req-15985f02-e44e-4583-8d82-912a3196cc79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Received event network-vif-plugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:40 compute-1 nova_compute[226101]: 2025-12-06 07:41:40.893 226109 DEBUG oslo_concurrency.lockutils [req-7dd6bdd3-b314-4dc1-b084-0ec5dcdeb550 req-15985f02-e44e-4583-8d82-912a3196cc79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:40 compute-1 nova_compute[226101]: 2025-12-06 07:41:40.893 226109 DEBUG oslo_concurrency.lockutils [req-7dd6bdd3-b314-4dc1-b084-0ec5dcdeb550 req-15985f02-e44e-4583-8d82-912a3196cc79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:40 compute-1 nova_compute[226101]: 2025-12-06 07:41:40.893 226109 DEBUG oslo_concurrency.lockutils [req-7dd6bdd3-b314-4dc1-b084-0ec5dcdeb550 req-15985f02-e44e-4583-8d82-912a3196cc79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:40 compute-1 nova_compute[226101]: 2025-12-06 07:41:40.894 226109 DEBUG nova.compute.manager [req-7dd6bdd3-b314-4dc1-b084-0ec5dcdeb550 req-15985f02-e44e-4583-8d82-912a3196cc79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] No waiting events found dispatching network-vif-plugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:41:40 compute-1 nova_compute[226101]: 2025-12-06 07:41:40.894 226109 WARNING nova.compute.manager [req-7dd6bdd3-b314-4dc1-b084-0ec5dcdeb550 req-15985f02-e44e-4583-8d82-912a3196cc79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Received unexpected event network-vif-plugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 for instance with vm_state active and task_state None.
Dec 06 07:41:41 compute-1 ceph-mon[81689]: pgmap v2580: 305 pgs: 305 active+clean; 506 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 123 op/s
Dec 06 07:41:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:41.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:41.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:41 compute-1 NetworkManager[49031]: <info>  [1765006901.8484] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Dec 06 07:41:41 compute-1 NetworkManager[49031]: <info>  [1765006901.8497] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Dec 06 07:41:41 compute-1 nova_compute[226101]: 2025-12-06 07:41:41.850 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:41 compute-1 nova_compute[226101]: 2025-12-06 07:41:41.880 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:41 compute-1 ovn_controller[130279]: 2025-12-06T07:41:41Z|00564|binding|INFO|Releasing lport dc336d05-182d-42ac-ab5e-a73bf30a0662 from this chassis (sb_readonly=0)
Dec 06 07:41:41 compute-1 ovn_controller[130279]: 2025-12-06T07:41:41Z|00565|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec 06 07:41:41 compute-1 nova_compute[226101]: 2025-12-06 07:41:41.893 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4180631999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3433887939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:42 compute-1 nova_compute[226101]: 2025-12-06 07:41:42.395 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:43 compute-1 nova_compute[226101]: 2025-12-06 07:41:43.007 226109 DEBUG nova.compute.manager [req-ecc9cf6c-5153-4370-9720-d9086581bdf8 req-1008314e-bc75-47d2-b3e3-e2197d46d9ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Received event network-changed-708d0bd8-6c04-48c8-80e4-9ccd10704179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:43 compute-1 nova_compute[226101]: 2025-12-06 07:41:43.007 226109 DEBUG nova.compute.manager [req-ecc9cf6c-5153-4370-9720-d9086581bdf8 req-1008314e-bc75-47d2-b3e3-e2197d46d9ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Refreshing instance network info cache due to event network-changed-708d0bd8-6c04-48c8-80e4-9ccd10704179. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:41:43 compute-1 nova_compute[226101]: 2025-12-06 07:41:43.008 226109 DEBUG oslo_concurrency.lockutils [req-ecc9cf6c-5153-4370-9720-d9086581bdf8 req-1008314e-bc75-47d2-b3e3-e2197d46d9ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:41:43 compute-1 nova_compute[226101]: 2025-12-06 07:41:43.008 226109 DEBUG oslo_concurrency.lockutils [req-ecc9cf6c-5153-4370-9720-d9086581bdf8 req-1008314e-bc75-47d2-b3e3-e2197d46d9ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:41:43 compute-1 nova_compute[226101]: 2025-12-06 07:41:43.008 226109 DEBUG nova.network.neutron [req-ecc9cf6c-5153-4370-9720-d9086581bdf8 req-1008314e-bc75-47d2-b3e3-e2197d46d9ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Refreshing network info cache for port 708d0bd8-6c04-48c8-80e4-9ccd10704179 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:41:43 compute-1 ceph-mon[81689]: pgmap v2581: 305 pgs: 305 active+clean; 506 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 80 op/s
Dec 06 07:41:43 compute-1 nova_compute[226101]: 2025-12-06 07:41:43.205 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:43.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:41:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:43.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:41:45 compute-1 nova_compute[226101]: 2025-12-06 07:41:45.225 226109 DEBUG nova.network.neutron [req-ecc9cf6c-5153-4370-9720-d9086581bdf8 req-1008314e-bc75-47d2-b3e3-e2197d46d9ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Updated VIF entry in instance network info cache for port 708d0bd8-6c04-48c8-80e4-9ccd10704179. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:41:45 compute-1 nova_compute[226101]: 2025-12-06 07:41:45.226 226109 DEBUG nova.network.neutron [req-ecc9cf6c-5153-4370-9720-d9086581bdf8 req-1008314e-bc75-47d2-b3e3-e2197d46d9ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Updating instance_info_cache with network_info: [{"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:41:45 compute-1 nova_compute[226101]: 2025-12-06 07:41:45.244 226109 DEBUG oslo_concurrency.lockutils [req-ecc9cf6c-5153-4370-9720-d9086581bdf8 req-1008314e-bc75-47d2-b3e3-e2197d46d9ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:41:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:45.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:41:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:45.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:41:46 compute-1 nova_compute[226101]: 2025-12-06 07:41:46.347 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:47 compute-1 ceph-mon[81689]: pgmap v2582: 305 pgs: 305 active+clean; 538 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Dec 06 07:41:47 compute-1 nova_compute[226101]: 2025-12-06 07:41:47.398 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:41:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:47.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:41:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:47.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:47 compute-1 ceph-mon[81689]: pgmap v2583: 305 pgs: 305 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 5.7 MiB/s wr, 248 op/s
Dec 06 07:41:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e325 e325: 3 total, 3 up, 3 in
Dec 06 07:41:48 compute-1 nova_compute[226101]: 2025-12-06 07:41:48.207 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:49.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:49.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:49 compute-1 ceph-mon[81689]: osdmap e325: 3 total, 3 up, 3 in
Dec 06 07:41:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:50 compute-1 nova_compute[226101]: 2025-12-06 07:41:50.573 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:50 compute-1 ceph-mon[81689]: pgmap v2585: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 560 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.0 MiB/s wr, 345 op/s
Dec 06 07:41:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:51.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:51.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:52 compute-1 nova_compute[226101]: 2025-12-06 07:41:52.400 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:53 compute-1 nova_compute[226101]: 2025-12-06 07:41:53.208 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1112513091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:53 compute-1 ceph-mon[81689]: pgmap v2586: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 4.0 MiB/s wr, 350 op/s
Dec 06 07:41:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:41:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:53.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:41:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:41:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:53.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:41:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e326 e326: 3 total, 3 up, 3 in
Dec 06 07:41:55 compute-1 ovn_controller[130279]: 2025-12-06T07:41:55Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:83:92 10.100.0.12
Dec 06 07:41:55 compute-1 ovn_controller[130279]: 2025-12-06T07:41:55Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:83:92 10.100.0.12
Dec 06 07:41:55 compute-1 ceph-mon[81689]: pgmap v2587: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 4.0 MiB/s wr, 350 op/s
Dec 06 07:41:55 compute-1 ceph-mon[81689]: osdmap e326: 3 total, 3 up, 3 in
Dec 06 07:41:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:55.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:41:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:55.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:41:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:56 compute-1 ceph-mon[81689]: pgmap v2589: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.9 MiB/s wr, 218 op/s
Dec 06 07:41:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2011568115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:57 compute-1 nova_compute[226101]: 2025-12-06 07:41:57.402 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:41:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:57.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:41:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:41:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:57.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:41:58 compute-1 nova_compute[226101]: 2025-12-06 07:41:58.211 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/26752850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:58 compute-1 ceph-mon[81689]: pgmap v2590: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 226 op/s
Dec 06 07:41:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:59.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:41:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:41:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:59.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:42:00 compute-1 podman[280734]: 2025-12-06 07:42:00.088544874 +0000 UTC m=+0.063926916 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:42:00 compute-1 podman[280733]: 2025-12-06 07:42:00.120449496 +0000 UTC m=+0.099883095 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 07:42:00 compute-1 podman[280735]: 2025-12-06 07:42:00.139157148 +0000 UTC m=+0.105465200 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:42:00 compute-1 sudo[280792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:00 compute-1 sudo[280792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:00 compute-1 sudo[280792]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e327 e327: 3 total, 3 up, 3 in
Dec 06 07:42:00 compute-1 sudo[280824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:42:00 compute-1 sudo[280824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:00 compute-1 sudo[280824]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:00 compute-1 sudo[280849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:00 compute-1 sudo[280849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:00 compute-1 sudo[280849]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:00 compute-1 sudo[280874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 07:42:00 compute-1 sudo[280874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:00 compute-1 ceph-mon[81689]: pgmap v2591: 305 pgs: 305 active+clean; 473 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 183 op/s
Dec 06 07:42:00 compute-1 sudo[280874]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:00 compute-1 sudo[280919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:00 compute-1 sudo[280919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:00 compute-1 sudo[280919]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:00 compute-1 sudo[280944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:42:01 compute-1 sudo[280944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:01 compute-1 sudo[280944]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:01 compute-1 sudo[280969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:01 compute-1 sudo[280969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:01 compute-1 sudo[280969]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:01 compute-1 sudo[280994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:42:01 compute-1 sudo[280994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:01 compute-1 ceph-mon[81689]: osdmap e327: 3 total, 3 up, 3 in
Dec 06 07:42:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:01 compute-1 ceph-mon[81689]: pgmap v2593: 305 pgs: 305 active+clean; 533 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 610 KiB/s rd, 6.7 MiB/s wr, 133 op/s
Dec 06 07:42:01 compute-1 sudo[280994]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:01.660 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:01.662 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:01.663 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:01.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:01.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3794994888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:42:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:42:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:42:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:42:02 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:42:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/820102900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1280951404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/41133090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:02 compute-1 nova_compute[226101]: 2025-12-06 07:42:02.404 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:03 compute-1 nova_compute[226101]: 2025-12-06 07:42:03.272 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:03 compute-1 ceph-mon[81689]: pgmap v2594: 305 pgs: 305 active+clean; 533 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 515 KiB/s rd, 4.3 MiB/s wr, 90 op/s
Dec 06 07:42:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:03.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:42:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:03.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:42:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 e328: 3 total, 3 up, 3 in
Dec 06 07:42:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:05.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:42:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:05.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:42:05 compute-1 ceph-mon[81689]: osdmap e328: 3 total, 3 up, 3 in
Dec 06 07:42:05 compute-1 ceph-mon[81689]: pgmap v2596: 305 pgs: 305 active+clean; 535 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 748 KiB/s rd, 4.6 MiB/s wr, 133 op/s
Dec 06 07:42:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:07 compute-1 nova_compute[226101]: 2025-12-06 07:42:07.408 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:07 compute-1 nova_compute[226101]: 2025-12-06 07:42:07.443 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "5a5fdc2a-bc74-432d-affc-9a57f442105e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:07 compute-1 nova_compute[226101]: 2025-12-06 07:42:07.444 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:07 compute-1 nova_compute[226101]: 2025-12-06 07:42:07.462 226109 DEBUG nova.compute.manager [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:42:07 compute-1 nova_compute[226101]: 2025-12-06 07:42:07.596 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:07 compute-1 nova_compute[226101]: 2025-12-06 07:42:07.597 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:07 compute-1 nova_compute[226101]: 2025-12-06 07:42:07.604 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:42:07 compute-1 nova_compute[226101]: 2025-12-06 07:42:07.605 226109 INFO nova.compute.claims [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:42:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:42:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:07.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:42:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:07.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:07 compute-1 nova_compute[226101]: 2025-12-06 07:42:07.779 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:08 compute-1 nova_compute[226101]: 2025-12-06 07:42:08.274 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:08 compute-1 ceph-mon[81689]: pgmap v2597: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.5 MiB/s wr, 302 op/s
Dec 06 07:42:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:09.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:09.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:09 compute-1 sudo[281070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:09 compute-1 sudo[281070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:09 compute-1 sudo[281070]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:10 compute-1 sudo[281095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:42:10 compute-1 sudo[281095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:10 compute-1 sudo[281095]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:42:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/150849439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:10 compute-1 nova_compute[226101]: 2025-12-06 07:42:10.573 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.794s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:10 compute-1 nova_compute[226101]: 2025-12-06 07:42:10.581 226109 DEBUG nova.compute.provider_tree [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:42:10 compute-1 nova_compute[226101]: 2025-12-06 07:42:10.669 226109 DEBUG nova.scheduler.client.report [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:42:10 compute-1 ceph-mon[81689]: pgmap v2598: 305 pgs: 305 active+clean; 531 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.7 MiB/s wr, 295 op/s
Dec 06 07:42:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3429869309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:42:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3429869309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:42:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:10 compute-1 nova_compute[226101]: 2025-12-06 07:42:10.997 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:10 compute-1 nova_compute[226101]: 2025-12-06 07:42:10.998 226109 DEBUG nova.compute.manager [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.038 226109 DEBUG nova.compute.manager [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.039 226109 DEBUG nova.network.neutron [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.066 226109 INFO nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.083 226109 DEBUG nova.compute.manager [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.187 226109 DEBUG nova.compute.manager [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.190 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.191 226109 INFO nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Creating image(s)
Dec 06 07:42:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.492 226109 DEBUG nova.storage.rbd_utils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.530 226109 DEBUG nova.storage.rbd_utils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.558 226109 DEBUG nova.storage.rbd_utils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.563 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.635 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.637 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.637 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.638 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.666 226109 DEBUG nova.storage.rbd_utils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:11 compute-1 nova_compute[226101]: 2025-12-06 07:42:11.671 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
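
The sequence above is the image-backed spawn path for a Ceph root disk: Nova first confirms the instance disk does not already exist in the vms pool, probes the cached base image with qemu-img (wrapped in oslo_concurrency.prlimit to cap the child at 1 GiB of address space and 30 s of CPU), then imports the base file into RBD. A minimal sketch of the same external-command sequence, using only the commands copied from the log (not Nova's actual implementation):

    import json
    import subprocess

    BASE = "/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef"
    DISK = "5a5fdc2a-bc74-432d-affc-9a57f442105e_disk"

    # Probe the cached base image; --force-share permits reading an
    # image that may be open elsewhere.
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", BASE, "--force-share", "--output=json"]))
    print(info["virtual-size"])

    # Import the base file as a format-2 RBD image into the "vms" pool,
    # authenticating as the "openstack" Ceph client, as in the log.
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", BASE, DISK,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
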
Dec 06 07:42:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:42:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:11.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:42:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:42:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:11.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:42:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/150849439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:11 compute-1 ceph-mon[81689]: pgmap v2599: 305 pgs: 305 active+clean; 483 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 316 op/s
Dec 06 07:42:12 compute-1 nova_compute[226101]: 2025-12-06 07:42:12.136 226109 DEBUG nova.policy [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d8b62a3276f4a8b8349af67b82134c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eff1f6a1654b45079de20eddb830e76d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:42:12 compute-1 nova_compute[226101]: 2025-12-06 07:42:12.339 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:12 compute-1 nova_compute[226101]: 2025-12-06 07:42:12.418 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:12 compute-1 nova_compute[226101]: 2025-12-06 07:42:12.428 226109 DEBUG nova.storage.rbd_utils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] resizing rbd image 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
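
The resize above grows the freshly imported image to 1073741824 bytes, i.e. the 1 GiB root disk of the m1.nano flavor (root_gb=1) logged further down. A sketch of the same operation using the python-rbd bindings instead of the rbd CLI (connection parameters copied from the log):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx,
                           "5a5fdc2a-bc74-432d-affc-9a57f442105e_disk") as img:
                img.resize(1 * 1024 ** 3)  # 1073741824 bytes = 1 GiB
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
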
Dec 06 07:42:13 compute-1 nova_compute[226101]: 2025-12-06 07:42:13.276 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:13 compute-1 nova_compute[226101]: 2025-12-06 07:42:13.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:13 compute-1 nova_compute[226101]: 2025-12-06 07:42:13.606 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:13 compute-1 nova_compute[226101]: 2025-12-06 07:42:13.607 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:13 compute-1 nova_compute[226101]: 2025-12-06 07:42:13.607 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:13 compute-1 nova_compute[226101]: 2025-12-06 07:42:13.607 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:42:13 compute-1 nova_compute[226101]: 2025-12-06 07:42:13.608 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:13.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:13.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:14 compute-1 nova_compute[226101]: 2025-12-06 07:42:14.689 226109 DEBUG nova.network.neutron [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Successfully created port: 9e158df5-da21-4989-b7ce-1f2ae62d5a2f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:42:14 compute-1 ceph-mon[81689]: pgmap v2600: 305 pgs: 305 active+clean; 483 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 316 op/s
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.045 226109 DEBUG nova.objects.instance [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'migration_context' on Instance uuid 5a5fdc2a-bc74-432d-affc-9a57f442105e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.059 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.060 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Ensure instance console log exists: /var/lib/nova/instances/5a5fdc2a-bc74-432d-affc-9a57f442105e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.060 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.061 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.061 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:42:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/79433732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.186 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.260 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.260 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.263 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.263 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.448 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.450 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3991MB free_disk=20.79305648803711GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.450 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.450 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.537 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance d954d607-525c-4edf-ab9e-56658dd2525a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.537 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance f50ab4f6-4d4c-488a-9793-0b9979a2e193 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.537 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 5a5fdc2a-bc74-432d-affc-9a57f442105e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.537 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.538 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:42:15 compute-1 nova_compute[226101]: 2025-12-06 07:42:15.604 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:15.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:42:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:15.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:42:16 compute-1 nova_compute[226101]: 2025-12-06 07:42:16.085 226109 DEBUG nova.network.neutron [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Successfully updated port: 9e158df5-da21-4989-b7ce-1f2ae62d5a2f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:42:16 compute-1 nova_compute[226101]: 2025-12-06 07:42:16.107 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "refresh_cache-5a5fdc2a-bc74-432d-affc-9a57f442105e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:42:16 compute-1 nova_compute[226101]: 2025-12-06 07:42:16.107 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquired lock "refresh_cache-5a5fdc2a-bc74-432d-affc-9a57f442105e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:42:16 compute-1 nova_compute[226101]: 2025-12-06 07:42:16.108 226109 DEBUG nova.network.neutron [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:42:16 compute-1 nova_compute[226101]: 2025-12-06 07:42:16.193 226109 DEBUG nova.compute.manager [req-c0c4c9a8-ae47-4678-b70e-f3492e9e1853 req-8ad2ed35-0a6f-45f7-83c4-acf567df1415 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Received event network-changed-9e158df5-da21-4989-b7ce-1f2ae62d5a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:16 compute-1 nova_compute[226101]: 2025-12-06 07:42:16.193 226109 DEBUG nova.compute.manager [req-c0c4c9a8-ae47-4678-b70e-f3492e9e1853 req-8ad2ed35-0a6f-45f7-83c4-acf567df1415 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Refreshing instance network info cache due to event network-changed-9e158df5-da21-4989-b7ce-1f2ae62d5a2f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:42:16 compute-1 nova_compute[226101]: 2025-12-06 07:42:16.193 226109 DEBUG oslo_concurrency.lockutils [req-c0c4c9a8-ae47-4678-b70e-f3492e9e1853 req-8ad2ed35-0a6f-45f7-83c4-acf567df1415 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5a5fdc2a-bc74-432d-affc-9a57f442105e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:42:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:16 compute-1 ceph-mon[81689]: pgmap v2601: 305 pgs: 305 active+clean; 461 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.3 MiB/s wr, 273 op/s
Dec 06 07:42:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/79433732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.041 226109 DEBUG nova.network.neutron [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:42:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:42:17 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/757135810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.245 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.251 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.267 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
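
The inventory above determines what the scheduler can place on this host: usable capacity per resource class is (total - reserved) * allocation_ratio, truncated to an integer. A worked example with the exact numbers from the log line:

    # Capacity implied by the logged inventory:
    #   VCPU      (8 - 0)      * 4.0 = 32
    #   MEMORY_MB (7680 - 512) * 1.0 = 7168
    #   DISK_GB   (20 - 1)     * 0.9 = 17
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, capacity)

This is consistent with the final resource view above: 3 of the 32 schedulable vCPUs are allocated.
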
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.301 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.301 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.373 226109 DEBUG oslo_concurrency.lockutils [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.374 226109 DEBUG oslo_concurrency.lockutils [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.393 226109 DEBUG nova.objects.instance [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'flavor' on Instance uuid f50ab4f6-4d4c-488a-9793-0b9979a2e193 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.412 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.428 226109 DEBUG oslo_concurrency.lockutils [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.685 226109 DEBUG oslo_concurrency.lockutils [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.686 226109 DEBUG oslo_concurrency.lockutils [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.686 226109 INFO nova.compute.manager [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Attaching volume df0e5c71-5906-44fa-bff0-b66876c47ac5 to /dev/vdb
Dec 06 07:42:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:17.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:42:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:17.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.877 226109 DEBUG os_brick.utils [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.878 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.890 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.891 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0cd779-8d62-4bff-b2cd-dc9221f4c0ac]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.893 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.901 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.901 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[520473db-8425-481e-98d1-14eb8d4e2b3c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.903 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.912 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.912 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[0256db90-446a-4845-90b5-41d7a1172f63]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.913 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4fbb59-5971-4927-99a1-c1fefece4fef]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.914 226109 DEBUG oslo_concurrency.processutils [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.949 226109 DEBUG oslo_concurrency.processutils [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.951 226109 DEBUG os_brick.initiator.connectors.lightos [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.952 226109 DEBUG os_brick.initiator.connectors.lightos [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.952 226109 DEBUG os_brick.initiator.connectors.lightos [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.952 226109 DEBUG os_brick.utils [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
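
The os_brick probe traced above (multipathd status, the iSCSI initiator name, findmnt, nvme, the LightOS discovery-client check) assembles the connector-properties dict that Nova passes to Cinder so the storage backend can export the volume to this host. A sketch of the entry point, with the argument values taken from the "==> get_connector_properties" call trace:

    # Sketch of the os_brick call traced above; argument values are the
    # ones shown in the log's call trace.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.101",
        multipath=True,
        enforce_multipath=True,
        host="compute-1.ctlplane.example.com")
    # props carries 'initiator', 'nqn', 'nvme_hostid', ... as logged.
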
Dec 06 07:42:17 compute-1 nova_compute[226101]: 2025-12-06 07:42:17.953 226109 DEBUG nova.virt.block_device [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Updating existing volume attachment record: 4fee68e0-59a5-4e0f-9dd0-bfc35724e881 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:42:17 compute-1 ceph-mon[81689]: pgmap v2602: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.4 MiB/s wr, 269 op/s
Dec 06 07:42:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/757135810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.316 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.682 226109 DEBUG nova.network.neutron [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Updating instance_info_cache with network_info: [{"id": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "address": "fa:16:3e:29:dc:0f", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e158df5-da", "ovs_interfaceid": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.716 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Releasing lock "refresh_cache-5a5fdc2a-bc74-432d-affc-9a57f442105e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.717 226109 DEBUG nova.compute.manager [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Instance network_info: |[{"id": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "address": "fa:16:3e:29:dc:0f", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e158df5-da", "ovs_interfaceid": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.717 226109 DEBUG oslo_concurrency.lockutils [req-c0c4c9a8-ae47-4678-b70e-f3492e9e1853 req-8ad2ed35-0a6f-45f7-83c4-acf567df1415 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5a5fdc2a-bc74-432d-affc-9a57f442105e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.717 226109 DEBUG nova.network.neutron [req-c0c4c9a8-ae47-4678-b70e-f3492e9e1853 req-8ad2ed35-0a6f-45f7-83c4-acf567df1415 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Refreshing network info cache for port 9e158df5-da21-4989-b7ce-1f2ae62d5a2f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.722 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Start _get_guest_xml network_info=[{"id": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "address": "fa:16:3e:29:dc:0f", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e158df5-da", "ovs_interfaceid": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.728 226109 WARNING nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.734 226109 DEBUG nova.virt.libvirt.host [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.735 226109 DEBUG nova.virt.libvirt.host [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.741 226109 DEBUG nova.virt.libvirt.host [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.742 226109 DEBUG nova.virt.libvirt.host [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.743 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.744 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.744 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.744 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.745 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.745 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.745 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.746 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.746 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.746 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.747 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.747 226109 DEBUG nova.virt.hardware [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:42:18 compute-1 nova_compute[226101]: 2025-12-06 07:42:18.752 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3917993197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.215 226109 DEBUG nova.objects.instance [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'flavor' on Instance uuid f50ab4f6-4d4c-488a-9793-0b9979a2e193 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.236 226109 DEBUG nova.virt.libvirt.driver [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Attempting to attach volume df0e5c71-5906-44fa-bff0-b66876c47ac5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.238 226109 DEBUG nova.virt.libvirt.guest [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-df0e5c71-5906-44fa-bff0-b66876c47ac5">
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   </source>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <auth username="openstack">
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   </auth>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <serial>df0e5c71-5906-44fa-bff0-b66876c47ac5</serial>
Dec 06 07:42:19 compute-1 nova_compute[226101]: </disk>
Dec 06 07:42:19 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
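
The XML block above is a libvirt "network" disk backed by RBD with cephx auth; nova.virt.libvirt.guest.attach_device hands it to libvirt so the disk is hot-plugged into the running guest and persisted in its configuration. A minimal sketch with the libvirt python bindings, assuming a domain name (the libvirt name of instance f50ab4f6-4d4c-488a-9793-0b9979a2e193 is not shown in this extract) and trimming the mon host list to one entry:

    import libvirt

    # Disk XML copied from the log above, shortened to a single mon host.
    disk_xml = """
    <disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd"
              name="volumes/volume-df0e5c71-5906-44fa-bff0-b66876c47ac5">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <auth username="openstack">
        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
      </auth>
      <target dev="vdb" bus="virtio"/>
    </disk>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000008d")  # hypothetical domain name
    # Attach to the running guest and to its persistent config.
    dom.attachDeviceFlags(
        disk_xml,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
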
Dec 06 07:42:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:42:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/363974003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.515 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.542 226109 DEBUG nova.storage.rbd_utils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.546 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:42:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:19.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:42:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:19.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:42:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2820750602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.962 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.964 226109 DEBUG nova.virt.libvirt.vif [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:42:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-309022851',display_name='tempest-ServersTestJSON-server-309022851',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-309022851',id=145,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-s2zoillg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:42:11Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=5a5fdc2a-bc74-432d-affc-9a57f442105e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "address": "fa:16:3e:29:dc:0f", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e158df5-da", "ovs_interfaceid": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.964 226109 DEBUG nova.network.os_vif_util [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "address": "fa:16:3e:29:dc:0f", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e158df5-da", "ovs_interfaceid": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.965 226109 DEBUG nova.network.os_vif_util [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:dc:0f,bridge_name='br-int',has_traffic_filtering=True,id=9e158df5-da21-4989-b7ce-1f2ae62d5a2f,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e158df5-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
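nova_to_osvif_vif has now turned the Neutron VIF dict into an os-vif VIFOpenVSwitch. A rough sketch of building the equivalent object by hand with the os-vif object model — field values are copied from the converted object logged above, while the bootstrap call and constructor shape are assumptions about the os-vif library:

    # Hedged sketch: construct the VIFOpenVSwitch shown in the log.
    # That os_vif.initialize() loads plugins and registers the versioned
    # objects is an assumption about the library's bootstrap.
    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()
    vif = vif_obj.VIFOpenVSwitch(
        id='9e158df5-da21-4989-b7ce-1f2ae62d5a2f',
        address='fa:16:3e:29:dc:0f',
        bridge_name='br-int',
        vif_name='tap9e158df5-da',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='9e158df5-da21-4989-b7ce-1f2ae62d5a2f'))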
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.966 226109 DEBUG nova.objects.instance [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a5fdc2a-bc74-432d-affc-9a57f442105e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.982 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <uuid>5a5fdc2a-bc74-432d-affc-9a57f442105e</uuid>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <name>instance-00000091</name>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <nova:name>tempest-ServersTestJSON-server-309022851</nova:name>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:42:18</nova:creationTime>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <nova:user uuid="0d8b62a3276f4a8b8349af67b82134c8">tempest-ServersTestJSON-374151197-project-member</nova:user>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <nova:project uuid="eff1f6a1654b45079de20eddb830e76d">tempest-ServersTestJSON-374151197</nova:project>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <nova:port uuid="9e158df5-da21-4989-b7ce-1f2ae62d5a2f">
Dec 06 07:42:19 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <system>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <entry name="serial">5a5fdc2a-bc74-432d-affc-9a57f442105e</entry>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <entry name="uuid">5a5fdc2a-bc74-432d-affc-9a57f442105e</entry>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     </system>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <os>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   </os>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <features>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   </features>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/5a5fdc2a-bc74-432d-affc-9a57f442105e_disk">
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       </source>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/5a5fdc2a-bc74-432d-affc-9a57f442105e_disk.config">
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       </source>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:42:19 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:29:dc:0f"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <target dev="tap9e158df5-da"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/5a5fdc2a-bc74-432d-affc-9a57f442105e/console.log" append="off"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <video>
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     </video>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:42:19 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:42:19 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:42:19 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:42:19 compute-1 nova_compute[226101]: </domain>
Dec 06 07:42:19 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
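The XML above is exactly what nova hands to libvirt for this instance. When triaging such a dump it is usually enough to pull out the disk sources and interface targets; a stdlib-only sketch, where 'dom.xml' is a hypothetical file holding the <domain> element copied out of the journal:

    # Extract RBD disk sources and tap devices from a nova domain XML dump.
    import xml.etree.ElementTree as ET

    root = ET.parse('dom.xml').getroot()
    for disk in root.findall('./devices/disk'):
        src = disk.find('source')
        print(disk.get('device'), src.get('protocol'), src.get('name'))
    for iface in root.findall('./devices/interface'):
        print(iface.get('type'), iface.find('target').get('dev'),
              iface.find('mac').get('address'))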
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.983 226109 DEBUG nova.compute.manager [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Preparing to wait for external event network-vif-plugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.984 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.984 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.985 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.986 226109 DEBUG nova.virt.libvirt.vif [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:42:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-309022851',display_name='tempest-ServersTestJSON-server-309022851',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-309022851',id=145,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-s2zoillg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:42:11Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=5a5fdc2a-bc74-432d-affc-9a57f442105e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "address": "fa:16:3e:29:dc:0f", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e158df5-da", "ovs_interfaceid": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.986 226109 DEBUG nova.network.os_vif_util [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "address": "fa:16:3e:29:dc:0f", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e158df5-da", "ovs_interfaceid": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.987 226109 DEBUG nova.network.os_vif_util [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:dc:0f,bridge_name='br-int',has_traffic_filtering=True,id=9e158df5-da21-4989-b7ce-1f2ae62d5a2f,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e158df5-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.987 226109 DEBUG os_vif [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:dc:0f,bridge_name='br-int',has_traffic_filtering=True,id=9e158df5-da21-4989-b7ce-1f2ae62d5a2f,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e158df5-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.991 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.992 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.993 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.996 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.996 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e158df5-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.997 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e158df5-da, col_values=(('external_ids', {'iface-id': '9e158df5-da21-4989-b7ce-1f2ae62d5a2f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:dc:0f', 'vm-uuid': '5a5fdc2a-bc74-432d-affc-9a57f442105e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:19 compute-1 nova_compute[226101]: 2025-12-06 07:42:19.998 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:19 compute-1 NetworkManager[49031]: <info>  [1765006939.9995] manager: (tap9e158df5-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.000 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.004 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.005 226109 INFO os_vif [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:dc:0f,bridge_name='br-int',has_traffic_filtering=True,id=9e158df5-da21-4989-b7ce-1f2ae62d5a2f,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e158df5-da')
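The three ovsdbapp commands logged above (AddBridgeCommand, AddPortCommand, DbSetCommand) are what "plugging" amounts to on the OVS side. A hedged sketch of issuing the same transaction directly, assuming ovsdbapp's Open_vSwitch schema API and the local ovsdb-server at tcp:127.0.0.1:6640 that appears later in this log:

    # Replay the port-plug transaction; all values are copied from the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap9e158df5-da', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap9e158df5-da',
            ('external_ids', {
                'iface-id': '9e158df5-da21-4989-b7ce-1f2ae62d5a2f',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:29:dc:0f',
                'vm-uuid': '5a5fdc2a-bc74-432d-affc-9a57f442105e'})))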
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.078 226109 DEBUG nova.virt.libvirt.driver [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.079 226109 DEBUG nova.virt.libvirt.driver [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.079 226109 DEBUG nova.virt.libvirt.driver [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.079 226109 DEBUG nova.virt.libvirt.driver [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No VIF found with MAC fa:16:3e:c0:83:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.181 226109 DEBUG nova.network.neutron [req-c0c4c9a8-ae47-4678-b70e-f3492e9e1853 req-8ad2ed35-0a6f-45f7-83c4-acf567df1415 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Updated VIF entry in instance network info cache for port 9e158df5-da21-4989-b7ce-1f2ae62d5a2f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.182 226109 DEBUG nova.network.neutron [req-c0c4c9a8-ae47-4678-b70e-f3492e9e1853 req-8ad2ed35-0a6f-45f7-83c4-acf567df1415 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Updating instance_info_cache with network_info: [{"id": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "address": "fa:16:3e:29:dc:0f", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e158df5-da", "ovs_interfaceid": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.207 226109 DEBUG oslo_concurrency.lockutils [req-c0c4c9a8-ae47-4678-b70e-f3492e9e1853 req-8ad2ed35-0a6f-45f7-83c4-acf567df1415 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5a5fdc2a-bc74-432d-affc-9a57f442105e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
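The network_info payload cached above is plain JSON once the log prefix is stripped. A quick sketch for pulling the fixed IPs out of such a blob, where 'nw_info.json' is a hypothetical file holding the [...] payload from the cache-update line:

    # List port id, address and type for every fixed IP in a network_info blob.
    import json

    nw_info = json.load(open('nw_info.json'))
    for vif in nw_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'], ip['type'])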
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.288 226109 DEBUG oslo_concurrency.lockutils [None req-fe28aa0b-7d23-40c5-ab57-10cae9f05cfa 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.302 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.303 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.303 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.319 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.470 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.470 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.471 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No VIF found with MAC fa:16:3e:29:dc:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:42:20 compute-1 nova_compute[226101]: 2025-12-06 07:42:20.471 226109 INFO nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Using config drive
Dec 06 07:42:20 compute-1 ceph-mon[81689]: pgmap v2603: 305 pgs: 305 active+clean; 459 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 153 op/s
Dec 06 07:42:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4069215907' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/305989615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/363974003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2820750602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/733693291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:21.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:21.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:23.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:23.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:24 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:42:25 compute-1 ovn_controller[130279]: 2025-12-06T07:42:25Z|00566|binding|INFO|Releasing lport dc336d05-182d-42ac-ab5e-a73bf30a0662 from this chassis (sb_readonly=0)
Dec 06 07:42:25 compute-1 ovn_controller[130279]: 2025-12-06T07:42:25Z|00567|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec 06 07:42:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:42:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:25.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:42:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:25.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:27.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:42:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:27.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:42:28 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 07:42:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:29.482 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:42:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:29.483 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:42:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:42:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:29.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:42:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:29 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 8.812995911s
Dec 06 07:42:29 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 8.812995911s
Dec 06 07:42:29 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.813199997s, txc = 0x55b5546b0000
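The three bluestore lines above report kv commit latencies near nine seconds on osd.1; storage-side stalls of that size tend to show up alongside the beacon and inactivity-probe timeouts seen nearby. A small sketch for tallying the worst slow-op latency per operation from an exported journal, where 'osd.log' is a hypothetical text export of these lines:

    # Aggregate bluestore slow-op latencies by operation name.
    import re

    pat = re.compile(r'slow operation observed for (\w+), latency = ([0-9.]+)s')
    worst = {}
    for line in open('osd.log'):
        m = pat.search(line)
        if m:
            op, lat = m.group(1), float(m.group(2))
            worst[op] = max(worst.get(op, 0.0), lat)
    print(worst)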
Dec 06 07:42:30 compute-1 ovsdb-server[47252]: ovs|00006|reconnect|ERR|tcp:127.0.0.1:48354: no response to inactivity probe after 5 seconds, disconnecting
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.032 226109 DEBUG nova.storage.rbd_utils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.037 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-d954d607-525c-4edf-ab9e-56658dd2525a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.037 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-d954d607-525c-4edf-ab9e-56658dd2525a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.037 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.038 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid d954d607-525c-4edf-ab9e-56658dd2525a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 51 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.041 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.044 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.044 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering BACKOFF _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.533 226109 INFO nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Creating config drive at /var/lib/nova/instances/5a5fdc2a-bc74-432d-affc-9a57f442105e/disk.config
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.538 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a5fdc2a-bc74-432d-affc-9a57f442105e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_z5b8pq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:30 compute-1 nova_compute[226101]: 2025-12-06 07:42:30.674 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a5fdc2a-bc74-432d-affc-9a57f442105e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_z5b8pq" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
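Config-drive creation is a plain mkisofs shell-out, as the two lines above show. The same invocation standalone, with '/tmp/metadata' standing in for the temporary metadata tree nova staged (the logged one was /tmp/tmpf_z5b8pq):

    # Build a config-2 ISO with the flags nova used in the log.
    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/metadata'],
        check=True)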
Dec 06 07:42:31 compute-1 podman[281455]: 2025-12-06 07:42:31.101086028 +0000 UTC m=+0.071332901 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:42:31 compute-1 podman[281454]: 2025-12-06 07:42:31.109440568 +0000 UTC m=+0.080676506 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:42:31 compute-1 podman[281456]: 2025-12-06 07:42:31.183278895 +0000 UTC m=+0.145301871 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 06 07:42:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/559352823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:31 compute-1 ceph-mon[81689]: pgmap v2604: 305 pgs: 305 active+clean; 468 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 160 op/s
Dec 06 07:42:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:31.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:31.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:33.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.091993332s, txc = 0x55b5535c4f00
Dec 06 07:42:33 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.542945862s, txc = 0x55b55391a000
Dec 06 07:42:34 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.012213707s, txc = 0x55b5539c3200
Dec 06 07:42:34 compute-1 nova_compute[226101]: 2025-12-06 07:42:34.331 226109 DEBUG nova.storage.rbd_utils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:34 compute-1 nova_compute[226101]: 2025-12-06 07:42:34.336 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a5fdc2a-bc74-432d-affc-9a57f442105e/disk.config 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
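With the ISO built, nova pushes it into the vms pool as <uuid>_disk.config. The logged rbd import as a standalone call, paths and image name copied from the command line above:

    # Import the config drive into RBD exactly as the log shows.
    import subprocess

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         '/var/lib/nova/instances/5a5fdc2a-bc74-432d-affc-9a57f442105e/disk.config',
         '5a5fdc2a-bc74-432d-affc-9a57f442105e_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)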
Dec 06 07:42:34 compute-1 nova_compute[226101]: 2025-12-06 07:42:34.371 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:42:34 compute-1 nova_compute[226101]: 2025-12-06 07:42:34.372 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 07:42:34 compute-1 nova_compute[226101]: 2025-12-06 07:42:34.373 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLOUT] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:34 compute-1 nova_compute[226101]: 2025-12-06 07:42:34.373 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 07:42:34 compute-1 nova_compute[226101]: 2025-12-06 07:42:34.374 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:34 compute-1 nova_compute[226101]: 2025-12-06 07:42:34.377 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:34 compute-1 nova_compute[226101]: 2025-12-06 07:42:34.380 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/743800435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:34 compute-1 ceph-mon[81689]: pgmap v2605: 305 pgs: 305 active+clean; 468 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 282 KiB/s rd, 3.5 MiB/s wr, 107 op/s
Dec 06 07:42:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/946810008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:34 compute-1 ceph-mon[81689]: pgmap v2606: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.8 MiB/s wr, 123 op/s
Dec 06 07:42:34 compute-1 ceph-mon[81689]: pgmap v2607: 305 pgs: 305 active+clean; 476 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 3.3 MiB/s wr, 100 op/s
Dec 06 07:42:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3930922968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:34 compute-1 ceph-mon[81689]: pgmap v2608: 305 pgs: 305 active+clean; 476 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 2.0 MiB/s wr, 71 op/s
Dec 06 07:42:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.078 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Updating instance_info_cache with network_info: [{"id": "1c911a62-4d45-43ff-bf14-309d464c7c81", "address": "fa:16:3e:a0:66:fb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c911a62-4d", "ovs_interfaceid": "1c911a62-4d45-43ff-bf14-309d464c7c81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.101 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-d954d607-525c-4edf-ab9e-56658dd2525a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.101 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.102 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.102 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.102 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.102 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.102 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.103 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.103 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:42:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:42:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:35.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:42:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:35.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:35 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Dec 06 07:42:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:35.850382) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:42:35 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Dec 06 07:42:35 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955850495, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1909, "num_deletes": 258, "total_data_size": 4108997, "memory_usage": 4166504, "flush_reason": "Manual Compaction"}
Dec 06 07:42:35 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.876 226109 DEBUG oslo_concurrency.processutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a5fdc2a-bc74-432d-affc-9a57f442105e/disk.config 5a5fdc2a-bc74-432d-affc-9a57f442105e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.877 226109 INFO nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Deleting local config drive /var/lib/nova/instances/5a5fdc2a-bc74-432d-affc-9a57f442105e/disk.config because it was imported into RBD.
Dec 06 07:42:35 compute-1 kernel: tap9e158df5-da: entered promiscuous mode
Dec 06 07:42:35 compute-1 NetworkManager[49031]: <info>  [1765006955.9632] manager: (tap9e158df5-da): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Dec 06 07:42:35 compute-1 ovn_controller[130279]: 2025-12-06T07:42:35Z|00568|binding|INFO|Claiming lport 9e158df5-da21-4989-b7ce-1f2ae62d5a2f for this chassis.
Dec 06 07:42:35 compute-1 ovn_controller[130279]: 2025-12-06T07:42:35Z|00569|binding|INFO|9e158df5-da21-4989-b7ce-1f2ae62d5a2f: Claiming fa:16:3e:29:dc:0f 10.100.0.9
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.966 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:35.978 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:dc:0f 10.100.0.9'], port_security=['fa:16:3e:29:dc:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5a5fdc2a-bc74-432d-affc-9a57f442105e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9e158df5-da21-4989-b7ce-1f2ae62d5a2f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:42:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:35.981 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9e158df5-da21-4989-b7ce-1f2ae62d5a2f in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 bound to our chassis
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.984 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:35.984 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec 06 07:42:35 compute-1 ovn_controller[130279]: 2025-12-06T07:42:35Z|00570|binding|INFO|Setting lport 9e158df5-da21-4989-b7ce-1f2ae62d5a2f ovn-installed in OVS
Dec 06 07:42:35 compute-1 ovn_controller[130279]: 2025-12-06T07:42:35Z|00571|binding|INFO|Setting lport 9e158df5-da21-4989-b7ce-1f2ae62d5a2f up in Southbound
Dec 06 07:42:35 compute-1 nova_compute[226101]: 2025-12-06 07:42:35.989 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:36.005 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[887e3b59-0d60-471f-b7cd-dcd1e3aa78f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:36 compute-1 systemd-udevd[281561]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:42:36 compute-1 systemd-machined[190302]: New machine qemu-66-instance-00000091.
Dec 06 07:42:36 compute-1 NetworkManager[49031]: <info>  [1765006956.0258] device (tap9e158df5-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:42:36 compute-1 NetworkManager[49031]: <info>  [1765006956.0271] device (tap9e158df5-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:42:36 compute-1 systemd[1]: Started Virtual Machine qemu-66-instance-00000091.
Dec 06 07:42:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:36.041 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e68edca4-68cf-47ae-91b1-f4bd5755e1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:36.047 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2dba4275-c04a-4152-b582-713958abfeec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:36.079 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2e5f51-a932-41a4-9a2e-a1fe33432a07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:36.099 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[843dc34e-37fe-41af-8bc4-1ee8f8cbdbda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713835, 'reachable_time': 30087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281571, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:36.124 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb51355-c9bd-4da4-bf36-094fe2247035]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap35a27638-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713845, 'tstamp': 713845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281575, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap35a27638-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713847, 'tstamp': 713847}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281575, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:36.128 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:36 compute-1 nova_compute[226101]: 2025-12-06 07:42:36.130 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:36 compute-1 nova_compute[226101]: 2025-12-06 07:42:36.133 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:36.133 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35a27638-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:36.134 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:42:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:36.134 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35a27638-30, col_values=(('external_ids', {'iface-id': '5e371956-96bf-49df-b4a8-97154044dc54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:36.135 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006956207020, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 1715795, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53238, "largest_seqno": 55142, "table_properties": {"data_size": 1709538, "index_size": 3203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 17153, "raw_average_key_size": 21, "raw_value_size": 1695498, "raw_average_value_size": 2151, "num_data_blocks": 140, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006799, "oldest_key_time": 1765006799, "file_creation_time": 1765006955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 356638 microseconds, and 6616 cpu microseconds.
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:42:36 compute-1 nova_compute[226101]: 2025-12-06 07:42:36.237 226109 DEBUG nova.compute.manager [req-2a6246e5-7e0b-41db-b103-fda750cb249c req-2f535496-e55a-41f7-a0e5-a9732f11f0af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Received event network-vif-plugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:36 compute-1 nova_compute[226101]: 2025-12-06 07:42:36.237 226109 DEBUG oslo_concurrency.lockutils [req-2a6246e5-7e0b-41db-b103-fda750cb249c req-2f535496-e55a-41f7-a0e5-a9732f11f0af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:36 compute-1 nova_compute[226101]: 2025-12-06 07:42:36.238 226109 DEBUG oslo_concurrency.lockutils [req-2a6246e5-7e0b-41db-b103-fda750cb249c req-2f535496-e55a-41f7-a0e5-a9732f11f0af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:36 compute-1 nova_compute[226101]: 2025-12-06 07:42:36.238 226109 DEBUG oslo_concurrency.lockutils [req-2a6246e5-7e0b-41db-b103-fda750cb249c req-2f535496-e55a-41f7-a0e5-a9732f11f0af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:36 compute-1 nova_compute[226101]: 2025-12-06 07:42:36.238 226109 DEBUG nova.compute.manager [req-2a6246e5-7e0b-41db-b103-fda750cb249c req-2f535496-e55a-41f7-a0e5-a9732f11f0af 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Processing event network-vif-plugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:42:36 compute-1 ceph-mon[81689]: pgmap v2609: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 314 KiB/s rd, 2.7 MiB/s wr, 67 op/s
Dec 06 07:42:36 compute-1 ceph-mon[81689]: pgmap v2610: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 1.5 MiB/s wr, 31 op/s
Dec 06 07:42:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1626641352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:36 compute-1 ceph-mon[81689]: pgmap v2611: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 1.5 MiB/s wr, 32 op/s
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:36.207064) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 1715795 bytes OK
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:36.207082) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:36.313935) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:36.313988) EVENT_LOG_v1 {"time_micros": 1765006956313977, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:36.314016) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 4100272, prev total WAL file size 4153274, number of live WAL files 2.
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:36.315561) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373735' seq:72057594037927935, type:22 .. '6D6772737461740032303237' seq:0, type:0; will stop at (end)
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(1675KB)], [102(12MB)]
Dec 06 07:42:36 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006956315619, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 14873049, "oldest_snapshot_seqno": -1}
Dec 06 07:42:37 compute-1 nova_compute[226101]: 2025-12-06 07:42:37.458 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:42:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:37.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:42:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:42:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:37.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8874 keys, 11975268 bytes, temperature: kUnknown
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006958094273, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 11975268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11917828, "index_size": 34160, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 229426, "raw_average_key_size": 25, "raw_value_size": 11761709, "raw_average_value_size": 1325, "num_data_blocks": 1340, "num_entries": 8874, "num_filter_entries": 8874, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765006956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:38.094828) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 11975268 bytes
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:38.276524) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 8.4 rd, 6.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 12.5 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(15.6) write-amplify(7.0) OK, records in: 9344, records dropped: 470 output_compression: NoCompression
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:38.276585) EVENT_LOG_v1 {"time_micros": 1765006958276561, "job": 64, "event": "compaction_finished", "compaction_time_micros": 1778939, "compaction_time_cpu_micros": 26889, "output_level": 6, "num_output_files": 1, "total_output_size": 11975268, "num_input_records": 9344, "num_output_records": 8874, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006958277567, "job": 64, "event": "table_file_deletion", "file_number": 104}
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006958282700, "job": 64, "event": "table_file_deletion", "file_number": 102}
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:36.315477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:38.282816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:38.282822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:38.282823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:38.282825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:38 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:42:38.282827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:38 compute-1 nova_compute[226101]: 2025-12-06 07:42:38.333 226109 DEBUG nova.compute.manager [req-54c8da63-a188-4ce9-91dd-3abed6cad971 req-41781a46-a114-464b-8dda-f9cca412a547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Received event network-vif-plugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:38 compute-1 nova_compute[226101]: 2025-12-06 07:42:38.334 226109 DEBUG oslo_concurrency.lockutils [req-54c8da63-a188-4ce9-91dd-3abed6cad971 req-41781a46-a114-464b-8dda-f9cca412a547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:38 compute-1 nova_compute[226101]: 2025-12-06 07:42:38.334 226109 DEBUG oslo_concurrency.lockutils [req-54c8da63-a188-4ce9-91dd-3abed6cad971 req-41781a46-a114-464b-8dda-f9cca412a547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:38 compute-1 nova_compute[226101]: 2025-12-06 07:42:38.335 226109 DEBUG oslo_concurrency.lockutils [req-54c8da63-a188-4ce9-91dd-3abed6cad971 req-41781a46-a114-464b-8dda-f9cca412a547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:38 compute-1 nova_compute[226101]: 2025-12-06 07:42:38.335 226109 DEBUG nova.compute.manager [req-54c8da63-a188-4ce9-91dd-3abed6cad971 req-41781a46-a114-464b-8dda-f9cca412a547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] No waiting events found dispatching network-vif-plugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:42:38 compute-1 nova_compute[226101]: 2025-12-06 07:42:38.335 226109 WARNING nova.compute.manager [req-54c8da63-a188-4ce9-91dd-3abed6cad971 req-41781a46-a114-464b-8dda-f9cca412a547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Received unexpected event network-vif-plugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f for instance with vm_state building and task_state spawning.
Dec 06 07:42:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:38.485 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3098859861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/80076762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1153323595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:38 compute-1 ceph-mon[81689]: pgmap v2612: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 MiB/s wr, 32 op/s
Dec 06 07:42:38 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Dec 06 07:42:39 compute-1 nova_compute[226101]: 2025-12-06 07:42:39.379 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:39.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:39.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:40 compute-1 ceph-mon[81689]: pgmap v2613: 305 pgs: 305 active+clean; 523 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.292 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006961.2916064, 5a5fdc2a-bc74-432d-affc-9a57f442105e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.292 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] VM Started (Lifecycle Event)
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.294 226109 DEBUG nova.compute.manager [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.298 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.303 226109 INFO nova.virt.libvirt.driver [-] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Instance spawned successfully.
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.303 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.311 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.316 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.323 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.323 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.324 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.324 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.325 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.325 226109 DEBUG nova.virt.libvirt.driver [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.333 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.334 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006961.2917404, 5a5fdc2a-bc74-432d-affc-9a57f442105e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.334 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] VM Paused (Lifecycle Event)
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.355 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.358 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006961.2978265, 5a5fdc2a-bc74-432d-affc-9a57f442105e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.358 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] VM Resumed (Lifecycle Event)
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.382 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.385 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.395 226109 INFO nova.compute.manager [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Took 30.21 seconds to spawn the instance on the hypervisor.
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.395 226109 DEBUG nova.compute.manager [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.407 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.451 226109 INFO nova.compute.manager [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Took 33.93 seconds to build instance.
Dec 06 07:42:41 compute-1 nova_compute[226101]: 2025-12-06 07:42:41.467 226109 DEBUG oslo_concurrency.lockutils [None req-ddf6fb18-15d0-44b4-a6c0-ddfefb975f5c 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 34.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:41.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:42 compute-1 ceph-mon[81689]: pgmap v2614: 305 pgs: 305 active+clean; 565 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Dec 06 07:42:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2337735335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:42 compute-1 nova_compute[226101]: 2025-12-06 07:42:42.501 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/713282379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4204003624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/35790465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:42:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/35790465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:42:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1277896716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:43 compute-1 ceph-mon[81689]: pgmap v2615: 305 pgs: 305 active+clean; 565 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.3 MiB/s wr, 42 op/s
Dec 06 07:42:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:43.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:43.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:44 compute-1 nova_compute[226101]: 2025-12-06 07:42:44.380 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.004 226109 DEBUG oslo_concurrency.lockutils [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "5a5fdc2a-bc74-432d-affc-9a57f442105e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.005 226109 DEBUG oslo_concurrency.lockutils [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.005 226109 DEBUG oslo_concurrency.lockutils [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.005 226109 DEBUG oslo_concurrency.lockutils [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.005 226109 DEBUG oslo_concurrency.lockutils [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.006 226109 INFO nova.compute.manager [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Terminating instance
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.007 226109 DEBUG nova.compute.manager [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:42:45 compute-1 ceph-mon[81689]: pgmap v2616: 305 pgs: 305 active+clean; 607 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 763 KiB/s rd, 4.0 MiB/s wr, 103 op/s
Dec 06 07:42:45 compute-1 kernel: tap9e158df5-da (unregistering): left promiscuous mode
Dec 06 07:42:45 compute-1 NetworkManager[49031]: <info>  [1765006965.2691] device (tap9e158df5-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:42:45 compute-1 ovn_controller[130279]: 2025-12-06T07:42:45Z|00572|binding|INFO|Releasing lport 9e158df5-da21-4989-b7ce-1f2ae62d5a2f from this chassis (sb_readonly=0)
Dec 06 07:42:45 compute-1 ovn_controller[130279]: 2025-12-06T07:42:45Z|00573|binding|INFO|Setting lport 9e158df5-da21-4989-b7ce-1f2ae62d5a2f down in Southbound
Dec 06 07:42:45 compute-1 ovn_controller[130279]: 2025-12-06T07:42:45Z|00574|binding|INFO|Removing iface tap9e158df5-da ovn-installed in OVS
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.279 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.282 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.288 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:dc:0f 10.100.0.9'], port_security=['fa:16:3e:29:dc:0f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5a5fdc2a-bc74-432d-affc-9a57f442105e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9e158df5-da21-4989-b7ce-1f2ae62d5a2f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.290 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9e158df5-da21-4989-b7ce-1f2ae62d5a2f in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 unbound from our chassis
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.291 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35a27638-382c-4afb-83b0-edd6d7f4bca8
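
The 'Matched UPDATE: PortBindingUpdatedEvent' record above is ovsdbapp's row-event machinery at work: the agent registers an event object naming a table and an event tuple, ovsdbapp evaluates its match logic for every row change, and run() fires when the chassis binding flips, which is what triggers the 'unbound from our chassis' message. A rough sketch of that pattern (the class below is an illustrative assumption, not neutron's actual PortBindingUpdatedEvent):

    # Sketch of the ovsdbapp row-event pattern used by the metadata agent.
    # PortChassisEvent and its match logic are illustrative assumptions.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortChassisEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to Southbound Port_Binding rows.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Only interesting when the 'chassis' column changed, i.e. the
            # port was bound to or released from a chassis (as logged above).
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('port %s changed chassis' % row.logical_port)
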
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.297 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.310 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[412bf6d5-50eb-45e2-a891-4795176e1759]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:45 compute-1 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000091.scope: Deactivated successfully.
Dec 06 07:42:45 compute-1 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000091.scope: Consumed 4.345s CPU time.
Dec 06 07:42:45 compute-1 systemd-machined[190302]: Machine qemu-66-instance-00000091 terminated.
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.349 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d41667-3b1c-4c2e-9480-60a328b12477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.354 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[bb4176ff-6c4a-4f80-acf1-798f5cab247c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.383 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.389 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[53d08910-68d4-418a-8fb5-790222c0fad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.408 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7d01d611-2529-4e5b-a4d3-d15d3c6339fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713835, 'reachable_time': 30087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281632, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.431 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[71a0f896-326a-4e64-9855-c56ab162cf2b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap35a27638-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713845, 'tstamp': 713845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281634, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap35a27638-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713847, 'tstamp': 713847}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281634, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
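
The two privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK and RTM_NEWADDR) taken inside the ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 namespace: the metadata tap carries both the subnet address 10.100.0.2/28 and the metadata service address 169.254.169.254/32. Roughly the same data can be read directly with pyroute2, bypassing privsep (namespace name copied from the log; requires root):

    # Sketch: dump link and address state inside the OVN metadata namespace,
    # similar to the privsep replies above. Assumes pyroute2 and root.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8')
    try:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_OPERSTATE'))
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_LABEL'),
                  '%s/%d' % (addr.get_attr('IFA_ADDRESS'), addr['prefixlen']))
    finally:
        ns.close()
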
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.434 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.435 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.438 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.439 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35a27638-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.439 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.440 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35a27638-30, col_values=(('external_ids', {'iface-id': '5e371956-96bf-49df-b4a8-97154044dc54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:42:45.440 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
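
The four transaction records above show the metadata agent re-asserting its tap plumbing through ovsdbapp: drop tap35a27638-30 from br-ex if present, add it to br-int, and point external_ids:iface-id at the metadata port; both 'Transaction caused no change' results mean the desired state already held, so the commands are idempotent. A sketch of the same commands through ovsdbapp's Open vSwitch API (the socket path is a conventional default and an assumption here; port, bridge, and iface-id values are copied from the log):

    # Sketch: the same idempotent OVS commands via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap35a27638-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap35a27638-30', may_exist=True))
        txn.add(api.db_set('Interface', 'tap35a27638-30',
                           ('external_ids',
                            {'iface-id': '5e371956-96bf-49df-b4a8-97154044dc54'})))
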
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.448 226109 INFO nova.virt.libvirt.driver [-] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Instance destroyed successfully.
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.449 226109 DEBUG nova.objects.instance [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'resources' on Instance uuid 5a5fdc2a-bc74-432d-affc-9a57f442105e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.461 226109 DEBUG nova.virt.libvirt.vif [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:42:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-309022851',display_name='tempest-ServersTestJSON-server-309022851',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-309022851',id=145,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:42:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-s2zoillg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:42:41Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=5a5fdc2a-bc74-432d-affc-9a57f442105e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "address": "fa:16:3e:29:dc:0f", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e158df5-da", "ovs_interfaceid": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.461 226109 DEBUG nova.network.os_vif_util [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "address": "fa:16:3e:29:dc:0f", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e158df5-da", "ovs_interfaceid": "9e158df5-da21-4989-b7ce-1f2ae62d5a2f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.462 226109 DEBUG nova.network.os_vif_util [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:dc:0f,bridge_name='br-int',has_traffic_filtering=True,id=9e158df5-da21-4989-b7ce-1f2ae62d5a2f,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e158df5-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.462 226109 DEBUG os_vif [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:dc:0f,bridge_name='br-int',has_traffic_filtering=True,id=9e158df5-da21-4989-b7ce-1f2ae62d5a2f,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e158df5-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.464 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.465 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e158df5-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.467 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:45 compute-1 nova_compute[226101]: 2025-12-06 07:42:45.469 226109 INFO os_vif [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:dc:0f,bridge_name='br-int',has_traffic_filtering=True,id=9e158df5-da21-4989-b7ce-1f2ae62d5a2f,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e158df5-da')
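
The nova-compute records above show the VIF dict being converted to an os-vif VIFOpenVSwitch object and handed to the os-vif plugin, which issues the DelPortCommand on br-int seen just before. The public os-vif surface for that is small; a sketch with values copied from the log (this is an illustration of the API, not nova's code path, and the InstanceInfo fields are abbreviated):

    # Sketch: unplug an OVS VIF through os-vif's public API, mirroring the
    # conversion logged above. Field values are copied from the log records.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the os-vif plugins (ovs, noop, ...)

    net = network.Network(id='35a27638-382c-4afb-83b0-edd6d7f4bca8',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(id='9e158df5-da21-4989-b7ce-1f2ae62d5a2f',
                                 address='fa:16:3e:29:dc:0f',
                                 bridge_name='br-int',
                                 vif_name='tap9e158df5-da',
                                 network=net)
    inst = instance_info.InstanceInfo(
        uuid='5a5fdc2a-bc74-432d-affc-9a57f442105e',
        name='instance-00000091')

    os_vif.unplug(ovs_vif, inst)
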
Dec 06 07:42:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:45.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:45.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.764 226109 DEBUG nova.compute.manager [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Received event network-vif-unplugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.765 226109 DEBUG oslo_concurrency.lockutils [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.765 226109 DEBUG oslo_concurrency.lockutils [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.765 226109 DEBUG oslo_concurrency.lockutils [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.765 226109 DEBUG nova.compute.manager [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] No waiting events found dispatching network-vif-unplugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.766 226109 DEBUG nova.compute.manager [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Received event network-vif-unplugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.766 226109 DEBUG nova.compute.manager [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Received event network-vif-plugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.766 226109 DEBUG oslo_concurrency.lockutils [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.766 226109 DEBUG oslo_concurrency.lockutils [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.766 226109 DEBUG oslo_concurrency.lockutils [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.767 226109 DEBUG nova.compute.manager [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] No waiting events found dispatching network-vif-plugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:42:46 compute-1 nova_compute[226101]: 2025-12-06 07:42:46.767 226109 WARNING nova.compute.manager [req-57a4818a-9794-4bab-9b47-7236bce89002 req-1f32b49d-75c2-470d-be8d-a2664e520471 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Received unexpected event network-vif-plugged-9e158df5-da21-4989-b7ce-1f2ae62d5a2f for instance with vm_state active and task_state deleting.
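
The acquire/release pairs above are oslo.concurrency's named-lock pattern: each external event dispatch serializes on a lock named '<instance uuid>-events', so event handling cannot interleave with other threads touching the same instance's event state. The 'No waiting events found' lines then mean nothing was registered to wait for these events, and the late network-vif-plugged for an instance in task_state deleting is merely warned about and dropped. The lock pattern itself is tiny (the lock name mirrors the log; the function body is illustrative):

    # Sketch: the named-lock pattern visible in the lockutils records above.
    from oslo_concurrency import lockutils

    uuid = '5a5fdc2a-bc74-432d-affc-9a57f442105e'

    @lockutils.synchronized('%s-events' % uuid)
    def _pop_event():
        # Look up and remove any waiter registered for this event; returning
        # nothing corresponds to the 'No waiting events found' records.
        return None

    _pop_event()
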
Dec 06 07:42:47 compute-1 ceph-mon[81689]: pgmap v2617: 305 pgs: 305 active+clean; 623 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.3 MiB/s wr, 289 op/s
Dec 06 07:42:47 compute-1 nova_compute[226101]: 2025-12-06 07:42:47.320 226109 INFO nova.virt.libvirt.driver [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Deleting instance files /var/lib/nova/instances/5a5fdc2a-bc74-432d-affc-9a57f442105e_del
Dec 06 07:42:47 compute-1 nova_compute[226101]: 2025-12-06 07:42:47.322 226109 INFO nova.virt.libvirt.driver [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Deletion of /var/lib/nova/instances/5a5fdc2a-bc74-432d-affc-9a57f442105e_del complete
Dec 06 07:42:47 compute-1 nova_compute[226101]: 2025-12-06 07:42:47.380 226109 INFO nova.compute.manager [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Took 2.37 seconds to destroy the instance on the hypervisor.
Dec 06 07:42:47 compute-1 nova_compute[226101]: 2025-12-06 07:42:47.380 226109 DEBUG oslo.service.loopingcall [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:42:47 compute-1 nova_compute[226101]: 2025-12-06 07:42:47.381 226109 DEBUG nova.compute.manager [-] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:42:47 compute-1 nova_compute[226101]: 2025-12-06 07:42:47.381 226109 DEBUG nova.network.neutron [-] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:42:47 compute-1 nova_compute[226101]: 2025-12-06 07:42:47.502 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:42:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:47.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:42:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:42:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:47.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:42:48 compute-1 nova_compute[226101]: 2025-12-06 07:42:48.643 226109 DEBUG nova.network.neutron [-] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:42:48 compute-1 nova_compute[226101]: 2025-12-06 07:42:48.665 226109 INFO nova.compute.manager [-] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Took 1.28 seconds to deallocate network for instance.
Dec 06 07:42:48 compute-1 nova_compute[226101]: 2025-12-06 07:42:48.707 226109 DEBUG oslo_concurrency.lockutils [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:48 compute-1 nova_compute[226101]: 2025-12-06 07:42:48.708 226109 DEBUG oslo_concurrency.lockutils [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:48 compute-1 nova_compute[226101]: 2025-12-06 07:42:48.816 226109 DEBUG oslo_concurrency.processutils [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:42:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2237471296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:49 compute-1 nova_compute[226101]: 2025-12-06 07:42:49.268 226109 DEBUG oslo_concurrency.processutils [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
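
The resource tracker's disk accounting shells out to ceph df and parses the JSON, as the two processutils records above show (0.451 s for the round trip, with the matching mon_command dispatch visible on the ceph-mon side). The equivalent call through oslo.concurrency, with the command line copied verbatim from the log (the JSON key layout shown is the usual one for recent Ceph releases):

    # Sketch: run `ceph df` exactly as logged and read the cluster totals.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    total = stats['stats']['total_bytes']        # layout per `ceph df -f json`
    avail = stats['stats']['total_avail_bytes']
    print('%.1f GiB free of %.1f GiB' % (avail / 2**30, total / 2**30))
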
Dec 06 07:42:49 compute-1 nova_compute[226101]: 2025-12-06 07:42:49.285 226109 DEBUG nova.compute.provider_tree [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:42:49 compute-1 nova_compute[226101]: 2025-12-06 07:42:49.308 226109 DEBUG nova.scheduler.client.report [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
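
The inventory dict above is what placement holds for this node; the schedulable capacity of each resource class is (total - reserved) * allocation_ratio. Worked out for the logged numbers:

    # Sketch: effective placement capacity from the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1: the node's 8 physical cores
    # present as 32 schedulable vCPUs at the 4.0 overcommit ratio.
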
Dec 06 07:42:49 compute-1 nova_compute[226101]: 2025-12-06 07:42:49.360 226109 DEBUG oslo_concurrency.lockutils [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:49 compute-1 nova_compute[226101]: 2025-12-06 07:42:49.391 226109 INFO nova.scheduler.client.report [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Deleted allocations for instance 5a5fdc2a-bc74-432d-affc-9a57f442105e
Dec 06 07:42:49 compute-1 nova_compute[226101]: 2025-12-06 07:42:49.465 226109 DEBUG oslo_concurrency.lockutils [None req-ee26f3f3-b597-49a1-bca8-2d76875317ce 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "5a5fdc2a-bc74-432d-affc-9a57f442105e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:49.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:49.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:50 compute-1 ceph-mon[81689]: pgmap v2618: 305 pgs: 305 active+clean; 585 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 4.3 MiB/s wr, 374 op/s
Dec 06 07:42:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2608166669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2237471296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:50 compute-1 nova_compute[226101]: 2025-12-06 07:42:50.466 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:50 compute-1 nova_compute[226101]: 2025-12-06 07:42:50.840 226109 DEBUG nova.compute.manager [req-6094419e-e04c-4439-bf29-8ac8b11abb1c req-ae9f9d8f-44dc-4a8b-88d3-7a2711fe9ae5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Received event network-vif-deleted-9e158df5-da21-4989-b7ce-1f2ae62d5a2f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:51.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:51.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:52 compute-1 ceph-mon[81689]: pgmap v2619: 305 pgs: 305 active+clean; 478 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 3.7 MiB/s wr, 420 op/s
Dec 06 07:42:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3645391525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:52 compute-1 nova_compute[226101]: 2025-12-06 07:42:52.538 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:42:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:53.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:42:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:54 compute-1 ceph-mon[81689]: pgmap v2620: 305 pgs: 305 active+clean; 478 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 2.0 MiB/s wr, 399 op/s
Dec 06 07:42:54 compute-1 ovn_controller[130279]: 2025-12-06T07:42:54Z|00575|binding|INFO|Releasing lport dc336d05-182d-42ac-ab5e-a73bf30a0662 from this chassis (sb_readonly=0)
Dec 06 07:42:54 compute-1 ovn_controller[130279]: 2025-12-06T07:42:54Z|00576|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec 06 07:42:54 compute-1 nova_compute[226101]: 2025-12-06 07:42:54.157 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:55 compute-1 ceph-mon[81689]: pgmap v2621: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 2.0 MiB/s wr, 406 op/s
Dec 06 07:42:55 compute-1 nova_compute[226101]: 2025-12-06 07:42:55.468 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:55.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:42:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:55.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.223 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.223 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.239 226109 DEBUG nova.compute.manager [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.319 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.320 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.325 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.325 226109 INFO nova.compute.claims [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.461 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:42:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2087247517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.901 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.907 226109 DEBUG nova.compute.provider_tree [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.931 226109 DEBUG nova.scheduler.client.report [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:42:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2087247517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.954 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:56 compute-1 nova_compute[226101]: 2025-12-06 07:42:56.955 226109 DEBUG nova.compute.manager [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.002 226109 DEBUG nova.compute.manager [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.002 226109 DEBUG nova.network.neutron [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.021 226109 INFO nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.038 226109 DEBUG nova.compute.manager [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.116 226109 DEBUG nova.compute.manager [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.117 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.118 226109 INFO nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Creating image(s)
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.142 226109 DEBUG nova.storage.rbd_utils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.167 226109 DEBUG nova.storage.rbd_utils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.191 226109 DEBUG nova.storage.rbd_utils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.196 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.262 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
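
The qemu-img info call above runs under oslo_concurrency.prlimit, capping the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time so a malformed image cannot wedge the compute service. From Python, the same guard is the prlimit= argument to execute() (command line and path copied from the log):

    # Sketch: qemu-img info under the same resource limits the log shows.
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json', prlimit=limits)
    print(json.loads(out)['virtual-size'])
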
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.263 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.264 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.264 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.287 226109 DEBUG nova.storage.rbd_utils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.290 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.325 226109 DEBUG nova.policy [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d8b62a3276f4a8b8349af67b82134c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eff1f6a1654b45079de20eddb830e76d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.589 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.601 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.669 226109 DEBUG nova.storage.rbd_utils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] resizing rbd image 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
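
After the rbd import above, nova resizes the freshly imported image in the vms pool to the flavor's 1 GiB root disk (1073741824 bytes, per the resize record). The earlier existence checks ('rbd image ... does not exist') and this resize can be sketched with the rados/rbd Python bindings (pool, client id, image name, and size copied from the log; this mirrors, but is not, nova's rbd_utils):

    # Sketch: existence check and resize of an RBD image, mirroring the
    # rbd_utils records above. Assumes the rados/rbd Python bindings.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        name = '2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk'
        try:
            with rbd.Image(ioctx, name) as image:
                image.resize(1073741824)
                print(name, 'resized to', image.size())
        except rbd.ImageNotFound:
            print(name, 'does not exist')  # matches the DEBUG records above
    finally:
        ioctx.close()
        cluster.shutdown()
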
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.788 226109 DEBUG nova.objects.instance [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'migration_context' on Instance uuid 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.803 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.804 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Ensure instance console log exists: /var/lib/nova/instances/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.805 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.805 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:57 compute-1 nova_compute[226101]: 2025-12-06 07:42:57.806 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
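The Acquiring/acquired/released triplet above is the standard DEBUG trace emitted by oslo.concurrency's synchronized decorator. A minimal sketch of the pattern (the function name here is a stand-in for the driver method named in the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs():
        # Body elided; entering and leaving this function is what produces
        # the Acquiring/acquired/released lines logged above.
        pass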
Dec 06 07:42:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:57.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:42:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:57.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:42:57 compute-1 ceph-mon[81689]: pgmap v2622: 305 pgs: 305 active+clean; 436 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 1.7 MiB/s wr, 388 op/s
Dec 06 07:42:58 compute-1 nova_compute[226101]: 2025-12-06 07:42:58.028 226109 DEBUG nova.network.neutron [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Successfully created port: 9b1ce5f2-ce3a-4770-b68b-cd43556f8871 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:42:58 compute-1 nova_compute[226101]: 2025-12-06 07:42:58.907 226109 DEBUG nova.network.neutron [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Successfully updated port: 9b1ce5f2-ce3a-4770-b68b-cd43556f8871 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
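Port 9b1ce5f2-ce3a-4770-b68b-cd43556f8871 was created and then updated through Neutron's REST API on the instance's behalf. For comparison, a hand-made port on the same network via openstacksdk might look like the sketch below; the cloud name is an assumption, only the network UUID comes from the log:

    import openstack

    conn = openstack.connect(cloud='overcloud')  # cloud name is illustrative
    port = conn.network.create_port(
        network_id='35a27638-382c-4afb-83b0-edd6d7f4bca8')
    print(port.id, port.mac_address)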
Dec 06 07:42:58 compute-1 nova_compute[226101]: 2025-12-06 07:42:58.924 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "refresh_cache-2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:42:58 compute-1 nova_compute[226101]: 2025-12-06 07:42:58.924 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquired lock "refresh_cache-2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:42:58 compute-1 nova_compute[226101]: 2025-12-06 07:42:58.924 226109 DEBUG nova.network.neutron [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.003 226109 DEBUG nova.compute.manager [req-6ff66f4e-07a9-4dd0-9d52-c41de813efec req-82d45a24-dee0-4fb8-958f-b54c046336f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Received event network-changed-9b1ce5f2-ce3a-4770-b68b-cd43556f8871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.003 226109 DEBUG nova.compute.manager [req-6ff66f4e-07a9-4dd0-9d52-c41de813efec req-82d45a24-dee0-4fb8-958f-b54c046336f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Refreshing instance network info cache due to event network-changed-9b1ce5f2-ce3a-4770-b68b-cd43556f8871. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.004 226109 DEBUG oslo_concurrency.lockutils [req-6ff66f4e-07a9-4dd0-9d52-c41de813efec req-82d45a24-dee0-4fb8-958f-b54c046336f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.053 226109 DEBUG nova.network.neutron [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:42:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:42:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:42:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:59 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:59.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:42:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:59.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:42:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.945 226109 DEBUG nova.network.neutron [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Updating instance_info_cache with network_info: [{"id": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "address": "fa:16:3e:c8:22:23", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b1ce5f2-ce", "ovs_interfaceid": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.965 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Releasing lock "refresh_cache-2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.965 226109 DEBUG nova.compute.manager [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Instance network_info: |[{"id": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "address": "fa:16:3e:c8:22:23", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b1ce5f2-ce", "ovs_interfaceid": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.966 226109 DEBUG oslo_concurrency.lockutils [req-6ff66f4e-07a9-4dd0-9d52-c41de813efec req-82d45a24-dee0-4fb8-958f-b54c046336f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.966 226109 DEBUG nova.network.neutron [req-6ff66f4e-07a9-4dd0-9d52-c41de813efec req-82d45a24-dee0-4fb8-958f-b54c046336f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Refreshing network info cache for port 9b1ce5f2-ce3a-4770-b68b-cd43556f8871 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.968 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Start _get_guest_xml network_info=[{"id": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "address": "fa:16:3e:c8:22:23", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b1ce5f2-ce", "ovs_interfaceid": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.972 226109 WARNING nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.987 226109 DEBUG nova.virt.libvirt.host [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.988 226109 DEBUG nova.virt.libvirt.host [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.995 226109 DEBUG nova.virt.libvirt.host [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.996 226109 DEBUG nova.virt.libvirt.host [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.997 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.997 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.997 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.998 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.998 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.998 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.998 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.998 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.999 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.999 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.999 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:42:59 compute-1 nova_compute[226101]: 2025-12-06 07:42:59.999 226109 DEBUG nova.virt.hardware [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
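The topology walk above (limits 0:0:0, maxima of 65536, a single candidate for one vCPU) amounts to enumerating sockets*cores*threads factorizations of the vCPU count. An illustrative re-derivation of that arithmetic, not Nova's actual code:

    import itertools

    def possible_topologies(vcpus, max_each=65536):
        # Yield every (sockets, cores, threads) whose product is vcpus.
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and max(s, c, t) <= max_each:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log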
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.001 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:00 compute-1 ceph-mon[81689]: pgmap v2623: 305 pgs: 305 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 254 op/s
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.447 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006965.446788, 5a5fdc2a-bc74-432d-affc-9a57f442105e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.448 226109 INFO nova.compute.manager [-] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] VM Stopped (Lifecycle Event)
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.470 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.475 226109 DEBUG nova.compute.manager [None req-d85fd500-bb59-43e4-8444-0aca87f554db - - - - - -] [instance: 5a5fdc2a-bc74-432d-affc-9a57f442105e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:43:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3366057383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.511 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
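The mon dump that just returned is how Nova discovers the monitor addresses that later appear as <host> elements in the guest XML below. A hedged sketch of the same lookup; the "mons" / "public_addr" key names follow the usual mon dump JSON layout and are assumptions here:

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    mons = json.loads(raw)['mons']
    # public_addr typically looks like "192.168.122.100:6789/0"; keep host:port.
    print([m['public_addr'].split('/')[0] for m in mons])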
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.542 226109 DEBUG nova.storage.rbd_utils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.547 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3366057383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:43:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2101538755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.987 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.989 226109 DEBUG nova.virt.libvirt.vif [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:42:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1055944777',display_name='tempest-ServersTestJSON-server-1055944777',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1055944777',id=149,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-5s9q9u4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:42:57Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=2ac028a3-53d2-4e7d-91b6-cd4578cc36f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "address": "fa:16:3e:c8:22:23", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b1ce5f2-ce", "ovs_interfaceid": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.989 226109 DEBUG nova.network.os_vif_util [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "address": "fa:16:3e:c8:22:23", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b1ce5f2-ce", "ovs_interfaceid": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.990 226109 DEBUG nova.network.os_vif_util [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:22:23,bridge_name='br-int',has_traffic_filtering=True,id=9b1ce5f2-ce3a-4770-b68b-cd43556f8871,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b1ce5f2-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:43:00 compute-1 nova_compute[226101]: 2025-12-06 07:43:00.991 226109 DEBUG nova.objects.instance [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.007 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:43:01 compute-1 nova_compute[226101]:   <uuid>2ac028a3-53d2-4e7d-91b6-cd4578cc36f9</uuid>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   <name>instance-00000095</name>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <nova:name>tempest-ServersTestJSON-server-1055944777</nova:name>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:42:59</nova:creationTime>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <nova:user uuid="0d8b62a3276f4a8b8349af67b82134c8">tempest-ServersTestJSON-374151197-project-member</nova:user>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <nova:project uuid="eff1f6a1654b45079de20eddb830e76d">tempest-ServersTestJSON-374151197</nova:project>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <nova:port uuid="9b1ce5f2-ce3a-4770-b68b-cd43556f8871">
Dec 06 07:43:01 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <system>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <entry name="serial">2ac028a3-53d2-4e7d-91b6-cd4578cc36f9</entry>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <entry name="uuid">2ac028a3-53d2-4e7d-91b6-cd4578cc36f9</entry>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     </system>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   <os>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   </os>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   <features>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   </features>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk">
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       </source>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk.config">
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       </source>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:43:01 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:c8:22:23"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <target dev="tap9b1ce5f2-ce"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9/console.log" append="off"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <video>
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     </video>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:43:01 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:43:01 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:43:01 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:43:01 compute-1 nova_compute[226101]: </domain>
Dec 06 07:43:01 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
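The <domain> document dumped above is what gets handed to libvirt next. A minimal libvirt-python sketch of that hand-off; the XML file path is hypothetical, and whether the driver defines and then starts the domain in exactly these two calls is not shown in the log:

    import libvirt

    xml = open('/tmp/instance-00000095.xml').read()  # the <domain> dumped above
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)  # make the domain persistent
    dom.create()               # boot it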
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.008 226109 DEBUG nova.compute.manager [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Preparing to wait for external event network-vif-plugged-9b1ce5f2-ce3a-4770-b68b-cd43556f8871 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.008 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.009 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.009 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.009 226109 DEBUG nova.virt.libvirt.vif [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:42:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1055944777',display_name='tempest-ServersTestJSON-server-1055944777',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1055944777',id=149,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-5s9q9u4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:42:57Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=2ac028a3-53d2-4e7d-91b6-cd4578cc36f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "address": "fa:16:3e:c8:22:23", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b1ce5f2-ce", "ovs_interfaceid": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.010 226109 DEBUG nova.network.os_vif_util [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "address": "fa:16:3e:c8:22:23", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b1ce5f2-ce", "ovs_interfaceid": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.010 226109 DEBUG nova.network.os_vif_util [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:22:23,bridge_name='br-int',has_traffic_filtering=True,id=9b1ce5f2-ce3a-4770-b68b-cd43556f8871,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b1ce5f2-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.011 226109 DEBUG os_vif [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:22:23,bridge_name='br-int',has_traffic_filtering=True,id=9b1ce5f2-ce3a-4770-b68b-cd43556f8871,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b1ce5f2-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.011 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.012 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.012 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.014 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.015 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b1ce5f2-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.015 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b1ce5f2-ce, col_values=(('external_ids', {'iface-id': '9b1ce5f2-ce3a-4770-b68b-cd43556f8871', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:22:23', 'vm-uuid': '2ac028a3-53d2-4e7d-91b6-cd4578cc36f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
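The AddPortCommand/DbSetCommand transaction above is issued over OVSDB via ovsdbapp; its effect is equivalent to the following ovs-vsctl invocation (a sketch for comparison only — Nova does not shell out here):

    import subprocess

    subprocess.check_call([
        'ovs-vsctl',
        '--may-exist', 'add-port', 'br-int', 'tap9b1ce5f2-ce',
        '--', 'set', 'Interface', 'tap9b1ce5f2-ce',
        'external_ids:iface-id=9b1ce5f2-ce3a-4770-b68b-cd43556f8871',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:c8:22:23',
        'external_ids:vm-uuid=2ac028a3-53d2-4e7d-91b6-cd4578cc36f9',
    ])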
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.017 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:01 compute-1 NetworkManager[49031]: <info>  [1765006981.0177] manager: (tap9b1ce5f2-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.022 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.023 226109 INFO os_vif [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:22:23,bridge_name='br-int',has_traffic_filtering=True,id=9b1ce5f2-ce3a-4770-b68b-cd43556f8871,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b1ce5f2-ce')
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.284 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.285 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.285 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No VIF found with MAC fa:16:3e:c8:22:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.286 226109 INFO nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Using config drive
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.311 226109 DEBUG nova.storage.rbd_utils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.452 226109 DEBUG nova.network.neutron [req-6ff66f4e-07a9-4dd0-9d52-c41de813efec req-82d45a24-dee0-4fb8-958f-b54c046336f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Updated VIF entry in instance network info cache for port 9b1ce5f2-ce3a-4770-b68b-cd43556f8871. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.453 226109 DEBUG nova.network.neutron [req-6ff66f4e-07a9-4dd0-9d52-c41de813efec req-82d45a24-dee0-4fb8-958f-b54c046336f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Updating instance_info_cache with network_info: [{"id": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "address": "fa:16:3e:c8:22:23", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b1ce5f2-ce", "ovs_interfaceid": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:01 compute-1 nova_compute[226101]: 2025-12-06 07:43:01.468 226109 DEBUG oslo_concurrency.lockutils [req-6ff66f4e-07a9-4dd0-9d52-c41de813efec req-82d45a24-dee0-4fb8-958f-b54c046336f5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
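The instance_info_cache update above carries the port's network_info as plain JSON. A minimal sketch, using a hypothetical trimmed-down copy of that blob, of pulling the device name, MAC and fixed IP out of such an entry:

    import json

    # Hypothetical, trimmed copy of the network_info entry logged above.
    raw = '''[{"id": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871",
               "address": "fa:16:3e:c8:22:23",
               "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                        "ips": [{"address": "10.100.0.4",
                                                 "type": "fixed"}]}]},
               "devname": "tap9b1ce5f2-ce"}]'''

    for vif in json.loads(raw):
        fixed = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["devname"], vif["address"], fixed)
    # -> tap9b1ce5f2-ce fa:16:3e:c8:22:23 ['10.100.0.4']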
Dec 06 07:43:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:01.662 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:01.662 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:01.663 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
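The three lockutils lines above are the standard acquire/held/released trace that oslo_concurrency emits around a named lock. A minimal sketch of the pattern; the function body here is hypothetical (the real ProcessMonitor inspects its child processes under this lock):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Work done under the lock. Entering and leaving this function
        # produces exactly the Acquiring/acquired/released DEBUG lines
        # seen above (lockutils.py:404/409/423).
        pass

    _check_child_processes()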
Dec 06 07:43:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:43:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:43:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:01.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:43:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:01 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:01.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
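The anonymous "HEAD / HTTP/1.0" 200 pairs from 192.168.122.100 and .102 look like load-balancer liveness probes against radosgw's beast frontend. A minimal sketch of an equivalent probe; the RGW host and port are assumptions, since the listening endpoint is not shown in the log:

    import http.client

    RGW_HOST, RGW_PORT = "192.168.122.101", 8080   # assumed endpoint

    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=2)
    conn.request("HEAD", "/")   # the probes above speak HTTP/1.0; 1.1 works too
    print(conn.getresponse().status)   # expect 200, as in the beast lines
    conn.close()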
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.073 226109 INFO nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Creating config drive at /var/lib/nova/instances/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9/disk.config
Dec 06 07:43:02 compute-1 podman[281953]: 2025-12-06 07:43:02.079406155 +0000 UTC m=+0.057293481 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.079 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaqznwoh9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:02 compute-1 podman[281952]: 2025-12-06 07:43:02.086375619 +0000 UTC m=+0.064554793 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 07:43:02 compute-1 podman[281954]: 2025-12-06 07:43:02.10577047 +0000 UTC m=+0.082268450 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:43:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2101538755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:02 compute-1 ceph-mon[81689]: pgmap v2624: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 8.2 MiB/s wr, 287 op/s
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.218 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaqznwoh9" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.282 226109 DEBUG nova.storage.rbd_utils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.286 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9/disk.config 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.443 226109 DEBUG oslo_concurrency.processutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9/disk.config 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.444 226109 INFO nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Deleting local config drive /var/lib/nova/instances/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9/disk.config because it was imported into RBD.
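The config-drive sequence above is three steps: build an ISO9660 image with mkisofs, import it into the vms RBD pool as <uuid>_disk.config, then delete the local file. A minimal sketch replaying the same commands with subprocess; the arguments are taken from the log, /tmp/tmpaqznwoh9 stands in for whatever staging directory holds the metadata files, and the host needs the openstack ceph keyring:

    import os
    import subprocess

    inst = "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # 1. Build the ISO as logged: Joliet + Rock Ridge, volume label config-2.
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
                    "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpaqznwoh9"],
                   check=True)

    # 2. Import it into the vms pool (format 2), as the rbd CLI line shows.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{inst}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)

    # 3. Drop the local copy once the import succeeded, as the driver does.
    os.unlink(iso)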
Dec 06 07:43:02 compute-1 kernel: tap9b1ce5f2-ce: entered promiscuous mode
Dec 06 07:43:02 compute-1 NetworkManager[49031]: <info>  [1765006982.5028] manager: (tap9b1ce5f2-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Dec 06 07:43:02 compute-1 ovn_controller[130279]: 2025-12-06T07:43:02Z|00577|binding|INFO|Claiming lport 9b1ce5f2-ce3a-4770-b68b-cd43556f8871 for this chassis.
Dec 06 07:43:02 compute-1 ovn_controller[130279]: 2025-12-06T07:43:02Z|00578|binding|INFO|9b1ce5f2-ce3a-4770-b68b-cd43556f8871: Claiming fa:16:3e:c8:22:23 10.100.0.4
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.503 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.510 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:22:23 10.100.0.4'], port_security=['fa:16:3e:c8:22:23 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2ac028a3-53d2-4e7d-91b6-cd4578cc36f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9b1ce5f2-ce3a-4770-b68b-cd43556f8871) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.512 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9b1ce5f2-ce3a-4770-b68b-cd43556f8871 in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 bound to our chassis
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.513 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec 06 07:43:02 compute-1 ovn_controller[130279]: 2025-12-06T07:43:02Z|00579|binding|INFO|Setting lport 9b1ce5f2-ce3a-4770-b68b-cd43556f8871 ovn-installed in OVS
Dec 06 07:43:02 compute-1 ovn_controller[130279]: 2025-12-06T07:43:02Z|00580|binding|INFO|Setting lport 9b1ce5f2-ce3a-4770-b68b-cd43556f8871 up in Southbound
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.523 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.527 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:02 compute-1 systemd-machined[190302]: New machine qemu-67-instance-00000095.
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.534 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f03ef4a5-7084-429d-b7de-1dc0986523e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:02 compute-1 systemd-udevd[282068]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:43:02 compute-1 NetworkManager[49031]: <info>  [1765006982.5470] device (tap9b1ce5f2-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:43:02 compute-1 NetworkManager[49031]: <info>  [1765006982.5477] device (tap9b1ce5f2-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:43:02 compute-1 systemd[1]: Started Virtual Machine qemu-67-instance-00000095.
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.563 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[db3833f7-99c1-4556-a999-c7fe8f2e0575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.566 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2a27a9-9a4c-4b15-a326-96b935f4a05b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.590 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.592 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[eca6d84e-d1e0-4d7c-8445-d705aed02e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.608 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[228eaea0-b5ad-4f51-9fd8-e484d6b8476d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713835, 'reachable_time': 30087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282080, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.624 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2515bb-22d4-49ad-aef0-b525174b8b36]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap35a27638-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713845, 'tstamp': 713845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282082, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap35a27638-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713847, 'tstamp': 713847}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282082, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
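The two privsep replies above are netlink dumps taken inside the ovnmeta-35a27638-... namespace: the veth tap35a27638-31 is up and carries 10.100.0.2/28 plus the 169.254.169.254/32 metadata address. A minimal sketch of the same check with iproute2 (run as root on this host; namespace and device names are from the log):

    import subprocess

    ns = "ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8"
    # Same information the agent pulled over netlink: link state and the
    # two addresses on the metadata veth.
    out = subprocess.run(["ip", "netns", "exec", ns,
                          "ip", "addr", "show", "dev", "tap35a27638-31"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)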
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.626 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:02 compute-1 nova_compute[226101]: 2025-12-06 07:43:02.628 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.629 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35a27638-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.629 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.630 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35a27638-30, col_values=(('external_ids', {'iface-id': '5e371956-96bf-49df-b4a8-97154044dc54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:02.630 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
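The three ovsdbapp commands above (DelPort from br-ex, AddPort on br-int, DbSet of external_ids:iface-id) are the agent making sure its metadata tap sits on the integration bridge with the right binding; both transactions report "no change" because the port is already in place. A sketch of the same transaction via ovsdbapp; the connection details are assumptions, and only the command names and arguments come from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket; the agent holds its own connection.
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap35a27638-30", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap35a27638-30", may_exist=True))
        txn.add(api.db_set("Interface", "tap35a27638-30",
                           ("external_ids",
                            {"iface-id": "5e371956-96bf-49df-b4a8-97154044dc54"})))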
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.114 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006983.1137078, 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.114 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] VM Started (Lifecycle Event)
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.135 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.139 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006983.116253, 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.139 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] VM Paused (Lifecycle Event)
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.155 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.158 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.176 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] During sync_power_state the instance has a pending task (spawning). Skip.
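The power-state sync above compares nova's stored state with what libvirt reports, using nova.compute.power_state codes (0 NOSTATE, 1 RUNNING, 3 PAUSED, 4 SHUTDOWN, 6 CRASHED, 7 SUSPENDED); the guest starts paused and is resumed once plugging completes, hence the transient DB 0 / VM 3 mismatch. A short sketch of the skip rule applied here (the helper is a condensation, not nova's code):

    # nova.compute.power_state constants, for reading these log lines.
    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    def sync_power_state(task_state, db_state, vm_state):
        # While a task such as 'spawning' is pending, the sync is skipped
        # rather than fighting the in-flight operation.
        if task_state is not None:
            return f"pending task ({task_state}), skip"
        return f"sync {POWER_STATE[db_state]} -> {POWER_STATE[vm_state]}"

    print(sync_power_state("spawning", 0, 3))   # matches the 07:43:03 lines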
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.245 226109 DEBUG nova.compute.manager [req-5bd7b2ae-cbe8-4fb6-b100-469a0cebda18 req-4bb15033-ac88-4bb6-8a85-9a8a2cd24668 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Received event network-vif-plugged-9b1ce5f2-ce3a-4770-b68b-cd43556f8871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.245 226109 DEBUG oslo_concurrency.lockutils [req-5bd7b2ae-cbe8-4fb6-b100-469a0cebda18 req-4bb15033-ac88-4bb6-8a85-9a8a2cd24668 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.246 226109 DEBUG oslo_concurrency.lockutils [req-5bd7b2ae-cbe8-4fb6-b100-469a0cebda18 req-4bb15033-ac88-4bb6-8a85-9a8a2cd24668 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.246 226109 DEBUG oslo_concurrency.lockutils [req-5bd7b2ae-cbe8-4fb6-b100-469a0cebda18 req-4bb15033-ac88-4bb6-8a85-9a8a2cd24668 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.246 226109 DEBUG nova.compute.manager [req-5bd7b2ae-cbe8-4fb6-b100-469a0cebda18 req-4bb15033-ac88-4bb6-8a85-9a8a2cd24668 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Processing event network-vif-plugged-9b1ce5f2-ce3a-4770-b68b-cd43556f8871 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.247 226109 DEBUG nova.compute.manager [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.249 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006983.2496085, 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.250 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] VM Resumed (Lifecycle Event)
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.252 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.254 226109 INFO nova.virt.libvirt.driver [-] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Instance spawned successfully.
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.254 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.270 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.275 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.278 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.279 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.279 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.280 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.280 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.281 226109 DEBUG nova.virt.libvirt.driver [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
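The six "Found default for ..." lines pin the hardware-bus choices made for this guest into its system_metadata (they reappear below as image_hw_* keys in the instance dump), so the devices stay stable across reboots even if nova's defaults later change. Condensed as data, with the persistence step sketched (the helper is a hypothetical condensation of _register_undefined_instance_details):

    # Defaults the driver registered for this q35/virtio guest, per the log.
    found_defaults = {"hw_cdrom_bus": "sata", "hw_disk_bus": "virtio",
                      "hw_input_bus": "usb", "hw_pointer_model": "usbtablet",
                      "hw_video_model": "virtio", "hw_vif_model": "virtio"}

    def register_undefined_instance_details(system_metadata, defaults):
        # Persist each default under image_<prop>, but never override a
        # value the image itself defined.
        for prop, value in defaults.items():
            system_metadata.setdefault(f"image_{prop}", value)
        return system_metadata

    print(register_undefined_instance_details({}, found_defaults))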
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.308 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.353 226109 INFO nova.compute.manager [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Took 6.24 seconds to spawn the instance on the hypervisor.
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.354 226109 DEBUG nova.compute.manager [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.367 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.444 226109 INFO nova.compute.manager [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Took 7.17 seconds to build instance.
Dec 06 07:43:03 compute-1 nova_compute[226101]: 2025-12-06 07:43:03.479 226109 DEBUG oslo_concurrency.lockutils [None req-039909db-cd10-4c53-ba29-499d452fe617 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:03.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:43:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:03.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:43:04 compute-1 ceph-mon[81689]: pgmap v2625: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 959 KiB/s rd, 8.2 MiB/s wr, 236 op/s
Dec 06 07:43:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:05 compute-1 ceph-mon[81689]: pgmap v2626: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 965 KiB/s rd, 8.2 MiB/s wr, 238 op/s
Dec 06 07:43:05 compute-1 nova_compute[226101]: 2025-12-06 07:43:05.333 226109 DEBUG nova.compute.manager [req-8134d7c0-52fc-4ad3-b9ee-f390bfbe6de0 req-e7682a11-aec3-41a4-bc1c-a06fc27cbd48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Received event network-vif-plugged-9b1ce5f2-ce3a-4770-b68b-cd43556f8871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:05 compute-1 nova_compute[226101]: 2025-12-06 07:43:05.334 226109 DEBUG oslo_concurrency.lockutils [req-8134d7c0-52fc-4ad3-b9ee-f390bfbe6de0 req-e7682a11-aec3-41a4-bc1c-a06fc27cbd48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:05 compute-1 nova_compute[226101]: 2025-12-06 07:43:05.336 226109 DEBUG oslo_concurrency.lockutils [req-8134d7c0-52fc-4ad3-b9ee-f390bfbe6de0 req-e7682a11-aec3-41a4-bc1c-a06fc27cbd48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:05 compute-1 nova_compute[226101]: 2025-12-06 07:43:05.336 226109 DEBUG oslo_concurrency.lockutils [req-8134d7c0-52fc-4ad3-b9ee-f390bfbe6de0 req-e7682a11-aec3-41a4-bc1c-a06fc27cbd48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:05 compute-1 nova_compute[226101]: 2025-12-06 07:43:05.336 226109 DEBUG nova.compute.manager [req-8134d7c0-52fc-4ad3-b9ee-f390bfbe6de0 req-e7682a11-aec3-41a4-bc1c-a06fc27cbd48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] No waiting events found dispatching network-vif-plugged-9b1ce5f2-ce3a-4770-b68b-cd43556f8871 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:05 compute-1 nova_compute[226101]: 2025-12-06 07:43:05.337 226109 WARNING nova.compute.manager [req-8134d7c0-52fc-4ad3-b9ee-f390bfbe6de0 req-e7682a11-aec3-41a4-bc1c-a06fc27cbd48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Received unexpected event network-vif-plugged-9b1ce5f2-ce3a-4770-b68b-cd43556f8871 for instance with vm_state active and task_state None.
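The WARNING above is benign: neutron sent network-vif-plugged twice for port 9b1ce5f2. The first copy (07:43:03) was consumed by the waiter the spawn had registered; this second copy arrives after the instance is already active, pop_instance_event finds nobody waiting, and nova just logs it. A miniature sketch of that registry pattern (names and structure are hypothetical, not nova's implementation):

    import threading

    _waiters = {}   # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(uuid, name):
        ev = threading.Event()
        _waiters[(uuid, name)] = ev
        return ev

    def pop_instance_event(uuid, name):
        ev = _waiters.pop((uuid, name), None)
        if ev is None:
            print(f"Received unexpected event {name} for instance {uuid}")
            return
        ev.set()   # wakes the thread blocked in wait_for_instance_event

    uuid = "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9"
    prepare_for_event(uuid, "network-vif-plugged")
    pop_instance_event(uuid, "network-vif-plugged")   # first copy: delivered
    pop_instance_event(uuid, "network-vif-plugged")   # second copy: warning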
Dec 06 07:43:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:43:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:43:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:05.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:43:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:05 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:05.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:06 compute-1 nova_compute[226101]: 2025-12-06 07:43:06.019 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:06 compute-1 nova_compute[226101]: 2025-12-06 07:43:06.598 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.169 226109 DEBUG oslo_concurrency.lockutils [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.170 226109 DEBUG oslo_concurrency.lockutils [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.170 226109 DEBUG oslo_concurrency.lockutils [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.171 226109 DEBUG oslo_concurrency.lockutils [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.171 226109 DEBUG oslo_concurrency.lockutils [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.172 226109 INFO nova.compute.manager [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Terminating instance
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.173 226109 DEBUG nova.compute.manager [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:43:07 compute-1 kernel: tap9b1ce5f2-ce (unregistering): left promiscuous mode
Dec 06 07:43:07 compute-1 NetworkManager[49031]: <info>  [1765006987.2295] device (tap9b1ce5f2-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:07 compute-1 ovn_controller[130279]: 2025-12-06T07:43:07Z|00581|binding|INFO|Releasing lport 9b1ce5f2-ce3a-4770-b68b-cd43556f8871 from this chassis (sb_readonly=0)
Dec 06 07:43:07 compute-1 ovn_controller[130279]: 2025-12-06T07:43:07Z|00582|binding|INFO|Setting lport 9b1ce5f2-ce3a-4770-b68b-cd43556f8871 down in Southbound
Dec 06 07:43:07 compute-1 ovn_controller[130279]: 2025-12-06T07:43:07Z|00583|binding|INFO|Removing iface tap9b1ce5f2-ce ovn-installed in OVS
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.254 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.260 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:22:23 10.100.0.4'], port_security=['fa:16:3e:c8:22:23 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2ac028a3-53d2-4e7d-91b6-cd4578cc36f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9b1ce5f2-ce3a-4770-b68b-cd43556f8871) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.261 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9b1ce5f2-ce3a-4770-b68b-cd43556f8871 in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 unbound from our chassis
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.262 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.277 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2808cd06-7dcf-4722-b8e8-3985162e924f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:07 compute-1 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000095.scope: Deactivated successfully.
Dec 06 07:43:07 compute-1 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000095.scope: Consumed 4.638s CPU time.
Dec 06 07:43:07 compute-1 systemd-machined[190302]: Machine qemu-67-instance-00000095 terminated.
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.306 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[777436b6-30b7-4ed3-b9f9-44b86ea0324d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.309 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[429aefdc-5078-404a-a432-77be4967f9f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.339 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[17df6bf7-6131-4162-8ff6-16cd679bbeb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.358 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[41d652aa-5bf8-401b-a7ad-6d239e99bfa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713835, 'reachable_time': 30087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282137, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.372 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[928da483-85b3-46e3-b504-52117b717606]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap35a27638-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713845, 'tstamp': 713845}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282138, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap35a27638-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713847, 'tstamp': 713847}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282138, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.374 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.376 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.380 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.382 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35a27638-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.382 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.383 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35a27638-30, col_values=(('external_ids', {'iface-id': '5e371956-96bf-49df-b4a8-97154044dc54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:07.383 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.393 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.398 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.409 226109 INFO nova.virt.libvirt.driver [-] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Instance destroyed successfully.
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.410 226109 DEBUG nova.objects.instance [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'resources' on Instance uuid 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.584 226109 DEBUG nova.virt.libvirt.vif [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:42:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1055944777',display_name='tempest-ServersTestJSON-server-1055944777',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-1055944777',id=149,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:43:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-5s9q9u4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:43:05Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=2ac028a3-53d2-4e7d-91b6-cd4578cc36f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "address": "fa:16:3e:c8:22:23", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b1ce5f2-ce", "ovs_interfaceid": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.585 226109 DEBUG nova.network.os_vif_util [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "address": "fa:16:3e:c8:22:23", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b1ce5f2-ce", "ovs_interfaceid": "9b1ce5f2-ce3a-4770-b68b-cd43556f8871", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.586 226109 DEBUG nova.network.os_vif_util [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:22:23,bridge_name='br-int',has_traffic_filtering=True,id=9b1ce5f2-ce3a-4770-b68b-cd43556f8871,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b1ce5f2-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.586 226109 DEBUG os_vif [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:22:23,bridge_name='br-int',has_traffic_filtering=True,id=9b1ce5f2-ce3a-4770-b68b-cd43556f8871,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b1ce5f2-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.589 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.590 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b1ce5f2-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.591 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.595 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:43:07 compute-1 nova_compute[226101]: 2025-12-06 07:43:07.598 226109 INFO os_vif [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:22:23,bridge_name='br-int',has_traffic_filtering=True,id=9b1ce5f2-ce3a-4770-b68b-cd43556f8871,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b1ce5f2-ce')
Dec 06 07:43:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:07.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:43:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:07.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:43:08 compute-1 nova_compute[226101]: 2025-12-06 07:43:08.020 226109 INFO nova.virt.libvirt.driver [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Deleting instance files /var/lib/nova/instances/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_del
Dec 06 07:43:08 compute-1 nova_compute[226101]: 2025-12-06 07:43:08.021 226109 INFO nova.virt.libvirt.driver [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Deletion of /var/lib/nova/instances/2ac028a3-53d2-4e7d-91b6-cd4578cc36f9_del complete
Dec 06 07:43:08 compute-1 ceph-mon[81689]: pgmap v2627: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 8.2 MiB/s wr, 282 op/s
Dec 06 07:43:08 compute-1 nova_compute[226101]: 2025-12-06 07:43:08.124 226109 INFO nova.compute.manager [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Took 0.95 seconds to destroy the instance on the hypervisor.
Dec 06 07:43:08 compute-1 nova_compute[226101]: 2025-12-06 07:43:08.124 226109 DEBUG oslo.service.loopingcall [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:43:08 compute-1 nova_compute[226101]: 2025-12-06 07:43:08.125 226109 DEBUG nova.compute.manager [-] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:43:08 compute-1 nova_compute[226101]: 2025-12-06 07:43:08.125 226109 DEBUG nova.network.neutron [-] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:43:08 compute-1 nova_compute[226101]: 2025-12-06 07:43:08.822 226109 DEBUG nova.network.neutron [-] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:08 compute-1 nova_compute[226101]: 2025-12-06 07:43:08.864 226109 INFO nova.compute.manager [-] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Took 0.74 seconds to deallocate network for instance.
Dec 06 07:43:08 compute-1 nova_compute[226101]: 2025-12-06 07:43:08.913 226109 DEBUG oslo_concurrency.lockutils [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:08 compute-1 nova_compute[226101]: 2025-12-06 07:43:08.913 226109 DEBUG oslo_concurrency.lockutils [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:08 compute-1 nova_compute[226101]: 2025-12-06 07:43:08.944 226109 DEBUG nova.compute.manager [req-9cbdf3a6-b785-4b47-b960-5f34e83795e5 req-53558149-f2e1-47fe-89ba-6fa7682513e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Received event network-vif-deleted-9b1ce5f2-ce3a-4770-b68b-cd43556f8871 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.002 226109 DEBUG oslo_concurrency.processutils [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3363148672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:43:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3363148672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:43:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:43:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2080084077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.455 226109 DEBUG oslo_concurrency.processutils [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.461 226109 DEBUG nova.compute.provider_tree [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.479 226109 DEBUG nova.scheduler.client.report [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.502 226109 DEBUG oslo_concurrency.lockutils [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.535 226109 INFO nova.scheduler.client.report [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Deleted allocations for instance 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.614 226109 DEBUG oslo_concurrency.lockutils [None req-8f408067-6031-4958-9013-7d0d87571224 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "2ac028a3-53d2-4e7d-91b6-cd4578cc36f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:09.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:09.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.874 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.874 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.899 226109 DEBUG nova.compute.manager [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:43:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.962 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.962 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.968 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:43:09 compute-1 nova_compute[226101]: 2025-12-06 07:43:09.968 226109 INFO nova.compute.claims [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:43:10 compute-1 ceph-mon[81689]: pgmap v2628: 305 pgs: 305 active+clean; 552 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 267 op/s
Dec 06 07:43:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2080084077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.087 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:10 compute-1 sudo[282193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:10 compute-1 sudo[282193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:10 compute-1 sudo[282193]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:10 compute-1 sudo[282237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:43:10 compute-1 sudo[282237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:10 compute-1 sudo[282237]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:10 compute-1 sudo[282262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:10 compute-1 sudo[282262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:10 compute-1 sudo[282262]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:10 compute-1 sudo[282287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:43:10 compute-1 sudo[282287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:43:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4142751679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.548 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.555 226109 DEBUG nova.compute.provider_tree [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.594 226109 DEBUG nova.scheduler.client.report [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.618 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.619 226109 DEBUG nova.compute.manager [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.670 226109 DEBUG nova.compute.manager [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.671 226109 DEBUG nova.network.neutron [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.701 226109 INFO nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.724 226109 DEBUG nova.compute.manager [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.836 226109 DEBUG nova.compute.manager [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.837 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.837 226109 INFO nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Creating image(s)
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.867 226109 DEBUG nova.storage.rbd_utils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.900 226109 DEBUG nova.storage.rbd_utils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.930 226109 DEBUG nova.storage.rbd_utils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:10 compute-1 nova_compute[226101]: 2025-12-06 07:43:10.934 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:10 compute-1 sudo[282287]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.004 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.005 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.006 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.007 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.035 226109 DEBUG nova.storage.rbd_utils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.039 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 119a621b-6198-42db-9a89-d73da6c2a2da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4142751679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:43:11 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.094 226109 DEBUG nova.policy [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e997a5eeee174b368a43ed8cb35fa1d0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f44ecb8bdc7e4692a299e29603301124', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.295 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 119a621b-6198-42db-9a89-d73da6c2a2da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.359 226109 DEBUG nova.storage.rbd_utils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] resizing rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.514 226109 DEBUG nova.objects.instance [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'migration_context' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.539 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.540 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Ensure instance console log exists: /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.541 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.541 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:11 compute-1 nova_compute[226101]: 2025-12-06 07:43:11.542 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:11.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:11.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:12 compute-1 ceph-mon[81689]: pgmap v2629: 305 pgs: 305 active+clean; 517 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 241 op/s
Dec 06 07:43:12 compute-1 nova_compute[226101]: 2025-12-06 07:43:12.363 226109 DEBUG nova.network.neutron [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Successfully created port: 70e780b6-88d2-47a1-99b8-a525dcb88b5d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:43:12 compute-1 nova_compute[226101]: 2025-12-06 07:43:12.591 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:12 compute-1 nova_compute[226101]: 2025-12-06 07:43:12.594 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.207 226109 DEBUG nova.network.neutron [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Successfully updated port: 70e780b6-88d2-47a1-99b8-a525dcb88b5d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.272 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.273 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.273 226109 DEBUG nova.network.neutron [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:43:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/812162387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:13 compute-1 ceph-mon[81689]: pgmap v2630: 305 pgs: 305 active+clean; 517 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 73 KiB/s wr, 107 op/s
Dec 06 07:43:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/700738085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:43:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.290 226109 DEBUG nova.compute.manager [req-0a1b9d52-4595-4b17-af4f-46fa7de1ca6b req-e6e3bb6f-8795-4d82-a43a-d44fd90a56d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-changed-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.290 226109 DEBUG nova.compute.manager [req-0a1b9d52-4595-4b17-af4f-46fa7de1ca6b req-e6e3bb6f-8795-4d82-a43a-d44fd90a56d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Refreshing instance network info cache due to event network-changed-70e780b6-88d2-47a1-99b8-a525dcb88b5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.291 226109 DEBUG oslo_concurrency.lockutils [req-0a1b9d52-4595-4b17-af4f-46fa7de1ca6b req-e6e3bb6f-8795-4d82-a43a-d44fd90a56d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.487 226109 DEBUG nova.network.neutron [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.610 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.610 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.610 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.611 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:43:13 compute-1 nova_compute[226101]: 2025-12-06 07:43:13.611 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:13.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:13.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:43:14 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2882213620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.049 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.140 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.141 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.141 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.144 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.144 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:43:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:43:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:43:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:43:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2882213620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3595080009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.305 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.306 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3957MB free_disk=20.760597229003906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.306 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.306 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.388 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance d954d607-525c-4edf-ab9e-56658dd2525a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.388 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance f50ab4f6-4d4c-488a-9793-0b9979a2e193 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.388 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 119a621b-6198-42db-9a89-d73da6c2a2da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.389 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.389 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.465 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.542 226109 DEBUG nova.network.neutron [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updating instance_info_cache with network_info: [{"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.587 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.588 226109 DEBUG nova.compute.manager [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Instance network_info: |[{"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.588 226109 DEBUG oslo_concurrency.lockutils [req-0a1b9d52-4595-4b17-af4f-46fa7de1ca6b req-e6e3bb6f-8795-4d82-a43a-d44fd90a56d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.589 226109 DEBUG nova.network.neutron [req-0a1b9d52-4595-4b17-af4f-46fa7de1ca6b req-e6e3bb6f-8795-4d82-a43a-d44fd90a56d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Refreshing network info cache for port 70e780b6-88d2-47a1-99b8-a525dcb88b5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.593 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Start _get_guest_xml network_info=[{"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.597 226109 WARNING nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.603 226109 DEBUG nova.virt.libvirt.host [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.604 226109 DEBUG nova.virt.libvirt.host [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.606 226109 DEBUG nova.virt.libvirt.host [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.607 226109 DEBUG nova.virt.libvirt.host [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.608 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.609 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.609 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.610 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.610 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.611 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.611 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.611 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.612 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.612 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.613 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.613 226109 DEBUG nova.virt.hardware [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.617 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:43:14 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/992574613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.924 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.929 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.948 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.982 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:43:14 compute-1 nova_compute[226101]: 2025-12-06 07:43:14.983 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:43:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4227087763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.084 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.110 226109 DEBUG nova.storage.rbd_utils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.115 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/992574613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:15 compute-1 ceph-mon[81689]: pgmap v2631: 305 pgs: 305 active+clean; 502 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 873 KiB/s wr, 127 op/s
Dec 06 07:43:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4227087763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:43:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3521867990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.562 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.564 226109 DEBUG nova.virt.libvirt.vif [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:43:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-311361617',display_name='tempest-ServerStableDeviceRescueTest-server-311361617',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-311361617',id=150,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-0xqzu91y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:43:10Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=119a621b-6198-42db-9a89-d73da6c2a2da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.565 226109 DEBUG nova.network.os_vif_util [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.566 226109 DEBUG nova.network.os_vif_util [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:97:b4,bridge_name='br-int',has_traffic_filtering=True,id=70e780b6-88d2-47a1-99b8-a525dcb88b5d,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e780b6-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.567 226109 DEBUG nova.objects.instance [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'pci_devices' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.596 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:43:15 compute-1 nova_compute[226101]:   <uuid>119a621b-6198-42db-9a89-d73da6c2a2da</uuid>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   <name>instance-00000096</name>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-311361617</nova:name>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:43:14</nova:creationTime>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <nova:user uuid="e997a5eeee174b368a43ed8cb35fa1d0">tempest-ServerStableDeviceRescueTest-1830949011-project-member</nova:user>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <nova:project uuid="f44ecb8bdc7e4692a299e29603301124">tempest-ServerStableDeviceRescueTest-1830949011</nova:project>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <nova:port uuid="70e780b6-88d2-47a1-99b8-a525dcb88b5d">
Dec 06 07:43:15 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <system>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <entry name="serial">119a621b-6198-42db-9a89-d73da6c2a2da</entry>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <entry name="uuid">119a621b-6198-42db-9a89-d73da6c2a2da</entry>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     </system>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   <os>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   </os>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   <features>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   </features>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/119a621b-6198-42db-9a89-d73da6c2a2da_disk">
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       </source>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/119a621b-6198-42db-9a89-d73da6c2a2da_disk.config">
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       </source>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:43:15 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:5d:97:b4"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <target dev="tap70e780b6-88"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/console.log" append="off"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <video>
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     </video>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:43:15 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:43:15 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:43:15 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:43:15 compute-1 nova_compute[226101]: </domain>
Dec 06 07:43:15 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.597 226109 DEBUG nova.compute.manager [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Preparing to wait for external event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.598 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.598 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.598 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.599 226109 DEBUG nova.virt.libvirt.vif [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:43:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-311361617',display_name='tempest-ServerStableDeviceRescueTest-server-311361617',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-311361617',id=150,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-0xqzu91y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:43:10Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=119a621b-6198-42db-9a89-d73da6c2a2da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.599 226109 DEBUG nova.network.os_vif_util [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.600 226109 DEBUG nova.network.os_vif_util [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:97:b4,bridge_name='br-int',has_traffic_filtering=True,id=70e780b6-88d2-47a1-99b8-a525dcb88b5d,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e780b6-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.600 226109 DEBUG os_vif [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:97:b4,bridge_name='br-int',has_traffic_filtering=True,id=70e780b6-88d2-47a1-99b8-a525dcb88b5d,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e780b6-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.601 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.601 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.602 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.604 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.604 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70e780b6-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.604 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap70e780b6-88, col_values=(('external_ids', {'iface-id': '70e780b6-88d2-47a1-99b8-a525dcb88b5d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:97:b4', 'vm-uuid': '119a621b-6198-42db-9a89-d73da6c2a2da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.606 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:15 compute-1 NetworkManager[49031]: <info>  [1765006995.6070] manager: (tap70e780b6-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.609 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.611 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.612 226109 INFO os_vif [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:97:b4,bridge_name='br-int',has_traffic_filtering=True,id=70e780b6-88d2-47a1-99b8-a525dcb88b5d,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e780b6-88')
Dec 06 07:43:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:15.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:15.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.975 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.975 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.976 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No VIF found with MAC fa:16:3e:5d:97:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:43:15 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.976 226109 INFO nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Using config drive
Dec 06 07:43:16 compute-1 nova_compute[226101]: 2025-12-06 07:43:15.999 226109 DEBUG nova.storage.rbd_utils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3521867990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:16 compute-1 nova_compute[226101]: 2025-12-06 07:43:16.521 226109 DEBUG nova.network.neutron [req-0a1b9d52-4595-4b17-af4f-46fa7de1ca6b req-e6e3bb6f-8795-4d82-a43a-d44fd90a56d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updated VIF entry in instance network info cache for port 70e780b6-88d2-47a1-99b8-a525dcb88b5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:43:16 compute-1 nova_compute[226101]: 2025-12-06 07:43:16.521 226109 DEBUG nova.network.neutron [req-0a1b9d52-4595-4b17-af4f-46fa7de1ca6b req-e6e3bb6f-8795-4d82-a43a-d44fd90a56d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updating instance_info_cache with network_info: [{"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:16 compute-1 nova_compute[226101]: 2025-12-06 07:43:16.724 226109 DEBUG oslo_concurrency.lockutils [req-0a1b9d52-4595-4b17-af4f-46fa7de1ca6b req-e6e3bb6f-8795-4d82-a43a-d44fd90a56d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.112 226109 INFO nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Creating config drive at /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.117 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpabbrp5vf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.248 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpabbrp5vf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.275 226109 DEBUG nova.storage.rbd_utils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.279 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config 119a621b-6198-42db-9a89-d73da6c2a2da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.418 226109 DEBUG oslo_concurrency.processutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config 119a621b-6198-42db-9a89-d73da6c2a2da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.419 226109 INFO nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Deleting local config drive /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config because it was imported into RBD.
Dec 06 07:43:17 compute-1 kernel: tap70e780b6-88: entered promiscuous mode
Dec 06 07:43:17 compute-1 NetworkManager[49031]: <info>  [1765006997.4691] manager: (tap70e780b6-88): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Dec 06 07:43:17 compute-1 ovn_controller[130279]: 2025-12-06T07:43:17Z|00584|binding|INFO|Claiming lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d for this chassis.
Dec 06 07:43:17 compute-1 ovn_controller[130279]: 2025-12-06T07:43:17Z|00585|binding|INFO|70e780b6-88d2-47a1-99b8-a525dcb88b5d: Claiming fa:16:3e:5d:97:b4 10.100.0.9
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.469 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:17 compute-1 ovn_controller[130279]: 2025-12-06T07:43:17Z|00586|binding|INFO|Setting lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d ovn-installed in OVS
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.490 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.495 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:17 compute-1 systemd-udevd[282689]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:43:17 compute-1 systemd-machined[190302]: New machine qemu-68-instance-00000096.
Dec 06 07:43:17 compute-1 NetworkManager[49031]: <info>  [1765006997.5169] device (tap70e780b6-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:43:17 compute-1 NetworkManager[49031]: <info>  [1765006997.5180] device (tap70e780b6-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:43:17 compute-1 ovn_controller[130279]: 2025-12-06T07:43:17Z|00587|binding|INFO|Setting lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d up in Southbound
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.521 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:97:b4 10.100.0.9'], port_security=['fa:16:3e:5d:97:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '119a621b-6198-42db-9a89-d73da6c2a2da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=70e780b6-88d2-47a1-99b8-a525dcb88b5d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.522 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 70e780b6-88d2-47a1-99b8-a525dcb88b5d in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 bound to our chassis
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.524 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:43:17 compute-1 systemd[1]: Started Virtual Machine qemu-68-instance-00000096.
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.535 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7b9dbc-fe89-4fb4-aadc-19cda8b24a93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.536 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d1a17d6-51 in ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.538 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d1a17d6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.538 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ef77cbd9-d54f-4fa5-b3a7-e56d0d7ab053]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.539 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[07112b1e-22c2-4d62-ab2b-81fb85f7375a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.550 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b30d01-0d01-4dc5-be2f-566e2d87d174]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.575 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[11e1c210-f70a-419d-a79d-356b9637e98b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.596 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.604 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[162fb7f9-9289-4f4f-aafb-8e294f88b691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 NetworkManager[49031]: <info>  [1765006997.6168] manager: (tap6d1a17d6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/265)
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.617 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[44a40f41-9538-4174-a89b-bd68183ccd18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.645 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[816de619-e683-4454-9f49-da85f7ec04f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.649 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[acdeb7f2-12f6-43ca-9aaa-b9a7acb60129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 NetworkManager[49031]: <info>  [1765006997.6756] device (tap6d1a17d6-50): carrier: link connected
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.682 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[1cad2470-fe9f-47c4-82da-f3d2267e0bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.697 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[312b3a51-106f-4593-a8ad-d2c8aad4be94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727717, 'reachable_time': 36605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282725, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.711 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[580ac309-adfe-4a74-8af6-7739674d300a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:a2f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727717, 'tstamp': 727717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282726, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.726 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8ca89d-269f-4e5d-a3ae-9c6e9b6db7a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727717, 'reachable_time': 36605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282727, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.753 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5c05d9e0-f7f7-4c66-a81f-d0053192da09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.805 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[90bc274c-8231-485a-8aaf-c7ca0712d285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.806 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.806 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.806 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.808 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:17 compute-1 NetworkManager[49031]: <info>  [1765006997.8094] manager: (tap6d1a17d6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Dec 06 07:43:17 compute-1 kernel: tap6d1a17d6-50: entered promiscuous mode
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.810 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.811 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.812 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:17 compute-1 ovn_controller[130279]: 2025-12-06T07:43:17Z|00588|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:43:17 compute-1 nova_compute[226101]: 2025-12-06 07:43:17.827 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.828 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.828 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e4cb7ee3-1c63-4c8c-b039-44dbffe728e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.829 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:43:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:17.830 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'env', 'PROCESS_TAG=haproxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d1a17d6-5e44-40b7-832a-81cb86c02e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:43:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:17.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:43:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:17.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:43:18 compute-1 podman[282772]: 2025-12-06 07:43:18.177162815 +0000 UTC m=+0.024306493 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:43:18 compute-1 ceph-mon[81689]: pgmap v2632: 305 pgs: 305 active+clean; 564 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.0 MiB/s wr, 226 op/s
Dec 06 07:43:18 compute-1 nova_compute[226101]: 2025-12-06 07:43:18.720 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006998.7201676, 119a621b-6198-42db-9a89-d73da6c2a2da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:18 compute-1 nova_compute[226101]: 2025-12-06 07:43:18.721 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] VM Started (Lifecycle Event)
Dec 06 07:43:18 compute-1 nova_compute[226101]: 2025-12-06 07:43:18.741 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:18 compute-1 nova_compute[226101]: 2025-12-06 07:43:18.745 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765006998.7202969, 119a621b-6198-42db-9a89-d73da6c2a2da => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:18 compute-1 nova_compute[226101]: 2025-12-06 07:43:18.746 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] VM Paused (Lifecycle Event)
Dec 06 07:43:18 compute-1 nova_compute[226101]: 2025-12-06 07:43:18.766 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:18 compute-1 nova_compute[226101]: 2025-12-06 07:43:18.768 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:43:18 compute-1 nova_compute[226101]: 2025-12-06 07:43:18.787 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:43:18 compute-1 nova_compute[226101]: 2025-12-06 07:43:18.983 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:18 compute-1 nova_compute[226101]: 2025-12-06 07:43:18.984 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:43:19 compute-1 nova_compute[226101]: 2025-12-06 07:43:19.414 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:43:19 compute-1 nova_compute[226101]: 2025-12-06 07:43:19.415 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:43:19 compute-1 nova_compute[226101]: 2025-12-06 07:43:19.415 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:43:19 compute-1 podman[282772]: 2025-12-06 07:43:19.423629029 +0000 UTC m=+1.270772687 container create f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 07:43:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1009578427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:19 compute-1 ceph-mon[81689]: pgmap v2633: 305 pgs: 305 active+clean; 564 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.3 MiB/s wr, 204 op/s
Dec 06 07:43:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1495926152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:19 compute-1 systemd[1]: Started libpod-conmon-f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc.scope.
Dec 06 07:43:19 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:43:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e846533906e2851d023267b565f43bf919ca7c6f4bfaf4a2eb09fddc607c45/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:19 compute-1 podman[282772]: 2025-12-06 07:43:19.551347675 +0000 UTC m=+1.398491353 container init f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:43:19 compute-1 podman[282772]: 2025-12-06 07:43:19.557128827 +0000 UTC m=+1.404272485 container start f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:43:19 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[282815]: [NOTICE]   (282819) : New worker (282821) forked
Dec 06 07:43:19 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[282815]: [NOTICE]   (282819) : Loading success.
Dec 06 07:43:19 compute-1 sudo[282830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:19 compute-1 sudo[282830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:19 compute-1 sudo[282830]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:19 compute-1 sudo[282855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:43:19 compute-1 sudo[282855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:19 compute-1 sudo[282855]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:19.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:43:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:19.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:43:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4055792314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.608 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.623 226109 DEBUG nova.compute.manager [req-0bdbd1df-57d2-40aa-a125-b2f728970f4d req-7dd04632-4da2-4575-845a-44d53e096ee1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.623 226109 DEBUG oslo_concurrency.lockutils [req-0bdbd1df-57d2-40aa-a125-b2f728970f4d req-7dd04632-4da2-4575-845a-44d53e096ee1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.623 226109 DEBUG oslo_concurrency.lockutils [req-0bdbd1df-57d2-40aa-a125-b2f728970f4d req-7dd04632-4da2-4575-845a-44d53e096ee1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.624 226109 DEBUG oslo_concurrency.lockutils [req-0bdbd1df-57d2-40aa-a125-b2f728970f4d req-7dd04632-4da2-4575-845a-44d53e096ee1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.624 226109 DEBUG nova.compute.manager [req-0bdbd1df-57d2-40aa-a125-b2f728970f4d req-7dd04632-4da2-4575-845a-44d53e096ee1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Processing event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.624 226109 DEBUG nova.compute.manager [req-0bdbd1df-57d2-40aa-a125-b2f728970f4d req-7dd04632-4da2-4575-845a-44d53e096ee1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.624 226109 DEBUG oslo_concurrency.lockutils [req-0bdbd1df-57d2-40aa-a125-b2f728970f4d req-7dd04632-4da2-4575-845a-44d53e096ee1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.624 226109 DEBUG oslo_concurrency.lockutils [req-0bdbd1df-57d2-40aa-a125-b2f728970f4d req-7dd04632-4da2-4575-845a-44d53e096ee1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.624 226109 DEBUG oslo_concurrency.lockutils [req-0bdbd1df-57d2-40aa-a125-b2f728970f4d req-7dd04632-4da2-4575-845a-44d53e096ee1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.625 226109 DEBUG nova.compute.manager [req-0bdbd1df-57d2-40aa-a125-b2f728970f4d req-7dd04632-4da2-4575-845a-44d53e096ee1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] No waiting events found dispatching network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.625 226109 WARNING nova.compute.manager [req-0bdbd1df-57d2-40aa-a125-b2f728970f4d req-7dd04632-4da2-4575-845a-44d53e096ee1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received unexpected event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d for instance with vm_state building and task_state spawning.
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.625 226109 DEBUG nova.compute.manager [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.628 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007000.6285877, 119a621b-6198-42db-9a89-d73da6c2a2da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.628 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] VM Resumed (Lifecycle Event)
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.630 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.633 226109 INFO nova.virt.libvirt.driver [-] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Instance spawned successfully.
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.633 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.654 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.659 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.663 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.663 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.664 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.664 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.665 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.666 226109 DEBUG nova.virt.libvirt.driver [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.706 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.772 226109 INFO nova.compute.manager [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Took 9.94 seconds to spawn the instance on the hypervisor.
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.773 226109 DEBUG nova.compute.manager [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.899 226109 INFO nova.compute.manager [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Took 10.96 seconds to build instance.
Dec 06 07:43:20 compute-1 nova_compute[226101]: 2025-12-06 07:43:20.917 226109 DEBUG oslo_concurrency.lockutils [None req-90162da4-ddb8-4eee-af35-2a1d45b63937 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1147296475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:21 compute-1 ceph-mon[81689]: pgmap v2634: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.4 MiB/s wr, 260 op/s
Dec 06 07:43:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1758460467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:21.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:22 compute-1 nova_compute[226101]: 2025-12-06 07:43:22.407 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006987.40659, 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:22 compute-1 nova_compute[226101]: 2025-12-06 07:43:22.408 226109 INFO nova.compute.manager [-] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] VM Stopped (Lifecycle Event)
Dec 06 07:43:22 compute-1 nova_compute[226101]: 2025-12-06 07:43:22.438 226109 DEBUG nova.compute.manager [None req-7d161ad1-a6df-4b82-9174-c566c6977a57 - - - - - -] [instance: 2ac028a3-53d2-4e7d-91b6-cd4578cc36f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:22 compute-1 nova_compute[226101]: 2025-12-06 07:43:22.536 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Updating instance_info_cache with network_info: [{"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:22 compute-1 nova_compute[226101]: 2025-12-06 07:43:22.553 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-f50ab4f6-4d4c-488a-9793-0b9979a2e193" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:43:22 compute-1 nova_compute[226101]: 2025-12-06 07:43:22.554 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:43:22 compute-1 nova_compute[226101]: 2025-12-06 07:43:22.554 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:22 compute-1 nova_compute[226101]: 2025-12-06 07:43:22.554 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:43:22 compute-1 nova_compute[226101]: 2025-12-06 07:43:22.598 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3041811195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
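The dispatch line above shows client.openstack issuing {"prefix": "df", "format": "json"} to the monitor; this is the capacity poll that the OpenStack services repeat throughout this log. A sketch of the same query through the rados Python bindings, assuming /etc/ceph/ceph.conf and a keyring for client.openstack are readable on the caller's host:

    import json
    import rados

    # Connect as the same identity that appears in the mon dispatch line.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    # mon_command takes the JSON command string plus an (empty) input buffer
    # and returns (ret, outbuf, outs).
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    print(json.loads(outbuf)["stats"])  # total/used/avail bytes, cf. the pgmap lines
    cluster.shutdown()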
Dec 06 07:43:23 compute-1 nova_compute[226101]: 2025-12-06 07:43:23.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:23 compute-1 nova_compute[226101]: 2025-12-06 07:43:23.705 226109 DEBUG nova.compute.manager [None req-c78eb8ff-d848-4698-8d87-e066673606be e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:23 compute-1 nova_compute[226101]: 2025-12-06 07:43:23.761 226109 INFO nova.compute.manager [None req-c78eb8ff-d848-4698-8d87-e066673606be e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] instance snapshotting
Dec 06 07:43:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:43:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:23.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:43:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:23.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:24 compute-1 nova_compute[226101]: 2025-12-06 07:43:24.109 226109 INFO nova.virt.libvirt.driver [None req-c78eb8ff-d848-4698-8d87-e066673606be e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Beginning live snapshot process
Dec 06 07:43:24 compute-1 nova_compute[226101]: 2025-12-06 07:43:24.250 226109 DEBUG nova.virt.libvirt.imagebackend [None req-c78eb8ff-d848-4698-8d87-e066673606be e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:43:24 compute-1 nova_compute[226101]: 2025-12-06 07:43:24.468 226109 DEBUG nova.storage.rbd_utils [None req-c78eb8ff-d848-4698-8d87-e066673606be e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] creating snapshot(b35b918bed4c404583e861884c471303) on rbd image(119a621b-6198-42db-9a89-d73da6c2a2da_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:43:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:25 compute-1 ceph-mon[81689]: pgmap v2635: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 233 op/s
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.421 226109 DEBUG oslo_concurrency.lockutils [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.422 226109 DEBUG oslo_concurrency.lockutils [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.444 226109 INFO nova.compute.manager [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Detaching volume df0e5c71-5906-44fa-bff0-b66876c47ac5
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.570 226109 INFO nova.virt.block_device [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Attempting to driver detach volume df0e5c71-5906-44fa-bff0-b66876c47ac5 from mountpoint /dev/vdb
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.577 226109 DEBUG nova.virt.libvirt.driver [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Attempting to detach device vdb from instance f50ab4f6-4d4c-488a-9793-0b9979a2e193 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.577 226109 DEBUG nova.virt.libvirt.guest [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:43:25 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-df0e5c71-5906-44fa-bff0-b66876c47ac5">
Dec 06 07:43:25 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]:   </source>
Dec 06 07:43:25 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]:   <serial>df0e5c71-5906-44fa-bff0-b66876c47ac5</serial>
Dec 06 07:43:25 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]: </disk>
Dec 06 07:43:25 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.585 226109 INFO nova.virt.libvirt.driver [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully detached device vdb from instance f50ab4f6-4d4c-488a-9793-0b9979a2e193 from the persistent domain config.
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.585 226109 DEBUG nova.virt.libvirt.driver [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f50ab4f6-4d4c-488a-9793-0b9979a2e193 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.586 226109 DEBUG nova.virt.libvirt.guest [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:43:25 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-df0e5c71-5906-44fa-bff0-b66876c47ac5">
Dec 06 07:43:25 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]:   </source>
Dec 06 07:43:25 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]:   <serial>df0e5c71-5906-44fa-bff0-b66876c47ac5</serial>
Dec 06 07:43:25 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:43:25 compute-1 nova_compute[226101]: </disk>
Dec 06 07:43:25 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
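Nova logs the same <disk> element twice because the detach is performed in two steps: first against the persistent domain definition, then against the live domain, where completion is only confirmed by the DeviceRemovedEvent a few lines below. A sketch of the equivalent libvirt calls; the domain name is an assumed mapping to the machine-qemu-65-instance-0000008d scope that terminates later in this log, and the XML is trimmed to the elements libvirt needs to identify the device.

    import libvirt

    disk_xml = """
    <disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-df0e5c71-5906-44fa-bff0-b66876c47ac5"/>
      <target dev="vdb" bus="virtio"/>
    </disk>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000008d")  # assumed name for f50ab4f6-...
    # Step 1: remove the disk from the persistent config only.
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    # Step 2: request the live detach; the guest acknowledges asynchronously
    # via a DEVICE_REMOVED event (virtio-disk1 in the driver lines below).
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)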
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.611 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.682 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765007005.6818042, f50ab4f6-4d4c-488a-9793-0b9979a2e193 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.685 226109 DEBUG nova.virt.libvirt.driver [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f50ab4f6-4d4c-488a-9793-0b9979a2e193 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.688 226109 INFO nova.virt.libvirt.driver [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully detached device vdb from instance f50ab4f6-4d4c-488a-9793-0b9979a2e193 from the live domain config.
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.880 226109 DEBUG nova.objects.instance [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'flavor' on Instance uuid f50ab4f6-4d4c-488a-9793-0b9979a2e193 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:25.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:43:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:25.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:43:25 compute-1 nova_compute[226101]: 2025-12-06 07:43:25.931 226109 DEBUG oslo_concurrency.lockutils [None req-8d998127-e0f0-4eee-adf4-ffd0c651ee1a 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
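The Acquiring / acquired / "released" triplet around the detach (waited 0.001s, held 0.509s) comes from oslo_concurrency.lockutils, which Nova uses to serialize operations on one instance UUID; even the do_detach_volume name in the lock messages is the decorated inner function. A minimal sketch of the same pattern, in-process locking only as in the log:

    from oslo_concurrency import lockutils

    def detach_volume(instance_uuid):
        # synchronized() emits exactly the Acquiring / acquired / "released"
        # lines above, including the waited/held timings.
        @lockutils.synchronized(instance_uuid)
        def do_detach_volume():
            pass  # driver detach and block-device cleanup run under the lock
        do_detach_volume()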
Dec 06 07:43:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e329 e329: 3 total, 3 up, 3 in
Dec 06 07:43:26 compute-1 ceph-mon[81689]: pgmap v2636: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 267 op/s
Dec 06 07:43:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2786445463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:26 compute-1 ceph-mon[81689]: osdmap e329: 3 total, 3 up, 3 in
Dec 06 07:43:26 compute-1 nova_compute[226101]: 2025-12-06 07:43:26.096 226109 DEBUG nova.storage.rbd_utils [None req-c78eb8ff-d848-4698-8d87-e066673606be e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] cloning vms/119a621b-6198-42db-9a89-d73da6c2a2da_disk@b35b918bed4c404583e861884c471303 to images/de1696d2-fc5b-497d-a09d-4993ee316ff9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:43:26 compute-1 nova_compute[226101]: 2025-12-06 07:43:26.214 226109 DEBUG nova.storage.rbd_utils [None req-c78eb8ff-d848-4698-8d87-e066673606be e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] flattening images/de1696d2-fc5b-497d-a09d-4993ee316ff9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
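The rbd_utils lines above trace Nova's live-snapshot path for an RBD-backed instance: snapshot the disk in the vms pool, clone that snapshot into the images pool, then flatten the clone so it stops referencing its parent (the source snapshot is removed a few seconds later). A sketch of the same sequence with the rbd Python bindings; names follow the log, error handling is omitted, and protect_snap is included because cloning unprotected snapshots requires clone v2.

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    vms, images = cluster.open_ioctx("vms"), cluster.open_ioctx("images")

    src = rbd.Image(vms, "119a621b-6198-42db-9a89-d73da6c2a2da_disk")
    src.create_snap("b35b918bed4c404583e861884c471303")
    src.protect_snap("b35b918bed4c404583e861884c471303")
    rbd.RBD().clone(vms, "119a621b-6198-42db-9a89-d73da6c2a2da_disk",
                    "b35b918bed4c404583e861884c471303",
                    images, "de1696d2-fc5b-497d-a09d-4993ee316ff9")
    # Flatten copies the parent data into the clone, so the snapshot can be
    # unprotected and removed afterwards, as the remove_snap line below shows.
    rbd.Image(images, "de1696d2-fc5b-497d-a09d-4993ee316ff9").flatten()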
Dec 06 07:43:26 compute-1 nova_compute[226101]: 2025-12-06 07:43:26.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:26 compute-1 nova_compute[226101]: 2025-12-06 07:43:26.850 226109 DEBUG oslo_concurrency.lockutils [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:26 compute-1 nova_compute[226101]: 2025-12-06 07:43:26.850 226109 DEBUG oslo_concurrency.lockutils [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:26 compute-1 nova_compute[226101]: 2025-12-06 07:43:26.850 226109 DEBUG oslo_concurrency.lockutils [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:26 compute-1 nova_compute[226101]: 2025-12-06 07:43:26.852 226109 DEBUG oslo_concurrency.lockutils [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:26 compute-1 nova_compute[226101]: 2025-12-06 07:43:26.853 226109 DEBUG oslo_concurrency.lockutils [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:26 compute-1 nova_compute[226101]: 2025-12-06 07:43:26.854 226109 INFO nova.compute.manager [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Terminating instance
Dec 06 07:43:26 compute-1 nova_compute[226101]: 2025-12-06 07:43:26.854 226109 DEBUG nova.compute.manager [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:43:27 compute-1 kernel: tap708d0bd8-6c (unregistering): left promiscuous mode
Dec 06 07:43:27 compute-1 NetworkManager[49031]: <info>  [1765007007.2447] device (tap708d0bd8-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:43:27 compute-1 ovn_controller[130279]: 2025-12-06T07:43:27Z|00589|binding|INFO|Releasing lport 708d0bd8-6c04-48c8-80e4-9ccd10704179 from this chassis (sb_readonly=0)
Dec 06 07:43:27 compute-1 ovn_controller[130279]: 2025-12-06T07:43:27Z|00590|binding|INFO|Setting lport 708d0bd8-6c04-48c8-80e4-9ccd10704179 down in Southbound
Dec 06 07:43:27 compute-1 ovn_controller[130279]: 2025-12-06T07:43:27Z|00591|binding|INFO|Removing iface tap708d0bd8-6c ovn-installed in OVS
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.302 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.305 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:27.307 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:83:92 10.100.0.12'], port_security=['fa:16:3e:c0:83:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f50ab4f6-4d4c-488a-9793-0b9979a2e193', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a82c9b7f-b22a-4f20-92eb-ef47963df970', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a93df87f-d2df-4d3a-b692-98bba32f2fe1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=708d0bd8-6c04-48c8-80e4-9ccd10704179) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:43:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:27.309 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 708d0bd8-6c04-48c8-80e4-9ccd10704179 in datapath 3c5d4817-c3d5-45fc-9890-418e779bacb2 unbound from our chassis
Dec 06 07:43:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:27.310 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c5d4817-c3d5-45fc-9890-418e779bacb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:43:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:27.312 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[02bc2d73-fcc2-4cfc-bc02-8453f78389be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:27.313 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 namespace which is not needed anymore
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.319 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:27 compute-1 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Dec 06 07:43:27 compute-1 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008d.scope: Consumed 17.554s CPU time.
Dec 06 07:43:27 compute-1 systemd-machined[190302]: Machine qemu-65-instance-0000008d terminated.
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.491 226109 INFO nova.virt.libvirt.driver [-] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Instance destroyed successfully.
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.491 226109 DEBUG nova.objects.instance [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'resources' on Instance uuid f50ab4f6-4d4c-488a-9793-0b9979a2e193 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.504 226109 DEBUG nova.virt.libvirt.vif [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:41:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-838297842',display_name='tempest-AttachVolumeNegativeTest-server-838297842',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-838297842',id=141,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGlJuOvECyiA5NWlLIDOjDJWeZ15OCRR+GrIfRd1eb2YqFJmiHzUPd3GlF+M7lFiDx2HKicZQSeYWxrij87um/7Lq1h64lSuGI60htEXEcY7+qwoIwHqQQuRAT9NPhIr3w==',key_name='tempest-keypair-1389259306',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:41:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-z6898ou0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:41:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=f50ab4f6-4d4c-488a-9793-0b9979a2e193,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.505 226109 DEBUG nova.network.os_vif_util [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "address": "fa:16:3e:c0:83:92", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap708d0bd8-6c", "ovs_interfaceid": "708d0bd8-6c04-48c8-80e4-9ccd10704179", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.505 226109 DEBUG nova.network.os_vif_util [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:83:92,bridge_name='br-int',has_traffic_filtering=True,id=708d0bd8-6c04-48c8-80e4-9ccd10704179,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap708d0bd8-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.506 226109 DEBUG os_vif [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:83:92,bridge_name='br-int',has_traffic_filtering=True,id=708d0bd8-6c04-48c8-80e4-9ccd10704179,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap708d0bd8-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.507 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.507 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap708d0bd8-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.508 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.510 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:27 compute-1 nova_compute[226101]: 2025-12-06 07:43:27.512 226109 INFO os_vif [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:83:92,bridge_name='br-int',has_traffic_filtering=True,id=708d0bd8-6c04-48c8-80e4-9ccd10704179,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap708d0bd8-6c')
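The DelPortCommand transaction a few lines up is os-vif's ovsdbapp client removing the tap from br-int. The boilerplate below is an approximate sketch and may differ across ovsdbapp releases; the OVSDB socket path is an assumption.

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Same operation the transaction log shows: delete the port if it exists.
    api.del_port("tap708d0bd8-6c", bridge="br-int", if_exists=True).execute(
        check_error=True)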
Dec 06 07:43:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:43:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:27.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:43:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:27.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:28 compute-1 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[280716]: [NOTICE]   (280720) : haproxy version is 2.8.14-c23fe91
Dec 06 07:43:28 compute-1 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[280716]: [NOTICE]   (280720) : path to executable is /usr/sbin/haproxy
Dec 06 07:43:28 compute-1 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[280716]: [WARNING]  (280720) : Exiting Master process...
Dec 06 07:43:28 compute-1 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[280716]: [WARNING]  (280720) : Exiting Master process...
Dec 06 07:43:28 compute-1 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[280716]: [ALERT]    (280720) : Current worker (280722) exited with code 143 (Terminated)
Dec 06 07:43:28 compute-1 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[280716]: [WARNING]  (280720) : All workers exited. Exiting... (0)
Dec 06 07:43:28 compute-1 systemd[1]: libpod-63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4.scope: Deactivated successfully.
Dec 06 07:43:28 compute-1 podman[283011]: 2025-12-06 07:43:28.084804898 +0000 UTC m=+0.690071680 container died 63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:43:28 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Dec 06 07:43:28 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:28.580273) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:43:28 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Dec 06 07:43:28 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008580340, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 860, "num_deletes": 251, "total_data_size": 1650114, "memory_usage": 1668896, "flush_reason": "Manual Compaction"}
Dec 06 07:43:28 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Dec 06 07:43:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4128211629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:28 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008991368, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 1077711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55147, "largest_seqno": 56002, "table_properties": {"data_size": 1073528, "index_size": 1835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9844, "raw_average_key_size": 20, "raw_value_size": 1065006, "raw_average_value_size": 2182, "num_data_blocks": 79, "num_entries": 488, "num_filter_entries": 488, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006956, "oldest_key_time": 1765006956, "file_creation_time": 1765007008, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:43:28 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 411160 microseconds, and 5396 cpu microseconds.
Dec 06 07:43:28 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:28.991436) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 1077711 bytes OK
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:28.991458) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.000793) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.000831) EVENT_LOG_v1 {"time_micros": 1765007009000822, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.000853) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1645607, prev total WAL file size 1645607, number of live WAL files 2.
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.001677) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(1052KB)], [105(11MB)]
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009001742, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 13052979, "oldest_snapshot_seqno": -1}
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.021 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.025 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.027 226109 DEBUG nova.compute.manager [req-16dde53d-e920-434d-aea8-90bbbbebff01 req-12594162-9f28-4939-b7a3-6d784f57a3b1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Received event network-vif-unplugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.027 226109 DEBUG oslo_concurrency.lockutils [req-16dde53d-e920-434d-aea8-90bbbbebff01 req-12594162-9f28-4939-b7a3-6d784f57a3b1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.028 226109 DEBUG oslo_concurrency.lockutils [req-16dde53d-e920-434d-aea8-90bbbbebff01 req-12594162-9f28-4939-b7a3-6d784f57a3b1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.028 226109 DEBUG oslo_concurrency.lockutils [req-16dde53d-e920-434d-aea8-90bbbbebff01 req-12594162-9f28-4939-b7a3-6d784f57a3b1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:29 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4-userdata-shm.mount: Deactivated successfully.
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.029 226109 DEBUG nova.compute.manager [req-16dde53d-e920-434d-aea8-90bbbbebff01 req-12594162-9f28-4939-b7a3-6d784f57a3b1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] No waiting events found dispatching network-vif-unplugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.029 226109 DEBUG nova.compute.manager [req-16dde53d-e920-434d-aea8-90bbbbebff01 req-12594162-9f28-4939-b7a3-6d784f57a3b1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Received event network-vif-unplugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:43:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-777a26551b0868ebd958341be08824a954918b440fd2343dd4950b6beac338d0-merged.mount: Deactivated successfully.
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 8841 keys, 10855656 bytes, temperature: kUnknown
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009123721, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 10855656, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10799452, "index_size": 32988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22149, "raw_key_size": 229592, "raw_average_key_size": 25, "raw_value_size": 10645123, "raw_average_value_size": 1204, "num_data_blocks": 1284, "num_entries": 8841, "num_filter_entries": 8841, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765007009, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.124100) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 10855656 bytes
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.128124) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.9 rd, 88.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(22.2) write-amplify(10.1) OK, records in: 9362, records dropped: 521 output_compression: NoCompression
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.128167) EVENT_LOG_v1 {"time_micros": 1765007009128150, "job": 66, "event": "compaction_finished", "compaction_time_micros": 122158, "compaction_time_cpu_micros": 36271, "output_level": 6, "num_output_files": 1, "total_output_size": 10855656, "num_input_records": 9362, "num_output_records": 8841, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
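The amplification figures in the compaction summary can be re-derived from the event-log numbers above: a 1,077,711-byte L0 input produced a 10,855,656-byte L6 output over 13,052,979 bytes read, in 122,158 microseconds. A quick check (RocksDB's "MB" here is 10^6 bytes):

    l0_in = 1_077_711        # flushed L0 table #107
    total_in = 13_052_979    # input_data_size: L0 file + old L6 file #105
    out = 10_855_656         # total_output_size: new L6 table #108
    secs = 122_158 / 1e6     # compaction_time_micros

    print(round(out / l0_in, 1))               # 10.1 -> write-amplify
    print(round((total_in + out) / l0_in, 1))  # 22.2 -> read-write-amplify
    print(round(total_in / secs / 1e6, 1))     # 106.9 -> MB/sec rd
    print(round(out / secs / 1e6, 1))          # 88.9  -> MB/sec wr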
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009129184, "job": 66, "event": "table_file_deletion", "file_number": 107}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009131321, "job": 66, "event": "table_file_deletion", "file_number": 105}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.001510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.131577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.131582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.131584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.131586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.131588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 podman[283011]: 2025-12-06 07:43:29.134697041 +0000 UTC m=+1.739963823 container cleanup 63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:43:29 compute-1 systemd[1]: libpod-conmon-63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4.scope: Deactivated successfully.
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.154 226109 DEBUG nova.storage.rbd_utils [None req-c78eb8ff-d848-4698-8d87-e066673606be e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] removing snapshot(b35b918bed4c404583e861884c471303) on rbd image(119a621b-6198-42db-9a89-d73da6c2a2da_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:43:29 compute-1 podman[283089]: 2025-12-06 07:43:29.198973205 +0000 UTC m=+0.040208121 container remove 63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:43:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:29.205 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[867719cc-c8b8-4a9b-9495-6a7796e35517]: (4, ('Sat Dec  6 07:43:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 (63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4)\n63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4\nSat Dec  6 07:43:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 (63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4)\n63470a5d99fdb9a8b6fea2e24411a4f0dc2b134c0c52ec365563a0a56db765e4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:29.206 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[18c828f5-e2a3-43a1-9e52-460aac7de00f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:29.207 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c5d4817-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.209 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:29 compute-1 kernel: tap3c5d4817-c0: left promiscuous mode
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.224 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:29.227 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c59fb3a7-7421-4322-b3da-77e771a7a44f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:29 compute-1 ceph-mon[81689]: pgmap v2638: 305 pgs: 305 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 492 KiB/s wr, 302 op/s
Dec 06 07:43:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2039062054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:29 compute-1 ceph-mon[81689]: pgmap v2639: 305 pgs: 305 active+clean; 416 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 993 KiB/s wr, 311 op/s
Dec 06 07:43:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:29.241 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[97520b9c-9e54-41bb-98a1-e8cb4a91ff69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:29.242 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[65e1b7ac-e5d0-4f5e-a07b-84e83a5e3594]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e330 e330: 3 total, 3 up, 3 in
Dec 06 07:43:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:29.265 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[818cf38c-08d5-4102-8b0b-44a3ee7aed04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717754, 'reachable_time': 33747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283106, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:29.268 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:43:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:29.268 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4910b1-fb49-4aec-a05e-6cb742932e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:29 compute-1 systemd[1]: run-netns-ovnmeta\x2d3c5d4817\x2dc3d5\x2d45fc\x2d9890\x2d418e779bacb2.mount: Deactivated successfully.
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.279 226109 DEBUG nova.storage.rbd_utils [None req-c78eb8ff-d848-4698-8d87-e066673606be e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] creating snapshot(snap) on rbd image(de1696d2-fc5b-497d-a09d-4993ee316ff9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.358434) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009358506, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 292, "num_deletes": 260, "total_data_size": 96865, "memory_usage": 103688, "flush_reason": "Manual Compaction"}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009360607, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 63772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56003, "largest_seqno": 56294, "table_properties": {"data_size": 61804, "index_size": 132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5034, "raw_average_key_size": 17, "raw_value_size": 57792, "raw_average_value_size": 204, "num_data_blocks": 6, "num_entries": 283, "num_filter_entries": 283, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007009, "oldest_key_time": 1765007009, "file_creation_time": 1765007009, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 2193 microseconds, and 877 cpu microseconds.
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.360647) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 63772 bytes OK
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.360666) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.364005) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.364019) EVENT_LOG_v1 {"time_micros": 1765007009364015, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.364035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 94657, prev total WAL file size 94657, number of live WAL files 2.
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.364427) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373638' seq:72057594037927935, type:22 .. '6C6F676D0032303234' seq:0, type:0; will stop at (end)
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(62KB)], [108(10MB)]
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009364567, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 10919428, "oldest_snapshot_seqno": -1}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 8592 keys, 10772669 bytes, temperature: kUnknown
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009606445, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 10772669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10717687, "index_size": 32406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 225411, "raw_average_key_size": 26, "raw_value_size": 10567052, "raw_average_value_size": 1229, "num_data_blocks": 1255, "num_entries": 8592, "num_filter_entries": 8592, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765007009, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.606694) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10772669 bytes
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.608556) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.1 rd, 44.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.4 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(340.2) write-amplify(168.9) OK, records in: 9124, records dropped: 532 output_compression: NoCompression
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.608571) EVENT_LOG_v1 {"time_micros": 1765007009608564, "job": 68, "event": "compaction_finished", "compaction_time_micros": 241945, "compaction_time_cpu_micros": 25211, "output_level": 6, "num_output_files": 1, "total_output_size": 10772669, "num_input_records": 9124, "num_output_records": 8592, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009608809, "job": 68, "event": "table_file_deletion", "file_number": 110}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009610616, "job": 68, "event": "table_file_deletion", "file_number": 108}
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.364332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.610720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.610725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.610727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.610729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:43:29.610730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.618 226109 INFO nova.virt.libvirt.driver [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Deleting instance files /var/lib/nova/instances/f50ab4f6-4d4c-488a-9793-0b9979a2e193_del
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.619 226109 INFO nova.virt.libvirt.driver [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Deletion of /var/lib/nova/instances/f50ab4f6-4d4c-488a-9793-0b9979a2e193_del complete
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.667 226109 INFO nova.compute.manager [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Took 2.81 seconds to destroy the instance on the hypervisor.
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.668 226109 DEBUG oslo.service.loopingcall [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.668 226109 DEBUG nova.compute.manager [-] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.668 226109 DEBUG nova.network.neutron [-] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.726 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:29.726 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:43:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:29.727 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.879 226109 DEBUG nova.compute.manager [req-638e6dfd-1535-4405-af88-cd9ed0e61c69 req-3123f68f-450a-4783-bfcb-a7ba47c3f161 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Received event network-vif-plugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.879 226109 DEBUG oslo_concurrency.lockutils [req-638e6dfd-1535-4405-af88-cd9ed0e61c69 req-3123f68f-450a-4783-bfcb-a7ba47c3f161 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.880 226109 DEBUG oslo_concurrency.lockutils [req-638e6dfd-1535-4405-af88-cd9ed0e61c69 req-3123f68f-450a-4783-bfcb-a7ba47c3f161 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.880 226109 DEBUG oslo_concurrency.lockutils [req-638e6dfd-1535-4405-af88-cd9ed0e61c69 req-3123f68f-450a-4783-bfcb-a7ba47c3f161 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.880 226109 DEBUG nova.compute.manager [req-638e6dfd-1535-4405-af88-cd9ed0e61c69 req-3123f68f-450a-4783-bfcb-a7ba47c3f161 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] No waiting events found dispatching network-vif-plugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:29 compute-1 nova_compute[226101]: 2025-12-06 07:43:29.880 226109 WARNING nova.compute.manager [req-638e6dfd-1535-4405-af88-cd9ed0e61c69 req-3123f68f-450a-4783-bfcb-a7ba47c3f161 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Received unexpected event network-vif-plugged-708d0bd8-6c04-48c8-80e4-9ccd10704179 for instance with vm_state active and task_state deleting.
Dec 06 07:43:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:29.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:29.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:30 compute-1 ceph-mon[81689]: osdmap e330: 3 total, 3 up, 3 in
Dec 06 07:43:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e331 e331: 3 total, 3 up, 3 in
Dec 06 07:43:30 compute-1 nova_compute[226101]: 2025-12-06 07:43:30.595 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:30 compute-1 nova_compute[226101]: 2025-12-06 07:43:30.820 226109 DEBUG nova.network.neutron [-] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:30 compute-1 nova_compute[226101]: 2025-12-06 07:43:30.872 226109 INFO nova.compute.manager [-] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Took 1.20 seconds to deallocate network for instance.
Dec 06 07:43:30 compute-1 nova_compute[226101]: 2025-12-06 07:43:30.911 226109 DEBUG oslo_concurrency.lockutils [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:30 compute-1 nova_compute[226101]: 2025-12-06 07:43:30.912 226109 DEBUG oslo_concurrency.lockutils [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:30 compute-1 nova_compute[226101]: 2025-12-06 07:43:30.995 226109 DEBUG oslo_concurrency.processutils [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:31 compute-1 nova_compute[226101]: 2025-12-06 07:43:31.027 226109 DEBUG nova.compute.manager [req-e35bba31-0488-4ae3-b017-4355e6d830a6 req-72cae339-22da-4f94-a21e-e3ae43d9b68d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Received event network-vif-deleted-708d0bd8-6c04-48c8-80e4-9ccd10704179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:31 compute-1 ceph-mon[81689]: osdmap e331: 3 total, 3 up, 3 in
Dec 06 07:43:31 compute-1 ceph-mon[81689]: pgmap v2642: 305 pgs: 305 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 2.6 MiB/s wr, 354 op/s
Dec 06 07:43:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:43:31 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1543812408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:31 compute-1 nova_compute[226101]: 2025-12-06 07:43:31.450 226109 DEBUG oslo_concurrency.processutils [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:31 compute-1 nova_compute[226101]: 2025-12-06 07:43:31.456 226109 DEBUG nova.compute.provider_tree [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:43:31 compute-1 nova_compute[226101]: 2025-12-06 07:43:31.471 226109 DEBUG nova.scheduler.client.report [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:43:31 compute-1 nova_compute[226101]: 2025-12-06 07:43:31.493 226109 DEBUG oslo_concurrency.lockutils [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:31 compute-1 nova_compute[226101]: 2025-12-06 07:43:31.524 226109 INFO nova.scheduler.client.report [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Deleted allocations for instance f50ab4f6-4d4c-488a-9793-0b9979a2e193
Dec 06 07:43:31 compute-1 nova_compute[226101]: 2025-12-06 07:43:31.604 226109 DEBUG oslo_concurrency.lockutils [None req-6c5e9ad9-57c3-41a1-a765-ee030f6072d3 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f50ab4f6-4d4c-488a-9793-0b9979a2e193" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:31.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:31.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:32 compute-1 nova_compute[226101]: 2025-12-06 07:43:32.175 226109 INFO nova.virt.libvirt.driver [None req-c78eb8ff-d848-4698-8d87-e066673606be e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Snapshot image upload complete
Dec 06 07:43:32 compute-1 nova_compute[226101]: 2025-12-06 07:43:32.176 226109 INFO nova.compute.manager [None req-c78eb8ff-d848-4698-8d87-e066673606be e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Took 8.41 seconds to snapshot the instance on the hypervisor.
Dec 06 07:43:32 compute-1 nova_compute[226101]: 2025-12-06 07:43:32.510 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:32 compute-1 nova_compute[226101]: 2025-12-06 07:43:32.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:32 compute-1 nova_compute[226101]: 2025-12-06 07:43:32.601 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1543812408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:33 compute-1 podman[283148]: 2025-12-06 07:43:33.091061751 +0000 UTC m=+0.063373241 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:43:33 compute-1 podman[283149]: 2025-12-06 07:43:33.108804629 +0000 UTC m=+0.080517744 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:43:33 compute-1 podman[283147]: 2025-12-06 07:43:33.124185434 +0000 UTC m=+0.095537939 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:43:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:43:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:33.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:43:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:33.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:34 compute-1 nova_compute[226101]: 2025-12-06 07:43:34.152 226109 INFO nova.compute.manager [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Rescuing
Dec 06 07:43:34 compute-1 nova_compute[226101]: 2025-12-06 07:43:34.152 226109 DEBUG oslo_concurrency.lockutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:43:34 compute-1 nova_compute[226101]: 2025-12-06 07:43:34.152 226109 DEBUG oslo_concurrency.lockutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:43:34 compute-1 nova_compute[226101]: 2025-12-06 07:43:34.153 226109 DEBUG nova.network.neutron [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:43:34 compute-1 ceph-mon[81689]: pgmap v2643: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.1 MiB/s wr, 286 op/s
Dec 06 07:43:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:35 compute-1 nova_compute[226101]: 2025-12-06 07:43:35.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:35 compute-1 nova_compute[226101]: 2025-12-06 07:43:35.509 226109 DEBUG nova.network.neutron [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updating instance_info_cache with network_info: [{"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:35 compute-1 nova_compute[226101]: 2025-12-06 07:43:35.548 226109 DEBUG oslo_concurrency.lockutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:43:35 compute-1 nova_compute[226101]: 2025-12-06 07:43:35.818 226109 DEBUG nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:43:35 compute-1 ceph-mon[81689]: pgmap v2644: 305 pgs: 305 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 204 op/s
Dec 06 07:43:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:35.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:35.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:37 compute-1 ovn_controller[130279]: 2025-12-06T07:43:37Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:97:b4 10.100.0.9
Dec 06 07:43:37 compute-1 ovn_controller[130279]: 2025-12-06T07:43:37Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:97:b4 10.100.0.9
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.550 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.605 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.607 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:43:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:37.728 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:43:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:37.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:43:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:37.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.943 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.961 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Triggering sync for uuid d954d607-525c-4edf-ab9e-56658dd2525a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.962 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Triggering sync for uuid 119a621b-6198-42db-9a89-d73da6c2a2da _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.962 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "d954d607-525c-4edf-ab9e-56658dd2525a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.962 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "d954d607-525c-4edf-ab9e-56658dd2525a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.963 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.963 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "119a621b-6198-42db-9a89-d73da6c2a2da" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.963 226109 INFO nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.963 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "119a621b-6198-42db-9a89-d73da6c2a2da" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:37 compute-1 nova_compute[226101]: 2025-12-06 07:43:37.982 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "d954d607-525c-4edf-ab9e-56658dd2525a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:38 compute-1 ceph-mon[81689]: pgmap v2645: 305 pgs: 305 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.8 MiB/s wr, 195 op/s
Dec 06 07:43:38 compute-1 nova_compute[226101]: 2025-12-06 07:43:38.179 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2945964719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:39 compute-1 ceph-mon[81689]: pgmap v2646: 305 pgs: 305 active+clean; 380 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.3 MiB/s wr, 203 op/s
Dec 06 07:43:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e332 e332: 3 total, 3 up, 3 in
Dec 06 07:43:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:39.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:39.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:40 compute-1 kernel: tap70e780b6-88 (unregistering): left promiscuous mode
Dec 06 07:43:40 compute-1 NetworkManager[49031]: <info>  [1765007020.4000] device (tap70e780b6-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:43:40 compute-1 ovn_controller[130279]: 2025-12-06T07:43:40Z|00592|binding|INFO|Releasing lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d from this chassis (sb_readonly=0)
Dec 06 07:43:40 compute-1 nova_compute[226101]: 2025-12-06 07:43:40.407 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:40 compute-1 ovn_controller[130279]: 2025-12-06T07:43:40Z|00593|binding|INFO|Setting lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d down in Southbound
Dec 06 07:43:40 compute-1 ovn_controller[130279]: 2025-12-06T07:43:40Z|00594|binding|INFO|Removing iface tap70e780b6-88 ovn-installed in OVS
Dec 06 07:43:40 compute-1 nova_compute[226101]: 2025-12-06 07:43:40.411 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.417 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:97:b4 10.100.0.9'], port_security=['fa:16:3e:5d:97:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '119a621b-6198-42db-9a89-d73da6c2a2da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=70e780b6-88d2-47a1-99b8-a525dcb88b5d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.418 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 70e780b6-88d2-47a1-99b8-a525dcb88b5d in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.420 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.421 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5256ee15-20ff-4473-9801-3c81aa8cb03c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.421 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace which is not needed anymore
Dec 06 07:43:40 compute-1 nova_compute[226101]: 2025-12-06 07:43:40.425 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:40 compute-1 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000096.scope: Deactivated successfully.
Dec 06 07:43:40 compute-1 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000096.scope: Consumed 14.341s CPU time.
Dec 06 07:43:40 compute-1 systemd-machined[190302]: Machine qemu-68-instance-00000096 terminated.
Dec 06 07:43:40 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[282815]: [NOTICE]   (282819) : haproxy version is 2.8.14-c23fe91
Dec 06 07:43:40 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[282815]: [NOTICE]   (282819) : path to executable is /usr/sbin/haproxy
Dec 06 07:43:40 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[282815]: [WARNING]  (282819) : Exiting Master process...
Dec 06 07:43:40 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[282815]: [ALERT]    (282819) : Current worker (282821) exited with code 143 (Terminated)
Dec 06 07:43:40 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[282815]: [WARNING]  (282819) : All workers exited. Exiting... (0)
Dec 06 07:43:40 compute-1 systemd[1]: libpod-f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc.scope: Deactivated successfully.
Dec 06 07:43:40 compute-1 podman[283230]: 2025-12-06 07:43:40.541659912 +0000 UTC m=+0.039592845 container died f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:43:40 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc-userdata-shm.mount: Deactivated successfully.
Dec 06 07:43:40 compute-1 systemd[1]: var-lib-containers-storage-overlay-46e846533906e2851d023267b565f43bf919ca7c6f4bfaf4a2eb09fddc607c45-merged.mount: Deactivated successfully.
Dec 06 07:43:40 compute-1 podman[283230]: 2025-12-06 07:43:40.585892427 +0000 UTC m=+0.083825360 container cleanup f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:43:40 compute-1 systemd[1]: libpod-conmon-f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc.scope: Deactivated successfully.
Dec 06 07:43:40 compute-1 nova_compute[226101]: 2025-12-06 07:43:40.631 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:40 compute-1 nova_compute[226101]: 2025-12-06 07:43:40.635 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:40 compute-1 podman[283261]: 2025-12-06 07:43:40.655110492 +0000 UTC m=+0.047193535 container remove f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.661 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc2a10f-9d87-41da-99ca-9ead9d1c7f89]: (4, ('Sat Dec  6 07:43:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc)\nf5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc\nSat Dec  6 07:43:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (f5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc)\nf5079db1db754df4f6f46a273f8664b426b849de6f5afb817105c5bd2a9d3fbc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.663 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb32821-d88f-4688-9319-ec783dcd9381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.664 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:40 compute-1 nova_compute[226101]: 2025-12-06 07:43:40.666 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:40 compute-1 kernel: tap6d1a17d6-50: left promiscuous mode
Dec 06 07:43:40 compute-1 nova_compute[226101]: 2025-12-06 07:43:40.682 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.685 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9a30f2ee-ce92-451d-bb2b-6a9e7fc543fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.696 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2d41ad-bcb4-4662-baef-0d2d1bc96d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.698 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d94d2255-06c7-4f6b-906d-8dfb970db9c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.712 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[181a8e4f-13b3-44ff-b53c-798ff9b4409c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727710, 'reachable_time': 33130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283289, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.714 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:43:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:40.714 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec7cf7e-be58-4eba-9d6a-d25986a4b2e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:40 compute-1 systemd[1]: run-netns-ovnmeta\x2d6d1a17d6\x2d5e44\x2d40b7\x2d832a\x2d81cb86c02e71.mount: Deactivated successfully.
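The records above are the complete teardown of the OVN metadata proxy for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71: HAProxy's master is signalled and its worker exits with code 143 (SIGTERM), podman emits the died/cleanup/remove events, ovsdbapp drops the tap port from br-int, and the ovnmeta- namespace is deleted. A minimal sketch of that same ordering, assuming a hypothetical helper and plain CLI calls; the agent itself drives this through oslo.privsep and ovsdbapp rather than shelling out:

    import subprocess

    def teardown_metadata_proxy(network_id: str, tap_dev: str) -> None:
        # 1. Stop and delete the per-network haproxy sidecar container.
        name = f"neutron-haproxy-ovnmeta-{network_id}"
        subprocess.run(["podman", "stop", name], check=True)
        subprocess.run(["podman", "rm", name], check=True)
        # 2. Drop the tap port from br-int (the DelPortCommand(..., if_exists=True) above).
        subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-int", tap_dev], check=True)
        # 3. Delete the ovnmeta- network namespace (remove_netns above).
        subprocess.run(["ip", "netns", "delete", f"ovnmeta-{network_id}"], check=True)

    teardown_metadata_proxy("6d1a17d6-5e44-40b7-832a-81cb86c02e71", "tap6d1a17d6-50")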
Dec 06 07:43:40 compute-1 ceph-mon[81689]: osdmap e332: 3 total, 3 up, 3 in
Dec 06 07:43:40 compute-1 nova_compute[226101]: 2025-12-06 07:43:40.843 226109 INFO nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Instance shutdown successfully after 5 seconds.
Dec 06 07:43:40 compute-1 nova_compute[226101]: 2025-12-06 07:43:40.849 226109 INFO nova.virt.libvirt.driver [-] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Instance destroyed successfully.
Dec 06 07:43:40 compute-1 nova_compute[226101]: 2025-12-06 07:43:40.849 226109 DEBUG nova.objects.instance [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'numa_topology' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:40 compute-1 nova_compute[226101]: 2025-12-06 07:43:40.878 226109 INFO nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Attempting a stable device rescue
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.052 226109 DEBUG nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.058 226109 DEBUG nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.058 226109 INFO nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Creating image(s)
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.086 226109 DEBUG nova.storage.rbd_utils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.089 226109 DEBUG nova.objects.instance [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.124 226109 DEBUG nova.storage.rbd_utils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.148 226109 DEBUG nova.storage.rbd_utils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.151 226109 DEBUG oslo_concurrency.lockutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "217d382ae316a3162f149d49a88ef3cf87af5e06" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.152 226109 DEBUG oslo_concurrency.lockutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "217d382ae316a3162f149d49a88ef3cf87af5e06" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.377 226109 DEBUG nova.virt.libvirt.imagebackend [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/de1696d2-fc5b-497d-a09d-4993ee316ff9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/de1696d2-fc5b-497d-a09d-4993ee316ff9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.467 226109 DEBUG nova.virt.libvirt.imagebackend [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/de1696d2-fc5b-497d-a09d-4993ee316ff9/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.468 226109 DEBUG nova.storage.rbd_utils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] cloning images/de1696d2-fc5b-497d-a09d-4993ee316ff9@snap to None/119a621b-6198-42db-9a89-d73da6c2a2da_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.571 226109 DEBUG oslo_concurrency.lockutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "217d382ae316a3162f149d49a88ef3cf87af5e06" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
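For the rescue disk, nova's RBD backend never downloads the image: it clones the Glance snapshot in the images pool into a new child image, serialized under the image-cache lock seen above (held 0.420s). A sketch of the equivalent operation with the rbd CLI, reusing the parent and child names from the "cloning ... to ..." line; the destination pool is logged as None there and is actually resolved from nova's images_rbd_pool option (vms, per the disk XML further down):

    import subprocess

    parent = "images/de1696d2-fc5b-497d-a09d-4993ee316ff9@snap"
    child = "vms/119a621b-6198-42db-9a89-d73da6c2a2da_disk.rescue"

    # Copy-on-write clone; no image data is copied up front.
    subprocess.run(
        ["rbd", "clone", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
         parent, child],
        check=True,
    )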
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.615 226109 DEBUG nova.objects.instance [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'migration_context' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.632 226109 DEBUG nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.634 226109 DEBUG nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Start _get_guest_xml network_info=[{"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:5d:97:b4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'de1696d2-fc5b-497d-a09d-4993ee316ff9', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.635 226109 DEBUG nova.objects.instance [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'resources' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.654 226109 WARNING nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.663 226109 DEBUG nova.virt.libvirt.host [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.664 226109 DEBUG nova.virt.libvirt.host [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.671 226109 DEBUG nova.virt.libvirt.host [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.671 226109 DEBUG nova.virt.libvirt.host [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.672 226109 DEBUG nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.673 226109 DEBUG nova.virt.hardware [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.673 226109 DEBUG nova.virt.hardware [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.673 226109 DEBUG nova.virt.hardware [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.674 226109 DEBUG nova.virt.hardware [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.674 226109 DEBUG nova.virt.hardware [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.674 226109 DEBUG nova.virt.hardware [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.674 226109 DEBUG nova.virt.hardware [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.675 226109 DEBUG nova.virt.hardware [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.675 226109 DEBUG nova.virt.hardware [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.675 226109 DEBUG nova.virt.hardware [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.675 226109 DEBUG nova.virt.hardware [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
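The nova.virt.hardware lines above walk the CPU-topology selection: neither flavor nor image sets limits or preferences (all 0:0:0), the defaults cap each axis at 65536, and for a single vCPU the only factorization is 1 socket x 1 core x 1 thread, hence exactly one possible topology. A simplified re-implementation of that enumeration, for illustration only (nova's real code additionally orders candidates by flavor/image preference):

    def possible_topologies(vcpus: int, max_sockets: int = 65536,
                            max_cores: int = 65536, max_threads: int = 65536):
        # Yield (sockets, cores, threads) triples whose product equals vcpus.
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, max_cores) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"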
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.676 226109 DEBUG nova.objects.instance [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:41 compute-1 nova_compute[226101]: 2025-12-06 07:43:41.696 226109 DEBUG oslo_concurrency.processutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:41 compute-1 ceph-mon[81689]: pgmap v2648: 305 pgs: 305 active+clean; 422 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.8 MiB/s wr, 211 op/s
Dec 06 07:43:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:43:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:41.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:43:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:41.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:43:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1921876245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.146 226109 DEBUG oslo_concurrency.processutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
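Each of these ceph mon dump subprocesses exists to discover the monitor addresses that become the three <host .../> elements in the disk XML below. A sketch of that discovery; the "mons"/"addr" JSON fields are assumed from the ceph mon dump schema, where addresses are reported as ip:port/nonce and the nonce must be stripped:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    mons = json.loads(out)["mons"]                       # key assumed from schema
    hosts = [m["addr"].rsplit("/", 1)[0] for m in mons]  # e.g. "192.168.122.100:6789"
    print(hosts)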
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.186 226109 DEBUG oslo_concurrency.processutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.490 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007007.489195, f50ab4f6-4d4c-488a-9793-0b9979a2e193 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.491 226109 INFO nova.compute.manager [-] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] VM Stopped (Lifecycle Event)
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.511 226109 DEBUG nova.compute.manager [None req-f20f8665-1fca-4fb2-bd45-5e2ebcedd790 - - - - - -] [instance: f50ab4f6-4d4c-488a-9793-0b9979a2e193] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.599 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.607 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.621 226109 DEBUG nova.compute.manager [req-82c43aec-6b97-4efa-86b0-96f07f7ef62b req-f00bd6e2-d905-47f6-87a0-2e0ef512004c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-unplugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.622 226109 DEBUG oslo_concurrency.lockutils [req-82c43aec-6b97-4efa-86b0-96f07f7ef62b req-f00bd6e2-d905-47f6-87a0-2e0ef512004c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.622 226109 DEBUG oslo_concurrency.lockutils [req-82c43aec-6b97-4efa-86b0-96f07f7ef62b req-f00bd6e2-d905-47f6-87a0-2e0ef512004c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.623 226109 DEBUG oslo_concurrency.lockutils [req-82c43aec-6b97-4efa-86b0-96f07f7ef62b req-f00bd6e2-d905-47f6-87a0-2e0ef512004c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.623 226109 DEBUG nova.compute.manager [req-82c43aec-6b97-4efa-86b0-96f07f7ef62b req-f00bd6e2-d905-47f6-87a0-2e0ef512004c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] No waiting events found dispatching network-vif-unplugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.623 226109 WARNING nova.compute.manager [req-82c43aec-6b97-4efa-86b0-96f07f7ef62b req-f00bd6e2-d905-47f6-87a0-2e0ef512004c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received unexpected event network-vif-unplugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d for instance with vm_state active and task_state rescuing.
Dec 06 07:43:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:43:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2263409015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.653 226109 DEBUG oslo_concurrency.processutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:42 compute-1 nova_compute[226101]: 2025-12-06 07:43:42.655 226109 DEBUG oslo_concurrency.processutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1921876245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2263409015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:43:43 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1346591955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.223 226109 DEBUG oslo_concurrency.processutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.225 226109 DEBUG nova.virt.libvirt.vif [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:43:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-311361617',display_name='tempest-ServerStableDeviceRescueTest-server-311361617',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-311361617',id=150,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:43:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-0xqzu91y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:43:32Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=119a621b-6198-42db-9a89-d73da6c2a2da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:5d:97:b4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.225 226109 DEBUG nova.network.os_vif_util [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:5d:97:b4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.227 226109 DEBUG nova.network.os_vif_util [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:97:b4,bridge_name='br-int',has_traffic_filtering=True,id=70e780b6-88d2-47a1-99b8-a525dcb88b5d,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e780b6-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.228 226109 DEBUG nova.objects.instance [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'pci_devices' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.245 226109 DEBUG nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:43:43 compute-1 nova_compute[226101]:   <uuid>119a621b-6198-42db-9a89-d73da6c2a2da</uuid>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   <name>instance-00000096</name>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-311361617</nova:name>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:43:41</nova:creationTime>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <nova:user uuid="e997a5eeee174b368a43ed8cb35fa1d0">tempest-ServerStableDeviceRescueTest-1830949011-project-member</nova:user>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <nova:project uuid="f44ecb8bdc7e4692a299e29603301124">tempest-ServerStableDeviceRescueTest-1830949011</nova:project>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <nova:port uuid="70e780b6-88d2-47a1-99b8-a525dcb88b5d">
Dec 06 07:43:43 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <system>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <entry name="serial">119a621b-6198-42db-9a89-d73da6c2a2da</entry>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <entry name="uuid">119a621b-6198-42db-9a89-d73da6c2a2da</entry>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     </system>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   <os>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   </os>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   <features>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   </features>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/119a621b-6198-42db-9a89-d73da6c2a2da_disk">
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       </source>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/119a621b-6198-42db-9a89-d73da6c2a2da_disk.config">
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       </source>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/119a621b-6198-42db-9a89-d73da6c2a2da_disk.rescue">
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       </source>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:43:43 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <target dev="sdb" bus="scsi"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <boot order="1"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:5d:97:b4"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <target dev="tap70e780b6-88"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/console.log" append="off"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <video>
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     </video>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:43:43 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:43:43 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:43:43 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:43:43 compute-1 nova_compute[226101]: </domain>
Dec 06 07:43:43 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
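The rendered XML above is the whole of the stable-device-rescue trick: the original root stays attached as vda, the config drive rides on sda, and the cloned .rescue disk on sdb carries the only <boot order="1"/>, so the guest boots from the rescue image while the original disk remains reachable at its stable device name. Nova then hands this document to libvirt; a minimal sketch of that step with libvirt-python, assuming the XML has been saved to rescue.xml (this is not nova's actual driver code path):

    import libvirt

    with open("rescue.xml") as f:  # hypothetical file holding the <domain> dumped above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)  # persist the rescue definition
        dom.create()               # boot it; <boot order="1"/> selects the .rescue disk
    finally:
        conn.close()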
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.252 226109 INFO nova.virt.libvirt.driver [-] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Instance destroyed successfully.
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.302 226109 DEBUG nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.303 226109 DEBUG nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.303 226109 DEBUG nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.303 226109 DEBUG nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No VIF found with MAC fa:16:3e:5d:97:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.304 226109 INFO nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Using config drive
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.331 226109 DEBUG nova.storage.rbd_utils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.349 226109 DEBUG nova.objects.instance [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.392 226109 DEBUG nova.objects.instance [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'keypairs' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.755 226109 INFO nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Creating config drive at /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config.rescue
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.761 226109 DEBUG oslo_concurrency.processutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoojmiv4c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:43 compute-1 ceph-mon[81689]: pgmap v2649: 305 pgs: 305 active+clean; 439 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 880 KiB/s rd, 7.0 MiB/s wr, 190 op/s
Dec 06 07:43:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1346591955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.897 226109 DEBUG oslo_concurrency.processutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoojmiv4c" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:43.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:43.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.926 226109 DEBUG nova.storage.rbd_utils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 119a621b-6198-42db-9a89-d73da6c2a2da_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:43 compute-1 nova_compute[226101]: 2025-12-06 07:43:43.929 226109 DEBUG oslo_concurrency.processutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config.rescue 119a621b-6198-42db-9a89-d73da6c2a2da_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.074 226109 DEBUG oslo_concurrency.processutils [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config.rescue 119a621b-6198-42db-9a89-d73da6c2a2da_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.075 226109 INFO nova.virt.libvirt.driver [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Deleting local config drive /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da/disk.config.rescue because it was imported into RBD.
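[annotation] The lines above capture the whole config-drive round trip for the rescue: mkisofs packs the metadata directory into an ISO9660 image labelled config-2, rbd import copies it into the "vms" pool, and the local file is then removed. A minimal sketch of that sequence, with the flags, pool, and paths taken from the log lines themselves; the staging directory is a hypothetical stand-in for Nova's temporary directory:

    # Reproduce the logged config-drive flow: build ISO, import into RBD,
    # drop the local copy. Not Nova's code; run only against a test pool.
    import os
    import subprocess

    instance = "119a621b-6198-42db-9a89-d73da6c2a2da"
    iso = f"/var/lib/nova/instances/{instance}/disk.config.rescue"
    staging = "/tmp/configdrive"  # hypothetical metadata staging directory

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
         staging],
        check=True,
    )
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         f"{instance}_disk.config.rescue", "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(iso)  # per the log, the local copy is deleted after import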
Dec 06 07:43:44 compute-1 kernel: tap70e780b6-88: entered promiscuous mode
Dec 06 07:43:44 compute-1 ovn_controller[130279]: 2025-12-06T07:43:44Z|00595|binding|INFO|Claiming lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d for this chassis.
Dec 06 07:43:44 compute-1 ovn_controller[130279]: 2025-12-06T07:43:44Z|00596|binding|INFO|70e780b6-88d2-47a1-99b8-a525dcb88b5d: Claiming fa:16:3e:5d:97:b4 10.100.0.9
Dec 06 07:43:44 compute-1 NetworkManager[49031]: <info>  [1765007024.1302] manager: (tap70e780b6-88): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.130 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.139 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:97:b4 10.100.0.9'], port_security=['fa:16:3e:5d:97:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '119a621b-6198-42db-9a89-d73da6c2a2da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=70e780b6-88d2-47a1-99b8-a525dcb88b5d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.140 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 70e780b6-88d2-47a1-99b8-a525dcb88b5d in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 bound to our chassis
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.143 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:43:44 compute-1 ovn_controller[130279]: 2025-12-06T07:43:44Z|00597|binding|INFO|Setting lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d ovn-installed in OVS
Dec 06 07:43:44 compute-1 ovn_controller[130279]: 2025-12-06T07:43:44Z|00598|binding|INFO|Setting lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d up in Southbound
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.147 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.150 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.159 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0d4c63-44d1-4d3f-a999-dd67ab528a32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.160 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d1a17d6-51 in ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.162 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d1a17d6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:43:44 compute-1 systemd-machined[190302]: New machine qemu-69-instance-00000096.
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.162 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[feecd7e7-caad-4bbe-96f9-a42c9b96db9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.163 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae6ba27-4b5d-4fa8-ba2a-dba97f3560d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 systemd-udevd[283588]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:43:44 compute-1 systemd[1]: Started Virtual Machine qemu-69-instance-00000096.
Dec 06 07:43:44 compute-1 NetworkManager[49031]: <info>  [1765007024.1756] device (tap70e780b6-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:43:44 compute-1 NetworkManager[49031]: <info>  [1765007024.1766] device (tap70e780b6-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.176 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce5e0e9-b369-4665-bdc4-bc45691d2d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.203 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8e1f55-d64c-4b73-bc35-6c02a1484863]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.232 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[73d47a5d-b443-409e-a5f0-0f857b39ae08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 NetworkManager[49031]: <info>  [1765007024.2395] manager: (tap6d1a17d6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/268)
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.239 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ccd5c7-f03c-4743-8ef0-0e21815a7839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.269 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[32728603-1c3e-422e-937d-58e64dfc99cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.272 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[22c6453c-fedf-4bdc-a3be-cb275c58474a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 NetworkManager[49031]: <info>  [1765007024.2943] device (tap6d1a17d6-50): carrier: link connected
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.299 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[845f877a-524f-4028-aefe-11057326f17f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.316 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6af686-4119-4c29-bc66-1a8db0317f3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730379, 'reachable_time': 42936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283620, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.332 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[79467431-8643-457b-9665-8b22d3ad6cbc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:a2f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 730379, 'tstamp': 730379}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283621, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.354 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[40c5210f-0518-47ca-99b0-1c3a95ed6d5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730379, 'reachable_time': 42936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283622, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
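[annotation] The privsep replies above carry pyroute2-style netlink messages whose link attributes arrive as [name, value] pairs in an "attrs" list. A small hypothetical helper (not neutron's own) showing how single attributes are pulled out of one such message:

    # Fetch one attribute from a netlink message dict shaped like the
    # RTM_NEWLINK replies logged above.
    def get_attr(msg, name):
        for key, value in msg["attrs"]:
            if key == name:
                return value
        return None

    link = {"attrs": [["IFLA_IFNAME", "tap6d1a17d6-51"],
                      ["IFLA_MTU", 1500],
                      ["IFLA_ADDRESS", "fa:16:3e:40:a2:f6"]]}
    print(get_attr(link, "IFLA_IFNAME"), get_attr(link, "IFLA_ADDRESS"))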
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.387 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[92759c25-0b42-425f-a59a-608bfd10676f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.439 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e015e5f7-595f-4ca6-b834-fdcc8ca2ea31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.440 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.441 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.441 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.442 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:44 compute-1 NetworkManager[49031]: <info>  [1765007024.4435] manager: (tap6d1a17d6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Dec 06 07:43:44 compute-1 kernel: tap6d1a17d6-50: entered promiscuous mode
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.446 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.447 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:44 compute-1 ovn_controller[130279]: 2025-12-06T07:43:44Z|00599|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.461 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.463 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.464 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[111a723e-5ea8-4c5d-bfbe-504477146c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.465 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
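[annotation] The config dumped above is what gives each network its metadata proxy: haproxy binds 169.254.169.254:80 inside the ovnmeta- namespace, tags every request with X-OVN-Network-ID, and forwards it to the agent at /var/lib/neutron/metadata_proxy. A sketch of the substitution pattern such a per-network config implies; the template below is a hypothetical, abridged stand-in, not neutron's own template:

    # Render a per-network metadata-proxy config; in the dump above only
    # the network id and paths vary between networks.
    from string import Template

    HAPROXY_TMPL = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        pidfile     $pid_dir/$network_id.pid.haproxy
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata $socket_path
        http-request add-header X-OVN-Network-ID $network_id
    """)

    print(HAPROXY_TMPL.substitute(
        network_id="6d1a17d6-5e44-40b7-832a-81cb86c02e71",
        pid_dir="/var/lib/neutron/external/pids",
        socket_path="/var/lib/neutron/metadata_proxy",
    ))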
Dec 06 07:43:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:44.466 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'env', 'PROCESS_TAG=haproxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d1a17d6-5e44-40b7-832a-81cb86c02e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.603 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.769 226109 DEBUG nova.compute.manager [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.770 226109 DEBUG oslo_concurrency.lockutils [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.770 226109 DEBUG oslo_concurrency.lockutils [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.770 226109 DEBUG oslo_concurrency.lockutils [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.770 226109 DEBUG nova.compute.manager [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] No waiting events found dispatching network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.770 226109 WARNING nova.compute.manager [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received unexpected event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d for instance with vm_state active and task_state rescuing.
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.771 226109 DEBUG nova.compute.manager [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.771 226109 DEBUG oslo_concurrency.lockutils [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.771 226109 DEBUG oslo_concurrency.lockutils [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.771 226109 DEBUG oslo_concurrency.lockutils [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.771 226109 DEBUG nova.compute.manager [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] No waiting events found dispatching network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.771 226109 WARNING nova.compute.manager [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received unexpected event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d for instance with vm_state active and task_state rescuing.
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.772 226109 DEBUG nova.compute.manager [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.772 226109 DEBUG oslo_concurrency.lockutils [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.772 226109 DEBUG oslo_concurrency.lockutils [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.772 226109 DEBUG oslo_concurrency.lockutils [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.772 226109 DEBUG nova.compute.manager [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] No waiting events found dispatching network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.772 226109 WARNING nova.compute.manager [req-8a40e6f5-3751-4e19-a91e-0faf321da57e req-39a1d44b-e1f2-461b-b4fa-a6a8ae1ffe47 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received unexpected event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d for instance with vm_state active and task_state rescuing.
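[annotation] The same network-vif-plugged event is delivered three times above and each time falls through to "No waiting events found": nothing registered a waiter for it, so the pop under the per-instance events lock comes back empty and the manager logs the event as unexpected while the instance is still rescuing. A minimal sketch, assuming rather than copying Nova's pattern:

    # Pop-or-warn event dispatch guarded by a lock, mirroring the lock
    # acquire/release lines above. Hypothetical simplification; the real
    # InstanceEvents tracks per-instance dicts of futures.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> callback

        def prepare(self, uuid, name, callback):
            with self._lock:
                self._waiters[(uuid, name)] = callback

        def pop_instance_event(self, uuid, name):
            with self._lock:
                return self._waiters.pop((uuid, name), None)

    events = InstanceEvents()
    cb = events.pop_instance_event(
        "119a621b-6198-42db-9a89-d73da6c2a2da",
        "network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d")
    if cb is None:
        print("No waiting events found; treating event as unexpected")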
Dec 06 07:43:44 compute-1 podman[283706]: 2025-12-06 07:43:44.81661304 +0000 UTC m=+0.045389398 container create e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:43:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3709914878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:44 compute-1 systemd[1]: Started libpod-conmon-e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc.scope.
Dec 06 07:43:44 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:43:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df33466f7d1ebafdae28db0531295ce9b81342c36d63838fe68ce480cc6b783e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:44 compute-1 podman[283706]: 2025-12-06 07:43:44.792545235 +0000 UTC m=+0.021321593 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.893 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 119a621b-6198-42db-9a89-d73da6c2a2da due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.894 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007024.892683, 119a621b-6198-42db-9a89-d73da6c2a2da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.894 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] VM Resumed (Lifecycle Event)
Dec 06 07:43:44 compute-1 nova_compute[226101]: 2025-12-06 07:43:44.897 226109 DEBUG nova.compute.manager [None req-243deebc-d821-41a6-8c42-4c5e55951802 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:44 compute-1 podman[283706]: 2025-12-06 07:43:44.898332993 +0000 UTC m=+0.127109351 container init e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 07:43:44 compute-1 podman[283706]: 2025-12-06 07:43:44.90502674 +0000 UTC m=+0.133803098 container start e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 07:43:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:44 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283729]: [NOTICE]   (283733) : New worker (283735) forked
Dec 06 07:43:44 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283729]: [NOTICE]   (283733) : Loading success.
Dec 06 07:43:45 compute-1 nova_compute[226101]: 2025-12-06 07:43:45.028 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:45 compute-1 nova_compute[226101]: 2025-12-06 07:43:45.032 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:43:45 compute-1 nova_compute[226101]: 2025-12-06 07:43:45.172 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007024.895663, 119a621b-6198-42db-9a89-d73da6c2a2da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:45 compute-1 nova_compute[226101]: 2025-12-06 07:43:45.173 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] VM Started (Lifecycle Event)
Dec 06 07:43:45 compute-1 nova_compute[226101]: 2025-12-06 07:43:45.304 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:45 compute-1 nova_compute[226101]: 2025-12-06 07:43:45.308 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
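[annotation] Both synchronization lines above end in agreement: the database records power_state 1 and the hypervisor reports 1, so no corrective action follows even as vm_state moves from active/rescuing to rescued. A hedged sketch of that comparison; 1 is Nova's RUNNING, and everything else here is assumed for brevity:

    # Compare DB vs hypervisor power state after a lifecycle event; only a
    # mismatch would trigger repair. Simplified stand-in, not Nova's logic.
    RUNNING = 1

    def sync_power_state(db_state, vm_state):
        if db_state == vm_state:
            return "in-sync"      # the case in both log lines above (1 == 1)
        return "needs-repair"     # a real manager would stop/start/record

    print(sync_power_state(RUNNING, RUNNING))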
Dec 06 07:43:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:45.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:45.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:46 compute-1 ceph-mon[81689]: pgmap v2650: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 883 KiB/s rd, 6.3 MiB/s wr, 204 op/s
Dec 06 07:43:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3444232602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/138774656' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:47 compute-1 nova_compute[226101]: 2025-12-06 07:43:47.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:47 compute-1 nova_compute[226101]: 2025-12-06 07:43:47.602 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:47 compute-1 nova_compute[226101]: 2025-12-06 07:43:47.609 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:47.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:43:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:47.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:43:48 compute-1 ceph-mon[81689]: pgmap v2651: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 203 op/s
Dec 06 07:43:48 compute-1 nova_compute[226101]: 2025-12-06 07:43:48.456 226109 INFO nova.compute.manager [None req-97fc4646-845e-4352-80e3-1785faeab157 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Unrescuing
Dec 06 07:43:48 compute-1 nova_compute[226101]: 2025-12-06 07:43:48.456 226109 DEBUG oslo_concurrency.lockutils [None req-97fc4646-845e-4352-80e3-1785faeab157 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:43:48 compute-1 nova_compute[226101]: 2025-12-06 07:43:48.457 226109 DEBUG oslo_concurrency.lockutils [None req-97fc4646-845e-4352-80e3-1785faeab157 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:43:48 compute-1 nova_compute[226101]: 2025-12-06 07:43:48.457 226109 DEBUG nova.network.neutron [None req-97fc4646-845e-4352-80e3-1785faeab157 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:43:49 compute-1 ceph-mon[81689]: pgmap v2652: 305 pgs: 305 active+clean; 459 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 228 op/s
Dec 06 07:43:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:49.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:49.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.417 226109 DEBUG nova.network.neutron [None req-97fc4646-845e-4352-80e3-1785faeab157 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updating instance_info_cache with network_info: [{"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
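[annotation] The network_info blob above is the instance's refreshed network cache entry: one OVS VIF on br-int with fixed IP 10.100.0.9 in 10.100.0.0/28, MTU 1442, bound by the "ovn" driver. A sketch of pulling the useful fields back out of an entry shaped like that; the structure is copied from the log, the values abridged:

    # Walk a network_info cache entry and list device, MAC and fixed IPs.
    import json

    network_info = json.loads("""[{
      "id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d",
      "address": "fa:16:3e:5d:97:b4",
      "devname": "tap70e780b6-88",
      "network": {
        "id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71",
        "bridge": "br-int",
        "subnets": [{
          "cidr": "10.100.0.0/28",
          "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4}]
        }]
      }
    }]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["devname"], vif["address"], "->", ", ".join(ips))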
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.433 226109 DEBUG oslo_concurrency.lockutils [None req-97fc4646-845e-4352-80e3-1785faeab157 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.434 226109 DEBUG nova.objects.instance [None req-97fc4646-845e-4352-80e3-1785faeab157 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'flavor' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:50 compute-1 kernel: tap70e780b6-88 (unregistering): left promiscuous mode
Dec 06 07:43:50 compute-1 NetworkManager[49031]: <info>  [1765007030.5044] device (tap70e780b6-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:43:50 compute-1 ovn_controller[130279]: 2025-12-06T07:43:50Z|00600|binding|INFO|Releasing lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d from this chassis (sb_readonly=0)
Dec 06 07:43:50 compute-1 ovn_controller[130279]: 2025-12-06T07:43:50Z|00601|binding|INFO|Setting lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d down in Southbound
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.569 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:50 compute-1 ovn_controller[130279]: 2025-12-06T07:43:50Z|00602|binding|INFO|Removing iface tap70e780b6-88 ovn-installed in OVS
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.570 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.575 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:97:b4 10.100.0.9'], port_security=['fa:16:3e:5d:97:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '119a621b-6198-42db-9a89-d73da6c2a2da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=70e780b6-88d2-47a1-99b8-a525dcb88b5d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.577 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 70e780b6-88d2-47a1-99b8-a525dcb88b5d in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.578 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.579 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9bbf88-fc4e-456e-9d0e-fbdae66dc9ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.580 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace which is not needed anymore
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.585 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:50 compute-1 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000096.scope: Deactivated successfully.
Dec 06 07:43:50 compute-1 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000096.scope: Consumed 6.507s CPU time.
Dec 06 07:43:50 compute-1 systemd-machined[190302]: Machine qemu-69-instance-00000096 terminated.
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.690 226109 INFO nova.virt.libvirt.driver [-] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Instance destroyed successfully.
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.690 226109 DEBUG nova.objects.instance [None req-97fc4646-845e-4352-80e3-1785faeab157 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'numa_topology' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:50 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283729]: [NOTICE]   (283733) : haproxy version is 2.8.14-c23fe91
Dec 06 07:43:50 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283729]: [NOTICE]   (283733) : path to executable is /usr/sbin/haproxy
Dec 06 07:43:50 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283729]: [WARNING]  (283733) : Exiting Master process...
Dec 06 07:43:50 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283729]: [ALERT]    (283733) : Current worker (283735) exited with code 143 (Terminated)
Dec 06 07:43:50 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283729]: [WARNING]  (283733) : All workers exited. Exiting... (0)
Dec 06 07:43:50 compute-1 systemd[1]: libpod-e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc.scope: Deactivated successfully.
Dec 06 07:43:50 compute-1 podman[283767]: 2025-12-06 07:43:50.703953836 +0000 UTC m=+0.044787221 container died e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:43:50 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc-userdata-shm.mount: Deactivated successfully.
Dec 06 07:43:50 compute-1 systemd[1]: var-lib-containers-storage-overlay-df33466f7d1ebafdae28db0531295ce9b81342c36d63838fe68ce480cc6b783e-merged.mount: Deactivated successfully.
Dec 06 07:43:50 compute-1 podman[283767]: 2025-12-06 07:43:50.743830897 +0000 UTC m=+0.084664282 container cleanup e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:43:50 compute-1 systemd[1]: libpod-conmon-e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc.scope: Deactivated successfully.
Dec 06 07:43:50 compute-1 kernel: tap70e780b6-88: entered promiscuous mode
Dec 06 07:43:50 compute-1 NetworkManager[49031]: <info>  [1765007030.7755] manager: (tap70e780b6-88): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Dec 06 07:43:50 compute-1 systemd-udevd[283748]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.777 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:50 compute-1 ovn_controller[130279]: 2025-12-06T07:43:50Z|00603|binding|INFO|Claiming lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d for this chassis.
Dec 06 07:43:50 compute-1 ovn_controller[130279]: 2025-12-06T07:43:50Z|00604|binding|INFO|70e780b6-88d2-47a1-99b8-a525dcb88b5d: Claiming fa:16:3e:5d:97:b4 10.100.0.9
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.785 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:97:b4 10.100.0.9'], port_security=['fa:16:3e:5d:97:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '119a621b-6198-42db-9a89-d73da6c2a2da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=70e780b6-88d2-47a1-99b8-a525dcb88b5d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:43:50 compute-1 NetworkManager[49031]: <info>  [1765007030.7867] device (tap70e780b6-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:43:50 compute-1 NetworkManager[49031]: <info>  [1765007030.7879] device (tap70e780b6-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:43:50 compute-1 ovn_controller[130279]: 2025-12-06T07:43:50Z|00605|binding|INFO|Setting lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d ovn-installed in OVS
Dec 06 07:43:50 compute-1 ovn_controller[130279]: 2025-12-06T07:43:50Z|00606|binding|INFO|Setting lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d up in Southbound
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.797 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.800 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:50 compute-1 systemd-machined[190302]: New machine qemu-70-instance-00000096.
Dec 06 07:43:50 compute-1 podman[283808]: 2025-12-06 07:43:50.817488949 +0000 UTC m=+0.054694653 container remove e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:43:50 compute-1 systemd[1]: Started Virtual Machine qemu-70-instance-00000096.
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.824 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[20ff7ad7-84ed-4131-983c-18d31e9d93c8]: (4, ('Sat Dec  6 07:43:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc)\ne998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc\nSat Dec  6 07:43:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (e998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc)\ne998897b839fc1139bae6cc28b409d8dc11963c231743ec49487fe69e4e013cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.827 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2b612603-4bd9-4f64-aba6-49e0a70956d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.828 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.829 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:50 compute-1 kernel: tap6d1a17d6-50: left promiscuous mode
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.845 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:50 compute-1 nova_compute[226101]: 2025-12-06 07:43:50.845 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.848 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[afcde2f6-80a7-48cb-be14-eb56b57c13ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.870 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f888645e-9747-4864-9e93-d4e868532d7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.871 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d632d1ad-0f1e-4c0a-8fdb-18a8caad31d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.888 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[34cce522-6eb9-49f4-a53b-8302dd7cc013]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730373, 'reachable_time': 20016, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283836, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 systemd[1]: run-netns-ovnmeta\x2d6d1a17d6\x2d5e44\x2d40b7\x2d832a\x2d81cb86c02e71.mount: Deactivated successfully.
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.890 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.891 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[471e6f29-961e-4bfb-bce3-95f342f65e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.892 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 70e780b6-88d2-47a1-99b8-a525dcb88b5d in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.894 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.905 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[625c6922-5f54-4f8c-89df-27da676f331f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.906 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d1a17d6-51 in ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.907 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d1a17d6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.907 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[40e35fa2-54e5-4b4c-88f6-cd0aabb0f8c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.908 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c00dac03-b497-4c2f-a69d-aa9c04a6df1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.920 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a947d776-777d-47d7-a3c1-ec23ec4fea0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.941 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[815394a0-4c76-437b-ab2e-98ac3e26a7a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.965 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6661a284-e7bf-437c-a44a-0e4e27e79ead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:50 compute-1 NetworkManager[49031]: <info>  [1765007030.9731] manager: (tap6d1a17d6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/271)
Dec 06 07:43:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:50.973 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c89765-8f28-4970-a195-e4457698e359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.004 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8a704987-467a-499e-8404-d37537aeaaff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.007 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd0aec6-6e07-4e28-b89c-a2d541e8b045]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:51 compute-1 NetworkManager[49031]: <info>  [1765007031.0267] device (tap6d1a17d6-50): carrier: link connected
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.031 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[cef3779a-d78c-478a-ba49-45946e44582e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.046 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e5fd2f1c-ece8-452c-be22-4ba7a0ca507e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731053, 'reachable_time': 36091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283862, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.061 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e896a032-3943-4287-9e92-e2fb5560d003]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:a2f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731053, 'tstamp': 731053}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283863, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.075 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[935184d3-b844-4900-a1f0-31cdf59e7369]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731053, 'reachable_time': 36091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283864, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.101 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[947ca713-f633-45b0-bd5c-fbd217fedf06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.158 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb7af71-6a1f-49c2-b641-86d413e9cd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.160 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.160 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.160 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:51 compute-1 kernel: tap6d1a17d6-50: entered promiscuous mode
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.162 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:51 compute-1 NetworkManager[49031]: <info>  [1765007031.1625] manager: (tap6d1a17d6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.163 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.165 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.166 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:51 compute-1 ovn_controller[130279]: 2025-12-06T07:43:51Z|00607|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.180 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.180 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.181 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4bcf6b-18cb-468f-b3cf-3db7b406699b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.182 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:43:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:43:51.182 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'env', 'PROCESS_TAG=haproxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d1a17d6-5e44-40b7-832a-81cb86c02e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:43:51 compute-1 podman[283896]: 2025-12-06 07:43:51.522289695 +0000 UTC m=+0.046212829 container create 2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:43:51 compute-1 systemd[1]: Started libpod-conmon-2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb.scope.
Dec 06 07:43:51 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:43:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51bd4d55e9b4fc4a8431a2afbc55c9d85c8dc600984ee91d746b219e71a9a09/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:51 compute-1 podman[283896]: 2025-12-06 07:43:51.589683721 +0000 UTC m=+0.113606865 container init 2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:43:51 compute-1 podman[283896]: 2025-12-06 07:43:51.497512212 +0000 UTC m=+0.021435366 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:43:51 compute-1 podman[283896]: 2025-12-06 07:43:51.595019082 +0000 UTC m=+0.118942216 container start 2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:43:51 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283946]: [NOTICE]   (283953) : New worker (283957) forked
Dec 06 07:43:51 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283946]: [NOTICE]   (283953) : Loading success.
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.618 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.619 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.660 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 119a621b-6198-42db-9a89-d73da6c2a2da due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.660 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007031.6414728, 119a621b-6198-42db-9a89-d73da6c2a2da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.661 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] VM Resumed (Lifecycle Event)
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.681 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.685 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.700 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.700 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007031.641902, 119a621b-6198-42db-9a89-d73da6c2a2da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.701 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] VM Started (Lifecycle Event)
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.717 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.719 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:43:51 compute-1 nova_compute[226101]: 2025-12-06 07:43:51.737 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:43:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:51.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:51.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:52 compute-1 ceph-mon[81689]: pgmap v2653: 305 pgs: 305 active+clean; 482 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.228 226109 DEBUG nova.compute.manager [None req-97fc4646-845e-4352-80e3-1785faeab157 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.301 226109 DEBUG nova.compute.manager [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-unplugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.302 226109 DEBUG oslo_concurrency.lockutils [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.302 226109 DEBUG oslo_concurrency.lockutils [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.302 226109 DEBUG oslo_concurrency.lockutils [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.303 226109 DEBUG nova.compute.manager [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] No waiting events found dispatching network-vif-unplugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.303 226109 WARNING nova.compute.manager [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received unexpected event network-vif-unplugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d for instance with vm_state active and task_state None.
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.303 226109 DEBUG nova.compute.manager [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.303 226109 DEBUG oslo_concurrency.lockutils [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.303 226109 DEBUG oslo_concurrency.lockutils [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.304 226109 DEBUG oslo_concurrency.lockutils [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.304 226109 DEBUG nova.compute.manager [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] No waiting events found dispatching network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.304 226109 WARNING nova.compute.manager [req-92724983-eecb-45d2-98fa-f6793be76b62 req-dd108d97-fb6a-4eff-bbc6-d72df6fb05e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received unexpected event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d for instance with vm_state active and task_state None.
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.604 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:52 compute-1 nova_compute[226101]: 2025-12-06 07:43:52.612 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:53.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:53.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:54 compute-1 ceph-mon[81689]: pgmap v2654: 305 pgs: 305 active+clean; 497 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 190 op/s
Dec 06 07:43:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.590 226109 DEBUG nova.compute.manager [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.590 226109 DEBUG oslo_concurrency.lockutils [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.590 226109 DEBUG oslo_concurrency.lockutils [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.591 226109 DEBUG oslo_concurrency.lockutils [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.591 226109 DEBUG nova.compute.manager [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] No waiting events found dispatching network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.591 226109 WARNING nova.compute.manager [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received unexpected event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d for instance with vm_state active and task_state None.
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.591 226109 DEBUG nova.compute.manager [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.591 226109 DEBUG oslo_concurrency.lockutils [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.592 226109 DEBUG oslo_concurrency.lockutils [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.592 226109 DEBUG oslo_concurrency.lockutils [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.592 226109 DEBUG nova.compute.manager [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] No waiting events found dispatching network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:55 compute-1 nova_compute[226101]: 2025-12-06 07:43:55.592 226109 WARNING nova.compute.manager [req-c68e0ffc-86b1-4dc6-ae54-7fc04079ed4a req-7f427082-141e-44b6-8762-8be00c407077 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received unexpected event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d for instance with vm_state active and task_state None.
Dec 06 07:43:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2032903449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:55 compute-1 ceph-mon[81689]: pgmap v2655: 305 pgs: 305 active+clean; 497 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.1 MiB/s wr, 278 op/s
Dec 06 07:43:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3309694968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:55.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:55.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:57 compute-1 nova_compute[226101]: 2025-12-06 07:43:57.614 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:43:57 compute-1 nova_compute[226101]: 2025-12-06 07:43:57.616 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:43:57 compute-1 nova_compute[226101]: 2025-12-06 07:43:57.616 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 07:43:57 compute-1 nova_compute[226101]: 2025-12-06 07:43:57.616 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 07:43:57 compute-1 nova_compute[226101]: 2025-12-06 07:43:57.650 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:57 compute-1 nova_compute[226101]: 2025-12-06 07:43:57.651 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 07:43:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:57.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:57.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:58 compute-1 ceph-mon[81689]: pgmap v2656: 305 pgs: 305 active+clean; 471 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 290 op/s
Dec 06 07:43:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1459538352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3852900960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:59 compute-1 ceph-mon[81689]: pgmap v2657: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.8 MiB/s wr, 283 op/s
Dec 06 07:43:59 compute-1 nova_compute[226101]: 2025-12-06 07:43:59.706 226109 DEBUG oslo_concurrency.lockutils [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "d954d607-525c-4edf-ab9e-56658dd2525a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:59 compute-1 nova_compute[226101]: 2025-12-06 07:43:59.707 226109 DEBUG oslo_concurrency.lockutils [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:59 compute-1 nova_compute[226101]: 2025-12-06 07:43:59.707 226109 DEBUG oslo_concurrency.lockutils [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:59 compute-1 nova_compute[226101]: 2025-12-06 07:43:59.707 226109 DEBUG oslo_concurrency.lockutils [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:59 compute-1 nova_compute[226101]: 2025-12-06 07:43:59.707 226109 DEBUG oslo_concurrency.lockutils [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:59 compute-1 nova_compute[226101]: 2025-12-06 07:43:59.709 226109 INFO nova.compute.manager [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Terminating instance
Dec 06 07:43:59 compute-1 nova_compute[226101]: 2025-12-06 07:43:59.710 226109 DEBUG nova.compute.manager [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:43:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:59.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:43:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:59.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:01.663 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:01.664 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:01.664 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:01 compute-1 kernel: tap1c911a62-4d (unregistering): left promiscuous mode
Dec 06 07:44:01 compute-1 NetworkManager[49031]: <info>  [1765007041.7561] device (tap1c911a62-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.767 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:01 compute-1 ovn_controller[130279]: 2025-12-06T07:44:01Z|00608|binding|INFO|Releasing lport 1c911a62-4d45-43ff-bf14-309d464c7c81 from this chassis (sb_readonly=0)
Dec 06 07:44:01 compute-1 ovn_controller[130279]: 2025-12-06T07:44:01Z|00609|binding|INFO|Setting lport 1c911a62-4d45-43ff-bf14-309d464c7c81 down in Southbound
Dec 06 07:44:01 compute-1 ovn_controller[130279]: 2025-12-06T07:44:01Z|00610|binding|INFO|Removing iface tap1c911a62-4d ovn-installed in OVS
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.770 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.783 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:01.787 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:66:fb 10.100.0.8'], port_security=['fa:16:3e:a0:66:fb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd954d607-525c-4edf-ab9e-56658dd2525a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=1c911a62-4d45-43ff-bf14-309d464c7c81) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:01.788 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 1c911a62-4d45-43ff-bf14-309d464c7c81 in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 unbound from our chassis
Dec 06 07:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:01.789 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35a27638-382c-4afb-83b0-edd6d7f4bca8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:01.790 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0c094e92-5ca5-45ca-bbca-a28e3c7e65c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:01.791 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 namespace which is not needed anymore
Dec 06 07:44:01 compute-1 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Dec 06 07:44:01 compute-1 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008b.scope: Consumed 21.083s CPU time.
Dec 06 07:44:01 compute-1 systemd-machined[190302]: Machine qemu-63-instance-0000008b terminated.
Dec 06 07:44:01 compute-1 ceph-mon[81689]: pgmap v2658: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.2 MiB/s wr, 266 op/s
Dec 06 07:44:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:01.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.948 226109 INFO nova.virt.libvirt.driver [-] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Instance destroyed successfully.
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.949 226109 DEBUG nova.objects.instance [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'resources' on Instance uuid d954d607-525c-4edf-ab9e-56658dd2525a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:01.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.962 226109 DEBUG nova.virt.libvirt.vif [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1486320730',display_name='tempest-₡-1486320730',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--1486320730',id=139,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:41:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-xgnkh50b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:41:00Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=d954d607-525c-4edf-ab9e-56658dd2525a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c911a62-4d45-43ff-bf14-309d464c7c81", "address": "fa:16:3e:a0:66:fb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c911a62-4d", "ovs_interfaceid": "1c911a62-4d45-43ff-bf14-309d464c7c81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.963 226109 DEBUG nova.network.os_vif_util [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "1c911a62-4d45-43ff-bf14-309d464c7c81", "address": "fa:16:3e:a0:66:fb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c911a62-4d", "ovs_interfaceid": "1c911a62-4d45-43ff-bf14-309d464c7c81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.963 226109 DEBUG nova.network.os_vif_util [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a0:66:fb,bridge_name='br-int',has_traffic_filtering=True,id=1c911a62-4d45-43ff-bf14-309d464c7c81,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c911a62-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.964 226109 DEBUG os_vif [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:66:fb,bridge_name='br-int',has_traffic_filtering=True,id=1c911a62-4d45-43ff-bf14-309d464c7c81,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c911a62-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.966 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.966 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c911a62-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.968 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.971 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:44:01 compute-1 nova_compute[226101]: 2025-12-06 07:44:01.973 226109 INFO os_vif [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:66:fb,bridge_name='br-int',has_traffic_filtering=True,id=1c911a62-4d45-43ff-bf14-309d464c7c81,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c911a62-4d')
Dec 06 07:44:02 compute-1 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[279856]: [NOTICE]   (279861) : haproxy version is 2.8.14-c23fe91
Dec 06 07:44:02 compute-1 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[279856]: [NOTICE]   (279861) : path to executable is /usr/sbin/haproxy
Dec 06 07:44:02 compute-1 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[279856]: [WARNING]  (279861) : Exiting Master process...
Dec 06 07:44:02 compute-1 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[279856]: [ALERT]    (279861) : Current worker (279864) exited with code 143 (Terminated)
Dec 06 07:44:02 compute-1 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[279856]: [WARNING]  (279861) : All workers exited. Exiting... (0)
Dec 06 07:44:02 compute-1 systemd[1]: libpod-584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6.scope: Deactivated successfully.
Dec 06 07:44:02 compute-1 podman[284008]: 2025-12-06 07:44:02.159757843 +0000 UTC m=+0.280275345 container died 584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:44:02 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6-userdata-shm.mount: Deactivated successfully.
Dec 06 07:44:02 compute-1 systemd[1]: var-lib-containers-storage-overlay-c719025635d9ef621e9648ad672d43a88af50b2095f33a7f436dfd8241a5d8eb-merged.mount: Deactivated successfully.
Dec 06 07:44:02 compute-1 podman[284008]: 2025-12-06 07:44:02.496389043 +0000 UTC m=+0.616906545 container cleanup 584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:44:02 compute-1 systemd[1]: libpod-conmon-584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6.scope: Deactivated successfully.
Dec 06 07:44:02 compute-1 nova_compute[226101]: 2025-12-06 07:44:02.534 226109 DEBUG nova.compute.manager [req-49d3d523-ac65-4fb7-946a-b4de42120631 req-43f49a25-84be-4d0a-8717-00145d701d1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Received event network-vif-unplugged-1c911a62-4d45-43ff-bf14-309d464c7c81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:02 compute-1 nova_compute[226101]: 2025-12-06 07:44:02.534 226109 DEBUG oslo_concurrency.lockutils [req-49d3d523-ac65-4fb7-946a-b4de42120631 req-43f49a25-84be-4d0a-8717-00145d701d1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:02 compute-1 nova_compute[226101]: 2025-12-06 07:44:02.535 226109 DEBUG oslo_concurrency.lockutils [req-49d3d523-ac65-4fb7-946a-b4de42120631 req-43f49a25-84be-4d0a-8717-00145d701d1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:02 compute-1 nova_compute[226101]: 2025-12-06 07:44:02.535 226109 DEBUG oslo_concurrency.lockutils [req-49d3d523-ac65-4fb7-946a-b4de42120631 req-43f49a25-84be-4d0a-8717-00145d701d1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:02 compute-1 nova_compute[226101]: 2025-12-06 07:44:02.535 226109 DEBUG nova.compute.manager [req-49d3d523-ac65-4fb7-946a-b4de42120631 req-43f49a25-84be-4d0a-8717-00145d701d1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] No waiting events found dispatching network-vif-unplugged-1c911a62-4d45-43ff-bf14-309d464c7c81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:02 compute-1 nova_compute[226101]: 2025-12-06 07:44:02.535 226109 DEBUG nova.compute.manager [req-49d3d523-ac65-4fb7-946a-b4de42120631 req-43f49a25-84be-4d0a-8717-00145d701d1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Received event network-vif-unplugged-1c911a62-4d45-43ff-bf14-309d464c7c81 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:44:02 compute-1 nova_compute[226101]: 2025-12-06 07:44:02.683 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:02 compute-1 podman[284069]: 2025-12-06 07:44:02.782205936 +0000 UTC m=+0.260492842 container remove 584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 07:44:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:02.787 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1aacd5-e184-48c8-af21-7881c1150e92]: (4, ('Sat Dec  6 07:44:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 (584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6)\n584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6\nSat Dec  6 07:44:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 (584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6)\n584ea7a794bf61845d399351356756bc59c18d18162c40941f36cafbfaff92f6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:02.789 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c2f447-9bcb-4892-ba58-c4f69595183f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:02.790 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:02 compute-1 kernel: tap35a27638-30: left promiscuous mode
Dec 06 07:44:02 compute-1 nova_compute[226101]: 2025-12-06 07:44:02.792 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:02 compute-1 nova_compute[226101]: 2025-12-06 07:44:02.806 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:02.808 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ad086602-0f7e-4a3b-8f26-fa117299008c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:02.826 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[143f3b69-b909-4340-a4e0-bbf40473d548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:02.827 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a9a7cd-f067-4e74-9430-a0e0c502f3f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:02.841 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7526de-67e8-41b5-b909-7ce953e5892e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713828, 'reachable_time': 31855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284085, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:02.843 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:44:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:02.843 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5f9364-e28d-40ec-a24e-f7e6a57419eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:02 compute-1 systemd[1]: run-netns-ovnmeta\x2d35a27638\x2d382c\x2d4afb\x2d83b0\x2dedd6d7f4bca8.mount: Deactivated successfully.
Dec 06 07:44:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/158173877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:03.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:04 compute-1 ceph-mon[81689]: pgmap v2659: 305 pgs: 305 active+clean; 452 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 279 op/s
Dec 06 07:44:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2504808701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/561146328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:04 compute-1 podman[284087]: 2025-12-06 07:44:04.071214909 +0000 UTC m=+0.053995185 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:44:04 compute-1 podman[284086]: 2025-12-06 07:44:04.073361506 +0000 UTC m=+0.057654553 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:44:04 compute-1 podman[284088]: 2025-12-06 07:44:04.097059424 +0000 UTC m=+0.077241840 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 06 07:44:04 compute-1 nova_compute[226101]: 2025-12-06 07:44:04.686 226109 DEBUG nova.compute.manager [req-24a8b160-046b-4f2f-9693-1f70b5676457 req-cb331226-8c29-4841-b6fc-b60a52404c00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Received event network-vif-plugged-1c911a62-4d45-43ff-bf14-309d464c7c81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:04 compute-1 nova_compute[226101]: 2025-12-06 07:44:04.687 226109 DEBUG oslo_concurrency.lockutils [req-24a8b160-046b-4f2f-9693-1f70b5676457 req-cb331226-8c29-4841-b6fc-b60a52404c00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:04 compute-1 nova_compute[226101]: 2025-12-06 07:44:04.687 226109 DEBUG oslo_concurrency.lockutils [req-24a8b160-046b-4f2f-9693-1f70b5676457 req-cb331226-8c29-4841-b6fc-b60a52404c00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:04 compute-1 nova_compute[226101]: 2025-12-06 07:44:04.687 226109 DEBUG oslo_concurrency.lockutils [req-24a8b160-046b-4f2f-9693-1f70b5676457 req-cb331226-8c29-4841-b6fc-b60a52404c00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:04 compute-1 nova_compute[226101]: 2025-12-06 07:44:04.688 226109 DEBUG nova.compute.manager [req-24a8b160-046b-4f2f-9693-1f70b5676457 req-cb331226-8c29-4841-b6fc-b60a52404c00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] No waiting events found dispatching network-vif-plugged-1c911a62-4d45-43ff-bf14-309d464c7c81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:04 compute-1 nova_compute[226101]: 2025-12-06 07:44:04.688 226109 WARNING nova.compute.manager [req-24a8b160-046b-4f2f-9693-1f70b5676457 req-cb331226-8c29-4841-b6fc-b60a52404c00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Received unexpected event network-vif-plugged-1c911a62-4d45-43ff-bf14-309d464c7c81 for instance with vm_state active and task_state deleting.
Dec 06 07:44:05 compute-1 ceph-mon[81689]: pgmap v2660: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.5 MiB/s wr, 337 op/s
Dec 06 07:44:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:05.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:05.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:06 compute-1 ovn_controller[130279]: 2025-12-06T07:44:06Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:97:b4 10.100.0.9
Dec 06 07:44:06 compute-1 ovn_controller[130279]: 2025-12-06T07:44:06Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:97:b4 10.100.0.9
Dec 06 07:44:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:06 compute-1 nova_compute[226101]: 2025-12-06 07:44:06.968 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:07 compute-1 nova_compute[226101]: 2025-12-06 07:44:07.721 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:07.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:07.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:08 compute-1 ceph-mon[81689]: pgmap v2661: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.2 MiB/s wr, 249 op/s
Dec 06 07:44:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:44:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/895270717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:44:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:44:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/895270717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:44:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/895270717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:44:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/895270717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:44:09 compute-1 ceph-mon[81689]: pgmap v2662: 305 pgs: 305 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.6 MiB/s wr, 288 op/s
Dec 06 07:44:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2028827212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:09.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:09.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e333 e333: 3 total, 3 up, 3 in
Dec 06 07:44:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:11.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:11.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:11 compute-1 nova_compute[226101]: 2025-12-06 07:44:11.970 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2461673043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:12 compute-1 nova_compute[226101]: 2025-12-06 07:44:12.723 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:13 compute-1 nova_compute[226101]: 2025-12-06 07:44:13.716 226109 INFO nova.virt.libvirt.driver [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Deleting instance files /var/lib/nova/instances/d954d607-525c-4edf-ab9e-56658dd2525a_del
Dec 06 07:44:13 compute-1 nova_compute[226101]: 2025-12-06 07:44:13.716 226109 INFO nova.virt.libvirt.driver [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Deletion of /var/lib/nova/instances/d954d607-525c-4edf-ab9e-56658dd2525a_del complete
Dec 06 07:44:13 compute-1 ceph-mon[81689]: pgmap v2663: 305 pgs: 305 active+clean; 458 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.6 MiB/s wr, 299 op/s
Dec 06 07:44:13 compute-1 ceph-mon[81689]: osdmap e333: 3 total, 3 up, 3 in
Dec 06 07:44:13 compute-1 ceph-mon[81689]: pgmap v2665: 305 pgs: 305 active+clean; 462 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.9 MiB/s wr, 306 op/s
Dec 06 07:44:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:13.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:13.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:14 compute-1 nova_compute[226101]: 2025-12-06 07:44:14.156 226109 INFO nova.compute.manager [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Took 14.45 seconds to destroy the instance on the hypervisor.
Dec 06 07:44:14 compute-1 nova_compute[226101]: 2025-12-06 07:44:14.157 226109 DEBUG oslo.service.loopingcall [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:44:14 compute-1 nova_compute[226101]: 2025-12-06 07:44:14.157 226109 DEBUG nova.compute.manager [-] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:44:14 compute-1 nova_compute[226101]: 2025-12-06 07:44:14.158 226109 DEBUG nova.network.neutron [-] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:44:15 compute-1 ceph-mon[81689]: pgmap v2666: 305 pgs: 305 active+clean; 487 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.2 MiB/s wr, 277 op/s
Dec 06 07:44:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e334 e334: 3 total, 3 up, 3 in
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.551 226109 DEBUG nova.network.neutron [-] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.611 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.621 226109 INFO nova.compute.manager [-] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Took 1.46 seconds to deallocate network for instance.
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.648 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.648 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.649 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
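[note] The Acquiring/acquired/released trio above, with its waited/held timings, is emitted by oslo.concurrency's synchronized decorator around the named semaphore. A minimal sketch of the same pattern (the function body is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs with the "compute_resources" semaphore held; entry and exit
        # produce the Acquiring/acquired/released debug lines seen above.
        pass

    clean_compute_node_cache()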
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.649 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.650 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
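[note] processutils.execute wraps the subprocess invocation logged here (it returns on line 07:44:16.146 below after 0.497s); it returns (stdout, stderr) and raises ProcessExecutionError on a non-zero exit. A minimal sketch of the same call; the df JSON field name is an assumption:

    import json
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    free_gib = json.loads(out)['stats']['total_avail_bytes'] / 1024 ** 3  # field assumed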
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.746 226109 DEBUG oslo_concurrency.lockutils [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.747 226109 DEBUG oslo_concurrency.lockutils [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.791 226109 DEBUG nova.scheduler.client.report [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:44:15 compute-1 nova_compute[226101]: 2025-12-06 07:44:15.885 226109 DEBUG nova.compute.manager [req-759178dd-4fb9-4a2b-85b4-1c1471f65bbb req-376f49dd-6461-4571-ba05-c8f78a91344d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Received event network-vif-deleted-1c911a62-4d45-43ff-bf14-309d464c7c81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:15.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:15.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.074 226109 DEBUG nova.scheduler.client.report [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.075 226109 DEBUG nova.compute.provider_tree [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
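[note] Placement treats each inventory record as schedulable capacity of (total - reserved) * allocation_ratio, so the inventory logged above works out as in this sketch:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1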
Dec 06 07:44:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:44:16 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3727198498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.146 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:16 compute-1 ceph-mon[81689]: osdmap e334: 3 total, 3 up, 3 in
Dec 06 07:44:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3727198498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.259 226109 DEBUG nova.scheduler.client.report [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.284 226109 DEBUG nova.scheduler.client.report [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.520 226109 DEBUG oslo_concurrency.processutils [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.769 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.770 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:44:16 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:16.796 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.796 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:16 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:16.797 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.946 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007041.9457207, d954d607-525c-4edf-ab9e-56658dd2525a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.947 226109 INFO nova.compute.manager [-] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] VM Stopped (Lifecycle Event)
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.956 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.958 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4161MB free_disk=20.817211151123047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.959 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.972 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e335 e335: 3 total, 3 up, 3 in
Dec 06 07:44:16 compute-1 nova_compute[226101]: 2025-12-06 07:44:16.984 226109 DEBUG nova.compute.manager [None req-1674888f-783b-4266-b19b-f659e68c77f3 - - - - - -] [instance: d954d607-525c-4edf-ab9e-56658dd2525a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:17 compute-1 nova_compute[226101]: 2025-12-06 07:44:17.725 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:17 compute-1 ceph-mon[81689]: osdmap e335: 3 total, 3 up, 3 in
Dec 06 07:44:17 compute-1 ceph-mon[81689]: pgmap v2669: 305 pgs: 305 active+clean; 517 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.1 MiB/s wr, 254 op/s
Dec 06 07:44:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:17.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:17.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1728587174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:44:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3064426482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:19 compute-1 nova_compute[226101]: 2025-12-06 07:44:19.887 226109 DEBUG oslo_concurrency.processutils [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:19 compute-1 nova_compute[226101]: 2025-12-06 07:44:19.892 226109 DEBUG nova.compute.provider_tree [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:44:19 compute-1 nova_compute[226101]: 2025-12-06 07:44:19.956 226109 DEBUG nova.scheduler.client.report [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:44:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:19.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:19.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:19 compute-1 sudo[284193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:19 compute-1 sudo[284193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:19 compute-1 sudo[284193]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:19 compute-1 nova_compute[226101]: 2025-12-06 07:44:19.993 226109 DEBUG oslo_concurrency.lockutils [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 4.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:19 compute-1 nova_compute[226101]: 2025-12-06 07:44:19.996 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 3.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:20 compute-1 sudo[284218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:44:20 compute-1 sudo[284218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:20 compute-1 sudo[284218]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:20 compute-1 nova_compute[226101]: 2025-12-06 07:44:20.059 226109 INFO nova.scheduler.client.report [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Deleted allocations for instance d954d607-525c-4edf-ab9e-56658dd2525a
Dec 06 07:44:20 compute-1 sudo[284243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:20 compute-1 sudo[284243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:20 compute-1 sudo[284243]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:20 compute-1 sudo[284268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:44:20 compute-1 sudo[284268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:20 compute-1 nova_compute[226101]: 2025-12-06 07:44:20.176 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 119a621b-6198-42db-9a89-d73da6c2a2da actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:44:20 compute-1 nova_compute[226101]: 2025-12-06 07:44:20.177 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:44:20 compute-1 nova_compute[226101]: 2025-12-06 07:44:20.177 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
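[note] The final view reconciles with the earlier entries: used_ram=640MB is the 512 MB reserved in the MEMORY_MB inventory plus the 128 MB allocation of the one remaining instance (119a621b-...), and used_vcpus/used_disk likewise match that allocation:

    reserved_mb = 512    # MEMORY_MB 'reserved' from the inventory logged above
    instance_mb = 128    # allocation of instance 119a621b-6198-42db-...
    assert reserved_mb + instance_mb == 640   # used_ram reported above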
Dec 06 07:44:20 compute-1 nova_compute[226101]: 2025-12-06 07:44:20.233 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:20 compute-1 nova_compute[226101]: 2025-12-06 07:44:20.286 226109 DEBUG oslo_concurrency.lockutils [None req-d3ec1b6a-fa21-48f6-9d07-960bc6e4165e 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "d954d607-525c-4edf-ab9e-56658dd2525a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 20.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:20 compute-1 sudo[284268]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:21 compute-1 nova_compute[226101]: 2025-12-06 07:44:21.289 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:21 compute-1 nova_compute[226101]: 2025-12-06 07:44:21.299 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:44:21 compute-1 ceph-mon[81689]: pgmap v2670: 305 pgs: 305 active+clean; 543 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.7 MiB/s wr, 229 op/s
Dec 06 07:44:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2331378924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3064426482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:21 compute-1 nova_compute[226101]: 2025-12-06 07:44:21.369 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:44:21 compute-1 nova_compute[226101]: 2025-12-06 07:44:21.438 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:44:21 compute-1 nova_compute[226101]: 2025-12-06 07:44:21.439 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:21.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:21 compute-1 nova_compute[226101]: 2025-12-06 07:44:21.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:21.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:22 compute-1 nova_compute[226101]: 2025-12-06 07:44:22.726 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:44:22.798 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:44:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:44:22 compute-1 ceph-mon[81689]: pgmap v2671: 305 pgs: 305 active+clean; 549 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 6.4 MiB/s wr, 299 op/s
Dec 06 07:44:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/323194662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:44:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:44:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:44:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:44:23 compute-1 nova_compute[226101]: 2025-12-06 07:44:23.419 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:23 compute-1 nova_compute[226101]: 2025-12-06 07:44:23.419 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:44:23 compute-1 nova_compute[226101]: 2025-12-06 07:44:23.419 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:44:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:23.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:23.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:24 compute-1 ceph-mon[81689]: pgmap v2672: 305 pgs: 305 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.7 MiB/s wr, 252 op/s
Dec 06 07:44:24 compute-1 nova_compute[226101]: 2025-12-06 07:44:24.683 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:44:24 compute-1 nova_compute[226101]: 2025-12-06 07:44:24.684 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:44:24 compute-1 nova_compute[226101]: 2025-12-06 07:44:24.685 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:44:24 compute-1 nova_compute[226101]: 2025-12-06 07:44:24.686 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 e336: 3 total, 3 up, 3 in
Dec 06 07:44:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:25.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:25.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:26 compute-1 ceph-mon[81689]: pgmap v2673: 305 pgs: 305 active+clean; 567 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.6 MiB/s wr, 258 op/s
Dec 06 07:44:26 compute-1 ceph-mon[81689]: osdmap e336: 3 total, 3 up, 3 in
Dec 06 07:44:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3531443425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:26 compute-1 nova_compute[226101]: 2025-12-06 07:44:26.976 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:27 compute-1 nova_compute[226101]: 2025-12-06 07:44:27.767 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:27 compute-1 nova_compute[226101]: 2025-12-06 07:44:27.852 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updating instance_info_cache with network_info: [{"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
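[note] The network_info blob above is plain JSON, so the fields of interest when tracing a port (MAC, fixed IP, tap device) can be pulled out directly; a trimmed sketch with just those keys:

    vif = {  # abridged from the network_info entry above
        "address": "fa:16:3e:5d:97:b4",
        "devname": "tap70e780b6-88",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.9"}]}]},
    }
    print(vif["address"],
          vif["network"]["subnets"][0]["ips"][0]["address"],
          vif["devname"])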
Dec 06 07:44:27 compute-1 nova_compute[226101]: 2025-12-06 07:44:27.909 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:44:27 compute-1 nova_compute[226101]: 2025-12-06 07:44:27.910 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:44:27 compute-1 nova_compute[226101]: 2025-12-06 07:44:27.911 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:27 compute-1 nova_compute[226101]: 2025-12-06 07:44:27.911 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:27 compute-1 nova_compute[226101]: 2025-12-06 07:44:27.912 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:27 compute-1 nova_compute[226101]: 2025-12-06 07:44:27.912 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:27 compute-1 nova_compute[226101]: 2025-12-06 07:44:27.912 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:44:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:27.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:28 compute-1 sudo[284347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:28 compute-1 sudo[284347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:28 compute-1 sudo[284347]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:28 compute-1 sudo[284372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:44:28 compute-1 sudo[284372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:28 compute-1 sudo[284372]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:28 compute-1 ceph-mon[81689]: pgmap v2675: 305 pgs: 305 active+clean; 567 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 211 op/s
Dec 06 07:44:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/678080406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:28 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:44:29 compute-1 ovn_controller[130279]: 2025-12-06T07:44:29Z|00611|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:44:29 compute-1 nova_compute[226101]: 2025-12-06 07:44:29.229 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:29 compute-1 ovn_controller[130279]: 2025-12-06T07:44:29Z|00612|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:44:29 compute-1 nova_compute[226101]: 2025-12-06 07:44:29.409 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:29 compute-1 nova_compute[226101]: 2025-12-06 07:44:29.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:44:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:29.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:29.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:30 compute-1 ceph-mon[81689]: pgmap v2676: 305 pgs: 305 active+clean; 573 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 162 op/s
Dec 06 07:44:31 compute-1 nova_compute[226101]: 2025-12-06 07:44:31.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:31 compute-1 nova_compute[226101]: 2025-12-06 07:44:31.980 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:31.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:32.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:32 compute-1 ceph-mon[81689]: pgmap v2677: 305 pgs: 305 active+clean; 578 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Dec 06 07:44:32 compute-1 nova_compute[226101]: 2025-12-06 07:44:32.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:32 compute-1 nova_compute[226101]: 2025-12-06 07:44:32.771 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:33.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:34.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:34 compute-1 ceph-mon[81689]: pgmap v2678: 305 pgs: 305 active+clean; 578 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 1.7 MiB/s wr, 79 op/s
Dec 06 07:44:35 compute-1 podman[284399]: 2025-12-06 07:44:35.125527356 +0000 UTC m=+0.110509966 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:44:35 compute-1 podman[284398]: 2025-12-06 07:44:35.135276928 +0000 UTC m=+0.121402359 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:44:35 compute-1 podman[284400]: 2025-12-06 07:44:35.184203585 +0000 UTC m=+0.157507371 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:44:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3865790578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:35.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:36.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:36 compute-1 ceph-mon[81689]: pgmap v2679: 305 pgs: 305 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 193 KiB/s wr, 38 op/s
Dec 06 07:44:36 compute-1 nova_compute[226101]: 2025-12-06 07:44:36.981 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:37 compute-1 ceph-mon[81689]: pgmap v2680: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 189 KiB/s wr, 36 op/s
Dec 06 07:44:37 compute-1 nova_compute[226101]: 2025-12-06 07:44:37.814 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:37.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:38.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:39 compute-1 ceph-mon[81689]: pgmap v2681: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 75 KiB/s rd, 85 KiB/s wr, 42 op/s
Dec 06 07:44:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:39.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:40.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2559247986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:41 compute-1 nova_compute[226101]: 2025-12-06 07:44:41.569 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquiring lock "503091c4-5f9e-4c5c-9061-e47a7635bd3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:41 compute-1 nova_compute[226101]: 2025-12-06 07:44:41.570 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "503091c4-5f9e-4c5c-9061-e47a7635bd3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:41 compute-1 nova_compute[226101]: 2025-12-06 07:44:41.607 226109 DEBUG nova.compute.manager [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:44:41 compute-1 nova_compute[226101]: 2025-12-06 07:44:41.715 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:41 compute-1 nova_compute[226101]: 2025-12-06 07:44:41.716 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:41 compute-1 nova_compute[226101]: 2025-12-06 07:44:41.725 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:44:41 compute-1 nova_compute[226101]: 2025-12-06 07:44:41.725 226109 INFO nova.compute.claims [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:44:41 compute-1 nova_compute[226101]: 2025-12-06 07:44:41.884 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:41 compute-1 nova_compute[226101]: 2025-12-06 07:44:41.983 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:41.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:42.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:42 compute-1 ceph-mon[81689]: pgmap v2682: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 78 KiB/s wr, 40 op/s
Dec 06 07:44:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3681047041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4111193492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:44:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/289831042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.636 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.752s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.643 226109 DEBUG nova.compute.provider_tree [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.660 226109 DEBUG nova.scheduler.client.report [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.686 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.687 226109 DEBUG nova.compute.manager [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.751 226109 DEBUG nova.compute.manager [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.769 226109 INFO nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.788 226109 DEBUG nova.compute.manager [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.816 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.904 226109 DEBUG nova.compute.manager [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.906 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.907 226109 INFO nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Creating image(s)
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.936 226109 DEBUG nova.storage.rbd_utils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.962 226109 DEBUG nova.storage.rbd_utils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.988 226109 DEBUG nova.storage.rbd_utils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:42 compute-1 nova_compute[226101]: 2025-12-06 07:44:42.992 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:43 compute-1 nova_compute[226101]: 2025-12-06 07:44:43.058 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:43 compute-1 nova_compute[226101]: 2025-12-06 07:44:43.059 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:43 compute-1 nova_compute[226101]: 2025-12-06 07:44:43.060 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:43 compute-1 nova_compute[226101]: 2025-12-06 07:44:43.060 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:43 compute-1 nova_compute[226101]: 2025-12-06 07:44:43.095 226109 DEBUG nova.storage.rbd_utils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:43 compute-1 nova_compute[226101]: 2025-12-06 07:44:43.099 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/289831042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:43 compute-1 ceph-mon[81689]: pgmap v2683: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 33 KiB/s wr, 36 op/s
Dec 06 07:44:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:44.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:44.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:45 compute-1 ceph-mon[81689]: pgmap v2684: 305 pgs: 305 active+clean; 541 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 221 KiB/s wr, 54 op/s
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.293 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.354 226109 DEBUG nova.storage.rbd_utils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] resizing rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.472 226109 DEBUG nova.objects.instance [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lazy-loading 'migration_context' on Instance uuid 503091c4-5f9e-4c5c-9061-e47a7635bd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.514 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.514 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Ensure instance console log exists: /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.515 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.515 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.515 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.517 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.521 226109 WARNING nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.525 226109 DEBUG nova.virt.libvirt.host [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.526 226109 DEBUG nova.virt.libvirt.host [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.529 226109 DEBUG nova.virt.libvirt.host [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.530 226109 DEBUG nova.virt.libvirt.host [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.531 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.531 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.532 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.532 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.532 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.533 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.533 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.533 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.533 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.533 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.534 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.534 226109 DEBUG nova.virt.hardware [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.536 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:44:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2944281054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.967 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:45 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.995 226109 DEBUG nova.storage.rbd_utils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:46 compute-1 nova_compute[226101]: 2025-12-06 07:44:45.999 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:46.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:46.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4294926738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2944281054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:44:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1507784745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:46 compute-1 nova_compute[226101]: 2025-12-06 07:44:46.456 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:46 compute-1 nova_compute[226101]: 2025-12-06 07:44:46.458 226109 DEBUG nova.objects.instance [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 503091c4-5f9e-4c5c-9061-e47a7635bd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:46 compute-1 nova_compute[226101]: 2025-12-06 07:44:46.480 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:44:46 compute-1 nova_compute[226101]:   <uuid>503091c4-5f9e-4c5c-9061-e47a7635bd3e</uuid>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   <name>instance-0000009c</name>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerShowV257Test-server-1341454059</nova:name>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:44:45</nova:creationTime>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <nova:user uuid="ea0c2e89d39744778d7bbb99f8dc9934">tempest-ServerShowV257Test-170530115-project-member</nova:user>
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <nova:project uuid="c9ca41d51b00465db7923ccd90b0b1fe">tempest-ServerShowV257Test-170530115</nova:project>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <system>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <entry name="serial">503091c4-5f9e-4c5c-9061-e47a7635bd3e</entry>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <entry name="uuid">503091c4-5f9e-4c5c-9061-e47a7635bd3e</entry>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     </system>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   <os>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   </os>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   <features>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   </features>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk">
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       </source>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config">
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       </source>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:44:46 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/console.log" append="off"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <video>
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     </video>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:44:46 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:44:46 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:44:46 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:44:46 compute-1 nova_compute[226101]: </domain>
Dec 06 07:44:46 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:44:46 compute-1 nova_compute[226101]: 2025-12-06 07:44:46.818 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:44:46 compute-1 nova_compute[226101]: 2025-12-06 07:44:46.818 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:44:46 compute-1 nova_compute[226101]: 2025-12-06 07:44:46.819 226109 INFO nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Using config drive
Dec 06 07:44:46 compute-1 nova_compute[226101]: 2025-12-06 07:44:46.848 226109 DEBUG nova.storage.rbd_utils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:46 compute-1 nova_compute[226101]: 2025-12-06 07:44:46.986 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:47 compute-1 nova_compute[226101]: 2025-12-06 07:44:47.123 226109 INFO nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Creating config drive at /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config
Dec 06 07:44:47 compute-1 nova_compute[226101]: 2025-12-06 07:44:47.129 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplnpbcdnh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:47 compute-1 nova_compute[226101]: 2025-12-06 07:44:47.262 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplnpbcdnh" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:47 compute-1 nova_compute[226101]: 2025-12-06 07:44:47.289 226109 DEBUG nova.storage.rbd_utils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:47 compute-1 nova_compute[226101]: 2025-12-06 07:44:47.292 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1507784745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:47 compute-1 ceph-mon[81689]: pgmap v2685: 305 pgs: 305 active+clean; 527 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 205 KiB/s wr, 49 op/s
Dec 06 07:44:47 compute-1 nova_compute[226101]: 2025-12-06 07:44:47.501 226109 DEBUG oslo_concurrency.processutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:47 compute-1 nova_compute[226101]: 2025-12-06 07:44:47.502 226109 INFO nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Deleting local config drive /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config because it was imported into RBD.
Dec 06 07:44:47 compute-1 systemd-machined[190302]: New machine qemu-71-instance-0000009c.
Dec 06 07:44:47 compute-1 systemd[1]: Started Virtual Machine qemu-71-instance-0000009c.
Dec 06 07:44:47 compute-1 nova_compute[226101]: 2025-12-06 07:44:47.862 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:48.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:48.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.222 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007088.222259, 503091c4-5f9e-4c5c-9061-e47a7635bd3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.223 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] VM Resumed (Lifecycle Event)
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.226 226109 DEBUG nova.compute.manager [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.227 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.231 226109 INFO nova.virt.libvirt.driver [-] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Instance spawned successfully.
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.231 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.255 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.256 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.256 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.256 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.257 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.257 226109 DEBUG nova.virt.libvirt.driver [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
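[annotation] The six "Found default for ..." lines record the bus and model defaults the libvirt driver chose because the image did not specify them; nova persists these per instance so the virtual hardware stays stable across rebuilds and migrations. The recorded mapping, copied directly from the log lines above:

  # Defaults registered for instance 503091c4-5f9e-4c5c-9061-e47a7635bd3e
  registered_image_property_defaults = {
      "hw_cdrom_bus": "sata",
      "hw_disk_bus": "virtio",
      "hw_input_bus": "usb",
      "hw_pointer_model": "usbtablet",
      "hw_video_model": "virtio",
      "hw_vif_model": "virtio",
  }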
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.261 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.264 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.300 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.300 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007088.2234454, 503091c4-5f9e-4c5c-9061-e47a7635bd3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.300 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] VM Started (Lifecycle Event)
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.338 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.341 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.345 226109 INFO nova.compute.manager [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Took 5.44 seconds to spawn the instance on the hypervisor.
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.345 226109 DEBUG nova.compute.manager [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.382 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.434 226109 INFO nova.compute.manager [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Took 6.76 seconds to build instance.
Dec 06 07:44:48 compute-1 nova_compute[226101]: 2025-12-06 07:44:48.461 226109 DEBUG oslo_concurrency.lockutils [None req-1d5f598f-121d-4bf0-aae5-82ecc9d05e80 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "503091c4-5f9e-4c5c-9061-e47a7635bd3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
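[annotation] The Resumed/Started lifecycle events above arrive while the build request still holds the instance lock, so the power-state sync sees task_state "spawning" (DB power_state 0, VM power_state 1) and skips rather than fight the in-flight task. A simplified sketch of that guard, assuming nova's power-state encoding of 0 = NOSTATE and 1 = RUNNING; this is an illustration, not nova's actual code:

  NOSTATE, RUNNING = 0, 1  # assumed nova.compute.power_state values

  def maybe_sync_power_state(db_instance, vm_power_state):
      # While a task such as 'spawning' owns the instance, lifecycle-driven
      # syncs are skipped, exactly as the log reports.
      if db_instance["task_state"] is not None:
          print("During sync_power_state the instance has a pending task "
                f"({db_instance['task_state']}). Skip.")
          return
      if db_instance["power_state"] != vm_power_state:
          db_instance["power_state"] = vm_power_state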
Dec 06 07:44:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:50.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:50.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:50 compute-1 ceph-mon[81689]: pgmap v2686: 305 pgs: 305 active+clean; 510 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 521 KiB/s wr, 99 op/s
Dec 06 07:44:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3847219544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:51 compute-1 nova_compute[226101]: 2025-12-06 07:44:51.892 226109 INFO nova.compute.manager [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Rebuilding instance
Dec 06 07:44:51 compute-1 nova_compute[226101]: 2025-12-06 07:44:51.988 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:52.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:52.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:52 compute-1 ceph-mon[81689]: pgmap v2687: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Dec 06 07:44:52 compute-1 nova_compute[226101]: 2025-12-06 07:44:52.257 226109 DEBUG nova.objects.instance [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lazy-loading 'trusted_certs' on Instance uuid 503091c4-5f9e-4c5c-9061-e47a7635bd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:52 compute-1 nova_compute[226101]: 2025-12-06 07:44:52.283 226109 DEBUG nova.compute.manager [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:52 compute-1 nova_compute[226101]: 2025-12-06 07:44:52.374 226109 DEBUG nova.objects.instance [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lazy-loading 'pci_requests' on Instance uuid 503091c4-5f9e-4c5c-9061-e47a7635bd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:52 compute-1 nova_compute[226101]: 2025-12-06 07:44:52.403 226109 DEBUG nova.objects.instance [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 503091c4-5f9e-4c5c-9061-e47a7635bd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:52 compute-1 nova_compute[226101]: 2025-12-06 07:44:52.428 226109 DEBUG nova.objects.instance [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lazy-loading 'resources' on Instance uuid 503091c4-5f9e-4c5c-9061-e47a7635bd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:52 compute-1 nova_compute[226101]: 2025-12-06 07:44:52.453 226109 DEBUG nova.objects.instance [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lazy-loading 'migration_context' on Instance uuid 503091c4-5f9e-4c5c-9061-e47a7635bd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:52 compute-1 nova_compute[226101]: 2025-12-06 07:44:52.589 226109 DEBUG nova.objects.instance [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
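[annotation] The run of "Lazy-loading '...' on Instance" entries is oslo.versionedobjects behavior: fields such as trusted_certs and pci_requests are not fetched with the base instance record, and the first attribute access triggers obj_load_attr, which pulls them from the database. A hypothetical illustration of the pattern (not nova's actual Instance class):

  class LazyInstance:
      # Hypothetical sketch of the obj_load_attr pattern seen in the log;
      # nova's real object lives in nova.objects.instance.
      def __init__(self, uuid):
          self.uuid = uuid

      def __getattr__(self, name):
          # Only reached when 'name' was never loaded: fetch it, then cache
          # it as a normal attribute so later accesses skip this path.
          print(f"Lazy-loading '{name}' on Instance uuid {self.uuid}")
          value = self._fetch_from_db(name)  # stand-in for the DB round trip
          setattr(self, name, value)
          return value

      def _fetch_from_db(self, name):
          return None  # placeholder for brevity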
Dec 06 07:44:52 compute-1 nova_compute[226101]: 2025-12-06 07:44:52.592 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:44:52 compute-1 nova_compute[226101]: 2025-12-06 07:44:52.865 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:53 compute-1 ceph-mon[81689]: pgmap v2688: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Dec 06 07:44:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:54.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:54.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:56.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:56.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:56 compute-1 ceph-mon[81689]: pgmap v2689: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Dec 06 07:44:56 compute-1 nova_compute[226101]: 2025-12-06 07:44:56.989 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:57 compute-1 nova_compute[226101]: 2025-12-06 07:44:57.917 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:58.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:44:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:44:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:58.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:44:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:58 compute-1 ceph-mon[81689]: pgmap v2690: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 219 op/s
Dec 06 07:44:59 compute-1 ceph-mon[81689]: pgmap v2691: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 225 op/s
Dec 06 07:45:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:00.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:00.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:01 compute-1 ceph-mon[81689]: pgmap v2692: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 176 op/s
Dec 06 07:45:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:45:01.664 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:45:01.664 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:45:01.665 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:01 compute-1 nova_compute[226101]: 2025-12-06 07:45:01.990 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:02.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:02.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:02 compute-1 nova_compute[226101]: 2025-12-06 07:45:02.638 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:45:02 compute-1 nova_compute[226101]: 2025-12-06 07:45:02.948 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:04.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:04.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:04 compute-1 ceph-mon[81689]: pgmap v2693: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 KiB/s wr, 112 op/s
Dec 06 07:45:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/485716860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2923747230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:04 compute-1 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Dec 06 07:45:04 compute-1 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009c.scope: Consumed 14.083s CPU time.
Dec 06 07:45:04 compute-1 systemd-machined[190302]: Machine qemu-71-instance-0000009c terminated.
Dec 06 07:45:05 compute-1 nova_compute[226101]: 2025-12-06 07:45:05.651 226109 INFO nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Instance shutdown successfully after 13 seconds.
Dec 06 07:45:05 compute-1 nova_compute[226101]: 2025-12-06 07:45:05.656 226109 INFO nova.virt.libvirt.driver [-] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Instance destroyed successfully.
Dec 06 07:45:05 compute-1 nova_compute[226101]: 2025-12-06 07:45:05.660 226109 INFO nova.virt.libvirt.driver [-] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Instance destroyed successfully.
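[annotation] The rebuild path attempts a graceful stop first, and the timeline above shows it: _clean_shutdown sent a shutdown at 07:44:52, found the guest still in state 1 after 10 seconds and resent the request at 07:45:02, and the guest powered off 3 seconds later ("shutdown successfully after 13 seconds") before the domain was destroyed. A simplified sketch of that poll-and-retry loop; the guest object, timeout, and interval are assumptions, not nova's configured values:

  import time

  def clean_shutdown(guest, timeout=60, retry_interval=10):
      # Simplified version of the behavior in the log: send shutdown,
      # poll once per second, resend every retry_interval seconds.
      guest.shutdown()
      next_resend = retry_interval
      for elapsed in range(1, timeout + 1):
          time.sleep(1)
          if not guest.is_running():
              print(f"Instance shutdown successfully after {elapsed} seconds.")
              return True
          if elapsed >= next_resend:
              print(f"Instance in state 1 after {elapsed} seconds - resending shutdown")
              guest.shutdown()
              next_resend += retry_interval
      return False  # caller falls back to a hard destroy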
Dec 06 07:45:05 compute-1 ceph-mgr[82049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 07:45:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:06.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:06 compute-1 podman[284848]: 2025-12-06 07:45:06.069191129 +0000 UTC m=+0.054783516 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:45:06 compute-1 ceph-mon[81689]: pgmap v2694: 305 pgs: 305 active+clean; 576 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 215 op/s
Dec 06 07:45:06 compute-1 podman[284847]: 2025-12-06 07:45:06.101145339 +0000 UTC m=+0.087428184 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:45:06 compute-1 podman[284849]: 2025-12-06 07:45:06.109708419 +0000 UTC m=+0.092672335 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.367 226109 INFO nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Deleting instance files /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e_del
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.367 226109 INFO nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Deletion of /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e_del complete
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.580 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.581 226109 INFO nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Creating image(s)
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.603 226109 DEBUG nova.storage.rbd_utils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.627 226109 DEBUG nova.storage.rbd_utils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.654 226109 DEBUG nova.storage.rbd_utils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.657 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.719 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
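[annotation] Before reusing the cached base image, nova inspects it with qemu-img info wrapped in oslo_concurrency.prlimit, capping the address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a malformed image cannot hang or balloon the inspection. A sketch using the oslo.concurrency API, assuming ProcessLimits accepts these keyword names; the CLI flags in the log are what this expands to:

  from oslo_concurrency import processutils

  limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,
                                      cpu_time=30)
  out, _err = processutils.execute(
      "env", "LC_ALL=C", "LANG=C",
      "qemu-img", "info",
      "/var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737",
      "--force-share", "--output=json",
      prlimit=limits,
  )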
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.721 226109 DEBUG oslo_concurrency.lockutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.721 226109 DEBUG oslo_concurrency.lockutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.722 226109 DEBUG oslo_concurrency.lockutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
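[annotation] The acquire/release pair around fetch_func_sync is the image-cache guard: downloads are serialized per base-image hash so concurrent builds do not fetch the same backing file twice. Here the lock is held 0.000s because the base file already exists. A minimal sketch with oslo.concurrency's lock context manager; the fetch step is hypothetical:

  from oslo_concurrency import lockutils

  image_hash = "40c8d19f192ebe6ef01b2a3ea96d896752dcd737"

  with lockutils.lock(image_hash, external=False):
      # A real fetch would only run here when the cached base file is
      # missing, so an existing cache entry makes this critical section
      # nearly instantaneous, as the 0.000s hold time above shows.
      pass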
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.802 226109 DEBUG nova.storage.rbd_utils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.806 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:06 compute-1 nova_compute[226101]: 2025-12-06 07:45:06.991 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.159 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:07 compute-1 ceph-mon[81689]: pgmap v2695: 305 pgs: 305 active+clean; 603 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.0 MiB/s wr, 170 op/s
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.226 226109 DEBUG nova.storage.rbd_utils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] resizing rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.318 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
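[annotation] The fresh root disk is built in the two steps visible above: the small CirrOS base image is imported into the vms pool, then resized to the flavor's root disk. m1.nano (shown later in this log) has root_gb=1, and 1 GiB = 1024**3 = 1073741824 bytes, exactly the value in the resize entry:

  GiB = 1024 ** 3

  flavor_root_gb = 1                  # m1.nano, from the flavor in the log
  target_bytes = flavor_root_gb * GiB
  assert target_bytes == 1073741824   # matches "resizing rbd image ... to 1073741824"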
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.319 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Ensure instance console log exists: /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.320 226109 DEBUG oslo_concurrency.lockutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.320 226109 DEBUG oslo_concurrency.lockutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.320 226109 DEBUG oslo_concurrency.lockutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.322 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.325 226109 WARNING nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.330 226109 DEBUG nova.virt.libvirt.host [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.330 226109 DEBUG nova.virt.libvirt.host [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.334 226109 DEBUG nova.virt.libvirt.host [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.334 226109 DEBUG nova.virt.libvirt.host [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.335 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.335 226109 DEBUG nova.virt.hardware [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.336 226109 DEBUG nova.virt.hardware [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.336 226109 DEBUG nova.virt.hardware [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.336 226109 DEBUG nova.virt.hardware [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.336 226109 DEBUG nova.virt.hardware [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.336 226109 DEBUG nova.virt.hardware [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.336 226109 DEBUG nova.virt.hardware [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.337 226109 DEBUG nova.virt.hardware [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.337 226109 DEBUG nova.virt.hardware [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.337 226109 DEBUG nova.virt.hardware [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.337 226109 DEBUG nova.virt.hardware [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
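[annotation] The topology trace above shows how nova narrows vCPU layouts: with no hw:cpu_* limits on the flavor or image, constraints stay 0:0:0 against a 65536 ceiling, and for one vCPU the only factorization is sockets=1, cores=1, threads=1, which is what lands in the generated XML below. A small enumeration sketch in the same spirit (not nova's actual algorithm):

  def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
      # Enumerate (sockets, cores, threads) triples whose product equals vcpus.
      topos = []
      for s in range(1, min(vcpus, max_sockets) + 1):
          for c in range(1, min(vcpus, max_cores) + 1):
              for t in range(1, min(vcpus, max_threads) + 1):
                  if s * c * t == vcpus:
                      topos.append((s, c, t))
      return topos

  print(possible_topologies(1))  # [(1, 1, 1)] -> one possible topology, as logged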
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.337 226109 DEBUG nova.objects.instance [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lazy-loading 'vcpu_model' on Instance uuid 503091c4-5f9e-4c5c-9061-e47a7635bd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.354 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:45:07 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/177074201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.768 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.792 226109 DEBUG nova.storage.rbd_utils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:07 compute-1 nova_compute[226101]: 2025-12-06 07:45:07.796 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.002 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:08.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:08.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/177074201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:45:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4026041978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.291 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
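[annotation] The "ceph mon dump --format=json" calls above are how the RBD image backend discovers monitor endpoints; the parsed addresses become the three <host> elements inside the <source protocol="rbd"> stanza of the guest XML that follows. A sketch of extracting the v1 (port 6789) addresses, assuming the standard mon_map JSON layout with a "mons" list and public_addrs.addrvec entries:

  import json, subprocess

  dump = json.loads(subprocess.check_output(
      ["ceph", "mon", "dump", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]))

  # Assumed layout: each mon entry carries a public_addrs.addrvec list of
  # {"type": "v1"|"v2", "addr": "ip:port/nonce"} entries.
  hosts = []
  for mon in dump["mons"]:
      for a in mon["public_addrs"]["addrvec"]:
          if a["type"] == "v1":
              hosts.append(a["addr"].split("/")[0])  # e.g. "192.168.122.100:6789"
  print(hosts)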
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.294 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:45:08 compute-1 nova_compute[226101]:   <uuid>503091c4-5f9e-4c5c-9061-e47a7635bd3e</uuid>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   <name>instance-0000009c</name>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerShowV257Test-server-1341454059</nova:name>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:45:07</nova:creationTime>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <nova:user uuid="ea0c2e89d39744778d7bbb99f8dc9934">tempest-ServerShowV257Test-170530115-project-member</nova:user>
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <nova:project uuid="c9ca41d51b00465db7923ccd90b0b1fe">tempest-ServerShowV257Test-170530115</nova:project>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <nova:ports/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <system>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <entry name="serial">503091c4-5f9e-4c5c-9061-e47a7635bd3e</entry>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <entry name="uuid">503091c4-5f9e-4c5c-9061-e47a7635bd3e</entry>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     </system>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   <os>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   </os>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   <features>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   </features>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk">
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       </source>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config">
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       </source>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:45:08 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/console.log" append="off"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <video>
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     </video>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:45:08 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:45:08 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:45:08 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:45:08 compute-1 nova_compute[226101]: </domain>
Dec 06 07:45:08 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
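
    [annotation] The domain XML logged above wires both disks to RBD. A small sketch
    of pulling those sources back out of such XML with only the standard library,
    useful when auditing which pools and monitors an instance touches. Element and
    attribute names match the XML printed by _get_guest_xml; the function itself is
    illustrative.

        import xml.etree.ElementTree as ET

        def rbd_disks(domain_xml: str):
            root = ET.fromstring(domain_xml)
            for disk in root.findall("./devices/disk"):
                src = disk.find("source")
                if src is None or src.get("protocol") != "rbd":
                    continue
                yield {
                    "image": src.get("name"),                  # e.g. vms/<uuid>_disk
                    "monitors": [(h.get("name"), h.get("port"))
                                 for h in src.findall("host")],
                    "target": disk.find("target").get("dev"),  # vda / sda
                }
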
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.481 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.482 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.482 226109 INFO nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Using config drive
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.506 226109 DEBUG nova.storage.rbd_utils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.537 226109 DEBUG nova.objects.instance [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lazy-loading 'ec2_ids' on Instance uuid 503091c4-5f9e-4c5c-9061-e47a7635bd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.590 226109 DEBUG nova.objects.instance [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lazy-loading 'keypairs' on Instance uuid 503091c4-5f9e-4c5c-9061-e47a7635bd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.832 226109 INFO nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Creating config drive at /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.838 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu7x_wxyj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:08 compute-1 nova_compute[226101]: 2025-12-06 07:45:08.973 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu7x_wxyj" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.002 226109 DEBUG nova.storage.rbd_utils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] rbd image 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.005 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.164 226109 DEBUG oslo_concurrency.processutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config 503091c4-5f9e-4c5c-9061-e47a7635bd3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.165 226109 INFO nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Deleting local config drive /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e/disk.config because it was imported into RBD.
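
    [annotation] The three steps logged above (mkisofs, rbd import, delete local
    copy) are the whole config-drive publication flow. A compact sketch of the same
    sequence; the flags follow the logged CMDs (the -publisher string is omitted
    here), while paths and error handling are illustrative assumptions.

        import os
        import subprocess

        def publish_config_drive(uuid, srcdir, conf="/etc/ceph/ceph.conf"):
            iso = f"/var/lib/nova/instances/{uuid}/disk.config"
            # Build the ISO9660 config drive, volume label "config-2"
            subprocess.check_call(
                ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                 "-allow-multidot", "-l", "-quiet", "-J", "-r",
                 "-V", "config-2", srcdir])
            # Import it into the Ceph "vms" pool as <uuid>_disk.config
            subprocess.check_call(
                ["rbd", "import", "--pool", "vms", iso,
                 f"{uuid}_disk.config", "--image-format=2",
                 "--id", "openstack", "--conf", conf])
            os.unlink(iso)  # local copy is redundant once imported into RBD
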
Dec 06 07:45:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4026041978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2063318395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2063318395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:09 compute-1 ceph-mon[81689]: pgmap v2696: 305 pgs: 305 active+clean; 611 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.1 MiB/s wr, 185 op/s
Dec 06 07:45:09 compute-1 systemd-machined[190302]: New machine qemu-72-instance-0000009c.
Dec 06 07:45:09 compute-1 systemd[1]: Started Virtual Machine qemu-72-instance-0000009c.
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.623 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 503091c4-5f9e-4c5c-9061-e47a7635bd3e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.625 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007109.623081, 503091c4-5f9e-4c5c-9061-e47a7635bd3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.625 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] VM Resumed (Lifecycle Event)
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.628 226109 DEBUG nova.compute.manager [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.629 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.633 226109 INFO nova.virt.libvirt.driver [-] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Instance spawned successfully.
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.633 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.647 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.653 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.656 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.657 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.657 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.658 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.658 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.658 226109 DEBUG nova.virt.libvirt.driver [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.685 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.686 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007109.6239643, 503091c4-5f9e-4c5c-9061-e47a7635bd3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.686 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] VM Started (Lifecycle Event)
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.710 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.713 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.719 226109 DEBUG nova.compute.manager [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.730 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
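
    [annotation] The "Synchronizing instance power state ... / Skip." pairs above
    reflect a simple decision: on a lifecycle event, compare the hypervisor's power
    state with the DB record, but defer while a task (here rebuild_spawning) still
    owns the instance. A simplified illustration of that rule, not nova's manager
    code; state values follow the log (power_state 1 = RUNNING).

        RUNNING = 1

        def sync_power_state(db_power_state, vm_power_state, task_state):
            if task_state is not None:
                return "skip"          # a pending task owns the instance right now
            if db_power_state != vm_power_state:
                return "update_db"     # record what the hypervisor reports
            return "in_sync"

        assert sync_power_state(RUNNING, RUNNING, "rebuild_spawning") == "skip"
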
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.787 226109 DEBUG oslo_concurrency.lockutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.788 226109 DEBUG oslo_concurrency.lockutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.788 226109 DEBUG nova.objects.instance [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:45:09 compute-1 nova_compute[226101]: 2025-12-06 07:45:09.871 226109 DEBUG oslo_concurrency.lockutils [None req-68e537bb-ac0c-482b-8288-f5f334b2c748 ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
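
    [annotation] The acquire/release pairs above come from oslo.concurrency's
    lockutils. The usual pattern, sketched below, requires the oslo.concurrency
    package; the lock name matches the log, the function body is illustrative.
    Every caller of the same name serializes on one semaphore, which is why other
    entries in this log report a nonzero "waited ...s" before acquiring it.

        from oslo_concurrency import lockutils

        @lockutils.synchronized("compute_resources")
        def finish_evacuation():
            # Everything here runs with the "compute_resources" semaphore held.
            pass
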
Dec 06 07:45:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:10.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:10.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
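
    [annotation] The recurring anonymous "HEAD / HTTP/1.0" entries from radosgw are
    load-balancer style health probes against the beast frontend, arriving from the
    .100 and .102 peers roughly every two seconds. The same probe from Python's
    standard library; the port is an assumption, since the log does not show where
    beast listens.

        import http.client

        conn = http.client.HTTPConnection("192.168.122.102", 8080, timeout=2)
        conn.request("HEAD", "/")
        resp = conn.getresponse()
        print(resp.status)   # 200 when the gateway is healthy, as in the log
        conn.close()
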
Dec 06 07:45:11 compute-1 nova_compute[226101]: 2025-12-06 07:45:11.994 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:12.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:12.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:12 compute-1 ceph-mon[81689]: pgmap v2697: 305 pgs: 305 active+clean; 575 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 832 KiB/s rd, 7.0 MiB/s wr, 211 op/s
Dec 06 07:45:12 compute-1 nova_compute[226101]: 2025-12-06 07:45:12.580 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquiring lock "503091c4-5f9e-4c5c-9061-e47a7635bd3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:12 compute-1 nova_compute[226101]: 2025-12-06 07:45:12.580 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "503091c4-5f9e-4c5c-9061-e47a7635bd3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:12 compute-1 nova_compute[226101]: 2025-12-06 07:45:12.581 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquiring lock "503091c4-5f9e-4c5c-9061-e47a7635bd3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:12 compute-1 nova_compute[226101]: 2025-12-06 07:45:12.581 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "503091c4-5f9e-4c5c-9061-e47a7635bd3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:12 compute-1 nova_compute[226101]: 2025-12-06 07:45:12.581 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "503091c4-5f9e-4c5c-9061-e47a7635bd3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:12 compute-1 nova_compute[226101]: 2025-12-06 07:45:12.582 226109 INFO nova.compute.manager [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Terminating instance
Dec 06 07:45:12 compute-1 nova_compute[226101]: 2025-12-06 07:45:12.582 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquiring lock "refresh_cache-503091c4-5f9e-4c5c-9061-e47a7635bd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:45:12 compute-1 nova_compute[226101]: 2025-12-06 07:45:12.583 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquired lock "refresh_cache-503091c4-5f9e-4c5c-9061-e47a7635bd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:45:12 compute-1 nova_compute[226101]: 2025-12-06 07:45:12.583 226109 DEBUG nova.network.neutron [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:45:12 compute-1 nova_compute[226101]: 2025-12-06 07:45:12.815 226109 DEBUG nova.network.neutron [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:45:13 compute-1 nova_compute[226101]: 2025-12-06 07:45:13.005 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:13 compute-1 ceph-mon[81689]: pgmap v2698: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 7.5 MiB/s wr, 242 op/s
Dec 06 07:45:13 compute-1 nova_compute[226101]: 2025-12-06 07:45:13.974 226109 DEBUG nova.network.neutron [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:45:14 compute-1 nova_compute[226101]: 2025-12-06 07:45:13.999 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Releasing lock "refresh_cache-503091c4-5f9e-4c5c-9061-e47a7635bd3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:45:14 compute-1 nova_compute[226101]: 2025-12-06 07:45:14.000 226109 DEBUG nova.compute.manager [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:45:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:14.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:14.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:14 compute-1 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Dec 06 07:45:14 compute-1 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009c.scope: Consumed 4.871s CPU time.
Dec 06 07:45:14 compute-1 systemd-machined[190302]: Machine qemu-72-instance-0000009c terminated.
Dec 06 07:45:14 compute-1 nova_compute[226101]: 2025-12-06 07:45:14.223 226109 INFO nova.virt.libvirt.driver [-] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Instance destroyed successfully.
Dec 06 07:45:14 compute-1 nova_compute[226101]: 2025-12-06 07:45:14.224 226109 DEBUG nova.objects.instance [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lazy-loading 'resources' on Instance uuid 503091c4-5f9e-4c5c-9061-e47a7635bd3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1826853894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1910726417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1731268524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4129380359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:16 compute-1 ceph-mon[81689]: pgmap v2699: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.5 MiB/s wr, 294 op/s
Dec 06 07:45:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:16.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:16.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.347 226109 INFO nova.virt.libvirt.driver [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Deleting instance files /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e_del
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.347 226109 INFO nova.virt.libvirt.driver [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Deletion of /var/lib/nova/instances/503091c4-5f9e-4c5c-9061-e47a7635bd3e_del complete
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.458 226109 INFO nova.compute.manager [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Took 2.46 seconds to destroy the instance on the hypervisor.
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.459 226109 DEBUG oslo.service.loopingcall [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.459 226109 DEBUG nova.compute.manager [-] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.459 226109 DEBUG nova.network.neutron [-] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.618 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.619 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.619 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.619 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.619 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:16 compute-1 nova_compute[226101]: 2025-12-06 07:45:16.995 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:45:17 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4123349726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.063 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
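
    [annotation] The resource audit shells out to `ceph df --format=json` (CMD
    above) to learn cluster capacity; the free_disk figure reported further down is
    derived from it. A minimal parse of that output; "stats"/"total_avail_bytes"
    are standard ceph df JSON fields, but consider this illustrative rather than
    nova's exact accounting.

        import json
        import subprocess

        def cluster_free_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
            out = subprocess.check_output(
                ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
            stats = json.loads(out)["stats"]
            return stats["total_avail_bytes"] / 2**30
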
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.143 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.143 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:45:17 compute-1 ceph-mon[81689]: pgmap v2700: 305 pgs: 305 active+clean; 573 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.1 MiB/s wr, 196 op/s
Dec 06 07:45:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4123349726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.277 226109 DEBUG nova.network.neutron [-] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.295 226109 DEBUG nova.network.neutron [-] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.300 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.301 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4132MB free_disk=20.789100646972656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.301 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.301 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.315 226109 INFO nova.compute.manager [-] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Took 0.86 seconds to deallocate network for instance.
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.648 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.688 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 119a621b-6198-42db-9a89-d73da6c2a2da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.688 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 503091c4-5f9e-4c5c-9061-e47a7635bd3e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.689 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.689 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
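
    [annotation] Worked check on the "Final resource view" above: used_ram 768MB is
    consistent with the 512MB of reserved host memory (see the inventory lines
    below) plus two m1.nano guests at 128MB each, and used_vcpus 2 matches the two
    actively managed instances listed in the placement allocations. This is an
    interpretation of the logged figures, assuming the flavor sizes logged earlier.

        reserved_mb = 512
        guests = [128, 128]          # the two m1.nano instances still tracked
        assert reserved_mb + sum(guests) == 768
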
Dec 06 07:45:17 compute-1 nova_compute[226101]: 2025-12-06 07:45:17.808 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:18 compute-1 nova_compute[226101]: 2025-12-06 07:45:18.005 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:18.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:18.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:45:18 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/45640886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:18 compute-1 nova_compute[226101]: 2025-12-06 07:45:18.245 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:18 compute-1 nova_compute[226101]: 2025-12-06 07:45:18.250 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:45:18 compute-1 nova_compute[226101]: 2025-12-06 07:45:18.269 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:45:18 compute-1 nova_compute[226101]: 2025-12-06 07:45:18.293 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:45:18 compute-1 nova_compute[226101]: 2025-12-06 07:45:18.294 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:18 compute-1 nova_compute[226101]: 2025-12-06 07:45:18.294 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3556149852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:18 compute-1 nova_compute[226101]: 2025-12-06 07:45:18.493 226109 DEBUG oslo_concurrency.processutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:45:18 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3903286247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:18 compute-1 nova_compute[226101]: 2025-12-06 07:45:18.940 226109 DEBUG oslo_concurrency.processutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:18 compute-1 nova_compute[226101]: 2025-12-06 07:45:18.946 226109 DEBUG nova.compute.provider_tree [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:45:18 compute-1 nova_compute[226101]: 2025-12-06 07:45:18.968 226109 DEBUG nova.scheduler.client.report [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
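
    [annotation] The inventory payload above is what bounds the scheduler: for each
    resource class, placement exposes (total - reserved) * allocation_ratio of
    consumable capacity. Worked arithmetic on the logged data (illustrative, not
    placement's code):

        inventory = {
            "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
            "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
            "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
        }
        for rc, inv in inventory.items():
            cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
            print(rc, round(cap, 2))   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
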
Dec 06 07:45:19 compute-1 nova_compute[226101]: 2025-12-06 07:45:19.000 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:19 compute-1 nova_compute[226101]: 2025-12-06 07:45:19.065 226109 INFO nova.scheduler.client.report [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Deleted allocations for instance 503091c4-5f9e-4c5c-9061-e47a7635bd3e
Dec 06 07:45:19 compute-1 nova_compute[226101]: 2025-12-06 07:45:19.173 226109 DEBUG oslo_concurrency.lockutils [None req-c6fb4c29-079f-4306-9ba5-2d10d1278bbe ea0c2e89d39744778d7bbb99f8dc9934 c9ca41d51b00465db7923ccd90b0b1fe - - default default] Lock "503091c4-5f9e-4c5c-9061-e47a7635bd3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/45640886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3903286247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:19 compute-1 ceph-mon[81689]: pgmap v2701: 305 pgs: 305 active+clean; 566 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 179 op/s
Dec 06 07:45:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/965692880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:20.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:20.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:21 compute-1 nova_compute[226101]: 2025-12-06 07:45:21.295 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:21 compute-1 nova_compute[226101]: 2025-12-06 07:45:21.296 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:45:21 compute-1 nova_compute[226101]: 2025-12-06 07:45:21.296 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:45:21 compute-1 nova_compute[226101]: 2025-12-06 07:45:21.714 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:45:21 compute-1 nova_compute[226101]: 2025-12-06 07:45:21.714 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:45:21 compute-1 nova_compute[226101]: 2025-12-06 07:45:21.714 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:45:21 compute-1 nova_compute[226101]: 2025-12-06 07:45:21.715 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:21 compute-1 nova_compute[226101]: 2025-12-06 07:45:21.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:22.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:22.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:22 compute-1 ceph-mon[81689]: pgmap v2702: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 209 op/s
Dec 06 07:45:23 compute-1 nova_compute[226101]: 2025-12-06 07:45:23.006 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:23 compute-1 ceph-mon[81689]: pgmap v2703: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 591 KiB/s wr, 224 op/s
Dec 06 07:45:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:24.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:24.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e337 e337: 3 total, 3 up, 3 in
Dec 06 07:45:24 compute-1 nova_compute[226101]: 2025-12-06 07:45:24.901 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updating instance_info_cache with network_info: [{"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:45:24 compute-1 nova_compute[226101]: 2025-12-06 07:45:24.991 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:45:24 compute-1 nova_compute[226101]: 2025-12-06 07:45:24.991 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:45:24 compute-1 nova_compute[226101]: 2025-12-06 07:45:24.991 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:24 compute-1 nova_compute[226101]: 2025-12-06 07:45:24.992 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:45:25 compute-1 ceph-mon[81689]: osdmap e337: 3 total, 3 up, 3 in
Dec 06 07:45:25 compute-1 ceph-mon[81689]: pgmap v2705: 305 pgs: 305 active+clean; 556 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 502 KiB/s wr, 210 op/s
Dec 06 07:45:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e338 e338: 3 total, 3 up, 3 in
Dec 06 07:45:25 compute-1 nova_compute[226101]: 2025-12-06 07:45:25.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:26.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:26 compute-1 ceph-mon[81689]: osdmap e338: 3 total, 3 up, 3 in
Dec 06 07:45:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e339 e339: 3 total, 3 up, 3 in
Dec 06 07:45:26 compute-1 nova_compute[226101]: 2025-12-06 07:45:26.998 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:27 compute-1 nova_compute[226101]: 2025-12-06 07:45:27.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:27 compute-1 ceph-mon[81689]: osdmap e339: 3 total, 3 up, 3 in
Dec 06 07:45:27 compute-1 ceph-mon[81689]: pgmap v2708: 305 pgs: 305 active+clean; 566 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 1.1 MiB/s wr, 222 op/s
Dec 06 07:45:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/644100411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:28 compute-1 nova_compute[226101]: 2025-12-06 07:45:28.039 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:28.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:28 compute-1 sudo[285342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:28 compute-1 sudo[285342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:28 compute-1 sudo[285342]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:28 compute-1 sudo[285367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:45:28 compute-1 sudo[285367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:28 compute-1 sudo[285367]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:28 compute-1 sudo[285392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:28 compute-1 sudo[285392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:28 compute-1 sudo[285392]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:28 compute-1 sudo[285417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:45:28 compute-1 sudo[285417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:28 compute-1 nova_compute[226101]: 2025-12-06 07:45:28.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:28 compute-1 sudo[285417]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:29 compute-1 nova_compute[226101]: 2025-12-06 07:45:29.221 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007114.2204843, 503091c4-5f9e-4c5c-9061-e47a7635bd3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:45:29 compute-1 nova_compute[226101]: 2025-12-06 07:45:29.222 226109 INFO nova.compute.manager [-] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] VM Stopped (Lifecycle Event)
Dec 06 07:45:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3247020229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:29 compute-1 nova_compute[226101]: 2025-12-06 07:45:29.251 226109 DEBUG nova.compute.manager [None req-65c53d27-fa0a-43dc-af30-3a5e754c364a - - - - - -] [instance: 503091c4-5f9e-4c5c-9061-e47a7635bd3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:30.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:30.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:30 compute-1 nova_compute[226101]: 2025-12-06 07:45:30.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:45:30.599 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:45:30 compute-1 nova_compute[226101]: 2025-12-06 07:45:30.600 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:45:30.600 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:45:31 compute-1 ceph-mon[81689]: pgmap v2709: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.6 MiB/s wr, 169 op/s
Dec 06 07:45:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:45:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:45:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:45:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:45:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:45:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:45:31 compute-1 nova_compute[226101]: 2025-12-06 07:45:31.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:32 compute-1 nova_compute[226101]: 2025-12-06 07:45:32.005 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:32.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:32.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:32 compute-1 ceph-mon[81689]: pgmap v2710: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 104 op/s
Dec 06 07:45:32 compute-1 nova_compute[226101]: 2025-12-06 07:45:32.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:33 compute-1 nova_compute[226101]: 2025-12-06 07:45:33.086 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:33 compute-1 ceph-mon[81689]: pgmap v2711: 305 pgs: 305 active+clean; 603 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 95 op/s
Dec 06 07:45:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:34.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e340 e340: 3 total, 3 up, 3 in
Dec 06 07:45:35 compute-1 sudo[285472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:35 compute-1 sudo[285472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:35 compute-1 sudo[285472]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:35 compute-1 sudo[285497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:45:35 compute-1 sudo[285497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:35 compute-1 sudo[285497]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:35 compute-1 ceph-mon[81689]: osdmap e340: 3 total, 3 up, 3 in
Dec 06 07:45:35 compute-1 ceph-mon[81689]: pgmap v2713: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.9 MiB/s wr, 197 op/s
Dec 06 07:45:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:45:35 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:45:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:36.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:36.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:37 compute-1 nova_compute[226101]: 2025-12-06 07:45:37.057 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:37 compute-1 podman[285523]: 2025-12-06 07:45:37.092442314 +0000 UTC m=+0.074180238 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:45:37 compute-1 podman[285522]: 2025-12-06 07:45:37.099140383 +0000 UTC m=+0.080773934 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:45:37 compute-1 podman[285524]: 2025-12-06 07:45:37.128249027 +0000 UTC m=+0.104159474 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:45:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:45:37 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2783124359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:45:37 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2783124359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:38.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:38 compute-1 nova_compute[226101]: 2025-12-06 07:45:38.089 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:38 compute-1 ceph-mon[81689]: pgmap v2714: 305 pgs: 305 active+clean; 654 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.6 MiB/s wr, 189 op/s
Dec 06 07:45:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2783124359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2783124359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:38.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:45:39.602 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:40.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:40.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:40 compute-1 ceph-mon[81689]: pgmap v2715: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 653 KiB/s rd, 5.1 MiB/s wr, 167 op/s
Dec 06 07:45:41 compute-1 ceph-mon[81689]: pgmap v2716: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 651 KiB/s rd, 5.1 MiB/s wr, 169 op/s
Dec 06 07:45:42 compute-1 nova_compute[226101]: 2025-12-06 07:45:42.060 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:42.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:42.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2137929200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:43 compute-1 nova_compute[226101]: 2025-12-06 07:45:43.105 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/225474099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:43 compute-1 ceph-mon[81689]: pgmap v2717: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 647 KiB/s rd, 4.4 MiB/s wr, 169 op/s
Dec 06 07:45:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2239859850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:44.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:44.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2902135661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:46 compute-1 ceph-mon[81689]: pgmap v2718: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 502 KiB/s rd, 3.6 MiB/s wr, 144 op/s
Dec 06 07:45:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:46.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:46.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:47 compute-1 nova_compute[226101]: 2025-12-06 07:45:47.063 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:47 compute-1 nova_compute[226101]: 2025-12-06 07:45:47.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:48 compute-1 ceph-mon[81689]: pgmap v2719: 305 pgs: 305 active+clean; 660 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 153 KiB/s rd, 897 KiB/s wr, 69 op/s
Dec 06 07:45:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:48.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:48.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:48 compute-1 nova_compute[226101]: 2025-12-06 07:45:48.159 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:49 compute-1 ceph-mon[81689]: pgmap v2720: 305 pgs: 305 active+clean; 660 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 85 KiB/s wr, 75 op/s
Dec 06 07:45:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:50.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:50.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:52 compute-1 nova_compute[226101]: 2025-12-06 07:45:52.068 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-1 ceph-mon[81689]: pgmap v2721: 305 pgs: 305 active+clean; 625 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 44 KiB/s wr, 101 op/s
Dec 06 07:45:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:52.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:52.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2982845957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:53 compute-1 nova_compute[226101]: 2025-12-06 07:45:53.161 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:54.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:54.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:54 compute-1 ceph-mon[81689]: pgmap v2722: 305 pgs: 305 active+clean; 597 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 30 KiB/s wr, 111 op/s
Dec 06 07:45:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3888879543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3888879543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:55 compute-1 ceph-mon[81689]: pgmap v2723: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 30 KiB/s wr, 201 op/s
Dec 06 07:45:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4244794580' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4244794580' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:56.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:45:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:56.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:45:57 compute-1 nova_compute[226101]: 2025-12-06 07:45:57.072 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:57 compute-1 ceph-mon[81689]: pgmap v2724: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 17 KiB/s wr, 217 op/s
Dec 06 07:45:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:58.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:45:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:58.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:58 compute-1 nova_compute[226101]: 2025-12-06 07:45:58.201 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2381368513' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2381368513' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:45:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2197277755' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:45:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2197277755' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2197277755' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2197277755' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:59 compute-1 ceph-mon[81689]: pgmap v2725: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.7 KiB/s wr, 231 op/s
Dec 06 07:46:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:46:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:00.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:46:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:00.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:00 compute-1 ovn_controller[130279]: 2025-12-06T07:46:00Z|00613|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:46:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1020618160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:46:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1020618160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:46:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:01.665 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:01.665 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:01.666 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:02 compute-1 nova_compute[226101]: 2025-12-06 07:46:02.076 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:02.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:02.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:02 compute-1 ceph-mon[81689]: pgmap v2726: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.8 KiB/s wr, 215 op/s
Dec 06 07:46:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:03 compute-1 nova_compute[226101]: 2025-12-06 07:46:03.203 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:46:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4208224397' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:46:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:46:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4208224397' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:46:03 compute-1 ceph-mon[81689]: pgmap v2727: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.3 KiB/s wr, 192 op/s
Dec 06 07:46:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4208224397' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:46:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4208224397' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:46:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2049582650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:04.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:04.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:05 compute-1 ceph-mon[81689]: pgmap v2728: 305 pgs: 305 active+clean; 600 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 995 KiB/s wr, 223 op/s
Dec 06 07:46:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:06.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:06.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:07 compute-1 nova_compute[226101]: 2025-12-06 07:46:07.080 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:08 compute-1 podman[285584]: 2025-12-06 07:46:08.07840385 +0000 UTC m=+0.044560541 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:46:08 compute-1 podman[285583]: 2025-12-06 07:46:08.115826317 +0000 UTC m=+0.084807864 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:46:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:08.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:08.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:08 compute-1 podman[285585]: 2025-12-06 07:46:08.168905186 +0000 UTC m=+0.129443265 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:46:08 compute-1 nova_compute[226101]: 2025-12-06 07:46:08.204 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:08 compute-1 ceph-mon[81689]: pgmap v2729: 305 pgs: 305 active+clean; 613 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 130 op/s
Dec 06 07:46:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/750190271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2971671356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1956200466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:46:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1956200466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:46:09 compute-1 ceph-mon[81689]: pgmap v2730: 305 pgs: 305 active+clean; 626 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 824 KiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 06 07:46:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:10.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:10.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:11 compute-1 ceph-mon[81689]: pgmap v2731: 305 pgs: 305 active+clean; 626 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 504 KiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 06 07:46:11 compute-1 sshd-session[285648]: Connection reset by authenticating user root 45.135.232.92 port 26758 [preauth]
Dec 06 07:46:12 compute-1 nova_compute[226101]: 2025-12-06 07:46:12.083 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:12.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:12.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:13 compute-1 nova_compute[226101]: 2025-12-06 07:46:13.207 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:13 compute-1 ceph-mon[81689]: pgmap v2732: 305 pgs: 305 active+clean; 626 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 578 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Dec 06 07:46:13 compute-1 sshd-session[285650]: Connection reset by authenticating user root 45.135.232.92 port 26780 [preauth]
Dec 06 07:46:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:14.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:14.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:16 compute-1 sshd-session[285652]: Invalid user informix from 45.135.232.92 port 26794
Dec 06 07:46:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:16.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:16 compute-1 NetworkManager[49031]: <info>  [1765007176.1617] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Dec 06 07:46:16 compute-1 nova_compute[226101]: 2025-12-06 07:46:16.161 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:16 compute-1 NetworkManager[49031]: <info>  [1765007176.1630] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Dec 06 07:46:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:16.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:16 compute-1 nova_compute[226101]: 2025-12-06 07:46:16.317 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:16 compute-1 ovn_controller[130279]: 2025-12-06T07:46:16Z|00614|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:46:16 compute-1 nova_compute[226101]: 2025-12-06 07:46:16.332 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:16 compute-1 sshd-session[285652]: Connection reset by invalid user informix 45.135.232.92 port 26794 [preauth]
Dec 06 07:46:17 compute-1 ceph-mon[81689]: pgmap v2733: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 07:46:17 compute-1 nova_compute[226101]: 2025-12-06 07:46:17.123 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:18.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:18.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:18 compute-1 nova_compute[226101]: 2025-12-06 07:46:18.209 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:18 compute-1 sshd-session[285655]: Connection reset by authenticating user root 45.135.232.92 port 51116 [preauth]
Dec 06 07:46:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:18 compute-1 ceph-mon[81689]: pgmap v2734: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 864 KiB/s wr, 114 op/s
Dec 06 07:46:18 compute-1 nova_compute[226101]: 2025-12-06 07:46:18.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:18 compute-1 nova_compute[226101]: 2025-12-06 07:46:18.735 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:18 compute-1 nova_compute[226101]: 2025-12-06 07:46:18.736 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:18 compute-1 nova_compute[226101]: 2025-12-06 07:46:18.736 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:18 compute-1 nova_compute[226101]: 2025-12-06 07:46:18.736 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:46:18 compute-1 nova_compute[226101]: 2025-12-06 07:46:18.737 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:46:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:46:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1600270492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:19 compute-1 nova_compute[226101]: 2025-12-06 07:46:19.219 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:46:19 compute-1 nova_compute[226101]: 2025-12-06 07:46:19.283 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:46:19 compute-1 nova_compute[226101]: 2025-12-06 07:46:19.283 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:46:19 compute-1 nova_compute[226101]: 2025-12-06 07:46:19.422 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:46:19 compute-1 nova_compute[226101]: 2025-12-06 07:46:19.423 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4173MB free_disk=20.78506851196289GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:46:19 compute-1 nova_compute[226101]: 2025-12-06 07:46:19.424 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:19 compute-1 nova_compute[226101]: 2025-12-06 07:46:19.424 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:19 compute-1 nova_compute[226101]: 2025-12-06 07:46:19.518 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 119a621b-6198-42db-9a89-d73da6c2a2da actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:46:19 compute-1 nova_compute[226101]: 2025-12-06 07:46:19.518 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:46:19 compute-1 nova_compute[226101]: 2025-12-06 07:46:19.519 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:46:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e341 e341: 3 total, 3 up, 3 in
Dec 06 07:46:19 compute-1 ceph-mon[81689]: pgmap v2735: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 606 KiB/s wr, 121 op/s
Dec 06 07:46:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1600270492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4196588490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:19 compute-1 nova_compute[226101]: 2025-12-06 07:46:19.577 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:46:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:46:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1062430692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:20 compute-1 nova_compute[226101]: 2025-12-06 07:46:20.018 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:46:20 compute-1 nova_compute[226101]: 2025-12-06 07:46:20.024 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:46:20 compute-1 nova_compute[226101]: 2025-12-06 07:46:20.046 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:46:20 compute-1 nova_compute[226101]: 2025-12-06 07:46:20.090 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:46:20 compute-1 nova_compute[226101]: 2025-12-06 07:46:20.090 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:20.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:20.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:20 compute-1 ceph-mon[81689]: osdmap e341: 3 total, 3 up, 3 in
Dec 06 07:46:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1062430692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1632847009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:21 compute-1 nova_compute[226101]: 2025-12-06 07:46:21.476 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:21 compute-1 ceph-mon[81689]: pgmap v2737: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 38 KiB/s wr, 89 op/s
Dec 06 07:46:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e342 e342: 3 total, 3 up, 3 in
Dec 06 07:46:22 compute-1 nova_compute[226101]: 2025-12-06 07:46:22.091 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:22 compute-1 nova_compute[226101]: 2025-12-06 07:46:22.092 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:46:22 compute-1 nova_compute[226101]: 2025-12-06 07:46:22.092 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:46:22 compute-1 nova_compute[226101]: 2025-12-06 07:46:22.126 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:22.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:22.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:22 compute-1 nova_compute[226101]: 2025-12-06 07:46:22.330 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:46:22 compute-1 nova_compute[226101]: 2025-12-06 07:46:22.330 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:46:22 compute-1 nova_compute[226101]: 2025-12-06 07:46:22.330 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:46:22 compute-1 nova_compute[226101]: 2025-12-06 07:46:22.330 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:46:22 compute-1 ceph-mon[81689]: osdmap e342: 3 total, 3 up, 3 in
Dec 06 07:46:23 compute-1 nova_compute[226101]: 2025-12-06 07:46:23.261 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:23 compute-1 ceph-mon[81689]: pgmap v2739: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 46 op/s
Dec 06 07:46:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3024054679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:46:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3024054679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:46:23 compute-1 sshd-session[285701]: Connection reset by authenticating user root 45.135.232.92 port 51158 [preauth]
Dec 06 07:46:23 compute-1 nova_compute[226101]: 2025-12-06 07:46:23.975 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updating instance_info_cache with network_info: [{"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:46:23 compute-1 nova_compute[226101]: 2025-12-06 07:46:23.997 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:46:23 compute-1 nova_compute[226101]: 2025-12-06 07:46:23.998 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:46:23 compute-1 nova_compute[226101]: 2025-12-06 07:46:23.998 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:23 compute-1 nova_compute[226101]: 2025-12-06 07:46:23.998 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:46:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:24.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:24.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:24 compute-1 nova_compute[226101]: 2025-12-06 07:46:24.488 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e343 e343: 3 total, 3 up, 3 in
Dec 06 07:46:25 compute-1 sshd-session[285703]: banner exchange: Connection from 184.105.139.69 port 20252: invalid format
Dec 06 07:46:25 compute-1 nova_compute[226101]: 2025-12-06 07:46:25.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:25 compute-1 ceph-mon[81689]: osdmap e343: 3 total, 3 up, 3 in
Dec 06 07:46:25 compute-1 ceph-mon[81689]: pgmap v2741: 305 pgs: 305 active+clean; 649 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 109 op/s
Dec 06 07:46:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e344 e344: 3 total, 3 up, 3 in
Dec 06 07:46:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:26.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:26.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:26 compute-1 ceph-mon[81689]: osdmap e344: 3 total, 3 up, 3 in
Dec 06 07:46:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e345 e345: 3 total, 3 up, 3 in
Dec 06 07:46:27 compute-1 nova_compute[226101]: 2025-12-06 07:46:27.180 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:46:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:28.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:46:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:46:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:28.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:46:28 compute-1 ceph-mon[81689]: pgmap v2743: 305 pgs: 305 active+clean; 690 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.0 MiB/s wr, 178 op/s
Dec 06 07:46:28 compute-1 ceph-mon[81689]: osdmap e345: 3 total, 3 up, 3 in
Dec 06 07:46:28 compute-1 nova_compute[226101]: 2025-12-06 07:46:28.314 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:28 compute-1 nova_compute[226101]: 2025-12-06 07:46:28.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:29 compute-1 ceph-mon[81689]: pgmap v2745: 305 pgs: 305 active+clean; 725 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 11 MiB/s wr, 278 op/s
Dec 06 07:46:29 compute-1 nova_compute[226101]: 2025-12-06 07:46:29.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e346 e346: 3 total, 3 up, 3 in
Dec 06 07:46:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:30.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:46:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:30.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:46:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2389771305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:30 compute-1 ceph-mon[81689]: osdmap e346: 3 total, 3 up, 3 in
Dec 06 07:46:30 compute-1 nova_compute[226101]: 2025-12-06 07:46:30.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/347993965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:31 compute-1 ceph-mon[81689]: pgmap v2747: 305 pgs: 305 active+clean; 735 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 9.5 MiB/s wr, 245 op/s
Dec 06 07:46:31 compute-1 nova_compute[226101]: 2025-12-06 07:46:31.584 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "29195b63-c365-4ace-a4f5-9c2dba89c276" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:31 compute-1 nova_compute[226101]: 2025-12-06 07:46:31.584 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:31 compute-1 nova_compute[226101]: 2025-12-06 07:46:31.598 226109 DEBUG nova.compute.manager [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:46:31 compute-1 nova_compute[226101]: 2025-12-06 07:46:31.677 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:31 compute-1 nova_compute[226101]: 2025-12-06 07:46:31.678 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:31 compute-1 nova_compute[226101]: 2025-12-06 07:46:31.685 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:46:31 compute-1 nova_compute[226101]: 2025-12-06 07:46:31.685 226109 INFO nova.compute.claims [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:46:31 compute-1 nova_compute[226101]: 2025-12-06 07:46:31.820 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:46:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:32.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.184 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:32.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:46:32 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1354848071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.323 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.328 226109 DEBUG nova.compute.provider_tree [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.350 226109 DEBUG nova.scheduler.client.report [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.376 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.377 226109 DEBUG nova.compute.manager [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.422 226109 DEBUG nova.compute.manager [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.422 226109 DEBUG nova.network.neutron [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.452 226109 INFO nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.469 226109 DEBUG nova.compute.manager [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:46:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1354848071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.571 226109 DEBUG nova.compute.manager [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.572 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.572 226109 INFO nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Creating image(s)
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.597 226109 DEBUG nova.storage.rbd_utils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image 29195b63-c365-4ace-a4f5-9c2dba89c276_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.626 226109 DEBUG nova.storage.rbd_utils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image 29195b63-c365-4ace-a4f5-9c2dba89c276_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.649 226109 DEBUG nova.storage.rbd_utils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image 29195b63-c365-4ace-a4f5-9c2dba89c276_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.652 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.676 226109 DEBUG nova.policy [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '89d63d29c7534f70817e13d23cada716', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f093eaeb91c042dd8c85f5cd256c4394', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.680 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.713 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.713 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.714 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.714 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.740 226109 DEBUG nova.storage.rbd_utils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image 29195b63-c365-4ace-a4f5-9c2dba89c276_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:46:32 compute-1 nova_compute[226101]: 2025-12-06 07:46:32.743 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 29195b63-c365-4ace-a4f5-9c2dba89c276_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.350 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.375 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 29195b63-c365-4ace-a4f5-9c2dba89c276_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.448 226109 DEBUG nova.storage.rbd_utils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] resizing rbd image 29195b63-c365-4ace-a4f5-9c2dba89c276_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
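The rbd import above is a plain CLI invocation, and the resize that follows brings the image up to the flavor's 1 GiB root disk. The resize step maps directly onto the Ceph Python bindings; a sketch, with the pool, image name, client id and size taken from the log lines:

import rados
import rbd

with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
    with cluster.open_ioctx('vms') as ioctx:
        with rbd.Image(ioctx, '29195b63-c365-4ace-a4f5-9c2dba89c276_disk') as image:
            image.resize(1073741824)  # 1 GiB, matching flavor root_gb=1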
Dec 06 07:46:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:33 compute-1 ceph-mon[81689]: pgmap v2748: 305 pgs: 305 active+clean; 740 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.7 MiB/s wr, 210 op/s
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.562 226109 DEBUG nova.objects.instance [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lazy-loading 'migration_context' on Instance uuid 29195b63-c365-4ace-a4f5-9c2dba89c276 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.613 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.614 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Ensure instance console log exists: /var/lib/nova/instances/29195b63-c365-4ace-a4f5-9c2dba89c276/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.614 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.615 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:33.616 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.616 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.617 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:33.617 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:46:33 compute-1 nova_compute[226101]: 2025-12-06 07:46:33.956 226109 DEBUG nova.network.neutron [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Successfully created port: 1a8069ff-0103-4f67-92ab-269451290b42 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
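The port that Nova reports as successfully created above is made with a minimal attribute set (_create_port_minimal in the logged path). A standalone equivalent through openstacksdk, using the network and project IDs that appear in the surrounding records; credential resolution via clouds.yaml or environment variables is an assumption of the sketch:

import openstack

conn = openstack.connect()  # cloud credentials assumed to be configured
port = conn.network.create_port(
    network_id='9edf259b-6a5e-4e11-938d-d631a412648e',
    project_id='f093eaeb91c042dd8c85f5cd256c4394')
print(port.id)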
Dec 06 07:46:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:34.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:34.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/462057177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e347 e347: 3 total, 3 up, 3 in
Dec 06 07:46:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e348 e348: 3 total, 3 up, 3 in
Dec 06 07:46:34 compute-1 nova_compute[226101]: 2025-12-06 07:46:34.971 226109 DEBUG nova.network.neutron [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Successfully updated port: 1a8069ff-0103-4f67-92ab-269451290b42 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:46:34 compute-1 nova_compute[226101]: 2025-12-06 07:46:34.989 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:46:34 compute-1 nova_compute[226101]: 2025-12-06 07:46:34.989 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquired lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:46:34 compute-1 nova_compute[226101]: 2025-12-06 07:46:34.989 226109 DEBUG nova.network.neutron [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:46:35 compute-1 nova_compute[226101]: 2025-12-06 07:46:35.116 226109 DEBUG nova.compute.manager [req-b5990fb0-c00b-446d-bd8f-cfb694c2f9bf req-4274c0f7-1717-4522-a5a1-85c4e0195c09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Received event network-changed-1a8069ff-0103-4f67-92ab-269451290b42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:46:35 compute-1 nova_compute[226101]: 2025-12-06 07:46:35.117 226109 DEBUG nova.compute.manager [req-b5990fb0-c00b-446d-bd8f-cfb694c2f9bf req-4274c0f7-1717-4522-a5a1-85c4e0195c09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Refreshing instance network info cache due to event network-changed-1a8069ff-0103-4f67-92ab-269451290b42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:46:35 compute-1 nova_compute[226101]: 2025-12-06 07:46:35.117 226109 DEBUG oslo_concurrency.lockutils [req-b5990fb0-c00b-446d-bd8f-cfb694c2f9bf req-4274c0f7-1717-4522-a5a1-85c4e0195c09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:46:35 compute-1 nova_compute[226101]: 2025-12-06 07:46:35.348 226109 DEBUG nova.network.neutron [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:46:35 compute-1 ceph-mon[81689]: osdmap e347: 3 total, 3 up, 3 in
Dec 06 07:46:35 compute-1 ceph-mon[81689]: osdmap e348: 3 total, 3 up, 3 in
Dec 06 07:46:35 compute-1 ceph-mon[81689]: pgmap v2751: 305 pgs: 305 active+clean; 733 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 191 op/s
Dec 06 07:46:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e349 e349: 3 total, 3 up, 3 in
Dec 06 07:46:35 compute-1 sudo[285893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:35 compute-1 sudo[285893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:35 compute-1 sudo[285893]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:35 compute-1 sudo[285918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:46:35 compute-1 sudo[285918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:35 compute-1 sudo[285918]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:36 compute-1 sudo[285943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:36 compute-1 sudo[285943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:36 compute-1 sudo[285943]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:36 compute-1 sudo[285968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:46:36 compute-1 sudo[285968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:36.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:46:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:36.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.371 226109 DEBUG nova.network.neutron [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Updating instance_info_cache with network_info: [{"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.391 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Releasing lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.391 226109 DEBUG nova.compute.manager [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Instance network_info: |[{"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.391 226109 DEBUG oslo_concurrency.lockutils [req-b5990fb0-c00b-446d-bd8f-cfb694c2f9bf req-4274c0f7-1717-4522-a5a1-85c4e0195c09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.392 226109 DEBUG nova.network.neutron [req-b5990fb0-c00b-446d-bd8f-cfb694c2f9bf req-4274c0f7-1717-4522-a5a1-85c4e0195c09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Refreshing network info cache for port 1a8069ff-0103-4f67-92ab-269451290b42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.394 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Start _get_guest_xml network_info=[{"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.399 226109 WARNING nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.403 226109 DEBUG nova.virt.libvirt.host [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.403 226109 DEBUG nova.virt.libvirt.host [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.409 226109 DEBUG nova.virt.libvirt.host [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.410 226109 DEBUG nova.virt.libvirt.host [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.411 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.411 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.411 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.411 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.412 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.412 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.412 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.412 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.412 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.413 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.413 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.413 226109 DEBUG nova.virt.hardware [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
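The hardware.py lines above walk the CPU topology search: flavor and image supply no constraints (all preferences 0:0:0, limits defaulted to 65536), so every factorization of the vCPU count into sockets, cores and threads qualifies, and for a single vCPU that leaves exactly 1:1:1. An illustrative re-derivation, not Nova's exact code:

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    # Every (sockets, cores, threads) factorization of vcpus within limits.
    return [(s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus]

print(possible_topologies(1))  # [(1, 1, 1)], as logged above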
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.415 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:46:36 compute-1 sudo[285968]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:36 compute-1 nova_compute[226101]: 2025-12-06 07:46:36.736 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:36 compute-1 ceph-mon[81689]: osdmap e349: 3 total, 3 up, 3 in
Dec 06 07:46:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:46:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:46:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:46:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:46:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:46:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:46:36 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:46:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e350 e350: 3 total, 3 up, 3 in
Dec 06 07:46:37 compute-1 nova_compute[226101]: 2025-12-06 07:46:37.220 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:46:37 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1061802510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:37 compute-1 nova_compute[226101]: 2025-12-06 07:46:37.730 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
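The mon dump fetched above is what supplies the monitor <host> entries in the disk XML generated further down. A sketch of extracting them, assuming the usual mon dump JSON layout in which each entry of the mons list carries a public_addr such as 192.168.122.100:6789/0:

import json
import subprocess

dump = json.loads(subprocess.check_output(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']))
# '192.168.122.100:6789/0' -> ['192.168.122.100', '6789']
hosts = [mon['public_addr'].split('/')[0].rsplit(':', 1)
         for mon in dump['mons']]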
Dec 06 07:46:37 compute-1 nova_compute[226101]: 2025-12-06 07:46:37.759 226109 DEBUG nova.storage.rbd_utils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image 29195b63-c365-4ace-a4f5-9c2dba89c276_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:46:37 compute-1 nova_compute[226101]: 2025-12-06 07:46:37.765 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:46:37 compute-1 nova_compute[226101]: 2025-12-06 07:46:37.926 226109 DEBUG nova.network.neutron [req-b5990fb0-c00b-446d-bd8f-cfb694c2f9bf req-4274c0f7-1717-4522-a5a1-85c4e0195c09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Updated VIF entry in instance network info cache for port 1a8069ff-0103-4f67-92ab-269451290b42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:46:37 compute-1 nova_compute[226101]: 2025-12-06 07:46:37.927 226109 DEBUG nova.network.neutron [req-b5990fb0-c00b-446d-bd8f-cfb694c2f9bf req-4274c0f7-1717-4522-a5a1-85c4e0195c09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Updating instance_info_cache with network_info: [{"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:46:37 compute-1 nova_compute[226101]: 2025-12-06 07:46:37.948 226109 DEBUG oslo_concurrency.lockutils [req-b5990fb0-c00b-446d-bd8f-cfb694c2f9bf req-4274c0f7-1717-4522-a5a1-85c4e0195c09 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:46:38 compute-1 ceph-mon[81689]: osdmap e350: 3 total, 3 up, 3 in
Dec 06 07:46:38 compute-1 ceph-mon[81689]: pgmap v2754: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 730 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 10 MiB/s wr, 262 op/s
Dec 06 07:46:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/540055131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1061802510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:46:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:38.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:46:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:46:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4058190283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:38.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:38 compute-1 nova_compute[226101]: 2025-12-06 07:46:38.353 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:39 compute-1 podman[286086]: 2025-12-06 07:46:39.072337004 +0000 UTC m=+0.056685136 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 07:46:39 compute-1 podman[286085]: 2025-12-06 07:46:39.075200111 +0000 UTC m=+0.059627155 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:46:39 compute-1 podman[286087]: 2025-12-06 07:46:39.102055345 +0000 UTC m=+0.078260268 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.404 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.405 226109 DEBUG nova.virt.libvirt.vif [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:46:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-440288893',display_name='tempest-TestSnapshotPattern-server-440288893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-440288893',id=160,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxDMv0Vhbgr4L65QJ5+X+b7zbDfxyD9+qYaGNf4b7W3f9yi+P//RoKkMpyvVNIPGPzRh0H8TZRtNdilAq90sFwxv4/Dk5avudO2cObIlP9Igfm6SfNSZd6YTMkk3vYjjg==',key_name='tempest-TestSnapshotPattern-467820612',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f093eaeb91c042dd8c85f5cd256c4394',ramdisk_id='',reservation_id='r-p0ehmmk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-563672408',owner_user_name='tempest-TestSnapshotPattern-563672408-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:46:32Z,user_data=None,user_id='89d63d29c7534f70817e13d23cada716',uuid=29195b63-c365-4ace-a4f5-9c2dba89c276,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.406 226109 DEBUG nova.network.os_vif_util [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converting VIF {"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.406 226109 DEBUG nova.network.os_vif_util [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=1a8069ff-0103-4f67-92ab-269451290b42,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a8069ff-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
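Once converted to the VIFOpenVSwitch object logged above, plugging goes through os-vif's public API rather than Nova itself. A minimal sketch with field values copied from the logged repr; the network and port_profile objects are omitted for brevity, so a real plug would need those populated as well:

import os_vif
from os_vif.objects.instance_info import InstanceInfo
from os_vif.objects.vif import VIFOpenVSwitch

os_vif.initialize()
vif = VIFOpenVSwitch(id='1a8069ff-0103-4f67-92ab-269451290b42',
                     address='fa:16:3e:33:7a:98', plugin='ovs',
                     bridge_name='br-int', vif_name='tap1a8069ff-01')
info = InstanceInfo(uuid='29195b63-c365-4ace-a4f5-9c2dba89c276',
                    name='instance-000000a0')
os_vif.plug(vif, info)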
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.407 226109 DEBUG nova.objects.instance [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29195b63-c365-4ace-a4f5-9c2dba89c276 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:46:39 compute-1 ceph-mon[81689]: pgmap v2755: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 783 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 14 MiB/s wr, 289 op/s
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.427 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:46:39 compute-1 nova_compute[226101]:   <uuid>29195b63-c365-4ace-a4f5-9c2dba89c276</uuid>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   <name>instance-000000a0</name>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <nova:name>tempest-TestSnapshotPattern-server-440288893</nova:name>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:46:36</nova:creationTime>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <nova:user uuid="89d63d29c7534f70817e13d23cada716">tempest-TestSnapshotPattern-563672408-project-member</nova:user>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <nova:project uuid="f093eaeb91c042dd8c85f5cd256c4394">tempest-TestSnapshotPattern-563672408</nova:project>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <nova:port uuid="1a8069ff-0103-4f67-92ab-269451290b42">
Dec 06 07:46:39 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <system>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <entry name="serial">29195b63-c365-4ace-a4f5-9c2dba89c276</entry>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <entry name="uuid">29195b63-c365-4ace-a4f5-9c2dba89c276</entry>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     </system>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   <os>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   </os>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   <features>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   </features>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/29195b63-c365-4ace-a4f5-9c2dba89c276_disk">
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       </source>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/29195b63-c365-4ace-a4f5-9c2dba89c276_disk.config">
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       </source>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:46:39 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:33:7a:98"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <target dev="tap1a8069ff-01"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/29195b63-c365-4ace-a4f5-9c2dba89c276/console.log" append="off"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <video>
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     </video>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:46:39 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:46:39 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:46:39 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:46:39 compute-1 nova_compute[226101]: </domain>
Dec 06 07:46:39 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
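The block above is the complete guest definition nova's libvirt driver logged from _get_guest_xml before defining the domain. As a minimal sketch (not part of the log), the same XML can be read back from libvirt once the guest exists; it assumes the libvirt-python bindings are installed on the compute node and infers the domain name instance-000000a0 from the systemd-machined line further below (both are assumptions for illustration):

    import libvirt  # libvirt-python bindings, assumed installed on the compute node

    conn = libvirt.open('qemu:///system')         # local system hypervisor connection
    dom = conn.lookupByName('instance-000000a0')  # assumed name, per systemd-machined below
    print(dom.XMLDesc(0))                         # live XML, equivalent to the dump above
    conn.close()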
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.428 226109 DEBUG nova.compute.manager [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Preparing to wait for external event network-vif-plugged-1a8069ff-0103-4f67-92ab-269451290b42 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.429 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.429 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.429 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.430 226109 DEBUG nova.virt.libvirt.vif [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:46:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-440288893',display_name='tempest-TestSnapshotPattern-server-440288893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-440288893',id=160,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxDMv0Vhbgr4L65QJ5+X+b7zbDfxyD9+qYaGNf4b7W3f9yi+P//RoKkMpyvVNIPGPzRh0H8TZRtNdilAq90sFwxv4/Dk5avudO2cObIlP9Igfm6SfNSZd6YTMkk3vYjjg==',key_name='tempest-TestSnapshotPattern-467820612',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f093eaeb91c042dd8c85f5cd256c4394',ramdisk_id='',reservation_id='r-p0ehmmk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-563672408',owner_user_name='tempest-TestSnapshotPattern-563672408-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:46:32Z,user_data=None,user_id='89d63d29c7534f70817e13d23cada716',uuid=29195b63-c365-4ace-a4f5-9c2dba89c276,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.430 226109 DEBUG nova.network.os_vif_util [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converting VIF {"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.431 226109 DEBUG nova.network.os_vif_util [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=1a8069ff-0103-4f67-92ab-269451290b42,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a8069ff-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.431 226109 DEBUG os_vif [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=1a8069ff-0103-4f67-92ab-269451290b42,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a8069ff-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.432 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.432 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.432 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.435 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.436 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a8069ff-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.436 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a8069ff-01, col_values=(('external_ids', {'iface-id': '1a8069ff-0103-4f67-92ab-269451290b42', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:7a:98', 'vm-uuid': '29195b63-c365-4ace-a4f5-9c2dba89c276'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.476 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:39 compute-1 NetworkManager[49031]: <info>  [1765007199.4767] manager: (tap1a8069ff-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.479 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.483 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.483 226109 INFO os_vif [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=1a8069ff-0103-4f67-92ab-269451290b42,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a8069ff-01')
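The plug above is two ovsdbapp transactions: an idempotent AddBridgeCommand for br-int (which caused no change), then AddPortCommand plus a DbSetCommand writing the Interface external_ids that ovn-controller matches to claim the port. A minimal equivalent sketch using the ovs-vsctl CLI, with the values taken from the transaction logged above (the subprocess wrapper itself is illustrative, not how os-vif does it):

    import subprocess

    # Idempotently add the tap device to br-int and set the external_ids that
    # ovn-controller uses to bind logical port 1a8069ff-... to this chassis.
    subprocess.run([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap1a8069ff-01',
        '--', 'set', 'Interface', 'tap1a8069ff-01',
        'external_ids:iface-id=1a8069ff-0103-4f67-92ab-269451290b42',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:33:7a:98',
        'external_ids:vm-uuid=29195b63-c365-4ace-a4f5-9c2dba89c276',
    ], check=True)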
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.535 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.535 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.535 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] No VIF found with MAC fa:16:3e:33:7a:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.536 226109 INFO nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Using config drive
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.558 226109 DEBUG nova.storage.rbd_utils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image 29195b63-c365-4ace-a4f5-9c2dba89c276_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:46:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:39.619 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.932 226109 INFO nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Creating config drive at /var/lib/nova/instances/29195b63-c365-4ace-a4f5-9c2dba89c276/disk.config
Dec 06 07:46:39 compute-1 nova_compute[226101]: 2025-12-06 07:46:39.937 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29195b63-c365-4ace-a4f5-9c2dba89c276/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2tjh2csw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.066 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29195b63-c365-4ace-a4f5-9c2dba89c276/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2tjh2csw" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.094 226109 DEBUG nova.storage.rbd_utils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image 29195b63-c365-4ace-a4f5-9c2dba89c276_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.098 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29195b63-c365-4ace-a4f5-9c2dba89c276/disk.config 29195b63-c365-4ace-a4f5-9c2dba89c276_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.141 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:40.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:40.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.296 226109 DEBUG oslo_concurrency.processutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29195b63-c365-4ace-a4f5-9c2dba89c276/disk.config 29195b63-c365-4ace-a4f5-9c2dba89c276_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.297 226109 INFO nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Deleting local config drive /var/lib/nova/instances/29195b63-c365-4ace-a4f5-9c2dba89c276/disk.config because it was imported into RBD.
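The config-drive flow above is three steps: mkisofs builds an ISO 9660/Joliet image labelled config-2 from a temporary directory, rbd import copies it into the vms pool as 29195b63-..._disk.config, and the local file is then deleted. A small verification sketch with the python3-rados/python3-rbd bindings, using the same conffile and client id as the logged rbd import command (the bindings are assumed to be installed on the node):

    import rados
    import rbd

    # Connect as client.openstack using the cluster config the log shows.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')  # pool used by 'rbd import --pool vms'
    try:
        images = rbd.RBD().list(ioctx)
        print('29195b63-c365-4ace-a4f5-9c2dba89c276_disk.config' in images)  # expect True
    finally:
        ioctx.close()
        cluster.shutdown()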
Dec 06 07:46:40 compute-1 kernel: tap1a8069ff-01: entered promiscuous mode
Dec 06 07:46:40 compute-1 NetworkManager[49031]: <info>  [1765007200.3444] manager: (tap1a8069ff-01): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.345 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:40 compute-1 ovn_controller[130279]: 2025-12-06T07:46:40Z|00615|binding|INFO|Claiming lport 1a8069ff-0103-4f67-92ab-269451290b42 for this chassis.
Dec 06 07:46:40 compute-1 ovn_controller[130279]: 2025-12-06T07:46:40Z|00616|binding|INFO|1a8069ff-0103-4f67-92ab-269451290b42: Claiming fa:16:3e:33:7a:98 10.100.0.8
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.356 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:7a:98 10.100.0.8'], port_security=['fa:16:3e:33:7a:98 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '29195b63-c365-4ace-a4f5-9c2dba89c276', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9edf259b-6a5e-4e11-938d-d631a412648e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f093eaeb91c042dd8c85f5cd256c4394', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aaa0df08-ced0-442a-9685-6c089d405f5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90bdd78f-ae71-4d01-8170-80b57acff7fd, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=1a8069ff-0103-4f67-92ab-269451290b42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.357 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 1a8069ff-0103-4f67-92ab-269451290b42 in datapath 9edf259b-6a5e-4e11-938d-d631a412648e bound to our chassis
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.360 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9edf259b-6a5e-4e11-938d-d631a412648e
Dec 06 07:46:40 compute-1 ovn_controller[130279]: 2025-12-06T07:46:40Z|00617|binding|INFO|Setting lport 1a8069ff-0103-4f67-92ab-269451290b42 ovn-installed in OVS
Dec 06 07:46:40 compute-1 ovn_controller[130279]: 2025-12-06T07:46:40Z|00618|binding|INFO|Setting lport 1a8069ff-0103-4f67-92ab-269451290b42 up in Southbound
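ovn-controller has now claimed the logical port for this chassis and marked it up in the Southbound DB, which is what lets neutron emit the network-vif-plugged event nova is waiting on (received further below). A sketch for reading that binding back, assuming ovn-sbctl is available on a host with Southbound access (illustrative only):

    import subprocess

    # Show the Port_Binding row for the lport just claimed; the chassis and
    # up columns should now reflect compute-1.
    print(subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=1a8069ff-0103-4f67-92ab-269451290b42'],
        check=True, capture_output=True, text=True).stdout)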
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.364 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.369 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.373 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[48df1496-f9a7-4780-b8fc-c44d55453335]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.374 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9edf259b-61 in ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:46:40 compute-1 systemd-udevd[286221]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.375 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9edf259b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.375 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d051a741-9a54-441e-98ab-af920e37a06e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.376 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c626c378-8867-44af-8439-cb51ef62082d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 systemd-machined[190302]: New machine qemu-73-instance-000000a0.
Dec 06 07:46:40 compute-1 NetworkManager[49031]: <info>  [1765007200.3900] device (tap1a8069ff-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:46:40 compute-1 NetworkManager[49031]: <info>  [1765007200.3907] device (tap1a8069ff-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.388 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a3734bcc-5aec-425e-a5a9-5369d16b05e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 systemd[1]: Started Virtual Machine qemu-73-instance-000000a0.
Dec 06 07:46:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4058190283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.413 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf96b77-383b-46d9-bcc9-54585aff7b50]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.443 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[46c2a716-5a0d-4ba2-bb48-0f40db7f6e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 NetworkManager[49031]: <info>  [1765007200.4504] manager: (tap9edf259b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/277)
Dec 06 07:46:40 compute-1 systemd-udevd[286225]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.450 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[72a1fdeb-dd25-462d-a48a-7eb87cab71d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.481 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[13a84449-0ee7-453a-a6a7-0cca66b54df7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.484 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f1355ce5-bc75-4fbf-a77d-56098052f026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 NetworkManager[49031]: <info>  [1765007200.5066] device (tap9edf259b-60): carrier: link connected
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.512 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c0a28ea7-ea12-4cdf-9d04-0f1d58e9e8d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.529 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dc085926-cb36-4958-8320-41110eece7c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9edf259b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:ac:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748001, 'reachable_time': 28613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286254, 'error': None, 'target': 'ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.544 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[434150bb-d34e-4a97-817f-7f5cf164fc25]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:ac24'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748001, 'tstamp': 748001}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286255, 'error': None, 'target': 'ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.565 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3f81e5cf-3663-419e-bda3-e93eab1403ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9edf259b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:ac:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748001, 'reachable_time': 28613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286267, 'error': None, 'target': 'ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.597 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[697a68fb-ed82-4ec4-8464-fd946708c7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.669 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[18f3912f-601f-4264-abef-973acf451875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.670 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9edf259b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.671 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.671 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9edf259b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.704 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:40 compute-1 NetworkManager[49031]: <info>  [1765007200.7060] manager: (tap9edf259b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Dec 06 07:46:40 compute-1 kernel: tap9edf259b-60: entered promiscuous mode
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.706 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.708 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9edf259b-60, col_values=(('external_ids', {'iface-id': '2622b20a-1eb7-4bb6-abbf-35b090425f31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:46:40 compute-1 ovn_controller[130279]: 2025-12-06T07:46:40Z|00619|binding|INFO|Releasing lport 2622b20a-1eb7-4bb6-abbf-35b090425f31 from this chassis (sb_readonly=0)
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.710 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.724 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.726 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9edf259b-6a5e-4e11-938d-d631a412648e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9edf259b-6a5e-4e11-938d-d631a412648e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.727 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd227d4-3f52-4019-baf9-158070a3c373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.728 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-9edf259b-6a5e-4e11-938d-d631a412648e
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/9edf259b-6a5e-4e11-938d-d631a412648e.pid.haproxy
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 9edf259b-6a5e-4e11-938d-d631a412648e
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:46:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:46:40.729 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e', 'env', 'PROCESS_TAG=haproxy-9edf259b-6a5e-4e11-938d-d631a412648e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9edf259b-6a5e-4e11-938d-d631a412648e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
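The haproxy instance configured above binds 169.254.169.254:80 inside the ovnmeta-9edf259b-... namespace and relays requests to the neutron metadata socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header so the agent can resolve the requesting network. A sketch of the client side, run inside the guest where 169.254.169.254 is reachable (illustrative only):

    import json
    import urllib.request

    # Fetch instance metadata through the proxy chain set up above.
    url = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(url, timeout=10) as resp:
        meta = json.load(resp)
    print(meta['uuid'])  # expect 29195b63-c365-4ace-a4f5-9c2dba89c276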
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.802 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007200.801726, 29195b63-c365-4ace-a4f5-9c2dba89c276 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.802 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] VM Started (Lifecycle Event)
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.867 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.872 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007200.8044426, 29195b63-c365-4ace-a4f5-9c2dba89c276 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.873 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] VM Paused (Lifecycle Event)
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.894 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.897 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:46:40 compute-1 nova_compute[226101]: 2025-12-06 07:46:40.918 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] During sync_power_state the instance has a pending task (spawning). Skip.
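The numeric states in the sync messages come from nova.compute.power_state: the database still holds 0 (NOSTATE) while the instance is building, libvirt reports 3 (PAUSED) because the guest is created paused, and 1 (RUNNING) appears once it is resumed below. A reference sketch of that mapping (mirrors the constants in nova/compute/power_state.py; reproduced here, not taken from the log itself):

    # Mirrors nova.compute.power_state constants seen in the lines above.
    POWER_STATES = {
        0: 'pending',    # NOSTATE: no power state recorded yet (the DB value here)
        1: 'running',    # RUNNING: reported after the 'Resumed' lifecycle event
        3: 'paused',     # PAUSED: guest created paused until nova resumes it
        4: 'shutdown',
        6: 'crashed',
        7: 'suspended',
    }
    print(POWER_STATES[3], POWER_STATES[1])  # paused running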
Dec 06 07:46:41 compute-1 podman[286330]: 2025-12-06 07:46:41.098060936 +0000 UTC m=+0.050416618 container create 687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:46:41 compute-1 systemd[1]: Started libpod-conmon-687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246.scope.
Dec 06 07:46:41 compute-1 podman[286330]: 2025-12-06 07:46:41.070852594 +0000 UTC m=+0.023208296 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:46:41 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:46:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dc31e5bce584c5b3fe2a9a571ba360c0b04ba9378c8c62add075666f7232de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:41 compute-1 podman[286330]: 2025-12-06 07:46:41.198660903 +0000 UTC m=+0.151016595 container init 687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:46:41 compute-1 podman[286330]: 2025-12-06 07:46:41.20373556 +0000 UTC m=+0.156091242 container start 687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:46:41 compute-1 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[286345]: [NOTICE]   (286349) : New worker (286351) forked
Dec 06 07:46:41 compute-1 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[286345]: [NOTICE]   (286349) : Loading success.
Dec 06 07:46:41 compute-1 ceph-mon[81689]: pgmap v2756: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 787 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 211 op/s
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.593 226109 DEBUG nova.compute.manager [req-4bd487b4-8328-4923-9933-13ca0a85579c req-f1a8df21-5cfe-4e4f-88f8-dd8646bc9fb3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Received event network-vif-plugged-1a8069ff-0103-4f67-92ab-269451290b42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.594 226109 DEBUG oslo_concurrency.lockutils [req-4bd487b4-8328-4923-9933-13ca0a85579c req-f1a8df21-5cfe-4e4f-88f8-dd8646bc9fb3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.594 226109 DEBUG oslo_concurrency.lockutils [req-4bd487b4-8328-4923-9933-13ca0a85579c req-f1a8df21-5cfe-4e4f-88f8-dd8646bc9fb3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.595 226109 DEBUG oslo_concurrency.lockutils [req-4bd487b4-8328-4923-9933-13ca0a85579c req-f1a8df21-5cfe-4e4f-88f8-dd8646bc9fb3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.595 226109 DEBUG nova.compute.manager [req-4bd487b4-8328-4923-9933-13ca0a85579c req-f1a8df21-5cfe-4e4f-88f8-dd8646bc9fb3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Processing event network-vif-plugged-1a8069ff-0103-4f67-92ab-269451290b42 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.595 226109 DEBUG nova.compute.manager [req-4bd487b4-8328-4923-9933-13ca0a85579c req-f1a8df21-5cfe-4e4f-88f8-dd8646bc9fb3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Received event network-vif-plugged-1a8069ff-0103-4f67-92ab-269451290b42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.595 226109 DEBUG oslo_concurrency.lockutils [req-4bd487b4-8328-4923-9933-13ca0a85579c req-f1a8df21-5cfe-4e4f-88f8-dd8646bc9fb3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.595 226109 DEBUG oslo_concurrency.lockutils [req-4bd487b4-8328-4923-9933-13ca0a85579c req-f1a8df21-5cfe-4e4f-88f8-dd8646bc9fb3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.596 226109 DEBUG oslo_concurrency.lockutils [req-4bd487b4-8328-4923-9933-13ca0a85579c req-f1a8df21-5cfe-4e4f-88f8-dd8646bc9fb3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.596 226109 DEBUG nova.compute.manager [req-4bd487b4-8328-4923-9933-13ca0a85579c req-f1a8df21-5cfe-4e4f-88f8-dd8646bc9fb3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] No waiting events found dispatching network-vif-plugged-1a8069ff-0103-4f67-92ab-269451290b42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.596 226109 WARNING nova.compute.manager [req-4bd487b4-8328-4923-9933-13ca0a85579c req-f1a8df21-5cfe-4e4f-88f8-dd8646bc9fb3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Received unexpected event network-vif-plugged-1a8069ff-0103-4f67-92ab-269451290b42 for instance with vm_state building and task_state spawning.
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.597 226109 DEBUG nova.compute.manager [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.600 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007201.6002321, 29195b63-c365-4ace-a4f5-9c2dba89c276 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.600 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] VM Resumed (Lifecycle Event)
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.602 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.605 226109 INFO nova.virt.libvirt.driver [-] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Instance spawned successfully.
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.605 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.630 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.633 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.657 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.657 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.658 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.658 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.658 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.659 226109 DEBUG nova.virt.libvirt.driver [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.662 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.714 226109 INFO nova.compute.manager [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Took 9.14 seconds to spawn the instance on the hypervisor.
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.718 226109 DEBUG nova.compute.manager [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.787 226109 INFO nova.compute.manager [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Took 10.14 seconds to build instance.
Dec 06 07:46:41 compute-1 nova_compute[226101]: 2025-12-06 07:46:41.822 226109 DEBUG oslo_concurrency.lockutils [None req-6732181e-2390-4d43-9097-9b9161a7e846 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:46:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:42.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:46:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:42.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1367365160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3457157827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:43 compute-1 sudo[286360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:43 compute-1 sudo[286360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:43 compute-1 sudo[286360]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:43 compute-1 sudo[286385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:46:43 compute-1 sudo[286385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:43 compute-1 sudo[286385]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:43 compute-1 nova_compute[226101]: 2025-12-06 07:46:43.356 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:44.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:44.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:46:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:46:44 compute-1 ceph-mon[81689]: pgmap v2757: 305 pgs: 305 active+clean; 788 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.3 MiB/s wr, 139 op/s
Dec 06 07:46:44 compute-1 nova_compute[226101]: 2025-12-06 07:46:44.477 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e351 e351: 3 total, 3 up, 3 in
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.848837) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204848875, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2494, "num_deletes": 257, "total_data_size": 5671672, "memory_usage": 5739008, "flush_reason": "Manual Compaction"}
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204884050, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 3714441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56299, "largest_seqno": 58788, "table_properties": {"data_size": 3704248, "index_size": 6495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21860, "raw_average_key_size": 20, "raw_value_size": 3683631, "raw_average_value_size": 3528, "num_data_blocks": 281, "num_entries": 1044, "num_filter_entries": 1044, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007010, "oldest_key_time": 1765007010, "file_creation_time": 1765007204, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 35277 microseconds, and 11525 cpu microseconds.
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.884111) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 3714441 bytes OK
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.884137) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.886460) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.886475) EVENT_LOG_v1 {"time_micros": 1765007204886470, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.886491) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 5660605, prev total WAL file size 5660605, number of live WAL files 2.
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.887825) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(3627KB)], [111(10MB)]
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204887877, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 14487110, "oldest_snapshot_seqno": -1}
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 9108 keys, 12445897 bytes, temperature: kUnknown
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204988213, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 12445897, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12386034, "index_size": 36007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 237278, "raw_average_key_size": 26, "raw_value_size": 12225017, "raw_average_value_size": 1342, "num_data_blocks": 1400, "num_entries": 9108, "num_filter_entries": 9108, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765007204, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.988531) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 12445897 bytes
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.997340) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.3 rd, 123.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.3 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 9636, records dropped: 528 output_compression: NoCompression
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.997362) EVENT_LOG_v1 {"time_micros": 1765007204997352, "job": 70, "event": "compaction_finished", "compaction_time_micros": 100426, "compaction_time_cpu_micros": 26219, "output_level": 6, "num_output_files": 1, "total_output_size": 12445897, "num_input_records": 9636, "num_output_records": 9108, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204998072, "job": 70, "event": "table_file_deletion", "file_number": 113}
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204999828, "job": 70, "event": "table_file_deletion", "file_number": 111}
Dec 06 07:46:44 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.887697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.999970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.999976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.999977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.999978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:46:44.999980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:45 compute-1 ceph-mon[81689]: osdmap e351: 3 total, 3 up, 3 in
Dec 06 07:46:45 compute-1 ceph-mon[81689]: pgmap v2759: 305 pgs: 305 active+clean; 842 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 MiB/s rd, 7.1 MiB/s wr, 264 op/s
Dec 06 07:46:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e352 e352: 3 total, 3 up, 3 in
Dec 06 07:46:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:46:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1937280064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:46.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:46.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:47 compute-1 ceph-mon[81689]: osdmap e352: 3 total, 3 up, 3 in
Dec 06 07:46:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1937280064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:47 compute-1 nova_compute[226101]: 2025-12-06 07:46:47.052 226109 DEBUG nova.compute.manager [req-455fb402-b260-4af7-80b7-05df3c3ff845 req-8b9c2690-2ca8-408a-9e00-79152256f34a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Received event network-changed-1a8069ff-0103-4f67-92ab-269451290b42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:46:47 compute-1 nova_compute[226101]: 2025-12-06 07:46:47.052 226109 DEBUG nova.compute.manager [req-455fb402-b260-4af7-80b7-05df3c3ff845 req-8b9c2690-2ca8-408a-9e00-79152256f34a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Refreshing instance network info cache due to event network-changed-1a8069ff-0103-4f67-92ab-269451290b42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:46:47 compute-1 nova_compute[226101]: 2025-12-06 07:46:47.052 226109 DEBUG oslo_concurrency.lockutils [req-455fb402-b260-4af7-80b7-05df3c3ff845 req-8b9c2690-2ca8-408a-9e00-79152256f34a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:46:47 compute-1 nova_compute[226101]: 2025-12-06 07:46:47.052 226109 DEBUG oslo_concurrency.lockutils [req-455fb402-b260-4af7-80b7-05df3c3ff845 req-8b9c2690-2ca8-408a-9e00-79152256f34a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:46:47 compute-1 nova_compute[226101]: 2025-12-06 07:46:47.053 226109 DEBUG nova.network.neutron [req-455fb402-b260-4af7-80b7-05df3c3ff845 req-8b9c2690-2ca8-408a-9e00-79152256f34a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Refreshing network info cache for port 1a8069ff-0103-4f67-92ab-269451290b42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:46:47 compute-1 nova_compute[226101]: 2025-12-06 07:46:47.327 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:47 compute-1 ceph-mon[81689]: pgmap v2761: 305 pgs: 305 active+clean; 835 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.1 MiB/s rd, 6.2 MiB/s wr, 274 op/s
Dec 06 07:46:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:48.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:48.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:48 compute-1 nova_compute[226101]: 2025-12-06 07:46:48.396 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:49 compute-1 ceph-mon[81689]: pgmap v2762: 305 pgs: 305 active+clean; 806 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.9 MiB/s rd, 5.9 MiB/s wr, 303 op/s
Dec 06 07:46:49 compute-1 nova_compute[226101]: 2025-12-06 07:46:49.481 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:49 compute-1 nova_compute[226101]: 2025-12-06 07:46:49.503 226109 DEBUG nova.network.neutron [req-455fb402-b260-4af7-80b7-05df3c3ff845 req-8b9c2690-2ca8-408a-9e00-79152256f34a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Updated VIF entry in instance network info cache for port 1a8069ff-0103-4f67-92ab-269451290b42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:46:49 compute-1 nova_compute[226101]: 2025-12-06 07:46:49.504 226109 DEBUG nova.network.neutron [req-455fb402-b260-4af7-80b7-05df3c3ff845 req-8b9c2690-2ca8-408a-9e00-79152256f34a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Updating instance_info_cache with network_info: [{"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:46:49 compute-1 nova_compute[226101]: 2025-12-06 07:46:49.532 226109 DEBUG oslo_concurrency.lockutils [req-455fb402-b260-4af7-80b7-05df3c3ff845 req-8b9c2690-2ca8-408a-9e00-79152256f34a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:46:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:50.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:46:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:50.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:46:51 compute-1 ceph-mon[81689]: pgmap v2763: 305 pgs: 305 active+clean; 787 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 5.8 MiB/s wr, 349 op/s
Dec 06 07:46:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:52.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:52.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:53 compute-1 nova_compute[226101]: 2025-12-06 07:46:53.398 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:46:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:54.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:46:54 compute-1 ceph-mon[81689]: pgmap v2764: 305 pgs: 305 active+clean; 787 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 4.0 MiB/s wr, 294 op/s
Dec 06 07:46:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:54.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:54 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Dec 06 07:46:54 compute-1 nova_compute[226101]: 2025-12-06 07:46:54.518 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e353 e353: 3 total, 3 up, 3 in
Dec 06 07:46:55 compute-1 nova_compute[226101]: 2025-12-06 07:46:55.665 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:55 compute-1 ceph-mon[81689]: osdmap e353: 3 total, 3 up, 3 in
Dec 06 07:46:55 compute-1 ceph-mon[81689]: pgmap v2766: 305 pgs: 305 active+clean; 789 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 181 op/s
Dec 06 07:46:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:56.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:56.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:56 compute-1 ovn_controller[130279]: 2025-12-06T07:46:56Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:7a:98 10.100.0.8
Dec 06 07:46:56 compute-1 ovn_controller[130279]: 2025-12-06T07:46:56Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:7a:98 10.100.0.8
Dec 06 07:46:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:58.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:46:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:58.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:58 compute-1 nova_compute[226101]: 2025-12-06 07:46:58.413 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:58 compute-1 ceph-mon[81689]: pgmap v2767: 305 pgs: 305 active+clean; 796 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 139 op/s
Dec 06 07:46:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:59 compute-1 nova_compute[226101]: 2025-12-06 07:46:59.563 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:00.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:00.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:47:01.666 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:47:01.667 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:47:01.668 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:02.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:02.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:03 compute-1 nova_compute[226101]: 2025-12-06 07:47:03.414 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1548366512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1075072744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:03 compute-1 ceph-mon[81689]: pgmap v2768: 305 pgs: 305 active+clean; 809 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 139 op/s
Dec 06 07:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3035636638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:04.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:04.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:04 compute-1 ceph-mon[81689]: pgmap v2769: 305 pgs: 305 active+clean; 813 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.5 MiB/s wr, 99 op/s
Dec 06 07:47:04 compute-1 ceph-mon[81689]: pgmap v2770: 305 pgs: 305 active+clean; 813 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 550 KiB/s rd, 2.5 MiB/s wr, 94 op/s
Dec 06 07:47:04 compute-1 nova_compute[226101]: 2025-12-06 07:47:04.610 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:05 compute-1 nova_compute[226101]: 2025-12-06 07:47:05.002 226109 DEBUG nova.compute.manager [None req-5c191759-b8cd-48d0-bce4-93b1108e896f 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:47:05 compute-1 nova_compute[226101]: 2025-12-06 07:47:05.069 226109 INFO nova.compute.manager [None req-5c191759-b8cd-48d0-bce4-93b1108e896f 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] instance snapshotting
Dec 06 07:47:05 compute-1 nova_compute[226101]: 2025-12-06 07:47:05.326 226109 INFO nova.virt.libvirt.driver [None req-5c191759-b8cd-48d0-bce4-93b1108e896f 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Beginning live snapshot process
Dec 06 07:47:05 compute-1 nova_compute[226101]: 2025-12-06 07:47:05.468 226109 DEBUG nova.virt.libvirt.imagebackend [None req-5c191759-b8cd-48d0-bce4-93b1108e896f 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:47:05 compute-1 ceph-mon[81689]: pgmap v2771: 305 pgs: 305 active+clean; 817 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 2.5 MiB/s wr, 114 op/s
Dec 06 07:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2972039573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:05 compute-1 nova_compute[226101]: 2025-12-06 07:47:05.814 226109 DEBUG nova.storage.rbd_utils [None req-5c191759-b8cd-48d0-bce4-93b1108e896f 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] creating snapshot(c04485eba55f455290e65b795d324449) on rbd image(29195b63-c365-4ace-a4f5-9c2dba89c276_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:47:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:06.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:06.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e354 e354: 3 total, 3 up, 3 in
Dec 06 07:47:06 compute-1 nova_compute[226101]: 2025-12-06 07:47:06.916 226109 DEBUG nova.storage.rbd_utils [None req-5c191759-b8cd-48d0-bce4-93b1108e896f 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] cloning vms/29195b63-c365-4ace-a4f5-9c2dba89c276_disk@c04485eba55f455290e65b795d324449 to images/85f1f69b-a9ee-46e0-a30b-ae7faf09ffef clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:47:07 compute-1 nova_compute[226101]: 2025-12-06 07:47:07.169 226109 DEBUG nova.storage.rbd_utils [None req-5c191759-b8cd-48d0-bce4-93b1108e896f 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] flattening images/85f1f69b-a9ee-46e0-a30b-ae7faf09ffef flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:47:07 compute-1 nova_compute[226101]: 2025-12-06 07:47:07.521 226109 DEBUG nova.storage.rbd_utils [None req-5c191759-b8cd-48d0-bce4-93b1108e896f 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] removing snapshot(c04485eba55f455290e65b795d324449) on rbd image(29195b63-c365-4ace-a4f5-9c2dba89c276_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:47:07 compute-1 ceph-mon[81689]: osdmap e354: 3 total, 3 up, 3 in
Dec 06 07:47:07 compute-1 ceph-mon[81689]: pgmap v2773: 305 pgs: 305 active+clean; 825 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 1.5 MiB/s wr, 92 op/s
Dec 06 07:47:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e355 e355: 3 total, 3 up, 3 in
Dec 06 07:47:07 compute-1 nova_compute[226101]: 2025-12-06 07:47:07.888 226109 DEBUG nova.storage.rbd_utils [None req-5c191759-b8cd-48d0-bce4-93b1108e896f 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] creating snapshot(snap) on rbd image(85f1f69b-a9ee-46e0-a30b-ae7faf09ffef) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:47:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:08.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:08.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:08 compute-1 nova_compute[226101]: 2025-12-06 07:47:08.417 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:08 compute-1 ceph-mon[81689]: osdmap e355: 3 total, 3 up, 3 in
Dec 06 07:47:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e356 e356: 3 total, 3 up, 3 in
Dec 06 07:47:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:47:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4247177994' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:47:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:47:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4247177994' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:47:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:09 compute-1 nova_compute[226101]: 2025-12-06 07:47:09.689 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:10 compute-1 podman[286552]: 2025-12-06 07:47:10.078625062 +0000 UTC m=+0.062084221 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:47:10 compute-1 podman[286551]: 2025-12-06 07:47:10.085572469 +0000 UTC m=+0.068839843 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:47:10 compute-1 podman[286553]: 2025-12-06 07:47:10.124294052 +0000 UTC m=+0.096452467 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:47:10 compute-1 ceph-mon[81689]: osdmap e356: 3 total, 3 up, 3 in
Dec 06 07:47:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4247177994' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:47:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4247177994' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:47:10 compute-1 ceph-mon[81689]: pgmap v2776: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 863 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.5 MiB/s wr, 167 op/s
Dec 06 07:47:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:10.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:10.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:12 compute-1 ceph-mon[81689]: pgmap v2777: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 911 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.5 MiB/s rd, 7.9 MiB/s wr, 314 op/s
Dec 06 07:47:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:12.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:12.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:12 compute-1 nova_compute[226101]: 2025-12-06 07:47:12.733 226109 INFO nova.virt.libvirt.driver [None req-5c191759-b8cd-48d0-bce4-93b1108e896f 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Snapshot image upload complete
Dec 06 07:47:12 compute-1 nova_compute[226101]: 2025-12-06 07:47:12.734 226109 INFO nova.compute.manager [None req-5c191759-b8cd-48d0-bce4-93b1108e896f 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Took 7.66 seconds to snapshot the instance on the hypervisor.
Dec 06 07:47:13 compute-1 nova_compute[226101]: 2025-12-06 07:47:13.421 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:14 compute-1 ceph-mon[81689]: pgmap v2778: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 404 op/s
Dec 06 07:47:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:14.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:14.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:14 compute-1 nova_compute[226101]: 2025-12-06 07:47:14.739 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:16.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:16.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:47:17 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.39034 192.168.122.102:0/183954237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:18 compute-1 ceph-mon[81689]: pgmap v2779: 305 pgs: 305 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 8.5 MiB/s wr, 426 op/s
Dec 06 07:47:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.003000079s ======
Dec 06 07:47:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:18.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Dec 06 07:47:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:18.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:18 compute-1 nova_compute[226101]: 2025-12-06 07:47:18.422 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:18 compute-1 nova_compute[226101]: 2025-12-06 07:47:18.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:18 compute-1 nova_compute[226101]: 2025-12-06 07:47:18.610 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:18 compute-1 nova_compute[226101]: 2025-12-06 07:47:18.610 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:18 compute-1 nova_compute[226101]: 2025-12-06 07:47:18.611 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:18 compute-1 nova_compute[226101]: 2025-12-06 07:47:18.611 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:47:18 compute-1 nova_compute[226101]: 2025-12-06 07:47:18.611 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:47:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:47:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3221444380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.048 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.131 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.132 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.134 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.134 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.299 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.300 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3903MB free_disk=20.6937255859375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.300 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.301 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:47:19.366 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:47:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:47:19.367 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.402 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.418 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 119a621b-6198-42db-9a89-d73da6c2a2da actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.418 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 29195b63-c365-4ace-a4f5-9c2dba89c276 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.418 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.419 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.536 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:47:19 compute-1 nova_compute[226101]: 2025-12-06 07:47:19.741 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:20.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:20.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/183954237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:20 compute-1 ceph-mon[81689]: pgmap v2780: 305 pgs: 305 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 MiB/s rd, 7.4 MiB/s wr, 387 op/s
Dec 06 07:47:20 compute-1 ceph-mon[81689]: from='client.39034 192.168.122.102:0/183954237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3827093137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4117359800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3221444380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:20 compute-1 ceph-mon[81689]: pgmap v2781: 305 pgs: 305 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.1 MiB/s wr, 309 op/s
Dec 06 07:47:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2144854194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:47:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/889746553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e357 e357: 3 total, 3 up, 3 in
Dec 06 07:47:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/153126333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:22 compute-1 nova_compute[226101]: 2025-12-06 07:47:22.224 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:47:22 compute-1 nova_compute[226101]: 2025-12-06 07:47:22.230 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:47:22 compute-1 nova_compute[226101]: 2025-12-06 07:47:22.246 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:47:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:22.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:22 compute-1 nova_compute[226101]: 2025-12-06 07:47:22.270 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:47:22 compute-1 nova_compute[226101]: 2025-12-06 07:47:22.270 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:22.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:47:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/991248695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:47:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:47:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/991248695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:47:23 compute-1 ceph-mon[81689]: pgmap v2782: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.0 MiB/s rd, 3.5 MiB/s wr, 278 op/s
Dec 06 07:47:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/889746553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:23 compute-1 ceph-mon[81689]: osdmap e357: 3 total, 3 up, 3 in
Dec 06 07:47:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/991248695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:47:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/991248695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:47:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3915905730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:23 compute-1 ceph-mon[81689]: pgmap v2784: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 14 KiB/s wr, 152 op/s
Dec 06 07:47:23 compute-1 nova_compute[226101]: 2025-12-06 07:47:23.423 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:24.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:24 compute-1 nova_compute[226101]: 2025-12-06 07:47:24.270 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:24 compute-1 nova_compute[226101]: 2025-12-06 07:47:24.271 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:47:24 compute-1 nova_compute[226101]: 2025-12-06 07:47:24.271 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:47:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:24.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:47:24.369 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:47:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1022723988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:24 compute-1 nova_compute[226101]: 2025-12-06 07:47:24.743 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:24 compute-1 nova_compute[226101]: 2025-12-06 07:47:24.984 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:47:24 compute-1 nova_compute[226101]: 2025-12-06 07:47:24.985 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:47:24 compute-1 nova_compute[226101]: 2025-12-06 07:47:24.985 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:47:24 compute-1 nova_compute[226101]: 2025-12-06 07:47:24.986 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:47:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:26.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:26.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2017072707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:47:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2017072707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:47:26 compute-1 ceph-mon[81689]: pgmap v2785: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 139 op/s
Dec 06 07:47:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:28.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:28.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:28 compute-1 nova_compute[226101]: 2025-12-06 07:47:28.425 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:28 compute-1 nova_compute[226101]: 2025-12-06 07:47:28.483 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updating instance_info_cache with network_info: [{"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:47:28 compute-1 nova_compute[226101]: 2025-12-06 07:47:28.515 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-119a621b-6198-42db-9a89-d73da6c2a2da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:47:28 compute-1 nova_compute[226101]: 2025-12-06 07:47:28.516 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:47:28 compute-1 nova_compute[226101]: 2025-12-06 07:47:28.516 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:28 compute-1 nova_compute[226101]: 2025-12-06 07:47:28.517 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:28 compute-1 nova_compute[226101]: 2025-12-06 07:47:28.517 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:47:28 compute-1 ceph-mon[81689]: pgmap v2786: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 35 KiB/s wr, 182 op/s
Dec 06 07:47:29 compute-1 ceph-mon[81689]: pgmap v2787: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 35 KiB/s wr, 215 op/s
Dec 06 07:47:29 compute-1 nova_compute[226101]: 2025-12-06 07:47:29.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:29 compute-1 nova_compute[226101]: 2025-12-06 07:47:29.768 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:30.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:30.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/855239122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:31 compute-1 ceph-mon[81689]: pgmap v2788: 305 pgs: 305 active+clean; 916 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 40 KiB/s wr, 233 op/s
Dec 06 07:47:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:31 compute-1 nova_compute[226101]: 2025-12-06 07:47:31.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:31 compute-1 nova_compute[226101]: 2025-12-06 07:47:31.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:32.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:32.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:32 compute-1 nova_compute[226101]: 2025-12-06 07:47:32.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3830138832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:33 compute-1 nova_compute[226101]: 2025-12-06 07:47:33.454 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:34 compute-1 ceph-mon[81689]: pgmap v2789: 305 pgs: 305 active+clean; 890 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 50 KiB/s wr, 256 op/s
Dec 06 07:47:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:34.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:34.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:34 compute-1 nova_compute[226101]: 2025-12-06 07:47:34.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:34 compute-1 nova_compute[226101]: 2025-12-06 07:47:34.772 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3956898978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:36 compute-1 ceph-mon[81689]: pgmap v2790: 305 pgs: 305 active+clean; 867 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 53 KiB/s wr, 274 op/s
Dec 06 07:47:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:36.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:36.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:37 compute-1 ceph-mon[81689]: pgmap v2791: 305 pgs: 305 active+clean; 869 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 46 KiB/s wr, 214 op/s
Dec 06 07:47:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:38.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:38.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:38 compute-1 nova_compute[226101]: 2025-12-06 07:47:38.499 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:38 compute-1 ovn_controller[130279]: 2025-12-06T07:47:38Z|00620|binding|INFO|Releasing lport 2622b20a-1eb7-4bb6-abbf-35b090425f31 from this chassis (sb_readonly=0)
Dec 06 07:47:38 compute-1 ovn_controller[130279]: 2025-12-06T07:47:38Z|00621|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:47:38 compute-1 nova_compute[226101]: 2025-12-06 07:47:38.653 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:39 compute-1 ceph-mon[81689]: pgmap v2792: 305 pgs: 305 active+clean; 869 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 109 KiB/s wr, 180 op/s
Dec 06 07:47:39 compute-1 nova_compute[226101]: 2025-12-06 07:47:39.774 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:40.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:40.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:41 compute-1 podman[286661]: 2025-12-06 07:47:41.090262403 +0000 UTC m=+0.070335224 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:47:41 compute-1 podman[286660]: 2025-12-06 07:47:41.096687956 +0000 UTC m=+0.076978842 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:47:41 compute-1 podman[286662]: 2025-12-06 07:47:41.132349206 +0000 UTC m=+0.098752729 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:47:41 compute-1 ceph-mon[81689]: pgmap v2793: 305 pgs: 305 active+clean; 893 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Dec 06 07:47:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:42.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:42.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:43 compute-1 ceph-mon[81689]: pgmap v2794: 305 pgs: 305 active+clean; 900 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Dec 06 07:47:43 compute-1 sudo[286725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:43 compute-1 sudo[286725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:43 compute-1 sudo[286725]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:43 compute-1 sudo[286750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:47:43 compute-1 sudo[286750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:43 compute-1 sudo[286750]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:43 compute-1 nova_compute[226101]: 2025-12-06 07:47:43.502 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:43 compute-1 sudo[286775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:43 compute-1 sudo[286775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:43 compute-1 sudo[286775]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:43 compute-1 sudo[286800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:47:43 compute-1 sudo[286800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:44 compute-1 sudo[286800]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:44.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:44.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:47:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:47:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:47:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:47:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:47:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:47:44 compute-1 nova_compute[226101]: 2025-12-06 07:47:44.810 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.143943) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265143973, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 873, "num_deletes": 251, "total_data_size": 1545919, "memory_usage": 1573168, "flush_reason": "Manual Compaction"}
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265152320, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 1019453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58793, "largest_seqno": 59661, "table_properties": {"data_size": 1015316, "index_size": 1853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8795, "raw_average_key_size": 18, "raw_value_size": 1006798, "raw_average_value_size": 2071, "num_data_blocks": 82, "num_entries": 486, "num_filter_entries": 486, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007205, "oldest_key_time": 1765007205, "file_creation_time": 1765007265, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 8417 microseconds, and 2916 cpu microseconds.
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.152359) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 1019453 bytes OK
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.152374) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.153949) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.153961) EVENT_LOG_v1 {"time_micros": 1765007265153957, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.153976) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 1541381, prev total WAL file size 1541381, number of live WAL files 2.
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.154544) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(995KB)], [114(11MB)]
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265154584, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 13465350, "oldest_snapshot_seqno": -1}
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 9074 keys, 12373411 bytes, temperature: kUnknown
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265237297, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 12373411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12313677, "index_size": 35942, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 238483, "raw_average_key_size": 26, "raw_value_size": 12152939, "raw_average_value_size": 1339, "num_data_blocks": 1380, "num_entries": 9074, "num_filter_entries": 9074, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765007265, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.238097) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 12373411 bytes
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.240145) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.2 rd, 149.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(25.3) write-amplify(12.1) OK, records in: 9594, records dropped: 520 output_compression: NoCompression
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.240176) EVENT_LOG_v1 {"time_micros": 1765007265240164, "job": 72, "event": "compaction_finished", "compaction_time_micros": 82996, "compaction_time_cpu_micros": 31616, "output_level": 6, "num_output_files": 1, "total_output_size": 12373411, "num_input_records": 9594, "num_output_records": 9074, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265240535, "job": 72, "event": "table_file_deletion", "file_number": 116}
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265242658, "job": 72, "event": "table_file_deletion", "file_number": 114}
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.154446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.242763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.242768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.242770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.242771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:47:45.242773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-1 ceph-mon[81689]: pgmap v2795: 305 pgs: 305 active+clean; 903 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Dec 06 07:47:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:46.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:46.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e358 e358: 3 total, 3 up, 3 in
Dec 06 07:47:47 compute-1 nova_compute[226101]: 2025-12-06 07:47:47.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:48.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:48.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:48 compute-1 ceph-mon[81689]: osdmap e358: 3 total, 3 up, 3 in
Dec 06 07:47:48 compute-1 ceph-mon[81689]: pgmap v2797: 305 pgs: 305 active+clean; 906 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 127 op/s
Dec 06 07:47:48 compute-1 nova_compute[226101]: 2025-12-06 07:47:48.504 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:49 compute-1 ceph-mon[81689]: pgmap v2798: 305 pgs: 305 active+clean; 868 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 116 op/s
Dec 06 07:47:49 compute-1 nova_compute[226101]: 2025-12-06 07:47:49.812 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:50.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:50.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:50 compute-1 sudo[286857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:50 compute-1 sudo[286857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:50 compute-1 sudo[286857]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:50 compute-1 sudo[286882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:47:50 compute-1 sudo[286882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:50 compute-1 sudo[286882]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:47:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:47:51 compute-1 ceph-mon[81689]: pgmap v2799: 305 pgs: 305 active+clean; 815 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 878 KiB/s wr, 93 op/s
Dec 06 07:47:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:52.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:52.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1957087422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:53 compute-1 sshd-session[286907]: Received disconnect from 154.219.116.39 port 60902:11: Bye Bye [preauth]
Dec 06 07:47:53 compute-1 sshd-session[286907]: Disconnected from authenticating user root 154.219.116.39 port 60902 [preauth]
Dec 06 07:47:53 compute-1 nova_compute[226101]: 2025-12-06 07:47:53.506 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:54 compute-1 nova_compute[226101]: 2025-12-06 07:47:54.188 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:54.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:54.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:54 compute-1 ceph-mon[81689]: pgmap v2800: 305 pgs: 305 active+clean; 773 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 709 KiB/s rd, 382 KiB/s wr, 83 op/s
Dec 06 07:47:54 compute-1 nova_compute[226101]: 2025-12-06 07:47:54.814 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e359 e359: 3 total, 3 up, 3 in
Dec 06 07:47:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e360 e360: 3 total, 3 up, 3 in
Dec 06 07:47:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3420904072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:56 compute-1 ceph-mon[81689]: pgmap v2801: 305 pgs: 305 active+clean; 753 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 633 KiB/s rd, 380 KiB/s wr, 78 op/s
Dec 06 07:47:56 compute-1 ceph-mon[81689]: osdmap e359: 3 total, 3 up, 3 in
Dec 06 07:47:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:56.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:56.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:57 compute-1 ceph-mon[81689]: osdmap e360: 3 total, 3 up, 3 in
Dec 06 07:47:57 compute-1 ceph-mon[81689]: pgmap v2804: 305 pgs: 305 active+clean; 753 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 399 KiB/s rd, 125 KiB/s wr, 85 op/s
Dec 06 07:47:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:58.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:47:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:58 compute-1 nova_compute[226101]: 2025-12-06 07:47:58.508 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e361 e361: 3 total, 3 up, 3 in
Dec 06 07:47:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:47:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2911447517' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:47:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:47:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2911447517' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:47:59 compute-1 ceph-mon[81689]: osdmap e361: 3 total, 3 up, 3 in
Dec 06 07:47:59 compute-1 ceph-mon[81689]: pgmap v2806: 305 pgs: 305 active+clean; 793 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 6.4 MiB/s wr, 197 op/s
Dec 06 07:47:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2911447517' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:47:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2911447517' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:47:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e362 e362: 3 total, 3 up, 3 in
Dec 06 07:47:59 compute-1 nova_compute[226101]: 2025-12-06 07:47:59.848 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:00.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:00.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:00 compute-1 ceph-mon[81689]: osdmap e362: 3 total, 3 up, 3 in
Dec 06 07:48:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e363 e363: 3 total, 3 up, 3 in
Dec 06 07:48:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:01 compute-1 nova_compute[226101]: 2025-12-06 07:48:01.494 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:01.666 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:01.667 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:01.668 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:01 compute-1 ceph-mon[81689]: osdmap e363: 3 total, 3 up, 3 in
Dec 06 07:48:01 compute-1 ceph-mon[81689]: pgmap v2809: 305 pgs: 305 active+clean; 813 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 11 MiB/s wr, 199 op/s
Dec 06 07:48:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:02.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:02.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e364 e364: 3 total, 3 up, 3 in
Dec 06 07:48:03 compute-1 nova_compute[226101]: 2025-12-06 07:48:03.542 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:04 compute-1 ceph-mon[81689]: osdmap e364: 3 total, 3 up, 3 in
Dec 06 07:48:04 compute-1 ceph-mon[81689]: pgmap v2811: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 857 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 13 MiB/s wr, 60 op/s
Dec 06 07:48:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:04.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:04.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:04 compute-1 nova_compute[226101]: 2025-12-06 07:48:04.851 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:05 compute-1 ceph-mon[81689]: pgmap v2812: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 742 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 205 op/s
Dec 06 07:48:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e365 e365: 3 total, 3 up, 3 in
Dec 06 07:48:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:05.942 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:48:05 compute-1 nova_compute[226101]: 2025-12-06 07:48:05.943 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:05.943 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:48:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:06.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:06.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:06 compute-1 ceph-mon[81689]: osdmap e365: 3 total, 3 up, 3 in
Dec 06 07:48:07 compute-1 ceph-mon[81689]: pgmap v2814: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 667 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 6.1 MiB/s wr, 174 op/s
Dec 06 07:48:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:07.945 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:08.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:08.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:08 compute-1 nova_compute[226101]: 2025-12-06 07:48:08.543 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1771189620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:09 compute-1 nova_compute[226101]: 2025-12-06 07:48:09.853 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:10.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:10.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:10 compute-1 ceph-mon[81689]: pgmap v2815: 305 pgs: 305 active+clean; 609 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.7 MiB/s wr, 163 op/s
Dec 06 07:48:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e366 e366: 3 total, 3 up, 3 in
Dec 06 07:48:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:12 compute-1 podman[286910]: 2025-12-06 07:48:12.077008963 +0000 UTC m=+0.053655535 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:48:12 compute-1 podman[286911]: 2025-12-06 07:48:12.083399975 +0000 UTC m=+0.051647341 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:48:12 compute-1 ceph-mon[81689]: osdmap e366: 3 total, 3 up, 3 in
Dec 06 07:48:12 compute-1 ceph-mon[81689]: pgmap v2817: 305 pgs: 305 active+clean; 571 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 118 KiB/s rd, 8.0 KiB/s wr, 170 op/s
Dec 06 07:48:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3119992932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:12 compute-1 podman[286912]: 2025-12-06 07:48:12.109158818 +0000 UTC m=+0.081132945 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:48:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:12.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:12.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e367 e367: 3 total, 3 up, 3 in
Dec 06 07:48:13 compute-1 nova_compute[226101]: 2025-12-06 07:48:13.545 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:14 compute-1 ceph-mon[81689]: pgmap v2818: 305 pgs: 305 active+clean; 541 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 65 op/s
Dec 06 07:48:14 compute-1 ceph-mon[81689]: osdmap e367: 3 total, 3 up, 3 in
Dec 06 07:48:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:14.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:14.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:14 compute-1 nova_compute[226101]: 2025-12-06 07:48:14.855 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:15 compute-1 ceph-mon[81689]: pgmap v2820: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 522 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 5.6 KiB/s wr, 111 op/s
Dec 06 07:48:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e368 e368: 3 total, 3 up, 3 in
Dec 06 07:48:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:16.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:16.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:16 compute-1 ceph-mon[81689]: osdmap e368: 3 total, 3 up, 3 in
Dec 06 07:48:18 compute-1 ceph-mon[81689]: pgmap v2822: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 506 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 75 KiB/s rd, 4.9 KiB/s wr, 103 op/s
Dec 06 07:48:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2535541170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:48:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2535541170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:48:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2128666931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:18.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:18.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:18 compute-1 nova_compute[226101]: 2025-12-06 07:48:18.547 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:18 compute-1 nova_compute[226101]: 2025-12-06 07:48:18.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:18 compute-1 nova_compute[226101]: 2025-12-06 07:48:18.691 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:18 compute-1 nova_compute[226101]: 2025-12-06 07:48:18.691 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:18 compute-1 nova_compute[226101]: 2025-12-06 07:48:18.691 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:18 compute-1 nova_compute[226101]: 2025-12-06 07:48:18.692 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:48:18 compute-1 nova_compute[226101]: 2025-12-06 07:48:18.692 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:48:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:48:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2916215043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.135 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:48:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e369 e369: 3 total, 3 up, 3 in
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.209 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.209 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.213 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.214 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:48:19 compute-1 ceph-mon[81689]: pgmap v2823: 305 pgs: 305 active+clean; 462 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 5.9 KiB/s wr, 110 op/s
Dec 06 07:48:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2916215043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:19 compute-1 ceph-mon[81689]: osdmap e369: 3 total, 3 up, 3 in
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.384 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.385 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3987MB free_disk=20.851390838623047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.385 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.385 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.600 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 119a621b-6198-42db-9a89-d73da6c2a2da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.600 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 29195b63-c365-4ace-a4f5-9c2dba89c276 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.601 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.601 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.669 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.825 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:19 compute-1 nova_compute[226101]: 2025-12-06 07:48:19.857 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:48:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1058672723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:20 compute-1 nova_compute[226101]: 2025-12-06 07:48:20.185 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:48:20 compute-1 nova_compute[226101]: 2025-12-06 07:48:20.190 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:48:20 compute-1 nova_compute[226101]: 2025-12-06 07:48:20.219 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:48:20 compute-1 nova_compute[226101]: 2025-12-06 07:48:20.221 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:48:20 compute-1 nova_compute[226101]: 2025-12-06 07:48:20.221 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1058672723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:20.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:20.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e370 e370: 3 total, 3 up, 3 in
Dec 06 07:48:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2907350927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1005909562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:21 compute-1 ceph-mon[81689]: osdmap e370: 3 total, 3 up, 3 in
Dec 06 07:48:21 compute-1 ceph-mon[81689]: pgmap v2826: 305 pgs: 305 active+clean; 393 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 66 op/s
Dec 06 07:48:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:21 compute-1 nova_compute[226101]: 2025-12-06 07:48:21.981 226109 DEBUG nova.compute.manager [req-98a6622b-cad8-4670-8a9f-184dd716c3e5 req-3dcd9524-cc5a-4aca-85c5-60edca583040 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Received event network-changed-1a8069ff-0103-4f67-92ab-269451290b42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:21 compute-1 nova_compute[226101]: 2025-12-06 07:48:21.981 226109 DEBUG nova.compute.manager [req-98a6622b-cad8-4670-8a9f-184dd716c3e5 req-3dcd9524-cc5a-4aca-85c5-60edca583040 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Refreshing instance network info cache due to event network-changed-1a8069ff-0103-4f67-92ab-269451290b42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:48:21 compute-1 nova_compute[226101]: 2025-12-06 07:48:21.982 226109 DEBUG oslo_concurrency.lockutils [req-98a6622b-cad8-4670-8a9f-184dd716c3e5 req-3dcd9524-cc5a-4aca-85c5-60edca583040 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:48:21 compute-1 nova_compute[226101]: 2025-12-06 07:48:21.982 226109 DEBUG oslo_concurrency.lockutils [req-98a6622b-cad8-4670-8a9f-184dd716c3e5 req-3dcd9524-cc5a-4aca-85c5-60edca583040 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:48:21 compute-1 nova_compute[226101]: 2025-12-06 07:48:21.982 226109 DEBUG nova.network.neutron [req-98a6622b-cad8-4670-8a9f-184dd716c3e5 req-3dcd9524-cc5a-4aca-85c5-60edca583040 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Refreshing network info cache for port 1a8069ff-0103-4f67-92ab-269451290b42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.102 226109 DEBUG oslo_concurrency.lockutils [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "29195b63-c365-4ace-a4f5-9c2dba89c276" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.103 226109 DEBUG oslo_concurrency.lockutils [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.103 226109 DEBUG oslo_concurrency.lockutils [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.103 226109 DEBUG oslo_concurrency.lockutils [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.103 226109 DEBUG oslo_concurrency.lockutils [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.106 226109 INFO nova.compute.manager [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Terminating instance
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.107 226109 DEBUG nova.compute.manager [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:48:22 compute-1 kernel: tap1a8069ff-01 (unregistering): left promiscuous mode
Dec 06 07:48:22 compute-1 NetworkManager[49031]: <info>  [1765007302.1612] device (tap1a8069ff-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:48:22 compute-1 ovn_controller[130279]: 2025-12-06T07:48:22Z|00622|binding|INFO|Releasing lport 1a8069ff-0103-4f67-92ab-269451290b42 from this chassis (sb_readonly=0)
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.210 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:22 compute-1 ovn_controller[130279]: 2025-12-06T07:48:22Z|00623|binding|INFO|Setting lport 1a8069ff-0103-4f67-92ab-269451290b42 down in Southbound
Dec 06 07:48:22 compute-1 ovn_controller[130279]: 2025-12-06T07:48:22Z|00624|binding|INFO|Removing iface tap1a8069ff-01 ovn-installed in OVS
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.212 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.219 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:7a:98 10.100.0.8'], port_security=['fa:16:3e:33:7a:98 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '29195b63-c365-4ace-a4f5-9c2dba89c276', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9edf259b-6a5e-4e11-938d-d631a412648e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f093eaeb91c042dd8c85f5cd256c4394', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aaa0df08-ced0-442a-9685-6c089d405f5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90bdd78f-ae71-4d01-8170-80b57acff7fd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=1a8069ff-0103-4f67-92ab-269451290b42) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.221 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 1a8069ff-0103-4f67-92ab-269451290b42 in datapath 9edf259b-6a5e-4e11-938d-d631a412648e unbound from our chassis
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.223 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9edf259b-6a5e-4e11-938d-d631a412648e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.224 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8d136b22-5f3f-478f-9602-03bac81ac3d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.224 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e namespace which is not needed anymore
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.225 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/569971451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:22 compute-1 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a0.scope: Deactivated successfully.
Dec 06 07:48:22 compute-1 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a0.scope: Consumed 17.195s CPU time.
Dec 06 07:48:22 compute-1 systemd-machined[190302]: Machine qemu-73-instance-000000a0 terminated.
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.339 226109 INFO nova.virt.libvirt.driver [-] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Instance destroyed successfully.
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.340 226109 DEBUG nova.objects.instance [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lazy-loading 'resources' on Instance uuid 29195b63-c365-4ace-a4f5-9c2dba89c276 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:48:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:22.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:22 compute-1 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[286345]: [NOTICE]   (286349) : haproxy version is 2.8.14-c23fe91
Dec 06 07:48:22 compute-1 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[286345]: [NOTICE]   (286349) : path to executable is /usr/sbin/haproxy
Dec 06 07:48:22 compute-1 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[286345]: [WARNING]  (286349) : Exiting Master process...
Dec 06 07:48:22 compute-1 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[286345]: [ALERT]    (286349) : Current worker (286351) exited with code 143 (Terminated)
Dec 06 07:48:22 compute-1 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[286345]: [WARNING]  (286349) : All workers exited. Exiting... (0)
Dec 06 07:48:22 compute-1 systemd[1]: libpod-687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246.scope: Deactivated successfully.
Dec 06 07:48:22 compute-1 podman[287041]: 2025-12-06 07:48:22.354421874 +0000 UTC m=+0.049908194 container died 687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:48:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:22.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.370 226109 DEBUG nova.virt.libvirt.vif [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:46:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-440288893',display_name='tempest-TestSnapshotPattern-server-440288893',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-440288893',id=160,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxDMv0Vhbgr4L65QJ5+X+b7zbDfxyD9+qYaGNf4b7W3f9yi+P//RoKkMpyvVNIPGPzRh0H8TZRtNdilAq90sFwxv4/Dk5avudO2cObIlP9Igfm6SfNSZd6YTMkk3vYjjg==',key_name='tempest-TestSnapshotPattern-467820612',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:46:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f093eaeb91c042dd8c85f5cd256c4394',ramdisk_id='',reservation_id='r-p0ehmmk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-563672408',owner_user_name='tempest-TestSnapshotPattern-563672408-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:47:12Z,user_data=None,user_id='89d63d29c7534f70817e13d23cada716',uuid=29195b63-c365-4ace-a4f5-9c2dba89c276,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.370 226109 DEBUG nova.network.os_vif_util [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converting VIF {"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.372 226109 DEBUG nova.network.os_vif_util [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=1a8069ff-0103-4f67-92ab-269451290b42,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a8069ff-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.372 226109 DEBUG os_vif [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=1a8069ff-0103-4f67-92ab-269451290b42,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a8069ff-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.374 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.375 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a8069ff-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.376 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.377 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.379 226109 INFO os_vif [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:7a:98,bridge_name='br-int',has_traffic_filtering=True,id=1a8069ff-0103-4f67-92ab-269451290b42,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a8069ff-01')
Dec 06 07:48:22 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246-userdata-shm.mount: Deactivated successfully.
Dec 06 07:48:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-62dc31e5bce584c5b3fe2a9a571ba360c0b04ba9378c8c62add075666f7232de-merged.mount: Deactivated successfully.
Dec 06 07:48:22 compute-1 podman[287041]: 2025-12-06 07:48:22.391294656 +0000 UTC m=+0.086780956 container cleanup 687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:48:22 compute-1 systemd[1]: libpod-conmon-687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246.scope: Deactivated successfully.
Dec 06 07:48:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:48:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4805.4 total, 600.0 interval
                                           Cumulative writes: 49K writes, 194K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 49K writes, 17K syncs, 2.88 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 33.86 MB, 0.06 MB/s
                                           Interval WAL: 10K writes, 3884 syncs, 2.63 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:48:22 compute-1 podman[287092]: 2025-12-06 07:48:22.454561599 +0000 UTC m=+0.039261868 container remove 687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.459 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[eee31320-586d-4261-bb03-2d8c1cef07f7]: (4, ('Sat Dec  6 07:48:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e (687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246)\n687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246\nSat Dec  6 07:48:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e (687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246)\n687cd5754d7f389b1064fdb5c333801c67388a381b929e0eb322963d9d1b4246\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.461 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3a66c321-a8d2-4c61-abfb-e076d6d0dcd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.462 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9edf259b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.464 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:22 compute-1 kernel: tap9edf259b-60: left promiscuous mode
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.478 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.480 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.482 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[df920c6b-2c63-448a-bcf6-45b969af8310]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.503 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[008f5996-7171-41f0-9f5b-0b9fbc6f6fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.504 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[65507aad-5793-467f-9185-ca55b172dff5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.520 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[20ccc41b-df86-4d23-9924-41a978ba7fe7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747994, 'reachable_time': 22152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287109, 'error': None, 'target': 'ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.523 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:48:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:22.523 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1c392a-eba5-40de-ae16-49cb5710a563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:22 compute-1 systemd[1]: run-netns-ovnmeta\x2d9edf259b\x2d6a5e\x2d4e11\x2d938d\x2dd631a412648e.mount: Deactivated successfully.
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.795 226109 INFO nova.virt.libvirt.driver [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Deleting instance files /var/lib/nova/instances/29195b63-c365-4ace-a4f5-9c2dba89c276_del
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.797 226109 INFO nova.virt.libvirt.driver [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Deletion of /var/lib/nova/instances/29195b63-c365-4ace-a4f5-9c2dba89c276_del complete
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.876 226109 INFO nova.compute.manager [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Took 0.77 seconds to destroy the instance on the hypervisor.
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.876 226109 DEBUG oslo.service.loopingcall [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.876 226109 DEBUG nova.compute.manager [-] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:48:22 compute-1 nova_compute[226101]: 2025-12-06 07:48:22.876 226109 DEBUG nova.network.neutron [-] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:48:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e371 e371: 3 total, 3 up, 3 in
Dec 06 07:48:23 compute-1 nova_compute[226101]: 2025-12-06 07:48:23.221 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:23 compute-1 nova_compute[226101]: 2025-12-06 07:48:23.221 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:48:23 compute-1 nova_compute[226101]: 2025-12-06 07:48:23.263 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907
Dec 06 07:48:23 compute-1 nova_compute[226101]: 2025-12-06 07:48:23.264 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:48:23 compute-1 nova_compute[226101]: 2025-12-06 07:48:23.549 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.138 226109 DEBUG oslo_concurrency.lockutils [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.139 226109 DEBUG oslo_concurrency.lockutils [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.139 226109 DEBUG oslo_concurrency.lockutils [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.139 226109 DEBUG oslo_concurrency.lockutils [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.140 226109 DEBUG oslo_concurrency.lockutils [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.141 226109 INFO nova.compute.manager [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Terminating instance
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.142 226109 DEBUG nova.compute.manager [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.170 226109 DEBUG nova.compute.manager [req-ee17434a-374f-4290-bd3b-8603818b00ab req-3c6786c8-6080-46d2-8602-6c1255cda859 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Received event network-vif-unplugged-1a8069ff-0103-4f67-92ab-269451290b42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.170 226109 DEBUG oslo_concurrency.lockutils [req-ee17434a-374f-4290-bd3b-8603818b00ab req-3c6786c8-6080-46d2-8602-6c1255cda859 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.171 226109 DEBUG oslo_concurrency.lockutils [req-ee17434a-374f-4290-bd3b-8603818b00ab req-3c6786c8-6080-46d2-8602-6c1255cda859 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.171 226109 DEBUG oslo_concurrency.lockutils [req-ee17434a-374f-4290-bd3b-8603818b00ab req-3c6786c8-6080-46d2-8602-6c1255cda859 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.171 226109 DEBUG nova.compute.manager [req-ee17434a-374f-4290-bd3b-8603818b00ab req-3c6786c8-6080-46d2-8602-6c1255cda859 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] No waiting events found dispatching network-vif-unplugged-1a8069ff-0103-4f67-92ab-269451290b42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.172 226109 DEBUG nova.compute.manager [req-ee17434a-374f-4290-bd3b-8603818b00ab req-3c6786c8-6080-46d2-8602-6c1255cda859 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Received event network-vif-unplugged-1a8069ff-0103-4f67-92ab-269451290b42 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:48:24 compute-1 kernel: tap70e780b6-88 (unregistering): left promiscuous mode
Dec 06 07:48:24 compute-1 NetworkManager[49031]: <info>  [1765007304.1961] device (tap70e780b6-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.197 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:24 compute-1 ovn_controller[130279]: 2025-12-06T07:48:24Z|00625|binding|INFO|Releasing lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d from this chassis (sb_readonly=0)
Dec 06 07:48:24 compute-1 ovn_controller[130279]: 2025-12-06T07:48:24Z|00626|binding|INFO|Setting lport 70e780b6-88d2-47a1-99b8-a525dcb88b5d down in Southbound
Dec 06 07:48:24 compute-1 ovn_controller[130279]: 2025-12-06T07:48:24Z|00627|binding|INFO|Removing iface tap70e780b6-88 ovn-installed in OVS
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.207 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.214 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:97:b4 10.100.0.9'], port_security=['fa:16:3e:5d:97:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '119a621b-6198-42db-9a89-d73da6c2a2da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=70e780b6-88d2-47a1-99b8-a525dcb88b5d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.215 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 70e780b6-88d2-47a1-99b8-a525dcb88b5d in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.216 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.216 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a235bc7e-f57c-4868-9a1e-c3d28fa7e840]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.217 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace which is not needed anymore
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:24 compute-1 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000096.scope: Deactivated successfully.
Dec 06 07:48:24 compute-1 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000096.scope: Consumed 24.458s CPU time.
Dec 06 07:48:24 compute-1 systemd-machined[190302]: Machine qemu-70-instance-00000096 terminated.
Dec 06 07:48:24 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283946]: [NOTICE]   (283953) : haproxy version is 2.8.14-c23fe91
Dec 06 07:48:24 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283946]: [NOTICE]   (283953) : path to executable is /usr/sbin/haproxy
Dec 06 07:48:24 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283946]: [WARNING]  (283953) : Exiting Master process...
Dec 06 07:48:24 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283946]: [ALERT]    (283953) : Current worker (283957) exited with code 143 (Terminated)
Dec 06 07:48:24 compute-1 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[283946]: [WARNING]  (283953) : All workers exited. Exiting... (0)
Dec 06 07:48:24 compute-1 systemd[1]: libpod-2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb.scope: Deactivated successfully.
Dec 06 07:48:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:24.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:24 compute-1 podman[287134]: 2025-12-06 07:48:24.345400521 +0000 UTC m=+0.043849982 container died 2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 06 07:48:24 compute-1 NetworkManager[49031]: <info>  [1765007304.3590] manager: (tap70e780b6-88): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.361 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:24.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.365 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.373 226109 INFO nova.virt.libvirt.driver [-] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Instance destroyed successfully.
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.374 226109 DEBUG nova.objects.instance [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'resources' on Instance uuid 119a621b-6198-42db-9a89-d73da6c2a2da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:48:24 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb-userdata-shm.mount: Deactivated successfully.
Dec 06 07:48:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-c51bd4d55e9b4fc4a8431a2afbc55c9d85c8dc600984ee91d746b219e71a9a09-merged.mount: Deactivated successfully.
Dec 06 07:48:24 compute-1 podman[287134]: 2025-12-06 07:48:24.384319718 +0000 UTC m=+0.082769149 container cleanup 2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:48:24 compute-1 systemd[1]: libpod-conmon-2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb.scope: Deactivated successfully.
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.401 226109 DEBUG nova.virt.libvirt.vif [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:43:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-311361617',display_name='tempest-ServerStableDeviceRescueTest-server-311361617',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-311361617',id=150,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:43:44Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-0xqzu91y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:43:52Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=119a621b-6198-42db-9a89-d73da6c2a2da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.402 226109 DEBUG nova.network.os_vif_util [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "address": "fa:16:3e:5d:97:b4", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e780b6-88", "ovs_interfaceid": "70e780b6-88d2-47a1-99b8-a525dcb88b5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.403 226109 DEBUG nova.network.os_vif_util [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:97:b4,bridge_name='br-int',has_traffic_filtering=True,id=70e780b6-88d2-47a1-99b8-a525dcb88b5d,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e780b6-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.403 226109 DEBUG os_vif [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:97:b4,bridge_name='br-int',has_traffic_filtering=True,id=70e780b6-88d2-47a1-99b8-a525dcb88b5d,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e780b6-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.406 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.407 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70e780b6-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.408 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.409 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.411 226109 INFO os_vif [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:97:b4,bridge_name='br-int',has_traffic_filtering=True,id=70e780b6-88d2-47a1-99b8-a525dcb88b5d,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e780b6-88')
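
The four nova_compute records above are the standard OVS teardown path: the legacy VIF dict is converted to an os-vif VIFOpenVSwitch object by nova.network.os_vif_util, and the object is handed to os_vif.unplug(), which delegates to the 'ovs' plugin. A minimal standalone sketch of that public API, with field values copied from the log; nova's real call site builds a fully populated object, so treat this as an illustration rather than nova's exact code:

    import os_vif
    from os_vif.objects import instance_info as osv_instance
    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    os_vif.initialize()  # loads the os-vif plugins, including 'ovs'

    vif = osv_vif.VIFOpenVSwitch(
        id='70e780b6-88d2-47a1-99b8-a525dcb88b5d',  # Neutron port UUID from the log
        address='fa:16:3e:5d:97:b4',
        vif_name='tap70e780b6-88',
        bridge_name='br-int',
        network=osv_network.Network(id='6d1a17d6-5e44-40b7-832a-81cb86c02e71'),
    )
    instance = osv_instance.InstanceInfo(
        uuid='119a621b-6198-42db-9a89-d73da6c2a2da',
        name='tempest-ServerStableDeviceRescueTest-server-311361617',
    )
    os_vif.unplug(vif, instance)  # the call logged at os_vif/__init__.py:109
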
Dec 06 07:48:24 compute-1 podman[287175]: 2025-12-06 07:48:24.448829895 +0000 UTC m=+0.040800490 container remove 2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.453 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[84619cd7-920c-4d85-be34-3ec37c3580e1]: (4, ('Sat Dec  6 07:48:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb)\n2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb\nSat Dec  6 07:48:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb)\n2abce4266ef36dc8099db5e36cea4ba9f2febb5a7751671d9de9142094e901eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.455 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a9febd-f453-400f-b03c-a5aa556449a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.455 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
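
Both DelPortCommand records above (nova_compute removing the instance tap, then the metadata agent removing its own tap) are ovsdbapp transactions against the local OVSDB. The same operation can be reproduced with ovsdbapp's Open_vSwitch schema API; a sketch assuming an OVSDB server reachable on tcp:127.0.0.1:6640, which is an assumption here since the log does not show the socket in use:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of DelPortCommand(port=..., bridge='br-int', if_exists=True);
    # roughly `ovs-vsctl --if-exists del-port br-int tap70e780b6-88`.
    api.del_port('tap70e780b6-88', bridge='br-int', if_exists=True).execute(
        check_error=True)

The kernel's "left promiscuous mode" line that follows is the visible side effect of the tap device being detached.
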
Dec 06 07:48:24 compute-1 kernel: tap6d1a17d6-50: left promiscuous mode
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.458 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.461 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[31e34c80-f004-4fb4-92b7-78463dfc4a1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:24 compute-1 ceph-mon[81689]: osdmap e371: 3 total, 3 up, 3 in
Dec 06 07:48:24 compute-1 ceph-mon[81689]: pgmap v2828: 305 pgs: 305 active+clean; 356 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 4.8 KiB/s wr, 100 op/s
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.539 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.542 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[df58acf2-b43a-46d4-bea8-8dbbef94dbc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.544 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[45c8b301-9ca0-42e8-88ba-0b7c554ec516]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.565 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[daf08f05-6a02-4b40-a5b9-9aa78295c398]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731046, 'reachable_time': 42377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287208, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.567 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:48:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:24.567 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[700ab727-22e8-4905-8ff5-6b35b9a1a06a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:24 compute-1 systemd[1]: run-netns-ovnmeta\x2d6d1a17d6\x2d5e44\x2d40b7\x2d832a\x2d81cb86c02e71.mount: Deactivated successfully.
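
The remove_netns call above is neutron's privileged wrapper around pyroute2, and the systemd record right after it is the /run/netns bind mount for that namespace being torn down. A rough standalone equivalent (pyroute2's netns helpers are the real API; this needs root and is a sketch, not neutron's code):

    from pyroute2 import netns

    ns = 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71'
    if ns in netns.listnetns():  # names under /run/netns
        netns.remove(ns)         # same effect as `ip netns delete <ns>`
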
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.571 226109 DEBUG nova.network.neutron [req-98a6622b-cad8-4670-8a9f-184dd716c3e5 req-3dcd9524-cc5a-4aca-85c5-60edca583040 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Updated VIF entry in instance network info cache for port 1a8069ff-0103-4f67-92ab-269451290b42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.572 226109 DEBUG nova.network.neutron [req-98a6622b-cad8-4670-8a9f-184dd716c3e5 req-3dcd9524-cc5a-4aca-85c5-60edca583040 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Updating instance_info_cache with network_info: [{"id": "1a8069ff-0103-4f67-92ab-269451290b42", "address": "fa:16:3e:33:7a:98", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a8069ff-01", "ovs_interfaceid": "1a8069ff-0103-4f67-92ab-269451290b42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.644 226109 DEBUG nova.network.neutron [-] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.647 226109 DEBUG oslo_concurrency.lockutils [req-98a6622b-cad8-4670-8a9f-184dd716c3e5 req-3dcd9524-cc5a-4aca-85c5-60edca583040 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-29195b63-c365-4ace-a4f5-9c2dba89c276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.686 226109 INFO nova.compute.manager [-] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Took 1.81 seconds to deallocate network for instance.
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.724 226109 DEBUG nova.compute.manager [req-09d47016-3518-437d-a100-898967eb109a req-187cdae6-f665-486f-91ec-91d12cde10a7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Received event network-vif-deleted-1a8069ff-0103-4f67-92ab-269451290b42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.743 226109 DEBUG oslo_concurrency.lockutils [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.744 226109 DEBUG oslo_concurrency.lockutils [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:24 compute-1 nova_compute[226101]: 2025-12-06 07:48:24.830 226109 DEBUG oslo_concurrency.processutils [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:48:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:48:25 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/748425604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:25 compute-1 nova_compute[226101]: 2025-12-06 07:48:25.256 226109 DEBUG oslo_concurrency.processutils [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
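
The resource tracker shells out to `ceph df` (0.425 s here) to size the RBD-backed DISK_GB inventory before updating usage. The probe is easy to replay by hand with the exact command line from the log; the JSON field names below match recent Ceph releases but are worth verifying against your version:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])
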
Dec 06 07:48:25 compute-1 nova_compute[226101]: 2025-12-06 07:48:25.264 226109 DEBUG nova.compute.provider_tree [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:48:25 compute-1 nova_compute[226101]: 2025-12-06 07:48:25.298 226109 DEBUG nova.scheduler.client.report [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
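
The inventory dict in the record above is what this node reports to placement. Per resource class, placement's capacity check works out to (total - reserved) * allocation_ratio, so the worked numbers for this host are:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
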
Dec 06 07:48:25 compute-1 nova_compute[226101]: 2025-12-06 07:48:25.334 226109 DEBUG oslo_concurrency.lockutils [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:25 compute-1 nova_compute[226101]: 2025-12-06 07:48:25.388 226109 INFO nova.scheduler.client.report [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Deleted allocations for instance 29195b63-c365-4ace-a4f5-9c2dba89c276
Dec 06 07:48:25 compute-1 nova_compute[226101]: 2025-12-06 07:48:25.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:25 compute-1 nova_compute[226101]: 2025-12-06 07:48:25.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:25 compute-1 nova_compute[226101]: 2025-12-06 07:48:25.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:48:25 compute-1 nova_compute[226101]: 2025-12-06 07:48:25.622 226109 DEBUG oslo_concurrency.lockutils [None req-4f5e49ce-6283-410b-aa73-8ccf837a71c1 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
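
Every "Acquiring lock / acquired / released" triple in these records is emitted by oslo.concurrency's lock decorator around nova's critical sections; the waited/held timings come from the wrapper, not from the decorated code. Reduced to its core (the function body is a placeholder; the decorator is the real oslo.concurrency API):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        # resource-tracker bookkeeping runs under the lock; the
        # "waited 0.000s" / "held 0.590s" figures above are logged
        # by the synchronized() wrapper around calls like this one
        ...
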
Dec 06 07:48:25 compute-1 ceph-mon[81689]: pgmap v2829: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 232 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 5.7 KiB/s wr, 155 op/s
Dec 06 07:48:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/748425604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:25 compute-1 nova_compute[226101]: 2025-12-06 07:48:25.990 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:26 compute-1 systemd[1]: Starting dnf makecache...
Dec 06 07:48:26 compute-1 nova_compute[226101]: 2025-12-06 07:48:26.328 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:26.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:26.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:26 compute-1 dnf[287233]: Metadata cache refreshed recently.
Dec 06 07:48:26 compute-1 nova_compute[226101]: 2025-12-06 07:48:26.399 226109 INFO nova.virt.libvirt.driver [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Deleting instance files /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da_del
Dec 06 07:48:26 compute-1 nova_compute[226101]: 2025-12-06 07:48:26.401 226109 INFO nova.virt.libvirt.driver [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Deletion of /var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da_del complete
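
The two INFO lines above are nova's disk cleanup: the instance directory is renamed with a _del suffix first and only then removed, so an interruption mid-delete leaves an obviously dead directory rather than a half-gutted live one. In outline (paths from the log; this simplifies the libvirt driver's retry logic):

    import os
    import shutil

    base = '/var/lib/nova/instances/119a621b-6198-42db-9a89-d73da6c2a2da'
    target = base + '_del'
    if os.path.exists(base):
        os.rename(base, target)  # mark-for-delete via atomic rename
    shutil.rmtree(target, ignore_errors=True)
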
Dec 06 07:48:26 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 06 07:48:26 compute-1 systemd[1]: Finished dnf makecache.
Dec 06 07:48:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.200 226109 DEBUG nova.compute.manager [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Received event network-vif-plugged-1a8069ff-0103-4f67-92ab-269451290b42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.200 226109 DEBUG oslo_concurrency.lockutils [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.200 226109 DEBUG oslo_concurrency.lockutils [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.200 226109 DEBUG oslo_concurrency.lockutils [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29195b63-c365-4ace-a4f5-9c2dba89c276-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.201 226109 DEBUG nova.compute.manager [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] No waiting events found dispatching network-vif-plugged-1a8069ff-0103-4f67-92ab-269451290b42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.201 226109 WARNING nova.compute.manager [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Received unexpected event network-vif-plugged-1a8069ff-0103-4f67-92ab-269451290b42 for instance with vm_state deleted and task_state None.
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.201 226109 DEBUG nova.compute.manager [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-unplugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.201 226109 DEBUG oslo_concurrency.lockutils [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.201 226109 DEBUG oslo_concurrency.lockutils [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.201 226109 DEBUG oslo_concurrency.lockutils [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.201 226109 DEBUG nova.compute.manager [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] No waiting events found dispatching network-vif-unplugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.202 226109 DEBUG nova.compute.manager [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-unplugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.202 226109 DEBUG nova.compute.manager [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.202 226109 DEBUG oslo_concurrency.lockutils [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.202 226109 DEBUG oslo_concurrency.lockutils [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.202 226109 DEBUG oslo_concurrency.lockutils [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.202 226109 DEBUG nova.compute.manager [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] No waiting events found dispatching network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.202 226109 WARNING nova.compute.manager [req-999d8efb-3558-4f85-9204-b43595b4c0b0 req-d334347b-bae9-4c66-96a5-8b3e4b3d0a46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received unexpected event network-vif-plugged-70e780b6-88d2-47a1-99b8-a525dcb88b5d for instance with vm_state active and task_state deleting.
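
The network-vif-plugged/unplugged records in this block are pushed into nova, not polled: neutron posts them to nova's os-server-external-events API and the compute manager pops a matching waiter if one was registered, hence the "No waiting events found" lines and the harmless "Received unexpected event" warnings while both instances are mid-delete. A sketch of the producing side using python-novaclient's server_external_events manager; the keystone endpoint and credentials are placeholders:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(auth_url='https://keystone.example.com:5000/v3',  # placeholder
                       username='neutron', password='REDACTED',          # placeholders
                       project_name='service',
                       user_domain_id='default', project_domain_id='default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    nova.server_external_events.create([{
        'server_uuid': '119a621b-6198-42db-9a89-d73da6c2a2da',
        'name': 'network-vif-unplugged',
        'tag': '70e780b6-88d2-47a1-99b8-a525dcb88b5d',  # port UUID from the log
    }])
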
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.304 226109 INFO nova.compute.manager [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Took 3.16 seconds to destroy the instance on the hypervisor.
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.305 226109 DEBUG oslo.service.loopingcall [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.305 226109 DEBUG nova.compute.manager [-] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:48:27 compute-1 nova_compute[226101]: 2025-12-06 07:48:27.306 226109 DEBUG nova.network.neutron [-] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:48:27 compute-1 ceph-mon[81689]: pgmap v2830: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 164 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 101 KiB/s rd, 6.1 KiB/s wr, 149 op/s
Dec 06 07:48:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:28.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:28.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:28 compute-1 nova_compute[226101]: 2025-12-06 07:48:28.552 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:29 compute-1 ceph-mon[81689]: pgmap v2831: 305 pgs: 305 active+clean; 122 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 6.6 KiB/s wr, 154 op/s
Dec 06 07:48:29 compute-1 nova_compute[226101]: 2025-12-06 07:48:29.409 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:29 compute-1 nova_compute[226101]: 2025-12-06 07:48:29.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:30 compute-1 nova_compute[226101]: 2025-12-06 07:48:30.183 226109 DEBUG nova.network.neutron [-] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:48:30 compute-1 nova_compute[226101]: 2025-12-06 07:48:30.208 226109 INFO nova.compute.manager [-] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Took 2.90 seconds to deallocate network for instance.
Dec 06 07:48:30 compute-1 nova_compute[226101]: 2025-12-06 07:48:30.311 226109 DEBUG oslo_concurrency.lockutils [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:30 compute-1 nova_compute[226101]: 2025-12-06 07:48:30.312 226109 DEBUG oslo_concurrency.lockutils [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:30 compute-1 nova_compute[226101]: 2025-12-06 07:48:30.342 226109 DEBUG nova.compute.manager [req-e1b48c9b-398a-4727-baab-54fc31bbf965 req-d3fa1462-3647-4845-9d97-5ada281d834a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Received event network-vif-deleted-70e780b6-88d2-47a1-99b8-a525dcb88b5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:30.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:30.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:30 compute-1 nova_compute[226101]: 2025-12-06 07:48:30.413 226109 DEBUG oslo_concurrency.processutils [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:48:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:48:30 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2280994863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:30 compute-1 nova_compute[226101]: 2025-12-06 07:48:30.890 226109 DEBUG oslo_concurrency.processutils [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:48:30 compute-1 nova_compute[226101]: 2025-12-06 07:48:30.897 226109 DEBUG nova.compute.provider_tree [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:48:30 compute-1 nova_compute[226101]: 2025-12-06 07:48:30.936 226109 DEBUG nova.scheduler.client.report [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:48:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 e372: 3 total, 3 up, 3 in
Dec 06 07:48:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2280994863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:31 compute-1 nova_compute[226101]: 2025-12-06 07:48:31.030 226109 DEBUG oslo_concurrency.lockutils [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:31 compute-1 nova_compute[226101]: 2025-12-06 07:48:31.128 226109 INFO nova.scheduler.client.report [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Deleted allocations for instance 119a621b-6198-42db-9a89-d73da6c2a2da
Dec 06 07:48:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:31 compute-1 nova_compute[226101]: 2025-12-06 07:48:31.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:31 compute-1 nova_compute[226101]: 2025-12-06 07:48:31.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:31 compute-1 nova_compute[226101]: 2025-12-06 07:48:31.658 226109 DEBUG oslo_concurrency.lockutils [None req-d2dea038-2395-4a27-b876-ffe4261acd6a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "119a621b-6198-42db-9a89-d73da6c2a2da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:32 compute-1 ceph-mon[81689]: osdmap e372: 3 total, 3 up, 3 in
Dec 06 07:48:32 compute-1 ceph-mon[81689]: pgmap v2833: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 5.9 KiB/s wr, 137 op/s
Dec 06 07:48:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2733307890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:32.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:32.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2858422135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:33 compute-1 nova_compute[226101]: 2025-12-06 07:48:33.588 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:33 compute-1 nova_compute[226101]: 2025-12-06 07:48:33.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:34.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:34.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:34 compute-1 nova_compute[226101]: 2025-12-06 07:48:34.412 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:34 compute-1 ceph-mon[81689]: pgmap v2834: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 4.8 KiB/s wr, 112 op/s
Dec 06 07:48:35 compute-1 sshd-session[287257]: Received disconnect from 14.103.118.136 port 46630:11: Bye Bye [preauth]
Dec 06 07:48:35 compute-1 sshd-session[287257]: Disconnected from authenticating user root 14.103.118.136 port 46630 [preauth]
Dec 06 07:48:35 compute-1 ceph-mon[81689]: pgmap v2835: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 2.6 KiB/s wr, 53 op/s
Dec 06 07:48:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:36.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:36.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:36 compute-1 nova_compute[226101]: 2025-12-06 07:48:36.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3510655874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:37 compute-1 nova_compute[226101]: 2025-12-06 07:48:37.338 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007302.3368626, 29195b63-c365-4ace-a4f5-9c2dba89c276 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:48:37 compute-1 nova_compute[226101]: 2025-12-06 07:48:37.339 226109 INFO nova.compute.manager [-] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] VM Stopped (Lifecycle Event)
Dec 06 07:48:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:38.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:38.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:38 compute-1 nova_compute[226101]: 2025-12-06 07:48:38.634 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:38 compute-1 sshd-session[287259]: banner exchange: Connection from 64.62.197.62 port 23996: invalid format
Dec 06 07:48:39 compute-1 nova_compute[226101]: 2025-12-06 07:48:39.371 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007304.3700106, 119a621b-6198-42db-9a89-d73da6c2a2da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:48:39 compute-1 nova_compute[226101]: 2025-12-06 07:48:39.372 226109 INFO nova.compute.manager [-] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] VM Stopped (Lifecycle Event)
Dec 06 07:48:39 compute-1 nova_compute[226101]: 2025-12-06 07:48:39.414 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:39 compute-1 nova_compute[226101]: 2025-12-06 07:48:39.788 226109 DEBUG nova.compute.manager [None req-473b5356-abd3-479b-9de7-c6dbee471f8f - - - - - -] [instance: 119a621b-6198-42db-9a89-d73da6c2a2da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:48:39 compute-1 nova_compute[226101]: 2025-12-06 07:48:39.789 226109 DEBUG nova.compute.manager [None req-b45f05d1-a476-4a74-a1c7-7911375bc14e - - - - - -] [instance: 29195b63-c365-4ace-a4f5-9c2dba89c276] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:48:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:40.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:40.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:41 compute-1 ceph-mon[81689]: pgmap v2836: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 07:48:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:42.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:42.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:42 compute-1 ceph-mon[81689]: pgmap v2837: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 614 B/s wr, 14 op/s
Dec 06 07:48:42 compute-1 ceph-mon[81689]: pgmap v2838: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:48:42 compute-1 nova_compute[226101]: 2025-12-06 07:48:42.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:42 compute-1 nova_compute[226101]: 2025-12-06 07:48:42.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:48:43 compute-1 podman[287261]: 2025-12-06 07:48:43.086313321 +0000 UTC m=+0.063176021 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:48:43 compute-1 podman[287260]: 2025-12-06 07:48:43.098704595 +0000 UTC m=+0.072414290 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:48:43 compute-1 podman[287262]: 2025-12-06 07:48:43.111129109 +0000 UTC m=+0.089006516 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:48:43 compute-1 nova_compute[226101]: 2025-12-06 07:48:43.637 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:44 compute-1 ceph-mon[81689]: pgmap v2839: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:48:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:44.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:44.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:44 compute-1 nova_compute[226101]: 2025-12-06 07:48:44.416 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:44 compute-1 nova_compute[226101]: 2025-12-06 07:48:44.977 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:48:45 compute-1 ceph-mon[81689]: pgmap v2840: 305 pgs: 305 active+clean; 155 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Dec 06 07:48:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:46.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:46.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:47 compute-1 ceph-mon[81689]: pgmap v2841: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:47 compute-1 nova_compute[226101]: 2025-12-06 07:48:47.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:48.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:48.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:48 compute-1 nova_compute[226101]: 2025-12-06 07:48:48.666 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:49 compute-1 ceph-mon[81689]: pgmap v2842: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:49.285 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:48:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:49.286 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:48:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:48:49.286 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:49 compute-1 nova_compute[226101]: 2025-12-06 07:48:49.287 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:49 compute-1 nova_compute[226101]: 2025-12-06 07:48:49.418 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:50.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:50.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:50 compute-1 sudo[287322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:50 compute-1 sudo[287322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:50 compute-1 sudo[287322]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:50 compute-1 sudo[287347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:48:50 compute-1 sudo[287347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:50 compute-1 sudo[287347]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:51 compute-1 sudo[287372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:51 compute-1 sudo[287372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:51 compute-1 sudo[287372]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:51 compute-1 sudo[287397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:48:51 compute-1 sudo[287397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:51 compute-1 ceph-mon[81689]: pgmap v2843: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:51 compute-1 sudo[287397]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:52.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:52.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:48:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:48:53 compute-1 ceph-mon[81689]: pgmap v2844: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:48:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:48:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:48:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:48:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:48:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:48:53 compute-1 nova_compute[226101]: 2025-12-06 07:48:53.667 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:54.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:54.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:54 compute-1 nova_compute[226101]: 2025-12-06 07:48:54.421 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/994656536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:48:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2368498693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:48:55 compute-1 ceph-mon[81689]: pgmap v2845: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:56.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:56.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:58.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:48:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:58.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:58 compute-1 nova_compute[226101]: 2025-12-06 07:48:58.668 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:58 compute-1 ceph-mon[81689]: pgmap v2846: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 564 KiB/s wr, 11 op/s
Dec 06 07:48:59 compute-1 nova_compute[226101]: 2025-12-06 07:48:59.466 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:00 compute-1 ceph-mon[81689]: pgmap v2847: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Dec 06 07:49:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:00.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:00.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:01 compute-1 ceph-mon[81689]: pgmap v2848: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 12 KiB/s wr, 22 op/s
Dec 06 07:49:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:49:01.668 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:49:01.668 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:49:01.668 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:49:01 compute-1 sudo[287454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:49:02 compute-1 sudo[287454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:49:02 compute-1 sudo[287454]: pam_unix(sudo:session): session closed for user root
Dec 06 07:49:02 compute-1 sudo[287479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:49:02 compute-1 sudo[287479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:49:02 compute-1 sudo[287479]: pam_unix(sudo:session): session closed for user root
Dec 06 07:49:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:02.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:02.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:49:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:49:03 compute-1 nova_compute[226101]: 2025-12-06 07:49:03.669 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:04.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:04.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:04 compute-1 nova_compute[226101]: 2025-12-06 07:49:04.468 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:04 compute-1 ceph-mon[81689]: pgmap v2849: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 44 op/s
Dec 06 07:49:04 compute-1 nova_compute[226101]: 2025-12-06 07:49:04.619 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:04 compute-1 nova_compute[226101]: 2025-12-06 07:49:04.620 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:49:06 compute-1 ceph-mon[81689]: pgmap v2850: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 07:49:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:06.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:06.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:07 compute-1 ceph-mon[81689]: pgmap v2851: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 07:49:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:08.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:08.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:08 compute-1 nova_compute[226101]: 2025-12-06 07:49:08.794 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:09 compute-1 nova_compute[226101]: 2025-12-06 07:49:09.469 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:09 compute-1 ceph-mon[81689]: pgmap v2852: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 07:49:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:10.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:10.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:11 compute-1 ceph-mon[81689]: pgmap v2853: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Dec 06 07:49:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:12.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:12.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:49:13 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1194372675' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:49:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:49:13 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1194372675' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:49:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1194372675' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:49:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1194372675' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:49:13 compute-1 nova_compute[226101]: 2025-12-06 07:49:13.805 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:14 compute-1 podman[287504]: 2025-12-06 07:49:14.074231396 +0000 UTC m=+0.055948257 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:49:14 compute-1 podman[287505]: 2025-12-06 07:49:14.094230094 +0000 UTC m=+0.075933925 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 07:49:14 compute-1 podman[287506]: 2025-12-06 07:49:14.095995861 +0000 UTC m=+0.072755709 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 06 07:49:14 compute-1 ceph-mon[81689]: pgmap v2854: 305 pgs: 305 active+clean; 172 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 587 KiB/s wr, 60 op/s
Dec 06 07:49:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:14.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:14.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:14 compute-1 nova_compute[226101]: 2025-12-06 07:49:14.470 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:15 compute-1 ceph-mon[81689]: pgmap v2855: 305 pgs: 305 active+clean; 195 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 07:49:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:16 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:49:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:16.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:16.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:17 compute-1 ceph-mon[81689]: pgmap v2856: 305 pgs: 305 active+clean; 198 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:49:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:18.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:18.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:18 compute-1 nova_compute[226101]: 2025-12-06 07:49:18.841 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:19 compute-1 nova_compute[226101]: 2025-12-06 07:49:19.472 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:20.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:20.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:20 compute-1 nova_compute[226101]: 2025-12-06 07:49:20.653 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:20 compute-1 nova_compute[226101]: 2025-12-06 07:49:20.742 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:20 compute-1 nova_compute[226101]: 2025-12-06 07:49:20.743 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:20 compute-1 nova_compute[226101]: 2025-12-06 07:49:20.743 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:49:20 compute-1 nova_compute[226101]: 2025-12-06 07:49:20.743 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:49:20 compute-1 nova_compute[226101]: 2025-12-06 07:49:20.743 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:49:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:49:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2417538724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:21 compute-1 nova_compute[226101]: 2025-12-06 07:49:21.199 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:49:21 compute-1 nova_compute[226101]: 2025-12-06 07:49:21.386 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:49:21 compute-1 nova_compute[226101]: 2025-12-06 07:49:21.388 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4414MB free_disk=20.94293212890625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:49:21 compute-1 nova_compute[226101]: 2025-12-06 07:49:21.388 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:21 compute-1 nova_compute[226101]: 2025-12-06 07:49:21.389 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:22 compute-1 ceph-mon[81689]: pgmap v2857: 305 pgs: 305 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:49:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:22.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:22.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:22 compute-1 ovn_controller[130279]: 2025-12-06T07:49:22Z|00628|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 06 07:49:23 compute-1 ceph-mon[81689]: pgmap v2858: 305 pgs: 305 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:49:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2417538724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/366501616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:23 compute-1 nova_compute[226101]: 2025-12-06 07:49:23.627 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:49:23 compute-1 nova_compute[226101]: 2025-12-06 07:49:23.627 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:49:23 compute-1 nova_compute[226101]: 2025-12-06 07:49:23.775 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:49:23 compute-1 nova_compute[226101]: 2025-12-06 07:49:23.885 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:23 compute-1 nova_compute[226101]: 2025-12-06 07:49:23.890 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:49:23 compute-1 nova_compute[226101]: 2025-12-06 07:49:23.890 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:49:23 compute-1 nova_compute[226101]: 2025-12-06 07:49:23.923 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:49:23 compute-1 nova_compute[226101]: 2025-12-06 07:49:23.953 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:49:23 compute-1 nova_compute[226101]: 2025-12-06 07:49:23.993 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:49:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:49:24 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/729183953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:24 compute-1 nova_compute[226101]: 2025-12-06 07:49:24.430 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:49:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:24.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:24 compute-1 nova_compute[226101]: 2025-12-06 07:49:24.437 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:49:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:24.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:24 compute-1 nova_compute[226101]: 2025-12-06 07:49:24.474 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:24 compute-1 nova_compute[226101]: 2025-12-06 07:49:24.515 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:49:24 compute-1 ceph-mon[81689]: pgmap v2859: 305 pgs: 305 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:49:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2798437822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:24 compute-1 nova_compute[226101]: 2025-12-06 07:49:24.667 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:49:24 compute-1 nova_compute[226101]: 2025-12-06 07:49:24.667 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:49:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/729183953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:25 compute-1 ceph-mon[81689]: pgmap v2860: 305 pgs: 305 active+clean; 157 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Dec 06 07:49:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:26.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:27 compute-1 ceph-mon[81689]: pgmap v2861: 305 pgs: 305 active+clean; 138 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 114 KiB/s rd, 63 KiB/s wr, 36 op/s
Dec 06 07:49:27 compute-1 nova_compute[226101]: 2025-12-06 07:49:27.604 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:27 compute-1 nova_compute[226101]: 2025-12-06 07:49:27.605 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:49:27 compute-1 nova_compute[226101]: 2025-12-06 07:49:27.605 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:49:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:28.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:28.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:28 compute-1 nova_compute[226101]: 2025-12-06 07:49:28.885 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:29 compute-1 nova_compute[226101]: 2025-12-06 07:49:29.476 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:29 compute-1 nova_compute[226101]: 2025-12-06 07:49:29.777 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:49:29 compute-1 nova_compute[226101]: 2025-12-06 07:49:29.777 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:29 compute-1 nova_compute[226101]: 2025-12-06 07:49:29.777 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:29 compute-1 nova_compute[226101]: 2025-12-06 07:49:29.777 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:29 compute-1 nova_compute[226101]: 2025-12-06 07:49:29.777 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:49:30 compute-1 ceph-mon[81689]: pgmap v2862: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 16 KiB/s wr, 28 op/s
Dec 06 07:49:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:49:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.5 total, 600.0 interval
                                           Cumulative writes: 11K writes, 60K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1681 writes, 8365 keys, 1681 commit groups, 1.0 writes per commit group, ingest: 16.50 MB, 0.03 MB/s
                                           Interval WAL: 1681 writes, 1681 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     14.5      5.16              0.22        36    0.143       0      0       0.0       0.0
                                             L6      1/0   11.80 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7     37.8     32.1     10.92              0.95        35    0.312    243K    19K       0.0       0.0
                                            Sum      1/0   11.80 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7     25.7     26.4     16.08              1.16        71    0.226    243K    19K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.2     24.1     24.4      3.25              0.21        12    0.270     56K   3113       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     37.8     32.1     10.92              0.95        35    0.312    243K    19K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     14.6      5.11              0.22        35    0.146       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.073, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.41 GB write, 0.09 MB/s write, 0.40 GB read, 0.09 MB/s read, 16.1 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 3.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 46.26 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000303 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2641,44.48 MB,14.6302%) FilterBlock(71,683.55 KB,0.219581%) IndexBlock(71,1.12 MB,0.367712%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
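[annotation] A quick consistency check on the compaction stats above: cumulative flush wrote 0.073 GB while compactions wrote 0.41 GB, which is roughly where the Sum row's W-Amp of 5.7 comes from if write amplification is taken as compaction writes over flushed bytes (a back-of-the-envelope reading, not RocksDB's exact accounting):

    # Figures copied from the RocksDB dump above.
    flush_gb = 0.073            # "Flush(GB): cumulative 0.073"
    compaction_write_gb = 0.41  # "Cumulative compaction: 0.41 GB write"

    # Assumed definition: write amplification ~ compaction writes / flushed bytes.
    w_amp = compaction_write_gb / flush_gb
    print(f"estimated W-Amp ~ {w_amp:.1f}")  # ~5.6, close to the 5.7 reported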
Dec 06 07:49:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:49:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:30.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:49:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:30.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:49:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:32.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:49:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:32.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:32 compute-1 nova_compute[226101]: 2025-12-06 07:49:32.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:32 compute-1 nova_compute[226101]: 2025-12-06 07:49:32.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:32 compute-1 ceph-mon[81689]: pgmap v2863: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 07:49:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1050604975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:33 compute-1 ceph-mon[81689]: pgmap v2864: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 06 07:49:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3221781244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:33 compute-1 nova_compute[226101]: 2025-12-06 07:49:33.920 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:49:34.159 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:49:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:49:34.160 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:49:34 compute-1 nova_compute[226101]: 2025-12-06 07:49:34.159 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:34.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:34.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:34 compute-1 nova_compute[226101]: 2025-12-06 07:49:34.477 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4007388396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:35 compute-1 nova_compute[226101]: 2025-12-06 07:49:35.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:36 compute-1 ceph-mon[81689]: pgmap v2865: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 06 07:49:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:36.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:36.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:37 compute-1 ceph-mon[81689]: pgmap v2866: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 KiB/s rd, 1.2 KiB/s wr, 9 op/s
Dec 06 07:49:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:38.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:38.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:38 compute-1 nova_compute[226101]: 2025-12-06 07:49:38.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:38 compute-1 nova_compute[226101]: 2025-12-06 07:49:38.922 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:39 compute-1 sshd-session[287614]: Connection closed by 94.102.49.155 port 53320
Dec 06 07:49:39 compute-1 nova_compute[226101]: 2025-12-06 07:49:39.479 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:39 compute-1 sshd-session[287615]: Connection closed by 94.102.49.155 port 53324 [preauth]
Dec 06 07:49:39 compute-1 ceph-mon[81689]: pgmap v2867: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 5 op/s
Dec 06 07:49:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:40.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:40.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:41 compute-1 ceph-mon[81689]: pgmap v2868: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:42.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:42.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:49:43.161 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:49:43 compute-1 ceph-mon[81689]: pgmap v2869: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:43 compute-1 nova_compute[226101]: 2025-12-06 07:49:43.923 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:44.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:44.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:44 compute-1 nova_compute[226101]: 2025-12-06 07:49:44.517 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:45 compute-1 podman[287617]: 2025-12-06 07:49:45.06629932 +0000 UTC m=+0.054713905 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:49:45 compute-1 podman[287618]: 2025-12-06 07:49:45.077268114 +0000 UTC m=+0.064287011 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 07:49:45 compute-1 podman[287619]: 2025-12-06 07:49:45.095121685 +0000 UTC m=+0.078826742 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
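[annotation] The three health_status=healthy events above come from the containers' configured healthchecks (the /openstack/healthcheck script mounted into each container, per the config_data shown). The same check can be driven by hand; a sketch assuming the container names from the log and the standard podman healthcheck subcommand:

    import subprocess

    # Container names taken from the health_status events above.
    for name in ('multipathd', 'ovn_metadata_agent', 'ovn_controller'):
        # 'podman healthcheck run' executes the container's configured
        # healthcheck command and exits 0 when it reports healthy.
        r = subprocess.run(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if r.returncode == 0 else 'unhealthy')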
Dec 06 07:49:45 compute-1 ceph-mon[81689]: pgmap v2870: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:46.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:46.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:47 compute-1 ceph-mon[81689]: pgmap v2871: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:48.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:48.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:48 compute-1 nova_compute[226101]: 2025-12-06 07:49:48.924 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:49 compute-1 ceph-mon[81689]: pgmap v2872: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:49 compute-1 nova_compute[226101]: 2025-12-06 07:49:49.563 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:50 compute-1 nova_compute[226101]: 2025-12-06 07:49:50.311 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:50 compute-1 nova_compute[226101]: 2025-12-06 07:49:50.311 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:50 compute-1 nova_compute[226101]: 2025-12-06 07:49:50.352 226109 DEBUG nova.compute.manager [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:49:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:50.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:50.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:50 compute-1 nova_compute[226101]: 2025-12-06 07:49:50.883 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:50 compute-1 nova_compute[226101]: 2025-12-06 07:49:50.884 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:50 compute-1 nova_compute[226101]: 2025-12-06 07:49:50.892 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:49:50 compute-1 nova_compute[226101]: 2025-12-06 07:49:50.892 226109 INFO nova.compute.claims [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:49:51 compute-1 ceph-mon[81689]: pgmap v2873: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:51 compute-1 nova_compute[226101]: 2025-12-06 07:49:51.725 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:49:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:49:52 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4162480082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.161 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
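[annotation] Nova sizes the RBD backend by shelling out to ceph df exactly as logged above (via oslo_concurrency.processutils). A standalone sketch of the same call; the command line is verbatim from the log, while the stats.total_avail_bytes field is an assumption about the JSON schema, which varies across Ceph releases:

    import json
    import subprocess

    # Same command nova ran above.
    cmd = ['ceph', 'df', '--format=json',
           '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    df = json.loads(out)
    # 'stats.total_avail_bytes' is the cluster-wide free-space field in
    # recent Ceph releases (assumption; check your version's schema).
    print(df['stats']['total_avail_bytes'] / 2**30, 'GiB available')

Each such call also shows up on the mon side as the audit-log dispatch lines ("prefix": "df", "format": "json") interleaved above.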
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.167 226109 DEBUG nova.compute.provider_tree [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.194 226109 DEBUG nova.scheduler.client.report [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.231 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.232 226109 DEBUG nova.compute.manager [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:49:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4162480082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.302 226109 DEBUG nova.compute.manager [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.303 226109 DEBUG nova.network.neutron [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.363 226109 INFO nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.449 226109 DEBUG nova.compute.manager [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:49:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:49:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:52.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:52.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.697 226109 DEBUG nova.compute.manager [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.698 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.698 226109 INFO nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Creating image(s)
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.725 226109 DEBUG nova.storage.rbd_utils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.754 226109 DEBUG nova.storage.rbd_utils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.780 226109 DEBUG nova.storage.rbd_utils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.784 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.851 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.852 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.853 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.853 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.880 226109 DEBUG nova.storage.rbd_utils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:49:52 compute-1 nova_compute[226101]: 2025-12-06 07:49:52.884 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0852932a-3266-432f-9975-8870535aff4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:49:53 compute-1 nova_compute[226101]: 2025-12-06 07:49:53.297 226109 DEBUG nova.policy [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f2335740042045fba7f544ee5140eb87', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:49:53 compute-1 ceph-mon[81689]: pgmap v2874: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:53 compute-1 nova_compute[226101]: 2025-12-06 07:49:53.545 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0852932a-3266-432f-9975-8870535aff4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
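[annotation] The import-then-resize sequence above (rbd import of the cached base image, then a resize to 1073741824 bytes, i.e. the flavor's 1 GiB root disk) can be reproduced with the same arguments. The import command below is verbatim from the log; nova itself performs the resize through librbd in rbd_utils, so the CLI resize form shown is an assumed equivalent:

    import subprocess

    base = '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef'
    disk = '0852932a-3266-432f-9975-8870535aff4e_disk'
    auth = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Verbatim from the log: import the cached base image into the vms pool.
    subprocess.run(['rbd', 'import', '--pool', 'vms', base, disk,
                    '--image-format=2', *auth], check=True)

    # Nova resizes to 1073741824 bytes via librbd; an assumed CLI equivalent:
    subprocess.run(['rbd', 'resize', '--pool', 'vms', '--size', '1G',
                    disk, *auth], check=True)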
Dec 06 07:49:53 compute-1 nova_compute[226101]: 2025-12-06 07:49:53.637 226109 DEBUG nova.storage.rbd_utils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] resizing rbd image 0852932a-3266-432f-9975-8870535aff4e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:49:53 compute-1 nova_compute[226101]: 2025-12-06 07:49:53.741 226109 DEBUG nova.objects.instance [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'migration_context' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:49:53 compute-1 nova_compute[226101]: 2025-12-06 07:49:53.865 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:49:53 compute-1 nova_compute[226101]: 2025-12-06 07:49:53.865 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Ensure instance console log exists: /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:49:53 compute-1 nova_compute[226101]: 2025-12-06 07:49:53.866 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:53 compute-1 nova_compute[226101]: 2025-12-06 07:49:53.866 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:53 compute-1 nova_compute[226101]: 2025-12-06 07:49:53.866 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:49:53 compute-1 nova_compute[226101]: 2025-12-06 07:49:53.927 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:54.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:54.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:54 compute-1 nova_compute[226101]: 2025-12-06 07:49:54.565 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:55 compute-1 ceph-mon[81689]: pgmap v2875: 305 pgs: 305 active+clean; 148 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Dec 06 07:49:55 compute-1 nova_compute[226101]: 2025-12-06 07:49:55.395 226109 DEBUG nova.network.neutron [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Successfully created port: 680d7c20-81e0-48d0-954c-9fbcda7e7615 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:49:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:56.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:49:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:56 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:56.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2081352726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:57 compute-1 sshd-session[287676]: error: kex_exchange_identification: read: Connection timed out
Dec 06 07:49:57 compute-1 sshd-session[287676]: banner exchange: Connection from 183.15.121.139 port 34786: Connection timed out
Dec 06 07:49:57 compute-1 nova_compute[226101]: 2025-12-06 07:49:57.870 226109 DEBUG nova.network.neutron [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Successfully updated port: 680d7c20-81e0-48d0-954c-9fbcda7e7615 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:49:58 compute-1 nova_compute[226101]: 2025-12-06 07:49:58.076 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:49:58 compute-1 nova_compute[226101]: 2025-12-06 07:49:58.076 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquired lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:49:58 compute-1 nova_compute[226101]: 2025-12-06 07:49:58.076 226109 DEBUG nova.network.neutron [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:49:58 compute-1 nova_compute[226101]: 2025-12-06 07:49:58.269 226109 DEBUG nova.compute.manager [req-410db591-8963-4c03-a7f0-a3b77c6b55ad req-782346cc-4c2f-4289-8d26-79343ca7e0f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-changed-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:49:58 compute-1 nova_compute[226101]: 2025-12-06 07:49:58.270 226109 DEBUG nova.compute.manager [req-410db591-8963-4c03-a7f0-a3b77c6b55ad req-782346cc-4c2f-4289-8d26-79343ca7e0f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Refreshing instance network info cache due to event network-changed-680d7c20-81e0-48d0-954c-9fbcda7e7615. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:49:58 compute-1 nova_compute[226101]: 2025-12-06 07:49:58.270 226109 DEBUG oslo_concurrency.lockutils [req-410db591-8963-4c03-a7f0-a3b77c6b55ad req-782346cc-4c2f-4289-8d26-79343ca7e0f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:49:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:49:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:58.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:49:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:58 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:58.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:58 compute-1 nova_compute[226101]: 2025-12-06 07:49:58.718 226109 DEBUG nova.network.neutron [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:49:58 compute-1 nova_compute[226101]: 2025-12-06 07:49:58.984 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:59 compute-1 nova_compute[226101]: 2025-12-06 07:49:59.567 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:59 compute-1 ceph-mon[81689]: pgmap v2876: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Dec 06 07:50:00 compute-1 ceph-mon[81689]: pgmap v2877: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:50:00 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 07:50:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:00.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:00 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:00.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.494 226109 DEBUG nova.network.neutron [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updating instance_info_cache with network_info: [{"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:50:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:01.669 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:01.670 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:01.670 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.681 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Releasing lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.682 226109 DEBUG nova.compute.manager [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Instance network_info: |[{"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.682 226109 DEBUG oslo_concurrency.lockutils [req-410db591-8963-4c03-a7f0-a3b77c6b55ad req-782346cc-4c2f-4289-8d26-79343ca7e0f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.682 226109 DEBUG nova.network.neutron [req-410db591-8963-4c03-a7f0-a3b77c6b55ad req-782346cc-4c2f-4289-8d26-79343ca7e0f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Refreshing network info cache for port 680d7c20-81e0-48d0-954c-9fbcda7e7615 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.685 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Start _get_guest_xml network_info=[{"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.690 226109 WARNING nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.695 226109 DEBUG nova.virt.libvirt.host [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.696 226109 DEBUG nova.virt.libvirt.host [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.702 226109 DEBUG nova.virt.libvirt.host [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.703 226109 DEBUG nova.virt.libvirt.host [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.704 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.704 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.705 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.705 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.705 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.705 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.706 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.706 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.706 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.706 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.706 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.707 226109 DEBUG nova.virt.hardware [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:50:01 compute-1 nova_compute[226101]: 2025-12-06 07:50:01.710 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:02 compute-1 sudo[287876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:02 compute-1 sudo[287876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:02 compute-1 sudo[287876]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:02 compute-1 ceph-mon[81689]: pgmap v2878: 305 pgs: 305 active+clean; 189 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.4 MiB/s wr, 31 op/s
Dec 06 07:50:02 compute-1 sudo[287901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:50:02 compute-1 sudo[287901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:02 compute-1 sudo[287901]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:02 compute-1 sudo[287935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:02 compute-1 sudo[287935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:02 compute-1 sudo[287935]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:02.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:02 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:02.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:02 compute-1 sudo[287960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 07:50:02 compute-1 sudo[287960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:50:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2406501570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:02 compute-1 nova_compute[226101]: 2025-12-06 07:50:02.696 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.986s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.021 226109 DEBUG nova.storage.rbd_utils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.027 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:03 compute-1 podman[288065]: 2025-12-06 07:50:03.048967784 +0000 UTC m=+0.203804067 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:50:03 compute-1 podman[288065]: 2025-12-06 07:50:03.341933198 +0000 UTC m=+0.496769481 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:50:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:50:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1209685693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2406501570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:03 compute-1 ceph-mon[81689]: pgmap v2879: 305 pgs: 305 active+clean; 189 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.4 MiB/s wr, 31 op/s
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.753 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.755 226109 DEBUG nova.virt.libvirt.vif [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:49:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-881580221',display_name='tempest-ServerRescueNegativeTestJSON-server-881580221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-881580221',id=164,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-8ig59luj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:49:52Z,user_data=None,user_id='f2335740042045fba7f544ee5140eb87',uuid=0852932a-3266-432f-9975-8870535aff4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.755 226109 DEBUG nova.network.os_vif_util [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.756 226109 DEBUG nova.network.os_vif_util [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:1c:a5,bridge_name='br-int',has_traffic_filtering=True,id=680d7c20-81e0-48d0-954c-9fbcda7e7615,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680d7c20-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.757 226109 DEBUG nova.objects.instance [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.787 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:50:03 compute-1 nova_compute[226101]:   <uuid>0852932a-3266-432f-9975-8870535aff4e</uuid>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   <name>instance-000000a4</name>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-881580221</nova:name>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:50:01</nova:creationTime>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <nova:user uuid="f2335740042045fba7f544ee5140eb87">tempest-ServerRescueNegativeTestJSON-1304226499-project-member</nova:user>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <nova:project uuid="4842ecff6dce4ccc981a6b65a14ea406">tempest-ServerRescueNegativeTestJSON-1304226499</nova:project>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <nova:port uuid="680d7c20-81e0-48d0-954c-9fbcda7e7615">
Dec 06 07:50:03 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <system>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <entry name="serial">0852932a-3266-432f-9975-8870535aff4e</entry>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <entry name="uuid">0852932a-3266-432f-9975-8870535aff4e</entry>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     </system>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   <os>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   </os>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   <features>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   </features>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0852932a-3266-432f-9975-8870535aff4e_disk">
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       </source>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0852932a-3266-432f-9975-8870535aff4e_disk.config">
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       </source>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:50:03 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:d5:1c:a5"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <target dev="tap680d7c20-81"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/console.log" append="off"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <video>
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     </video>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:50:03 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:50:03 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:50:03 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:50:03 compute-1 nova_compute[226101]: </domain>
Dec 06 07:50:03 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.787 226109 DEBUG nova.compute.manager [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Preparing to wait for external event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.787 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.788 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.788 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.789 226109 DEBUG nova.virt.libvirt.vif [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:49:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-881580221',display_name='tempest-ServerRescueNegativeTestJSON-server-881580221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-881580221',id=164,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-8ig59luj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:49:52Z,user_data=None,user_id='f2335740042045fba7f544ee5140eb87',uuid=0852932a-3266-432f-9975-8870535aff4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.789 226109 DEBUG nova.network.os_vif_util [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.789 226109 DEBUG nova.network.os_vif_util [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:1c:a5,bridge_name='br-int',has_traffic_filtering=True,id=680d7c20-81e0-48d0-954c-9fbcda7e7615,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680d7c20-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.790 226109 DEBUG os_vif [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:1c:a5,bridge_name='br-int',has_traffic_filtering=True,id=680d7c20-81e0-48d0-954c-9fbcda7e7615,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680d7c20-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.790 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.791 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.791 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.794 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.794 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap680d7c20-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.795 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap680d7c20-81, col_values=(('external_ids', {'iface-id': '680d7c20-81e0-48d0-954c-9fbcda7e7615', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:1c:a5', 'vm-uuid': '0852932a-3266-432f-9975-8870535aff4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.796 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:03 compute-1 NetworkManager[49031]: <info>  [1765007403.7977] manager: (tap680d7c20-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.799 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.804 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.805 226109 INFO os_vif [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:1c:a5,bridge_name='br-int',has_traffic_filtering=True,id=680d7c20-81e0-48d0-954c-9fbcda7e7615,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680d7c20-81')
Dec 06 07:50:03 compute-1 sudo[287960]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.870 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.871 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.871 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No VIF found with MAC fa:16:3e:d5:1c:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.871 226109 INFO nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Using config drive
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.898 226109 DEBUG nova.storage.rbd_utils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:03 compute-1 nova_compute[226101]: 2025-12-06 07:50:03.985 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:04.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:04 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:04.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1209685693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:04 compute-1 sudo[288236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:04 compute-1 sudo[288236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:04 compute-1 sudo[288236]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:04 compute-1 sudo[288261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:50:04 compute-1 sudo[288261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:04 compute-1 sudo[288261]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:04 compute-1 sudo[288286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:04 compute-1 sudo[288286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:04 compute-1 sudo[288286]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:04 compute-1 sudo[288311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:50:04 compute-1 sudo[288311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:05 compute-1 sudo[288311]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:05 compute-1 ceph-mon[81689]: pgmap v2880: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 4.2 MiB/s wr, 65 op/s
Dec 06 07:50:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:50:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:50:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:50:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:50:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:50:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e373 e373: 3 total, 3 up, 3 in
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.593987) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405594008, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1880, "num_deletes": 260, "total_data_size": 4239370, "memory_usage": 4299856, "flush_reason": "Manual Compaction"}
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405613895, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 2760768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59666, "largest_seqno": 61541, "table_properties": {"data_size": 2752918, "index_size": 4728, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16981, "raw_average_key_size": 20, "raw_value_size": 2737003, "raw_average_value_size": 3354, "num_data_blocks": 206, "num_entries": 816, "num_filter_entries": 816, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007265, "oldest_key_time": 1765007265, "file_creation_time": 1765007405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 19986 microseconds, and 5435 cpu microseconds.
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.613969) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 2760768 bytes OK
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.613988) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.615345) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.615357) EVENT_LOG_v1 {"time_micros": 1765007405615353, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.615374) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 4230786, prev total WAL file size 4230786, number of live WAL files 2.
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.616490) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(2696KB)], [117(11MB)]
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405616534, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 15134179, "oldest_snapshot_seqno": -1}
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 9356 keys, 13186543 bytes, temperature: kUnknown
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405705217, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 13186543, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13124123, "index_size": 37924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 245175, "raw_average_key_size": 26, "raw_value_size": 12957634, "raw_average_value_size": 1384, "num_data_blocks": 1459, "num_entries": 9356, "num_filter_entries": 9356, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765007405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.705538) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 13186543 bytes
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.707642) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.5 rd, 148.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.8 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(10.3) write-amplify(4.8) OK, records in: 9890, records dropped: 534 output_compression: NoCompression
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.707715) EVENT_LOG_v1 {"time_micros": 1765007405707688, "job": 74, "event": "compaction_finished", "compaction_time_micros": 88761, "compaction_time_cpu_micros": 29547, "output_level": 6, "num_output_files": 1, "total_output_size": 13186543, "num_input_records": 9890, "num_output_records": 9356, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405708921, "job": 74, "event": "table_file_deletion", "file_number": 119}
Dec 06 07:50:05 compute-1 nova_compute[226101]: 2025-12-06 07:50:05.711 226109 DEBUG nova.network.neutron [req-410db591-8963-4c03-a7f0-a3b77c6b55ad req-782346cc-4c2f-4289-8d26-79343ca7e0f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updated VIF entry in instance network info cache for port 680d7c20-81e0-48d0-954c-9fbcda7e7615. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:50:05 compute-1 nova_compute[226101]: 2025-12-06 07:50:05.711 226109 DEBUG nova.network.neutron [req-410db591-8963-4c03-a7f0-a3b77c6b55ad req-782346cc-4c2f-4289-8d26-79343ca7e0f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updating instance_info_cache with network_info: [{"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405713675, "job": 74, "event": "table_file_deletion", "file_number": 117}
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.616375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.713858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.713864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.713866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.713867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:05.713869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-1 nova_compute[226101]: 2025-12-06 07:50:05.730 226109 INFO nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Creating config drive at /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config
Dec 06 07:50:05 compute-1 nova_compute[226101]: 2025-12-06 07:50:05.735 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wrlijvm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:05 compute-1 nova_compute[226101]: 2025-12-06 07:50:05.868 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wrlijvm" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:05 compute-1 nova_compute[226101]: 2025-12-06 07:50:05.897 226109 DEBUG nova.storage.rbd_utils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:05 compute-1 nova_compute[226101]: 2025-12-06 07:50:05.900 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config 0852932a-3266-432f-9975-8870535aff4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:06 compute-1 nova_compute[226101]: 2025-12-06 07:50:06.012 226109 DEBUG oslo_concurrency.lockutils [req-410db591-8963-4c03-a7f0-a3b77c6b55ad req-782346cc-4c2f-4289-8d26-79343ca7e0f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:50:06 compute-1 nova_compute[226101]: 2025-12-06 07:50:06.045 226109 DEBUG oslo_concurrency.processutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config 0852932a-3266-432f-9975-8870535aff4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:06 compute-1 nova_compute[226101]: 2025-12-06 07:50:06.046 226109 INFO nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Deleting local config drive /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config because it was imported into RBD.
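The config-drive flow above is: build an ISO with mkisofs, import it into the Ceph vms pool with rbd import, then delete the local copy. A sketch replaying the same two commands via oslo.concurrency, as nova itself does; the instance-specific path, temp directory, and image name from the log are reduced to placeholders here:

    # Sketch only: the two subprocess commands nova ran above.
    from oslo_concurrency import processutils

    iso = '/var/lib/nova/instances/<uuid>/disk.config'   # placeholder path
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
        '-V', 'config-2', '/tmp/<staging-dir>')          # metadata staging dir

    # Import the ISO into the 'vms' pool; nova then removes the local file.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso, '<uuid>_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')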
Dec 06 07:50:06 compute-1 kernel: tap680d7c20-81: entered promiscuous mode
Dec 06 07:50:06 compute-1 NetworkManager[49031]: <info>  [1765007406.1031] manager: (tap680d7c20-81): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Dec 06 07:50:06 compute-1 systemd-udevd[288417]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:50:06 compute-1 nova_compute[226101]: 2025-12-06 07:50:06.151 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:06 compute-1 ovn_controller[130279]: 2025-12-06T07:50:06Z|00629|binding|INFO|Claiming lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 for this chassis.
Dec 06 07:50:06 compute-1 ovn_controller[130279]: 2025-12-06T07:50:06Z|00630|binding|INFO|680d7c20-81e0-48d0-954c-9fbcda7e7615: Claiming fa:16:3e:d5:1c:a5 10.100.0.11
Dec 06 07:50:06 compute-1 nova_compute[226101]: 2025-12-06 07:50:06.153 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:06 compute-1 nova_compute[226101]: 2025-12-06 07:50:06.158 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:06 compute-1 NetworkManager[49031]: <info>  [1765007406.1656] device (tap680d7c20-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:50:06 compute-1 NetworkManager[49031]: <info>  [1765007406.1673] device (tap680d7c20-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:50:06 compute-1 systemd-machined[190302]: New machine qemu-74-instance-000000a4.
Dec 06 07:50:06 compute-1 ovn_controller[130279]: 2025-12-06T07:50:06Z|00631|binding|INFO|Setting lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 ovn-installed in OVS
Dec 06 07:50:06 compute-1 systemd[1]: Started Virtual Machine qemu-74-instance-000000a4.
Dec 06 07:50:06 compute-1 nova_compute[226101]: 2025-12-06 07:50:06.215 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:06.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:06 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:06.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:06 compute-1 ceph-mon[81689]: osdmap e373: 3 total, 3 up, 3 in
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:06.995253) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007406995526, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 270, "num_deletes": 256, "total_data_size": 22901, "memory_usage": 28328, "flush_reason": "Manual Compaction"}
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007406997590, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 14382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61546, "largest_seqno": 61811, "table_properties": {"data_size": 12555, "index_size": 60, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4589, "raw_average_key_size": 17, "raw_value_size": 9012, "raw_average_value_size": 34, "num_data_blocks": 3, "num_entries": 265, "num_filter_entries": 265, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007406, "oldest_key_time": 1765007406, "file_creation_time": 1765007406, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 2418 microseconds, and 717 cpu microseconds.
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:06.997681) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 14382 bytes OK
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:06.997721) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:06.999036) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:06.999047) EVENT_LOG_v1 {"time_micros": 1765007406999043, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:06.999061) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 20803, prev total WAL file size 20803, number of live WAL files 2.
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:06.999511) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303233' seq:72057594037927935, type:22 .. '6C6F676D0032323735' seq:0, type:0; will stop at (end)
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(14KB)], [120(12MB)]
Dec 06 07:50:06 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007406999546, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 13200925, "oldest_snapshot_seqno": -1}
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 9104 keys, 13036553 bytes, temperature: kUnknown
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007407090855, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 13036553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12975521, "index_size": 37176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 240880, "raw_average_key_size": 26, "raw_value_size": 12813079, "raw_average_value_size": 1407, "num_data_blocks": 1423, "num_entries": 9104, "num_filter_entries": 9104, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765007406, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:07.091075) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13036553 bytes
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:07.093425) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.5 rd, 142.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 12.6 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(1824.3) write-amplify(906.4) OK, records in: 9621, records dropped: 517 output_compression: NoCompression
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:07.093440) EVENT_LOG_v1 {"time_micros": 1765007407093433, "job": 76, "event": "compaction_finished", "compaction_time_micros": 91365, "compaction_time_cpu_micros": 25999, "output_level": 6, "num_output_files": 1, "total_output_size": 13036553, "num_input_records": 9621, "num_output_records": 9104, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
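The amplification figures in the JOB 76 summary follow directly from the byte counts logged for its inputs and output (table #122 flushed at 14382 bytes, existing L6 table #120 at 13186543 bytes, output table #123 at 13036553 bytes). A quick check:

    # Reproduce write-amplify(906.4) and read-write-amplify(1824.3)
    # from the file sizes logged for compaction JOB 76.
    input_l0 = 14382        # table #122, the freshly flushed L0 file
    input_l6 = 13186543     # table #120, the existing L6 file
    output   = 13036553     # table #123, the new L6 file

    print(round(output / input_l0, 1))                          # 906.4
    print(round((input_l0 + input_l6 + output) / input_l0, 1))  # 1824.3

The huge ratios simply reflect a tiny L0 input being rewritten against a much larger L6 file; the same formula applied to JOB 74 above (2.6 MB in, 12.6 MB out) gives its logged 4.8 and 10.3.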
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007407093534, "job": 76, "event": "table_file_deletion", "file_number": 122}
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007407095443, "job": 76, "event": "table_file_deletion", "file_number": 120}
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:06.999448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:07.095478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:07.095483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:07.095484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:07.095485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:50:07.095487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:07 compute-1 nova_compute[226101]: 2025-12-06 07:50:07.492 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007407.4921615, 0852932a-3266-432f-9975-8870535aff4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:07 compute-1 nova_compute[226101]: 2025-12-06 07:50:07.493 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] VM Started (Lifecycle Event)
Dec 06 07:50:07 compute-1 ovn_controller[130279]: 2025-12-06T07:50:07Z|00632|binding|INFO|Setting lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 up in Southbound
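The claim/up sequence logged by ovn_controller can be cross-checked in the OVN Southbound database; once "Setting lport ... up in Southbound" appears, the Port_Binding row should show this chassis and up=[true]. A sketch, run wherever ovn-sbctl can reach the Southbound DB, with the logical port name taken from the log:

    # Sketch: look up the Southbound Port_Binding for the lport claimed above.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=680d7c20-81e0-48d0-954c-9fbcda7e7615'],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # expect chassis set to compute-1 and up : [true]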
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.525 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:1c:a5 10.100.0.11'], port_security=['fa:16:3e:d5:1c:a5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0852932a-3266-432f-9975-8870535aff4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=680d7c20-81e0-48d0-954c-9fbcda7e7615) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.526 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 680d7c20-81e0-48d0-954c-9fbcda7e7615 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 bound to our chassis
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.527 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.536 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[667422b3-cf68-4657-bd1b-34d49e10d625]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.537 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d151181-01 in ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.539 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d151181-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.539 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc144cf-eb33-46eb-b8a9-a8a33d3d629f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.540 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[094bd9de-4f54-4038-a110-18d5db5ccdf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.551 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[8447ca1f-dc98-4e2e-9ecd-1a6825f13960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.573 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c78ab797-939f-4be7-a0f3-072cc9471fbe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.597 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[23fb87b1-e82b-40d1-af73-985ce3f110ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.605 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[904ae126-0905-43a0-992d-e8651daa8b02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 NetworkManager[49031]: <info>  [1765007407.6061] manager: (tap3d151181-00): new Veth device (/org/freedesktop/NetworkManager/Devices/282)
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.633 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[52c46767-6c53-4c1e-8bc9-3c6dbc9b4a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.636 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[4b61c455-d613-4a7b-a9d6-3ec127ce1b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 NetworkManager[49031]: <info>  [1765007407.6637] device (tap3d151181-00): carrier: link connected
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.670 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1d1e95-d1ea-432d-8fcf-3a0aeb95be17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.688 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f343d24b-3b2a-4ee0-8ed8-0dd0d9d9a09f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768716, 'reachable_time': 18380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288495, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.707 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7e390462-669d-4826-af25-efbdc68cc875]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:130b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 768716, 'tstamp': 768716}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288496, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.724 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[aed2de7a-e6b9-4335-9388-49a5a6a9d855]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768716, 'reachable_time': 18380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288497, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.767 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0e90b6-0732-476f-933d-ea58f1fd6a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.847 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4baf03-cafd-49fa-a720-42a8acdd61ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.849 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.849 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.849 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d151181-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:07 compute-1 nova_compute[226101]: 2025-12-06 07:50:07.851 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:07 compute-1 kernel: tap3d151181-00: entered promiscuous mode
Dec 06 07:50:07 compute-1 NetworkManager[49031]: <info>  [1765007407.8535] manager: (tap3d151181-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Dec 06 07:50:07 compute-1 nova_compute[226101]: 2025-12-06 07:50:07.855 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.856 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d151181-00, col_values=(('external_ids', {'iface-id': '7c0488e1-35c2-4c92-b43c-271fbeecd9ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:07 compute-1 ovn_controller[130279]: 2025-12-06T07:50:07Z|00633|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=1)
Dec 06 07:50:07 compute-1 nova_compute[226101]: 2025-12-06 07:50:07.857 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.877 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:50:07 compute-1 nova_compute[226101]: 2025-12-06 07:50:07.877 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.878 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[473d842e-fdec-4073-aa95-e7e390dd5c1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.879 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:50:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:07.879 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'env', 'PROCESS_TAG=haproxy-3d151181-0dfe-43ab-b47e-15b53add33a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d151181-0dfe-43ab-b47e-15b53add33a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
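
haproxy is launched through neutron-rootwrap inside the ovnmeta-3d151181-... namespace so the 169.254.169.254:80 bind in the config above lands in the right netns, and PROCESS_TAG gives the kill scripts a handle on the process later. A rendered config like this can be sanity-checked with haproxy's parse-only mode before launch; a hedged sketch, assuming haproxy is on PATH:

    # Validate the generated proxy config: 'haproxy -c' parses the file and
    # exits non-zero on errors, without starting a daemon.
    import subprocess

    CONF = ('/var/lib/neutron/ovn-metadata-proxy/'
            '3d151181-0dfe-43ab-b47e-15b53add33a6.conf')
    result = subprocess.run(['haproxy', '-c', '-f', CONF],
                            capture_output=True, text=True)
    print(result.returncode, result.stderr.strip())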
Dec 06 07:50:07 compute-1 nova_compute[226101]: 2025-12-06 07:50:07.951 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:07 compute-1 nova_compute[226101]: 2025-12-06 07:50:07.955 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007407.492301, 0852932a-3266-432f-9975-8870535aff4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:07 compute-1 nova_compute[226101]: 2025-12-06 07:50:07.956 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] VM Paused (Lifecycle Event)
Dec 06 07:50:08 compute-1 ceph-mon[81689]: pgmap v2882: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 4.2 MiB/s wr, 46 op/s
Dec 06 07:50:08 compute-1 podman[288530]: 2025-12-06 07:50:08.281378301 +0000 UTC m=+0.047239523 container create 9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:50:08 compute-1 systemd[1]: Started libpod-conmon-9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054.scope.
Dec 06 07:50:08 compute-1 nova_compute[226101]: 2025-12-06 07:50:08.323 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:08 compute-1 nova_compute[226101]: 2025-12-06 07:50:08.326 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
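
The two integers in that message are nova.compute.power_state codes: the database still records 0 (NOSTATE) while libvirt reports 3 (PAUSED), which is normal mid-spawn. For reference, the codes as defined in Nova's power_state module:

    # Decode the power-state integers from the sync message above.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    db_state, vm_state = 0, 3   # values from the log line above
    print(f"DB={POWER_STATE[db_state]} hypervisor={POWER_STATE[vm_state]}")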
Dec 06 07:50:08 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:50:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f94249b1a5ea8f24422378ede30675ce9ca8d0236cea2a5d9923359cdcd1b77/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:08 compute-1 podman[288530]: 2025-12-06 07:50:08.25535477 +0000 UTC m=+0.021216022 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:50:08 compute-1 podman[288530]: 2025-12-06 07:50:08.361644671 +0000 UTC m=+0.127505943 container init 9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:50:08 compute-1 podman[288530]: 2025-12-06 07:50:08.366766229 +0000 UTC m=+0.132627461 container start 9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:50:08 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[288545]: [NOTICE]   (288549) : New worker (288551) forked
Dec 06 07:50:08 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[288545]: [NOTICE]   (288549) : Loading success.
Dec 06 07:50:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:08.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:08 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:08.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:08 compute-1 nova_compute[226101]: 2025-12-06 07:50:08.745 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:50:08 compute-1 nova_compute[226101]: 2025-12-06 07:50:08.798 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:08 compute-1 nova_compute[226101]: 2025-12-06 07:50:08.988 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:09 compute-1 ceph-mon[81689]: pgmap v2883: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 4.2 MiB/s wr, 47 op/s
Dec 06 07:50:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:50:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2605400865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:50:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:50:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2605400865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:50:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2605400865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:50:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2605400865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:50:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:10.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:10.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.538 226109 DEBUG nova.compute.manager [req-b0a4c8cc-a84d-4007-b429-f90b486fad7a req-177a7fdf-8be3-4044-bd9b-610d20018a90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.539 226109 DEBUG oslo_concurrency.lockutils [req-b0a4c8cc-a84d-4007-b429-f90b486fad7a req-177a7fdf-8be3-4044-bd9b-610d20018a90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.539 226109 DEBUG oslo_concurrency.lockutils [req-b0a4c8cc-a84d-4007-b429-f90b486fad7a req-177a7fdf-8be3-4044-bd9b-610d20018a90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.540 226109 DEBUG oslo_concurrency.lockutils [req-b0a4c8cc-a84d-4007-b429-f90b486fad7a req-177a7fdf-8be3-4044-bd9b-610d20018a90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.540 226109 DEBUG nova.compute.manager [req-b0a4c8cc-a84d-4007-b429-f90b486fad7a req-177a7fdf-8be3-4044-bd9b-610d20018a90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Processing event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.541 226109 DEBUG nova.compute.manager [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
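
The 3-second wait is Nova's external-event handshake: the spawn thread parks on network-vif-plugged-680d7c20-... until Neutron reports the VIF bound, and the event popped just above wakes it. A generic sketch of that wait/notify pattern with stdlib primitives (Nova's real implementation is eventlet-based and more involved); the duplicate event at 07:50:12 below hits the no-waiter branch and is logged as unexpected:

    # Generic wait/notify pattern behind these messages.
    import threading

    events = {}   # (instance_uuid, event_name) -> threading.Event

    def wait_for(instance, name, timeout):
        ev = events.setdefault((instance, name), threading.Event())
        return ev.wait(timeout)          # spawn thread blocks here

    def deliver(instance, name):
        ev = events.pop((instance, name), None)
        if ev is None:
            # cf. the WARNING "Received unexpected event" at 07:50:12
            print(f"unexpected event {name} for {instance}")
        else:
            ev.set()                     # unblocks the waiting spawn thread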
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.544 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007410.5444486, 0852932a-3266-432f-9975-8870535aff4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.545 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] VM Resumed (Lifecycle Event)
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.546 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.549 226109 INFO nova.virt.libvirt.driver [-] [instance: 0852932a-3266-432f-9975-8870535aff4e] Instance spawned successfully.
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.550 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.627 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.637 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.643 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.643 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.644 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.645 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.646 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.646 226109 DEBUG nova.virt.libvirt.driver [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.692 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.736 226109 INFO nova.compute.manager [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Took 18.04 seconds to spawn the instance on the hypervisor.
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.737 226109 DEBUG nova.compute.manager [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.830 226109 INFO nova.compute.manager [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Took 20.03 seconds to build instance.
Dec 06 07:50:10 compute-1 nova_compute[226101]: 2025-12-06 07:50:10.905 226109 DEBUG oslo_concurrency.lockutils [None req-3bc2a0ce-dd61-4b23-bab9-d1abf5bdd6c8 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
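
The whole build ran under a per-instance UUID lock, held for the full 20.593s, so a second build request for the same instance would queue rather than interleave. A minimal sketch of that serialization with oslo.concurrency; the decorator usage is assumed to mirror the lock name in the message:

    # Per-instance serialization as seen in the acquire/release pairs above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('0852932a-3266-432f-9975-8870535aff4e')
    def _locked_do_build_and_run_instance():
        ...   # build steps run while the instance-UUID lock is held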
Dec 06 07:50:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1083001606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:11 compute-1 ceph-mon[81689]: pgmap v2884: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 3.4 MiB/s wr, 61 op/s
Dec 06 07:50:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1316813484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:11 compute-1 sshd-session[288560]: Received disconnect from 165.154.55.146 port 51020:11: Bye Bye [preauth]
Dec 06 07:50:11 compute-1 sshd-session[288560]: Disconnected from authenticating user root 165.154.55.146 port 51020 [preauth]
Dec 06 07:50:11 compute-1 sudo[288562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:11 compute-1 sudo[288562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:11 compute-1 sudo[288562]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:11 compute-1 sudo[288587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:50:11 compute-1 sudo[288587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:11 compute-1 sudo[288587]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:12 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:12 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:12.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:12.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:12 compute-1 nova_compute[226101]: 2025-12-06 07:50:12.690 226109 DEBUG nova.compute.manager [req-59cab684-7fe6-4a04-8788-1a61bc2b3867 req-4a0b6841-9c47-4951-a32c-cbabe37cb9d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:12 compute-1 nova_compute[226101]: 2025-12-06 07:50:12.691 226109 DEBUG oslo_concurrency.lockutils [req-59cab684-7fe6-4a04-8788-1a61bc2b3867 req-4a0b6841-9c47-4951-a32c-cbabe37cb9d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:12 compute-1 nova_compute[226101]: 2025-12-06 07:50:12.691 226109 DEBUG oslo_concurrency.lockutils [req-59cab684-7fe6-4a04-8788-1a61bc2b3867 req-4a0b6841-9c47-4951-a32c-cbabe37cb9d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:12 compute-1 nova_compute[226101]: 2025-12-06 07:50:12.691 226109 DEBUG oslo_concurrency.lockutils [req-59cab684-7fe6-4a04-8788-1a61bc2b3867 req-4a0b6841-9c47-4951-a32c-cbabe37cb9d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:12 compute-1 nova_compute[226101]: 2025-12-06 07:50:12.691 226109 DEBUG nova.compute.manager [req-59cab684-7fe6-4a04-8788-1a61bc2b3867 req-4a0b6841-9c47-4951-a32c-cbabe37cb9d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] No waiting events found dispatching network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:50:12 compute-1 nova_compute[226101]: 2025-12-06 07:50:12.692 226109 WARNING nova.compute.manager [req-59cab684-7fe6-4a04-8788-1a61bc2b3867 req-4a0b6841-9c47-4951-a32c-cbabe37cb9d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received unexpected event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 for instance with vm_state active and task_state None.
Dec 06 07:50:13 compute-1 ceph-mon[81689]: pgmap v2885: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 3.4 MiB/s wr, 61 op/s
Dec 06 07:50:13 compute-1 nova_compute[226101]: 2025-12-06 07:50:13.799 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:13 compute-1 nova_compute[226101]: 2025-12-06 07:50:13.989 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:14.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:14.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:16 compute-1 podman[288614]: 2025-12-06 07:50:16.116725435 +0000 UTC m=+0.076749206 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:50:16 compute-1 podman[288613]: 2025-12-06 07:50:16.128204654 +0000 UTC m=+0.087363152 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:50:16 compute-1 podman[288615]: 2025-12-06 07:50:16.151633175 +0000 UTC m=+0.107970377 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
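
These three health_status=healthy events are podman executing each container's configured healthcheck (the '/openstack/healthcheck' test mounted into the container, per the config_data above). The same check can be driven by hand; a short sketch, assuming the podman CLI and the container name from the event:

    # Run a container's configured healthcheck once; exit code 0 means healthy.
    import subprocess

    r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'])
    print('healthy' if r.returncode == 0 else 'unhealthy')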
Dec 06 07:50:16 compute-1 ceph-mon[81689]: pgmap v2886: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.3 MiB/s wr, 73 op/s
Dec 06 07:50:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:16.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:16.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/272318413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:18.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:18.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:18 compute-1 nova_compute[226101]: 2025-12-06 07:50:18.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:18 compute-1 nova_compute[226101]: 2025-12-06 07:50:18.801 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:18 compute-1 nova_compute[226101]: 2025-12-06 07:50:18.991 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:19 compute-1 nova_compute[226101]: 2025-12-06 07:50:19.041 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:19 compute-1 nova_compute[226101]: 2025-12-06 07:50:19.041 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:19 compute-1 nova_compute[226101]: 2025-12-06 07:50:19.041 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:19 compute-1 nova_compute[226101]: 2025-12-06 07:50:19.042 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:50:19 compute-1 nova_compute[226101]: 2025-12-06 07:50:19.042 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:50:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/341899823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:19 compute-1 nova_compute[226101]: 2025-12-06 07:50:19.768 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.726s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
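
update_available_resource shells out to ceph df to size the shared RBD storage; the 0.726s round trip above is one monitor query (the matching dispatch is logged by ceph-mon at 07:50:19). A hedged sketch of issuing and parsing the same call; the JSON field names follow ceph's schema, everything else is copied from the command line above:

    # Issue the same 'ceph df' query the resource tracker runs, and pull
    # cluster-wide free space out of the JSON reply.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    free_gib = stats['total_avail_bytes'] / 1024**3
    print(f"cluster free: {free_gib:.1f} GiB")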
Dec 06 07:50:20 compute-1 nova_compute[226101]: 2025-12-06 07:50:20.036 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:50:20 compute-1 nova_compute[226101]: 2025-12-06 07:50:20.037 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:50:20 compute-1 nova_compute[226101]: 2025-12-06 07:50:20.189 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:50:20 compute-1 nova_compute[226101]: 2025-12-06 07:50:20.190 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4207MB free_disk=20.946487426757812GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:50:20 compute-1 nova_compute[226101]: 2025-12-06 07:50:20.190 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:20 compute-1 nova_compute[226101]: 2025-12-06 07:50:20.191 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:20 compute-1 ceph-mon[81689]: pgmap v2887: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 85 op/s
Dec 06 07:50:20 compute-1 ceph-mon[81689]: pgmap v2888: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 26 KiB/s wr, 121 op/s
Dec 06 07:50:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:20.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:20.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:20 compute-1 nova_compute[226101]: 2025-12-06 07:50:20.647 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 0852932a-3266-432f-9975-8870535aff4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:50:20 compute-1 nova_compute[226101]: 2025-12-06 07:50:20.648 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:50:20 compute-1 nova_compute[226101]: 2025-12-06 07:50:20.648 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:50:20 compute-1 nova_compute[226101]: 2025-12-06 07:50:20.685 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/341899823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:21.713 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:50:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:21.714 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:50:21 compute-1 nova_compute[226101]: 2025-12-06 07:50:21.714 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:22.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:22 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:22.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:50:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/347522823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:22 compute-1 nova_compute[226101]: 2025-12-06 07:50:22.569 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.884s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:22 compute-1 nova_compute[226101]: 2025-12-06 07:50:22.577 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:50:22 compute-1 nova_compute[226101]: 2025-12-06 07:50:22.633 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
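
Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class, which is why this 8-vCPU host can carry 32 vCPUs of allocations. Worked out from the numbers in the line above:

    # capacity = (total - reserved) * allocation_ratio, floored to whole units
    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9}}

    for rc, v in inv.items():
        cap = int((v['total'] - v['reserved']) * v['allocation_ratio'])
        print(rc, cap)   # VCPU 32, MEMORY_MB 7168, DISK_GB 17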
Dec 06 07:50:22 compute-1 nova_compute[226101]: 2025-12-06 07:50:22.977 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:50:22 compute-1 nova_compute[226101]: 2025-12-06 07:50:22.978 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:23 compute-1 ceph-mon[81689]: pgmap v2889: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Dec 06 07:50:23 compute-1 nova_compute[226101]: 2025-12-06 07:50:23.843 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:23 compute-1 nova_compute[226101]: 2025-12-06 07:50:23.994 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/347522823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:24 compute-1 ceph-mon[81689]: pgmap v2890: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 134 op/s
Dec 06 07:50:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2055956596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:24 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:24.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:24.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:25 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:25.718 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
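
This write closes the nb_cfg loop opened at 07:50:21: SB_Global moved to nb_cfg=65, the agent waited its 4-second delay, and now acknowledges by writing neutron:ovn-metadata-sb-cfg=65 into its Chassis_Private row, which is how Neutron judges the agent alive and in sync. A sketch of that ack via ovsdbapp's southbound API; the connection details are assumptions and table registration may differ by ovsdbapp version:

    # Ack the southbound nb_cfg bump, as in the DbSetCommand above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB_DB = 'unix:/run/ovn/ovnsb_db.sock'   # assumed SB socket path
    idl = connection.OvsdbIdl.from_server(SB_DB, 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    with sb.transaction(check_error=True) as txn:
        txn.add(sb.db_set(
            'Chassis_Private', '03fe054d-d727-4af3-9c5e-92e57505f242',
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'})))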
Dec 06 07:50:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:26.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:26.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:27 compute-1 nova_compute[226101]: 2025-12-06 07:50:27.554 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:27 compute-1 nova_compute[226101]: 2025-12-06 07:50:27.554 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:27 compute-1 nova_compute[226101]: 2025-12-06 07:50:27.581 226109 DEBUG nova.compute.manager [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:50:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3565456485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:27 compute-1 ceph-mon[81689]: pgmap v2891: 305 pgs: 305 active+clean; 262 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.5 MiB/s wr, 176 op/s
Dec 06 07:50:27 compute-1 nova_compute[226101]: 2025-12-06 07:50:27.651 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:27 compute-1 nova_compute[226101]: 2025-12-06 07:50:27.651 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:27 compute-1 nova_compute[226101]: 2025-12-06 07:50:27.659 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:50:27 compute-1 nova_compute[226101]: 2025-12-06 07:50:27.660 226109 INFO nova.compute.claims [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:50:27 compute-1 nova_compute[226101]: 2025-12-06 07:50:27.782 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:50:28 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/519535519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.235 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
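
The "Running cmd"/"CMD ... returned" pair above is oslo.concurrency's processutils wrapper around the ceph CLI; the 0.453s figure is the wall-clock time of the subprocess. A sketch of the same call, assuming python-oslo-concurrency and a reachable cluster (the stats keys shown are the usual `ceph df` JSON fields, not taken from this log):

    import json

    from oslo_concurrency import processutils

    # Same invocation as logged; execute() returns (stdout, stderr) and
    # raises ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])
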
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.241 226109 DEBUG nova.compute.provider_tree [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.260 226109 DEBUG nova.scheduler.client.report [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
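
The inventory dict above fixes what the scheduler may place on this node: per resource class, usable capacity is (total - reserved) * allocation_ratio. Worked out from the logged values:

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0        -> up to 32 vCPUs may be claimed on 8 host CPUs
    # MEMORY_MB 7168.0 -> no RAM overcommit beyond the 512 MB reservation
    # DISK_GB 17.1     -> disk is undercommitted (ratio 0.9), at most 17 GB
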
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.297 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.297 226109 DEBUG nova.compute.manager [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.382 226109 DEBUG nova.compute.manager [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.382 226109 DEBUG nova.network.neutron [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.411 226109 INFO nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.432 226109 DEBUG nova.compute.manager [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:50:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:28.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:28.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.562 226109 DEBUG nova.compute.manager [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.564 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.564 226109 INFO nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Creating image(s)
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.586 226109 DEBUG nova.storage.rbd_utils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.610 226109 DEBUG nova.storage.rbd_utils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.635 226109 DEBUG nova.storage.rbd_utils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.638 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.666 226109 DEBUG nova.policy [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4962bc7b172346e19d127b46ea2d7a11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4cf19b89a6d46bca307e65731a9dd21', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
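
The policy line above is an expected denial, not an error: the request carries only the reader and member roles and is_admin is False, so attaching to an external network is refused and the build continues with normal tenant networking. A hedged reconstruction with oslo.policy, assuming an admin-only default for the rule (the real default lives in the service's policy files, not in this log):

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.ConfigOpts()
    CONF([])  # no config files needed for this sketch
    enforcer = policy.Enforcer(CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))

    # Credentials as logged: no admin bit, reader/member roles only.
    creds = {'is_admin': False, 'roles': ['reader', 'member']}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False, matching the "Policy check ... failed" line above
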
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.708 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
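
The qemu-img probe above runs under oslo.concurrency's prlimit guard: the child is re-exec'd through "python -m oslo_concurrency.prlimit" with a 1 GiB address-space cap (--as=1073741824) and a 30 s CPU cap (--cpu=30), so a malformed base image cannot wedge the compute host. The same guard expressed through the library, as a sketch:

    from oslo_concurrency import processutils

    # ProcessLimits is what produces the "--as=... --cpu=..." wrapper
    # visible in the logged command line.
    limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3,
                                        cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json',
        prlimit=limits)
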
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.708 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.709 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.709 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.730 226109 DEBUG nova.storage.rbd_utils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.734 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.846 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/11715429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/281735642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:28 compute-1 ceph-mon[81689]: pgmap v2892: 305 pgs: 305 active+clean; 273 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.3 MiB/s wr, 145 op/s
Dec 06 07:50:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/519535519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.979 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.979 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:50:28 compute-1 nova_compute[226101]: 2025-12-06 07:50:28.979 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:50:29 compute-1 nova_compute[226101]: 2025-12-06 07:50:29.001 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:50:29 compute-1 nova_compute[226101]: 2025-12-06 07:50:29.039 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:29 compute-1 nova_compute[226101]: 2025-12-06 07:50:29.262 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:29 compute-1 nova_compute[226101]: 2025-12-06 07:50:29.317 226109 DEBUG nova.storage.rbd_utils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] resizing rbd image e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
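
Between the import at 07:50:28.734 and this line the root disk comes into being: "rbd import" pushes the cached base image into the vms pool, then rbd_utils grows it to 1073741824 bytes, the 1 GiB root disk of the m1.nano flavor seen later in this log. A sketch of that resize through the python-rbd bindings, assuming they are installed and the client.openstack keyring is readable:

    import rados
    import rbd

    # Same cluster credentials as the logged CLI calls ("--id openstack").
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           'e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk') as img:
                img.resize(1073741824)  # grow to the flavor's 1 GiB root disk
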
Dec 06 07:50:29 compute-1 nova_compute[226101]: 2025-12-06 07:50:29.657 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:50:29 compute-1 nova_compute[226101]: 2025-12-06 07:50:29.658 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:50:29 compute-1 nova_compute[226101]: 2025-12-06 07:50:29.658 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:50:29 compute-1 nova_compute[226101]: 2025-12-06 07:50:29.658 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:29 compute-1 ovn_controller[130279]: 2025-12-06T07:50:29Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:1c:a5 10.100.0.11
Dec 06 07:50:29 compute-1 ovn_controller[130279]: 2025-12-06T07:50:29Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:1c:a5 10.100.0.11
Dec 06 07:50:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:30.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:30.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:31 compute-1 nova_compute[226101]: 2025-12-06 07:50:31.188 226109 DEBUG nova.network.neutron [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Successfully created port: c775a9ab-4177-4c1a-880a-35b1b701d488 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:50:31 compute-1 ceph-mon[81689]: pgmap v2893: 305 pgs: 305 active+clean; 308 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.5 MiB/s wr, 145 op/s
Dec 06 07:50:31 compute-1 nova_compute[226101]: 2025-12-06 07:50:31.444 226109 DEBUG nova.objects.instance [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'migration_context' on Instance uuid e7d5d854-2a1f-485b-931a-4ec90cf7ba04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:31 compute-1 nova_compute[226101]: 2025-12-06 07:50:31.458 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:50:31 compute-1 nova_compute[226101]: 2025-12-06 07:50:31.459 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Ensure instance console log exists: /var/lib/nova/instances/e7d5d854-2a1f-485b-931a-4ec90cf7ba04/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:50:31 compute-1 nova_compute[226101]: 2025-12-06 07:50:31.459 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:31 compute-1 nova_compute[226101]: 2025-12-06 07:50:31.460 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:31 compute-1 nova_compute[226101]: 2025-12-06 07:50:31.460 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:32 compute-1 ceph-mon[81689]: pgmap v2894: 305 pgs: 305 active+clean; 366 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.2 MiB/s wr, 219 op/s
Dec 06 07:50:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.378 226109 DEBUG nova.network.neutron [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Successfully updated port: c775a9ab-4177-4c1a-880a-35b1b701d488 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.413 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.414 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquired lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.414 226109 DEBUG nova.network.neutron [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:50:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:32.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:32.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.573 226109 DEBUG nova.compute.manager [req-37833f4c-afbc-4b47-8266-e1d652b38d44 req-e32280b1-3cc0-4b6b-b137-17e09e3ea3c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Received event network-changed-c775a9ab-4177-4c1a-880a-35b1b701d488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.574 226109 DEBUG nova.compute.manager [req-37833f4c-afbc-4b47-8266-e1d652b38d44 req-e32280b1-3cc0-4b6b-b137-17e09e3ea3c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Refreshing instance network info cache due to event network-changed-c775a9ab-4177-4c1a-880a-35b1b701d488. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.574 226109 DEBUG oslo_concurrency.lockutils [req-37833f4c-afbc-4b47-8266-e1d652b38d44 req-e32280b1-3cc0-4b6b-b137-17e09e3ea3c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.592 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updating instance_info_cache with network_info: [{"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.613 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.613 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.614 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.614 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.614 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.615 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:50:32 compute-1 nova_compute[226101]: 2025-12-06 07:50:32.666 226109 DEBUG nova.network.neutron [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:50:33 compute-1 nova_compute[226101]: 2025-12-06 07:50:33.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:33 compute-1 nova_compute[226101]: 2025-12-06 07:50:33.848 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.042 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.360 226109 DEBUG nova.network.neutron [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updating instance_info_cache with network_info: [{"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
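
The cache payload above is a JSON list with one dict per VIF; the port is still "active": false because OVN has not finished binding it. A hypothetical helper for pulling the fixed IPs out of such a payload (nw_info_json stands in for the serialized list shown in the log line):

    import json

    def fixed_ips(nw_info_json):
        # Walk VIF -> network -> subnets -> ips, yielding (port, address).
        for vif in json.loads(nw_info_json):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    if ip['type'] == 'fixed':
                        yield vif['id'], ip['address']

    # For the payload above this yields
    # ('c775a9ab-4177-4c1a-880a-35b1b701d488', '10.100.0.4').
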
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.384 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Releasing lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.384 226109 DEBUG nova.compute.manager [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Instance network_info: |[{"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.384 226109 DEBUG oslo_concurrency.lockutils [req-37833f4c-afbc-4b47-8266-e1d652b38d44 req-e32280b1-3cc0-4b6b-b137-17e09e3ea3c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.385 226109 DEBUG nova.network.neutron [req-37833f4c-afbc-4b47-8266-e1d652b38d44 req-e32280b1-3cc0-4b6b-b137-17e09e3ea3c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Refreshing network info cache for port c775a9ab-4177-4c1a-880a-35b1b701d488 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.387 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Start _get_guest_xml network_info=[{"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.392 226109 WARNING nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.396 226109 DEBUG nova.virt.libvirt.host [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.397 226109 DEBUG nova.virt.libvirt.host [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.400 226109 DEBUG nova.virt.libvirt.host [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.400 226109 DEBUG nova.virt.libvirt.host [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
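
The two probes above tell the libvirt driver which cgroup hierarchy can do CPU accounting: the v1 check fails (no per-controller cpu mount on this host) and the v2 check succeeds. On cgroup v2 the enabled controllers are listed in a single file, so the successful probe reduces to something like the sketch below (the exact file nova reads is an assumption; this is the standard cgroup v2 interface):

    # Minimal re-check of the v2 probe the log reports as successful.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        print('cpu' in f.read().split())  # True on this host, per the log
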
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.401 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.402 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.402 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.402 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.403 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.403 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.403 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.403 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.403 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.404 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.404 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.404 226109 DEBUG nova.virt.hardware [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
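
The topology lines above enumerate every (sockets, cores, threads) factorization of the vCPU count that fits the limits; with no flavor or image constraints and one vCPU, 1:1:1 is the only candidate, hence "Got 1 possible topologies". A compact reconstruction of that search:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every factorization of vcpus within the per-axis limits,
        # mirroring the 65536/65536/65536 limits in the log.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
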
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.407 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:34.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:34.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:34 compute-1 nova_compute[226101]: 2025-12-06 07:50:34.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:50:34 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3917741292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:35 compute-1 nova_compute[226101]: 2025-12-06 07:50:35.100 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:35 compute-1 nova_compute[226101]: 2025-12-06 07:50:35.125 226109 DEBUG nova.storage.rbd_utils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:35 compute-1 nova_compute[226101]: 2025-12-06 07:50:35.130 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:50:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2432893103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:35 compute-1 nova_compute[226101]: 2025-12-06 07:50:35.989 226109 DEBUG nova.network.neutron [req-37833f4c-afbc-4b47-8266-e1d652b38d44 req-e32280b1-3cc0-4b6b-b137-17e09e3ea3c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updated VIF entry in instance network info cache for port c775a9ab-4177-4c1a-880a-35b1b701d488. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:50:35 compute-1 nova_compute[226101]: 2025-12-06 07:50:35.989 226109 DEBUG nova.network.neutron [req-37833f4c-afbc-4b47-8266-e1d652b38d44 req-e32280b1-3cc0-4b6b-b137-17e09e3ea3c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updating instance_info_cache with network_info: [{"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:50:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3325595356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:36 compute-1 ceph-mon[81689]: pgmap v2895: 305 pgs: 305 active+clean; 366 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.2 MiB/s wr, 188 op/s
Dec 06 07:50:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4260146210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.197 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.199 226109 DEBUG nova.virt.libvirt.vif [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:50:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-2011496999',display_name='tempest-TestStampPattern-server-2011496999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-teststamppattern-server-2011496999',id=167,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM8KhRxaTrkKNzMUybnifFqVhR7VOW5ilrhcPN+BlOV2c9vQAH2tT4hPBYJpZ93aPVMmrWQGW35OWGQh34F5+BdF2On//RqgE6BOka+CpM6HEuYW/HMTwic5wOTQHp91yg==',key_name='tempest-TestStampPattern-1707395411',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4cf19b89a6d46bca307e65731a9dd21',ramdisk_id='',reservation_id='r-yxfbgt70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1318067975',owner_user_name='tempest-TestStampPattern-1318067975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:50:28Z,user_data=None,user_id='4962bc7b172346e19d127b46ea2d7a11',uuid=e7d5d854-2a1f-485b-931a-4ec90cf7ba04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.199 226109 DEBUG nova.network.os_vif_util [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converting VIF {"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.200 226109 DEBUG nova.network.os_vif_util [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:c8:49,bridge_name='br-int',has_traffic_filtering=True,id=c775a9ab-4177-4c1a-880a-35b1b701d488,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775a9ab-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.201 226109 DEBUG nova.objects.instance [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'pci_devices' on Instance uuid e7d5d854-2a1f-485b-931a-4ec90cf7ba04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.249 226109 DEBUG oslo_concurrency.lockutils [req-37833f4c-afbc-4b47-8266-e1d652b38d44 req-e32280b1-3cc0-4b6b-b137-17e09e3ea3c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.257 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:50:36 compute-1 nova_compute[226101]:   <uuid>e7d5d854-2a1f-485b-931a-4ec90cf7ba04</uuid>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   <name>instance-000000a7</name>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <nova:name>tempest-TestStampPattern-server-2011496999</nova:name>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:50:34</nova:creationTime>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <nova:user uuid="4962bc7b172346e19d127b46ea2d7a11">tempest-TestStampPattern-1318067975-project-member</nova:user>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <nova:project uuid="c4cf19b89a6d46bca307e65731a9dd21">tempest-TestStampPattern-1318067975</nova:project>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <nova:port uuid="c775a9ab-4177-4c1a-880a-35b1b701d488">
Dec 06 07:50:36 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <system>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <entry name="serial">e7d5d854-2a1f-485b-931a-4ec90cf7ba04</entry>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <entry name="uuid">e7d5d854-2a1f-485b-931a-4ec90cf7ba04</entry>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     </system>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   <os>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   </os>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   <features>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   </features>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk">
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       </source>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk.config">
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       </source>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:50:36 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:66:c8:49"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <target dev="tapc775a9ab-41"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/e7d5d854-2a1f-485b-931a-4ec90cf7ba04/console.log" append="off"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <video>
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     </video>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:50:36 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:50:36 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:50:36 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:50:36 compute-1 nova_compute[226101]: </domain>
Dec 06 07:50:36 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
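
The domain XML dumped above is what nova hands to libvirt. As a minimal sketch, assuming the standard libvirt Python bindings and a hypothetical local file holding that XML, the define-and-boot step looks roughly like this; the file name and connection URI are illustrative, not taken from this host:

    # Illustrative only: define and start a domain from XML such as the
    # <domain type="kvm"> document logged above (libvirt Python bindings).
    import libvirt

    xml = open("instance-000000a7.xml").read()   # hypothetical copy of the XML above
    conn = libvirt.open("qemu:///system")        # local system hypervisor
    try:
        dom = conn.defineXML(xml)                # persist the domain definition
        dom.create()                             # boot it (like 'virsh start')
        print(dom.name(), dom.ID())
    finally:
        conn.close()
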
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.258 226109 DEBUG nova.compute.manager [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Preparing to wait for external event network-vif-plugged-c775a9ab-4177-4c1a-880a-35b1b701d488 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.258 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.259 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.259 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
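
The acquire/release pair above comes from oslo.concurrency's named-lock decorator. A sketch of the same primitive, with an illustrative body rather than nova's actual event bookkeeping:

    # Illustrative use of the oslo.concurrency named lock that emits the
    # 'Lock "..." acquired by ...' / '"released" by ...' debug lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events")
    def _create_or_get_event():
        # critical section: per-instance event state is mutated safely here
        pass

    _create_or_get_event()
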
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.260 226109 DEBUG nova.virt.libvirt.vif [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:50:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-2011496999',display_name='tempest-TestStampPattern-server-2011496999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-teststamppattern-server-2011496999',id=167,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM8KhRxaTrkKNzMUybnifFqVhR7VOW5ilrhcPN+BlOV2c9vQAH2tT4hPBYJpZ93aPVMmrWQGW35OWGQh34F5+BdF2On//RqgE6BOka+CpM6HEuYW/HMTwic5wOTQHp91yg==',key_name='tempest-TestStampPattern-1707395411',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4cf19b89a6d46bca307e65731a9dd21',ramdisk_id='',reservation_id='r-yxfbgt70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1318067975',owner_user_name='tempest-TestStampPattern-1318067975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:50:28Z,user_data=None,user_id='4962bc7b172346e19d127b46ea2d7a11',uuid=e7d5d854-2a1f-485b-931a-4ec90cf7ba04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.260 226109 DEBUG nova.network.os_vif_util [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converting VIF {"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.260 226109 DEBUG nova.network.os_vif_util [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:c8:49,bridge_name='br-int',has_traffic_filtering=True,id=c775a9ab-4177-4c1a-880a-35b1b701d488,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775a9ab-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.261 226109 DEBUG os_vif [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:c8:49,bridge_name='br-int',has_traffic_filtering=True,id=c775a9ab-4177-4c1a-880a-35b1b701d488,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775a9ab-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.261 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.262 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.262 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.264 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.265 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc775a9ab-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.265 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc775a9ab-41, col_values=(('external_ids', {'iface-id': 'c775a9ab-4177-4c1a-880a-35b1b701d488', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:c8:49', 'vm-uuid': 'e7d5d854-2a1f-485b-931a-4ec90cf7ba04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
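
The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface row) have a well-known CLI equivalent; a sketch via ovs-vsctl, with the port name, iface-id, MAC, and VM UUID copied from the log lines above:

    # Rough ovs-vsctl equivalent of the logged OVSDB transaction;
    # illustrative only, values taken verbatim from the log.
    import subprocess

    port = "tapc775a9ab-41"
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
                   check=True)
    subprocess.run(["ovs-vsctl", "set", "Interface", port,
                    "external_ids:iface-id=c775a9ab-4177-4c1a-880a-35b1b701d488",
                    "external_ids:iface-status=active",
                    "external_ids:attached-mac=fa:16:3e:66:c8:49",
                    "external_ids:vm-uuid=e7d5d854-2a1f-485b-931a-4ec90cf7ba04"],
                   check=True)
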
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.324 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:36 compute-1 NetworkManager[49031]: <info>  [1765007436.3260] manager: (tapc775a9ab-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.327 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.331 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.331 226109 INFO os_vif [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:c8:49,bridge_name='br-int',has_traffic_filtering=True,id=c775a9ab-4177-4c1a-880a-35b1b701d488,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775a9ab-41')
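
The "Plugging vif" / "Successfully plugged vif" lines wrap os-vif's public entry point. A heavily abbreviated sketch of that call follows; a real invocation also needs the Network object and other fields populated, so treat every field below as illustrative:

    # Abbreviated sketch of the os-vif plug call logged above; not complete.
    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()
    vif = vif_obj.VIFOpenVSwitch(
        id="c775a9ab-4177-4c1a-880a-35b1b701d488",
        address="fa:16:3e:66:c8:49",
        vif_name="tapc775a9ab-41",
        bridge_name="br-int",
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id="c775a9ab-4177-4c1a-880a-35b1b701d488"))
    inst = instance_info.InstanceInfo(
        uuid="e7d5d854-2a1f-485b-931a-4ec90cf7ba04", name="instance-000000a7")
    os_vif.plug(vif, inst)   # dispatches to the 'ovs' plugin
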
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.443 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.444 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.445 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No VIF found with MAC fa:16:3e:66:c8:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.445 226109 INFO nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Using config drive
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.469 226109 DEBUG nova.storage.rbd_utils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:36.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:36.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:36 compute-1 nova_compute[226101]: 2025-12-06 07:50:36.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:37 compute-1 nova_compute[226101]: 2025-12-06 07:50:37.177 226109 INFO nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Creating config drive at /var/lib/nova/instances/e7d5d854-2a1f-485b-931a-4ec90cf7ba04/disk.config
Dec 06 07:50:37 compute-1 nova_compute[226101]: 2025-12-06 07:50:37.181 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7d5d854-2a1f-485b-931a-4ec90cf7ba04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgaw8_io1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:37 compute-1 sshd-session[288972]: Received disconnect from 154.219.116.39 port 46054:11: Bye Bye [preauth]
Dec 06 07:50:37 compute-1 sshd-session[288972]: Disconnected from authenticating user root 154.219.116.39 port 46054 [preauth]
Dec 06 07:50:37 compute-1 nova_compute[226101]: 2025-12-06 07:50:37.312 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7d5d854-2a1f-485b-931a-4ec90cf7ba04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgaw8_io1" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3917741292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:37 compute-1 ceph-mon[81689]: pgmap v2896: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.8 MiB/s wr, 260 op/s
Dec 06 07:50:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2432893103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:37 compute-1 nova_compute[226101]: 2025-12-06 07:50:37.365 226109 DEBUG nova.storage.rbd_utils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:37 compute-1 nova_compute[226101]: 2025-12-06 07:50:37.369 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7d5d854-2a1f-485b-931a-4ec90cf7ba04/disk.config e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:37 compute-1 nova_compute[226101]: 2025-12-06 07:50:37.894 226109 DEBUG oslo_concurrency.processutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7d5d854-2a1f-485b-931a-4ec90cf7ba04/disk.config e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:37 compute-1 nova_compute[226101]: 2025-12-06 07:50:37.895 226109 INFO nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Deleting local config drive /var/lib/nova/instances/e7d5d854-2a1f-485b-931a-4ec90cf7ba04/disk.config because it was imported into RBD.
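
The config-drive sequence above (mkisofs into the instance directory, rbd import into the vms pool, then removal of the local ISO) reduces to three subprocess steps; the commands and paths below are copied verbatim from the log and are illustrative only:

    # Sketch of the logged config-drive flow; values are copied from the
    # log purely to illustrate the sequence, not to be re-run as-is.
    import os, subprocess

    iso = "/var/lib/nova/instances/e7d5d854-2a1f-485b-931a-4ec90cf7ba04/disk.config"
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
                    "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpgaw8_io1"],
                   check=True)
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    "e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    os.remove(iso)   # nova deletes the local ISO once it lives in RBD
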
Dec 06 07:50:37 compute-1 kernel: tapc775a9ab-41: entered promiscuous mode
Dec 06 07:50:37 compute-1 NetworkManager[49031]: <info>  [1765007437.9439] manager: (tapc775a9ab-41): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Dec 06 07:50:37 compute-1 nova_compute[226101]: 2025-12-06 07:50:37.945 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:37 compute-1 ovn_controller[130279]: 2025-12-06T07:50:37Z|00634|binding|INFO|Claiming lport c775a9ab-4177-4c1a-880a-35b1b701d488 for this chassis.
Dec 06 07:50:37 compute-1 ovn_controller[130279]: 2025-12-06T07:50:37Z|00635|binding|INFO|c775a9ab-4177-4c1a-880a-35b1b701d488: Claiming fa:16:3e:66:c8:49 10.100.0.4
Dec 06 07:50:37 compute-1 nova_compute[226101]: 2025-12-06 07:50:37.949 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:37 compute-1 systemd-udevd[289047]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:50:37 compute-1 systemd-machined[190302]: New machine qemu-75-instance-000000a7.
Dec 06 07:50:37 compute-1 NetworkManager[49031]: <info>  [1765007437.9854] device (tapc775a9ab-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:50:37 compute-1 NetworkManager[49031]: <info>  [1765007437.9872] device (tapc775a9ab-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:50:38 compute-1 nova_compute[226101]: 2025-12-06 07:50:38.005 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-1 systemd[1]: Started Virtual Machine qemu-75-instance-000000a7.
Dec 06 07:50:38 compute-1 ovn_controller[130279]: 2025-12-06T07:50:38Z|00636|binding|INFO|Setting lport c775a9ab-4177-4c1a-880a-35b1b701d488 ovn-installed in OVS
Dec 06 07:50:38 compute-1 nova_compute[226101]: 2025-12-06 07:50:38.010 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-1 ceph-mon[81689]: pgmap v2897: 305 pgs: 305 active+clean; 392 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.4 MiB/s wr, 222 op/s
Dec 06 07:50:38 compute-1 ovn_controller[130279]: 2025-12-06T07:50:38Z|00637|binding|INFO|Setting lport c775a9ab-4177-4c1a-880a-35b1b701d488 up in Southbound
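
Once ovn-controller reports the lport claimed and up, the southbound Port_Binding row reflects it. A hedged way to confirm, using the generic ovsdb 'find' query (column names per the OVN southbound schema):

    # Hedged check of the southbound binding described above.
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
         "logical_port=c775a9ab-4177-4c1a-880a-35b1b701d488"],
        capture_output=True, text=True, check=True)
    print(out.stdout)   # expect a chassis UUID and up : [true] once claimed
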
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.274 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:c8:49 10.100.0.4'], port_security=['fa:16:3e:66:c8:49 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e7d5d854-2a1f-485b-931a-4ec90cf7ba04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4cf19b89a6d46bca307e65731a9dd21', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8dd9f4b-9afe-430e-a0a0-846e8785c631', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1ea0b24-813d-4f2d-b582-53b5b07aa43a, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=c775a9ab-4177-4c1a-880a-35b1b701d488) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.275 139580 INFO neutron.agent.ovn.metadata.agent [-] Port c775a9ab-4177-4c1a-880a-35b1b701d488 in datapath 9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 bound to our chassis
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.277 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.289 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ddab52ab-cd86-4679-b154-2378bf81088e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.290 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9e0e5f36-41 in ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.292 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9e0e5f36-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.293 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0edfd32d-8715-4c68-89e1-6955bfe89ce4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.294 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[768794ee-e3da-431a-bcd1-c88c4a34c4e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.306 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[16ba37a3-5d88-4b6c-bceb-aaf54498e4c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.330 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8577a290-65b5-4fd5-bd80-ab92a761ede2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.356 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[69643fed-fc4b-4d4e-8129-c57c72b38c14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 systemd-udevd[289050]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:50:38 compute-1 NetworkManager[49031]: <info>  [1765007438.3637] manager: (tap9e0e5f36-40): new Veth device (/org/freedesktop/NetworkManager/Devices/286)
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.363 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6e837ef2-d1d1-482c-8a5c-22338cbece74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.392 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e20786-58e6-4fa8-88da-eddb61e39c31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.394 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[18c2804d-6812-43f3-bef6-4f708c635430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 NetworkManager[49031]: <info>  [1765007438.4139] device (tap9e0e5f36-40): carrier: link connected
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.418 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7631dc80-df39-4970-a88b-0505794b1d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.433 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[396ce8b5-3b9b-4ed1-ab36-970d9b143678]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e0e5f36-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e5:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771791, 'reachable_time': 35534, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289088, 'error': None, 'target': 'ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:50:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/754643084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.446 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1a1292-d91e-40d0-8139-9b2f592d787c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:e5d7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 771791, 'tstamp': 771791}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289089, 'error': None, 'target': 'ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.462 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6abd7682-1723-4160-b3fa-af4541038a43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e0e5f36-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e5:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771791, 'reachable_time': 35534, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289098, 'error': None, 'target': 'ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.490 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[805678bc-cf44-434e-8df5-bcd04fac2391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:38.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:38.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.552 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e76edb-5457-4a1a-ac99-28cb467d113a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.553 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e0e5f36-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.554 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.554 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e0e5f36-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:38 compute-1 nova_compute[226101]: 2025-12-06 07:50:38.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:38 compute-1 nova_compute[226101]: 2025-12-06 07:50:38.593 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-1 NetworkManager[49031]: <info>  [1765007438.5938] manager: (tap9e0e5f36-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Dec 06 07:50:38 compute-1 kernel: tap9e0e5f36-40: entered promiscuous mode
Dec 06 07:50:38 compute-1 nova_compute[226101]: 2025-12-06 07:50:38.595 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.599 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e0e5f36-40, col_values=(('external_ids', {'iface-id': 'b8d91b14-ad14-4c15-a901-b1a5c72f0e0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:38 compute-1 nova_compute[226101]: 2025-12-06 07:50:38.600 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-1 ovn_controller[130279]: 2025-12-06T07:50:38Z|00638|binding|INFO|Releasing lport b8d91b14-ad14-4c15-a901-b1a5c72f0e0f from this chassis (sb_readonly=0)
Dec 06 07:50:38 compute-1 nova_compute[226101]: 2025-12-06 07:50:38.601 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.602 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.603 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9ac6b3-0d3e-4812-8c5a-1c1209f8c687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.604 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7.pid.haproxy
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:50:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:50:38.605 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'env', 'PROCESS_TAG=haproxy-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:50:38 compute-1 nova_compute[226101]: 2025-12-06 07:50:38.614 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-1 nova_compute[226101]: 2025-12-06 07:50:38.749 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007438.7490084, e7d5d854-2a1f-485b-931a-4ec90cf7ba04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:38 compute-1 nova_compute[226101]: 2025-12-06 07:50:38.750 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] VM Started (Lifecycle Event)
Dec 06 07:50:39 compute-1 podman[289157]: 2025-12-06 07:50:38.926194068 +0000 UTC m=+0.021798228 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:50:39 compute-1 podman[289157]: 2025-12-06 07:50:39.036574289 +0000 UTC m=+0.132178439 container create 27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.044 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:39 compute-1 systemd[1]: Started libpod-conmon-27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877.scope.
Dec 06 07:50:39 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:50:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c320bd327b08948367d97b9fc8498a753574e4fed40c2107aaf466027ef86f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:39 compute-1 podman[289157]: 2025-12-06 07:50:39.257072073 +0000 UTC m=+0.352676253 container init 27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:50:39 compute-1 podman[289157]: 2025-12-06 07:50:39.263334072 +0000 UTC m=+0.358938232 container start 27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 06 07:50:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/754643084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/375235154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:39 compute-1 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[289172]: [NOTICE]   (289178) : New worker (289180) forked
Dec 06 07:50:39 compute-1 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[289172]: [NOTICE]   (289178) : Loading success.
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.313 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.318 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007438.7492456, e7d5d854-2a1f-485b-931a-4ec90cf7ba04 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.318 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] VM Paused (Lifecycle Event)
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.482 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.486 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.523 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.812 226109 DEBUG nova.compute.manager [req-296262bd-42b1-4168-be8b-095ea393c060 req-1cc3d084-cd39-438e-8727-144d7f01566a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Received event network-vif-plugged-c775a9ab-4177-4c1a-880a-35b1b701d488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.813 226109 DEBUG oslo_concurrency.lockutils [req-296262bd-42b1-4168-be8b-095ea393c060 req-1cc3d084-cd39-438e-8727-144d7f01566a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.813 226109 DEBUG oslo_concurrency.lockutils [req-296262bd-42b1-4168-be8b-095ea393c060 req-1cc3d084-cd39-438e-8727-144d7f01566a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.813 226109 DEBUG oslo_concurrency.lockutils [req-296262bd-42b1-4168-be8b-095ea393c060 req-1cc3d084-cd39-438e-8727-144d7f01566a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.813 226109 DEBUG nova.compute.manager [req-296262bd-42b1-4168-be8b-095ea393c060 req-1cc3d084-cd39-438e-8727-144d7f01566a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Processing event network-vif-plugged-c775a9ab-4177-4c1a-880a-35b1b701d488 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.814 226109 DEBUG nova.compute.manager [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.819 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.819 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007439.819093, e7d5d854-2a1f-485b-931a-4ec90cf7ba04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.819 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] VM Resumed (Lifecycle Event)
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.823 226109 INFO nova.virt.libvirt.driver [-] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Instance spawned successfully.
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.823 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.895 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.899 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.939 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.939 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.940 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.940 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.940 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:39 compute-1 nova_compute[226101]: 2025-12-06 07:50:39.941 226109 DEBUG nova.virt.libvirt.driver [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:40 compute-1 nova_compute[226101]: 2025-12-06 07:50:40.023 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:50:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:40.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:40.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:40 compute-1 nova_compute[226101]: 2025-12-06 07:50:40.630 226109 INFO nova.compute.manager [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Took 12.07 seconds to spawn the instance on the hypervisor.
Dec 06 07:50:40 compute-1 nova_compute[226101]: 2025-12-06 07:50:40.630 226109 DEBUG nova.compute.manager [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:40 compute-1 nova_compute[226101]: 2025-12-06 07:50:40.725 226109 INFO nova.compute.manager [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Took 13.10 seconds to build instance.
Dec 06 07:50:40 compute-1 nova_compute[226101]: 2025-12-06 07:50:40.746 226109 DEBUG oslo_concurrency.lockutils [None req-12d17805-c5f6-4c9e-bef8-114d4be53840 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:40 compute-1 ceph-mon[81689]: pgmap v2898: 305 pgs: 305 active+clean; 392 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.5 MiB/s wr, 215 op/s
Dec 06 07:50:41 compute-1 nova_compute[226101]: 2025-12-06 07:50:41.325 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:41 compute-1 ceph-mon[81689]: pgmap v2899: 305 pgs: 305 active+clean; 422 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.5 MiB/s wr, 212 op/s
Dec 06 07:50:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e374 e374: 3 total, 3 up, 3 in
Dec 06 07:50:41 compute-1 nova_compute[226101]: 2025-12-06 07:50:41.937 226109 DEBUG nova.compute.manager [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Received event network-vif-plugged-c775a9ab-4177-4c1a-880a-35b1b701d488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:41 compute-1 nova_compute[226101]: 2025-12-06 07:50:41.938 226109 DEBUG oslo_concurrency.lockutils [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:41 compute-1 nova_compute[226101]: 2025-12-06 07:50:41.938 226109 DEBUG oslo_concurrency.lockutils [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:41 compute-1 nova_compute[226101]: 2025-12-06 07:50:41.938 226109 DEBUG oslo_concurrency.lockutils [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:41 compute-1 nova_compute[226101]: 2025-12-06 07:50:41.939 226109 DEBUG nova.compute.manager [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] No waiting events found dispatching network-vif-plugged-c775a9ab-4177-4c1a-880a-35b1b701d488 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:50:41 compute-1 nova_compute[226101]: 2025-12-06 07:50:41.939 226109 WARNING nova.compute.manager [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Received unexpected event network-vif-plugged-c775a9ab-4177-4c1a-880a-35b1b701d488 for instance with vm_state active and task_state None.
Dec 06 07:50:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:42.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:42.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:42 compute-1 ceph-mon[81689]: osdmap e374: 3 total, 3 up, 3 in
Dec 06 07:50:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3577384955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1467634657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2511236024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:43 compute-1 ceph-mon[81689]: pgmap v2901: 305 pgs: 305 active+clean; 422 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Dec 06 07:50:44 compute-1 nova_compute[226101]: 2025-12-06 07:50:44.046 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:44.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:44 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:44.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:45 compute-1 ceph-mon[81689]: pgmap v2902: 305 pgs: 305 active+clean; 509 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.3 MiB/s wr, 197 op/s
Dec 06 07:50:46 compute-1 nova_compute[226101]: 2025-12-06 07:50:46.374 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:46.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:46 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:46.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2810597353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:47 compute-1 NetworkManager[49031]: <info>  [1765007447.0522] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Dec 06 07:50:47 compute-1 NetworkManager[49031]: <info>  [1765007447.0531] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Dec 06 07:50:47 compute-1 nova_compute[226101]: 2025-12-06 07:50:47.053 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:47 compute-1 podman[289190]: 2025-12-06 07:50:47.082127151 +0000 UTC m=+0.066716537 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 06 07:50:47 compute-1 podman[289191]: 2025-12-06 07:50:47.098939803 +0000 UTC m=+0.081859514 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 07:50:47 compute-1 nova_compute[226101]: 2025-12-06 07:50:47.267 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:47 compute-1 podman[289192]: 2025-12-06 07:50:47.275447254 +0000 UTC m=+0.233011372 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 06 07:50:47 compute-1 ovn_controller[130279]: 2025-12-06T07:50:47Z|00639|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec 06 07:50:47 compute-1 ovn_controller[130279]: 2025-12-06T07:50:47Z|00640|binding|INFO|Releasing lport b8d91b14-ad14-4c15-a901-b1a5c72f0e0f from this chassis (sb_readonly=0)
Dec 06 07:50:47 compute-1 nova_compute[226101]: 2025-12-06 07:50:47.294 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/588140338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:47 compute-1 ceph-mon[81689]: pgmap v2903: 305 pgs: 305 active+clean; 515 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 234 op/s
Dec 06 07:50:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:48.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:48 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:48.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:49 compute-1 nova_compute[226101]: 2025-12-06 07:50:49.051 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:49 compute-1 ceph-mon[81689]: pgmap v2904: 305 pgs: 305 active+clean; 518 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 233 op/s
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.145 226109 INFO nova.compute.manager [None req-12ebcdd7-b223-4f58-8a55-5de6a8c60afe f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Pausing
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.146 226109 DEBUG nova.objects.instance [None req-12ebcdd7-b223-4f58-8a55-5de6a8c60afe f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'flavor' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.202 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007450.2019324, 0852932a-3266-432f-9975-8870535aff4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.202 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] VM Paused (Lifecycle Event)
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.204 226109 DEBUG nova.compute.manager [None req-12ebcdd7-b223-4f58-8a55-5de6a8c60afe f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.235 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.239 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.289 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] During sync_power_state the instance has a pending task (pausing). Skip.
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.371 226109 DEBUG nova.compute.manager [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Received event network-changed-c775a9ab-4177-4c1a-880a-35b1b701d488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.372 226109 DEBUG nova.compute.manager [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Refreshing instance network info cache due to event network-changed-c775a9ab-4177-4c1a-880a-35b1b701d488. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.372 226109 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.373 226109 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:50:50 compute-1 nova_compute[226101]: 2025-12-06 07:50:50.373 226109 DEBUG nova.network.neutron [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Refreshing network info cache for port c775a9ab-4177-4c1a-880a-35b1b701d488 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:50:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:50 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:50.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:50.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:51 compute-1 sshd-session[289189]: error: kex_exchange_identification: read: Connection timed out
Dec 06 07:50:51 compute-1 sshd-session[289189]: banner exchange: Connection from 101.227.54.26 port 45298: Connection timed out
Dec 06 07:50:51 compute-1 nova_compute[226101]: 2025-12-06 07:50:51.411 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:52.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:52.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:52 compute-1 nova_compute[226101]: 2025-12-06 07:50:52.681 226109 DEBUG nova.network.neutron [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updated VIF entry in instance network info cache for port c775a9ab-4177-4c1a-880a-35b1b701d488. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:50:52 compute-1 nova_compute[226101]: 2025-12-06 07:50:52.682 226109 DEBUG nova.network.neutron [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updating instance_info_cache with network_info: [{"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:50:52 compute-1 ceph-mon[81689]: pgmap v2905: 305 pgs: 305 active+clean; 518 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.4 MiB/s wr, 293 op/s
Dec 06 07:50:53 compute-1 nova_compute[226101]: 2025-12-06 07:50:53.104 226109 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:50:53 compute-1 nova_compute[226101]: 2025-12-06 07:50:53.210 226109 INFO nova.compute.manager [None req-10d1db21-ee8e-4dca-934e-76360b0363b5 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Unpausing
Dec 06 07:50:53 compute-1 nova_compute[226101]: 2025-12-06 07:50:53.211 226109 DEBUG nova.objects.instance [None req-10d1db21-ee8e-4dca-934e-76360b0363b5 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'flavor' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:53 compute-1 nova_compute[226101]: 2025-12-06 07:50:53.295 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007453.2948694, 0852932a-3266-432f-9975-8870535aff4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:53 compute-1 nova_compute[226101]: 2025-12-06 07:50:53.295 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] VM Resumed (Lifecycle Event)
Dec 06 07:50:53 compute-1 virtqemud[225710]: argument unsupported: QEMU guest agent is not configured
Dec 06 07:50:53 compute-1 nova_compute[226101]: 2025-12-06 07:50:53.300 226109 DEBUG nova.virt.libvirt.guest [None req-10d1db21-ee8e-4dca-934e-76360b0363b5 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 06 07:50:53 compute-1 nova_compute[226101]: 2025-12-06 07:50:53.300 226109 DEBUG nova.compute.manager [None req-10d1db21-ee8e-4dca-934e-76360b0363b5 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:53 compute-1 nova_compute[226101]: 2025-12-06 07:50:53.332 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:53 compute-1 nova_compute[226101]: 2025-12-06 07:50:53.336 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:50:53 compute-1 nova_compute[226101]: 2025-12-06 07:50:53.392 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] During sync_power_state the instance has a pending task (unpausing). Skip.
Dec 06 07:50:54 compute-1 nova_compute[226101]: 2025-12-06 07:50:54.053 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:54 compute-1 ceph-mon[81689]: pgmap v2906: 305 pgs: 305 active+clean; 518 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.8 MiB/s wr, 260 op/s
Dec 06 07:50:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:54.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:54 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:54.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:55 compute-1 ovn_controller[130279]: 2025-12-06T07:50:55Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:c8:49 10.100.0.4
Dec 06 07:50:55 compute-1 ovn_controller[130279]: 2025-12-06T07:50:55Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:c8:49 10.100.0.4
Dec 06 07:50:56 compute-1 ceph-mon[81689]: pgmap v2907: 305 pgs: 305 active+clean; 527 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.3 MiB/s wr, 319 op/s
Dec 06 07:50:56 compute-1 nova_compute[226101]: 2025-12-06 07:50:56.414 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:56.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:56 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:56.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:57 compute-1 nova_compute[226101]: 2025-12-06 07:50:57.591 226109 INFO nova.compute.manager [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Rescuing
Dec 06 07:50:57 compute-1 nova_compute[226101]: 2025-12-06 07:50:57.591 226109 DEBUG oslo_concurrency.lockutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:50:57 compute-1 nova_compute[226101]: 2025-12-06 07:50:57.592 226109 DEBUG oslo_concurrency.lockutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquired lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:50:57 compute-1 nova_compute[226101]: 2025-12-06 07:50:57.592 226109 DEBUG nova.network.neutron [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:50:58 compute-1 ceph-mon[81689]: pgmap v2908: 305 pgs: 305 active+clean; 536 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.9 MiB/s wr, 223 op/s
Dec 06 07:50:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:50:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:58.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:50:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:58 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:58.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:58 compute-1 nova_compute[226101]: 2025-12-06 07:50:58.940 226109 DEBUG nova.network.neutron [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updating instance_info_cache with network_info: [{"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:50:58 compute-1 nova_compute[226101]: 2025-12-06 07:50:58.974 226109 DEBUG oslo_concurrency.lockutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Releasing lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:50:59 compute-1 nova_compute[226101]: 2025-12-06 07:50:59.079 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:59 compute-1 nova_compute[226101]: 2025-12-06 07:50:59.342 226109 DEBUG nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:51:00 compute-1 ceph-mon[81689]: pgmap v2909: 305 pgs: 305 active+clean; 548 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 195 op/s
Dec 06 07:51:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:00.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:00.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:01 compute-1 nova_compute[226101]: 2025-12-06 07:51:01.454 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:01.670 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:01.671 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:01.672 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:02.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:02 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:02.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:02 compute-1 ceph-mon[81689]: pgmap v2910: 305 pgs: 305 active+clean; 551 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.780494) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462780548, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 805, "num_deletes": 250, "total_data_size": 1519129, "memory_usage": 1539592, "flush_reason": "Manual Compaction"}
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462790925, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 665305, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61817, "largest_seqno": 62616, "table_properties": {"data_size": 662026, "index_size": 1123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8960, "raw_average_key_size": 20, "raw_value_size": 654996, "raw_average_value_size": 1523, "num_data_blocks": 49, "num_entries": 430, "num_filter_entries": 430, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007407, "oldest_key_time": 1765007407, "file_creation_time": 1765007462, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 10562 microseconds, and 5854 cpu microseconds.
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.791048) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 665305 bytes OK
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.791082) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.793590) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.793654) EVENT_LOG_v1 {"time_micros": 1765007462793639, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.793688) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 1514924, prev total WAL file size 1514924, number of live WAL files 2.
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.794790) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303236' seq:72057594037927935, type:22 .. '6D6772737461740032323737' seq:0, type:0; will stop at (end)
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(649KB)], [123(12MB)]
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462794874, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 13701858, "oldest_snapshot_seqno": -1}
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 9040 keys, 10192520 bytes, temperature: kUnknown
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462877736, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 10192520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10135972, "index_size": 32820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22661, "raw_key_size": 239795, "raw_average_key_size": 26, "raw_value_size": 9978763, "raw_average_value_size": 1103, "num_data_blocks": 1242, "num_entries": 9040, "num_filter_entries": 9040, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765007462, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.878237) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10192520 bytes
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.880156) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.9 rd, 122.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(35.9) write-amplify(15.3) OK, records in: 9534, records dropped: 494 output_compression: NoCompression
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.880184) EVENT_LOG_v1 {"time_micros": 1765007462880171, "job": 78, "event": "compaction_finished", "compaction_time_micros": 83100, "compaction_time_cpu_micros": 32297, "output_level": 6, "num_output_files": 1, "total_output_size": 10192520, "num_input_records": 9534, "num_output_records": 9040, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462881174, "job": 78, "event": "table_file_deletion", "file_number": 125}
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462884922, "job": 78, "event": "table_file_deletion", "file_number": 123}
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.794560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.885202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.885208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.885210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.885212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:51:02.885214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:03 compute-1 ceph-mon[81689]: pgmap v2911: 305 pgs: 305 active+clean; 551 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Dec 06 07:51:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e375 e375: 3 total, 3 up, 3 in
Dec 06 07:51:04 compute-1 nova_compute[226101]: 2025-12-06 07:51:04.080 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:04.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:04 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:04.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.034 226109 DEBUG oslo_concurrency.lockutils [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.035 226109 DEBUG oslo_concurrency.lockutils [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:05 compute-1 kernel: tap680d7c20-81 (unregistering): left promiscuous mode
Dec 06 07:51:05 compute-1 NetworkManager[49031]: <info>  [1765007465.0423] device (tap680d7c20-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:51:05 compute-1 ovn_controller[130279]: 2025-12-06T07:51:05Z|00641|binding|INFO|Releasing lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 from this chassis (sb_readonly=0)
Dec 06 07:51:05 compute-1 ovn_controller[130279]: 2025-12-06T07:51:05Z|00642|binding|INFO|Setting lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 down in Southbound
Dec 06 07:51:05 compute-1 ovn_controller[130279]: 2025-12-06T07:51:05Z|00643|binding|INFO|Removing iface tap680d7c20-81 ovn-installed in OVS
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.054 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.057 226109 DEBUG nova.objects.instance [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'flavor' on Instance uuid e7d5d854-2a1f-485b-931a-4ec90cf7ba04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.058 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.061 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:1c:a5 10.100.0.11'], port_security=['fa:16:3e:d5:1c:a5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0852932a-3266-432f-9975-8870535aff4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=680d7c20-81e0-48d0-954c-9fbcda7e7615) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.062 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 680d7c20-81e0-48d0-954c-9fbcda7e7615 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 unbound from our chassis
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.065 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d151181-0dfe-43ab-b47e-15b53add33a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.066 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6bfa51-e2b2-403d-89b4-83813b176861]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.067 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace which is not needed anymore
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.069 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 ceph-mon[81689]: osdmap e375: 3 total, 3 up, 3 in
Dec 06 07:51:05 compute-1 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Dec 06 07:51:05 compute-1 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a4.scope: Consumed 16.422s CPU time.
Dec 06 07:51:05 compute-1 systemd-machined[190302]: Machine qemu-74-instance-000000a4 terminated.
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.121 226109 DEBUG oslo_concurrency.lockutils [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:05 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[288545]: [NOTICE]   (288549) : haproxy version is 2.8.14-c23fe91
Dec 06 07:51:05 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[288545]: [NOTICE]   (288549) : path to executable is /usr/sbin/haproxy
Dec 06 07:51:05 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[288545]: [WARNING]  (288549) : Exiting Master process...
Dec 06 07:51:05 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[288545]: [ALERT]    (288549) : Current worker (288551) exited with code 143 (Terminated)
Dec 06 07:51:05 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[288545]: [WARNING]  (288549) : All workers exited. Exiting... (0)
Dec 06 07:51:05 compute-1 systemd[1]: libpod-9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054.scope: Deactivated successfully.
Dec 06 07:51:05 compute-1 podman[289276]: 2025-12-06 07:51:05.268506724 +0000 UTC m=+0.112944951 container died 9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:51:05 compute-1 kernel: tap680d7c20-81: entered promiscuous mode
Dec 06 07:51:05 compute-1 ovn_controller[130279]: 2025-12-06T07:51:05Z|00644|binding|INFO|Claiming lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 for this chassis.
Dec 06 07:51:05 compute-1 ovn_controller[130279]: 2025-12-06T07:51:05Z|00645|binding|INFO|680d7c20-81e0-48d0-954c-9fbcda7e7615: Claiming fa:16:3e:d5:1c:a5 10.100.0.11
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.280 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 kernel: tap680d7c20-81 (unregistering): left promiscuous mode
Dec 06 07:51:05 compute-1 NetworkManager[49031]: <info>  [1765007465.2825] manager: (tap680d7c20-81): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.291 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:1c:a5 10.100.0.11'], port_security=['fa:16:3e:d5:1c:a5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0852932a-3266-432f-9975-8870535aff4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=680d7c20-81e0-48d0-954c-9fbcda7e7615) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.305 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 ovn_controller[130279]: 2025-12-06T07:51:05Z|00646|binding|INFO|Releasing lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 from this chassis (sb_readonly=0)
Dec 06 07:51:05 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054-userdata-shm.mount: Deactivated successfully.
Dec 06 07:51:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-9f94249b1a5ea8f24422378ede30675ce9ca8d0236cea2a5d9923359cdcd1b77-merged.mount: Deactivated successfully.
Dec 06 07:51:05 compute-1 podman[289276]: 2025-12-06 07:51:05.324706266 +0000 UTC m=+0.169144483 container cleanup 9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:51:05 compute-1 systemd[1]: libpod-conmon-9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054.scope: Deactivated successfully.
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.370 226109 INFO nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Instance shutdown successfully after 6 seconds.
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.374 226109 INFO nova.virt.libvirt.driver [-] [instance: 0852932a-3266-432f-9975-8870535aff4e] Instance destroyed successfully.
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.374 226109 DEBUG nova.objects.instance [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:05 compute-1 podman[289307]: 2025-12-06 07:51:05.391182706 +0000 UTC m=+0.043130852 container remove 9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.396 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f99add9c-0d8f-41ee-a844-f4f2c86a1f2b]: (4, ('Sat Dec  6 07:51:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054)\n9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054\nSat Dec  6 07:51:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054)\n9017729a4a57ea596c59d0e3184c8ca4c6a1a1c755e026a8321a31b391f03054\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.397 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9ea1b8-5a4a-4288-a980-2966f199bed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.398 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:05 compute-1 kernel: tap3d151181-00: left promiscuous mode
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.401 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.416 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.418 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a3cb5b47-02c7-43a8-9c57-a4afcf752651]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.430 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[828ea417-3911-4c64-839c-189b1d856d83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.432 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3d4334-b0c6-4d6d-b23c-d974e7c3b13c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.447 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7c0563-1156-4360-b903-5dbbcdfc5fef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768709, 'reachable_time': 20571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289323, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.451 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.451 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[dab0eeff-6bd2-4355-b4c9-9646c7456d25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.451 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 680d7c20-81e0-48d0-954c-9fbcda7e7615 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 bound to our chassis
Dec 06 07:51:05 compute-1 systemd[1]: run-netns-ovnmeta\x2d3d151181\x2d0dfe\x2d43ab\x2db47e\x2d15b53add33a6.mount: Deactivated successfully.
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.453 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.465 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[afbcdb3d-ccac-48dd-ac49-10588e48616a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.466 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d151181-01 in ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.467 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d151181-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.467 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2bcfb5d3-906b-4b76-8f9d-4bffaca1beca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.468 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[524512e3-b818-4365-871c-86a7a5dbc5db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.480 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c1da84-6f5d-4e41-b14c-171ef101873e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.502 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[50b6ab0f-9173-41e8-b2c9-25bb63dfc97d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.533 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[85923c14-eeca-48c2-b545-5bcceb5c3189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.537 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4b35edb3-1c8d-4555-81b4-aa16229dc09c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 systemd-udevd[289257]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:51:05 compute-1 NetworkManager[49031]: <info>  [1765007465.5392] manager: (tap3d151181-00): new Veth device (/org/freedesktop/NetworkManager/Devices/291)
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.569 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8949d8cc-8668-4f25-af63-41a571e4c992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.572 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[169aa149-1461-4953-9b5c-2f62d58e7fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 NetworkManager[49031]: <info>  [1765007465.5952] device (tap3d151181-00): carrier: link connected
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.601 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[0da7013b-dd76-457d-aa08-6e195fe72cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.626 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ce36fada-f877-46a8-bf92-b9edda421825]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774509, 'reachable_time': 29861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289351, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.641 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea94556-bbbf-40a4-bb78-931984fe0ee2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:130b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 774509, 'tstamp': 774509}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289352, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.660 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[74062922-34b1-459c-8423-5614bf60a634]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774509, 'reachable_time': 29861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289353, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.688 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[43feddaa-32c1-40b6-808a-c03922d5925e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.744 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a7366fbf-9d41-4ead-96bb-08b8e6456f33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.745 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.745 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.746 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d151181-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.747 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 NetworkManager[49031]: <info>  [1765007465.7478] manager: (tap3d151181-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Dec 06 07:51:05 compute-1 kernel: tap3d151181-00: entered promiscuous mode
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.750 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.751 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d151181-00, col_values=(('external_ids', {'iface-id': '7c0488e1-35c2-4c92-b43c-271fbeecd9ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.752 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 ovn_controller[130279]: 2025-12-06T07:51:05Z|00647|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=1)
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.763 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 nova_compute[226101]: 2025-12-06 07:51:05.766 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.766 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.767 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7591a97e-dce2-444b-8238-fbafd1b879f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.768 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:51:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:05.768 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'env', 'PROCESS_TAG=haproxy-3d151181-0dfe-43ab-b47e-15b53add33a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d151181-0dfe-43ab-b47e-15b53add33a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:51:06 compute-1 ceph-mon[81689]: pgmap v2913: 305 pgs: 305 active+clean; 575 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Dec 06 07:51:06 compute-1 podman[289388]: 2025-12-06 07:51:06.100775053 +0000 UTC m=+0.051786174 container create 6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:51:06 compute-1 systemd[1]: Started libpod-conmon-6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544.scope.
Dec 06 07:51:06 compute-1 podman[289388]: 2025-12-06 07:51:06.073846919 +0000 UTC m=+0.024858060 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:51:06 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:51:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5e9f6d06bc89c4147909f82953739c9666597cc510d3f085fd51e982abbb8e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:06 compute-1 podman[289388]: 2025-12-06 07:51:06.193970082 +0000 UTC m=+0.144981253 container init 6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:51:06 compute-1 podman[289388]: 2025-12-06 07:51:06.200857978 +0000 UTC m=+0.151869109 container start 6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:51:06 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[289403]: [NOTICE]   (289407) : New worker (289409) forked
Dec 06 07:51:06 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[289403]: [NOTICE]   (289407) : Loading success.
Dec 06 07:51:06 compute-1 nova_compute[226101]: 2025-12-06 07:51:06.456 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:06.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:51:06 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:06.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:51:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:07.954 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:1c:a5 10.100.0.11'], port_security=['fa:16:3e:d5:1c:a5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0852932a-3266-432f-9975-8870535aff4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=680d7c20-81e0-48d0-954c-9fbcda7e7615) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:51:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:07.955 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 680d7c20-81e0-48d0-954c-9fbcda7e7615 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 unbound from our chassis
Dec 06 07:51:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:07.957 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d151181-0dfe-43ab-b47e-15b53add33a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:51:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:07.958 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ad51c002-2729-4efb-9cf5-02e938cd2e95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:07.958 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace which is not needed anymore
Dec 06 07:51:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:08 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[289403]: [NOTICE]   (289407) : haproxy version is 2.8.14-c23fe91
Dec 06 07:51:08 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[289403]: [NOTICE]   (289407) : path to executable is /usr/sbin/haproxy
Dec 06 07:51:08 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[289403]: [WARNING]  (289407) : Exiting Master process...
Dec 06 07:51:08 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[289403]: [ALERT]    (289407) : Current worker (289409) exited with code 143 (Terminated)
Dec 06 07:51:08 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[289403]: [WARNING]  (289407) : All workers exited. Exiting... (0)
Dec 06 07:51:08 compute-1 systemd[1]: libpod-6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544.scope: Deactivated successfully.
Dec 06 07:51:08 compute-1 podman[289433]: 2025-12-06 07:51:08.355022986 +0000 UTC m=+0.297427456 container died 6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:51:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:51:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:08.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:51:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:08 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:08.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:08 compute-1 nova_compute[226101]: 2025-12-06 07:51:08.893 226109 INFO nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Attempting rescue
Dec 06 07:51:08 compute-1 nova_compute[226101]: 2025-12-06 07:51:08.894 226109 DEBUG nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec 06 07:51:09 compute-1 nova_compute[226101]: 2025-12-06 07:51:09.083 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:09 compute-1 nova_compute[226101]: 2025-12-06 07:51:09.274 226109 DEBUG nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:51:09 compute-1 nova_compute[226101]: 2025-12-06 07:51:09.275 226109 INFO nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Creating image(s)
Dec 06 07:51:09 compute-1 ceph-mon[81689]: pgmap v2914: 305 pgs: 305 active+clean; 580 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 180 op/s
Dec 06 07:51:09 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544-userdata-shm.mount: Deactivated successfully.
Dec 06 07:51:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-4e5e9f6d06bc89c4147909f82953739c9666597cc510d3f085fd51e982abbb8e-merged.mount: Deactivated successfully.
Dec 06 07:51:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:10.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:10 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:10.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:10 compute-1 podman[289433]: 2025-12-06 07:51:10.662617744 +0000 UTC m=+2.605022224 container cleanup 6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:51:10 compute-1 systemd[1]: libpod-conmon-6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544.scope: Deactivated successfully.
Dec 06 07:51:10 compute-1 ceph-mon[81689]: pgmap v2915: 305 pgs: 305 active+clean; 582 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 193 op/s
Dec 06 07:51:10 compute-1 nova_compute[226101]: 2025-12-06 07:51:10.688 226109 DEBUG nova.storage.rbd_utils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:10 compute-1 nova_compute[226101]: 2025-12-06 07:51:10.694 226109 DEBUG nova.objects.instance [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:10 compute-1 podman[289479]: 2025-12-06 07:51:10.987509368 +0000 UTC m=+0.304006093 container remove 6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:51:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:10.994 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9eae06a9-2df5-47ec-b033-4df47767583f]: (4, ('Sat Dec  6 07:51:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544)\n6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544\nSat Dec  6 07:51:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544)\n6d510ce9b4ccba4d48b187cc1d68bc9a28fde51a461e50da25cd1471224f9544\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:10.996 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb4d6c5-79fb-46f9-9091-e6ebc30d7201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:10 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:10.997 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:11 compute-1 nova_compute[226101]: 2025-12-06 07:51:10.999 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:11 compute-1 kernel: tap3d151181-00: left promiscuous mode
Dec 06 07:51:11 compute-1 nova_compute[226101]: 2025-12-06 07:51:11.018 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:11.020 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cf40dd72-b2a3-4413-97eb-becabb6beb58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:11.034 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9455cb10-1c0f-4843-8a6a-b7bc20950507]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:11.037 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d4581a61-34ae-4c2e-9eec-201a6dc24828]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:11.054 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7b2ae6-8715-42ff-a9fc-86d488ae79e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774503, 'reachable_time': 27829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289499, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:11.057 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:51:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:11.058 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ad762aba-d969-476d-bde2-1a6e463cf177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:11 compute-1 systemd[1]: run-netns-ovnmeta\x2d3d151181\x2d0dfe\x2d43ab\x2db47e\x2d15b53add33a6.mount: Deactivated successfully.
Dec 06 07:51:11 compute-1 nova_compute[226101]: 2025-12-06 07:51:11.505 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4194304661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:51:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4194304661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:51:11 compute-1 ceph-mon[81689]: pgmap v2916: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 169 op/s
Dec 06 07:51:11 compute-1 sudo[289504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:11 compute-1 sudo[289504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:11 compute-1 sudo[289504]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:11 compute-1 sudo[289529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:51:11 compute-1 sudo[289529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:11 compute-1 sudo[289529]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:11 compute-1 sudo[289554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:11 compute-1 sudo[289554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:11 compute-1 sudo[289554]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:12 compute-1 sudo[289579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:51:12 compute-1 sudo[289579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:12 compute-1 sudo[289579]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e376 e376: 3 total, 3 up, 3 in
Dec 06 07:51:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:12.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:12 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:12.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:13 compute-1 ceph-mon[81689]: osdmap e376: 3 total, 3 up, 3 in
Dec 06 07:51:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:51:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:51:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:51:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:51:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:51:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:51:13 compute-1 ceph-mon[81689]: pgmap v2918: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 904 KiB/s rd, 2.5 MiB/s wr, 145 op/s
Dec 06 07:51:14 compute-1 nova_compute[226101]: 2025-12-06 07:51:14.085 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:14 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:14.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:14.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:15 compute-1 ceph-mon[81689]: pgmap v2919: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 473 KiB/s rd, 692 KiB/s wr, 128 op/s
Dec 06 07:51:16 compute-1 nova_compute[226101]: 2025-12-06 07:51:16.508 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:16 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:16.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:16.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:16 compute-1 nova_compute[226101]: 2025-12-06 07:51:16.981 226109 DEBUG nova.storage.rbd_utils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.014 226109 DEBUG nova.storage.rbd_utils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.019 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.047 226109 DEBUG oslo_concurrency.lockutils [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.048 226109 DEBUG oslo_concurrency.lockutils [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.048 226109 INFO nova.compute.manager [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Attaching volume 3c0c17ef-cec5-45f2-956d-446361af61f4 to /dev/vdb
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.092 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.092 226109 DEBUG oslo_concurrency.lockutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.093 226109 DEBUG oslo_concurrency.lockutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.093 226109 DEBUG oslo_concurrency.lockutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.120 226109 DEBUG nova.storage.rbd_utils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.124 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0852932a-3266-432f-9975-8870535aff4e_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:17 compute-1 ceph-mon[81689]: pgmap v2920: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 212 KiB/s rd, 133 KiB/s wr, 137 op/s
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.347 226109 DEBUG os_brick.utils [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.348 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.358 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.358 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[bd49d875-209c-43f6-8152-dec90f4085cd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.359 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.367 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.367 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[06afa51d-b8b4-48b1-bf33-e4f75d280b97]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.369 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.376 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.377 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[9af8cc49-7ece-4f67-bf63-e069d97b715c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.379 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[27cd388e-36ec-4485-83fd-e16de92cd004]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.380 226109 DEBUG oslo_concurrency.processutils [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.412 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0852932a-3266-432f-9975-8870535aff4e_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.413 226109 DEBUG nova.objects.instance [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'migration_context' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.416 226109 DEBUG oslo_concurrency.processutils [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.419 226109 DEBUG os_brick.initiator.connectors.lightos [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.420 226109 DEBUG os_brick.initiator.connectors.lightos [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.420 226109 DEBUG os_brick.initiator.connectors.lightos [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.421 226109 DEBUG os_brick.utils [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:51:17 compute-1 nova_compute[226101]: 2025-12-06 07:51:17.421 226109 DEBUG nova.virt.block_device [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updating existing volume attachment record: 34e4d033-7d30-4dd0-a2cc-735fe90e8897 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:51:18 compute-1 podman[289719]: 2025-12-06 07:51:18.071777658 +0000 UTC m=+0.059742430 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd)
Dec 06 07:51:18 compute-1 podman[289720]: 2025-12-06 07:51:18.091276232 +0000 UTC m=+0.069204393 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:51:18 compute-1 podman[289721]: 2025-12-06 07:51:18.098581679 +0000 UTC m=+0.082298906 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:51:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:18 compute-1 nova_compute[226101]: 2025-12-06 07:51:18.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:18 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:18.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:18.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:19 compute-1 nova_compute[226101]: 2025-12-06 07:51:19.087 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:20 compute-1 sudo[289784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:20 compute-1 sudo[289784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:20 compute-1 sudo[289784]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:20 compute-1 sudo[289809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:51:20 compute-1 sudo[289809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:20 compute-1 sudo[289809]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:20 compute-1 nova_compute[226101]: 2025-12-06 07:51:20.298 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007465.2973537, 0852932a-3266-432f-9975-8870535aff4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:51:20 compute-1 nova_compute[226101]: 2025-12-06 07:51:20.299 226109 INFO nova.compute.manager [-] [instance: 0852932a-3266-432f-9975-8870535aff4e] VM Stopped (Lifecycle Event)
Dec 06 07:51:20 compute-1 sshd-session[289782]: Received disconnect from 136.112.8.45 port 46812:11: Bye Bye [preauth]
Dec 06 07:51:20 compute-1 sshd-session[289782]: Disconnected from authenticating user root 136.112.8.45 port 46812 [preauth]
Dec 06 07:51:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:20.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:20.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:20 compute-1 ceph-mon[81689]: pgmap v2921: 305 pgs: 305 active+clean; 604 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 1009 KiB/s wr, 166 op/s
Dec 06 07:51:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:51:21 compute-1 nova_compute[226101]: 2025-12-06 07:51:21.511 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.228 226109 DEBUG nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.229 226109 DEBUG nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Start _get_guest_xml network_info=[{"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "vif_mac": "fa:16:3e:d5:1c:a5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.230 226109 DEBUG nova.objects.instance [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'resources' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:51:22 compute-1 ceph-mon[81689]: pgmap v2922: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 135 KiB/s rd, 2.2 MiB/s wr, 228 op/s
Dec 06 07:51:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:22.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:22 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:22.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
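The radosgw "beast" pairs above look like load-balancer health probes (anonymous HEAD / every two seconds from the two peer nodes), and they follow a fixed access-log layout: request pointer, client IP, user, timestamp, request line, HTTP status, byte count, then a trailing latency field. A minimal parsing sketch in Python; the regex and group names are inferred from these lines, not taken from the radosgw source:

    import re

    # Pattern is an assumption inferred from the log lines above,
    # not radosgw's own formatter.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous '
            '[06/Dec/2025:07:51:22.604 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group('ip'), m.group('status'), m.group('latency'))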
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.742 226109 DEBUG nova.compute.manager [req-483361fe-2cf3-4f6f-9743-04bcb661a8ec req-2f11b63a-ca76-4e5c-9d10-8519884f95cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-unplugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.743 226109 DEBUG oslo_concurrency.lockutils [req-483361fe-2cf3-4f6f-9743-04bcb661a8ec req-2f11b63a-ca76-4e5c-9d10-8519884f95cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.743 226109 DEBUG oslo_concurrency.lockutils [req-483361fe-2cf3-4f6f-9743-04bcb661a8ec req-2f11b63a-ca76-4e5c-9d10-8519884f95cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.743 226109 DEBUG oslo_concurrency.lockutils [req-483361fe-2cf3-4f6f-9743-04bcb661a8ec req-2f11b63a-ca76-4e5c-9d10-8519884f95cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.743 226109 DEBUG nova.compute.manager [req-483361fe-2cf3-4f6f-9743-04bcb661a8ec req-2f11b63a-ca76-4e5c-9d10-8519884f95cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] No waiting events found dispatching network-vif-unplugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.744 226109 WARNING nova.compute.manager [req-483361fe-2cf3-4f6f-9743-04bcb661a8ec req-2f11b63a-ca76-4e5c-9d10-8519884f95cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received unexpected event network-vif-unplugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 for instance with vm_state active and task_state rescuing.
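The lockutils triplet above (acquired and released within a millisecond) is nova serializing external network events per instance: a named lock "<uuid>-events" guards the registry of waiting events, and because nothing was waiting for network-vif-unplugged, the event is logged as unexpected for a guest that is mid-rescue. A minimal sketch of the same pattern with oslo.concurrency; the registry dict and function are illustrative, not nova's actual structures:

    from oslo_concurrency import lockutils

    _waiting_events = {}  # illustrative: instance uuid -> {event name: event}

    def pop_instance_event(instance_uuid, event_name):
        # Same acquire/pop/release sequence logged above: the named lock
        # serializes concurrent event delivery for a single instance.
        with lockutils.lock(instance_uuid + '-events'):
            return _waiting_events.get(instance_uuid, {}).pop(event_name, None)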
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.764 226109 DEBUG nova.compute.manager [None req-240b48fa-df6a-40e8-b470-2cc4798c4d0f - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.765 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.765 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.766 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.766 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.766 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.794 226109 DEBUG nova.compute.manager [None req-240b48fa-df6a-40e8-b470-2cc4798c4d0f - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
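This sync line compares three encodings of the same thing: the database says power_state 1, libvirt reports 4, and the lifecycle event decoded that as "Stopped". The integers come from nova.compute.power_state; the values below are reproduced from upstream nova as best I recall, so treat them as a reference sketch rather than authoritative:

    # Assumed constants from nova.compute.power_state (upstream values).
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    db_power_state, vm_power_state = 1, 4
    assert (db_power_state, vm_power_state) == (RUNNING, SHUTDOWN)
    # The DB still says RUNNING while libvirt reports SHUTDOWN; that is
    # expected mid-rescue, and the pending task_state makes the sync skip
    # (see the "Skip." line further down).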
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.799 226109 WARNING nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.810 226109 DEBUG nova.virt.libvirt.host [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.810 226109 DEBUG nova.virt.libvirt.host [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.814 226109 DEBUG nova.virt.libvirt.host [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.814 226109 DEBUG nova.virt.libvirt.host [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.816 226109 DEBUG nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.816 226109 DEBUG nova.virt.hardware [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.817 226109 DEBUG nova.virt.hardware [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.817 226109 DEBUG nova.virt.hardware [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.818 226109 DEBUG nova.virt.hardware [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.818 226109 DEBUG nova.virt.hardware [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.818 226109 DEBUG nova.virt.hardware [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.818 226109 DEBUG nova.virt.hardware [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.819 226109 DEBUG nova.virt.hardware [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.819 226109 DEBUG nova.virt.hardware [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.819 226109 DEBUG nova.virt.hardware [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.820 226109 DEBUG nova.virt.hardware [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.820 226109 DEBUG nova.objects.instance [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.844 226109 INFO nova.compute.manager [None req-240b48fa-df6a-40e8-b470-2cc4798c4d0f - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 06 07:51:22 compute-1 nova_compute[226101]: 2025-12-06 07:51:22.853 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:51:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/255879142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:23 compute-1 nova_compute[226101]: 2025-12-06 07:51:23.408 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:23 compute-1 nova_compute[226101]: 2025-12-06 07:51:23.409 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:51:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3214352129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:23 compute-1 nova_compute[226101]: 2025-12-06 07:51:23.439 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
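Both audits shell out to the ceph CLI with the dedicated openstack client id; the 0.4-0.7 s round trips correspond to the mon_command dispatches visible in the ceph-mon lines. The same call can be reproduced with the standard library; the field names under "stats" are my assumption about the ceph df JSON schema:

    import json
    import subprocess

    # The exact command the resource tracker logs above.
    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    # "total_bytes"/"total_avail_bytes" are assumed schema keys.
    print(stats.get('total_bytes'), stats.get('total_avail_bytes'))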
Dec 06 07:51:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:51:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1178322315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:23 compute-1 nova_compute[226101]: 2025-12-06 07:51:23.856 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:51:23 compute-1 nova_compute[226101]: 2025-12-06 07:51:23.857 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:51:23 compute-1 nova_compute[226101]: 2025-12-06 07:51:23.860 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:51:23 compute-1 nova_compute[226101]: 2025-12-06 07:51:23.861 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:51:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:51:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3788578080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:23 compute-1 nova_compute[226101]: 2025-12-06 07:51:23.892 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:23 compute-1 nova_compute[226101]: 2025-12-06 07:51:23.893 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:23 compute-1 nova_compute[226101]: 2025-12-06 07:51:23.993 226109 DEBUG nova.objects.instance [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'flavor' on Instance uuid e7d5d854-2a1f-485b-931a-4ec90cf7ba04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.062 226109 DEBUG nova.virt.libvirt.driver [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Attempting to attach volume 3c0c17ef-cec5-45f2-956d-446361af61f4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.064 226109 DEBUG nova.virt.libvirt.guest [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-3c0c17ef-cec5-45f2-956d-446361af61f4">
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   </source>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <auth username="openstack">
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   </auth>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <serial>3c0c17ef-cec5-45f2-956d-446361af61f4</serial>
Dec 06 07:51:24 compute-1 nova_compute[226101]: </disk>
Dec 06 07:51:24 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
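The fragment above is the exact device XML nova hands to libvirt to hot-plug the RBD-backed Cinder volume into instance e7d5d854 as vdb on virtio. A minimal sketch of the same operation with the libvirt Python binding; the qemu:///system URI is the usual local one, and error handling is omitted:

    import libvirt

    disk_xml = """<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-3c0c17ef-cec5-45f2-956d-446361af61f4">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <auth username="openstack">
        <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
      </auth>
      <target dev="vdb" bus="virtio"/>
    </disk>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('e7d5d854-2a1f-485b-931a-4ec90cf7ba04')
    # Apply to both the live guest and the persistent definition, which is
    # roughly what nova does for a running, persistent domain.
    dom.attachDeviceFlags(disk_xml,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)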
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.088 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.098 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.099 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4124MB free_disk=20.71881866455078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.099 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.100 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:24 compute-1 ceph-mon[81689]: pgmap v2923: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 127 KiB/s rd, 2.0 MiB/s wr, 215 op/s
Dec 06 07:51:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/255879142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3214352129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1178322315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3788578080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:51:24 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3910783313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.347 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.348 226109 DEBUG nova.virt.libvirt.vif [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:49:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-881580221',display_name='tempest-ServerRescueNegativeTestJSON-server-881580221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-881580221',id=164,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:50:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-8ig59luj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:50:53Z,user_data=None,user_id='f2335740042045fba7f544ee5140eb87',uuid=0852932a-3266-432f-9975-8870535aff4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "vif_mac": "fa:16:3e:d5:1c:a5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563

Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.348 226109 DEBUG nova.network.os_vif_util [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "vif_mac": "fa:16:3e:d5:1c:a5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.349 226109 DEBUG nova.network.os_vif_util [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:1c:a5,bridge_name='br-int',has_traffic_filtering=True,id=680d7c20-81e0-48d0-954c-9fbcda7e7615,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680d7c20-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.350 226109 DEBUG nova.objects.instance [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.592 226109 DEBUG nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <uuid>0852932a-3266-432f-9975-8870535aff4e</uuid>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <name>instance-000000a4</name>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-881580221</nova:name>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:51:22</nova:creationTime>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <nova:user uuid="f2335740042045fba7f544ee5140eb87">tempest-ServerRescueNegativeTestJSON-1304226499-project-member</nova:user>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <nova:project uuid="4842ecff6dce4ccc981a6b65a14ea406">tempest-ServerRescueNegativeTestJSON-1304226499</nova:project>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <nova:port uuid="680d7c20-81e0-48d0-954c-9fbcda7e7615">
Dec 06 07:51:24 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <system>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <entry name="serial">0852932a-3266-432f-9975-8870535aff4e</entry>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <entry name="uuid">0852932a-3266-432f-9975-8870535aff4e</entry>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     </system>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <os>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   </os>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <features>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   </features>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0852932a-3266-432f-9975-8870535aff4e_disk.rescue">
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       </source>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0852932a-3266-432f-9975-8870535aff4e_disk">
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       </source>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <target dev="vdb" bus="virtio"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/0852932a-3266-432f-9975-8870535aff4e_disk.config.rescue">
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       </source>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:51:24 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:d5:1c:a5"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <target dev="tap680d7c20-81"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/console.log" append="off"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <video>
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     </video>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:51:24 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:51:24 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:51:24 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:51:24 compute-1 nova_compute[226101]: </domain>
Dec 06 07:51:24 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
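That closes the rescue-domain XML: vda is now the throwaway boot disk built from the cirros rescue image, the original root is demoted to vdb, and the config drive rides on sata as sda. The "Instance destroyed successfully" line that follows is the old guest being torn down before this definition takes its place. A sketch of the define-and-boot step with libvirt-python; the file name is hypothetical, and nova actually drives this through its own guest wrapper:

    import libvirt

    # instance-000000a4.xml is a hypothetical file holding the <domain>
    # document logged above.
    xml = open('instance-000000a4.xml').read()

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)  # replace the persistent definition
    dom.create()               # boot the guest from the rescue disk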
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.599 226109 INFO nova.virt.libvirt.driver [-] [instance: 0852932a-3266-432f-9975-8870535aff4e] Instance destroyed successfully.
Dec 06 07:51:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:24.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:24 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:24.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.760 226109 DEBUG nova.virt.libvirt.driver [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.761 226109 DEBUG nova.virt.libvirt.driver [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.761 226109 DEBUG nova.virt.libvirt.driver [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:51:24 compute-1 nova_compute[226101]: 2025-12-06 07:51:24.761 226109 DEBUG nova.virt.libvirt.driver [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No VIF found with MAC fa:16:3e:66:c8:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.176 226109 DEBUG nova.compute.manager [req-63f0ae34-62ec-454b-a853-3988acbde41f req-082c6f61-cb10-4cd1-b93a-599c9a5c8098 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.176 226109 DEBUG oslo_concurrency.lockutils [req-63f0ae34-62ec-454b-a853-3988acbde41f req-082c6f61-cb10-4cd1-b93a-599c9a5c8098 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.176 226109 DEBUG oslo_concurrency.lockutils [req-63f0ae34-62ec-454b-a853-3988acbde41f req-082c6f61-cb10-4cd1-b93a-599c9a5c8098 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.176 226109 DEBUG oslo_concurrency.lockutils [req-63f0ae34-62ec-454b-a853-3988acbde41f req-082c6f61-cb10-4cd1-b93a-599c9a5c8098 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.177 226109 DEBUG nova.compute.manager [req-63f0ae34-62ec-454b-a853-3988acbde41f req-082c6f61-cb10-4cd1-b93a-599c9a5c8098 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] No waiting events found dispatching network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.177 226109 WARNING nova.compute.manager [req-63f0ae34-62ec-454b-a853-3988acbde41f req-082c6f61-cb10-4cd1-b93a-599c9a5c8098 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received unexpected event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 for instance with vm_state active and task_state rescuing.
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.196 226109 DEBUG nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.197 226109 DEBUG nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.197 226109 DEBUG nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.197 226109 DEBUG nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No VIF found with MAC fa:16:3e:d5:1c:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.198 226109 INFO nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Using config drive
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.223 226109 DEBUG nova.storage.rbd_utils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
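Here nova probes the vms pool for an existing rescue config-drive image before building one. The same check with the python-rbd bindings; the pool, client id and image name are all taken from the log lines above:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    name = '0852932a-3266-432f-9975-8870535aff4e_disk.config.rescue'
    try:
        rbd.Image(ioctx, name).close()
        print('image exists')
    except rbd.ImageNotFound:
        print('image does not exist')  # the case logged above
    finally:
        ioctx.close()
        cluster.shutdown()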
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.229 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 0852932a-3266-432f-9975-8870535aff4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.230 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance e7d5d854-2a1f-485b-931a-4ec90cf7ba04 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.230 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.230 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
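The final view reconciles cleanly: used_ram of 768MB is the 512MB host reservation (visible as the MEMORY_MB "reserved" value in the inventory a few lines down) plus the two running m1.nano guests at 128MB each, and used_vcpus/used_disk are one vCPU and one root gigabyte per instance. As a quick check:

    reserved_host_memory_mb = 512    # matches the MEMORY_MB reservation below
    instance_memory_mb = [128, 128]  # the two m1.nano guests audited above
    print(reserved_host_memory_mb + sum(instance_memory_mb))  # 768, as logged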
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.255 226109 DEBUG nova.objects.instance [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.296 226109 DEBUG nova.objects.instance [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'keypairs' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.487 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.543 226109 DEBUG oslo_concurrency.lockutils [None req-c7f60749-ef15-4024-967e-afea444204c9 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 8.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3910783313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:51:25 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/149922795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.930 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
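Note: the disk-usage probe above is a plain subprocess call whose JSON output nova's RBD backend parses. A minimal sketch of the same pattern, assuming only that the ceph CLI, the client.openstack keyring, and /etc/ceph/ceph.conf are reachable as in this log (field names are the standard `ceph df --format=json` keys):

    import json
    import subprocess

    def ceph_df(client="openstack", conf="/etc/ceph/ceph.conf"):
        # Mirrors the logged command:
        #   ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    stats = ceph_df()["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])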
Dec 06 07:51:25 compute-1 nova_compute[226101]: 2025-12-06 07:51:25.934 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:51:26 compute-1 nova_compute[226101]: 2025-12-06 07:51:26.507 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
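Note: the inventory dict above is what placement turns into schedulable capacity; the usual formula is (total - reserved) * allocation_ratio, truncated to an integer. A quick worked check against the values in this log:

    # Effective capacity per resource class, using the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"]))
    # -> VCPU 32, MEMORY_MB 7168, DISK_GB 17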
Dec 06 07:51:26 compute-1 nova_compute[226101]: 2025-12-06 07:51:26.514 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:26 compute-1 ceph-mon[81689]: pgmap v2924: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 115 KiB/s rd, 1.8 MiB/s wr, 194 op/s
Dec 06 07:51:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3719497911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/149922795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:26.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:51:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:26 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:26.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:26 compute-1 nova_compute[226101]: 2025-12-06 07:51:26.655 226109 INFO nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Creating config drive at /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config.rescue
Dec 06 07:51:26 compute-1 nova_compute[226101]: 2025-12-06 07:51:26.660 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ilth1rm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:26 compute-1 nova_compute[226101]: 2025-12-06 07:51:26.691 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:51:26 compute-1 nova_compute[226101]: 2025-12-06 07:51:26.692 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:26 compute-1 nova_compute[226101]: 2025-12-06 07:51:26.795 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ilth1rm" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:26 compute-1 nova_compute[226101]: 2025-12-06 07:51:26.826 226109 DEBUG nova.storage.rbd_utils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 0852932a-3266-432f-9975-8870535aff4e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:26 compute-1 nova_compute[226101]: 2025-12-06 07:51:26.832 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config.rescue 0852932a-3266-432f-9975-8870535aff4e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:26 compute-1 nova_compute[226101]: 2025-12-06 07:51:26.977 226109 DEBUG oslo_concurrency.processutils [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config.rescue 0852932a-3266-432f-9975-8870535aff4e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:26 compute-1 nova_compute[226101]: 2025-12-06 07:51:26.978 226109 INFO nova.virt.libvirt.driver [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Deleting local config drive /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e/disk.config.rescue because it was imported into RBD.
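Note: the lines above (mkisofs build, rbd import, local cleanup) form a complete config-drive round trip. A condensed sketch of the same sequence, reusing the pool and client id shown in this log; the -publisher and -quiet options from the logged command are omitted for brevity:

    import os
    import subprocess

    def build_and_import_config_drive(iso_path, metadata_dir, image_name,
                                      pool="vms", client="openstack",
                                      conf="/etc/ceph/ceph.conf"):
        # 1. Build the ISO9660 config drive (volume label config-2, as logged).
        subprocess.run(
            ["mkisofs", "-o", iso_path, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", metadata_dir],
            check=True,
        )
        # 2. Import it into the RBD pool under the instance-specific image name.
        subprocess.run(
            ["rbd", "import", "--pool", pool, iso_path, image_name,
             "--image-format=2", "--id", client, "--conf", conf],
            check=True,
        )
        # 3. The local copy is redundant once imported, so delete it.
        os.remove(iso_path)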
Dec 06 07:51:27 compute-1 kernel: tap680d7c20-81: entered promiscuous mode
Dec 06 07:51:27 compute-1 NetworkManager[49031]: <info>  [1765007487.0198] manager: (tap680d7c20-81): new Tun device (/org/freedesktop/NetworkManager/Devices/293)
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.020 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:27 compute-1 ovn_controller[130279]: 2025-12-06T07:51:27Z|00648|binding|INFO|Claiming lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 for this chassis.
Dec 06 07:51:27 compute-1 ovn_controller[130279]: 2025-12-06T07:51:27Z|00649|binding|INFO|680d7c20-81e0-48d0-954c-9fbcda7e7615: Claiming fa:16:3e:d5:1c:a5 10.100.0.11
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.034 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:1c:a5 10.100.0.11'], port_security=['fa:16:3e:d5:1c:a5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0852932a-3266-432f-9975-8870535aff4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '5', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=680d7c20-81e0-48d0-954c-9fbcda7e7615) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.035 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 680d7c20-81e0-48d0-954c-9fbcda7e7615 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 bound to our chassis
Dec 06 07:51:27 compute-1 ovn_controller[130279]: 2025-12-06T07:51:27Z|00650|binding|INFO|Setting lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 ovn-installed in OVS
Dec 06 07:51:27 compute-1 ovn_controller[130279]: 2025-12-06T07:51:27Z|00651|binding|INFO|Setting lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 up in Southbound
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.036 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.037 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.040 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.048 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[025fd17f-1ddd-440b-906a-69bd6f344cf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.049 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d151181-01 in ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:51:27 compute-1 systemd-udevd[290038]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.051 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d151181-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.051 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[56e98fa3-9e45-453a-a77a-0f078c6653c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.052 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bd64cbc0-9f4f-4f42-b66f-443695f8f6b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 systemd-machined[190302]: New machine qemu-76-instance-000000a4.
Dec 06 07:51:27 compute-1 NetworkManager[49031]: <info>  [1765007487.0657] device (tap680d7c20-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:51:27 compute-1 NetworkManager[49031]: <info>  [1765007487.0664] device (tap680d7c20-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.066 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[fe27498b-7c08-4367-a0b6-e9aae1540dc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 systemd[1]: Started Virtual Machine qemu-76-instance-000000a4.
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.088 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb5ad52-7520-4fc8-81be-1f7fb8bb49f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.117 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[672b7ff3-3569-4d1b-8dae-6e5fe79a271a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.123 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[70167349-42f1-4b6e-a1c4-483f9c9bcc2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 NetworkManager[49031]: <info>  [1765007487.1245] manager: (tap3d151181-00): new Veth device (/org/freedesktop/NetworkManager/Devices/294)
Dec 06 07:51:27 compute-1 systemd-udevd[290041]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.160 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3454e4-0923-4fae-a5a9-e3adf4748a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.163 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[74d10338-0611-4d68-b419-f9655c94f555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 NetworkManager[49031]: <info>  [1765007487.1904] device (tap3d151181-00): carrier: link connected
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.197 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5c250711-9583-4397-a30f-5a72ea5140fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.212 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cff9cc92-8449-4d16-abe1-0716b9d4644e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776669, 'reachable_time': 18498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290070, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.227 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[efb95f73-489a-4187-bce1-c15bd57f8305]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:130b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 776669, 'tstamp': 776669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290071, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.243 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3e99b698-478f-4bd4-b8af-a58a41334ada]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776669, 'reachable_time': 18498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290072, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.277 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a469e58b-ce50-4838-91d9-502da379b2b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.345 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b1d44d-5cc9-4b9a-be5d-0cbb72553601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.346 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.346 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.347 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d151181-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:27 compute-1 kernel: tap3d151181-00: entered promiscuous mode
Dec 06 07:51:27 compute-1 NetworkManager[49031]: <info>  [1765007487.3883] manager: (tap3d151181-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.385 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.390 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d151181-00, col_values=(('external_ids', {'iface-id': '7c0488e1-35c2-4c92-b43c-271fbeecd9ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
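Note: taken together, the DelPortCommand, AddPortCommand and DbSetCommand transactions above amount to three ovs-vsctl operations. A sketch of the equivalent calls (port and iface-id values copied from this log; the external_ids:iface-id key is what lets ovn-controller bind the OVS interface to the OVN logical port):

    import subprocess

    def wire_metadata_port(port="tap3d151181-00",
                           iface_id="7c0488e1-35c2-4c92-b43c-271fbeecd9ea"):
        # DelPortCommand(bridge=br-ex, if_exists=True)
        subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", port],
                       check=True)
        # AddPortCommand(bridge=br-int, may_exist=True)
        subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
                       check=True)
        # DbSetCommand(table=Interface, external_ids={'iface-id': ...})
        subprocess.run(["ovs-vsctl", "set", "Interface", port,
                        "external_ids:iface-id=%s" % iface_id], check=True)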
Dec 06 07:51:27 compute-1 ovn_controller[130279]: 2025-12-06T07:51:27Z|00652|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.391 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.405 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.406 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.407 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.408 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[05e078ad-1354-4373-b2f2-67888bc0cb3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.408 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.409 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'env', 'PROCESS_TAG=haproxy-3d151181-0dfe-43ab-b47e-15b53add33a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d151181-0dfe-43ab-b47e-15b53add33a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
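Note: the haproxy_cfg block printed above is rendered per network and then launched inside the ovnmeta- namespace via rootwrap, as the command line shows. A condensed sketch of the rendering step, keeping only the lines that vary or carry the routing logic (pidfile path, listener bind, unix-socket backend, and the X-OVN-Network-ID header the metadata agent keys on); the full template in neutron.agent.ovn.metadata.driver also emits the global/defaults timeouts seen in the log:

    def render_metadata_proxy_cfg(network_id,
                                  pid_dir="/var/lib/neutron/external/pids",
                                  sock="/var/lib/neutron/metadata_proxy"):
        # Abbreviated per-network haproxy config, modeled on the logged output.
        return "\n".join([
            "global",
            "    log-tag     haproxy-metadata-proxy-%s" % network_id,
            "    pidfile     %s/%s.pid.haproxy" % (pid_dir, network_id),
            "    daemon",
            "listen listener",
            "    bind 169.254.169.254:80",
            "    server metadata %s" % sock,
            "    http-request add-header X-OVN-Network-ID %s" % network_id,
        ])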
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.565 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007487.565252, 0852932a-3266-432f-9975-8870535aff4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:51:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/600074495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.566 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] VM Resumed (Lifecycle Event)
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.573 226109 DEBUG nova.compute.manager [None req-f1ce8146-5782-4cd1-b9b0-2da3e4ec58c2 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.621 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.626 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.700 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.700 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007487.5661304, 0852932a-3266-432f-9975-8870535aff4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.701 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] VM Started (Lifecycle Event)
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.732 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.738 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.777 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.780 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:51:27 compute-1 podman[290164]: 2025-12-06 07:51:27.791218551 +0000 UTC m=+0.056636806 container create 330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:51:27 compute-1 systemd[1]: Started libpod-conmon-330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595.scope.
Dec 06 07:51:27 compute-1 podman[290164]: 2025-12-06 07:51:27.764060181 +0000 UTC m=+0.029478456 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:51:27 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:51:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5450bfc418cbd014973885b2eb7a4f310bee3cb398779ac8a36f542987449b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:27 compute-1 podman[290164]: 2025-12-06 07:51:27.891287274 +0000 UTC m=+0.156705549 container init 330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 06 07:51:27 compute-1 podman[290164]: 2025-12-06 07:51:27.897576444 +0000 UTC m=+0.162994699 container start 330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Dec 06 07:51:27 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290179]: [NOTICE]   (290183) : New worker (290185) forked
Dec 06 07:51:27 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290179]: [NOTICE]   (290183) : Loading success.
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.946 226109 DEBUG nova.compute.manager [req-801212cd-cd4c-4029-a612-b1d34bbbaa96 req-dcf3474a-f527-4431-84f3-451a4e84e237 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.947 226109 DEBUG oslo_concurrency.lockutils [req-801212cd-cd4c-4029-a612-b1d34bbbaa96 req-dcf3474a-f527-4431-84f3-451a4e84e237 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.947 226109 DEBUG oslo_concurrency.lockutils [req-801212cd-cd4c-4029-a612-b1d34bbbaa96 req-dcf3474a-f527-4431-84f3-451a4e84e237 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.948 226109 DEBUG oslo_concurrency.lockutils [req-801212cd-cd4c-4029-a612-b1d34bbbaa96 req-dcf3474a-f527-4431-84f3-451a4e84e237 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.948 226109 DEBUG nova.compute.manager [req-801212cd-cd4c-4029-a612-b1d34bbbaa96 req-dcf3474a-f527-4431-84f3-451a4e84e237 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] No waiting events found dispatching network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:51:27 compute-1 nova_compute[226101]: 2025-12-06 07:51:27.948 226109 WARNING nova.compute.manager [req-801212cd-cd4c-4029-a612-b1d34bbbaa96 req-dcf3474a-f527-4431-84f3-451a4e84e237 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received unexpected event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 for instance with vm_state rescued and task_state None.
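Note: the WARNING above is benign; pop_instance_event found no registered waiter for network-vif-plugged because the rescue had already completed by the time Neutron delivered the notification. The underlying pattern is a per-instance table of named events guarded by the lock seen in the surrounding lines. A toy sketch of that pattern (names are illustrative, not nova's actual internals):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = defaultdict(dict)   # instance uuid -> {name: Event}

        def prepare(self, uuid, name):
            # A task that expects a notification registers a waiter first.
            with self._lock:
                ev = threading.Event()
                self._events[uuid][name] = ev
                return ev

        def pop(self, uuid, name):
            # External notifications pop the waiter; when none was registered,
            # the caller logs the "Received unexpected event" warning instead.
            with self._lock:
                ev = self._events.get(uuid, {}).pop(name, None)
            if ev is not None:
                ev.set()
            return ev is not None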
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.956 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:51:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:27.957 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.473 226109 DEBUG oslo_concurrency.lockutils [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.474 226109 DEBUG oslo_concurrency.lockutils [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.489 226109 INFO nova.compute.manager [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Detaching volume 3c0c17ef-cec5-45f2-956d-446361af61f4
Dec 06 07:51:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:28.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:28.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.620 226109 INFO nova.virt.block_device [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Attempting to driver detach volume 3c0c17ef-cec5-45f2-956d-446361af61f4 from mountpoint /dev/vdb
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.628 226109 DEBUG nova.virt.libvirt.driver [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Attempting to detach device vdb from instance e7d5d854-2a1f-485b-931a-4ec90cf7ba04 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.629 226109 DEBUG nova.virt.libvirt.guest [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:51:28 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-3c0c17ef-cec5-45f2-956d-446361af61f4">
Dec 06 07:51:28 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]:   </source>
Dec 06 07:51:28 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]:   <serial>3c0c17ef-cec5-45f2-956d-446361af61f4</serial>
Dec 06 07:51:28 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]: </disk>
Dec 06 07:51:28 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.636 226109 INFO nova.virt.libvirt.driver [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Successfully detached device vdb from instance e7d5d854-2a1f-485b-931a-4ec90cf7ba04 from the persistent domain config.
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.636 226109 DEBUG nova.virt.libvirt.driver [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e7d5d854-2a1f-485b-931a-4ec90cf7ba04 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.636 226109 DEBUG nova.virt.libvirt.guest [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:51:28 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-3c0c17ef-cec5-45f2-956d-446361af61f4">
Dec 06 07:51:28 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]:   </source>
Dec 06 07:51:28 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]:   <serial>3c0c17ef-cec5-45f2-956d-446361af61f4</serial>
Dec 06 07:51:28 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:51:28 compute-1 nova_compute[226101]: </disk>
Dec 06 07:51:28 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.746 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765007488.7458596, e7d5d854-2a1f-485b-931a-4ec90cf7ba04 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.748 226109 DEBUG nova.virt.libvirt.driver [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e7d5d854-2a1f-485b-931a-4ec90cf7ba04 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:51:28 compute-1 nova_compute[226101]: 2025-12-06 07:51:28.750 226109 INFO nova.virt.libvirt.driver [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Successfully detached device vdb from instance e7d5d854-2a1f-485b-931a-4ec90cf7ba04 from the live domain config.
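Note: the detach above is two-phase: the device is first removed from the persistent domain definition, then from the live guest, where completion is asynchronous and confirmed by the DeviceRemovedEvent seen a few lines earlier. A minimal libvirt-python sketch of the same flow (abbreviated disk XML; libvirt matches the device by its target dev, whereas nova passes the full definition logged above):

    import libvirt

    DISK_XML = """<disk type="network" device="disk">
      <target dev="vdb" bus="virtio"/>
    </disk>"""

    def detach_disk(instance_uuid, xml=DISK_XML):
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.lookupByUUIDString(instance_uuid)
            # Phase 1: drop the device from the persistent domain config.
            dom.detachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
            # Phase 2: request live detach; the guest acks asynchronously,
            # which is why nova then waits for the device-removed event.
            dom.detachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        finally:
            conn.close()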
Dec 06 07:51:29 compute-1 ceph-mon[81689]: pgmap v2925: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 87 KiB/s rd, 1.8 MiB/s wr, 147 op/s
Dec 06 07:51:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1438578257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/156477243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:29 compute-1 nova_compute[226101]: 2025-12-06 07:51:29.070 226109 INFO nova.compute.manager [None req-63fd8954-1001-4ff3-8e9b-9bae9e8cd842 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Unrescuing
Dec 06 07:51:29 compute-1 nova_compute[226101]: 2025-12-06 07:51:29.070 226109 DEBUG oslo_concurrency.lockutils [None req-63fd8954-1001-4ff3-8e9b-9bae9e8cd842 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:51:29 compute-1 nova_compute[226101]: 2025-12-06 07:51:29.070 226109 DEBUG oslo_concurrency.lockutils [None req-63fd8954-1001-4ff3-8e9b-9bae9e8cd842 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquired lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:51:29 compute-1 nova_compute[226101]: 2025-12-06 07:51:29.071 226109 DEBUG nova.network.neutron [None req-63fd8954-1001-4ff3-8e9b-9bae9e8cd842 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:51:29 compute-1 nova_compute[226101]: 2025-12-06 07:51:29.089 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:29 compute-1 nova_compute[226101]: 2025-12-06 07:51:29.101 226109 DEBUG nova.objects.instance [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'flavor' on Instance uuid e7d5d854-2a1f-485b-931a-4ec90cf7ba04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:29 compute-1 nova_compute[226101]: 2025-12-06 07:51:29.520 226109 DEBUG oslo_concurrency.lockutils [None req-f8b91cde-2b0b-4ea8-aee4-bf3a98d8c4b5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:30 compute-1 nova_compute[226101]: 2025-12-06 07:51:30.062 226109 DEBUG nova.compute.manager [req-92264537-ee72-4ca6-b223-e135204725db req-5afbd8da-157c-4be3-b5e2-a1b31b8d3817 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:30 compute-1 nova_compute[226101]: 2025-12-06 07:51:30.063 226109 DEBUG oslo_concurrency.lockutils [req-92264537-ee72-4ca6-b223-e135204725db req-5afbd8da-157c-4be3-b5e2-a1b31b8d3817 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:30 compute-1 nova_compute[226101]: 2025-12-06 07:51:30.063 226109 DEBUG oslo_concurrency.lockutils [req-92264537-ee72-4ca6-b223-e135204725db req-5afbd8da-157c-4be3-b5e2-a1b31b8d3817 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:30 compute-1 nova_compute[226101]: 2025-12-06 07:51:30.063 226109 DEBUG oslo_concurrency.lockutils [req-92264537-ee72-4ca6-b223-e135204725db req-5afbd8da-157c-4be3-b5e2-a1b31b8d3817 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:30 compute-1 nova_compute[226101]: 2025-12-06 07:51:30.063 226109 DEBUG nova.compute.manager [req-92264537-ee72-4ca6-b223-e135204725db req-5afbd8da-157c-4be3-b5e2-a1b31b8d3817 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] No waiting events found dispatching network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:51:30 compute-1 nova_compute[226101]: 2025-12-06 07:51:30.063 226109 WARNING nova.compute.manager [req-92264537-ee72-4ca6-b223-e135204725db req-5afbd8da-157c-4be3-b5e2-a1b31b8d3817 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received unexpected event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 for instance with vm_state rescued and task_state unrescuing.
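The six nova_compute lines above are Nova's external-event dispatch in miniature: grab the per-instance "<uuid>-events" lock, pop whichever waiter registered for the event, release, and warn when nothing was waiting (here because the instance is mid-unrescue rather than waiting on a plug). A minimal Python sketch of that pop-or-warn pattern follows; the class and method names echo the log, but the internals are simplified guesses, not Nova's actual implementation:

    import threading

    class InstanceEvents:
        # Hypothetical per-instance event registry (sketch, not Nova's code).
        def __init__(self):
            self._lock = threading.Lock()   # plays the role of the "<uuid>-events" lock
            self._waiters = {}              # (instance_uuid, event_name) -> callable

        def prepare_for_event(self, instance_uuid, event_name, callback):
            with self._lock:
                self._waiters[(instance_uuid, event_name)] = callback

        def pop_instance_event(self, instance_uuid, event_name):
            # Acquire, pop the waiter if one was registered, release.
            with self._lock:
                return self._waiters.pop((instance_uuid, event_name), None)

    def external_instance_event(events, instance_uuid, event_name, vm_state, task_state):
        waiter = events.pop_instance_event(instance_uuid, event_name)
        if waiter is None:
            # Corresponds to "No waiting events found dispatching ..." followed by
            # "Received unexpected event ..." in the log above.
            print('WARNING: unexpected event %s (vm_state=%s, task_state=%s)'
                  % (event_name, vm_state, task_state))
        else:
            waiter()

    external_instance_event(InstanceEvents(), '0852932a-3266-432f-9975-8870535aff4e',
                            'network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615',
                            'rescued', 'unrescuing')

The per-instance lock is why the log's "acquired ... waited 0.000s / released ... held 0.000s" pairs are so cheap: the critical section is a single dict pop.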
Dec 06 07:51:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:30.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:30.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:31 compute-1 ceph-mon[81689]: pgmap v2926: 305 pgs: 305 active+clean; 634 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 127 KiB/s rd, 2.0 MiB/s wr, 122 op/s
Dec 06 07:51:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e377 e377: 3 total, 3 up, 3 in
Dec 06 07:51:31 compute-1 nova_compute[226101]: 2025-12-06 07:51:31.515 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:31 compute-1 nova_compute[226101]: 2025-12-06 07:51:31.526 226109 DEBUG nova.network.neutron [None req-63fd8954-1001-4ff3-8e9b-9bae9e8cd842 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updating instance_info_cache with network_info: [{"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
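The network_info blob written to the cache above is plain JSON-like data, so pulling fields back out is ordinary dict walking. A short sketch against a trimmed copy of the logged VIF entry (most keys dropped for brevity; only values visible in the log are used):

    # Trimmed copy of the VIF entry logged above.
    vif = {
        'id': '680d7c20-81e0-48d0-954c-9fbcda7e7615',
        'address': 'fa:16:3e:d5:1c:a5',
        'devname': 'tap680d7c20-81',
        'network': {
            'id': '3d151181-0dfe-43ab-b47e-15b53add33a6',
            'bridge': 'br-int',
            'subnets': [{
                'cidr': '10.100.0.0/28',
                'ips': [{'address': '10.100.0.11', 'type': 'fixed',
                         'version': 4, 'floating_ips': []}],
            }],
        },
    }

    def fixed_ips(vif):
        # Yield every fixed address across the VIF's subnets.
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                if ip['type'] == 'fixed':
                    yield ip['address']

    assert list(fixed_ips(vif)) == ['10.100.0.11']
    # Naming convention visible in the log: the tap device is 'tap' plus the
    # first 11 characters of the Neutron port UUID.
    assert vif['devname'] == 'tap' + vif['id'][:11]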
Dec 06 07:51:31 compute-1 nova_compute[226101]: 2025-12-06 07:51:31.550 226109 DEBUG oslo_concurrency.lockutils [None req-63fd8954-1001-4ff3-8e9b-9bae9e8cd842 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Releasing lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:51:31 compute-1 nova_compute[226101]: 2025-12-06 07:51:31.552 226109 DEBUG nova.objects.instance [None req-63fd8954-1001-4ff3-8e9b-9bae9e8cd842 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'flavor' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:31 compute-1 kernel: tap680d7c20-81 (unregistering): left promiscuous mode
Dec 06 07:51:31 compute-1 NetworkManager[49031]: <info>  [1765007491.6235] device (tap680d7c20-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:51:31 compute-1 nova_compute[226101]: 2025-12-06 07:51:31.627 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:31 compute-1 ovn_controller[130279]: 2025-12-06T07:51:31Z|00653|binding|INFO|Releasing lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 from this chassis (sb_readonly=0)
Dec 06 07:51:31 compute-1 ovn_controller[130279]: 2025-12-06T07:51:31Z|00654|binding|INFO|Setting lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 down in Southbound
Dec 06 07:51:31 compute-1 ovn_controller[130279]: 2025-12-06T07:51:31Z|00655|binding|INFO|Removing iface tap680d7c20-81 ovn-installed in OVS
Dec 06 07:51:31 compute-1 nova_compute[226101]: 2025-12-06 07:51:31.629 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:31.639 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:1c:a5 10.100.0.11'], port_security=['fa:16:3e:d5:1c:a5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0852932a-3266-432f-9975-8870535aff4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '6', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=680d7c20-81e0-48d0-954c-9fbcda7e7615) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:51:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:31.642 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 680d7c20-81e0-48d0-954c-9fbcda7e7615 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 unbound from our chassis
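The PortBindingUpdatedEvent match above is ovsdbapp's row-event machinery: handlers are registered per table and event type, and the IDL invokes their match logic whenever a Port_Binding row changes, which is how the agent notices the chassis column being cleared. A rough sketch of such a handler, assuming ovsdbapp's RowEvent base class; the chassis check itself is hypothetical, not Neutron's code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChassisEvent(row_event.RowEvent):
        # Sketch: react when a Port_Binding row's chassis column changes.
        def __init__(self):
            # Watch only UPDATE events on the Port_Binding table, no conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingChassisEvent'

        def match_fn(self, event, row, old):
            # The IDL hands over a diff object carrying only the changed columns;
            # the log's old=Port_Binding(chassis=[...]) is exactly that diff.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('port %s moved chassis' % row.logical_port)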
Dec 06 07:51:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:31.645 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d151181-0dfe-43ab-b47e-15b53add33a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:51:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:31.647 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e26b5844-ecd2-4ae8-8096-5233e46a7491]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:31.648 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace which is not needed anymore
Dec 06 07:51:31 compute-1 nova_compute[226101]: 2025-12-06 07:51:31.650 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:31 compute-1 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Dec 06 07:51:31 compute-1 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a4.scope: Consumed 4.521s CPU time.
Dec 06 07:51:31 compute-1 systemd-machined[190302]: Machine qemu-76-instance-000000a4 terminated.
Dec 06 07:51:31 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290179]: [NOTICE]   (290183) : haproxy version is 2.8.14-c23fe91
Dec 06 07:51:31 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290179]: [NOTICE]   (290183) : path to executable is /usr/sbin/haproxy
Dec 06 07:51:31 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290179]: [WARNING]  (290183) : Exiting Master process...
Dec 06 07:51:31 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290179]: [WARNING]  (290183) : Exiting Master process...
Dec 06 07:51:31 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290179]: [ALERT]    (290183) : Current worker (290185) exited with code 143 (Terminated)
Dec 06 07:51:31 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290179]: [WARNING]  (290183) : All workers exited. Exiting... (0)
Dec 06 07:51:31 compute-1 systemd[1]: libpod-330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595.scope: Deactivated successfully.
Dec 06 07:51:31 compute-1 podman[290220]: 2025-12-06 07:51:31.776863432 +0000 UTC m=+0.043768368 container died 330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:51:31 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595-userdata-shm.mount: Deactivated successfully.
Dec 06 07:51:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-c5450bfc418cbd014973885b2eb7a4f310bee3cb398779ac8a36f542987449b5-merged.mount: Deactivated successfully.
Dec 06 07:51:31 compute-1 nova_compute[226101]: 2025-12-06 07:51:31.812 226109 INFO nova.virt.libvirt.driver [-] [instance: 0852932a-3266-432f-9975-8870535aff4e] Instance destroyed successfully.
Dec 06 07:51:31 compute-1 nova_compute[226101]: 2025-12-06 07:51:31.814 226109 DEBUG nova.objects.instance [None req-63fd8954-1001-4ff3-8e9b-9bae9e8cd842 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:31 compute-1 podman[290220]: 2025-12-06 07:51:31.818118863 +0000 UTC m=+0.085023799 container cleanup 330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:51:31 compute-1 systemd[1]: libpod-conmon-330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595.scope: Deactivated successfully.
Dec 06 07:51:31 compute-1 podman[290260]: 2025-12-06 07:51:31.887569973 +0000 UTC m=+0.046475033 container remove 330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:51:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:31.893 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3a587498-c8a1-4792-bafa-17305ff8bf99]: (4, ('Sat Dec  6 07:51:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595)\n330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595\nSat Dec  6 07:51:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595)\n330e04108c43b3458567c582995b05e8c644abd62eb0b01998c75ed7c4b07595\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:31.895 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e48590c9-f073-49de-8924-58f3e4ec7087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:31.896 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:31 compute-1 NetworkManager[49031]: <info>  [1765007491.9091] manager: (tap680d7c20-81): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Dec 06 07:51:31 compute-1 systemd-udevd[290200]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:51:31 compute-1 nova_compute[226101]: 2025-12-06 07:51:31.955 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:31 compute-1 kernel: tap680d7c20-81: entered promiscuous mode
Dec 06 07:51:31 compute-1 NetworkManager[49031]: <info>  [1765007491.9699] device (tap680d7c20-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:51:31 compute-1 NetworkManager[49031]: <info>  [1765007491.9712] device (tap680d7c20-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:51:31 compute-1 nova_compute[226101]: 2025-12-06 07:51:31.971 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:31 compute-1 ovn_controller[130279]: 2025-12-06T07:51:31Z|00656|binding|INFO|Claiming lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 for this chassis.
Dec 06 07:51:31 compute-1 ovn_controller[130279]: 2025-12-06T07:51:31Z|00657|binding|INFO|680d7c20-81e0-48d0-954c-9fbcda7e7615: Claiming fa:16:3e:d5:1c:a5 10.100.0.11
Dec 06 07:51:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:31.983 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:1c:a5 10.100.0.11'], port_security=['fa:16:3e:d5:1c:a5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0852932a-3266-432f-9975-8870535aff4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '7', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=680d7c20-81e0-48d0-954c-9fbcda7e7615) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:51:31 compute-1 systemd-machined[190302]: New machine qemu-77-instance-000000a4.
Dec 06 07:51:31 compute-1 systemd[1]: Started Virtual Machine qemu-77-instance-000000a4.
Dec 06 07:51:31 compute-1 kernel: tap3d151181-00: left promiscuous mode
Dec 06 07:51:32 compute-1 ovn_controller[130279]: 2025-12-06T07:51:32Z|00658|binding|INFO|Setting lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 ovn-installed in OVS
Dec 06 07:51:32 compute-1 ovn_controller[130279]: 2025-12-06T07:51:32Z|00659|binding|INFO|Setting lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 up in Southbound
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.000 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.001 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.003 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[317ce983-36ed-4722-af14-f32e760fb71c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.005 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.025 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0f72cc-ac97-4e27-86cd-805fca466155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.026 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf81052-d76a-496f-b7f6-01ea7e0d801a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.044 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4bffbc-c235-4668-896c-456eaf598fee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776661, 'reachable_time': 43753, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290292, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.049 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:51:32 compute-1 systemd[1]: run-netns-ovnmeta\x2d3d151181\x2d0dfe\x2d43ab\x2db47e\x2d15b53add33a6.mount: Deactivated successfully.
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.049 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[11b7feb2-1a29-4345-a050-2a048e6405f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.050 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 680d7c20-81e0-48d0-954c-9fbcda7e7615 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 unbound from our chassis
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.053 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.064 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0398b721-3cf7-4026-84bd-eb2a68a91da4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.065 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d151181-01 in ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
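Neutron's privileged ip_lib helpers do this plumbing through pyroute2. A rough equivalent of the step being logged, creating the veth pair and pushing the -01 end into the ovnmeta namespace: the interface and namespace names come from the log, but this is an illustrative pyroute2 sketch, not Neutron's code, and it skips the existence probes the agent runs (the "Interface tap3d151181-00 not found" checks that follow).

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6'
    netns.create(NS)  # the agent handles an already-existing namespace; this sketch does not

    with IPRoute() as ip:
        # tap3d151181-00 stays in the root namespace (to be plugged into br-int);
        # tap3d151181-01 is the in-namespace end the metadata proxy binds on.
        ip.link('add', ifname='tap3d151181-00', kind='veth',
                peer={'ifname': 'tap3d151181-01'})
        peer = ip.link_lookup(ifname='tap3d151181-01')[0]
        ip.link('set', index=peer, net_ns_fd=NS)   # move the peer into the namespace
        root = ip.link_lookup(ifname='tap3d151181-00')[0]
        ip.link('set', index=root, state='up')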
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.067 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d151181-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.067 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[107ddf64-8bea-4eb6-b096-2dabaf4f773f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.068 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[debeb0f9-6735-457e-bafe-6a2c6e61ee01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.082 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a03bd235-8563-42a6-9ba0-50593e7e0a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.096 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0773c848-417f-473d-95ea-99c5ec3ad5b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.121 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[35d26dc8-3a60-4d8f-8bbf-bd0f39573725]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 NetworkManager[49031]: <info>  [1765007492.1312] manager: (tap3d151181-00): new Veth device (/org/freedesktop/NetworkManager/Devices/297)
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.131 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0f984e9e-3213-4047-8b76-791ac78f1050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.166 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[4e13948e-6642-4a9c-abb8-91fc0e0d4691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.170 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8e000c22-c525-4d2a-808a-493c340be893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.194 226109 DEBUG nova.compute.manager [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-unplugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.195 226109 DEBUG oslo_concurrency.lockutils [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.195 226109 DEBUG oslo_concurrency.lockutils [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.195 226109 DEBUG oslo_concurrency.lockutils [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.195 226109 DEBUG nova.compute.manager [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] No waiting events found dispatching network-vif-unplugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.195 226109 WARNING nova.compute.manager [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received unexpected event network-vif-unplugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.196 226109 DEBUG nova.compute.manager [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.196 226109 DEBUG oslo_concurrency.lockutils [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.196 226109 DEBUG oslo_concurrency.lockutils [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.196 226109 DEBUG oslo_concurrency.lockutils [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.196 226109 DEBUG nova.compute.manager [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] No waiting events found dispatching network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.196 226109 WARNING nova.compute.manager [req-e8c43903-d07a-49c9-9897-353e01655d62 req-006b0f03-9382-4362-b339-47f615b63f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received unexpected event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:51:32 compute-1 NetworkManager[49031]: <info>  [1765007492.1982] device (tap3d151181-00): carrier: link connected
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.205 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[22effdbe-062d-4d9a-a68f-644fefbead15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.224 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c41e5d45-f3ff-4129-b881-d8d8cd1fa1c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777170, 'reachable_time': 34531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290320, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.244 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5d31003c-a09e-4287-a623-be69da848724]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:130b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777170, 'tstamp': 777170}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290321, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.266 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[44ee42e6-cab6-4ca8-9df8-7d9fb5b88a91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777170, 'reachable_time': 34531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290322, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.297 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f82e5fc6-beb7-4e6a-9c50-015fdbae29e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.356 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[af4be66e-cfaf-42f5-a2b1-d08ffd3cd5d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.358 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.358 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.359 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d151181-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.361 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:32 compute-1 NetworkManager[49031]: <info>  [1765007492.3617] manager: (tap3d151181-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Dec 06 07:51:32 compute-1 kernel: tap3d151181-00: entered promiscuous mode
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.363 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.364 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d151181-00, col_values=(('external_ids', {'iface-id': '7c0488e1-35c2-4c92-b43c-271fbeecd9ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
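DelPortCommand, AddPortCommand and DbSetCommand are ovsdbapp IDL commands; the agent runs each in its own transaction here, but the same sequence can be batched. A sketch assuming an Open_vSwitch connection on the local unix socket; the connection boilerplate is a guess, while the three commands mirror the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Re-plug tap3d151181-00 into br-int and tag it with the Neutron port id,
    # mirroring the commands logged above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap3d151181-00', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap3d151181-00', may_exist=True))
        txn.add(api.db_set('Interface', 'tap3d151181-00',
                           ('external_ids',
                            {'iface-id': '7c0488e1-35c2-4c92-b43c-271fbeecd9ea'})))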
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.365 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:32 compute-1 ovn_controller[130279]: 2025-12-06T07:51:32Z|00660|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.379 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.379 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.380 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[77ce7d75-06a8-4268-8ede-ebd36a0c3658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.381 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:51:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:51:32.382 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'env', 'PROCESS_TAG=haproxy-3d151181-0dfe-43ab-b47e-15b53add33a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d151181-0dfe-43ab-b47e-15b53add33a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
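Stripped of the neutron-rootwrap and PROCESS_TAG wrapping, the spawn boils down to running haproxy against the rendered config inside the metadata namespace, so it can bind 169.254.169.254:80 there. A minimal subprocess sketch of the logged command; unlike the agent's rootwrap path, this requires running as root:

    import subprocess

    NS = 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6'
    CFG = '/var/lib/neutron/ovn-metadata-proxy/3d151181-0dfe-43ab-b47e-15b53add33a6.conf'

    # haproxy daemonizes itself (the config above sets 'daemon' and a pidfile),
    # so this call returns once the master process has forked.
    subprocess.run(['ip', 'netns', 'exec', NS, 'haproxy', '-f', CFG], check=True)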
Dec 06 07:51:32 compute-1 ceph-mon[81689]: pgmap v2927: 305 pgs: 305 active+clean; 634 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 184 op/s
Dec 06 07:51:32 compute-1 ceph-mon[81689]: osdmap e377: 3 total, 3 up, 3 in
Dec 06 07:51:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:32.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:51:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:51:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e378 e378: 3 total, 3 up, 3 in
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.693 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.694 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.694 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.721 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.721 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.722 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.722 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:32 compute-1 podman[290369]: 2025-12-06 07:51:32.73088639 +0000 UTC m=+0.051990780 container create 0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:51:32 compute-1 systemd[1]: Started libpod-conmon-0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6.scope.
Dec 06 07:51:32 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:51:32 compute-1 podman[290369]: 2025-12-06 07:51:32.703269127 +0000 UTC m=+0.024373537 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:51:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e896e7df497063e9b2ac38f68f8242b255bafe628c25ea8b02978ae84a1e9b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.807 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 0852932a-3266-432f-9975-8870535aff4e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.809 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007492.8075533, 0852932a-3266-432f-9975-8870535aff4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.809 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] VM Resumed (Lifecycle Event)
Dec 06 07:51:32 compute-1 podman[290369]: 2025-12-06 07:51:32.815982591 +0000 UTC m=+0.137086991 container init 0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:51:32 compute-1 podman[290369]: 2025-12-06 07:51:32.822352392 +0000 UTC m=+0.143456782 container start 0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.837 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.841 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:51:32 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290408]: [NOTICE]   (290426) : New worker (290429) forked
Dec 06 07:51:32 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290408]: [NOTICE]   (290426) : Loading success.
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.868 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.869 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007492.8090456, 0852932a-3266-432f-9975-8870535aff4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.870 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] VM Started (Lifecycle Event)
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.891 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.897 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:51:32 compute-1 nova_compute[226101]: 2025-12-06 07:51:32.923 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:51:33 compute-1 nova_compute[226101]: 2025-12-06 07:51:33.176 226109 DEBUG nova.compute.manager [None req-8f3ccef3-e0b1-4e6c-b8e8-9abe3d7bb449 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:33 compute-1 nova_compute[226101]: 2025-12-06 07:51:33.225 226109 INFO nova.compute.manager [None req-8f3ccef3-e0b1-4e6c-b8e8-9abe3d7bb449 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] instance snapshotting
Dec 06 07:51:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:33 compute-1 nova_compute[226101]: 2025-12-06 07:51:33.457 226109 INFO nova.virt.libvirt.driver [None req-8f3ccef3-e0b1-4e6c-b8e8-9abe3d7bb449 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Beginning live snapshot process
Dec 06 07:51:33 compute-1 nova_compute[226101]: 2025-12-06 07:51:33.635 226109 DEBUG nova.virt.libvirt.imagebackend [None req-8f3ccef3-e0b1-4e6c-b8e8-9abe3d7bb449 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:51:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e379 e379: 3 total, 3 up, 3 in
Dec 06 07:51:33 compute-1 ceph-mon[81689]: osdmap e378: 3 total, 3 up, 3 in
Dec 06 07:51:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/415410419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:33 compute-1 nova_compute[226101]: 2025-12-06 07:51:33.689 226109 DEBUG nova.compute.manager [None req-63fd8954-1001-4ff3-8e9b-9bae9e8cd842 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:33 compute-1 nova_compute[226101]: 2025-12-06 07:51:33.885 226109 DEBUG nova.storage.rbd_utils [None req-8f3ccef3-e0b1-4e6c-b8e8-9abe3d7bb449 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] creating snapshot(04811896ed36499882d701c80e12ae16) on rbd image(e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.137 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.290 226109 DEBUG nova.compute.manager [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.291 226109 DEBUG oslo_concurrency.lockutils [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.291 226109 DEBUG oslo_concurrency.lockutils [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.292 226109 DEBUG oslo_concurrency.lockutils [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.292 226109 DEBUG nova.compute.manager [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] No waiting events found dispatching network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.292 226109 WARNING nova.compute.manager [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received unexpected event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 for instance with vm_state active and task_state None.
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.292 226109 DEBUG nova.compute.manager [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.293 226109 DEBUG oslo_concurrency.lockutils [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.293 226109 DEBUG oslo_concurrency.lockutils [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.294 226109 DEBUG oslo_concurrency.lockutils [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.295 226109 DEBUG nova.compute.manager [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] No waiting events found dispatching network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.295 226109 WARNING nova.compute.manager [req-ecb6b36e-def2-4093-a4bf-81791fb43026 req-d35f971f-7648-4809-b561-e7940b7e930f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received unexpected event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 for instance with vm_state active and task_state None.
Dec 06 07:51:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:34.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:34.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e380 e380: 3 total, 3 up, 3 in
Dec 06 07:51:34 compute-1 ceph-mon[81689]: pgmap v2930: 305 pgs: 305 active+clean; 634 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 333 KiB/s wr, 170 op/s
Dec 06 07:51:34 compute-1 ceph-mon[81689]: osdmap e379: 3 total, 3 up, 3 in
Dec 06 07:51:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1423580668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:34 compute-1 nova_compute[226101]: 2025-12-06 07:51:34.712 226109 DEBUG nova.storage.rbd_utils [None req-8f3ccef3-e0b1-4e6c-b8e8-9abe3d7bb449 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] cloning vms/e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk@04811896ed36499882d701c80e12ae16 to images/83fea89a-3a0d-4881-b429-13684080bb6c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:51:35 compute-1 ovn_controller[130279]: 2025-12-06T07:51:35Z|00661|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 06 07:51:36 compute-1 nova_compute[226101]: 2025-12-06 07:51:36.277 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updating instance_info_cache with network_info: [{"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:51:36 compute-1 ceph-mon[81689]: osdmap e380: 3 total, 3 up, 3 in
Dec 06 07:51:36 compute-1 nova_compute[226101]: 2025-12-06 07:51:36.387 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:51:36 compute-1 nova_compute[226101]: 2025-12-06 07:51:36.388 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:51:36 compute-1 nova_compute[226101]: 2025-12-06 07:51:36.389 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:36 compute-1 nova_compute[226101]: 2025-12-06 07:51:36.389 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:36 compute-1 nova_compute[226101]: 2025-12-06 07:51:36.389 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:36 compute-1 nova_compute[226101]: 2025-12-06 07:51:36.390 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:36 compute-1 nova_compute[226101]: 2025-12-06 07:51:36.390 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:51:36 compute-1 nova_compute[226101]: 2025-12-06 07:51:36.560 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:36 compute-1 nova_compute[226101]: 2025-12-06 07:51:36.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:36.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:36.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:36 compute-1 nova_compute[226101]: 2025-12-06 07:51:36.830 226109 DEBUG nova.storage.rbd_utils [None req-8f3ccef3-e0b1-4e6c-b8e8-9abe3d7bb449 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] flattening images/83fea89a-3a0d-4881-b429-13684080bb6c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:51:37 compute-1 ceph-mon[81689]: pgmap v2933: 305 pgs: 305 active+clean; 665 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 MiB/s rd, 6.5 MiB/s wr, 503 op/s
Dec 06 07:51:37 compute-1 nova_compute[226101]: 2025-12-06 07:51:37.350 226109 DEBUG nova.storage.rbd_utils [None req-8f3ccef3-e0b1-4e6c-b8e8-9abe3d7bb449 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] removing snapshot(04811896ed36499882d701c80e12ae16) on rbd image(e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:51:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e381 e381: 3 total, 3 up, 3 in
Dec 06 07:51:38 compute-1 nova_compute[226101]: 2025-12-06 07:51:38.460 226109 DEBUG nova.storage.rbd_utils [None req-8f3ccef3-e0b1-4e6c-b8e8-9abe3d7bb449 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] creating snapshot(snap) on rbd image(83fea89a-3a0d-4881-b429-13684080bb6c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:51:38 compute-1 ceph-mon[81689]: pgmap v2934: 305 pgs: 305 active+clean; 685 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 MiB/s rd, 8.0 MiB/s wr, 454 op/s
Dec 06 07:51:38 compute-1 nova_compute[226101]: 2025-12-06 07:51:38.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:38.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:38.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:39 compute-1 nova_compute[226101]: 2025-12-06 07:51:39.167 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:39 compute-1 ceph-mon[81689]: osdmap e381: 3 total, 3 up, 3 in
Dec 06 07:51:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e382 e382: 3 total, 3 up, 3 in
Dec 06 07:51:40 compute-1 ceph-mon[81689]: pgmap v2936: 305 pgs: 305 active+clean; 692 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 MiB/s rd, 10 MiB/s wr, 536 op/s
Dec 06 07:51:40 compute-1 ceph-mon[81689]: osdmap e382: 3 total, 3 up, 3 in
Dec 06 07:51:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/520152717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:40 compute-1 nova_compute[226101]: 2025-12-06 07:51:40.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:40.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:40.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:41 compute-1 nova_compute[226101]: 2025-12-06 07:51:41.342 226109 INFO nova.virt.libvirt.driver [None req-8f3ccef3-e0b1-4e6c-b8e8-9abe3d7bb449 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Snapshot image upload complete
Dec 06 07:51:41 compute-1 nova_compute[226101]: 2025-12-06 07:51:41.343 226109 INFO nova.compute.manager [None req-8f3ccef3-e0b1-4e6c-b8e8-9abe3d7bb449 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Took 8.12 seconds to snapshot the instance on the hypervisor.
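
[editor's sketch] The rbd_utils calls traced between 07:51:33 and 07:51:38 (create_snap on the vms image, clone into the images pool, flatten, remove_snap, then a final "snap" on the clone) map onto the python-rbd bindings roughly as follows; the conffile path and the protect/unprotect pair are assumptions (clone-v2 pools do not require snapshot protection):

    import rados
    import rbd

    # Sketch of the snapshot sequence logged above, using python-rbd.
    # Image, pool, and snapshot names are copied from the log entries;
    # /etc/ceph/ceph.conf and the protect/unprotect calls are assumptions.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    vms = cluster.open_ioctx("vms")
    images = cluster.open_ioctx("images")
    try:
        src = rbd.Image(vms, "e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk")
        src.create_snap("04811896ed36499882d701c80e12ae16")
        src.protect_snap("04811896ed36499882d701c80e12ae16")   # assumption
        rbd.RBD().clone(vms, "e7d5d854-2a1f-485b-931a-4ec90cf7ba04_disk",
                        "04811896ed36499882d701c80e12ae16",
                        images, "83fea89a-3a0d-4881-b429-13684080bb6c")
        dst = rbd.Image(images, "83fea89a-3a0d-4881-b429-13684080bb6c")
        dst.flatten()                     # break the parent link, as logged
        src.unprotect_snap("04811896ed36499882d701c80e12ae16")  # assumption
        src.remove_snap("04811896ed36499882d701c80e12ae16")
        dst.create_snap("snap")           # final snapshot seen in the log
        dst.close()
        src.close()
    finally:
        images.close()
        vms.close()
        cluster.shutdown()
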
Dec 06 07:51:41 compute-1 nova_compute[226101]: 2025-12-06 07:51:41.610 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:42.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:42.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e383 e383: 3 total, 3 up, 3 in
Dec 06 07:51:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:43 compute-1 ceph-mon[81689]: pgmap v2938: 305 pgs: 305 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 MiB/s rd, 12 MiB/s wr, 504 op/s
Dec 06 07:51:43 compute-1 ceph-mon[81689]: osdmap e383: 3 total, 3 up, 3 in
Dec 06 07:51:44 compute-1 nova_compute[226101]: 2025-12-06 07:51:44.170 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:44.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:44.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:45 compute-1 ceph-mon[81689]: pgmap v2940: 305 pgs: 305 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 MiB/s rd, 7.8 MiB/s wr, 240 op/s
Dec 06 07:51:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3778624018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:45 compute-1 ovn_controller[130279]: 2025-12-06T07:51:45Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:1c:a5 10.100.0.11
Dec 06 07:51:46 compute-1 nova_compute[226101]: 2025-12-06 07:51:46.612 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:46.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:46.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:47 compute-1 ceph-mon[81689]: pgmap v2941: 305 pgs: 305 active+clean; 688 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.5 MiB/s wr, 274 op/s
Dec 06 07:51:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2585121310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e384 e384: 3 total, 3 up, 3 in
Dec 06 07:51:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3250387133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:48 compute-1 ceph-mon[81689]: osdmap e384: 3 total, 3 up, 3 in
Dec 06 07:51:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:48.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:48.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:49 compute-1 podman[290584]: 2025-12-06 07:51:49.09202463 +0000 UTC m=+0.075644797 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:51:49 compute-1 podman[290583]: 2025-12-06 07:51:49.101251659 +0000 UTC m=+0.088436131 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:51:49 compute-1 podman[290585]: 2025-12-06 07:51:49.128356328 +0000 UTC m=+0.110025271 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
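
[editor's sketch] Each health_status=healthy record above comes from the container's configured healthcheck (the 'test': '/openstack/healthcheck' entry in config_data); the same probe can be triggered on demand with podman's healthcheck subcommand, a sketch assuming sufficient privileges on the host:

    import subprocess

    # Run the same healthcheck podman executes on its timer (sketch only).
    # Container names are taken from the three log records above.
    for name in ("ovn_metadata_agent", "multipathd", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy (rc=%d)" % rc)
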
Dec 06 07:51:49 compute-1 ceph-mon[81689]: pgmap v2942: 305 pgs: 305 active+clean; 696 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 286 op/s
Dec 06 07:51:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/831807183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:49 compute-1 nova_compute[226101]: 2025-12-06 07:51:49.171 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1411954260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2754697469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:51 compute-1 ceph-mon[81689]: pgmap v2944: 305 pgs: 305 active+clean; 713 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 194 op/s
Dec 06 07:51:51 compute-1 nova_compute[226101]: 2025-12-06 07:51:51.613 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:52 compute-1 ceph-mon[81689]: pgmap v2945: 305 pgs: 305 active+clean; 715 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.5 MiB/s wr, 224 op/s
Dec 06 07:51:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:52.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:52.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1435402962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1424748871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2763807753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:54 compute-1 nova_compute[226101]: 2025-12-06 07:51:54.172 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:54 compute-1 nova_compute[226101]: 2025-12-06 07:51:54.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:54.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:54.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:54 compute-1 ceph-mon[81689]: pgmap v2946: 305 pgs: 305 active+clean; 715 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 191 op/s
Dec 06 07:51:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2929533344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:56 compute-1 nova_compute[226101]: 2025-12-06 07:51:56.616 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:56.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:56.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:57 compute-1 ceph-mon[81689]: pgmap v2947: 305 pgs: 305 active+clean; 780 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.9 MiB/s wr, 266 op/s
Dec 06 07:51:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e385 e385: 3 total, 3 up, 3 in
Dec 06 07:51:58 compute-1 ceph-mon[81689]: osdmap e385: 3 total, 3 up, 3 in
Dec 06 07:51:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:58.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:51:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:58.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:59 compute-1 sshd-session[290644]: Received disconnect from 154.219.116.39 port 58480:11: Bye Bye [preauth]
Dec 06 07:51:59 compute-1 sshd-session[290644]: Disconnected from authenticating user root 154.219.116.39 port 58480 [preauth]
Dec 06 07:51:59 compute-1 nova_compute[226101]: 2025-12-06 07:51:59.174 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:00 compute-1 ceph-mon[81689]: pgmap v2949: 305 pgs: 305 active+clean; 796 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 5.8 MiB/s wr, 258 op/s
Dec 06 07:52:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:00.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:00.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:00 compute-1 ceph-mon[81689]: pgmap v2950: 305 pgs: 305 active+clean; 768 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 4.7 MiB/s wr, 288 op/s
Dec 06 07:52:01 compute-1 nova_compute[226101]: 2025-12-06 07:52:01.619 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:01.672 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:01.672 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:01.673 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:52:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:02.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:52:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:02.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e386 e386: 3 total, 3 up, 3 in
Dec 06 07:52:03 compute-1 ceph-mon[81689]: pgmap v2951: 305 pgs: 305 active+clean; 717 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 4.7 MiB/s wr, 381 op/s
Dec 06 07:52:03 compute-1 ceph-mon[81689]: osdmap e386: 3 total, 3 up, 3 in
Dec 06 07:52:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:04 compute-1 nova_compute[226101]: 2025-12-06 07:52:04.176 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:04 compute-1 ceph-mon[81689]: pgmap v2953: 305 pgs: 305 active+clean; 717 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 1.1 MiB/s wr, 260 op/s
Dec 06 07:52:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:52:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:04 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:04.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:04.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:06 compute-1 nova_compute[226101]: 2025-12-06 07:52:06.621 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:06.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:06 compute-1 ceph-mon[81689]: pgmap v2954: 305 pgs: 305 active+clean; 722 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.5 MiB/s rd, 917 KiB/s wr, 285 op/s
Dec 06 07:52:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:08.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:08.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:52:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2310510304' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:52:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:52:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2310510304' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:52:09 compute-1 ceph-mon[81689]: pgmap v2955: 305 pgs: 305 active+clean; 725 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.2 MiB/s wr, 250 op/s
Dec 06 07:52:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2310510304' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:52:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2310510304' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
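
[editor's sketch] The audit entries above show client.openstack dispatching "df" and "osd pool get-quota" as mon commands; the equivalent librados call is sketched below, assuming a readable /etc/ceph/ceph.conf and a keyring for the openstack client (both assumptions about this deployment):

    import json
    import rados

    # Issue the same mon commands seen in the audit log (sketch only).
    # conffile and rados_id are assumptions about the client's setup.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, out[:60])
    finally:
        cluster.shutdown()
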
Dec 06 07:52:09 compute-1 nova_compute[226101]: 2025-12-06 07:52:09.177 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:10.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:10 compute-1 ceph-mon[81689]: pgmap v2956: 305 pgs: 305 active+clean; 744 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.5 MiB/s wr, 212 op/s
Dec 06 07:52:11 compute-1 nova_compute[226101]: 2025-12-06 07:52:11.659 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:12.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:13 compute-1 ceph-mon[81689]: pgmap v2957: 305 pgs: 305 active+clean; 750 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 153 op/s
Dec 06 07:52:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:14 compute-1 nova_compute[226101]: 2025-12-06 07:52:14.179 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:14.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:14.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:15 compute-1 ceph-mon[81689]: pgmap v2958: 305 pgs: 305 active+clean; 750 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 147 op/s
Dec 06 07:52:16 compute-1 nova_compute[226101]: 2025-12-06 07:52:16.661 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:16.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:16.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:16 compute-1 ceph-mon[81689]: pgmap v2959: 305 pgs: 305 active+clean; 763 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 186 op/s
Dec 06 07:52:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1736642058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:52:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1736642058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:52:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:18.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:18.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:19 compute-1 nova_compute[226101]: 2025-12-06 07:52:19.180 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:19 compute-1 nova_compute[226101]: 2025-12-06 07:52:19.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:19 compute-1 nova_compute[226101]: 2025-12-06 07:52:19.620 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:19 compute-1 nova_compute[226101]: 2025-12-06 07:52:19.621 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:19 compute-1 nova_compute[226101]: 2025-12-06 07:52:19.621 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:19 compute-1 nova_compute[226101]: 2025-12-06 07:52:19.621 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:52:19 compute-1 nova_compute[226101]: 2025-12-06 07:52:19.622 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:19 compute-1 ceph-mon[81689]: pgmap v2960: 305 pgs: 305 active+clean; 764 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 165 op/s
Dec 06 07:52:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:52:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3857071513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:20 compute-1 podman[290667]: 2025-12-06 07:52:20.075267216 +0000 UTC m=+0.051423166 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.092 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:20 compute-1 podman[290666]: 2025-12-06 07:52:20.102061317 +0000 UTC m=+0.084432393 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:52:20 compute-1 podman[290668]: 2025-12-06 07:52:20.107510764 +0000 UTC m=+0.084572028 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.173 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.173 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.176 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.176 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:20 compute-1 sudo[290731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:20 compute-1 sudo[290731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:20 compute-1 sudo[290731]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.346 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.347 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3931MB free_disk=20.701759338378906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.347 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.348 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:20 compute-1 sudo[290756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:52:20 compute-1 sudo[290756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:20 compute-1 sudo[290756]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.424 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 0852932a-3266-432f-9975-8870535aff4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.424 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance e7d5d854-2a1f-485b-931a-4ec90cf7ba04 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.424 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.425 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:52:20 compute-1 sudo[290781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:20 compute-1 sudo[290781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:20 compute-1 sudo[290781]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.489 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:20 compute-1 sudo[290808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 07:52:20 compute-1 sudo[290808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:20.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:20 compute-1 sudo[290808]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:20 compute-1 sudo[290872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:20 compute-1 sudo[290872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:20 compute-1 sudo[290872]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:20 compute-1 sudo[290897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:52:20 compute-1 sudo[290897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:20 compute-1 sudo[290897]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:52:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1192855359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.931 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.937 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:52:20 compute-1 sudo[290922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:20 compute-1 sudo[290922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:20 compute-1 sudo[290922]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.958 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.977 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:52:20 compute-1 nova_compute[226101]: 2025-12-06 07:52:20.977 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:21 compute-1 sudo[290949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:52:21 compute-1 sudo[290949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:21 compute-1 ceph-mon[81689]: pgmap v2961: 305 pgs: 305 active+clean; 737 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 153 op/s
Dec 06 07:52:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3857071513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:21 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1192855359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:21 compute-1 sudo[290949]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:21 compute-1 nova_compute[226101]: 2025-12-06 07:52:21.664 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:22 compute-1 sshd-session[290803]: Received disconnect from 165.154.55.146 port 52280:11: Bye Bye [preauth]
Dec 06 07:52:22 compute-1 sshd-session[290803]: Disconnected from authenticating user root 165.154.55.146 port 52280 [preauth]
Dec 06 07:52:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:52:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:52:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:52:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:52:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:52:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:52:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:22 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:22.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e387 e387: 3 total, 3 up, 3 in
Dec 06 07:52:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:24 compute-1 nova_compute[226101]: 2025-12-06 07:52:24.183 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:52:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:24 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:26 compute-1 nova_compute[226101]: 2025-12-06 07:52:26.667 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:52:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:52:26 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:52:27 compute-1 ceph-mon[81689]: pgmap v2962: 305 pgs: 305 active+clean; 689 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 630 KiB/s wr, 162 op/s
Dec 06 07:52:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2314624498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:27 compute-1 nova_compute[226101]: 2025-12-06 07:52:27.977 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:27 compute-1 nova_compute[226101]: 2025-12-06 07:52:27.978 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:52:28 compute-1 nova_compute[226101]: 2025-12-06 07:52:28.277 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:52:28 compute-1 nova_compute[226101]: 2025-12-06 07:52:28.278 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:52:28 compute-1 nova_compute[226101]: 2025-12-06 07:52:28.278 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:52:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:52:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:28 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:28.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:28.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:29 compute-1 nova_compute[226101]: 2025-12-06 07:52:29.184 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:52:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1971704627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:29 compute-1 ceph-mon[81689]: pgmap v2963: 305 pgs: 305 active+clean; 689 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 880 KiB/s rd, 589 KiB/s wr, 103 op/s
Dec 06 07:52:29 compute-1 ceph-mon[81689]: osdmap e387: 3 total, 3 up, 3 in
Dec 06 07:52:29 compute-1 ceph-mon[81689]: pgmap v2965: 305 pgs: 305 active+clean; 688 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 85 KiB/s wr, 70 op/s
Dec 06 07:52:29 compute-1 ceph-mon[81689]: pgmap v2966: 305 pgs: 305 active+clean; 688 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 82 KiB/s wr, 52 op/s
Dec 06 07:52:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/491815996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:30.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 07:52:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:30 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:30.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:30 compute-1 nova_compute[226101]: 2025-12-06 07:52:30.715 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updating instance_info_cache with network_info: [{"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:52:30 compute-1 nova_compute[226101]: 2025-12-06 07:52:30.743 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:52:30 compute-1 nova_compute[226101]: 2025-12-06 07:52:30.744 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:52:30 compute-1 nova_compute[226101]: 2025-12-06 07:52:30.744 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:30 compute-1 nova_compute[226101]: 2025-12-06 07:52:30.745 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:52:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e388 e388: 3 total, 3 up, 3 in
Dec 06 07:52:31 compute-1 ceph-mon[81689]: pgmap v2967: 305 pgs: 305 active+clean; 688 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 69 KiB/s rd, 80 KiB/s wr, 48 op/s
Dec 06 07:52:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3441895647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1971704627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:31 compute-1 nova_compute[226101]: 2025-12-06 07:52:31.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:31 compute-1 nova_compute[226101]: 2025-12-06 07:52:31.669 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:32 compute-1 ceph-mon[81689]: osdmap e388: 3 total, 3 up, 3 in
Dec 06 07:52:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:52:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:32.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:52:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:32.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e389 e389: 3 total, 3 up, 3 in
Dec 06 07:52:33 compute-1 ceph-mon[81689]: pgmap v2969: 305 pgs: 305 active+clean; 676 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 1.0 MiB/s wr, 46 op/s
Dec 06 07:52:33 compute-1 ceph-mon[81689]: osdmap e389: 3 total, 3 up, 3 in
Dec 06 07:52:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.187 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.525 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.525 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.554 226109 DEBUG nova.compute.manager [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:34 compute-1 ceph-mon[81689]: pgmap v2971: 305 pgs: 305 active+clean; 676 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.0 MiB/s wr, 26 op/s
Dec 06 07:52:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/853213052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:34.639 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.639 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:34.642 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.669 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.670 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.677 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.677 226109 INFO nova.compute.claims [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:52:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:34.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:52:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:34.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:52:34 compute-1 nova_compute[226101]: 2025-12-06 07:52:34.820 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:52:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/927376835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.267 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.273 226109 DEBUG nova.compute.provider_tree [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.289 226109 DEBUG nova.scheduler.client.report [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.314 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.315 226109 DEBUG nova.compute.manager [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.361 226109 DEBUG nova.compute.manager [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.361 226109 DEBUG nova.network.neutron [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.384 226109 INFO nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.400 226109 DEBUG nova.compute.manager [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.536 226109 DEBUG nova.compute.manager [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.538 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.538 226109 INFO nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Creating image(s)
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.565 226109 DEBUG nova.storage.rbd_utils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.590 226109 DEBUG nova.storage.rbd_utils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.616 226109 DEBUG nova.storage.rbd_utils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.619 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "057b78433884b2f3b54db7a8a6319a4281d6a489" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.620 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "057b78433884b2f3b54db7a8a6319a4281d6a489" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.624 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:35.645 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.748 226109 DEBUG nova.policy [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0f669e963dc54ad7bebf8dd20341428a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0c8fc5bc237e42bfad505a0bca6681eb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:52:35 compute-1 nova_compute[226101]: 2025-12-06 07:52:35.893 226109 DEBUG nova.virt.libvirt.imagebackend [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/1af12b1c-b711-4118-bb67-f97cd1b83061/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/1af12b1c-b711-4118-bb67-f97cd1b83061/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:52:36 compute-1 nova_compute[226101]: 2025-12-06 07:52:36.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:36 compute-1 nova_compute[226101]: 2025-12-06 07:52:36.671 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:36.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:36.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:36 compute-1 nova_compute[226101]: 2025-12-06 07:52:36.778 226109 DEBUG nova.network.neutron [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Successfully created port: 5a1f8827-eedc-46b4-859d-37533bd8cdb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:52:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/927376835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2536026822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:37 compute-1 sudo[291081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:37 compute-1 sudo[291081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:37 compute-1 sudo[291081]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:37 compute-1 sudo[291106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:52:37 compute-1 sudo[291106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:37 compute-1 sudo[291106]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:37 compute-1 nova_compute[226101]: 2025-12-06 07:52:37.834 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:37 compute-1 nova_compute[226101]: 2025-12-06 07:52:37.900 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489.part --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:37 compute-1 nova_compute[226101]: 2025-12-06 07:52:37.901 226109 DEBUG nova.virt.images [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] 1af12b1c-b711-4118-bb67-f97cd1b83061 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 06 07:52:37 compute-1 nova_compute[226101]: 2025-12-06 07:52:37.902 226109 DEBUG nova.privsep.utils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 06 07:52:37 compute-1 nova_compute[226101]: 2025-12-06 07:52:37.902 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489.part /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.106 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489.part /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489.converted" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:38 compute-1 ceph-mon[81689]: pgmap v2972: 305 pgs: 305 active+clean; 653 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 2.6 MiB/s wr, 84 op/s
Dec 06 07:52:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/171804119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.111 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.175 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489.converted --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.177 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "057b78433884b2f3b54db7a8a6319a4281d6a489" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.208 226109 DEBUG nova.storage.rbd_utils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.211 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.501 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.566 226109 DEBUG nova.storage.rbd_utils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] resizing rbd image 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.638 226109 DEBUG nova.network.neutron [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Successfully updated port: 5a1f8827-eedc-46b4-859d-37533bd8cdb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.656 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.656 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquired lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.657 226109 DEBUG nova.network.neutron [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:52:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:38.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:38.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.749 226109 DEBUG nova.objects.instance [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'migration_context' on Instance uuid 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.767 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.768 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Ensure instance console log exists: /var/lib/nova/instances/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.770 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.771 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.771 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:38 compute-1 nova_compute[226101]: 2025-12-06 07:52:38.849 226109 DEBUG nova.network.neutron [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:52:39 compute-1 ceph-mon[81689]: pgmap v2973: 305 pgs: 305 active+clean; 637 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 62 KiB/s rd, 2.6 MiB/s wr, 83 op/s
Dec 06 07:52:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2547543116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:52:39 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1634490562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:39 compute-1 nova_compute[226101]: 2025-12-06 07:52:39.189 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.041 226109 DEBUG nova.network.neutron [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updating instance_info_cache with network_info: [{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.311 226109 DEBUG nova.compute.manager [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.312 226109 DEBUG nova.compute.manager [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing instance network info cache due to event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.312 226109 DEBUG oslo_concurrency.lockutils [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.350 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Releasing lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.350 226109 DEBUG nova.compute.manager [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Instance network_info: |[{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.351 226109 DEBUG oslo_concurrency.lockutils [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.351 226109 DEBUG nova.network.neutron [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.353 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Start _get_guest_xml network_info=[{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T07:52:27Z,direct_url=<?>,disk_format='qcow2',id=1af12b1c-b711-4118-bb67-f97cd1b83061,min_disk=0,min_ram=0,name='tempest-scenario-img--39023373',owner='0c8fc5bc237e42bfad505a0bca6681eb',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T07:52:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '1af12b1c-b711-4118-bb67-f97cd1b83061'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.358 226109 WARNING nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.366 226109 DEBUG nova.virt.libvirt.host [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.367 226109 DEBUG nova.virt.libvirt.host [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.379 226109 DEBUG nova.virt.libvirt.host [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.380 226109 DEBUG nova.virt.libvirt.host [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.381 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.381 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T07:52:27Z,direct_url=<?>,disk_format='qcow2',id=1af12b1c-b711-4118-bb67-f97cd1b83061,min_disk=0,min_ram=0,name='tempest-scenario-img--39023373',owner='0c8fc5bc237e42bfad505a0bca6681eb',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T07:52:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.381 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.381 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.382 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.382 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.382 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.382 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.382 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.382 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.383 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.383 226109 DEBUG nova.virt.hardware [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.385 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1634490562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2287609545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4213075360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:40 compute-1 nova_compute[226101]: 2025-12-06 07:52:40.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:40.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:40.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:52:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1235752040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:41 compute-1 nova_compute[226101]: 2025-12-06 07:52:41.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:41 compute-1 nova_compute[226101]: 2025-12-06 07:52:41.674 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:41 compute-1 nova_compute[226101]: 2025-12-06 07:52:41.786 226109 DEBUG nova.network.neutron [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updated VIF entry in instance network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:52:41 compute-1 nova_compute[226101]: 2025-12-06 07:52:41.786 226109 DEBUG nova.network.neutron [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updating instance_info_cache with network_info: [{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:52:41 compute-1 nova_compute[226101]: 2025-12-06 07:52:41.805 226109 DEBUG oslo_concurrency.lockutils [req-fc179e70-e27d-4151-b52a-b6ed28864e7e req-b959c506-8de5-474f-945b-61dcec7192a3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:52:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:42.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:43.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.190 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.429 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.458 226109 DEBUG nova.storage.rbd_utils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.462 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:52:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:44.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:52:44 compute-1 ceph-mon[81689]: pgmap v2974: 305 pgs: 305 active+clean; 611 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 81 op/s
Dec 06 07:52:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:52:44 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/888545578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.897 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.899 226109 DEBUG nova.virt.libvirt.vif [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:52:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-340091425',display_name='tempest-TestMinimumBasicScenario-server-340091425',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-340091425',id=171,image_ref='1af12b1c-b711-4118-bb67-f97cd1b83061',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWg+OuPgRfJkF5bYdFQQQkvtf9shaTk87ZwwwRFdmtYywqGfmf219JAc/WhRsuU/0QBvKCF559O8SFUHaVuq2eR12p/xyrs9TGhlFmJ6ye8Pfwsq05h3CgeNSYYLP6AOQ==',key_name='tempest-TestMinimumBasicScenario-980781584',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c8fc5bc237e42bfad505a0bca6681eb',ramdisk_id='',reservation_id='r-qu66fmgi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1af12b1c-b711-4118-bb67-f97cd1b83061',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1310413980',owner_user_name='tempest-TestMinimumBasicScenario-1310413980-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:52:35Z,user_data=None,user_id='0f669e963dc54ad7bebf8dd20341428a',uuid=4d2e7124-8b09-4fd5-bc94-c2c9a3d964af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.899 226109 DEBUG nova.network.os_vif_util [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converting VIF {"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.900 226109 DEBUG nova.network.os_vif_util [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:63:bb,bridge_name='br-int',has_traffic_filtering=True,id=5a1f8827-eedc-46b4-859d-37533bd8cdb6,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a1f8827-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.901 226109 DEBUG nova.objects.instance [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.919 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:52:44 compute-1 nova_compute[226101]:   <uuid>4d2e7124-8b09-4fd5-bc94-c2c9a3d964af</uuid>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   <name>instance-000000ab</name>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <nova:name>tempest-TestMinimumBasicScenario-server-340091425</nova:name>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:52:40</nova:creationTime>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <nova:user uuid="0f669e963dc54ad7bebf8dd20341428a">tempest-TestMinimumBasicScenario-1310413980-project-member</nova:user>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <nova:project uuid="0c8fc5bc237e42bfad505a0bca6681eb">tempest-TestMinimumBasicScenario-1310413980</nova:project>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="1af12b1c-b711-4118-bb67-f97cd1b83061"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <nova:port uuid="5a1f8827-eedc-46b4-859d-37533bd8cdb6">
Dec 06 07:52:44 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <system>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <entry name="serial">4d2e7124-8b09-4fd5-bc94-c2c9a3d964af</entry>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <entry name="uuid">4d2e7124-8b09-4fd5-bc94-c2c9a3d964af</entry>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     </system>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   <os>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   </os>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   <features>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   </features>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk">
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       </source>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk.config">
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       </source>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:52:44 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:bf:63:bb"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <target dev="tap5a1f8827-ee"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af/console.log" append="off"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <video>
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     </video>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:52:44 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:52:44 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:52:44 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:52:44 compute-1 nova_compute[226101]: </domain>
Dec 06 07:52:44 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
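The XML dump above is the guest definition Nova hands to libvirt for this instance (named instance-000000ab later in the log). A minimal sketch of retrieving the same document from libvirt once the guest is defined, assuming the libvirt Python bindings and the qemu:///system URI that nova_compute uses:

    # Sketch: fetch the domain XML for the instance spawned in this log.
    # Assumes libvirt-python and qemu:///system; the UUID is the instance
    # UUID that appears throughout this log.
    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByUUIDString('4d2e7124-8b09-4fd5-bc94-c2c9a3d964af')
        print(dom.XMLDesc())  # same shape as the _get_guest_xml dump above
    finally:
        conn.close()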
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.921 226109 DEBUG nova.compute.manager [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Preparing to wait for external event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.921 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.921 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.921 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.922 226109 DEBUG nova.virt.libvirt.vif [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:52:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-340091425',display_name='tempest-TestMinimumBasicScenario-server-340091425',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-340091425',id=171,image_ref='1af12b1c-b711-4118-bb67-f97cd1b83061',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWg+OuPgRfJkF5bYdFQQQkvtf9shaTk87ZwwwRFdmtYywqGfmf219JAc/WhRsuU/0QBvKCF559O8SFUHaVuq2eR12p/xyrs9TGhlFmJ6ye8Pfwsq05h3CgeNSYYLP6AOQ==',key_name='tempest-TestMinimumBasicScenario-980781584',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c8fc5bc237e42bfad505a0bca6681eb',ramdisk_id='',reservation_id='r-qu66fmgi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1af12b1c-b711-4118-bb67-f97cd1b83061',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1310413980',owner_user_name='tempest-TestMinimumBasicScenario-1310413980-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:52:35Z,user_data=None,user_id='0f669e963dc54ad7bebf8dd20341428a',uuid=4d2e7124-8b09-4fd5-bc94-c2c9a3d964af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.922 226109 DEBUG nova.network.os_vif_util [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converting VIF {"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.923 226109 DEBUG nova.network.os_vif_util [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:63:bb,bridge_name='br-int',has_traffic_filtering=True,id=5a1f8827-eedc-46b4-859d-37533bd8cdb6,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a1f8827-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.923 226109 DEBUG os_vif [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:63:bb,bridge_name='br-int',has_traffic_filtering=True,id=5a1f8827-eedc-46b4-859d-37533bd8cdb6,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a1f8827-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.924 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.924 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.925 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.930 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.930 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a1f8827-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.930 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a1f8827-ee, col_values=(('external_ids', {'iface-id': '5a1f8827-eedc-46b4-859d-37533bd8cdb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:63:bb', 'vm-uuid': '4d2e7124-8b09-4fd5-bc94-c2c9a3d964af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
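The ovsdbapp commands above (AddBridgeCommand, then AddPortCommand plus DbSetCommand) ensure br-int exists, attach the tap device, and tag its Interface row with the Neutron port ID so ovn-controller can claim it. A minimal sketch combining the same commands into one transaction, assuming the ovsdbapp library and the default local OVSDB unix socket path:

    # Sketch of the OVSDB commands logged above, using values from this
    # log. Assumes ovsdbapp and the default OVSDB unix socket.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap5a1f8827-ee', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap5a1f8827-ee',
            ('external_ids',
             {'iface-id': '5a1f8827-eedc-46b4-859d-37533bd8cdb6',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:bf:63:bb',
              'vm-uuid': '4d2e7124-8b09-4fd5-bc94-c2c9a3d964af'})))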
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.932 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:44 compute-1 NetworkManager[49031]: <info>  [1765007564.9332] manager: (tap5a1f8827-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.935 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.938 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.938 226109 INFO os_vif [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:63:bb,bridge_name='br-int',has_traffic_filtering=True,id=5a1f8827-eedc-46b4-859d-37533bd8cdb6,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a1f8827-ee')
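The "Successfully plugged vif" line is os-vif's public entry point; its 'ovs' plugin is what issued the OVSDB commands above. A minimal sketch of driving os-vif directly, assuming the os-vif library; the field values are copied from the VIFOpenVSwitch dump in this log, and the object set here is simplified (the real call also carries a port profile):

    # Sketch: plug a VIF through os-vif the way nova does above. Assumes
    # the os-vif library; values come from the VIFOpenVSwitch dump.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    net = network.Network(id='f08afae8-f952-4a01-a643-61a4dc212937',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='5a1f8827-eedc-46b4-859d-37533bd8cdb6',
        address='fa:16:3e:bf:63:bb',
        vif_name='tap5a1f8827-ee',
        bridge_name='br-int',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='4d2e7124-8b09-4fd5-bc94-c2c9a3d964af',
        name='instance-000000ab')
    os_vif.plug(ovs_vif, inst)  # dispatches to the 'ovs' plugin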
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.991 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.992 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.992 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No VIF found with MAC fa:16:3e:bf:63:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:52:44 compute-1 nova_compute[226101]: 2025-12-06 07:52:44.993 226109 INFO nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Using config drive
Dec 06 07:52:45 compute-1 nova_compute[226101]: 2025-12-06 07:52:45.018 226109 DEBUG nova.storage.rbd_utils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:45.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1235752040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:45 compute-1 ceph-mon[81689]: pgmap v2975: 305 pgs: 305 active+clean; 682 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 123 op/s
Dec 06 07:52:45 compute-1 ceph-mon[81689]: pgmap v2976: 305 pgs: 305 active+clean; 682 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.6 MiB/s wr, 120 op/s
Dec 06 07:52:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/888545578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.131 226109 INFO nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Creating config drive at /var/lib/nova/instances/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af/disk.config
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.140 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfhekemh3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.276 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfhekemh3" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.310 226109 DEBUG nova.storage.rbd_utils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] rbd image 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.315 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af/disk.config 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.483 226109 DEBUG oslo_concurrency.processutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af/disk.config 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.484 226109 INFO nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Deleting local config drive /var/lib/nova/instances/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af/disk.config because it was imported into RBD.
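The two subprocess commands above build the config drive as a local ISO and then push it into the Ceph vms pool, after which the local copy is deleted. A minimal sketch of the same sequence, assuming mkisofs and the rbd CLI are installed and the client.openstack keyring is configured as in this log:

    # Sketch of the config-drive steps logged above: build the ISO,
    # import it into the 'vms' pool, remove the local file. Assumes
    # mkisofs and the rbd CLI with client.openstack credentials; the
    # staging directory stands in for the tmpdir seen in the log.
    import os
    import subprocess

    uuid = '4d2e7124-8b09-4fd5-bc94-c2c9a3d964af'
    iso = f'/var/lib/nova/instances/{uuid}/disk.config'

    subprocess.run(['mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                    '/tmp/metadata_dir'], check=True)
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{uuid}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.remove(iso)  # "Deleting local config drive ..." above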
Dec 06 07:52:46 compute-1 kernel: tap5a1f8827-ee: entered promiscuous mode
Dec 06 07:52:46 compute-1 NetworkManager[49031]: <info>  [1765007566.5316] manager: (tap5a1f8827-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Dec 06 07:52:46 compute-1 ovn_controller[130279]: 2025-12-06T07:52:46Z|00662|binding|INFO|Claiming lport 5a1f8827-eedc-46b4-859d-37533bd8cdb6 for this chassis.
Dec 06 07:52:46 compute-1 ovn_controller[130279]: 2025-12-06T07:52:46Z|00663|binding|INFO|5a1f8827-eedc-46b4-859d-37533bd8cdb6: Claiming fa:16:3e:bf:63:bb 10.100.0.10
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.579 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.589 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:63:bb 10.100.0.10'], port_security=['fa:16:3e:bf:63:bb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4d2e7124-8b09-4fd5-bc94-c2c9a3d964af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f08afae8-f952-4a01-a643-61a4dc212937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c8fc5bc237e42bfad505a0bca6681eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97e2366f-5968-4702-838b-5830aacf120f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9bb6731-402a-4b02-b4cf-9ac913838fd2, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=5a1f8827-eedc-46b4-859d-37533bd8cdb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.590 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 5a1f8827-eedc-46b4-859d-37533bd8cdb6 in datapath f08afae8-f952-4a01-a643-61a4dc212937 bound to our chassis
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.591 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f08afae8-f952-4a01-a643-61a4dc212937
Dec 06 07:52:46 compute-1 ovn_controller[130279]: 2025-12-06T07:52:46Z|00664|binding|INFO|Setting lport 5a1f8827-eedc-46b4-859d-37533bd8cdb6 ovn-installed in OVS
Dec 06 07:52:46 compute-1 ovn_controller[130279]: 2025-12-06T07:52:46Z|00665|binding|INFO|Setting lport 5a1f8827-eedc-46b4-859d-37533bd8cdb6 up in Southbound
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.601 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.603 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bf247d1d-e62d-41b9-9766-4ce91050b53d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.603 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf08afae8-f1 in ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
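The VETH step above runs through the privsep daemon using netlink (the large RTM_NEWLINK replies that follow are its responses). A minimal sketch of the equivalent operation, assuming the pyroute2 library and that the ovnmeta-* namespace already exists under /var/run/netns:

    # Sketch: create the veth pair with one end born inside the ovnmeta
    # namespace, as the agent does above. Assumes pyroute2 and an
    # existing /var/run/netns entry for the namespace.
    from pyroute2 import IPRoute

    ns = 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937'
    with IPRoute() as ipr:
        ipr.link('add', ifname='tapf08afae8-f0', kind='veth',
                 peer={'ifname': 'tapf08afae8-f1', 'net_ns_fd': ns})
        idx = ipr.link_lookup(ifname='tapf08afae8-f0')[0]
        ipr.link('set', index=idx, state='up')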
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.605 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf08afae8-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.605 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[093fae78-5328-4975-85ce-c9851ecf3863]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.606 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8b6e28-126e-427e-a1b1-8243d96d4a46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 systemd-machined[190302]: New machine qemu-78-instance-000000ab.
Dec 06 07:52:46 compute-1 systemd-udevd[291391]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.618 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[33c26978-8874-491a-87dd-d07b913a166f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 systemd[1]: Started Virtual Machine qemu-78-instance-000000ab.
Dec 06 07:52:46 compute-1 NetworkManager[49031]: <info>  [1765007566.6244] device (tap5a1f8827-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:52:46 compute-1 NetworkManager[49031]: <info>  [1765007566.6265] device (tap5a1f8827-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.633 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb57e3f-e4a8-411e-9d9e-a1ebc4831543]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.661 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff84fe8-e0cb-4d6d-a2a6-a0dc2ff1a19e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.665 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[02abf14f-fe0a-4110-b21d-3d5bb6ee78fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 NetworkManager[49031]: <info>  [1765007566.6672] manager: (tapf08afae8-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/301)
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.691 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[a0562488-ed07-454a-8221-85e742ddc161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.693 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a55a82-158f-4e16-b35b-93130a31271a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 NetworkManager[49031]: <info>  [1765007566.7104] device (tapf08afae8-f0): carrier: link connected
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.714 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c137db-b99d-4764-94f9-2d8a9b27a8b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:46.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.730 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[42d43e72-8c91-4e62-8003-9d253ad6acac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf08afae8-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:11:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784621, 'reachable_time': 30789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291423, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.743 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4a16fab0-a800-420e-865b-8a6e7cb081a6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:1118'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 784621, 'tstamp': 784621}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291424, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.759 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3834efd0-03ff-413e-8c3d-a02d18563603]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf08afae8-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:11:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784621, 'reachable_time': 30789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291425, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.793 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[888e91d0-c246-4fbe-86ac-1cf5f93c4bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.867 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a72c9a70-4e8c-47b8-ba6f-98f11259b297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.869 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf08afae8-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.869 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.870 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf08afae8-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.871 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:46 compute-1 NetworkManager[49031]: <info>  [1765007566.8724] manager: (tapf08afae8-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Dec 06 07:52:46 compute-1 kernel: tapf08afae8-f0: entered promiscuous mode
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.875 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.876 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf08afae8-f0, col_values=(('external_ids', {'iface-id': '684342c7-1709-4776-be04-f6d5a6b0b0ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.877 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:46 compute-1 ovn_controller[130279]: 2025-12-06T07:52:46Z|00666|binding|INFO|Releasing lport 684342c7-1709-4776-be04-f6d5a6b0b0ae from this chassis (sb_readonly=0)
Dec 06 07:52:46 compute-1 nova_compute[226101]: 2025-12-06 07:52:46.891 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.892 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.893 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[79a0472b-76e3-4e75-94ae-5982a684ca17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.895 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-f08afae8-f952-4a01-a643-61a4dc212937
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID f08afae8-f952-4a01-a643-61a4dc212937
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:52:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:52:46.896 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'env', 'PROCESS_TAG=haproxy-f08afae8-f952-4a01-a643-61a4dc212937', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f08afae8-f952-4a01-a643-61a4dc212937.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:52:47 compute-1 podman[291493]: 2025-12-06 07:52:47.268320035 +0000 UTC m=+0.067630091 container create a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:52:47 compute-1 systemd[1]: Started libpod-conmon-a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7.scope.
Dec 06 07:52:47 compute-1 podman[291493]: 2025-12-06 07:52:47.22317661 +0000 UTC m=+0.022486686 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:52:47 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:52:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4273cfadb457ab7a4a7d0cb5dde4b40a75cf0fcbfc21793e7997988f1fb92477/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:47 compute-1 ceph-mon[81689]: pgmap v2977: 305 pgs: 305 active+clean; 702 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 113 op/s
Dec 06 07:52:47 compute-1 podman[291493]: 2025-12-06 07:52:47.338535325 +0000 UTC m=+0.137845401 container init a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 07:52:47 compute-1 podman[291493]: 2025-12-06 07:52:47.345617586 +0000 UTC m=+0.144927642 container start a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:52:47 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[291509]: [NOTICE]   (291514) : New worker (291520) forked
Dec 06 07:52:47 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[291509]: [NOTICE]   (291514) : Loading success.
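With haproxy loaded, guest requests to 169.254.169.254:80 inside the ovnmeta namespace are forwarded to the metadata service over the UNIX socket named in the config above, with the X-OVN-Network-ID header added on the way. A minimal illustration of the client side from inside the guest, assuming the requests library is available there:

    # Sketch: what a guest sees through the metadata proxy configured
    # above. Assumes the requests library inside the guest.
    import requests

    r = requests.get(
        'http://169.254.169.254/openstack/latest/meta_data.json',
        timeout=10)
    r.raise_for_status()
    print(r.json()['uuid'])  # 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af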
Dec 06 07:52:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:47.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:47 compute-1 nova_compute[226101]: 2025-12-06 07:52:47.446 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007567.4460301, 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:52:47 compute-1 nova_compute[226101]: 2025-12-06 07:52:47.447 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] VM Started (Lifecycle Event)
Dec 06 07:52:47 compute-1 nova_compute[226101]: 2025-12-06 07:52:47.469 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:52:47 compute-1 nova_compute[226101]: 2025-12-06 07:52:47.473 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007567.4463153, 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:52:47 compute-1 nova_compute[226101]: 2025-12-06 07:52:47.474 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] VM Paused (Lifecycle Event)
Dec 06 07:52:47 compute-1 nova_compute[226101]: 2025-12-06 07:52:47.519 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:52:47 compute-1 nova_compute[226101]: 2025-12-06 07:52:47.525 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:52:47 compute-1 nova_compute[226101]: 2025-12-06 07:52:47.580 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] During sync_power_state the instance has a pending task (spawning). Skip.
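The Started and Paused lines above are libvirt domain lifecycle events relayed by nova's driver; sync_power_state sees the pending spawning task and skips reconciliation. A minimal sketch of subscribing to the same event stream, assuming libvirt-python:

    # Sketch: receive the lifecycle events ("Started", "Paused") that the
    # emit_event lines above are reporting. Assumes libvirt-python.
    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # event/detail are virDomainEventType integer codes
        print(dom.UUIDString(), event, detail)

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)
    while True:
        libvirt.virEventRunDefaultImpl()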
Dec 06 07:52:48 compute-1 ceph-mon[81689]: pgmap v2978: 305 pgs: 305 active+clean; 702 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 79 op/s
Dec 06 07:52:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3784843722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:48.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.193 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.322 226109 DEBUG nova.compute.manager [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.322 226109 DEBUG oslo_concurrency.lockutils [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.323 226109 DEBUG oslo_concurrency.lockutils [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.323 226109 DEBUG oslo_concurrency.lockutils [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.324 226109 DEBUG nova.compute.manager [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Processing event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.324 226109 DEBUG nova.compute.manager [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.325 226109 DEBUG oslo_concurrency.lockutils [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.325 226109 DEBUG oslo_concurrency.lockutils [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.326 226109 DEBUG oslo_concurrency.lockutils [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.326 226109 DEBUG nova.compute.manager [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] No waiting events found dispatching network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.326 226109 WARNING nova.compute.manager [req-7ed35a67-974f-4216-98a8-96305e4f62a0 req-18fae10d-b92f-48a0-b0e8-e0c24bb7c838 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received unexpected event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 for instance with vm_state building and task_state spawning.
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.327 226109 DEBUG nova.compute.manager [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
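[annotation] The block above traces Nova's external-event plumbing: the spawn thread registers a waiter for network-vif-plugged, Neutron delivers the event, pop_instance_event hands it to the waiter under the per-instance "-events" lock, and a second copy with no waiter left produces the WARNING about an unexpected event. A toy sketch of that registry under those assumptions (class and method names are illustrative, not Nova's):

    import threading

    # Hedged sketch of the prepare/pop/wait pattern implied by the log above.
    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, uuid, name):
            with self._lock:
                ev = self._events[(uuid, name)] = threading.Event()
            return ev

        def pop(self, uuid, name):
            with self._lock:  # the '"-events"' lock acquire/release in the log
                ev = self._events.pop((uuid, name), None)
            if ev is None:
                print(f"Received unexpected event {name} for instance {uuid}")
            else:
                ev.set()  # wakes wait_for_instance_event

    events = InstanceEvents()
    waiter = events.prepare("4d2e7124", "network-vif-plugged-5a1f8827")
    events.pop("4d2e7124", "network-vif-plugged-5a1f8827")   # dispatched to the waiter
    events.pop("4d2e7124", "network-vif-plugged-5a1f8827")   # duplicate: "unexpected"
    assert waiter.wait(timeout=1)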
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.332 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007569.3324978, 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.333 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] VM Resumed (Lifecycle Event)
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.335 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.339 226109 INFO nova.virt.libvirt.driver [-] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Instance spawned successfully.
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.339 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:52:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:52:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:49.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.405 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.408 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.539 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.539 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.540 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.540 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.541 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.541 226109 DEBUG nova.virt.libvirt.driver [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
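[annotation] The six "Found default for ..." lines above record which bus/model defaults the libvirt driver chose for image properties the image did not define, so later rebuilds keep the same hardware layout. A minimal sketch of fill-only-missing-keys registration, using exactly the defaults printed above (the function name is hypothetical):

    # Hedged sketch: register defaults only for properties the image left unset.
    DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_details(image_props):
        for key, default in DEFAULTS.items():
            if key not in image_props:  # only record what was left undefined
                image_props[key] = default
                print(f"Found default for {key} of {default}")
        return image_props

    register_undefined_details({"hw_disk_bus": "scsi"})  # explicit choice survives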
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.666 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.926 226109 INFO nova.compute.manager [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Took 14.39 seconds to spawn the instance on the hypervisor.
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.927 226109 DEBUG nova.compute.manager [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:52:49 compute-1 nova_compute[226101]: 2025-12-06 07:52:49.942 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:50 compute-1 nova_compute[226101]: 2025-12-06 07:52:50.026 226109 INFO nova.compute.manager [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Took 15.38 seconds to build instance.
Dec 06 07:52:50 compute-1 nova_compute[226101]: 2025-12-06 07:52:50.048 226109 DEBUG oslo_concurrency.lockutils [None req-b8430689-a754-4dd4-9571-6d25e25074d4 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
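[annotation] The oslo_concurrency.lockutils lines throughout this log follow one pattern: Acquiring, acquired with the wait time, released with the hold time (here 15.523s for the whole build). A small sketch of a decorator that produces the same trace, with a plain threading.Lock standing in for oslo's named locks:

    import functools, threading, time

    # Hedged sketch of the Acquiring/acquired/released trace seen above;
    # not oslo's implementation, just the same observable behavior.
    _locks = {}

    def synchronized(name):
        lock = _locks.setdefault(name, threading.Lock())
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                print(f'Acquiring lock "{name}" by "{fn.__qualname__}"')
                t0 = time.monotonic()
                with lock:
                    print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')
            return inner
        return wrap

    @synchronized("4d2e7124-8b09-4fd5-bc94-c2c9a3d964af")
    def do_build():
        pass

    do_build()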
Dec 06 07:52:50 compute-1 ceph-mon[81689]: pgmap v2979: 305 pgs: 305 active+clean; 706 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Dec 06 07:52:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:52:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:50.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:52:51 compute-1 podman[291531]: 2025-12-06 07:52:51.070410537 +0000 UTC m=+0.052502725 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 07:52:51 compute-1 podman[291530]: 2025-12-06 07:52:51.097701541 +0000 UTC m=+0.082176762 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:52:51 compute-1 podman[291532]: 2025-12-06 07:52:51.10360481 +0000 UTC m=+0.081586617 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:52:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:51.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:52 compute-1 ceph-mon[81689]: pgmap v2980: 305 pgs: 305 active+clean; 714 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 172 op/s
Dec 06 07:52:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:52.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:53.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
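[annotation] The recurring beast lines above are anonymous "HEAD / HTTP/1.0" probes arriving every couple of seconds from 192.168.122.100 and .102: load-balancer-style health checks against radosgw. A sketch of such a probe; the port is an assumption (beast commonly listens on 7480, deployments vary):

    import http.client

    # Hedged sketch of the health probe producing the 'HEAD / ... 200' lines
    # above; host/port here are this lab's values, not general defaults.
    def rgw_alive(host, port=7480, timeout=2.0):
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()

    print(rgw_alive("192.168.122.101"))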
Dec 06 07:52:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:54 compute-1 nova_compute[226101]: 2025-12-06 07:52:54.196 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:52:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:54.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:52:54 compute-1 nova_compute[226101]: 2025-12-06 07:52:54.944 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:55.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:55 compute-1 ceph-mon[81689]: pgmap v2981: 305 pgs: 305 active+clean; 714 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.2 MiB/s wr, 123 op/s
Dec 06 07:52:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:56.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:56 compute-1 ceph-mon[81689]: pgmap v2982: 305 pgs: 305 active+clean; 752 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 195 op/s
Dec 06 07:52:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:57.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:52:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:58.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:52:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:59 compute-1 ceph-mon[81689]: pgmap v2983: 305 pgs: 305 active+clean; 752 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.0 MiB/s wr, 184 op/s
Dec 06 07:52:59 compute-1 nova_compute[226101]: 2025-12-06 07:52:59.232 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:52:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:59.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:59 compute-1 nova_compute[226101]: 2025-12-06 07:52:59.945 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.084 226109 DEBUG oslo_concurrency.lockutils [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.084 226109 DEBUG oslo_concurrency.lockutils [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.102 226109 DEBUG nova.objects.instance [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'flavor' on Instance uuid 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.142 226109 DEBUG oslo_concurrency.lockutils [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.369 226109 DEBUG oslo_concurrency.lockutils [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.370 226109 DEBUG oslo_concurrency.lockutils [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.370 226109 INFO nova.compute.manager [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Attaching volume 903b6ad8-93b6-4571-9ff7-80ce95667ba0 to /dev/vdb
Dec 06 07:53:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/901956373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:53:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3864168267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.570 226109 DEBUG os_brick.utils [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.571 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.581 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.581 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[03f5a82d-bf43-4390-9039-82da2756eaa6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.582 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.594 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.594 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbb4d14-3cf7-4b54-8c5f-a2dd25fe14dc]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.595 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.604 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.605 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[d26ec509-31d4-4c78-a2a2-9832675cc500]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.606 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[56c816a1-5db5-4373-b418-c1c08339b83e]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.607 226109 DEBUG oslo_concurrency.processutils [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.637 226109 DEBUG oslo_concurrency.processutils [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.641 226109 DEBUG os_brick.initiator.connectors.lightos [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.642 226109 DEBUG os_brick.initiator.connectors.lightos [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.642 226109 DEBUG os_brick.initiator.connectors.lightos [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.642 226109 DEBUG os_brick.utils [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
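[annotation] The trace above shows os-brick assembling connector properties for the volume attach by shelling out to the exact commands logged ("multipathd show status", "cat /etc/iscsi/initiatorname.iscsi", "findmnt -v / -n -o SOURCE") and returning the dict on the "<== get_connector_properties" line. A simplified sketch that mirrors those CMD lines (real os-brick runs them through privsep and parses more carefully):

    import subprocess

    # Hedged sketch of gathering the connector facts seen above.
    def run(*cmd):
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    def connector_properties():
        props = {"platform": "x86_64", "os_type": "linux", "multipath": False}
        try:
            run("multipathd", "show", "status")  # rc 0 => daemon is answering
            props["multipath"] = True
        except (OSError, subprocess.CalledProcessError):
            pass
        try:
            line = run("cat", "/etc/iscsi/initiatorname.iscsi")
            props["initiator"] = line.split("InitiatorName=", 1)[1].strip()
        except (OSError, subprocess.CalledProcessError, IndexError):
            pass
        try:
            props["root_source"] = run("findmnt", "-v", "/", "-n", "-o", "SOURCE").strip()
        except (OSError, subprocess.CalledProcessError):
            pass
        return props

    print(connector_properties())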
Dec 06 07:53:00 compute-1 nova_compute[226101]: 2025-12-06 07:53:00.643 226109 DEBUG nova.virt.block_device [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updating existing volume attachment record: 1f746326-4f3f-419f-9f12-fd6263259048 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:53:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:00.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:01.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:01 compute-1 nova_compute[226101]: 2025-12-06 07:53:01.439 226109 DEBUG nova.objects.instance [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'flavor' on Instance uuid 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:53:01 compute-1 nova_compute[226101]: 2025-12-06 07:53:01.465 226109 DEBUG nova.virt.libvirt.driver [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Attempting to attach volume 903b6ad8-93b6-4571-9ff7-80ce95667ba0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:53:01 compute-1 nova_compute[226101]: 2025-12-06 07:53:01.468 226109 DEBUG nova.virt.libvirt.guest [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:53:01 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:53:01 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-903b6ad8-93b6-4571-9ff7-80ce95667ba0">
Dec 06 07:53:01 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:53:01 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:53:01 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:53:01 compute-1 nova_compute[226101]:   </source>
Dec 06 07:53:01 compute-1 nova_compute[226101]:   <auth username="openstack">
Dec 06 07:53:01 compute-1 nova_compute[226101]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:53:01 compute-1 nova_compute[226101]:   </auth>
Dec 06 07:53:01 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:53:01 compute-1 nova_compute[226101]:   <serial>903b6ad8-93b6-4571-9ff7-80ce95667ba0</serial>
Dec 06 07:53:01 compute-1 nova_compute[226101]: </disk>
Dec 06 07:53:01 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
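[annotation] The attach device XML above is the RBD-backed disk definition Nova hands to libvirt: qemu raw driver with discard=unmap, the three Ceph monitors as source hosts, cephx auth via a libvirt secret, and target vdb on virtio (which, per the warning a few lines earlier, means trim commands will not actually reach the device on this bus). A sketch that rebuilds the same XML with the standard library; the monitor addresses and secret UUID are this lab's values:

    import xml.etree.ElementTree as ET

    # Hedged sketch reconstructing the attach-device XML printed above.
    def rbd_disk_xml(volume, dev, monitors, secret_uuid):
        disk = ET.Element("disk", type="network", device="disk")
        ET.SubElement(disk, "driver", name="qemu", type="raw",
                      cache="none", discard="unmap")
        src = ET.SubElement(disk, "source", protocol="rbd",
                            name=f"volumes/volume-{volume}")
        for mon in monitors:
            ET.SubElement(src, "host", name=mon, port="6789")
        auth = ET.SubElement(disk, "auth", username="openstack")
        ET.SubElement(auth, "secret", type="ceph", uuid=secret_uuid)
        ET.SubElement(disk, "target", dev=dev, bus="virtio")
        ET.SubElement(disk, "serial").text = volume
        return ET.tostring(disk, encoding="unicode")

    print(rbd_disk_xml("903b6ad8-93b6-4571-9ff7-80ce95667ba0", "vdb",
                       ["192.168.122.100", "192.168.122.102", "192.168.122.101"],
                       "40a1bae4-cf76-5610-8dab-c75116dfe0bb"))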
Dec 06 07:53:01 compute-1 ceph-mon[81689]: pgmap v2984: 305 pgs: 305 active+clean; 744 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.0 MiB/s wr, 194 op/s
Dec 06 07:53:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1978308452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:53:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:01.673 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:01.674 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:01.675 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:01 compute-1 nova_compute[226101]: 2025-12-06 07:53:01.713 226109 DEBUG nova.virt.libvirt.driver [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:53:01 compute-1 nova_compute[226101]: 2025-12-06 07:53:01.713 226109 DEBUG nova.virt.libvirt.driver [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:53:01 compute-1 nova_compute[226101]: 2025-12-06 07:53:01.714 226109 DEBUG nova.virt.libvirt.driver [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:53:01 compute-1 nova_compute[226101]: 2025-12-06 07:53:01.714 226109 DEBUG nova.virt.libvirt.driver [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] No VIF found with MAC fa:16:3e:bf:63:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:53:01 compute-1 nova_compute[226101]: 2025-12-06 07:53:01.944 226109 DEBUG oslo_concurrency.lockutils [None req-267dfc78-230f-4a16-84a4-9a6c5509cb3a 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:02 compute-1 ceph-mon[81689]: pgmap v2985: 305 pgs: 305 active+clean; 720 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Dec 06 07:53:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3308678200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:02.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:03 compute-1 ovn_controller[130279]: 2025-12-06T07:53:03Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:63:bb 10.100.0.10
Dec 06 07:53:03 compute-1 ovn_controller[130279]: 2025-12-06T07:53:03Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:63:bb 10.100.0.10
Dec 06 07:53:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:03.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:04 compute-1 nova_compute[226101]: 2025-12-06 07:53:04.234 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:04.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:04 compute-1 ceph-mon[81689]: pgmap v2986: 305 pgs: 305 active+clean; 720 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 161 op/s
Dec 06 07:53:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2434060058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2434060058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
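[annotation] The two mon dispatch lines above show client.openstack issuing "df" and "osd pool get-quota" for the volumes pool, the usual capacity check before a volume operation. A sketch of issuing the same two commands through the ceph CLI with JSON output (requires a reachable cluster and keyring; newer Ceph releases report "stored" alongside "bytes_used"):

    import json, subprocess

    # Hedged sketch of the two mon commands dispatched above.
    def pool_usage(pool="volumes"):
        df = json.loads(subprocess.run(
            ["ceph", "df", "--format", "json"],
            capture_output=True, text=True, check=True).stdout)
        quota = json.loads(subprocess.run(
            ["ceph", "osd", "pool", "get-quota", pool, "--format", "json"],
            capture_output=True, text=True, check=True).stdout)
        stats = next(p["stats"] for p in df["pools"] if p["name"] == pool)
        return stats["bytes_used"], quota.get("quota_max_bytes", 0)

    # used, max_bytes = pool_usage()  # left commented: needs a live cluster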
Dec 06 07:53:04 compute-1 nova_compute[226101]: 2025-12-06 07:53:04.948 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e390 e390: 3 total, 3 up, 3 in
Dec 06 07:53:05 compute-1 nova_compute[226101]: 2025-12-06 07:53:05.360 226109 DEBUG nova.compute.manager [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:05 compute-1 nova_compute[226101]: 2025-12-06 07:53:05.361 226109 DEBUG nova.compute.manager [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing instance network info cache due to event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:53:05 compute-1 nova_compute[226101]: 2025-12-06 07:53:05.361 226109 DEBUG oslo_concurrency.lockutils [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:53:05 compute-1 nova_compute[226101]: 2025-12-06 07:53:05.361 226109 DEBUG oslo_concurrency.lockutils [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:53:05 compute-1 nova_compute[226101]: 2025-12-06 07:53:05.361 226109 DEBUG nova.network.neutron [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:53:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:05.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:05 compute-1 nova_compute[226101]: 2025-12-06 07:53:05.894 226109 DEBUG nova.compute.manager [req-b90772ae-b8ef-4bbf-a44b-1550fd0c8f6e req-74cf56c7-c974-4db1-9358-f4aa7a0e3c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:05 compute-1 nova_compute[226101]: 2025-12-06 07:53:05.894 226109 DEBUG nova.compute.manager [req-b90772ae-b8ef-4bbf-a44b-1550fd0c8f6e req-74cf56c7-c974-4db1-9358-f4aa7a0e3c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing instance network info cache due to event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:53:05 compute-1 nova_compute[226101]: 2025-12-06 07:53:05.894 226109 DEBUG oslo_concurrency.lockutils [req-b90772ae-b8ef-4bbf-a44b-1550fd0c8f6e req-74cf56c7-c974-4db1-9358-f4aa7a0e3c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:53:06 compute-1 ceph-mon[81689]: osdmap e390: 3 total, 3 up, 3 in
Dec 06 07:53:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:06.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:07 compute-1 ceph-mon[81689]: pgmap v2988: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 710 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 267 op/s
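[annotation] The pgmap line above is the first in this stretch where PGs leave plain active+clean (2 snaptrim, 6 snaptrim_wait after the snapshot activity). A small sketch that parses the state summary of such a line into counts, useful when grepping long monitor logs:

    import re

    # Hedged sketch: parse the "N pgs: ..." portion of a pgmap line.
    LINE = ("pgmap v2988: 305 pgs: 2 active+clean+snaptrim, "
            "6 active+clean+snaptrim_wait, 297 active+clean; ...")

    def pg_states(line):
        states = {}
        body = re.search(r"\d+ pgs: ([^;]+)", line).group(1)
        for count, state in re.findall(r"(\d+) ([a-z+_]+)", body):
            states[state] = int(count)
        return states

    assert pg_states(LINE) == {"active+clean+snaptrim": 2,
                               "active+clean+snaptrim_wait": 6,
                               "active+clean": 297}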
Dec 06 07:53:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:07.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e391 e391: 3 total, 3 up, 3 in
Dec 06 07:53:07 compute-1 nova_compute[226101]: 2025-12-06 07:53:07.962 226109 DEBUG nova.compute.manager [req-99cdca06-8692-4e27-9f6c-f66249e1486b req-0cb0ffd8-a160-4cd5-a7f2-793b98617fdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:07 compute-1 nova_compute[226101]: 2025-12-06 07:53:07.962 226109 DEBUG nova.compute.manager [req-99cdca06-8692-4e27-9f6c-f66249e1486b req-0cb0ffd8-a160-4cd5-a7f2-793b98617fdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing instance network info cache due to event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:53:07 compute-1 nova_compute[226101]: 2025-12-06 07:53:07.962 226109 DEBUG oslo_concurrency.lockutils [req-99cdca06-8692-4e27-9f6c-f66249e1486b req-0cb0ffd8-a160-4cd5-a7f2-793b98617fdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:53:08 compute-1 nova_compute[226101]: 2025-12-06 07:53:08.425 226109 DEBUG nova.network.neutron [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updated VIF entry in instance network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:53:08 compute-1 nova_compute[226101]: 2025-12-06 07:53:08.425 226109 DEBUG nova.network.neutron [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updating instance_info_cache with network_info: [{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
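[annotation] The network_info structure cached above is plain JSON: one entry per VIF, each carrying its network, subnets, fixed IPs, and any floating IPs mapped onto them (here 10.100.0.10 behind 192.168.122.203). A sketch that walks that shape; the literal below is a trimmed excerpt of the logged record:

    import json

    # Hedged sketch: extract fixed and floating addresses from network_info.
    cache = json.loads("""[{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.10",
        "floating_ips": [{"address": "192.168.122.203"}]}]}]}}]""")

    for vif in cache:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)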
Dec 06 07:53:08 compute-1 nova_compute[226101]: 2025-12-06 07:53:08.448 226109 DEBUG oslo_concurrency.lockutils [req-d533845f-9bed-484a-8d9d-c5a0d1c1afee req-e9e8a393-e042-4b9f-9233-8365ca235f48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:53:08 compute-1 nova_compute[226101]: 2025-12-06 07:53:08.449 226109 DEBUG oslo_concurrency.lockutils [req-b90772ae-b8ef-4bbf-a44b-1550fd0c8f6e req-74cf56c7-c974-4db1-9358-f4aa7a0e3c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:53:08 compute-1 nova_compute[226101]: 2025-12-06 07:53:08.449 226109 DEBUG nova.network.neutron [req-b90772ae-b8ef-4bbf-a44b-1550fd0c8f6e req-74cf56c7-c974-4db1-9358-f4aa7a0e3c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:53:08 compute-1 ceph-mon[81689]: pgmap v2989: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.5 MiB/s wr, 325 op/s
Dec 06 07:53:08 compute-1 ceph-mon[81689]: osdmap e391: 3 total, 3 up, 3 in
Dec 06 07:53:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:08.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:09 compute-1 nova_compute[226101]: 2025-12-06 07:53:09.267 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:09.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:10 compute-1 nova_compute[226101]: 2025-12-06 07:53:10.403 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:10.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2874821777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2874821777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:11.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.662 226109 DEBUG nova.compute.manager [req-e30b09fc-358d-4410-91d1-53a82e5fd9a0 req-8f572e64-5862-42e0-b4c3-1ebb47d41592 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Received event network-changed-c775a9ab-4177-4c1a-880a-35b1b701d488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.663 226109 DEBUG nova.compute.manager [req-e30b09fc-358d-4410-91d1-53a82e5fd9a0 req-8f572e64-5862-42e0-b4c3-1ebb47d41592 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Refreshing instance network info cache due to event network-changed-c775a9ab-4177-4c1a-880a-35b1b701d488. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.663 226109 DEBUG oslo_concurrency.lockutils [req-e30b09fc-358d-4410-91d1-53a82e5fd9a0 req-8f572e64-5862-42e0-b4c3-1ebb47d41592 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.663 226109 DEBUG oslo_concurrency.lockutils [req-e30b09fc-358d-4410-91d1-53a82e5fd9a0 req-8f572e64-5862-42e0-b4c3-1ebb47d41592 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.663 226109 DEBUG nova.network.neutron [req-e30b09fc-358d-4410-91d1-53a82e5fd9a0 req-8f572e64-5862-42e0-b4c3-1ebb47d41592 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Refreshing network info cache for port c775a9ab-4177-4c1a-880a-35b1b701d488 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.741 226109 DEBUG oslo_concurrency.lockutils [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.741 226109 DEBUG oslo_concurrency.lockutils [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.742 226109 DEBUG oslo_concurrency.lockutils [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.742 226109 DEBUG oslo_concurrency.lockutils [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.742 226109 DEBUG oslo_concurrency.lockutils [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.743 226109 INFO nova.compute.manager [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Terminating instance
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.745 226109 DEBUG nova.compute.manager [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:53:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:12.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.756 226109 DEBUG nova.network.neutron [req-b90772ae-b8ef-4bbf-a44b-1550fd0c8f6e req-74cf56c7-c974-4db1-9358-f4aa7a0e3c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updated VIF entry in instance network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.756 226109 DEBUG nova.network.neutron [req-b90772ae-b8ef-4bbf-a44b-1550fd0c8f6e req-74cf56c7-c974-4db1-9358-f4aa7a0e3c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updating instance_info_cache with network_info: [{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.771 226109 DEBUG oslo_concurrency.lockutils [req-b90772ae-b8ef-4bbf-a44b-1550fd0c8f6e req-74cf56c7-c974-4db1-9358-f4aa7a0e3c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.772 226109 DEBUG oslo_concurrency.lockutils [req-99cdca06-8692-4e27-9f6c-f66249e1486b req-0cb0ffd8-a160-4cd5-a7f2-793b98617fdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:53:12 compute-1 nova_compute[226101]: 2025-12-06 07:53:12.772 226109 DEBUG nova.network.neutron [req-99cdca06-8692-4e27-9f6c-f66249e1486b req-0cb0ffd8-a160-4cd5-a7f2-793b98617fdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:53:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:13.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:14 compute-1 ceph-mon[81689]: pgmap v2991: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 654 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 319 op/s
Dec 06 07:53:14 compute-1 kernel: tapc775a9ab-41 (unregistering): left promiscuous mode
Dec 06 07:53:14 compute-1 NetworkManager[49031]: <info>  [1765007594.1285] device (tapc775a9ab-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:53:14 compute-1 ovn_controller[130279]: 2025-12-06T07:53:14Z|00667|binding|INFO|Releasing lport c775a9ab-4177-4c1a-880a-35b1b701d488 from this chassis (sb_readonly=0)
Dec 06 07:53:14 compute-1 ovn_controller[130279]: 2025-12-06T07:53:14Z|00668|binding|INFO|Setting lport c775a9ab-4177-4c1a-880a-35b1b701d488 down in Southbound
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.136 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:14 compute-1 ovn_controller[130279]: 2025-12-06T07:53:14Z|00669|binding|INFO|Removing iface tapc775a9ab-41 ovn-installed in OVS
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.139 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.151 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:14 compute-1 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Dec 06 07:53:14 compute-1 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a7.scope: Consumed 20.770s CPU time.
Dec 06 07:53:14 compute-1 systemd-machined[190302]: Machine qemu-75-instance-000000a7 terminated.
Dec 06 07:53:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.270 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:14 compute-1 kernel: tapc775a9ab-41: entered promiscuous mode
Dec 06 07:53:14 compute-1 NetworkManager[49031]: <info>  [1765007594.3676] manager: (tapc775a9ab-41): new Tun device (/org/freedesktop/NetworkManager/Devices/303)
Dec 06 07:53:14 compute-1 kernel: tapc775a9ab-41 (unregistering): left promiscuous mode
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.372 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.387 226109 INFO nova.virt.libvirt.driver [-] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Instance destroyed successfully.
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.388 226109 DEBUG nova.objects.instance [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'resources' on Instance uuid e7d5d854-2a1f-485b-931a-4ec90cf7ba04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:53:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:14.528 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:c8:49 10.100.0.4'], port_security=['fa:16:3e:66:c8:49 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e7d5d854-2a1f-485b-931a-4ec90cf7ba04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4cf19b89a6d46bca307e65731a9dd21', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8dd9f4b-9afe-430e-a0a0-846e8785c631', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1ea0b24-813d-4f2d-b582-53b5b07aa43a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=c775a9ab-4177-4c1a-880a-35b1b701d488) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:53:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:14.530 139580 INFO neutron.agent.ovn.metadata.agent [-] Port c775a9ab-4177-4c1a-880a-35b1b701d488 in datapath 9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 unbound from our chassis
Dec 06 07:53:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:14.534 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:53:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:14.535 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2b62f8d0-679f-4a57-928e-faacdf62eb1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:14.536 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 namespace which is not needed anymore
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.552 226109 DEBUG nova.virt.libvirt.vif [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:50:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-2011496999',display_name='tempest-TestStampPattern-server-2011496999',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-teststamppattern-server-2011496999',id=167,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM8KhRxaTrkKNzMUybnifFqVhR7VOW5ilrhcPN+BlOV2c9vQAH2tT4hPBYJpZ93aPVMmrWQGW35OWGQh34F5+BdF2On//RqgE6BOka+CpM6HEuYW/HMTwic5wOTQHp91yg==',key_name='tempest-TestStampPattern-1707395411',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:50:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4cf19b89a6d46bca307e65731a9dd21',ramdisk_id='',reservation_id='r-yxfbgt70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-1318067975',owner_user_name='tempest-TestStampPattern-1318067975-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:51:41Z,user_data=None,user_id='4962bc7b172346e19d127b46ea2d7a11',uuid=e7d5d854-2a1f-485b-931a-4ec90cf7ba04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.553 226109 DEBUG nova.network.os_vif_util [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converting VIF {"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.553 226109 DEBUG nova.network.os_vif_util [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:c8:49,bridge_name='br-int',has_traffic_filtering=True,id=c775a9ab-4177-4c1a-880a-35b1b701d488,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775a9ab-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.554 226109 DEBUG os_vif [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:c8:49,bridge_name='br-int',has_traffic_filtering=True,id=c775a9ab-4177-4c1a-880a-35b1b701d488,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775a9ab-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.555 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.556 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc775a9ab-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.557 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.559 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:14 compute-1 nova_compute[226101]: 2025-12-06 07:53:14.561 226109 INFO os_vif [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:c8:49,bridge_name='br-int',has_traffic_filtering=True,id=c775a9ab-4177-4c1a-880a-35b1b701d488,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc775a9ab-41')
Dec 06 07:53:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:14.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:14 compute-1 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[289172]: [NOTICE]   (289178) : haproxy version is 2.8.14-c23fe91
Dec 06 07:53:14 compute-1 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[289172]: [NOTICE]   (289178) : path to executable is /usr/sbin/haproxy
Dec 06 07:53:14 compute-1 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[289172]: [WARNING]  (289178) : Exiting Master process...
Dec 06 07:53:14 compute-1 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[289172]: [WARNING]  (289178) : Exiting Master process...
Dec 06 07:53:14 compute-1 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[289172]: [ALERT]    (289178) : Current worker (289180) exited with code 143 (Terminated)
Dec 06 07:53:14 compute-1 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[289172]: [WARNING]  (289178) : All workers exited. Exiting... (0)
Dec 06 07:53:14 compute-1 systemd[1]: libpod-27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877.scope: Deactivated successfully.
Dec 06 07:53:14 compute-1 podman[291671]: 2025-12-06 07:53:14.946366454 +0000 UTC m=+0.322113170 container died 27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:53:15 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877-userdata-shm.mount: Deactivated successfully.
Dec 06 07:53:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-d2c320bd327b08948367d97b9fc8498a753574e4fed40c2107aaf466027ef86f-merged.mount: Deactivated successfully.
Dec 06 07:53:15 compute-1 podman[291671]: 2025-12-06 07:53:15.032425531 +0000 UTC m=+0.408172237 container cleanup 27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:53:15 compute-1 systemd[1]: libpod-conmon-27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877.scope: Deactivated successfully.
Dec 06 07:53:15 compute-1 ceph-mon[81689]: pgmap v2992: 305 pgs: 305 active+clean; 636 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.8 MiB/s wr, 349 op/s
Dec 06 07:53:15 compute-1 ceph-mon[81689]: pgmap v2993: 305 pgs: 305 active+clean; 636 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 252 op/s
Dec 06 07:53:15 compute-1 podman[291700]: 2025-12-06 07:53:15.337786789 +0000 UTC m=+0.286289726 container remove 27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:53:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:15.344 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe0f3dd-9836-4db0-ad1f-127307a15394]: (4, ('Sat Dec  6 07:53:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 (27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877)\n27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877\nSat Dec  6 07:53:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 (27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877)\n27f5b08d0fece3ec2e3fd51f93c84b428c92f0434309b9201a8d4f0e1ca1d877\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:15.345 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[19617a4f-e4bd-4102-8569-87b582e6c2ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:15.346 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e0e5f36-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:15 compute-1 nova_compute[226101]: 2025-12-06 07:53:15.348 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:15 compute-1 kernel: tap9e0e5f36-40: left promiscuous mode
Dec 06 07:53:15 compute-1 nova_compute[226101]: 2025-12-06 07:53:15.376 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:15.379 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4a29b0-272d-4133-8cf5-7f2714359272]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:15.398 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a05e10ff-3ffc-449e-850c-3db5549c4851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:15.399 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f5e17208-f34e-4140-a918-e9ca14470a4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:15.413 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6057c538-1da7-46a2-9218-2b44ebb32aa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771785, 'reachable_time': 35573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291715, 'error': None, 'target': 'ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:15.415 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:53:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:15.416 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[14239367-e061-409e-be1b-19ec3ea7b67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:15 compute-1 systemd[1]: run-netns-ovnmeta\x2d9e0e5f36\x2d40fa\x2d4d3b\x2db8ee\x2d8071f7ac21d7.mount: Deactivated successfully.
Dec 06 07:53:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:15.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:16 compute-1 ceph-mon[81689]: pgmap v2994: 305 pgs: 305 active+clean; 636 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 368 KiB/s wr, 159 op/s
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.321 226109 INFO nova.virt.libvirt.driver [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Deleting instance files /var/lib/nova/instances/e7d5d854-2a1f-485b-931a-4ec90cf7ba04_del
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.321 226109 INFO nova.virt.libvirt.driver [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Deletion of /var/lib/nova/instances/e7d5d854-2a1f-485b-931a-4ec90cf7ba04_del complete
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.436 226109 INFO nova.compute.manager [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Took 3.69 seconds to destroy the instance on the hypervisor.
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.437 226109 DEBUG oslo.service.loopingcall [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.437 226109 DEBUG nova.compute.manager [-] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.437 226109 DEBUG nova.network.neutron [-] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:53:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:16.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.976 226109 DEBUG nova.compute.manager [req-de8a9fa8-04ef-4e60-9a1e-c827da7e5284 req-534d5b4d-7c78-4ae8-9147-927d448a35cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Received event network-vif-unplugged-c775a9ab-4177-4c1a-880a-35b1b701d488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.977 226109 DEBUG oslo_concurrency.lockutils [req-de8a9fa8-04ef-4e60-9a1e-c827da7e5284 req-534d5b4d-7c78-4ae8-9147-927d448a35cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.977 226109 DEBUG oslo_concurrency.lockutils [req-de8a9fa8-04ef-4e60-9a1e-c827da7e5284 req-534d5b4d-7c78-4ae8-9147-927d448a35cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.977 226109 DEBUG oslo_concurrency.lockutils [req-de8a9fa8-04ef-4e60-9a1e-c827da7e5284 req-534d5b4d-7c78-4ae8-9147-927d448a35cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.977 226109 DEBUG nova.compute.manager [req-de8a9fa8-04ef-4e60-9a1e-c827da7e5284 req-534d5b4d-7c78-4ae8-9147-927d448a35cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] No waiting events found dispatching network-vif-unplugged-c775a9ab-4177-4c1a-880a-35b1b701d488 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:16 compute-1 nova_compute[226101]: 2025-12-06 07:53:16.978 226109 DEBUG nova.compute.manager [req-de8a9fa8-04ef-4e60-9a1e-c827da7e5284 req-534d5b4d-7c78-4ae8-9147-927d448a35cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Received event network-vif-unplugged-c775a9ab-4177-4c1a-880a-35b1b701d488 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:53:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:17.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:17 compute-1 nova_compute[226101]: 2025-12-06 07:53:17.702 226109 DEBUG nova.network.neutron [req-e30b09fc-358d-4410-91d1-53a82e5fd9a0 req-8f572e64-5862-42e0-b4c3-1ebb47d41592 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updated VIF entry in instance network info cache for port c775a9ab-4177-4c1a-880a-35b1b701d488. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:53:17 compute-1 nova_compute[226101]: 2025-12-06 07:53:17.703 226109 DEBUG nova.network.neutron [req-e30b09fc-358d-4410-91d1-53a82e5fd9a0 req-8f572e64-5862-42e0-b4c3-1ebb47d41592 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updating instance_info_cache with network_info: [{"id": "c775a9ab-4177-4c1a-880a-35b1b701d488", "address": "fa:16:3e:66:c8:49", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc775a9ab-41", "ovs_interfaceid": "c775a9ab-4177-4c1a-880a-35b1b701d488", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:17 compute-1 nova_compute[226101]: 2025-12-06 07:53:17.784 226109 DEBUG oslo_concurrency.lockutils [req-e30b09fc-358d-4410-91d1-53a82e5fd9a0 req-8f572e64-5862-42e0-b4c3-1ebb47d41592 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e7d5d854-2a1f-485b-931a-4ec90cf7ba04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:53:17 compute-1 nova_compute[226101]: 2025-12-06 07:53:17.925 226109 DEBUG nova.network.neutron [req-99cdca06-8692-4e27-9f6c-f66249e1486b req-0cb0ffd8-a160-4cd5-a7f2-793b98617fdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updated VIF entry in instance network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:53:17 compute-1 nova_compute[226101]: 2025-12-06 07:53:17.925 226109 DEBUG nova.network.neutron [req-99cdca06-8692-4e27-9f6c-f66249e1486b req-0cb0ffd8-a160-4cd5-a7f2-793b98617fdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updating instance_info_cache with network_info: [{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:18 compute-1 nova_compute[226101]: 2025-12-06 07:53:18.203 226109 DEBUG oslo_concurrency.lockutils [req-99cdca06-8692-4e27-9f6c-f66249e1486b req-0cb0ffd8-a160-4cd5-a7f2-793b98617fdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:53:18 compute-1 nova_compute[226101]: 2025-12-06 07:53:18.292 226109 DEBUG nova.network.neutron [-] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:18 compute-1 nova_compute[226101]: 2025-12-06 07:53:18.297 226109 DEBUG nova.compute.manager [req-d5822642-fec7-40d3-99fe-2085436dedaa req-e6ad4df4-c7bb-453a-be10-604854489351 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Received event network-vif-deleted-c775a9ab-4177-4c1a-880a-35b1b701d488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:18 compute-1 nova_compute[226101]: 2025-12-06 07:53:18.298 226109 INFO nova.compute.manager [req-d5822642-fec7-40d3-99fe-2085436dedaa req-e6ad4df4-c7bb-453a-be10-604854489351 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Neutron deleted interface c775a9ab-4177-4c1a-880a-35b1b701d488; detaching it from the instance and deleting it from the info cache
Dec 06 07:53:18 compute-1 nova_compute[226101]: 2025-12-06 07:53:18.298 226109 DEBUG nova.network.neutron [req-d5822642-fec7-40d3-99fe-2085436dedaa req-e6ad4df4-c7bb-453a-be10-604854489351 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:18 compute-1 ceph-mon[81689]: pgmap v2995: 305 pgs: 305 active+clean; 612 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 515 KiB/s wr, 113 op/s
Dec 06 07:53:18 compute-1 nova_compute[226101]: 2025-12-06 07:53:18.380 226109 INFO nova.compute.manager [-] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Took 1.94 seconds to deallocate network for instance.
Dec 06 07:53:18 compute-1 nova_compute[226101]: 2025-12-06 07:53:18.433 226109 DEBUG nova.compute.manager [req-d5822642-fec7-40d3-99fe-2085436dedaa req-e6ad4df4-c7bb-453a-be10-604854489351 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Detach interface failed, port_id=c775a9ab-4177-4c1a-880a-35b1b701d488, reason: Instance e7d5d854-2a1f-485b-931a-4ec90cf7ba04 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 06 07:53:18 compute-1 nova_compute[226101]: 2025-12-06 07:53:18.659 226109 DEBUG oslo_concurrency.lockutils [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:18 compute-1 nova_compute[226101]: 2025-12-06 07:53:18.660 226109 DEBUG oslo_concurrency.lockutils [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:18 compute-1 sshd-session[291717]: Received disconnect from 154.219.116.39 port 44972:11: Bye Bye [preauth]
Dec 06 07:53:18 compute-1 sshd-session[291717]: Disconnected from authenticating user root 154.219.116.39 port 44972 [preauth]
Dec 06 07:53:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e392 e392: 3 total, 3 up, 3 in
Dec 06 07:53:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:18.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:18 compute-1 nova_compute[226101]: 2025-12-06 07:53:18.776 226109 DEBUG oslo_concurrency.processutils [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.149 226109 DEBUG nova.compute.manager [req-da5984e8-d292-4b86-9d80-251c179b28fd req-6f271f07-e33e-4308-b27b-9a7d149a9f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Received event network-vif-plugged-c775a9ab-4177-4c1a-880a-35b1b701d488 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.149 226109 DEBUG oslo_concurrency.lockutils [req-da5984e8-d292-4b86-9d80-251c179b28fd req-6f271f07-e33e-4308-b27b-9a7d149a9f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.150 226109 DEBUG oslo_concurrency.lockutils [req-da5984e8-d292-4b86-9d80-251c179b28fd req-6f271f07-e33e-4308-b27b-9a7d149a9f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.150 226109 DEBUG oslo_concurrency.lockutils [req-da5984e8-d292-4b86-9d80-251c179b28fd req-6f271f07-e33e-4308-b27b-9a7d149a9f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.150 226109 DEBUG nova.compute.manager [req-da5984e8-d292-4b86-9d80-251c179b28fd req-6f271f07-e33e-4308-b27b-9a7d149a9f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] No waiting events found dispatching network-vif-plugged-c775a9ab-4177-4c1a-880a-35b1b701d488 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.150 226109 WARNING nova.compute.manager [req-da5984e8-d292-4b86-9d80-251c179b28fd req-6f271f07-e33e-4308-b27b-9a7d149a9f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Received unexpected event network-vif-plugged-c775a9ab-4177-4c1a-880a-35b1b701d488 for instance with vm_state deleted and task_state None.
Dec 06 07:53:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:53:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3201494518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.236 226109 DEBUG oslo_concurrency.processutils [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.243 226109 DEBUG nova.compute.provider_tree [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.316 226109 DEBUG nova.scheduler.client.report [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.320 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.381 226109 DEBUG oslo_concurrency.lockutils [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.406 226109 INFO nova.scheduler.client.report [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Deleted allocations for instance e7d5d854-2a1f-485b-931a-4ec90cf7ba04
Dec 06 07:53:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:19.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.498 226109 DEBUG oslo_concurrency.lockutils [None req-8cd1d140-6e86-4c02-9533-84bf5a0cad7a 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "e7d5d854-2a1f-485b-931a-4ec90cf7ba04" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:19 compute-1 nova_compute[226101]: 2025-12-06 07:53:19.558 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:19 compute-1 ceph-mon[81689]: osdmap e392: 3 total, 3 up, 3 in
Dec 06 07:53:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3201494518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:20 compute-1 nova_compute[226101]: 2025-12-06 07:53:20.391 226109 DEBUG oslo_concurrency.lockutils [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:20 compute-1 nova_compute[226101]: 2025-12-06 07:53:20.392 226109 DEBUG oslo_concurrency.lockutils [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:20 compute-1 nova_compute[226101]: 2025-12-06 07:53:20.392 226109 INFO nova.compute.manager [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Rebooting instance
Dec 06 07:53:20 compute-1 nova_compute[226101]: 2025-12-06 07:53:20.462 226109 DEBUG oslo_concurrency.lockutils [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:53:20 compute-1 nova_compute[226101]: 2025-12-06 07:53:20.462 226109 DEBUG oslo_concurrency.lockutils [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquired lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:53:20 compute-1 nova_compute[226101]: 2025-12-06 07:53:20.463 226109 DEBUG nova.network.neutron [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:53:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:20.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:20 compute-1 ceph-mon[81689]: pgmap v2997: 305 pgs: 305 active+clean; 605 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 861 KiB/s rd, 1.7 MiB/s wr, 139 op/s
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.222 226109 DEBUG nova.compute.manager [req-552e54fe-36b3-4451-925e-535fd820e924 req-f50425e2-7b79-43ea-bfbf-6d6eddb1ca73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.223 226109 DEBUG nova.compute.manager [req-552e54fe-36b3-4451-925e-535fd820e924 req-f50425e2-7b79-43ea-bfbf-6d6eddb1ca73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing instance network info cache due to event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.223 226109 DEBUG oslo_concurrency.lockutils [req-552e54fe-36b3-4451-925e-535fd820e924 req-f50425e2-7b79-43ea-bfbf-6d6eddb1ca73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:53:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:21.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.649 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.649 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.649 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.650 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.650 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
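[annotation] The resource audit shells out to `ceph df` through oslo.concurrency's processutils, which is what produces the paired "Running cmd" / "returned: 0 in 0.466s" lines. A minimal reproduction of that call; it assumes a reachable cluster and the client.openstack keyring, as on this host:

    import json
    from oslo_concurrency import processutils

    # Same command the resource tracker runs; returns (stdout, stderr)
    # and raises ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"])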
Dec 06 07:53:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1779789501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1779789501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.856 226109 DEBUG nova.network.neutron [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updating instance_info_cache with network_info: [{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
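[annotation] The network_info blob above is the instance's cached view of its single OVN port. Pulling the addresses out of it is plain dict traversal; a trimmed sketch over the same shape, keeping only the keys shown in the log:

    network_info = [{
        "id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6",
        "address": "fa:16:3e:bf:63:bb",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.10", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.203"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", fips)
    # 5a1f8827-... 10.100.0.10 -> ['192.168.122.203']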
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.891 226109 DEBUG oslo_concurrency.lockutils [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Releasing lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.892 226109 DEBUG nova.compute.manager [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.893 226109 DEBUG oslo_concurrency.lockutils [req-552e54fe-36b3-4451-925e-535fd820e924 req-f50425e2-7b79-43ea-bfbf-6d6eddb1ca73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:53:21 compute-1 nova_compute[226101]: 2025-12-06 07:53:21.893 226109 DEBUG nova.network.neutron [req-552e54fe-36b3-4451-925e-535fd820e924 req-f50425e2-7b79-43ea-bfbf-6d6eddb1ca73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:53:22 compute-1 podman[291761]: 2025-12-06 07:53:22.079856948 +0000 UTC m=+0.056593173 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:53:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:53:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1020941421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:22 compute-1 podman[291762]: 2025-12-06 07:53:22.104979055 +0000 UTC m=+0.077997651 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.116 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:22 compute-1 podman[291763]: 2025-12-06 07:53:22.141157598 +0000 UTC m=+0.110371241 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.203 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000ab as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.203 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000ab as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.204 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000ab as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.207 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.207 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.388 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.389 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3934MB free_disk=20.739639282226562GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
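[annotation] The hypervisor view enumerates eleven PCI functions, none whitelisted for passthrough (pci_stats=[] in the final view below). Grouping the blob by vendor makes it easier to read; a throwaway sketch over a trimmed copy of the logged entries:

    from collections import Counter

    pci_devices = [
        {"address": "0000:00:01.3", "vendor_id": "8086"},
        {"address": "0000:00:00.0", "vendor_id": "8086"},
        {"address": "0000:00:06.0", "vendor_id": "1af4"},
        # ...remaining entries from the log line above...
    ]

    # 8086 = Intel chipset functions, 1af4 = virtio devices on this KVM guest.
    print(Counter(d["vendor_id"] for d in pci_devices))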
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.389 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.390 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.533 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 0852932a-3266-432f-9975-8870535aff4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.533 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.533 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.534 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
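[annotation] The final view is arithmetically consistent with the two allocations logged just above, each holding {DISK_GB: 1, MEMORY_MB: 128, VCPU: 1}; the 768 MB used_ram figure falls out if the 512 MB host reservation is counted in, while used_disk covers only the instance disks. A quick cross-check with the numbers copied from the log:

    allocations = [{"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1}] * 2
    reserved_ram_mb = 512  # reserved value from the inventory line

    used_vcpus = sum(a["VCPU"] for a in allocations)                         # 2
    used_ram = reserved_ram_mb + sum(a["MEMORY_MB"] for a in allocations)    # 768
    used_disk = sum(a["DISK_GB"] for a in allocations)                       # 2
    print(used_vcpus, used_ram, used_disk)  # matches the "Final resource view"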
Dec 06 07:53:22 compute-1 nova_compute[226101]: 2025-12-06 07:53:22.603 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:22.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:22 compute-1 ceph-mon[81689]: pgmap v2998: 305 pgs: 305 active+clean; 592 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 915 KiB/s rd, 2.6 MiB/s wr, 157 op/s
Dec 06 07:53:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1020941421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:53:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1210000473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:23 compute-1 nova_compute[226101]: 2025-12-06 07:53:23.053 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:23 compute-1 nova_compute[226101]: 2025-12-06 07:53:23.058 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:53:23 compute-1 nova_compute[226101]: 2025-12-06 07:53:23.075 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:53:23 compute-1 nova_compute[226101]: 2025-12-06 07:53:23.109 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:53:23 compute-1 nova_compute[226101]: 2025-12-06 07:53:23.109 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:23.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1210000473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.304 226109 DEBUG nova.network.neutron [req-552e54fe-36b3-4451-925e-535fd820e924 req-f50425e2-7b79-43ea-bfbf-6d6eddb1ca73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updated VIF entry in instance network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.305 226109 DEBUG nova.network.neutron [req-552e54fe-36b3-4451-925e-535fd820e924 req-f50425e2-7b79-43ea-bfbf-6d6eddb1ca73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updating instance_info_cache with network_info: [{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.320 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.323 226109 DEBUG oslo_concurrency.lockutils [req-552e54fe-36b3-4451-925e-535fd820e924 req-f50425e2-7b79-43ea-bfbf-6d6eddb1ca73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:53:24 compute-1 kernel: tap5a1f8827-ee (unregistering): left promiscuous mode
Dec 06 07:53:24 compute-1 NetworkManager[49031]: <info>  [1765007604.5444] device (tap5a1f8827-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.548 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:24 compute-1 ovn_controller[130279]: 2025-12-06T07:53:24Z|00670|binding|INFO|Releasing lport 5a1f8827-eedc-46b4-859d-37533bd8cdb6 from this chassis (sb_readonly=0)
Dec 06 07:53:24 compute-1 ovn_controller[130279]: 2025-12-06T07:53:24Z|00671|binding|INFO|Setting lport 5a1f8827-eedc-46b4-859d-37533bd8cdb6 down in Southbound
Dec 06 07:53:24 compute-1 ovn_controller[130279]: 2025-12-06T07:53:24Z|00672|binding|INFO|Removing iface tap5a1f8827-ee ovn-installed in OVS
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.551 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:24.556 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:63:bb 10.100.0.10'], port_security=['fa:16:3e:bf:63:bb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4d2e7124-8b09-4fd5-bc94-c2c9a3d964af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f08afae8-f952-4a01-a643-61a4dc212937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c8fc5bc237e42bfad505a0bca6681eb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5ab362db-0222-4797-8028-08029e0f339a 97e2366f-5968-4702-838b-5830aacf120f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9bb6731-402a-4b02-b4cf-9ac913838fd2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=5a1f8827-eedc-46b4-859d-37533bd8cdb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
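[annotation] The "Matched UPDATE: PortBindingUpdatedEvent(...)" line is ovsdbapp's row-event mechanism: the agent registers an event described by (events, table, conditions), the IDL calls matches() for every committed row change, and a hit is dispatched to run(). A skeletal version of such an event class, modeled on the constructor arguments printed in the log; the handler wiring is illustrative, not neutron's actual class:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, handler):
            # Same shape the log prints: events=('update',),
            # table='Port_Binding', no row conditions.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
            self.handler = handler

        def run(self, event, row, old):
            # 'old' carries the previous values of the changed columns,
            # e.g. old.up / old.chassis in the matched row above.
            self.handler(row.logical_port, getattr(old, "chassis", None))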
Dec 06 07:53:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:24.558 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 5a1f8827-eedc-46b4-859d-37533bd8cdb6 in datapath f08afae8-f952-4a01-a643-61a4dc212937 unbound from our chassis
Dec 06 07:53:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:24.561 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f08afae8-f952-4a01-a643-61a4dc212937, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.559 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:24.562 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[08bd1686-97d3-4285-9ef8-d3b48833b821]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:24.563 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 namespace which is not needed anymore
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.567 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:24 compute-1 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000ab.scope: Deactivated successfully.
Dec 06 07:53:24 compute-1 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000ab.scope: Consumed 14.679s CPU time.
Dec 06 07:53:24 compute-1 systemd-machined[190302]: Machine qemu-78-instance-000000ab terminated.
Dec 06 07:53:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:24.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.769 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.774 226109 DEBUG nova.compute.manager [req-d4611f0c-8208-4975-9397-8cd7fe7350f1 req-69976742-ad05-4f1f-8205-c76d22f6a6fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-vif-unplugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.775 226109 DEBUG oslo_concurrency.lockutils [req-d4611f0c-8208-4975-9397-8cd7fe7350f1 req-69976742-ad05-4f1f-8205-c76d22f6a6fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.775 226109 DEBUG oslo_concurrency.lockutils [req-d4611f0c-8208-4975-9397-8cd7fe7350f1 req-69976742-ad05-4f1f-8205-c76d22f6a6fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.775 226109 DEBUG oslo_concurrency.lockutils [req-d4611f0c-8208-4975-9397-8cd7fe7350f1 req-69976742-ad05-4f1f-8205-c76d22f6a6fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.775 226109 DEBUG nova.compute.manager [req-d4611f0c-8208-4975-9397-8cd7fe7350f1 req-69976742-ad05-4f1f-8205-c76d22f6a6fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] No waiting events found dispatching network-vif-unplugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.775 226109 WARNING nova.compute.manager [req-d4611f0c-8208-4975-9397-8cd7fe7350f1 req-69976742-ad05-4f1f-8205-c76d22f6a6fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received unexpected event network-vif-unplugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 for instance with vm_state active and task_state reboot_started.
Dec 06 07:53:24 compute-1 nova_compute[226101]: 2025-12-06 07:53:24.776 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:25 compute-1 nova_compute[226101]: 2025-12-06 07:53:25.010 226109 INFO nova.virt.libvirt.driver [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Instance shutdown successfully.
Dec 06 07:53:25 compute-1 kernel: tap5a1f8827-ee: entered promiscuous mode
Dec 06 07:53:25 compute-1 systemd-udevd[291853]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:53:25 compute-1 nova_compute[226101]: 2025-12-06 07:53:25.063 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:25 compute-1 NetworkManager[49031]: <info>  [1765007605.0637] manager: (tap5a1f8827-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/304)
Dec 06 07:53:25 compute-1 ovn_controller[130279]: 2025-12-06T07:53:25Z|00673|binding|INFO|Claiming lport 5a1f8827-eedc-46b4-859d-37533bd8cdb6 for this chassis.
Dec 06 07:53:25 compute-1 ovn_controller[130279]: 2025-12-06T07:53:25Z|00674|binding|INFO|5a1f8827-eedc-46b4-859d-37533bd8cdb6: Claiming fa:16:3e:bf:63:bb 10.100.0.10
Dec 06 07:53:25 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:25.069 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:63:bb 10.100.0.10'], port_security=['fa:16:3e:bf:63:bb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4d2e7124-8b09-4fd5-bc94-c2c9a3d964af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f08afae8-f952-4a01-a643-61a4dc212937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c8fc5bc237e42bfad505a0bca6681eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5ab362db-0222-4797-8028-08029e0f339a 97e2366f-5968-4702-838b-5830aacf120f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9bb6731-402a-4b02-b4cf-9ac913838fd2, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=5a1f8827-eedc-46b4-859d-37533bd8cdb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:53:25 compute-1 NetworkManager[49031]: <info>  [1765007605.0730] device (tap5a1f8827-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:53:25 compute-1 NetworkManager[49031]: <info>  [1765007605.0740] device (tap5a1f8827-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:53:25 compute-1 ovn_controller[130279]: 2025-12-06T07:53:25Z|00675|binding|INFO|Setting lport 5a1f8827-eedc-46b4-859d-37533bd8cdb6 ovn-installed in OVS
Dec 06 07:53:25 compute-1 ovn_controller[130279]: 2025-12-06T07:53:25Z|00676|binding|INFO|Setting lport 5a1f8827-eedc-46b4-859d-37533bd8cdb6 up in Southbound
Dec 06 07:53:25 compute-1 nova_compute[226101]: 2025-12-06 07:53:25.081 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:25 compute-1 nova_compute[226101]: 2025-12-06 07:53:25.084 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:25 compute-1 systemd-machined[190302]: New machine qemu-79-instance-000000ab.
Dec 06 07:53:25 compute-1 systemd[1]: Started Virtual Machine qemu-79-instance-000000ab.
Dec 06 07:53:25 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[291509]: [NOTICE]   (291514) : haproxy version is 2.8.14-c23fe91
Dec 06 07:53:25 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[291509]: [NOTICE]   (291514) : path to executable is /usr/sbin/haproxy
Dec 06 07:53:25 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[291509]: [WARNING]  (291514) : Exiting Master process...
Dec 06 07:53:25 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[291509]: [ALERT]    (291514) : Current worker (291520) exited with code 143 (Terminated)
Dec 06 07:53:25 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[291509]: [WARNING]  (291514) : All workers exited. Exiting... (0)
Dec 06 07:53:25 compute-1 systemd[1]: libpod-a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7.scope: Deactivated successfully.
Dec 06 07:53:25 compute-1 podman[291873]: 2025-12-06 07:53:25.213133229 +0000 UTC m=+0.558113473 container died a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:53:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:25.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:25 compute-1 ceph-mon[81689]: pgmap v2999: 305 pgs: 305 active+clean; 592 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 915 KiB/s rd, 2.6 MiB/s wr, 157 op/s
Dec 06 07:53:26 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7-userdata-shm.mount: Deactivated successfully.
Dec 06 07:53:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-4273cfadb457ab7a4a7d0cb5dde4b40a75cf0fcbfc21793e7997988f1fb92477-merged.mount: Deactivated successfully.
Dec 06 07:53:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:26.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:26 compute-1 ovn_controller[130279]: 2025-12-06T07:53:26Z|00677|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec 06 07:53:26 compute-1 ovn_controller[130279]: 2025-12-06T07:53:26Z|00678|binding|INFO|Releasing lport 684342c7-1709-4776-be04-f6d5a6b0b0ae from this chassis (sb_readonly=0)
Dec 06 07:53:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/199665366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:26 compute-1 ceph-mon[81689]: pgmap v3000: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 520 KiB/s rd, 2.6 MiB/s wr, 137 op/s
Dec 06 07:53:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2736518958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.910 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:26 compute-1 podman[291873]: 2025-12-06 07:53:26.911988872 +0000 UTC m=+2.256969126 container cleanup a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:53:26 compute-1 systemd[1]: libpod-conmon-a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7.scope: Deactivated successfully.
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.942 226109 DEBUG nova.compute.manager [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.943 226109 DEBUG oslo_concurrency.lockutils [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.943 226109 DEBUG oslo_concurrency.lockutils [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.943 226109 DEBUG oslo_concurrency.lockutils [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.943 226109 DEBUG nova.compute.manager [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] No waiting events found dispatching network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.943 226109 WARNING nova.compute.manager [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received unexpected event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 for instance with vm_state active and task_state reboot_started.
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.944 226109 DEBUG nova.compute.manager [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.944 226109 DEBUG oslo_concurrency.lockutils [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.944 226109 DEBUG oslo_concurrency.lockutils [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.944 226109 DEBUG oslo_concurrency.lockutils [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.944 226109 DEBUG nova.compute.manager [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] No waiting events found dispatching network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.945 226109 WARNING nova.compute.manager [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received unexpected event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 for instance with vm_state active and task_state reboot_started.
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.945 226109 DEBUG nova.compute.manager [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.945 226109 DEBUG oslo_concurrency.lockutils [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.945 226109 DEBUG oslo_concurrency.lockutils [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.946 226109 DEBUG oslo_concurrency.lockutils [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.946 226109 DEBUG nova.compute.manager [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] No waiting events found dispatching network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.946 226109 WARNING nova.compute.manager [req-64c5a61b-82de-43c2-afb6-82e7093e892e req-762c945e-c2f1-41c5-b0c1-0c6223e51ee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received unexpected event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 for instance with vm_state active and task_state reboot_started.
Dec 06 07:53:26 compute-1 podman[291964]: 2025-12-06 07:53:26.986685763 +0000 UTC m=+0.051346523 container remove a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 07:53:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:26.992 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c82873-6308-49d0-9f2c-f612cfddef15]: (4, ('Sat Dec  6 07:53:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 (a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7)\na72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7\nSat Dec  6 07:53:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 (a72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7)\na72342461ea7f93dcd4f0f98b1725c7af6e295580c3cc326860a845b8137f8e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:26.993 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[77c352d2-de91-4064-ade4-9aee58c1bcc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:26.994 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf08afae8-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:26 compute-1 nova_compute[226101]: 2025-12-06 07:53:26.996 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:26 compute-1 kernel: tapf08afae8-f0: left promiscuous mode
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.009 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.013 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1ec84c-97b0-4300-8687-3deea80f7c7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.025 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b51f038c-199e-45d8-b9e6-7b5629afde9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.027 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2c56b28a-8bc4-4f83-8a29-62cdcca73c00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.040 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d8c708-fbf1-469d-be7a-c192e15a7f37]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784616, 'reachable_time': 43603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292006, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 systemd[1]: run-netns-ovnmeta\x2df08afae8\x2df952\x2d4a01\x2da643\x2d61a4dc212937.mount: Deactivated successfully.
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.044 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.044 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[16fcd170-04ce-4e38-bcbd-bbff3884e9e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.045 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 5a1f8827-eedc-46b4-859d-37533bd8cdb6 in datapath f08afae8-f952-4a01-a643-61a4dc212937 unbound from our chassis
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.046 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f08afae8-f952-4a01-a643-61a4dc212937
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.056 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6609887e-f305-49f0-8bc4-4133657dc302]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.056 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf08afae8-f1 in ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.058 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf08afae8-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.058 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7af764-b724-40bb-8cd2-4d932d9be3c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.058 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[51c0b3ff-304f-499d-8c58-827de2ed845c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.068 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[2990ae76-6344-46d1-ae14-a9ac385828ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.090 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5da8ae52-c425-4735-97b3-4e4ef2e2b3ff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.095 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.095 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007607.0953925, 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.096 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] VM Resumed (Lifecycle Event)
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.100 226109 INFO nova.virt.libvirt.driver [-] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Instance running successfully.
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.101 226109 INFO nova.virt.libvirt.driver [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Instance soft rebooted successfully.
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.101 226109 DEBUG nova.compute.manager [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.121 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7abbc516-8e3a-43fa-bd6c-a25f0e34f150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 NetworkManager[49031]: <info>  [1765007607.1266] manager: (tapf08afae8-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/305)
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.127 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a2235635-8597-49a2-8a8b-f839e9937819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.128 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.132 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.154 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] During sync_power_state the instance has a pending task (reboot_started). Skip.
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.154 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007607.0972931, 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.154 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] VM Started (Lifecycle Event)
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.158 226109 DEBUG oslo_concurrency.lockutils [None req-abba17ea-f7c4-4969-9a3c-9cd0e7bbdc23 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.158 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[208a3855-8390-477e-aa9e-f744ac6270e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.161 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8564f67e-d959-4542-bb76-e7b898abcbb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.174 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.177 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:53:27 compute-1 NetworkManager[49031]: <info>  [1765007607.1825] device (tapf08afae8-f0): carrier: link connected
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.186 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[306ea2ee-7fbc-4a7e-a5d0-0d9d55ec434b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.202 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[47a9875e-a3b2-40b4-865e-1baa424bec6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf08afae8-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:11:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788668, 'reachable_time': 16549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292031, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.213 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[669f739f-acf3-401c-bce5-ed76ead21bd3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:1118'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 788668, 'tstamp': 788668}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292032, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.227 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3cc1ad-f23c-4723-9912-95353d36a075]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf08afae8-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:11:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788668, 'reachable_time': 16549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292033, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.249 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2b470eab-4e63-4def-a43a-21ae4d561a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.291 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1950c7-961f-4f63-b2f5-c682f198bdaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.292 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf08afae8-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.293 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.293 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf08afae8-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:27 compute-1 NetworkManager[49031]: <info>  [1765007607.2955] manager: (tapf08afae8-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.295 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:27 compute-1 kernel: tapf08afae8-f0: entered promiscuous mode
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.298 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf08afae8-f0, col_values=(('external_ids', {'iface-id': '684342c7-1709-4776-be04-f6d5a6b0b0ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:27 compute-1 ovn_controller[130279]: 2025-12-06T07:53:27Z|00679|binding|INFO|Releasing lport 684342c7-1709-4776-be04-f6d5a6b0b0ae from this chassis (sb_readonly=0)
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.300 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:27 compute-1 nova_compute[226101]: 2025-12-06 07:53:27.315 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.315 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.317 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[90f7cbc6-bcbc-438c-b703-5297e6462ca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.318 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-f08afae8-f952-4a01-a643-61a4dc212937
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/f08afae8-f952-4a01-a643-61a4dc212937.pid.haproxy
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID f08afae8-f952-4a01-a643-61a4dc212937
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:53:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:27.320 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'env', 'PROCESS_TAG=haproxy-f08afae8-f952-4a01-a643-61a4dc212937', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f08afae8-f952-4a01-a643-61a4dc212937.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:53:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:27.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:27 compute-1 podman[292065]: 2025-12-06 07:53:27.628201769 +0000 UTC m=+0.018667233 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:53:27 compute-1 podman[292065]: 2025-12-06 07:53:27.983237134 +0000 UTC m=+0.373702568 container create c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:53:28 compute-1 systemd[1]: Started libpod-conmon-c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b.scope.
Dec 06 07:53:28 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:53:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e94429c28d80103ca895d10b95325b1bb18b58f9ae1c814aa78c835c14e280d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:28 compute-1 nova_compute[226101]: 2025-12-06 07:53:28.109 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:28 compute-1 nova_compute[226101]: 2025-12-06 07:53:28.110 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:53:28 compute-1 nova_compute[226101]: 2025-12-06 07:53:28.111 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:53:28 compute-1 nova_compute[226101]: 2025-12-06 07:53:28.623 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:53:28 compute-1 nova_compute[226101]: 2025-12-06 07:53:28.624 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:53:28 compute-1 nova_compute[226101]: 2025-12-06 07:53:28.625 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:53:28 compute-1 nova_compute[226101]: 2025-12-06 07:53:28.625 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:53:28 compute-1 podman[292065]: 2025-12-06 07:53:28.656923457 +0000 UTC m=+1.047388971 container init c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:53:28 compute-1 podman[292065]: 2025-12-06 07:53:28.662934909 +0000 UTC m=+1.053400383 container start c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:53:28 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[292080]: [NOTICE]   (292084) : New worker (292086) forked
Dec 06 07:53:28 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[292080]: [NOTICE]   (292084) : Loading success.
Dec 06 07:53:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:28.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:29 compute-1 ceph-mon[81689]: pgmap v3001: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 433 KiB/s rd, 2.2 MiB/s wr, 128 op/s
Dec 06 07:53:29 compute-1 nova_compute[226101]: 2025-12-06 07:53:29.321 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:29 compute-1 nova_compute[226101]: 2025-12-06 07:53:29.385 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007594.3843095, e7d5d854-2a1f-485b-931a-4ec90cf7ba04 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:53:29 compute-1 nova_compute[226101]: 2025-12-06 07:53:29.385 226109 INFO nova.compute.manager [-] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] VM Stopped (Lifecycle Event)
Dec 06 07:53:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:29.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:29 compute-1 nova_compute[226101]: 2025-12-06 07:53:29.510 226109 DEBUG nova.compute.manager [None req-1021f01f-efd9-4ad7-849f-12e79f59f6f1 - - - - - -] [instance: e7d5d854-2a1f-485b-931a-4ec90cf7ba04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:29 compute-1 nova_compute[226101]: 2025-12-06 07:53:29.561 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:30 compute-1 ceph-mon[81689]: pgmap v3002: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 842 KiB/s rd, 963 KiB/s wr, 84 op/s
Dec 06 07:53:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:30.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:31.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:32 compute-1 ceph-mon[81689]: pgmap v3003: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 843 KiB/s wr, 125 op/s
Dec 06 07:53:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:32.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:32 compute-1 nova_compute[226101]: 2025-12-06 07:53:32.874 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updating instance_info_cache with network_info: [{"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:32 compute-1 nova_compute[226101]: 2025-12-06 07:53:32.896 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-0852932a-3266-432f-9975-8870535aff4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:53:32 compute-1 nova_compute[226101]: 2025-12-06 07:53:32.897 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:53:32 compute-1 nova_compute[226101]: 2025-12-06 07:53:32.897 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:32 compute-1 nova_compute[226101]: 2025-12-06 07:53:32.897 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:32 compute-1 nova_compute[226101]: 2025-12-06 07:53:32.898 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:33.396679) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007613396926, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1928, "num_deletes": 258, "total_data_size": 4335325, "memory_usage": 4404272, "flush_reason": "Manual Compaction"}
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Dec 06 07:53:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:33.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007613761549, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 2846580, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62621, "largest_seqno": 64544, "table_properties": {"data_size": 2838345, "index_size": 4984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17926, "raw_average_key_size": 21, "raw_value_size": 2821728, "raw_average_value_size": 3311, "num_data_blocks": 216, "num_entries": 852, "num_filter_entries": 852, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007463, "oldest_key_time": 1765007463, "file_creation_time": 1765007613, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 365342 microseconds, and 5954 cpu microseconds.
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:33.762029) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 2846580 bytes OK
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:33.762121) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:33.984548) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:33.984600) EVENT_LOG_v1 {"time_micros": 1765007613984582, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:33.984620) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 4326467, prev total WAL file size 4372613, number of live WAL files 2.
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:33.986483) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(2779KB)], [126(9953KB)]
Dec 06 07:53:33 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007613986513, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 13039100, "oldest_snapshot_seqno": -1}
Dec 06 07:53:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:34 compute-1 nova_compute[226101]: 2025-12-06 07:53:34.323 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 9361 keys, 11088272 bytes, temperature: kUnknown
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007614498842, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 11088272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11028736, "index_size": 35039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 247573, "raw_average_key_size": 26, "raw_value_size": 10865019, "raw_average_value_size": 1160, "num_data_blocks": 1329, "num_entries": 9361, "num_filter_entries": 9361, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765007613, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:53:34 compute-1 nova_compute[226101]: 2025-12-06 07:53:34.564 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:34.628825) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11088272 bytes
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:34.631519) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.4 rd, 21.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.7 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(8.5) write-amplify(3.9) OK, records in: 9892, records dropped: 531 output_compression: NoCompression
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:34.631550) EVENT_LOG_v1 {"time_micros": 1765007614631537, "job": 80, "event": "compaction_finished", "compaction_time_micros": 512447, "compaction_time_cpu_micros": 29169, "output_level": 6, "num_output_files": 1, "total_output_size": 11088272, "num_input_records": 9892, "num_output_records": 9361, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007614632585, "job": 80, "event": "table_file_deletion", "file_number": 128}
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007614635957, "job": 80, "event": "table_file_deletion", "file_number": 126}
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:33.986335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:34.636099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:34.636104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:34.636105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:34.636106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:53:34.636108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:34.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:35 compute-1 ceph-mon[81689]: pgmap v3004: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 36 KiB/s wr, 90 op/s
Dec 06 07:53:35 compute-1 nova_compute[226101]: 2025-12-06 07:53:35.091 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:35.091 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:53:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:35.093 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:53:35 compute-1 ovn_controller[130279]: 2025-12-06T07:53:35Z|00680|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec 06 07:53:35 compute-1 ovn_controller[130279]: 2025-12-06T07:53:35Z|00681|binding|INFO|Releasing lport 684342c7-1709-4776-be04-f6d5a6b0b0ae from this chassis (sb_readonly=0)
Dec 06 07:53:35 compute-1 nova_compute[226101]: 2025-12-06 07:53:35.318 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:35.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:36 compute-1 nova_compute[226101]: 2025-12-06 07:53:36.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:36 compute-1 nova_compute[226101]: 2025-12-06 07:53:36.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:36.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3540705987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:37.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:37 compute-1 nova_compute[226101]: 2025-12-06 07:53:37.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:37 compute-1 sudo[292095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:37 compute-1 sudo[292095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:37 compute-1 sudo[292095]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:38 compute-1 sudo[292120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:53:38 compute-1 sudo[292120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:38 compute-1 sudo[292120]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:38 compute-1 sudo[292145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:38 compute-1 sudo[292145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:38 compute-1 sudo[292145]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:38 compute-1 sudo[292170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:53:38 compute-1 sudo[292170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:38 compute-1 sudo[292170]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:38.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:39.094 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:39 compute-1 nova_compute[226101]: 2025-12-06 07:53:39.325 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:39.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:39 compute-1 nova_compute[226101]: 2025-12-06 07:53:39.566 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:39 compute-1 ceph-mon[81689]: pgmap v3005: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 44 KiB/s wr, 91 op/s
Dec 06 07:53:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3684386716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:40.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:40 compute-1 ceph-mon[81689]: pgmap v3006: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 75 op/s
Dec 06 07:53:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:53:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:53:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:41.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:41 compute-1 ovn_controller[130279]: 2025-12-06T07:53:41Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:63:bb 10.100.0.10
Dec 06 07:53:41 compute-1 ceph-mon[81689]: pgmap v3007: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.7 KiB/s wr, 72 op/s
Dec 06 07:53:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:42 compute-1 nova_compute[226101]: 2025-12-06 07:53:42.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:42.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:43 compute-1 ceph-mon[81689]: pgmap v3008: 305 pgs: 305 active+clean; 547 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 76 op/s
Dec 06 07:53:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:43.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:43 compute-1 nova_compute[226101]: 2025-12-06 07:53:43.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/194808883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:53:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:53:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:53:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:53:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:53:44 compute-1 nova_compute[226101]: 2025-12-06 07:53:44.369 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:44 compute-1 sshd-session[292225]: Received disconnect from 136.112.8.45 port 50054:11: Bye Bye [preauth]
Dec 06 07:53:44 compute-1 sshd-session[292225]: Disconnected from authenticating user root 136.112.8.45 port 50054 [preauth]
Dec 06 07:53:44 compute-1 nova_compute[226101]: 2025-12-06 07:53:44.567 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:44.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:45 compute-1 ceph-mon[81689]: pgmap v3009: 305 pgs: 305 active+clean; 547 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 111 KiB/s rd, 20 KiB/s wr, 24 op/s
Dec 06 07:53:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:45.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e393 e393: 3 total, 3 up, 3 in
Dec 06 07:53:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:46.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:47 compute-1 ceph-mon[81689]: pgmap v3010: 305 pgs: 305 active+clean; 508 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 484 KiB/s rd, 40 KiB/s wr, 73 op/s
Dec 06 07:53:47 compute-1 ceph-mon[81689]: osdmap e393: 3 total, 3 up, 3 in
Dec 06 07:53:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:47.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/610152004' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/610152004' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e394 e394: 3 total, 3 up, 3 in
Dec 06 07:53:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:53:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/65824309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:53:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/65824309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:48.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:49 compute-1 ceph-mon[81689]: pgmap v3012: 305 pgs: 305 active+clean; 508 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 39 KiB/s wr, 91 op/s
Dec 06 07:53:49 compute-1 ceph-mon[81689]: osdmap e394: 3 total, 3 up, 3 in
Dec 06 07:53:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/65824309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/65824309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e395 e395: 3 total, 3 up, 3 in
Dec 06 07:53:49 compute-1 nova_compute[226101]: 2025-12-06 07:53:49.449 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:49.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:49 compute-1 sudo[292227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:49 compute-1 sudo[292227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:49 compute-1 sudo[292227]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:49 compute-1 sudo[292252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:53:49 compute-1 sudo[292252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:49 compute-1 sudo[292252]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:49 compute-1 nova_compute[226101]: 2025-12-06 07:53:49.568 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:50 compute-1 ceph-mon[81689]: osdmap e395: 3 total, 3 up, 3 in
Dec 06 07:53:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:50 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:50 compute-1 nova_compute[226101]: 2025-12-06 07:53:50.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:50.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:51 compute-1 ceph-mon[81689]: pgmap v3015: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 529 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.7 MiB/s wr, 164 op/s
Dec 06 07:53:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:51.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:52.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:53 compute-1 podman[292278]: 2025-12-06 07:53:53.067726926 +0000 UTC m=+0.051493329 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:53:53 compute-1 podman[292277]: 2025-12-06 07:53:53.074853767 +0000 UTC m=+0.058617820 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible)
Dec 06 07:53:53 compute-1 podman[292279]: 2025-12-06 07:53:53.134400962 +0000 UTC m=+0.114138707 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:53:53 compute-1 ceph-mon[81689]: pgmap v3016: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 541 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 MiB/s rd, 7.8 MiB/s wr, 230 op/s
Dec 06 07:53:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:53.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:54 compute-1 nova_compute[226101]: 2025-12-06 07:53:54.450 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:54 compute-1 nova_compute[226101]: 2025-12-06 07:53:54.570 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:54.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:55 compute-1 ceph-mon[81689]: pgmap v3017: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 541 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.5 MiB/s wr, 193 op/s
Dec 06 07:53:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/433557300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:55.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.612 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.613 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.674 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:53:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.720 226109 DEBUG oslo_concurrency.lockutils [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.720 226109 DEBUG oslo_concurrency.lockutils [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.721 226109 DEBUG oslo_concurrency.lockutils [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.721 226109 DEBUG oslo_concurrency.lockutils [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.721 226109 DEBUG oslo_concurrency.lockutils [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.722 226109 INFO nova.compute.manager [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Terminating instance
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.723 226109 DEBUG nova.compute.manager [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:53:55 compute-1 kernel: tap680d7c20-81 (unregistering): left promiscuous mode
Dec 06 07:53:55 compute-1 NetworkManager[49031]: <info>  [1765007635.7715] device (tap680d7c20-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:53:55 compute-1 ovn_controller[130279]: 2025-12-06T07:53:55Z|00682|binding|INFO|Releasing lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 from this chassis (sb_readonly=0)
Dec 06 07:53:55 compute-1 ovn_controller[130279]: 2025-12-06T07:53:55Z|00683|binding|INFO|Setting lport 680d7c20-81e0-48d0-954c-9fbcda7e7615 down in Southbound
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:55 compute-1 ovn_controller[130279]: 2025-12-06T07:53:55Z|00684|binding|INFO|Removing iface tap680d7c20-81 ovn-installed in OVS
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.785 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:55.792 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:1c:a5 10.100.0.11'], port_security=['fa:16:3e:d5:1c:a5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0852932a-3266-432f-9975-8870535aff4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '8', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=680d7c20-81e0-48d0-954c-9fbcda7e7615) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:53:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:55.794 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 680d7c20-81e0-48d0-954c-9fbcda7e7615 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 unbound from our chassis
Dec 06 07:53:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:55.796 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d151181-0dfe-43ab-b47e-15b53add33a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:53:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:55.797 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[569bbd75-3b3c-4fad-bc9f-68f5419a728b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:55.798 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace which is not needed anymore
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.798 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:55 compute-1 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Dec 06 07:53:55 compute-1 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a4.scope: Consumed 19.034s CPU time.
Dec 06 07:53:55 compute-1 systemd-machined[190302]: Machine qemu-77-instance-000000a4 terminated.
Dec 06 07:53:55 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290408]: [NOTICE]   (290426) : haproxy version is 2.8.14-c23fe91
Dec 06 07:53:55 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290408]: [NOTICE]   (290426) : path to executable is /usr/sbin/haproxy
Dec 06 07:53:55 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290408]: [WARNING]  (290426) : Exiting Master process...
Dec 06 07:53:55 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290408]: [ALERT]    (290426) : Current worker (290429) exited with code 143 (Terminated)
Dec 06 07:53:55 compute-1 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[290408]: [WARNING]  (290426) : All workers exited. Exiting... (0)
Dec 06 07:53:55 compute-1 systemd[1]: libpod-0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6.scope: Deactivated successfully.
Dec 06 07:53:55 compute-1 podman[292364]: 2025-12-06 07:53:55.91774963 +0000 UTC m=+0.040515756 container died 0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:53:55 compute-1 systemd[1]: var-lib-containers-storage-overlay-37e896e7df497063e9b2ac38f68f8242b255bafe628c25ea8b02978ae84a1e9b-merged.mount: Deactivated successfully.
Dec 06 07:53:55 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6-userdata-shm.mount: Deactivated successfully.
Dec 06 07:53:55 compute-1 podman[292364]: 2025-12-06 07:53:55.95849553 +0000 UTC m=+0.081261656 container cleanup 0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.964 226109 INFO nova.virt.libvirt.driver [-] [instance: 0852932a-3266-432f-9975-8870535aff4e] Instance destroyed successfully.
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.965 226109 DEBUG nova.objects.instance [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'resources' on Instance uuid 0852932a-3266-432f-9975-8870535aff4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:53:55 compute-1 systemd[1]: libpod-conmon-0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6.scope: Deactivated successfully.
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.986 226109 DEBUG nova.virt.libvirt.vif [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:49:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-881580221',display_name='tempest-ServerRescueNegativeTestJSON-server-881580221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-881580221',id=164,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:51:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-8ig59luj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:51:33Z,user_data=None,user_id='f2335740042045fba7f544ee5140eb87',uuid=0852932a-3266-432f-9975-8870535aff4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.986 226109 DEBUG nova.network.os_vif_util [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "address": "fa:16:3e:d5:1c:a5", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680d7c20-81", "ovs_interfaceid": "680d7c20-81e0-48d0-954c-9fbcda7e7615", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.987 226109 DEBUG nova.network.os_vif_util [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:1c:a5,bridge_name='br-int',has_traffic_filtering=True,id=680d7c20-81e0-48d0-954c-9fbcda7e7615,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680d7c20-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.987 226109 DEBUG os_vif [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:1c:a5,bridge_name='br-int',has_traffic_filtering=True,id=680d7c20-81e0-48d0-954c-9fbcda7e7615,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680d7c20-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.989 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.990 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap680d7c20-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.991 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.994 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:53:55 compute-1 nova_compute[226101]: 2025-12-06 07:53:55.996 226109 INFO os_vif [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:1c:a5,bridge_name='br-int',has_traffic_filtering=True,id=680d7c20-81e0-48d0-954c-9fbcda7e7615,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680d7c20-81')
Dec 06 07:53:56 compute-1 podman[292403]: 2025-12-06 07:53:56.052375914 +0000 UTC m=+0.071530076 container remove 0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:53:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:56.058 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a4e819-a598-41c3-9a67-410234c29c46]: (4, ('Sat Dec  6 07:53:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6)\n0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6\nSat Dec  6 07:53:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6)\n0cf59b3df2f8548e5e58579c0814e7f3f313c5c0423bff459ef4e270f08c51c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:56.060 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fe615d-9eda-4c01-bdd6-e31787fc0fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:56.060 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:56 compute-1 kernel: tap3d151181-00: left promiscuous mode
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.063 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.075 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:56.078 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7ff43a-d013-4846-ad7a-5baf0c49e411]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:56.098 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b23377-e920-49cf-9a9a-4acc97f4b2c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:56.099 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[01c08323-a83b-4712-9341-4978db3b7a93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:56.114 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c15717fb-f6dc-4880-a6fc-932ff690cc18]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777162, 'reachable_time': 38662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292435, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:56.117 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:53:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:53:56.117 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8457d0-5485-4cea-b50b-47c8e1c3105c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:56 compute-1 systemd[1]: run-netns-ovnmeta\x2d3d151181\x2d0dfe\x2d43ab\x2db47e\x2d15b53add33a6.mount: Deactivated successfully.
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.124 226109 DEBUG nova.compute.manager [req-a049daf8-7fb0-4f3a-9f9c-90348125889b req-c9731e2b-c4c1-4720-8637-8557f370ae38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-unplugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.125 226109 DEBUG oslo_concurrency.lockutils [req-a049daf8-7fb0-4f3a-9f9c-90348125889b req-c9731e2b-c4c1-4720-8637-8557f370ae38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.125 226109 DEBUG oslo_concurrency.lockutils [req-a049daf8-7fb0-4f3a-9f9c-90348125889b req-c9731e2b-c4c1-4720-8637-8557f370ae38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.125 226109 DEBUG oslo_concurrency.lockutils [req-a049daf8-7fb0-4f3a-9f9c-90348125889b req-c9731e2b-c4c1-4720-8637-8557f370ae38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.126 226109 DEBUG nova.compute.manager [req-a049daf8-7fb0-4f3a-9f9c-90348125889b req-c9731e2b-c4c1-4720-8637-8557f370ae38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] No waiting events found dispatching network-vif-unplugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.126 226109 DEBUG nova.compute.manager [req-a049daf8-7fb0-4f3a-9f9c-90348125889b req-c9731e2b-c4c1-4720-8637-8557f370ae38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-unplugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.403 226109 INFO nova.virt.libvirt.driver [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Deleting instance files /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e_del
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.404 226109 INFO nova.virt.libvirt.driver [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Deletion of /var/lib/nova/instances/0852932a-3266-432f-9975-8870535aff4e_del complete
Dec 06 07:53:56 compute-1 ceph-mon[81689]: pgmap v3018: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 5.9 MiB/s wr, 251 op/s
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.496 226109 INFO nova.compute.manager [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Took 0.77 seconds to destroy the instance on the hypervisor.
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.496 226109 DEBUG oslo.service.loopingcall [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.497 226109 DEBUG nova.compute.manager [-] [instance: 0852932a-3266-432f-9975-8870535aff4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.497 226109 DEBUG nova.network.neutron [-] [instance: 0852932a-3266-432f-9975-8870535aff4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:53:56 compute-1 nova_compute[226101]: 2025-12-06 07:53:56.644 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:57 compute-1 nova_compute[226101]: 2025-12-06 07:53:57.375 226109 DEBUG nova.network.neutron [-] [instance: 0852932a-3266-432f-9975-8870535aff4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:57 compute-1 nova_compute[226101]: 2025-12-06 07:53:57.399 226109 INFO nova.compute.manager [-] [instance: 0852932a-3266-432f-9975-8870535aff4e] Took 0.90 seconds to deallocate network for instance.
Dec 06 07:53:57 compute-1 nova_compute[226101]: 2025-12-06 07:53:57.450 226109 DEBUG oslo_concurrency.lockutils [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:57 compute-1 nova_compute[226101]: 2025-12-06 07:53:57.450 226109 DEBUG oslo_concurrency.lockutils [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:57 compute-1 nova_compute[226101]: 2025-12-06 07:53:57.460 226109 DEBUG nova.compute.manager [req-b32fc164-cc0c-44d5-88e5-199d9d453190 req-1468f13a-c301-47f8-8108-7cd69edbe3a7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-deleted-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:57.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:57 compute-1 nova_compute[226101]: 2025-12-06 07:53:57.539 226109 DEBUG oslo_concurrency.processutils [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:53:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4115530818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:57 compute-1 nova_compute[226101]: 2025-12-06 07:53:57.959 226109 DEBUG oslo_concurrency.processutils [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:57 compute-1 nova_compute[226101]: 2025-12-06 07:53:57.965 226109 DEBUG nova.compute.provider_tree [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:53:57 compute-1 nova_compute[226101]: 2025-12-06 07:53:57.983 226109 DEBUG nova.scheduler.client.report [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:53:58 compute-1 nova_compute[226101]: 2025-12-06 07:53:58.018 226109 DEBUG oslo_concurrency.lockutils [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:58 compute-1 nova_compute[226101]: 2025-12-06 07:53:58.041 226109 INFO nova.scheduler.client.report [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Deleted allocations for instance 0852932a-3266-432f-9975-8870535aff4e
Dec 06 07:53:58 compute-1 nova_compute[226101]: 2025-12-06 07:53:58.175 226109 DEBUG oslo_concurrency.lockutils [None req-4b5bd80d-fd39-4841-8534-136b11c308c3 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:58 compute-1 nova_compute[226101]: 2025-12-06 07:53:58.242 226109 DEBUG nova.compute.manager [req-80805905-b695-463b-ada6-b984e0a48a5d req-f8932d11-873a-44a6-a4ba-7ff9b88da51f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:58 compute-1 nova_compute[226101]: 2025-12-06 07:53:58.242 226109 DEBUG oslo_concurrency.lockutils [req-80805905-b695-463b-ada6-b984e0a48a5d req-f8932d11-873a-44a6-a4ba-7ff9b88da51f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0852932a-3266-432f-9975-8870535aff4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:58 compute-1 nova_compute[226101]: 2025-12-06 07:53:58.243 226109 DEBUG oslo_concurrency.lockutils [req-80805905-b695-463b-ada6-b984e0a48a5d req-f8932d11-873a-44a6-a4ba-7ff9b88da51f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:58 compute-1 nova_compute[226101]: 2025-12-06 07:53:58.243 226109 DEBUG oslo_concurrency.lockutils [req-80805905-b695-463b-ada6-b984e0a48a5d req-f8932d11-873a-44a6-a4ba-7ff9b88da51f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0852932a-3266-432f-9975-8870535aff4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:58 compute-1 nova_compute[226101]: 2025-12-06 07:53:58.243 226109 DEBUG nova.compute.manager [req-80805905-b695-463b-ada6-b984e0a48a5d req-f8932d11-873a-44a6-a4ba-7ff9b88da51f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] No waiting events found dispatching network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:58 compute-1 nova_compute[226101]: 2025-12-06 07:53:58.243 226109 WARNING nova.compute.manager [req-80805905-b695-463b-ada6-b984e0a48a5d req-f8932d11-873a-44a6-a4ba-7ff9b88da51f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0852932a-3266-432f-9975-8870535aff4e] Received unexpected event network-vif-plugged-680d7c20-81e0-48d0-954c-9fbcda7e7615 for instance with vm_state deleted and task_state None.
Dec 06 07:53:58 compute-1 ceph-mon[81689]: pgmap v3019: 305 pgs: 305 active+clean; 430 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 225 op/s
Dec 06 07:53:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4115530818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e396 e396: 3 total, 3 up, 3 in
Dec 06 07:53:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:58.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:53:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:59.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:59 compute-1 nova_compute[226101]: 2025-12-06 07:53:59.493 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:59 compute-1 ceph-mon[81689]: osdmap e396: 3 total, 3 up, 3 in
Dec 06 07:53:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/808322903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:00 compute-1 nova_compute[226101]: 2025-12-06 07:54:00.430 226109 DEBUG nova.compute.manager [req-0361da0d-f10c-471b-9971-2205c20de1cd req-1757a48f-9bcc-45f2-9a5d-974a585a8f12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:54:00 compute-1 nova_compute[226101]: 2025-12-06 07:54:00.430 226109 DEBUG nova.compute.manager [req-0361da0d-f10c-471b-9971-2205c20de1cd req-1757a48f-9bcc-45f2-9a5d-974a585a8f12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing instance network info cache due to event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:54:00 compute-1 nova_compute[226101]: 2025-12-06 07:54:00.431 226109 DEBUG oslo_concurrency.lockutils [req-0361da0d-f10c-471b-9971-2205c20de1cd req-1757a48f-9bcc-45f2-9a5d-974a585a8f12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:54:00 compute-1 nova_compute[226101]: 2025-12-06 07:54:00.431 226109 DEBUG oslo_concurrency.lockutils [req-0361da0d-f10c-471b-9971-2205c20de1cd req-1757a48f-9bcc-45f2-9a5d-974a585a8f12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:54:00 compute-1 nova_compute[226101]: 2025-12-06 07:54:00.431 226109 DEBUG nova.network.neutron [req-0361da0d-f10c-471b-9971-2205c20de1cd req-1757a48f-9bcc-45f2-9a5d-974a585a8f12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:54:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:00 compute-1 ceph-mon[81689]: pgmap v3021: 305 pgs: 305 active+clean; 374 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 190 op/s
Dec 06 07:54:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:00.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:00 compute-1 nova_compute[226101]: 2025-12-06 07:54:00.992 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:01.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:01.674 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:01.675 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:01.675 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:01 compute-1 nova_compute[226101]: 2025-12-06 07:54:01.924 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:01 compute-1 nova_compute[226101]: 2025-12-06 07:54:01.950 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Triggering sync for uuid 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:54:01 compute-1 nova_compute[226101]: 2025-12-06 07:54:01.951 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:01 compute-1 nova_compute[226101]: 2025-12-06 07:54:01.951 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:01 compute-1 nova_compute[226101]: 2025-12-06 07:54:01.973 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:02 compute-1 ceph-mon[81689]: pgmap v3022: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 17 KiB/s wr, 135 op/s
Dec 06 07:54:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:02.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:02 compute-1 nova_compute[226101]: 2025-12-06 07:54:02.849 226109 DEBUG nova.network.neutron [req-0361da0d-f10c-471b-9971-2205c20de1cd req-1757a48f-9bcc-45f2-9a5d-974a585a8f12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updated VIF entry in instance network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:54:02 compute-1 nova_compute[226101]: 2025-12-06 07:54:02.850 226109 DEBUG nova.network.neutron [req-0361da0d-f10c-471b-9971-2205c20de1cd req-1757a48f-9bcc-45f2-9a5d-974a585a8f12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updating instance_info_cache with network_info: [{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:54:02 compute-1 nova_compute[226101]: 2025-12-06 07:54:02.887 226109 DEBUG oslo_concurrency.lockutils [req-0361da0d-f10c-471b-9971-2205c20de1cd req-1757a48f-9bcc-45f2-9a5d-974a585a8f12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:54:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:03.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:04 compute-1 nova_compute[226101]: 2025-12-06 07:54:04.537 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:04 compute-1 ovn_controller[130279]: 2025-12-06T07:54:04Z|00685|binding|INFO|Releasing lport 684342c7-1709-4776-be04-f6d5a6b0b0ae from this chassis (sb_readonly=0)
Dec 06 07:54:04 compute-1 nova_compute[226101]: 2025-12-06 07:54:04.546 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:04 compute-1 ovn_controller[130279]: 2025-12-06T07:54:04Z|00686|binding|INFO|Releasing lport 684342c7-1709-4776-be04-f6d5a6b0b0ae from this chassis (sb_readonly=0)
Dec 06 07:54:04 compute-1 nova_compute[226101]: 2025-12-06 07:54:04.709 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:54:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:04.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:54:04 compute-1 ceph-mon[81689]: pgmap v3023: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 17 KiB/s wr, 135 op/s
Dec 06 07:54:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:05.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1188045838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:05 compute-1 nova_compute[226101]: 2025-12-06 07:54:05.994 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:06.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:06 compute-1 ceph-mon[81689]: pgmap v3024: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 13 KiB/s wr, 68 op/s
Dec 06 07:54:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:07.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:08 compute-1 nova_compute[226101]: 2025-12-06 07:54:08.416 226109 DEBUG nova.compute.manager [req-ad537940-fdca-4736-9cfb-159d271fc88f req-0b0bd85f-3d99-46db-841b-694800f46ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:54:08 compute-1 nova_compute[226101]: 2025-12-06 07:54:08.416 226109 DEBUG nova.compute.manager [req-ad537940-fdca-4736-9cfb-159d271fc88f req-0b0bd85f-3d99-46db-841b-694800f46ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing instance network info cache due to event network-changed-5a1f8827-eedc-46b4-859d-37533bd8cdb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:54:08 compute-1 nova_compute[226101]: 2025-12-06 07:54:08.417 226109 DEBUG oslo_concurrency.lockutils [req-ad537940-fdca-4736-9cfb-159d271fc88f req-0b0bd85f-3d99-46db-841b-694800f46ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:54:08 compute-1 nova_compute[226101]: 2025-12-06 07:54:08.417 226109 DEBUG oslo_concurrency.lockutils [req-ad537940-fdca-4736-9cfb-159d271fc88f req-0b0bd85f-3d99-46db-841b-694800f46ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:54:08 compute-1 nova_compute[226101]: 2025-12-06 07:54:08.417 226109 DEBUG nova.network.neutron [req-ad537940-fdca-4736-9cfb-159d271fc88f req-0b0bd85f-3d99-46db-841b-694800f46ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Refreshing network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:54:08 compute-1 ceph-mon[81689]: pgmap v3025: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 13 KiB/s wr, 63 op/s
Dec 06 07:54:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:08.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:08 compute-1 nova_compute[226101]: 2025-12-06 07:54:08.943 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 07:54:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:09.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 07:54:09 compute-1 nova_compute[226101]: 2025-12-06 07:54:09.585 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3822958825' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:54:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3822958825' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:54:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:10 compute-1 ceph-mon[81689]: pgmap v3026: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 12 KiB/s wr, 41 op/s
Dec 06 07:54:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:10.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:10 compute-1 nova_compute[226101]: 2025-12-06 07:54:10.961 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007635.960127, 0852932a-3266-432f-9975-8870535aff4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:54:10 compute-1 nova_compute[226101]: 2025-12-06 07:54:10.962 226109 INFO nova.compute.manager [-] [instance: 0852932a-3266-432f-9975-8870535aff4e] VM Stopped (Lifecycle Event)
Dec 06 07:54:10 compute-1 nova_compute[226101]: 2025-12-06 07:54:10.987 226109 DEBUG nova.compute.manager [None req-ca5ec6f7-1cce-4a37-b05f-6be774452267 - - - - - -] [instance: 0852932a-3266-432f-9975-8870535aff4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:54:10 compute-1 nova_compute[226101]: 2025-12-06 07:54:10.996 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.333 226109 DEBUG oslo_concurrency.lockutils [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.333 226109 DEBUG oslo_concurrency.lockutils [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.349 226109 INFO nova.compute.manager [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Detaching volume 903b6ad8-93b6-4571-9ff7-80ce95667ba0
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.372 226109 DEBUG nova.network.neutron [req-ad537940-fdca-4736-9cfb-159d271fc88f req-0b0bd85f-3d99-46db-841b-694800f46ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updated VIF entry in instance network info cache for port 5a1f8827-eedc-46b4-859d-37533bd8cdb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.373 226109 DEBUG nova.network.neutron [req-ad537940-fdca-4736-9cfb-159d271fc88f req-0b0bd85f-3d99-46db-841b-694800f46ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updating instance_info_cache with network_info: [{"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.390 226109 DEBUG oslo_concurrency.lockutils [req-ad537940-fdca-4736-9cfb-159d271fc88f req-0b0bd85f-3d99-46db-841b-694800f46ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:54:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:11.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.721 226109 INFO nova.virt.block_device [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Attempting to driver detach volume 903b6ad8-93b6-4571-9ff7-80ce95667ba0 from mountpoint /dev/vdb
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.732 226109 DEBUG nova.virt.libvirt.driver [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Attempting to detach device vdb from instance 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.732 226109 DEBUG nova.virt.libvirt.guest [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:54:11 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-903b6ad8-93b6-4571-9ff7-80ce95667ba0">
Dec 06 07:54:11 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]:   </source>
Dec 06 07:54:11 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]:   <serial>903b6ad8-93b6-4571-9ff7-80ce95667ba0</serial>
Dec 06 07:54:11 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]: </disk>
Dec 06 07:54:11 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.898 226109 INFO nova.virt.libvirt.driver [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Successfully detached device vdb from instance 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af from the persistent domain config.
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.899 226109 DEBUG nova.virt.libvirt.driver [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.900 226109 DEBUG nova.virt.libvirt.guest [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:54:11 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-903b6ad8-93b6-4571-9ff7-80ce95667ba0">
Dec 06 07:54:11 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]:   </source>
Dec 06 07:54:11 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]:   <serial>903b6ad8-93b6-4571-9ff7-80ce95667ba0</serial>
Dec 06 07:54:11 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:54:11 compute-1 nova_compute[226101]: </disk>
Dec 06 07:54:11 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.969 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765007651.9695091, 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.971 226109 DEBUG nova.virt.libvirt.driver [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:54:11 compute-1 nova_compute[226101]: 2025-12-06 07:54:11.973 226109 INFO nova.virt.libvirt.driver [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Successfully detached device vdb from instance 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af from the live domain config.
Dec 06 07:54:12 compute-1 nova_compute[226101]: 2025-12-06 07:54:12.288 226109 DEBUG nova.objects.instance [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'flavor' on Instance uuid 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:54:12 compute-1 nova_compute[226101]: 2025-12-06 07:54:12.328 226109 DEBUG oslo_concurrency.lockutils [None req-513bb81b-66fd-435d-8946-f0b9e84ae375 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:12.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:12 compute-1 ceph-mon[81689]: pgmap v3027: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 10 KiB/s wr, 36 op/s
Dec 06 07:54:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/917406840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:54:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:13.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2975706700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.547 226109 DEBUG oslo_concurrency.lockutils [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.548 226109 DEBUG oslo_concurrency.lockutils [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.548 226109 DEBUG oslo_concurrency.lockutils [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.549 226109 DEBUG oslo_concurrency.lockutils [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.549 226109 DEBUG oslo_concurrency.lockutils [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.551 226109 INFO nova.compute.manager [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Terminating instance
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.552 226109 DEBUG nova.compute.manager [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.625 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:14 compute-1 kernel: tap5a1f8827-ee (unregistering): left promiscuous mode
Dec 06 07:54:14 compute-1 NetworkManager[49031]: <info>  [1765007654.6488] device (tap5a1f8827-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:54:14 compute-1 ovn_controller[130279]: 2025-12-06T07:54:14Z|00687|binding|INFO|Releasing lport 5a1f8827-eedc-46b4-859d-37533bd8cdb6 from this chassis (sb_readonly=0)
Dec 06 07:54:14 compute-1 ovn_controller[130279]: 2025-12-06T07:54:14Z|00688|binding|INFO|Setting lport 5a1f8827-eedc-46b4-859d-37533bd8cdb6 down in Southbound
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.657 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:14 compute-1 ovn_controller[130279]: 2025-12-06T07:54:14Z|00689|binding|INFO|Removing iface tap5a1f8827-ee ovn-installed in OVS
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.659 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.660 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:14.665 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:63:bb 10.100.0.10'], port_security=['fa:16:3e:bf:63:bb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4d2e7124-8b09-4fd5-bc94-c2c9a3d964af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f08afae8-f952-4a01-a643-61a4dc212937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c8fc5bc237e42bfad505a0bca6681eb', 'neutron:revision_number': '8', 'neutron:security_group_ids': '97e2366f-5968-4702-838b-5830aacf120f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9bb6731-402a-4b02-b4cf-9ac913838fd2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=5a1f8827-eedc-46b4-859d-37533bd8cdb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:54:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:14.666 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 5a1f8827-eedc-46b4-859d-37533bd8cdb6 in datapath f08afae8-f952-4a01-a643-61a4dc212937 unbound from our chassis
Dec 06 07:54:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:14.667 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f08afae8-f952-4a01-a643-61a4dc212937, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:54:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:14.668 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[22ad9634-e2a3-4d7a-80e0-362b5595410e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:14.669 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 namespace which is not needed anymore
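
The PortBindingUpdatedEvent match above is ovsdbapp's row-event machinery: the metadata agent registers a RowEvent against the Southbound Port_Binding table and reacts when the row's up/chassis columns change. A minimal sketch of that pattern, assuming the same event triple the agent logs (the agent's real class carries more logic):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire on updates to Port_Binding rows in the OVN Southbound DB."""

        def __init__(self):
            # Mirrors the logged triple: events=('update',),
            # table='Port_Binding', conditions=None.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # 'old' carries only the columns that changed; up [True]->[False]
            # with the chassis cleared means the port left this chassis.
            if getattr(old, 'chassis', None):
                print('lport %s unbound from this chassis' % row.logical_port)
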
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.683 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:14 compute-1 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000ab.scope: Deactivated successfully.
Dec 06 07:54:14 compute-1 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000ab.scope: Consumed 15.210s CPU time.
Dec 06 07:54:14 compute-1 systemd-machined[190302]: Machine qemu-79-instance-000000ab terminated.
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.791 226109 INFO nova.virt.libvirt.driver [-] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Instance destroyed successfully.
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.792 226109 DEBUG nova.objects.instance [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lazy-loading 'resources' on Instance uuid 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.806 226109 DEBUG nova.virt.libvirt.vif [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:52:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-340091425',display_name='tempest-TestMinimumBasicScenario-server-340091425',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-340091425',id=171,image_ref='1af12b1c-b711-4118-bb67-f97cd1b83061',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWg+OuPgRfJkF5bYdFQQQkvtf9shaTk87ZwwwRFdmtYywqGfmf219JAc/WhRsuU/0QBvKCF559O8SFUHaVuq2eR12p/xyrs9TGhlFmJ6ye8Pfwsq05h3CgeNSYYLP6AOQ==',key_name='tempest-TestMinimumBasicScenario-980781584',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:52:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0c8fc5bc237e42bfad505a0bca6681eb',ramdisk_id='',reservation_id='r-qu66fmgi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1af12b1c-b711-4118-bb67-f97cd1b83061',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1310413980',owner_user_name='tempest-TestMinimumBasicScenario-1310413980-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:53:27Z,user_data=None,user_id='0f669e963dc54ad7bebf8dd20341428a',uuid=4d2e7124-8b09-4fd5-bc94-c2c9a3d964af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.806 226109 DEBUG nova.network.os_vif_util [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converting VIF {"id": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "address": "fa:16:3e:bf:63:bb", "network": {"id": "f08afae8-f952-4a01-a643-61a4dc212937", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-655838283-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c8fc5bc237e42bfad505a0bca6681eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a1f8827-ee", "ovs_interfaceid": "5a1f8827-eedc-46b4-859d-37533bd8cdb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.807 226109 DEBUG nova.network.os_vif_util [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:63:bb,bridge_name='br-int',has_traffic_filtering=True,id=5a1f8827-eedc-46b4-859d-37533bd8cdb6,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a1f8827-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.807 226109 DEBUG os_vif [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:63:bb,bridge_name='br-int',has_traffic_filtering=True,id=5a1f8827-eedc-46b4-859d-37533bd8cdb6,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a1f8827-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.808 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.809 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a1f8827-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.810 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.813 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.816 226109 INFO os_vif [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:63:bb,bridge_name='br-int',has_traffic_filtering=True,id=5a1f8827-eedc-46b4-859d-37533bd8cdb6,network=Network(f08afae8-f952-4a01-a643-61a4dc212937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a1f8827-ee')
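
The unplug above goes through the public os-vif API: nova converts its VIF dict into an os_vif object (the "Converted object" line), then hands it to the 'ovs' plugin, which issues the DelPortCommand seen in the ovsdbapp transaction. A hedged, self-contained sketch of that call chain, with field values copied from the log (illustrative, not nova's code):

    import os_vif
    from os_vif.objects import instance_info as osv_instance
    from os_vif.objects import vif as osv_vif

    os_vif.initialize()  # load the ovs/linux_bridge/... plugins

    vif = osv_vif.VIFOpenVSwitch(
        id='5a1f8827-eedc-46b4-859d-37533bd8cdb6',
        address='fa:16:3e:bf:63:bb',
        bridge_name='br-int',
        vif_name='tap5a1f8827-ee')
    instance = osv_instance.InstanceInfo(
        uuid='4d2e7124-8b09-4fd5-bc94-c2c9a3d964af',
        name='instance-000000ab')

    # Ends in DelPortCommand(port=tap5a1f8827-ee, bridge=br-int) in OVSDB.
    os_vif.unplug(vif, instance)
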
Dec 06 07:54:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:14.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.900 226109 DEBUG nova.compute.manager [req-7000ad71-b8e6-4e06-aa54-7684de0249b4 req-c1372a52-59fd-41c0-80d5-5cad929020e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-vif-unplugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.901 226109 DEBUG oslo_concurrency.lockutils [req-7000ad71-b8e6-4e06-aa54-7684de0249b4 req-c1372a52-59fd-41c0-80d5-5cad929020e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.901 226109 DEBUG oslo_concurrency.lockutils [req-7000ad71-b8e6-4e06-aa54-7684de0249b4 req-c1372a52-59fd-41c0-80d5-5cad929020e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.901 226109 DEBUG oslo_concurrency.lockutils [req-7000ad71-b8e6-4e06-aa54-7684de0249b4 req-c1372a52-59fd-41c0-80d5-5cad929020e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.901 226109 DEBUG nova.compute.manager [req-7000ad71-b8e6-4e06-aa54-7684de0249b4 req-c1372a52-59fd-41c0-80d5-5cad929020e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] No waiting events found dispatching network-vif-unplugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:54:14 compute-1 nova_compute[226101]: 2025-12-06 07:54:14.902 226109 DEBUG nova.compute.manager [req-7000ad71-b8e6-4e06-aa54-7684de0249b4 req-c1372a52-59fd-41c0-80d5-5cad929020e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-vif-unplugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
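
The Acquiring/acquired/released triplets come from oslo.concurrency's lockutils, which nova uses to serialize event dispatch per instance (the "<uuid>-events" lock above). The pattern, sketched with the logged lock name; the wrapper body is an assumption:

    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, event_name):
        @lockutils.synchronized('%s-events' % instance_uuid)
        def _pop_event():
            # Pop the waiter registered for this event, if any, under the
            # lock; returning None is the "No waiting events found" case.
            return None
        return _pop_event()
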
Dec 06 07:54:15 compute-1 ceph-mon[81689]: pgmap v3028: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 3.0 KiB/s wr, 0 op/s
Dec 06 07:54:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3823144344' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:54:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3823144344' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:54:15 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[292080]: [NOTICE]   (292084) : haproxy version is 2.8.14-c23fe91
Dec 06 07:54:15 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[292080]: [NOTICE]   (292084) : path to executable is /usr/sbin/haproxy
Dec 06 07:54:15 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[292080]: [WARNING]  (292084) : Exiting Master process...
Dec 06 07:54:15 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[292080]: [ALERT]    (292084) : Current worker (292086) exited with code 143 (Terminated)
Dec 06 07:54:15 compute-1 neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937[292080]: [WARNING]  (292084) : All workers exited. Exiting... (0)
Dec 06 07:54:15 compute-1 systemd[1]: libpod-c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b.scope: Deactivated successfully.
Dec 06 07:54:15 compute-1 podman[292485]: 2025-12-06 07:54:15.143835457 +0000 UTC m=+0.382572843 container died c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:54:15 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b-userdata-shm.mount: Deactivated successfully.
Dec 06 07:54:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-9e94429c28d80103ca895d10b95325b1bb18b58f9ae1c814aa78c835c14e280d-merged.mount: Deactivated successfully.
Dec 06 07:54:15 compute-1 podman[292485]: 2025-12-06 07:54:15.193787405 +0000 UTC m=+0.432524791 container cleanup c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:54:15 compute-1 systemd[1]: libpod-conmon-c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b.scope: Deactivated successfully.
Dec 06 07:54:15 compute-1 podman[292542]: 2025-12-06 07:54:15.251541511 +0000 UTC m=+0.038101191 container remove c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:54:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:15.257 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4dff1725-4912-4120-bb7a-f99e0e941d06]: (4, ('Sat Dec  6 07:54:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 (c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b)\nc27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b\nSat Dec  6 07:54:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 (c27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b)\nc27414dd185a4b0966e6e87d148768b16273cdbcd223c23c324c062722f5394b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:15.259 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a021dff7-a24a-4239-9f70-81480dcb6af2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:15.260 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf08afae8-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:54:15 compute-1 nova_compute[226101]: 2025-12-06 07:54:15.261 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:15 compute-1 kernel: tapf08afae8-f0: left promiscuous mode
Dec 06 07:54:15 compute-1 nova_compute[226101]: 2025-12-06 07:54:15.274 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:15.277 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e93709bf-1be0-4ac6-93c3-3bb7b2a2644c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:15.291 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b5215f-ce09-4b22-9868-840dcb442d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:15.292 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7efa3177-268d-4a5b-aac8-2e577a5468e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:15.306 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[390b565b-bd0f-4c87-afeb-d20f44d015ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788661, 'reachable_time': 34457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292558, 'error': None, 'target': 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:15.308 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:54:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:15.308 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2aaf1f-2ca4-4692-96b5-ef094ef8cecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:15 compute-1 systemd[1]: run-netns-ovnmeta\x2df08afae8\x2df952\x2d4a01\x2da643\x2d61a4dc212937.mount: Deactivated successfully.
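
The namespace deletion logged by neutron.privileged.agent.linux.ip_lib bottoms out in pyroute2's network-namespace helpers; once /run/netns/<name> is unlinked, systemd reports the matching mount unit deactivating, as above. A rough equivalent (requires root; the name is copied from the log):

    from pyroute2 import netns

    ns = 'ovnmeta-f08afae8-f952-4a01-a643-61a4dc212937'
    if ns in netns.listnetns():
        netns.remove(ns)   # unlinks /run/netns/<ns>
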
Dec 06 07:54:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:15.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:15 compute-1 nova_compute[226101]: 2025-12-06 07:54:15.569 226109 INFO nova.virt.libvirt.driver [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Deleting instance files /var/lib/nova/instances/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_del
Dec 06 07:54:15 compute-1 nova_compute[226101]: 2025-12-06 07:54:15.570 226109 INFO nova.virt.libvirt.driver [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Deletion of /var/lib/nova/instances/4d2e7124-8b09-4fd5-bc94-c2c9a3d964af_del complete
Dec 06 07:54:15 compute-1 nova_compute[226101]: 2025-12-06 07:54:15.646 226109 INFO nova.compute.manager [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Took 1.09 seconds to destroy the instance on the hypervisor.
Dec 06 07:54:15 compute-1 nova_compute[226101]: 2025-12-06 07:54:15.647 226109 DEBUG oslo.service.loopingcall [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:54:15 compute-1 nova_compute[226101]: 2025-12-06 07:54:15.647 226109 DEBUG nova.compute.manager [-] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:54:15 compute-1 nova_compute[226101]: 2025-12-06 07:54:15.647 226109 DEBUG nova.network.neutron [-] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
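
The "Waiting for function ... _deallocate_network_with_retries to return" line is oslo.service's loopingcall machinery: the deallocation is wrapped in a looping call so transient neutron failures can be retried, and the caller blocks on .wait(). A minimal sketch using FixedIntervalLoopingCall; the interval and function body are assumptions, and nova's real helper adds backoff and error handling:

    from oslo_service import loopingcall

    def _deallocate_network_with_retries():
        # ... call neutron; return without raising to retry next tick ...
        raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(
        _deallocate_network_with_retries)
    timer.start(interval=1).wait()  # returns once LoopingCallDone is raised
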
Dec 06 07:54:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:16 compute-1 nova_compute[226101]: 2025-12-06 07:54:16.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:16 compute-1 nova_compute[226101]: 2025-12-06 07:54:16.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:54:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:16.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:16 compute-1 nova_compute[226101]: 2025-12-06 07:54:16.878 226109 DEBUG nova.network.neutron [-] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:54:16 compute-1 nova_compute[226101]: 2025-12-06 07:54:16.894 226109 INFO nova.compute.manager [-] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Took 1.25 seconds to deallocate network for instance.
Dec 06 07:54:16 compute-1 nova_compute[226101]: 2025-12-06 07:54:16.972 226109 DEBUG oslo_concurrency.lockutils [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:16 compute-1 nova_compute[226101]: 2025-12-06 07:54:16.972 226109 DEBUG oslo_concurrency.lockutils [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:17 compute-1 nova_compute[226101]: 2025-12-06 07:54:17.038 226109 DEBUG oslo_concurrency.processutils [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:54:17 compute-1 nova_compute[226101]: 2025-12-06 07:54:17.220 226109 DEBUG nova.compute.manager [req-3ad8aa7d-fc78-4727-8345-6cf4b7952a36 req-fe0f93b6-2a5b-4f96-b93d-6575f7384b55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:54:17 compute-1 nova_compute[226101]: 2025-12-06 07:54:17.220 226109 DEBUG oslo_concurrency.lockutils [req-3ad8aa7d-fc78-4727-8345-6cf4b7952a36 req-fe0f93b6-2a5b-4f96-b93d-6575f7384b55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:17 compute-1 nova_compute[226101]: 2025-12-06 07:54:17.221 226109 DEBUG oslo_concurrency.lockutils [req-3ad8aa7d-fc78-4727-8345-6cf4b7952a36 req-fe0f93b6-2a5b-4f96-b93d-6575f7384b55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:17 compute-1 nova_compute[226101]: 2025-12-06 07:54:17.221 226109 DEBUG oslo_concurrency.lockutils [req-3ad8aa7d-fc78-4727-8345-6cf4b7952a36 req-fe0f93b6-2a5b-4f96-b93d-6575f7384b55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:17 compute-1 nova_compute[226101]: 2025-12-06 07:54:17.221 226109 DEBUG nova.compute.manager [req-3ad8aa7d-fc78-4727-8345-6cf4b7952a36 req-fe0f93b6-2a5b-4f96-b93d-6575f7384b55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] No waiting events found dispatching network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:54:17 compute-1 nova_compute[226101]: 2025-12-06 07:54:17.221 226109 WARNING nova.compute.manager [req-3ad8aa7d-fc78-4727-8345-6cf4b7952a36 req-fe0f93b6-2a5b-4f96-b93d-6575f7384b55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received unexpected event network-vif-plugged-5a1f8827-eedc-46b4-859d-37533bd8cdb6 for instance with vm_state deleted and task_state None.
Dec 06 07:54:17 compute-1 nova_compute[226101]: 2025-12-06 07:54:17.222 226109 DEBUG nova.compute.manager [req-3ad8aa7d-fc78-4727-8345-6cf4b7952a36 req-fe0f93b6-2a5b-4f96-b93d-6575f7384b55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Received event network-vif-deleted-5a1f8827-eedc-46b4-859d-37533bd8cdb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:54:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:17.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:17 compute-1 ceph-mon[81689]: pgmap v3029: 305 pgs: 305 active+clean; 368 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 79 op/s
Dec 06 07:54:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e397 e397: 3 total, 3 up, 3 in
Dec 06 07:54:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:54:18 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/51705472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:18 compute-1 nova_compute[226101]: 2025-12-06 07:54:18.191 226109 DEBUG oslo_concurrency.processutils [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
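
That ceph df round trip is a plain subprocess run through oslo.concurrency's processutils, with the JSON output parsed for cluster capacity. A sketch; the 'stats'/'total_avail_bytes' keys reflect my understanding of ceph's JSON layout and should be treated as an assumption:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide free space in GiB (raises KeyError if the layout differs).
    avail_gb = stats['stats']['total_avail_bytes'] / 1024 ** 3
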
Dec 06 07:54:18 compute-1 nova_compute[226101]: 2025-12-06 07:54:18.197 226109 DEBUG nova.compute.provider_tree [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:54:18 compute-1 nova_compute[226101]: 2025-12-06 07:54:18.212 226109 DEBUG nova.scheduler.client.report [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:54:18 compute-1 nova_compute[226101]: 2025-12-06 07:54:18.231 226109 DEBUG oslo_concurrency.lockutils [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
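
The inventory nova reports translates into schedulable capacity via placement's standard formula, capacity = (total - reserved) * allocation_ratio. Applied to the values logged above:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
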
Dec 06 07:54:18 compute-1 nova_compute[226101]: 2025-12-06 07:54:18.257 226109 INFO nova.scheduler.client.report [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Deleted allocations for instance 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af
Dec 06 07:54:18 compute-1 nova_compute[226101]: 2025-12-06 07:54:18.319 226109 DEBUG oslo_concurrency.lockutils [None req-0a18cdfe-37e1-462a-99c5-2554f3f771d0 0f669e963dc54ad7bebf8dd20341428a 0c8fc5bc237e42bfad505a0bca6681eb - - default default] Lock "4d2e7124-8b09-4fd5-bc94-c2c9a3d964af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:18.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:18 compute-1 ceph-mon[81689]: pgmap v3030: 305 pgs: 305 active+clean; 369 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 108 op/s
Dec 06 07:54:18 compute-1 ceph-mon[81689]: osdmap e397: 3 total, 3 up, 3 in
Dec 06 07:54:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/51705472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:19.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:19 compute-1 nova_compute[226101]: 2025-12-06 07:54:19.628 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:19 compute-1 nova_compute[226101]: 2025-12-06 07:54:19.810 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:20.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:20 compute-1 ceph-mon[81689]: pgmap v3032: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 321 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 4.7 MiB/s wr, 193 op/s
Dec 06 07:54:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e398 e398: 3 total, 3 up, 3 in
Dec 06 07:54:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:21.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:21 compute-1 nova_compute[226101]: 2025-12-06 07:54:21.606 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:21 compute-1 nova_compute[226101]: 2025-12-06 07:54:21.633 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:21 compute-1 nova_compute[226101]: 2025-12-06 07:54:21.633 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:21 compute-1 nova_compute[226101]: 2025-12-06 07:54:21.633 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:21 compute-1 nova_compute[226101]: 2025-12-06 07:54:21.633 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:54:21 compute-1 nova_compute[226101]: 2025-12-06 07:54:21.634 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
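
update_available_resource is one of nova's oslo.service periodic tasks, which is why it recurs under the same req-459ed0aa request id. The decorator below is the underlying mechanism; the spacing shown is an assumed value, not nova's configured default:

    from oslo_service import periodic_task

    class ComputeManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def update_available_resource(self, context):
            """Audit local compute resources and sync them to placement."""
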
Dec 06 07:54:21 compute-1 ceph-mon[81689]: osdmap e398: 3 total, 3 up, 3 in
Dec 06 07:54:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:54:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2227001455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.053 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.204 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.205 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4323MB free_disk=20.942642211914062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.205 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.205 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.253 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.253 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.279 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:54:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:54:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/122337862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.684 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.689 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.704 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.725 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:54:22 compute-1 nova_compute[226101]: 2025-12-06 07:54:22.726 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:22.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:23 compute-1 ceph-mon[81689]: pgmap v3034: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 212 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 323 op/s
Dec 06 07:54:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2227001455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/122337862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:23.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:24 compute-1 podman[292628]: 2025-12-06 07:54:24.068889389 +0000 UTC m=+0.054508141 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:54:24 compute-1 podman[292629]: 2025-12-06 07:54:24.08353119 +0000 UTC m=+0.058372034 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:54:24 compute-1 podman[292630]: 2025-12-06 07:54:24.092484179 +0000 UTC m=+0.074230798 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:54:24 compute-1 nova_compute[226101]: 2025-12-06 07:54:24.674 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:24 compute-1 nova_compute[226101]: 2025-12-06 07:54:24.811 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:24.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:25 compute-1 ceph-mon[81689]: pgmap v3035: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 212 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 642 KiB/s wr, 204 op/s
Dec 06 07:54:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:25.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:26 compute-1 nova_compute[226101]: 2025-12-06 07:54:26.348 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:26.714 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:54:26 compute-1 nova_compute[226101]: 2025-12-06 07:54:26.714 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:26.715 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:54:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:26.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:27.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:27 compute-1 ceph-mon[81689]: pgmap v3036: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 191 op/s
Dec 06 07:54:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/230897178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:27 compute-1 nova_compute[226101]: 2025-12-06 07:54:27.709 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:27 compute-1 nova_compute[226101]: 2025-12-06 07:54:27.710 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:54:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:27.717 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:54:27 compute-1 nova_compute[226101]: 2025-12-06 07:54:27.743 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:54:28 compute-1 ceph-mon[81689]: pgmap v3037: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 162 op/s
Dec 06 07:54:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2688298799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 e399: 3 total, 3 up, 3 in
Dec 06 07:54:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:28.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:29.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:29 compute-1 nova_compute[226101]: 2025-12-06 07:54:29.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:29 compute-1 nova_compute[226101]: 2025-12-06 07:54:29.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:54:29 compute-1 nova_compute[226101]: 2025-12-06 07:54:29.676 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:29 compute-1 ceph-mon[81689]: osdmap e399: 3 total, 3 up, 3 in
Dec 06 07:54:29 compute-1 nova_compute[226101]: 2025-12-06 07:54:29.789 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007654.7881746, 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:54:29 compute-1 nova_compute[226101]: 2025-12-06 07:54:29.789 226109 INFO nova.compute.manager [-] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] VM Stopped (Lifecycle Event)
Dec 06 07:54:29 compute-1 nova_compute[226101]: 2025-12-06 07:54:29.812 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:29 compute-1 nova_compute[226101]: 2025-12-06 07:54:29.836 226109 DEBUG nova.compute.manager [None req-81fa08c6-4f1e-4716-bd31-09806bcfbc34 - - - - - -] [instance: 4d2e7124-8b09-4fd5-bc94-c2c9a3d964af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:54:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:30.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:30 compute-1 ceph-mon[81689]: pgmap v3039: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 503 KiB/s rd, 3.6 KiB/s wr, 57 op/s
Dec 06 07:54:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:31.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:32.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:32 compute-1 ceph-mon[81689]: pgmap v3040: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 513 KiB/s rd, 16 KiB/s wr, 62 op/s
Dec 06 07:54:33 compute-1 sshd-session[292604]: Connection closed by 165.154.55.146 port 60156 [preauth]
Dec 06 07:54:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:33.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:33 compute-1 nova_compute[226101]: 2025-12-06 07:54:33.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:34 compute-1 nova_compute[226101]: 2025-12-06 07:54:34.678 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:34 compute-1 nova_compute[226101]: 2025-12-06 07:54:34.813 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:34.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:35 compute-1 ceph-mon[81689]: pgmap v3041: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 513 KiB/s rd, 16 KiB/s wr, 62 op/s
Dec 06 07:54:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:35.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1322277353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:36 compute-1 sshd-session[292692]: Received disconnect from 154.219.116.39 port 42530:11: Bye Bye [preauth]
Dec 06 07:54:36 compute-1 sshd-session[292692]: Disconnected from authenticating user root 154.219.116.39 port 42530 [preauth]
Dec 06 07:54:36 compute-1 nova_compute[226101]: 2025-12-06 07:54:36.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:36 compute-1 nova_compute[226101]: 2025-12-06 07:54:36.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:36.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:37 compute-1 ceph-mon[81689]: pgmap v3042: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 28 KiB/s wr, 53 op/s
Dec 06 07:54:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3167987060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:37.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:38 compute-1 ceph-mon[81689]: pgmap v3043: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 28 KiB/s wr, 53 op/s
Dec 06 07:54:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:38.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:39.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:39 compute-1 nova_compute[226101]: 2025-12-06 07:54:39.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:39 compute-1 nova_compute[226101]: 2025-12-06 07:54:39.680 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:39 compute-1 nova_compute[226101]: 2025-12-06 07:54:39.814 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:40 compute-1 ceph-mon[81689]: pgmap v3044: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 26 KiB/s wr, 47 op/s
Dec 06 07:54:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:40.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:41.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3558779861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.027 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.028 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.051 226109 DEBUG nova.compute.manager [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.161 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.162 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.182 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.182 226109 INFO nova.compute.claims [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.282 226109 DEBUG nova.scheduler.client.report [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.300 226109 DEBUG nova.scheduler.client.report [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.300 226109 DEBUG nova.compute.provider_tree [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.312 226109 DEBUG nova.scheduler.client.report [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.345 226109 DEBUG nova.scheduler.client.report [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.384 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:54:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:54:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2838096353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.803 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.809 226109 DEBUG nova.compute.provider_tree [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.833 226109 DEBUG nova.scheduler.client.report [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:54:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:42.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.858 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.859 226109 DEBUG nova.compute.manager [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.933 226109 DEBUG nova.compute.manager [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.933 226109 DEBUG nova.network.neutron [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.960 226109 INFO nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:54:42 compute-1 nova_compute[226101]: 2025-12-06 07:54:42.983 226109 DEBUG nova.compute.manager [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.096 226109 DEBUG nova.compute.manager [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.098 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.098 226109 INFO nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Creating image(s)
Dec 06 07:54:43 compute-1 ceph-mon[81689]: pgmap v3045: 305 pgs: 305 active+clean; 145 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 490 KiB/s rd, 22 KiB/s wr, 52 op/s
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.246 226109 DEBUG nova.storage.rbd_utils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.273 226109 DEBUG nova.storage.rbd_utils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.301 226109 DEBUG nova.storage.rbd_utils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.304 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.329 226109 DEBUG nova.policy [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ce6d0a8def6432aa60891ea00ef9d8b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63df107b8bd14504974c75ba92ae469b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.364 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.365 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.366 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.366 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.391 226109 DEBUG nova.storage.rbd_utils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.395 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:54:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:43.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.668 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.755 226109 DEBUG nova.storage.rbd_utils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] resizing rbd image 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.861 226109 DEBUG nova.objects.instance [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'migration_context' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.891 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.891 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Ensure instance console log exists: /var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.891 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.892 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:43 compute-1 nova_compute[226101]: 2025-12-06 07:54:43.892 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2838096353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:44 compute-1 nova_compute[226101]: 2025-12-06 07:54:44.230 226109 DEBUG nova.network.neutron [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Successfully created port: 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:54:44 compute-1 nova_compute[226101]: 2025-12-06 07:54:44.682 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:44 compute-1 nova_compute[226101]: 2025-12-06 07:54:44.815 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:44.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:45 compute-1 ceph-mon[81689]: pgmap v3046: 305 pgs: 305 active+clean; 145 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 129 KiB/s rd, 11 KiB/s wr, 24 op/s
Dec 06 07:54:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:45.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:45 compute-1 nova_compute[226101]: 2025-12-06 07:54:45.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:45 compute-1 nova_compute[226101]: 2025-12-06 07:54:45.621 226109 DEBUG nova.network.neutron [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Successfully updated port: 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:54:45 compute-1 nova_compute[226101]: 2025-12-06 07:54:45.637 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:54:45 compute-1 nova_compute[226101]: 2025-12-06 07:54:45.637 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquired lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:54:45 compute-1 nova_compute[226101]: 2025-12-06 07:54:45.637 226109 DEBUG nova.network.neutron [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:54:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:54:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1223599382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:54:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:54:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1223599382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:54:45 compute-1 nova_compute[226101]: 2025-12-06 07:54:45.773 226109 DEBUG nova.compute.manager [req-a0947197-b29c-43aa-9e07-b2dd5a921622 req-564f0590-43ea-400f-a295-73ed756e7fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-changed-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:54:45 compute-1 nova_compute[226101]: 2025-12-06 07:54:45.773 226109 DEBUG nova.compute.manager [req-a0947197-b29c-43aa-9e07-b2dd5a921622 req-564f0590-43ea-400f-a295-73ed756e7fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Refreshing instance network info cache due to event network-changed-73cf92a3-d7ff-4ec0-950c-c0c31afd8004. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:54:45 compute-1 nova_compute[226101]: 2025-12-06 07:54:45.774 226109 DEBUG oslo_concurrency.lockutils [req-a0947197-b29c-43aa-9e07-b2dd5a921622 req-564f0590-43ea-400f-a295-73ed756e7fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:54:45 compute-1 nova_compute[226101]: 2025-12-06 07:54:45.897 226109 DEBUG nova.network.neutron [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:54:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1223599382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:54:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1223599382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.776 226109 DEBUG nova.network.neutron [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updating instance_info_cache with network_info: [{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.800 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Releasing lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.801 226109 DEBUG nova.compute.manager [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance network_info: |[{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.802 226109 DEBUG oslo_concurrency.lockutils [req-a0947197-b29c-43aa-9e07-b2dd5a921622 req-564f0590-43ea-400f-a295-73ed756e7fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.802 226109 DEBUG nova.network.neutron [req-a0947197-b29c-43aa-9e07-b2dd5a921622 req-564f0590-43ea-400f-a295-73ed756e7fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Refreshing network info cache for port 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.808 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Start _get_guest_xml network_info=[{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.814 226109 WARNING nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.819 226109 DEBUG nova.virt.libvirt.host [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.819 226109 DEBUG nova.virt.libvirt.host [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.833 226109 DEBUG nova.virt.libvirt.host [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.834 226109 DEBUG nova.virt.libvirt.host [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.836 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.836 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.837 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.838 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.838 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.839 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.839 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.840 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.840 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.841 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.841 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.841 226109 DEBUG nova.virt.hardware [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
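
With no flavor or image constraints (all preferences above are 0:0:0 against 65536 maxima), the topology search reduces to enumerating sockets/cores/threads layouts whose product equals the vCPU count, and a 1-vCPU m1.nano admits only 1:1:1. A toy re-derivation of that result; nova's real _get_possible_cpu_topologies additionally folds in preference ordering:

    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is exactly vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield VirtCPUTopology(s, c, t)

    print(list(possible_topologies(1)))
    # [VirtCPUTopology(sockets=1, cores=1, threads=1)]
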
Dec 06 07:54:46 compute-1 nova_compute[226101]: 2025-12-06 07:54:46.847 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:54:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:46.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:47 compute-1 ceph-mon[81689]: pgmap v3047: 305 pgs: 305 active+clean; 138 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 392 KiB/s wr, 48 op/s
Dec 06 07:54:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:54:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2161790072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.271 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
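
The `ceph mon dump --format=json` subprocess above is how the RBD image backend discovers which monitors to list as `<host>` elements in the guest XML further down. A hedged sketch of that lookup; the exact JSON layout varies between Ceph releases, and the `mons`/`addr` shape assumed below (e.g. "192.168.122.100:6789/0") is the common one:

    import json
    import subprocess

    def monitor_endpoints(conf="/etc/ceph/ceph.conf", user="openstack"):
        """Return [(host, port), ...] for every monitor in the cluster map."""
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json", "--id", user, "--conf", conf])
        dump = json.loads(out)
        endpoints = []
        for mon in dump.get("mons", []):
            # 'addr' typically looks like "192.168.122.100:6789/0"
            host, _, rest = mon["addr"].partition(":")
            endpoints.append((host, rest.split("/")[0]))
        return endpoints
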
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.297 226109 DEBUG nova.storage.rbd_utils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.300 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:54:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:54:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1092582499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:54:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:54:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1092582499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:54:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:47.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:54:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3787459085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.722 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.724 226109 DEBUG nova.virt.libvirt.vif [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1295588604',display_name='tempest-AttachVolumeTestJSON-server-1295588604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1295588604',id=173,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkZWnDd2ourzjHlcEK7cHRdQV+nbLAhyX6fgrqVnvZqgS31pND6qBaKlg3yv3dw5xYj9PWy81LB50kUPIrMKeW7VSpCHVdreKc29y6jmgbqwFAxDIw3ZAxRTEFcoovdYA==',key_name='tempest-keypair-1313997358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-dfyu9z67',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:54:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=563d5f63-1ecc-4c65-855c-e375f6f97f29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.724 226109 DEBUG nova.network.os_vif_util [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.725 226109 DEBUG nova.network.os_vif_util [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
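
os_vif_util turns the Neutron-style VIF dict into a typed VIFOpenVSwitch, as the Converting/Converted pair above shows, with the tap name taken straight from `devname`. A stripped-down illustration of that mapping using a stand-in dataclass rather than the real os-vif model:

    from dataclasses import dataclass

    @dataclass
    class SimpleVIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool

    def nova_to_osvif(vif):
        """Map the fields nova logs in its VIF dict onto a typed object."""
        return SimpleVIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            vif_name=vif["devname"],
            active=vif["active"],
        )

    vif = {"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004",
           "address": "fa:16:3e:d6:20:98",
           "details": {"bridge_name": "br-int"},
           "devname": "tap73cf92a3-d7",
           "active": False}
    print(nova_to_osvif(vif))
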
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.726 226109 DEBUG nova.objects.instance [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'pci_devices' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.755 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:54:47 compute-1 nova_compute[226101]:   <uuid>563d5f63-1ecc-4c65-855c-e375f6f97f29</uuid>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   <name>instance-000000ad</name>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <nova:name>tempest-AttachVolumeTestJSON-server-1295588604</nova:name>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:54:46</nova:creationTime>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <nova:user uuid="0ce6d0a8def6432aa60891ea00ef9d8b">tempest-AttachVolumeTestJSON-950214889-project-member</nova:user>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <nova:project uuid="63df107b8bd14504974c75ba92ae469b">tempest-AttachVolumeTestJSON-950214889</nova:project>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <nova:port uuid="73cf92a3-d7ff-4ec0-950c-c0c31afd8004">
Dec 06 07:54:47 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <system>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <entry name="serial">563d5f63-1ecc-4c65-855c-e375f6f97f29</entry>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <entry name="uuid">563d5f63-1ecc-4c65-855c-e375f6f97f29</entry>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     </system>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   <os>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   </os>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   <features>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   </features>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/563d5f63-1ecc-4c65-855c-e375f6f97f29_disk">
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       </source>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/563d5f63-1ecc-4c65-855c-e375f6f97f29_disk.config">
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       </source>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:54:47 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:d6:20:98"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <target dev="tap73cf92a3-d7"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29/console.log" append="off"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <video>
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     </video>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:54:47 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:54:47 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:54:47 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:54:47 compute-1 nova_compute[226101]: </domain>
Dec 06 07:54:47 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
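
The finished domain XML can be checked programmatically before libvirt defines it. A small standard-library sketch that lists each RBD disk's target device, image name, and monitor hosts; `xml_text` holds an abbreviated copy of the `<domain>` document above:

    import xml.etree.ElementTree as ET

    xml_text = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/563d5f63-1ecc-4c65-855c-e375f6f97f29_disk">
            <host name="192.168.122.100" port="6789"/>
            <host name="192.168.122.102" port="6789"/>
            <host name="192.168.122.101" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>"""

    root = ET.fromstring(xml_text)
    for disk in root.iter("disk"):
        src, tgt = disk.find("source"), disk.find("target")
        hosts = [h.get("name") for h in src.iter("host")]
        print(tgt.get("dev"), src.get("name"), hosts)
    # vda vms/563d5f63-1ecc-4c65-855c-e375f6f97f29_disk ['192.168.122.100', '192.168.122.102', '192.168.122.101']
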
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.756 226109 DEBUG nova.compute.manager [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Preparing to wait for external event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.756 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.757 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.757 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
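
The lock acquire/release bracketing above comes from oslo.concurrency's named locks, which nova uses to serialize per-instance event registration. The same primitive used directly, with the `<uuid>-events` naming pattern from the log; `events` and `prepare_event` are illustrative stand-ins, not nova's InstanceEvents:

    from oslo_concurrency import lockutils

    events = {}

    def prepare_event(instance_uuid, event_name):
        """Register interest in an external event, serialized per instance."""
        with lockutils.lock(f"{instance_uuid}-events"):
            events.setdefault(instance_uuid, set()).add(event_name)

    prepare_event("563d5f63-1ecc-4c65-855c-e375f6f97f29",
                  "network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004")
    print(events)
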
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.758 226109 DEBUG nova.virt.libvirt.vif [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1295588604',display_name='tempest-AttachVolumeTestJSON-server-1295588604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1295588604',id=173,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkZWnDd2ourzjHlcEK7cHRdQV+nbLAhyX6fgrqVnvZqgS31pND6qBaKlg3yv3dw5xYj9PWy81LB50kUPIrMKeW7VSpCHVdreKc29y6jmgbqwFAxDIw3ZAxRTEFcoovdYA==',key_name='tempest-keypair-1313997358',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-dfyu9z67',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:54:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=563d5f63-1ecc-4c65-855c-e375f6f97f29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.758 226109 DEBUG nova.network.os_vif_util [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.758 226109 DEBUG nova.network.os_vif_util [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.759 226109 DEBUG os_vif [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.759 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.760 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.760 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.762 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.763 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73cf92a3-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.763 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73cf92a3-d7, col_values=(('external_ids', {'iface-id': '73cf92a3-d7ff-4ec0-950c-c0c31afd8004', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:20:98', 'vm-uuid': '563d5f63-1ecc-4c65-855c-e375f6f97f29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.764 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:47 compute-1 NetworkManager[49031]: <info>  [1765007687.7655] manager: (tap73cf92a3-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.769 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.771 226109 INFO os_vif [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7')
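
The ovsdbapp transaction that just ran (AddBridgeCommand, AddPortCommand, and the DbSetCommand writing iface-id/attached-mac/vm-uuid into external_ids) is roughly what the following ovs-vsctl calls do by hand. A sketch driving the CLI from Python instead of the OVSDB IDL, reusing the names from the log; this is an approximate equivalent requiring root and ovs-vsctl, not how os-vif actually talks to OVS:

    import subprocess

    def run(*args):
        subprocess.run(["ovs-vsctl", *args], check=True)

    bridge, port = "br-int", "tap73cf92a3-d7"
    # Idempotent bridge creation, mirroring AddBridgeCommand(may_exist=True).
    run("--may-exist", "add-br", bridge,
        "--", "set", "Bridge", bridge, "datapath_type=system")
    # Port creation plus the external_ids that OVN matches on.
    run("--may-exist", "add-port", bridge, port,
        "--", "set", "Interface", port,
        "external_ids:iface-id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004",
        "external_ids:iface-status=active",
        'external_ids:attached-mac="fa:16:3e:d6:20:98"',
        "external_ids:vm-uuid=563d5f63-1ecc-4c65-855c-e375f6f97f29")
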
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.812 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.812 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.812 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No VIF found with MAC fa:16:3e:d6:20:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.813 226109 INFO nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Using config drive
Dec 06 07:54:47 compute-1 nova_compute[226101]: 2025-12-06 07:54:47.837 226109 DEBUG nova.storage.rbd_utils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:54:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2161790072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:54:48 compute-1 ceph-mon[81689]: pgmap v3048: 305 pgs: 305 active+clean; 159 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.4 MiB/s wr, 59 op/s
Dec 06 07:54:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1092582499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:54:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1092582499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:54:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3787459085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.376 226109 DEBUG nova.network.neutron [req-a0947197-b29c-43aa-9e07-b2dd5a921622 req-564f0590-43ea-400f-a295-73ed756e7fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updated VIF entry in instance network info cache for port 73cf92a3-d7ff-4ec0-950c-c0c31afd8004. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.377 226109 DEBUG nova.network.neutron [req-a0947197-b29c-43aa-9e07-b2dd5a921622 req-564f0590-43ea-400f-a295-73ed756e7fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updating instance_info_cache with network_info: [{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.393 226109 DEBUG oslo_concurrency.lockutils [req-a0947197-b29c-43aa-9e07-b2dd5a921622 req-564f0590-43ea-400f-a295-73ed756e7fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.443 226109 INFO nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Creating config drive at /var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29/disk.config
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.451 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoo9km3iw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.586 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoo9km3iw" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
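
The config drive is just an ISO9660 image packed from a temporary metadata tree, using exactly the mkisofs flags in the command above. Reproducing that invocation from Python; the placeholder `openstack/` subdirectory stands in for whatever nova's ConfigDriveBuilder writes into the temp dir:

    import subprocess
    import tempfile
    from pathlib import Path

    def build_config_drive(iso_path, tmpdir, publisher="OpenStack Compute"):
        """Pack tmpdir into an ISO9660 image labelled config-2, as nova does."""
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", publisher, "-quiet", "-J", "-r",
             "-V", "config-2", str(tmpdir)],
            check=True)

    with tempfile.TemporaryDirectory() as d:
        Path(d, "openstack").mkdir()  # placeholder metadata tree
        build_config_drive("/tmp/disk.config", d)
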
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.619 226109 DEBUG nova.storage.rbd_utils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.624 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29/disk.config 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:54:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:48.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.879 226109 DEBUG oslo_concurrency.processutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29/disk.config 563d5f63-1ecc-4c65-855c-e375f6f97f29_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.880 226109 INFO nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Deleting local config drive /var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29/disk.config because it was imported into RBD.
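
With RBD as the image backend, the freshly built ISO is imported into the `vms` pool and the local file deleted, matching the two entries above. The same round trip as a sketch, names taken from the log; running it requires a reachable Ceph cluster, so the call is left commented:

    import os
    import subprocess

    def import_config_drive(local_path, image_name, pool="vms",
                            user="openstack", conf="/etc/ceph/ceph.conf"):
        """rbd-import a local file, then drop the local copy on success."""
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_path, image_name,
             "--image-format=2", "--id", user, "--conf", conf],
            check=True)
        os.unlink(local_path)

    # import_config_drive(
    #     "/var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29/disk.config",
    #     "563d5f63-1ecc-4c65-855c-e375f6f97f29_disk.config")
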
Dec 06 07:54:48 compute-1 kernel: tap73cf92a3-d7: entered promiscuous mode
Dec 06 07:54:48 compute-1 NetworkManager[49031]: <info>  [1765007688.9263] manager: (tap73cf92a3-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Dec 06 07:54:48 compute-1 ovn_controller[130279]: 2025-12-06T07:54:48Z|00690|binding|INFO|Claiming lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for this chassis.
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.935 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:48 compute-1 ovn_controller[130279]: 2025-12-06T07:54:48Z|00691|binding|INFO|73cf92a3-d7ff-4ec0-950c-c0c31afd8004: Claiming fa:16:3e:d6:20:98 10.100.0.5
Dec 06 07:54:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:48.953 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:20:98 10.100.0.5'], port_security=['fa:16:3e:d6:20:98 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '563d5f63-1ecc-4c65-855c-e375f6f97f29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997afd36-d3a2-430f-ba34-f342135a9bb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63df107b8bd14504974c75ba92ae469b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '88b6dfa1-842c-4e82-9a0e-5e2b4d8b3cb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2999ae76-b414-45fb-8813-4039468da309, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=73cf92a3-d7ff-4ec0-950c-c0c31afd8004) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:54:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:48.955 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 in datapath 997afd36-d3a2-430f-ba34-f342135a9bb6 bound to our chassis
Dec 06 07:54:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:48.956 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:54:48 compute-1 systemd-udevd[293017]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:54:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:48.969 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e97f1969-0a81-4fe5-8229-9131c5d39f9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:48.969 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap997afd36-d1 in ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
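
Provisioning the metadata datapath means creating a veth pair with one end moved into the ovnmeta-<network-uuid> namespace, which is what the surrounding privsep replies are doing. A rough ip(8)-based equivalent driven from Python (needs root); the agent itself goes through neutron.privileged and pyroute2 rather than the CLI:

    import subprocess

    ns = "ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6"
    outer, inner = "tap997afd36-d0", "tap997afd36-d1"

    for cmd in (
        ["ip", "netns", "add", ns],                                   # metadata namespace
        ["ip", "link", "add", outer, "type", "veth", "peer", "name", inner],
        ["ip", "link", "set", inner, "netns", ns],                    # move one end inside
        ["ip", "link", "set", outer, "up"],
        ["ip", "-n", ns, "link", "set", inner, "up"],
    ):
        subprocess.run(cmd, check=True)
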
Dec 06 07:54:48 compute-1 NetworkManager[49031]: <info>  [1765007688.9709] device (tap73cf92a3-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:54:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:48.971 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap997afd36-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:54:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:48.971 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[633a2898-64bc-4fcb-a7e1-bcb3802817e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:48 compute-1 NetworkManager[49031]: <info>  [1765007688.9726] device (tap73cf92a3-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:54:48 compute-1 systemd-machined[190302]: New machine qemu-80-instance-000000ad.
Dec 06 07:54:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:48.973 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6263753d-a896-400a-9cf4-8f7fa1276958]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:48 compute-1 systemd[1]: Started Virtual Machine qemu-80-instance-000000ad.
Dec 06 07:54:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:48.987 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[d62539bb-25bf-4ce2-b0c6-b83413c4cc70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:48 compute-1 nova_compute[226101]: 2025-12-06 07:54:48.998 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:49 compute-1 ovn_controller[130279]: 2025-12-06T07:54:49Z|00692|binding|INFO|Setting lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 ovn-installed in OVS
Dec 06 07:54:49 compute-1 ovn_controller[130279]: 2025-12-06T07:54:49Z|00693|binding|INFO|Setting lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 up in Southbound
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.009 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.012 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfd4b2e-ead5-4fee-aadc-1d389e1404a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.041 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f99170-a73e-4814-b025-055250ee7d60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.046 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a120b317-f429-4ac9-8236-c3c6b86b9631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:49 compute-1 systemd-udevd[293022]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:54:49 compute-1 NetworkManager[49031]: <info>  [1765007689.0483] manager: (tap997afd36-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/309)
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.073 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f74a9438-93c8-4dba-8b4b-e55c818d2c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.076 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9ad792-9c0c-4935-8393-2b4be45b01dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:49 compute-1 NetworkManager[49031]: <info>  [1765007689.0954] device (tap997afd36-d0): carrier: link connected
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.100 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[fe138c46-e654-4ef8-8945-482db56ef675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.116 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d87e2389-c30f-4681-8698-5a2677c14617]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997afd36-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:8b:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796859, 'reachable_time': 25590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293052, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.131 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[24f1626d-9014-4b59-a15b-58cff7e1d539]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:8b72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796859, 'tstamp': 796859}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293053, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.146 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f35fd229-e3f7-43ae-abd2-88a168ff9758]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997afd36-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:8b:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796859, 'reachable_time': 25590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293054, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
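The RTM_NEWLINK and RTM_NEWADDR replies above are pyroute2 netlink dumps returned through the privsep daemon; the agent reads attributes such as IFLA_IFNAME, IFLA_OPERSTATE and IFA_ADDRESS out of them. A minimal sketch of pulling the same fields directly with pyroute2, assuming the library is installed and the script runs as root; the namespace and interface names are the ones from the log:

    import socket
    from pyroute2 import NetNS

    # Enter the same metadata namespace the privsep replies above target.
    ns = NetNS('ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6')
    try:
        for link in ns.get_links():
            if link.get_attr('IFLA_IFNAME') == 'tap997afd36-d1':
                print(link.get_attr('IFLA_OPERSTATE'),   # 'UP' in the dump above
                      link.get_attr('IFLA_ADDRESS'))     # fa:16:3e:ec:8b:72
        for addr in ns.get_addr(family=socket.AF_INET6):
            print(addr.get_attr('IFA_ADDRESS'))          # fe80::f816:3eff:feec:8b72
    finally:
        ns.close()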
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.178 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[38a9b42b-180f-45b8-bf7d-ee9c1c5f3429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.227 226109 DEBUG nova.compute.manager [req-db466379-97e4-41dc-8905-b5d788b1ec36 req-5b4317c5-4d71-4d94-a790-6ecf63803dd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.228 226109 DEBUG oslo_concurrency.lockutils [req-db466379-97e4-41dc-8905-b5d788b1ec36 req-5b4317c5-4d71-4d94-a790-6ecf63803dd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.228 226109 DEBUG oslo_concurrency.lockutils [req-db466379-97e4-41dc-8905-b5d788b1ec36 req-5b4317c5-4d71-4d94-a790-6ecf63803dd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.229 226109 DEBUG oslo_concurrency.lockutils [req-db466379-97e4-41dc-8905-b5d788b1ec36 req-5b4317c5-4d71-4d94-a790-6ecf63803dd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.229 226109 DEBUG nova.compute.manager [req-db466379-97e4-41dc-8905-b5d788b1ec36 req-5b4317c5-4d71-4d94-a790-6ecf63803dd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Processing event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.237 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[791212ec-94f3-4790-a97f-03303d2fb01c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.238 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997afd36-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.238 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.238 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap997afd36-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:49 compute-1 kernel: tap997afd36-d0: entered promiscuous mode
Dec 06 07:54:49 compute-1 NetworkManager[49031]: <info>  [1765007689.2409] manager: (tap997afd36-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.242 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.246 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap997afd36-d0, col_values=(('external_ids', {'iface-id': '904065d3-3080-49e2-8707-2794a4ba4e6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.247 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:49 compute-1 ovn_controller[130279]: 2025-12-06T07:54:49Z|00694|binding|INFO|Releasing lport 904065d3-3080-49e2-8707-2794a4ba4e6e from this chassis (sb_readonly=0)
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.251 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
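The [Errno 2] above is the normal first-run path: the haproxy pid file does not exist yet, the read returns nothing, and the agent proceeds to render a config and spawn the proxy below. A rough sketch of that tolerant read (read_pid_file is an illustrative name, not Neutron's helper):

    import errno

    def read_pid_file(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError as e:
            if e.errno == errno.ENOENT:   # matches the [Errno 2] line above
                return None               # no proxy yet; caller will spawn one
            raise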
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.252 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0762f9dd-9191-49a4-bae3-885bef7f09c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.253 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:54:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:54:49.255 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'env', 'PROCESS_TAG=haproxy-997afd36-d3a2-430f-ba34-f342135a9bb6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/997afd36-d3a2-430f-ba34-f342135a9bb6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
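A rendered config like the one dumped above can be syntax-checked before it is handed to haproxy by running haproxy in check mode (-c), which parses the file and exits non-zero on errors without binding any sockets. A small sketch, assuming haproxy is on PATH; the config path is the one from the rootwrap command above:

    import subprocess

    cfg = '/var/lib/neutron/ovn-metadata-proxy/997afd36-d3a2-430f-ba34-f342135a9bb6.conf'
    # Parse-only validation of the generated proxy config.
    subprocess.run(['haproxy', '-c', '-f', cfg], check=True)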
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.260 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:49.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
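The radosgw "beast" access lines recur throughout this capture as anonymous HEAD / health probes from 192.168.122.100 and .102. A hedged regex for splitting them into fields; the layout is inferred from the samples in this log, not from radosgw documentation:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>\S+)')

    line = ('beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous '
            '[06/Dec/2025:07:54:49.555 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))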
Dec 06 07:54:49 compute-1 podman[293086]: 2025-12-06 07:54:49.645904341 +0000 UTC m=+0.096354561 container create 17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:54:49 compute-1 podman[293086]: 2025-12-06 07:54:49.575791974 +0000 UTC m=+0.026242224 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:54:49 compute-1 systemd[1]: Started libpod-conmon-17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3.scope.
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.684 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:49 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:54:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22ffe312685f84fb7d7aba40651ebb4ea70a24156ca9b88bc9c8530786441471/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:49 compute-1 podman[293086]: 2025-12-06 07:54:49.71532619 +0000 UTC m=+0.165776460 container init 17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:54:49 compute-1 podman[293086]: 2025-12-06 07:54:49.721379912 +0000 UTC m=+0.171830132 container start 17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:54:49 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293141]: [NOTICE]   (293167) : New worker (293173) forked
Dec 06 07:54:49 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293141]: [NOTICE]   (293167) : Loading success.
Dec 06 07:54:49 compute-1 sudo[293145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:49 compute-1 sudo[293145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:49 compute-1 sudo[293145]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.784 226109 DEBUG nova.compute.manager [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.785 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007689.7850025, 563d5f63-1ecc-4c65-855c-e375f6f97f29 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.785 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] VM Started (Lifecycle Event)
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.788 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.791 226109 INFO nova.virt.libvirt.driver [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance spawned successfully.
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.791 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:54:49 compute-1 sudo[293183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:54:49 compute-1 sudo[293183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:49 compute-1 sudo[293183]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.810 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.816 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.821 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.822 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.822 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.823 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.823 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.824 226109 DEBUG nova.virt.libvirt.driver [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.852 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.853 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007689.7851763, 563d5f63-1ecc-4c65-855c-e375f6f97f29 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.853 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] VM Paused (Lifecycle Event)
Dec 06 07:54:49 compute-1 sudo[293208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:49 compute-1 sudo[293208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:49 compute-1 sudo[293208]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.881 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.884 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007689.7875724, 563d5f63-1ecc-4c65-855c-e375f6f97f29 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.885 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] VM Resumed (Lifecycle Event)
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.901 226109 INFO nova.compute.manager [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Took 6.80 seconds to spawn the instance on the hypervisor.
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.902 226109 DEBUG nova.compute.manager [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.903 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.908 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:54:49 compute-1 sudo[293233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:54:49 compute-1 sudo[293233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.946 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.972 226109 INFO nova.compute.manager [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Took 7.85 seconds to build instance.
Dec 06 07:54:49 compute-1 nova_compute[226101]: 2025-12-06 07:54:49.987 226109 DEBUG oslo_concurrency.lockutils [None req-7d3c2506-badf-45ae-854a-14cf2549e90d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
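The Acquiring/acquired/released triplets around "563d5f63-...-events" and the build lock above are oslo.concurrency's standard lock tracing; each triple marks one critical section. For reference, the same pattern in application code (the lock name here is just the one from the log):

    from oslo_concurrency import lockutils

    # Emits the same DEBUG acquire/release lines seen above.
    with lockutils.lock('563d5f63-1ecc-4c65-855c-e375f6f97f29-events'):
        pass  # pop or queue an instance event while holding the per-instance lock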
Dec 06 07:54:50 compute-1 sudo[293233]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:50 compute-1 ceph-mon[81689]: pgmap v3049: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Dec 06 07:54:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:50.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:51 compute-1 nova_compute[226101]: 2025-12-06 07:54:51.338 226109 DEBUG nova.compute.manager [req-102a6f74-e522-4335-a5ba-877834310cf3 req-4a6c13c9-e587-4f53-8677-ff9d38d6a1a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:54:51 compute-1 nova_compute[226101]: 2025-12-06 07:54:51.338 226109 DEBUG oslo_concurrency.lockutils [req-102a6f74-e522-4335-a5ba-877834310cf3 req-4a6c13c9-e587-4f53-8677-ff9d38d6a1a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:51 compute-1 nova_compute[226101]: 2025-12-06 07:54:51.339 226109 DEBUG oslo_concurrency.lockutils [req-102a6f74-e522-4335-a5ba-877834310cf3 req-4a6c13c9-e587-4f53-8677-ff9d38d6a1a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:51 compute-1 nova_compute[226101]: 2025-12-06 07:54:51.339 226109 DEBUG oslo_concurrency.lockutils [req-102a6f74-e522-4335-a5ba-877834310cf3 req-4a6c13c9-e587-4f53-8677-ff9d38d6a1a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:51 compute-1 nova_compute[226101]: 2025-12-06 07:54:51.339 226109 DEBUG nova.compute.manager [req-102a6f74-e522-4335-a5ba-877834310cf3 req-4a6c13c9-e587-4f53-8677-ff9d38d6a1a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] No waiting events found dispatching network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:54:51 compute-1 nova_compute[226101]: 2025-12-06 07:54:51.339 226109 WARNING nova.compute.manager [req-102a6f74-e522-4335-a5ba-877834310cf3 req-4a6c13c9-e587-4f53-8677-ff9d38d6a1a4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received unexpected event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for instance with vm_state active and task_state None.
Dec 06 07:54:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:54:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:54:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:54:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:54:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:54:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:54:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:51.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:52 compute-1 nova_compute[226101]: 2025-12-06 07:54:52.408 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:52 compute-1 NetworkManager[49031]: <info>  [1765007692.4095] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Dec 06 07:54:52 compute-1 NetworkManager[49031]: <info>  [1765007692.4105] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Dec 06 07:54:52 compute-1 nova_compute[226101]: 2025-12-06 07:54:52.506 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:52 compute-1 ceph-mon[81689]: pgmap v3050: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 06 07:54:52 compute-1 ovn_controller[130279]: 2025-12-06T07:54:52Z|00695|binding|INFO|Releasing lport 904065d3-3080-49e2-8707-2794a4ba4e6e from this chassis (sb_readonly=0)
Dec 06 07:54:52 compute-1 nova_compute[226101]: 2025-12-06 07:54:52.518 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:52 compute-1 nova_compute[226101]: 2025-12-06 07:54:52.765 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:52 compute-1 nova_compute[226101]: 2025-12-06 07:54:52.810 226109 DEBUG nova.compute.manager [req-e29aab32-f846-41db-98dd-800e266fe106 req-38da57c5-b7da-44d3-a874-b8a6bc560cb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-changed-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:54:52 compute-1 nova_compute[226101]: 2025-12-06 07:54:52.811 226109 DEBUG nova.compute.manager [req-e29aab32-f846-41db-98dd-800e266fe106 req-38da57c5-b7da-44d3-a874-b8a6bc560cb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Refreshing instance network info cache due to event network-changed-73cf92a3-d7ff-4ec0-950c-c0c31afd8004. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:54:52 compute-1 nova_compute[226101]: 2025-12-06 07:54:52.811 226109 DEBUG oslo_concurrency.lockutils [req-e29aab32-f846-41db-98dd-800e266fe106 req-38da57c5-b7da-44d3-a874-b8a6bc560cb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:54:52 compute-1 nova_compute[226101]: 2025-12-06 07:54:52.811 226109 DEBUG oslo_concurrency.lockutils [req-e29aab32-f846-41db-98dd-800e266fe106 req-38da57c5-b7da-44d3-a874-b8a6bc560cb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:54:52 compute-1 nova_compute[226101]: 2025-12-06 07:54:52.812 226109 DEBUG nova.network.neutron [req-e29aab32-f846-41db-98dd-800e266fe106 req-38da57c5-b7da-44d3-a874-b8a6bc560cb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Refreshing network info cache for port 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:54:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:52.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:53.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:54 compute-1 ceph-mon[81689]: pgmap v3051: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1019 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Dec 06 07:54:54 compute-1 nova_compute[226101]: 2025-12-06 07:54:54.658 226109 DEBUG nova.network.neutron [req-e29aab32-f846-41db-98dd-800e266fe106 req-38da57c5-b7da-44d3-a874-b8a6bc560cb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updated VIF entry in instance network info cache for port 73cf92a3-d7ff-4ec0-950c-c0c31afd8004. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:54:54 compute-1 nova_compute[226101]: 2025-12-06 07:54:54.659 226109 DEBUG nova.network.neutron [req-e29aab32-f846-41db-98dd-800e266fe106 req-38da57c5-b7da-44d3-a874-b8a6bc560cb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updating instance_info_cache with network_info: [{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
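The network_info payload Nova logs above is plain JSON, so the cached addresses for port 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 can be recovered directly. A short sketch; network_info_json below is a trimmed stand-in for the full bracketed payload:

    import json

    network_info_json = '''[{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.5",
            "floating_ips": [{"address": "192.168.122.172"}]}]}]}}]'''

    for vif in json.loads(network_info_json):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print('fixed:', ip['address'])            # 10.100.0.5
                for fip in ip.get('floating_ips', []):
                    print('floating:', fip['address'])    # 192.168.122.172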
Dec 06 07:54:54 compute-1 nova_compute[226101]: 2025-12-06 07:54:54.684 226109 DEBUG oslo_concurrency.lockutils [req-e29aab32-f846-41db-98dd-800e266fe106 req-38da57c5-b7da-44d3-a874-b8a6bc560cb4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:54:54 compute-1 nova_compute[226101]: 2025-12-06 07:54:54.736 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:54.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:55 compute-1 podman[293290]: 2025-12-06 07:54:55.107385282 +0000 UTC m=+0.078829382 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true)
Dec 06 07:54:55 compute-1 podman[293291]: 2025-12-06 07:54:55.127072659 +0000 UTC m=+0.097523443 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:54:55 compute-1 podman[293292]: 2025-12-06 07:54:55.157466782 +0000 UTC m=+0.124847373 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
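The three podman events above are periodic healthcheck runs for the multipathd, ovn_metadata_agent and ovn_controller containers, each reporting health_status=healthy. The same state can be read back with podman inspect; a sketch, assuming the container name from the log (the JSON key for health data varies by podman version):

    import json
    import subprocess

    out = subprocess.run(['podman', 'inspect', 'ovn_metadata_agent'],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]['State']
    # Current podman exposes 'Health'; some older releases used 'Healthcheck'.
    print(state.get('Health', state.get('Healthcheck'))['Status'])  # 'healthy' above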
Dec 06 07:54:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:55.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:56 compute-1 nova_compute[226101]: 2025-12-06 07:54:56.071 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:56 compute-1 sudo[293352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:56 compute-1 sudo[293352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:56 compute-1 sudo[293352]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:56 compute-1 sudo[293377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:54:56 compute-1 sudo[293377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:56 compute-1 sudo[293377]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:56.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:57 compute-1 ceph-mon[81689]: pgmap v3052: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Dec 06 07:54:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:54:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:54:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:57.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:57 compute-1 nova_compute[226101]: 2025-12-06 07:54:57.768 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:58.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:59 compute-1 ceph-mon[81689]: pgmap v3053: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 119 op/s
Dec 06 07:54:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:54:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:59.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:59 compute-1 nova_compute[226101]: 2025-12-06 07:54:59.738 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:00.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:00 compute-1 nova_compute[226101]: 2025-12-06 07:55:00.949 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:01 compute-1 ceph-mon[81689]: pgmap v3054: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 409 KiB/s wr, 96 op/s
Dec 06 07:55:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:01.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:01.675 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:01.676 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:01.676 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:02 compute-1 ceph-mon[81689]: pgmap v3055: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 91 op/s
Dec 06 07:55:02 compute-1 ovn_controller[130279]: 2025-12-06T07:55:02Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:20:98 10.100.0.5
Dec 06 07:55:02 compute-1 ovn_controller[130279]: 2025-12-06T07:55:02Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:20:98 10.100.0.5
Dec 06 07:55:02 compute-1 nova_compute[226101]: 2025-12-06 07:55:02.811 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:02.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:55:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:03.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:55:04 compute-1 ceph-mon[81689]: pgmap v3056: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 988 KiB/s rd, 85 B/s wr, 38 op/s
Dec 06 07:55:04 compute-1 nova_compute[226101]: 2025-12-06 07:55:04.738 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:04.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:05.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:06 compute-1 ceph-mon[81689]: pgmap v3057: 305 pgs: 305 active+clean; 186 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 70 op/s
Dec 06 07:55:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:06.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:07.051 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:55:07 compute-1 nova_compute[226101]: 2025-12-06 07:55:07.051 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:07.052 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:55:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:07.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:07 compute-1 nova_compute[226101]: 2025-12-06 07:55:07.813 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:08.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:09 compute-1 ceph-mon[81689]: pgmap v3058: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 442 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 06 07:55:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:55:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548726769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:55:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:55:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548726769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:55:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:09.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:09 compute-1 nova_compute[226101]: 2025-12-06 07:55:09.740 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:10 compute-1 nova_compute[226101]: 2025-12-06 07:55:10.174 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:10.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/548726769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:55:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/548726769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:55:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:11.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:11 compute-1 ceph-mon[81689]: pgmap v3059: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.357 226109 DEBUG oslo_concurrency.lockutils [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.358 226109 DEBUG oslo_concurrency.lockutils [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.374 226109 DEBUG nova.objects.instance [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.423 226109 DEBUG oslo_concurrency.lockutils [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.635 226109 DEBUG oslo_concurrency.lockutils [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.635 226109 DEBUG oslo_concurrency.lockutils [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.635 226109 INFO nova.compute.manager [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Attaching volume 4740f75b-6222-44e7-8496-fed71a346d64 to /dev/vdb
Dec 06 07:55:12 compute-1 ceph-mon[81689]: pgmap v3060: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:55:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/264023895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.781 226109 DEBUG os_brick.utils [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.782 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.793 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.794 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[a80ef210-22af-47ab-9f41-4b4de41cfe18]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.795 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.802 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.803 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb6df93-5e6f-4f20-a6a2-eeb2ecb3c527]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.804 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.811 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.811 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[022737a4-4a12-4322-8a2f-98bba6d00044]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.812 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0721ba-0f3e-4586-a83c-344729a3a730]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.812 226109 DEBUG oslo_concurrency.processutils [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.837 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.839 226109 DEBUG oslo_concurrency.processutils [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.841 226109 DEBUG os_brick.initiator.connectors.lightos [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.841 226109 DEBUG os_brick.initiator.connectors.lightos [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.841 226109 DEBUG os_brick.initiator.connectors.lightos [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.841 226109 DEBUG os_brick.utils [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:55:12 compute-1 nova_compute[226101]: 2025-12-06 07:55:12.842 226109 DEBUG nova.virt.block_device [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updating existing volume attachment record: 265aadc4-cb45-4d0e-961a-0c69172d5d36 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:55:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:12.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:13.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:13 compute-1 nova_compute[226101]: 2025-12-06 07:55:13.664 226109 DEBUG nova.objects.instance [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:13 compute-1 nova_compute[226101]: 2025-12-06 07:55:13.709 226109 DEBUG nova.virt.libvirt.driver [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Attempting to attach volume 4740f75b-6222-44e7-8496-fed71a346d64 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:55:13 compute-1 nova_compute[226101]: 2025-12-06 07:55:13.711 226109 DEBUG nova.virt.libvirt.guest [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:55:13 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:55:13 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-4740f75b-6222-44e7-8496-fed71a346d64">
Dec 06 07:55:13 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:55:13 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:55:13 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:55:13 compute-1 nova_compute[226101]:   </source>
Dec 06 07:55:13 compute-1 nova_compute[226101]:   <auth username="openstack">
Dec 06 07:55:13 compute-1 nova_compute[226101]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:55:13 compute-1 nova_compute[226101]:   </auth>
Dec 06 07:55:13 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:55:13 compute-1 nova_compute[226101]:   <serial>4740f75b-6222-44e7-8496-fed71a346d64</serial>
Dec 06 07:55:13 compute-1 nova_compute[226101]: </disk>
Dec 06 07:55:13 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:55:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3405073010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:14 compute-1 nova_compute[226101]: 2025-12-06 07:55:14.219 226109 DEBUG nova.virt.libvirt.driver [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:55:14 compute-1 nova_compute[226101]: 2025-12-06 07:55:14.219 226109 DEBUG nova.virt.libvirt.driver [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:55:14 compute-1 nova_compute[226101]: 2025-12-06 07:55:14.220 226109 DEBUG nova.virt.libvirt.driver [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:55:14 compute-1 nova_compute[226101]: 2025-12-06 07:55:14.220 226109 DEBUG nova.virt.libvirt.driver [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No VIF found with MAC fa:16:3e:d6:20:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:55:14 compute-1 nova_compute[226101]: 2025-12-06 07:55:14.741 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:14 compute-1 nova_compute[226101]: 2025-12-06 07:55:14.819 226109 DEBUG oslo_concurrency.lockutils [None req-7482f409-838e-4bef-826b-ab2b14204b5a 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:14.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:14 compute-1 ceph-mon[81689]: pgmap v3061: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:55:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:15.054 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:15 compute-1 nova_compute[226101]: 2025-12-06 07:55:15.343 226109 DEBUG oslo_concurrency.lockutils [None req-b1ca365f-cf91-486e-9ee0-82ff2aa644fc 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:15 compute-1 nova_compute[226101]: 2025-12-06 07:55:15.344 226109 DEBUG oslo_concurrency.lockutils [None req-b1ca365f-cf91-486e-9ee0-82ff2aa644fc 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:15 compute-1 nova_compute[226101]: 2025-12-06 07:55:15.344 226109 DEBUG nova.compute.manager [None req-b1ca365f-cf91-486e-9ee0-82ff2aa644fc 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:15 compute-1 nova_compute[226101]: 2025-12-06 07:55:15.349 226109 DEBUG nova.compute.manager [None req-b1ca365f-cf91-486e-9ee0-82ff2aa644fc 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 06 07:55:15 compute-1 nova_compute[226101]: 2025-12-06 07:55:15.350 226109 DEBUG nova.objects.instance [None req-b1ca365f-cf91-486e-9ee0-82ff2aa644fc 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:15 compute-1 nova_compute[226101]: 2025-12-06 07:55:15.375 226109 DEBUG nova.virt.libvirt.driver [None req-b1ca365f-cf91-486e-9ee0-82ff2aa644fc 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:55:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:15.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:16.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:16 compute-1 ceph-mon[81689]: pgmap v3062: 305 pgs: 305 active+clean; 231 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 3.5 MiB/s wr, 81 op/s
Dec 06 07:55:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:17.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:17 compute-1 kernel: tap73cf92a3-d7 (unregistering): left promiscuous mode
Dec 06 07:55:17 compute-1 NetworkManager[49031]: <info>  [1765007717.6397] device (tap73cf92a3-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:55:17 compute-1 ovn_controller[130279]: 2025-12-06T07:55:17Z|00696|binding|INFO|Releasing lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 from this chassis (sb_readonly=0)
Dec 06 07:55:17 compute-1 ovn_controller[130279]: 2025-12-06T07:55:17Z|00697|binding|INFO|Setting lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 down in Southbound
Dec 06 07:55:17 compute-1 ovn_controller[130279]: 2025-12-06T07:55:17Z|00698|binding|INFO|Removing iface tap73cf92a3-d7 ovn-installed in OVS
Dec 06 07:55:17 compute-1 nova_compute[226101]: 2025-12-06 07:55:17.692 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:17 compute-1 nova_compute[226101]: 2025-12-06 07:55:17.710 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:17.729 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:20:98 10.100.0.5'], port_security=['fa:16:3e:d6:20:98 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '563d5f63-1ecc-4c65-855c-e375f6f97f29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997afd36-d3a2-430f-ba34-f342135a9bb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63df107b8bd14504974c75ba92ae469b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '88b6dfa1-842c-4e82-9a0e-5e2b4d8b3cb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2999ae76-b414-45fb-8813-4039468da309, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=73cf92a3-d7ff-4ec0-950c-c0c31afd8004) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:55:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:17.730 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 in datapath 997afd36-d3a2-430f-ba34-f342135a9bb6 unbound from our chassis
Dec 06 07:55:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:17.732 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 997afd36-d3a2-430f-ba34-f342135a9bb6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:55:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:17.733 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[04ea6773-04f7-4696-89b7-024217cd758c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:17.734 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 namespace which is not needed anymore
Dec 06 07:55:17 compute-1 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000ad.scope: Deactivated successfully.
Dec 06 07:55:17 compute-1 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000ad.scope: Consumed 14.368s CPU time.
Dec 06 07:55:17 compute-1 systemd-machined[190302]: Machine qemu-80-instance-000000ad terminated.
Dec 06 07:55:17 compute-1 nova_compute[226101]: 2025-12-06 07:55:17.839 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:17 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293141]: [NOTICE]   (293167) : haproxy version is 2.8.14-c23fe91
Dec 06 07:55:17 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293141]: [NOTICE]   (293167) : path to executable is /usr/sbin/haproxy
Dec 06 07:55:17 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293141]: [WARNING]  (293167) : Exiting Master process...
Dec 06 07:55:17 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293141]: [ALERT]    (293167) : Current worker (293173) exited with code 143 (Terminated)
Dec 06 07:55:17 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293141]: [WARNING]  (293167) : All workers exited. Exiting... (0)
Dec 06 07:55:17 compute-1 systemd[1]: libpod-17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3.scope: Deactivated successfully.
Dec 06 07:55:17 compute-1 conmon[293141]: conmon 17209db3b153ef7cfbb2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3.scope/container/memory.events
Dec 06 07:55:17 compute-1 podman[293454]: 2025-12-06 07:55:17.860529709 +0000 UTC m=+0.040355811 container died 17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:55:17 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3-userdata-shm.mount: Deactivated successfully.
Dec 06 07:55:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-22ffe312685f84fb7d7aba40651ebb4ea70a24156ca9b88bc9c8530786441471-merged.mount: Deactivated successfully.
Dec 06 07:55:17 compute-1 podman[293454]: 2025-12-06 07:55:17.903317215 +0000 UTC m=+0.083143317 container cleanup 17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 07:55:17 compute-1 nova_compute[226101]: 2025-12-06 07:55:17.909 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:17 compute-1 nova_compute[226101]: 2025-12-06 07:55:17.914 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:17 compute-1 systemd[1]: libpod-conmon-17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3.scope: Deactivated successfully.
Dec 06 07:55:17 compute-1 podman[293488]: 2025-12-06 07:55:17.977551783 +0000 UTC m=+0.050727609 container remove 17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:55:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1464405074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2755240412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:17.985 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c46ea3f2-6b1b-4042-99ab-f23c0233dfd3]: (4, ('Sat Dec  6 07:55:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 (17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3)\n17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3\nSat Dec  6 07:55:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 (17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3)\n17209db3b153ef7cfbb2761ccf9e79bd989a9779a01ae6222dfad5619c81f2a3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:17.987 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ce52d4b9-57ae-4b66-8fbd-6402f0458d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:17.988 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997afd36-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:17 compute-1 nova_compute[226101]: 2025-12-06 07:55:17.989 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:17 compute-1 kernel: tap997afd36-d0: left promiscuous mode
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.007 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:18.008 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3a847406-e86a-479a-b13c-00c0644e255f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:18.035 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bb10445b-8aeb-498a-9d43-2cccc8c789de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:18.036 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5d855486-7de6-48ea-95cd-c517d58e64b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:18.052 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4b51d6-8ffd-4d05-add9-e0f71461c516]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796854, 'reachable_time': 33119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293509, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:18 compute-1 systemd[1]: run-netns-ovnmeta\x2d997afd36\x2dd3a2\x2d430f\x2dba34\x2df342135a9bb6.mount: Deactivated successfully.
Dec 06 07:55:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:18.055 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:55:18 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:18.055 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[e6418f2c-dc05-4664-b600-21946d5c9be6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.142 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.220 226109 DEBUG nova.compute.manager [req-16fed1f5-7240-4b14-bf24-5aa7a2e79aee req-e32af5fd-15cb-41c3-bd15-9eea4d982430 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-unplugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.220 226109 DEBUG oslo_concurrency.lockutils [req-16fed1f5-7240-4b14-bf24-5aa7a2e79aee req-e32af5fd-15cb-41c3-bd15-9eea4d982430 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.221 226109 DEBUG oslo_concurrency.lockutils [req-16fed1f5-7240-4b14-bf24-5aa7a2e79aee req-e32af5fd-15cb-41c3-bd15-9eea4d982430 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.221 226109 DEBUG oslo_concurrency.lockutils [req-16fed1f5-7240-4b14-bf24-5aa7a2e79aee req-e32af5fd-15cb-41c3-bd15-9eea4d982430 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.221 226109 DEBUG nova.compute.manager [req-16fed1f5-7240-4b14-bf24-5aa7a2e79aee req-e32af5fd-15cb-41c3-bd15-9eea4d982430 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] No waiting events found dispatching network-vif-unplugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.221 226109 WARNING nova.compute.manager [req-16fed1f5-7240-4b14-bf24-5aa7a2e79aee req-e32af5fd-15cb-41c3-bd15-9eea4d982430 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received unexpected event network-vif-unplugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for instance with vm_state active and task_state powering-off.
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.389 226109 INFO nova.virt.libvirt.driver [None req-b1ca365f-cf91-486e-9ee0-82ff2aa644fc 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance shutdown successfully after 3 seconds.
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.393 226109 INFO nova.virt.libvirt.driver [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance destroyed successfully.
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.394 226109 DEBUG nova.objects.instance [None req-b1ca365f-cf91-486e-9ee0-82ff2aa644fc 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'numa_topology' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.408 226109 DEBUG nova.compute.manager [None req-b1ca365f-cf91-486e-9ee0-82ff2aa644fc 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:18 compute-1 nova_compute[226101]: 2025-12-06 07:55:18.452 226109 DEBUG oslo_concurrency.lockutils [None req-b1ca365f-cf91-486e-9ee0-82ff2aa644fc 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:18.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:19 compute-1 ceph-mon[81689]: pgmap v3063: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 128 KiB/s rd, 2.6 MiB/s wr, 60 op/s
Dec 06 07:55:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:19.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:19 compute-1 nova_compute[226101]: 2025-12-06 07:55:19.744 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:20 compute-1 nova_compute[226101]: 2025-12-06 07:55:20.430 226109 DEBUG nova.compute.manager [req-d4268b93-1034-4f8c-ab0f-12977d43e237 req-cfd968e9-9a12-453e-86d4-7d40d5f6491c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:55:20 compute-1 nova_compute[226101]: 2025-12-06 07:55:20.430 226109 DEBUG oslo_concurrency.lockutils [req-d4268b93-1034-4f8c-ab0f-12977d43e237 req-cfd968e9-9a12-453e-86d4-7d40d5f6491c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:20 compute-1 nova_compute[226101]: 2025-12-06 07:55:20.430 226109 DEBUG oslo_concurrency.lockutils [req-d4268b93-1034-4f8c-ab0f-12977d43e237 req-cfd968e9-9a12-453e-86d4-7d40d5f6491c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:20 compute-1 nova_compute[226101]: 2025-12-06 07:55:20.431 226109 DEBUG oslo_concurrency.lockutils [req-d4268b93-1034-4f8c-ab0f-12977d43e237 req-cfd968e9-9a12-453e-86d4-7d40d5f6491c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:20 compute-1 nova_compute[226101]: 2025-12-06 07:55:20.431 226109 DEBUG nova.compute.manager [req-d4268b93-1034-4f8c-ab0f-12977d43e237 req-cfd968e9-9a12-453e-86d4-7d40d5f6491c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] No waiting events found dispatching network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:55:20 compute-1 nova_compute[226101]: 2025-12-06 07:55:20.431 226109 WARNING nova.compute.manager [req-d4268b93-1034-4f8c-ab0f-12977d43e237 req-cfd968e9-9a12-453e-86d4-7d40d5f6491c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received unexpected event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for instance with vm_state stopped and task_state None.
Dec 06 07:55:20 compute-1 nova_compute[226101]: 2025-12-06 07:55:20.805 226109 DEBUG nova.objects.instance [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:20 compute-1 nova_compute[226101]: 2025-12-06 07:55:20.842 226109 DEBUG oslo_concurrency.lockutils [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:55:20 compute-1 nova_compute[226101]: 2025-12-06 07:55:20.842 226109 DEBUG oslo_concurrency.lockutils [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquired lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:55:20 compute-1 nova_compute[226101]: 2025-12-06 07:55:20.843 226109 DEBUG nova.network.neutron [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:55:20 compute-1 nova_compute[226101]: 2025-12-06 07:55:20.843 226109 DEBUG nova.objects.instance [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'info_cache' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:20.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:21 compute-1 ceph-mon[81689]: pgmap v3064: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 07:55:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:21.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:22 compute-1 nova_compute[226101]: 2025-12-06 07:55:22.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:22 compute-1 nova_compute[226101]: 2025-12-06 07:55:22.615 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:22 compute-1 nova_compute[226101]: 2025-12-06 07:55:22.615 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:22 compute-1 nova_compute[226101]: 2025-12-06 07:55:22.615 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:22 compute-1 nova_compute[226101]: 2025-12-06 07:55:22.615 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:55:22 compute-1 nova_compute[226101]: 2025-12-06 07:55:22.616 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:22 compute-1 nova_compute[226101]: 2025-12-06 07:55:22.843 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:22.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.011 226109 DEBUG nova.network.neutron [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updating instance_info_cache with network_info: [{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:55:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:55:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1818746968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.022 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.035 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.042 226109 DEBUG oslo_concurrency.lockutils [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Releasing lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:55:23 compute-1 ceph-mon[81689]: pgmap v3065: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Dec 06 07:55:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1818746968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.072 226109 INFO nova.virt.libvirt.driver [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance destroyed successfully.
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.073 226109 DEBUG nova.objects.instance [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'numa_topology' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.089 226109 DEBUG nova.objects.instance [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'resources' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.104 226109 DEBUG nova.virt.libvirt.vif [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1295588604',display_name='tempest-AttachVolumeTestJSON-server-1295588604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1295588604',id=173,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkZWnDd2ourzjHlcEK7cHRdQV+nbLAhyX6fgrqVnvZqgS31pND6qBaKlg3yv3dw5xYj9PWy81LB50kUPIrMKeW7VSpCHVdreKc29y6jmgbqwFAxDIw3ZAxRTEFcoovdYA==',key_name='tempest-keypair-1313997358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:54:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-dfyu9z67',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:55:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=563d5f63-1ecc-4c65-855c-e375f6f97f29,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.104 226109 DEBUG nova.network.os_vif_util [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.105 226109 DEBUG nova.network.os_vif_util [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.105 226109 DEBUG os_vif [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.106 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.107 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73cf92a3-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.109 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.111 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.113 226109 INFO os_vif [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7')
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.115 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.116 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.116 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.126 226109 DEBUG nova.virt.libvirt.driver [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Start _get_guest_xml network_info=[{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4740f75b-6222-44e7-8496-fed71a346d64', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4740f75b-6222-44e7-8496-fed71a346d64', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '563d5f63-1ecc-4c65-855c-e375f6f97f29', 'attached_at': '', 'detached_at': '', 'volume_id': '4740f75b-6222-44e7-8496-fed71a346d64', 'serial': '4740f75b-6222-44e7-8496-fed71a346d64'}, 'mount_device': '/dev/vdb', 'boot_index': None, 'attachment_id': '265aadc4-cb45-4d0e-961a-0c69172d5d36', 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.131 226109 WARNING nova.virt.libvirt.driver [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.136 226109 DEBUG nova.virt.libvirt.host [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.137 226109 DEBUG nova.virt.libvirt.host [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.140 226109 DEBUG nova.virt.libvirt.host [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.140 226109 DEBUG nova.virt.libvirt.host [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.141 226109 DEBUG nova.virt.libvirt.driver [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.141 226109 DEBUG nova.virt.hardware [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.142 226109 DEBUG nova.virt.hardware [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.142 226109 DEBUG nova.virt.hardware [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.142 226109 DEBUG nova.virt.hardware [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.143 226109 DEBUG nova.virt.hardware [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.143 226109 DEBUG nova.virt.hardware [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.143 226109 DEBUG nova.virt.hardware [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.143 226109 DEBUG nova.virt.hardware [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.144 226109 DEBUG nova.virt.hardware [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.144 226109 DEBUG nova.virt.hardware [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.144 226109 DEBUG nova.virt.hardware [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.144 226109 DEBUG nova.objects.instance [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.160 226109 DEBUG oslo_concurrency.processutils [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.296 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.297 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4352MB free_disk=20.92172622680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.297 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.297 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.373 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 563d5f63-1ecc-4c65-855c-e375f6f97f29 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.374 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.374 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.422 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:55:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1116877406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:23.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.610 226109 DEBUG oslo_concurrency.processutils [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.653 226109 DEBUG oslo_concurrency.processutils [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:55:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2797911486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.882 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.890 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.906 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.932 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:55:23 compute-1 nova_compute[226101]: 2025-12-06 07:55:23.933 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:55:24 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3794574650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.076 226109 DEBUG oslo_concurrency.processutils [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1116877406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2797911486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3794574650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.104 226109 DEBUG nova.virt.libvirt.vif [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1295588604',display_name='tempest-AttachVolumeTestJSON-server-1295588604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1295588604',id=173,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkZWnDd2ourzjHlcEK7cHRdQV+nbLAhyX6fgrqVnvZqgS31pND6qBaKlg3yv3dw5xYj9PWy81LB50kUPIrMKeW7VSpCHVdreKc29y6jmgbqwFAxDIw3ZAxRTEFcoovdYA==',key_name='tempest-keypair-1313997358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:54:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-dfyu9z67',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:55:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=563d5f63-1ecc-4c65-855c-e375f6f97f29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.105 226109 DEBUG nova.network.os_vif_util [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.105 226109 DEBUG nova.network.os_vif_util [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.107 226109 DEBUG nova.objects.instance [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'pci_devices' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.121 226109 DEBUG nova.virt.libvirt.driver [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:55:24 compute-1 nova_compute[226101]:   <uuid>563d5f63-1ecc-4c65-855c-e375f6f97f29</uuid>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   <name>instance-000000ad</name>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <nova:name>tempest-AttachVolumeTestJSON-server-1295588604</nova:name>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:55:23</nova:creationTime>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <nova:user uuid="0ce6d0a8def6432aa60891ea00ef9d8b">tempest-AttachVolumeTestJSON-950214889-project-member</nova:user>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <nova:project uuid="63df107b8bd14504974c75ba92ae469b">tempest-AttachVolumeTestJSON-950214889</nova:project>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <nova:port uuid="73cf92a3-d7ff-4ec0-950c-c0c31afd8004">
Dec 06 07:55:24 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <system>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <entry name="serial">563d5f63-1ecc-4c65-855c-e375f6f97f29</entry>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <entry name="uuid">563d5f63-1ecc-4c65-855c-e375f6f97f29</entry>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     </system>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   <os>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   </os>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   <features>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   </features>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/563d5f63-1ecc-4c65-855c-e375f6f97f29_disk">
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       </source>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/563d5f63-1ecc-4c65-855c-e375f6f97f29_disk.config">
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       </source>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <source protocol="rbd" name="volumes/volume-4740f75b-6222-44e7-8496-fed71a346d64">
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       </source>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:55:24 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <target dev="vdb" bus="virtio"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <serial>4740f75b-6222-44e7-8496-fed71a346d64</serial>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:d6:20:98"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <target dev="tap73cf92a3-d7"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29/console.log" append="off"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <video>
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     </video>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <input type="keyboard" bus="usb"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:55:24 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:55:24 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:55:24 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:55:24 compute-1 nova_compute[226101]: </domain>
Dec 06 07:55:24 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.123 226109 DEBUG nova.virt.libvirt.driver [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.124 226109 DEBUG nova.virt.libvirt.driver [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.124 226109 DEBUG nova.virt.libvirt.driver [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.125 226109 DEBUG nova.virt.libvirt.vif [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1295588604',display_name='tempest-AttachVolumeTestJSON-server-1295588604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1295588604',id=173,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkZWnDd2ourzjHlcEK7cHRdQV+nbLAhyX6fgrqVnvZqgS31pND6qBaKlg3yv3dw5xYj9PWy81LB50kUPIrMKeW7VSpCHVdreKc29y6jmgbqwFAxDIw3ZAxRTEFcoovdYA==',key_name='tempest-keypair-1313997358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:54:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-dfyu9z67',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:55:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=563d5f63-1ecc-4c65-855c-e375f6f97f29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.125 226109 DEBUG nova.network.os_vif_util [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.126 226109 DEBUG nova.network.os_vif_util [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.126 226109 DEBUG os_vif [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.126 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.127 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.127 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.129 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.129 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73cf92a3-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.130 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73cf92a3-d7, col_values=(('external_ids', {'iface-id': '73cf92a3-d7ff-4ec0-950c-c0c31afd8004', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:20:98', 'vm-uuid': '563d5f63-1ecc-4c65-855c-e375f6f97f29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.131 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 NetworkManager[49031]: <info>  [1765007724.1323] manager: (tap73cf92a3-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.134 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.136 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.137 226109 INFO os_vif [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7')
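The plug sequence above is os-vif driving ovsdbapp: an idempotent AddBridgeCommand ("Transaction caused no change" because br-int already exists), then an AddPortCommand plus a DbSetCommand stamping the Interface with the Neutron port's iface-id and MAC. A rough standalone equivalent, assuming ovsdbapp is installed and ovsdb-server listens on the default unix socket (the endpoint is an assumption; port name, iface-id and MAC are copied from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction, mirroring the two logged commands.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap73cf92a3-d7', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap73cf92a3-d7',
            ('external_ids',
             {'iface-id': '73cf92a3-d7ff-4ec0-950c-c0c31afd8004',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:d6:20:98'})))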
Dec 06 07:55:24 compute-1 kernel: tap73cf92a3-d7: entered promiscuous mode
Dec 06 07:55:24 compute-1 NetworkManager[49031]: <info>  [1765007724.1932] manager: (tap73cf92a3-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Dec 06 07:55:24 compute-1 ovn_controller[130279]: 2025-12-06T07:55:24Z|00699|binding|INFO|Claiming lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for this chassis.
Dec 06 07:55:24 compute-1 ovn_controller[130279]: 2025-12-06T07:55:24Z|00700|binding|INFO|73cf92a3-d7ff-4ec0-950c-c0c31afd8004: Claiming fa:16:3e:d6:20:98 10.100.0.5
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.196 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.203 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:20:98 10.100.0.5'], port_security=['fa:16:3e:d6:20:98 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '563d5f63-1ecc-4c65-855c-e375f6f97f29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997afd36-d3a2-430f-ba34-f342135a9bb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63df107b8bd14504974c75ba92ae469b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '88b6dfa1-842c-4e82-9a0e-5e2b4d8b3cb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2999ae76-b414-45fb-8813-4039468da309, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=73cf92a3-d7ff-4ec0-950c-c0c31afd8004) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.204 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 in datapath 997afd36-d3a2-430f-ba34-f342135a9bb6 bound to our chassis
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.206 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:55:24 compute-1 ovn_controller[130279]: 2025-12-06T07:55:24Z|00701|binding|INFO|Setting lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 ovn-installed in OVS
Dec 06 07:55:24 compute-1 ovn_controller[130279]: 2025-12-06T07:55:24Z|00702|binding|INFO|Setting lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 up in Southbound
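Once ovn-controller claims the lport it marks the OVS Interface with external_ids:ovn-installed and flips the Port_Binding to up in the Southbound DB, which is what ultimately triggers the network-vif-plugged notification seen further down. A quick post-hoc check of that flag, sketched with subprocess around a standard ovs-vsctl call (interface name from the log; run on the compute host):

    import subprocess

    def ovn_installed(iface: str) -> bool:
        # ovn-controller sets external_ids:ovn-installed="true" once the
        # binding is fully programmed in OVS.
        out = subprocess.run(
            ['ovs-vsctl', '--if-exists', 'get', 'Interface', iface,
             'external_ids:ovn-installed'],
            capture_output=True, text=True)
        return out.stdout.strip().strip('"') == 'true'

    print(ovn_installed('tap73cf92a3-d7'))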
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.212 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.214 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.217 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a98794-bd4f-42b9-947d-0944557fc3e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.218 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap997afd36-d1 in ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.220 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap997afd36-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.220 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[deaf7176-cd73-47d4-81e2-5ee907fd0fd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 systemd-udevd[293631]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.221 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dc83bef4-b7f2-4126-976a-0bcea2b9cff6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.232 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[020c1cfa-9c3d-4bb4-bdb8-da86f5c9cd79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 NetworkManager[49031]: <info>  [1765007724.2359] device (tap73cf92a3-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:55:24 compute-1 NetworkManager[49031]: <info>  [1765007724.2371] device (tap73cf92a3-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:55:24 compute-1 systemd-machined[190302]: New machine qemu-81-instance-000000ad.
Dec 06 07:55:24 compute-1 systemd[1]: Started Virtual Machine qemu-81-instance-000000ad.
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.254 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[83a995d4-a272-4675-b5ee-60b0ed626f42]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.282 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[46e41a39-a589-4d3f-92e2-de55a077bf76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 NetworkManager[49031]: <info>  [1765007724.2903] manager: (tap997afd36-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/315)
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.289 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6267c6-7c0a-49ab-9cd1-78a8ff8d9b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 systemd-udevd[293637]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.319 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e14ebd64-3542-4f00-8e36-5723beade424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.323 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[3e75382d-4daa-4987-b097-8025d560364c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 NetworkManager[49031]: <info>  [1765007724.3449] device (tap997afd36-d0): carrier: link connected
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.349 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[3fea40ee-1a34-437b-b67c-c35309547955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.367 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6a6ac4-1914-47d5-9871-da9ab31be139]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997afd36-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:8b:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800384, 'reachable_time': 24929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293666, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.379 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ab24e66e-d3ed-48bf-958d-1819bd2e8446]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:8b72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 800384, 'tstamp': 800384}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293667, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.395 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf0251a-40bd-445a-9ad6-459d119a60dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997afd36-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:8b:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800384, 'reachable_time': 24929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293668, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
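The privsep replies above are pyroute2 netlink dumps taken from inside the new ovnmeta- namespace: RTM_NEWLINK records for the veth tap997afd36-d1 (state up, carrier 1, MAC fa:16:3e:ec:8b:72) bracketing an RTM_NEWADDR for its link-local address. A condensed version of the same query, assuming pyroute2 is available and the namespace from the log still exists:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6') as ns:
        # Resolve the veth by name, then read its link attributes.
        idx = ns.link_lookup(ifname='tap997afd36-d1')[0]
        link = ns.get_links(idx)[0]
        print(link.get_attr('IFLA_ADDRESS'),    # fa:16:3e:ec:8b:72 above
              link.get_attr('IFLA_OPERSTATE'),  # UP
              link.get_attr('IFLA_MTU'))        # 1500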
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.419 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9021e5cd-1c13-4608-ac1d-363661dcb886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.471 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f3220966-bd39-403c-8d4e-212898f268ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.472 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997afd36-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.473 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.474 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap997afd36-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.517 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 NetworkManager[49031]: <info>  [1765007724.5177] manager: (tap997afd36-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Dec 06 07:55:24 compute-1 kernel: tap997afd36-d0: entered promiscuous mode
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.520 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap997afd36-d0, col_values=(('external_ids', {'iface-id': '904065d3-3080-49e2-8707-2794a4ba4e6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.522 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 ovn_controller[130279]: 2025-12-06T07:55:24Z|00703|binding|INFO|Releasing lport 904065d3-3080-49e2-8707-2794a4ba4e6e from this chassis (sb_readonly=0)
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.523 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.523 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.523 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7d91ca-a68f-4a1e-a8de-48cb01c0656f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.524 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:55:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:24.525 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'env', 'PROCESS_TAG=haproxy-997afd36-d3a2-430f-ba34-f342135a9bb6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/997afd36-d3a2-430f-ba34-f342135a9bb6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
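The agent renders the haproxy_cfg shown above to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf and launches haproxy in the ovnmeta- namespace through rootwrap. The same file can be syntax-checked without starting anything via haproxy's -c mode; a sketch (namespace and config path from the log, the rootwrap indirection dropped for brevity, so this needs root):

    import subprocess

    ns = 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '997afd36-d3a2-430f-ba34-f342135a9bb6.conf')

    # 'haproxy -c -f <cfg>' only parses and validates the configuration.
    res = subprocess.run(['ip', 'netns', 'exec', ns,
                          'haproxy', '-c', '-f', cfg],
                         capture_output=True, text=True)
    print(res.returncode, (res.stdout or res.stderr).strip())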
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.535 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.659 226109 DEBUG nova.compute.manager [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.660 226109 DEBUG oslo_concurrency.lockutils [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.660 226109 DEBUG oslo_concurrency.lockutils [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.660 226109 DEBUG oslo_concurrency.lockutils [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.661 226109 DEBUG nova.compute.manager [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] No waiting events found dispatching network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.661 226109 WARNING nova.compute.manager [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received unexpected event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for instance with vm_state stopped and task_state powering-on.
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.661 226109 DEBUG nova.compute.manager [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.661 226109 DEBUG oslo_concurrency.lockutils [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.662 226109 DEBUG oslo_concurrency.lockutils [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.662 226109 DEBUG oslo_concurrency.lockutils [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.662 226109 DEBUG nova.compute.manager [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] No waiting events found dispatching network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.662 226109 WARNING nova.compute.manager [req-1d952cb4-8484-47cc-8839-f05d2494c910 req-8bad1f0b-3c2e-420b-969d-cd3a71bec395 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received unexpected event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for instance with vm_state stopped and task_state powering-on.
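The two WARNINGs are expected noise on this path: Neutron re-announces network-vif-plugged when OVN binds the port, but the power-on flow never registered a waiter for that event, so nova pops nothing under the per-instance lock and drops it. The pattern the lockutils acquire/release lines trace is essentially the sketch below (a plain threading.Lock here; nova itself goes through oslo.concurrency):

    import threading
    from collections import defaultdict

    _events = defaultdict(dict)   # instance uuid -> {event name: waiter}
    _lock = threading.Lock()      # the '<uuid>-events' lock in the log

    def pop_instance_event(uuid: str, name: str):
        # Pop a registered waiter if one exists; None means "no waiting
        # events found" and the caller logs the unexpected-event warning.
        with _lock:
            return _events[uuid].pop(name, None)

    ev = 'network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004'
    if pop_instance_event('563d5f63-1ecc-4c65-855c-e375f6f97f29', ev) is None:
        print('no waiting events found; dropping', ev)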
Dec 06 07:55:24 compute-1 nova_compute[226101]: 2025-12-06 07:55:24.746 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:24.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
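The anonymous HEAD / requests radosgw serves every second or two from the 192.168.122.x controllers are load-balancer liveness probes rather than client traffic, which is why they return 200 with near-zero latency. The probe reduces to the following, assuming the beast frontend listens on port 8080 on this host (the port is an assumption; it does not appear in the log):

    import http.client

    conn = http.client.HTTPConnection('127.0.0.1', 8080, timeout=2)
    conn.request('HEAD', '/')          # same anonymous probe as in the log
    print(conn.getresponse().status)   # 200 == the gateway is answering
    conn.close()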
Dec 06 07:55:24 compute-1 podman[293736]: 2025-12-06 07:55:24.850253506 +0000 UTC m=+0.020166980 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:55:25 compute-1 sshd-session[293554]: Received disconnect from 136.112.8.45 port 43322:11: Bye Bye [preauth]
Dec 06 07:55:25 compute-1 sshd-session[293554]: Disconnected from authenticating user root 136.112.8.45 port 43322 [preauth]
Dec 06 07:55:25 compute-1 ceph-mon[81689]: pgmap v3066: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Dec 06 07:55:25 compute-1 podman[293736]: 2025-12-06 07:55:25.300627785 +0000 UTC m=+0.470541229 container create 942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:55:25 compute-1 systemd[1]: Started libpod-conmon-942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67.scope.
Dec 06 07:55:25 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:55:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2085dc81b25e883c54339f0009af0a2a1002ca477a64b7ed7a9a8568188d13e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.437 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 563d5f63-1ecc-4c65-855c-e375f6f97f29 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.438 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007725.436859, 563d5f63-1ecc-4c65-855c-e375f6f97f29 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.439 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] VM Resumed (Lifecycle Event)
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.440 226109 DEBUG nova.compute.manager [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.443 226109 INFO nova.virt.libvirt.driver [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance rebooted successfully.
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.443 226109 DEBUG nova.compute.manager [None req-ddbaa0e7-b1ed-4024-b2d2-e902582022d5 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.466 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.471 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.496 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007725.4383366, 563d5f63-1ecc-4c65-855c-e375f6f97f29 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.498 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] VM Started (Lifecycle Event)
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.516 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:25 compute-1 nova_compute[226101]: 2025-12-06 07:55:25.521 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
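Decoding the numeric states makes the two synchronization lines read cleanly: in nova's power_state module, 1 is RUNNING and 4 is SHUTDOWN, so the "Resumed" sync sees DB=SHUTDOWN against VM=RUNNING (the database has not caught up yet), and by the "Started" sync both sides agree on RUNNING. A tiny reading aid (values as defined in nova.compute.power_state):

    # Numeric power states as they appear in the log lines above.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    for event, db, vm in [('Resumed', 4, 1), ('Started', 1, 1)]:
        print(f"{event}: DB={POWER_STATE[db]} VM={POWER_STATE[vm]}")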
Dec 06 07:55:25 compute-1 podman[293736]: 2025-12-06 07:55:25.532156692 +0000 UTC m=+0.702070146 container init 942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:55:25 compute-1 podman[293767]: 2025-12-06 07:55:25.53356539 +0000 UTC m=+0.194728874 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec 06 07:55:25 compute-1 podman[293736]: 2025-12-06 07:55:25.539525 +0000 UTC m=+0.709438444 container start 942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:55:25 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293787]: [NOTICE]   (293839) : New worker (293842) forked
Dec 06 07:55:25 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293787]: [NOTICE]   (293839) : Loading success.
Dec 06 07:55:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:25.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:25 compute-1 podman[293774]: 2025-12-06 07:55:25.768217312 +0000 UTC m=+0.418253188 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:55:25 compute-1 podman[293773]: 2025-12-06 07:55:25.831148557 +0000 UTC m=+0.489870545 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true)
Dec 06 07:55:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:26 compute-1 ceph-mon[81689]: pgmap v3067: 305 pgs: 305 active+clean; 254 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.5 MiB/s wr, 98 op/s
Dec 06 07:55:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 e400: 3 total, 3 up, 3 in
Dec 06 07:55:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:26.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:27.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:27 compute-1 ceph-mon[81689]: osdmap e400: 3 total, 3 up, 3 in
Dec 06 07:55:27 compute-1 nova_compute[226101]: 2025-12-06 07:55:27.933 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:27 compute-1 nova_compute[226101]: 2025-12-06 07:55:27.934 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:55:27 compute-1 nova_compute[226101]: 2025-12-06 07:55:27.934 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:55:28 compute-1 nova_compute[226101]: 2025-12-06 07:55:28.477 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:55:28 compute-1 nova_compute[226101]: 2025-12-06 07:55:28.478 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:55:28 compute-1 nova_compute[226101]: 2025-12-06 07:55:28.478 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:55:28 compute-1 nova_compute[226101]: 2025-12-06 07:55:28.478 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:28.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:29 compute-1 nova_compute[226101]: 2025-12-06 07:55:29.131 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:29 compute-1 ceph-mon[81689]: pgmap v3069: 305 pgs: 305 active+clean; 254 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 857 KiB/s wr, 113 op/s
Dec 06 07:55:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:29.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:29 compute-1 nova_compute[226101]: 2025-12-06 07:55:29.664 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updating instance_info_cache with network_info: [{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
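
The network_info payload above is plain JSON; a self-contained sketch of walking it to list fixed and floating addresses, trimmed to only the fields used here (values copied from the log line):

    # Shape of the network_info structure nova logs above, reduced to the
    # keys this sketch touches.
    network_info = [{
        "id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004",
        "network": {"subnets": [{
            "ips": [{
                "address": "10.100.0.5",
                "floating_ips": [{"address": "192.168.122.172"}],
            }],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], fips)
    # -> 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 10.100.0.5 ['192.168.122.172']
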
Dec 06 07:55:29 compute-1 nova_compute[226101]: 2025-12-06 07:55:29.690 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:55:29 compute-1 nova_compute[226101]: 2025-12-06 07:55:29.690 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
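
The Acquiring/Acquired/Releasing lines that bracket this heal cycle come from oslo.concurrency's named-lock helper; nova serializes cache refreshes per instance roughly as sketched below (illustrative, with the uuid taken from the log):

    from oslo_concurrency import lockutils

    INSTANCE = "563d5f63-1ecc-4c65-855c-e375f6f97f29"  # uuid from the log

    # lockutils.lock() emits the same Acquiring/Acquired/Releasing DEBUG
    # lines seen above when oslo logging is configured.
    with lockutils.lock("refresh_cache-%s" % INSTANCE):
        pass  # ... force-refresh the instance's network info cache ...
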
Dec 06 07:55:29 compute-1 nova_compute[226101]: 2025-12-06 07:55:29.748 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/791543045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:30 compute-1 ceph-mon[81689]: pgmap v3070: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 06 07:55:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3403846061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:30.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:31 compute-1 nova_compute[226101]: 2025-12-06 07:55:31.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:31 compute-1 nova_compute[226101]: 2025-12-06 07:55:31.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:55:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:31.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:32 compute-1 ceph-mon[81689]: pgmap v3071: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.0 MiB/s wr, 189 op/s
Dec 06 07:55:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:32.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:33.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:34 compute-1 nova_compute[226101]: 2025-12-06 07:55:34.133 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:34 compute-1 nova_compute[226101]: 2025-12-06 07:55:34.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:34 compute-1 nova_compute[226101]: 2025-12-06 07:55:34.749 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:34 compute-1 ceph-mon[81689]: pgmap v3072: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 209 op/s
Dec 06 07:55:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:34.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:35.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:36 compute-1 ceph-mon[81689]: pgmap v3073: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 171 op/s
Dec 06 07:55:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1621257542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:36.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:37 compute-1 nova_compute[226101]: 2025-12-06 07:55:37.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:37.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3361064064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:38 compute-1 ovn_controller[130279]: 2025-12-06T07:55:38Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:20:98 10.100.0.5
Dec 06 07:55:38 compute-1 nova_compute[226101]: 2025-12-06 07:55:38.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:38 compute-1 ceph-mon[81689]: pgmap v3074: 305 pgs: 305 active+clean; 298 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.5 MiB/s wr, 178 op/s
Dec 06 07:55:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:38.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:39 compute-1 nova_compute[226101]: 2025-12-06 07:55:39.190 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:39.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:39 compute-1 nova_compute[226101]: 2025-12-06 07:55:39.751 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:40 compute-1 ceph-mon[81689]: pgmap v3075: 305 pgs: 305 active+clean; 318 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 4.2 MiB/s wr, 182 op/s
Dec 06 07:55:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:40.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:41 compute-1 nova_compute[226101]: 2025-12-06 07:55:41.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:41.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:42 compute-1 ceph-mon[81689]: pgmap v3076: 305 pgs: 305 active+clean; 411 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 6.6 MiB/s wr, 278 op/s
Dec 06 07:55:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:42.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:43 compute-1 nova_compute[226101]: 2025-12-06 07:55:43.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:43.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1319919641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:44 compute-1 nova_compute[226101]: 2025-12-06 07:55:44.192 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:44 compute-1 nova_compute[226101]: 2025-12-06 07:55:44.705 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:44.707 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:55:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:44.709 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:55:44 compute-1 nova_compute[226101]: 2025-12-06 07:55:44.753 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:44.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:45 compute-1 ceph-mon[81689]: pgmap v3077: 305 pgs: 305 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.5 MiB/s wr, 235 op/s
Dec 06 07:55:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/272212823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:45 compute-1 nova_compute[226101]: 2025-12-06 07:55:45.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:45.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:46 compute-1 ceph-mon[81689]: pgmap v3078: 305 pgs: 305 active+clean; 409 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.6 MiB/s wr, 234 op/s
Dec 06 07:55:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:46.711 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
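
Together with the SbGlobalUpdateEvent at 07:55:44, this transaction shows the nb_cfg acknowledgement loop: SB_Global.nb_cfg moved 70 -> 71, the agent waited its 2-second delay, then recorded the processed value in its own Chassis_Private row. A sketch of that write, assuming sb_api is an ovsdbapp OVN southbound API handle (connection setup is elided and the handle name is this sketch's assumption):

    CHASSIS_PRIVATE = "03fe054d-d727-4af3-9c5e-92e57505f242"  # record from the log

    def ack_nb_cfg(sb_api, nb_cfg: int) -> None:
        # Mirrors the DbSetCommand in the transaction above: store the
        # processed nb_cfg in Chassis_Private.external_ids so the control
        # plane can see this agent has caught up.
        sb_api.db_set(
            "Chassis_Private", CHASSIS_PRIVATE,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ).execute(check_error=True)
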
Dec 06 07:55:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:46.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/80759059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1452116217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:47.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:55:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2459220151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:48 compute-1 nova_compute[226101]: 2025-12-06 07:55:48.456 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:48 compute-1 nova_compute[226101]: 2025-12-06 07:55:48.456 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:48 compute-1 nova_compute[226101]: 2025-12-06 07:55:48.476 226109 DEBUG nova.compute.manager [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:55:48 compute-1 nova_compute[226101]: 2025-12-06 07:55:48.594 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:48 compute-1 nova_compute[226101]: 2025-12-06 07:55:48.595 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:48 compute-1 nova_compute[226101]: 2025-12-06 07:55:48.613 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:55:48 compute-1 nova_compute[226101]: 2025-12-06 07:55:48.614 226109 INFO nova.compute.claims [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:55:48 compute-1 ceph-mon[81689]: pgmap v3079: 305 pgs: 305 active+clean; 389 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 212 op/s
Dec 06 07:55:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2459220151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/709697698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:48 compute-1 nova_compute[226101]: 2025-12-06 07:55:48.785 226109 DEBUG oslo_concurrency.processutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:48.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:55:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/523815383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.215 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.233 226109 DEBUG oslo_concurrency.processutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
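
The ceph df call above is a plain subprocess; the same command with a minimal parse of its JSON output (the pools/stats field layout is that of recent Ceph releases and may differ on other versions):

    import json
    import subprocess

    # Identical to the command in the log line above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    for pool in json.loads(out)["pools"]:
        print(pool["name"], pool["stats"]["max_avail"])
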
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.238 226109 DEBUG nova.compute.provider_tree [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.253 226109 DEBUG nova.scheduler.client.report [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
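
Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class; a worked check against the logged numbers:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1
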
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.275 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.276 226109 DEBUG nova.compute.manager [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.314 226109 DEBUG nova.compute.manager [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.315 226109 DEBUG nova.network.neutron [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.335 226109 INFO nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.352 226109 DEBUG nova.compute.manager [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.412 226109 INFO nova.virt.block_device [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Booting with volume 38a07fdf-f182-4202-a84f-220c1b1f0f42 at /dev/vda
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.531 226109 DEBUG os_brick.utils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.532 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.542 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.543 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[069ac435-af55-4882-b981-08bc97c8d713]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.544 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.551 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.551 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[23e07fd9-fb4a-4133-80d8-f94d923150a2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.552 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.560 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.561 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[0d52485f-bb24-4ac6-b350-1af623e20402]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.562 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[52e63fa1-71a8-4854-89c2-5e9747e23e5f]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.562 226109 DEBUG oslo_concurrency.processutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.588 226109 DEBUG oslo_concurrency.processutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.590 226109 DEBUG os_brick.initiator.connectors.lightos [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.591 226109 DEBUG os_brick.initiator.connectors.lightos [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.591 226109 DEBUG os_brick.initiator.connectors.lightos [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.591 226109 DEBUG os_brick.utils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
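
The ==>/<== trace above is os-brick's get_connector_properties helper gathering the iSCSI initiator name, NVMe host NQN, and multipath state for the volume attach. The call that produces it, with arguments mirroring the logged ones (running it for real needs root plus nova's rootwrap/privsep setup):

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.101",
        multipath=True,
        enforce_multipath=True,
        host="compute-1.ctlplane.example.com",
    )
    # Same keys as the logged return value.
    print(props["initiator"], props["nqn"], props["multipath"])
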
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.592 226109 DEBUG nova.virt.block_device [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating existing volume attachment record: 3e903a71-9fe4-4d3b-852b-f059add7a90f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:55:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/523815383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:49.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:49 compute-1 nova_compute[226101]: 2025-12-06 07:55:49.756 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:50 compute-1 nova_compute[226101]: 2025-12-06 07:55:50.330 226109 DEBUG nova.policy [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e685a049c8a74aa8aea831fbdaf2acf8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6164fee998c94b71a37886fe42b4c56c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
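
The policy failure above is expected: the request credentials carry only the reader and member roles with is_admin False, while attaching an external network defaults to an admin-only rule. A sketch with oslo.policy; the rule text here is an assumption for illustration, nova's real default lives in its policy definitions:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Assumed rule text; substitute nova's actual default as needed.
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "is_admin:True"))

    creds = {"is_admin": False, "roles": ["reader", "member"]}  # from the log
    print(enforcer.enforce("network:attach_external_network", {}, creds))  # False
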
Dec 06 07:55:50 compute-1 nova_compute[226101]: 2025-12-06 07:55:50.526 226109 DEBUG nova.compute.manager [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:55:50 compute-1 nova_compute[226101]: 2025-12-06 07:55:50.527 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:55:50 compute-1 nova_compute[226101]: 2025-12-06 07:55:50.528 226109 INFO nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Creating image(s)
Dec 06 07:55:50 compute-1 nova_compute[226101]: 2025-12-06 07:55:50.528 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:55:50 compute-1 nova_compute[226101]: 2025-12-06 07:55:50.528 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Ensure instance console log exists: /var/lib/nova/instances/5ae1d2f2-f2e3-43b3-8b37-94095588eb3a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:55:50 compute-1 nova_compute[226101]: 2025-12-06 07:55:50.529 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:50 compute-1 nova_compute[226101]: 2025-12-06 07:55:50.529 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:50 compute-1 nova_compute[226101]: 2025-12-06 07:55:50.529 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:50 compute-1 ceph-mon[81689]: pgmap v3080: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.4 MiB/s wr, 186 op/s
Dec 06 07:55:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1446130076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:50.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:50 compute-1 sshd-session[293882]: Received disconnect from 154.219.116.39 port 39764:11: Bye Bye [preauth]
Dec 06 07:55:50 compute-1 sshd-session[293882]: Disconnected from authenticating user root 154.219.116.39 port 39764 [preauth]
Dec 06 07:55:51 compute-1 nova_compute[226101]: 2025-12-06 07:55:51.063 226109 DEBUG nova.network.neutron [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Successfully created port: 803864b4-2921-44f0-8812-bdf6e19aa53d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:55:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:51.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:51 compute-1 nova_compute[226101]: 2025-12-06 07:55:51.709 226109 DEBUG nova.network.neutron [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Successfully updated port: 803864b4-2921-44f0-8812-bdf6e19aa53d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:55:51 compute-1 nova_compute[226101]: 2025-12-06 07:55:51.724 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:55:51 compute-1 nova_compute[226101]: 2025-12-06 07:55:51.725 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquired lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:55:51 compute-1 nova_compute[226101]: 2025-12-06 07:55:51.725 226109 DEBUG nova.network.neutron [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:55:51 compute-1 nova_compute[226101]: 2025-12-06 07:55:51.833 226109 DEBUG nova.compute.manager [req-e43a5e63-f6db-4924-9371-08e4e437d7f5 req-0bf5ab4d-ccab-4438-897f-bb9c127ce994 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:55:51 compute-1 nova_compute[226101]: 2025-12-06 07:55:51.833 226109 DEBUG nova.compute.manager [req-e43a5e63-f6db-4924-9371-08e4e437d7f5 req-0bf5ab4d-ccab-4438-897f-bb9c127ce994 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing instance network info cache due to event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:55:51 compute-1 nova_compute[226101]: 2025-12-06 07:55:51.833 226109 DEBUG oslo_concurrency.lockutils [req-e43a5e63-f6db-4924-9371-08e4e437d7f5 req-0bf5ab4d-ccab-4438-897f-bb9c127ce994 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:55:51 compute-1 nova_compute[226101]: 2025-12-06 07:55:51.887 226109 DEBUG nova.network.neutron [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:55:52 compute-1 ceph-mon[81689]: pgmap v3081: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 168 op/s
Dec 06 07:55:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3864288182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:52.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:53.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.217 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.629 226109 DEBUG nova.network.neutron [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating instance_info_cache with network_info: [{"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.658 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Releasing lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.658 226109 DEBUG nova.compute.manager [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Instance network_info: |[{"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.659 226109 DEBUG oslo_concurrency.lockutils [req-e43a5e63-f6db-4924-9371-08e4e437d7f5 req-0bf5ab4d-ccab-4438-897f-bb9c127ce994 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.659 226109 DEBUG nova.network.neutron [req-e43a5e63-f6db-4924-9371-08e4e437d7f5 req-0bf5ab4d-ccab-4438-897f-bb9c127ce994 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
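
The "Acquired lock"/"Releasing lock" pairs around refresh_cache-<uuid> come from oslo.concurrency's lockutils, which Nova uses to serialize concurrent refreshes of a single instance's network info cache. A minimal sketch of the same named-lock pattern, assuming only the oslo.concurrency package (the helper below is illustrative, not Nova's code):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a"

    @lockutils.synchronized(f"refresh_cache-{INSTANCE_UUID}")
    def refresh_network_info_cache():
        # Runs with the named lock held, so two workers refreshing the
        # same instance's cache are serialized rather than interleaved.
        print("cache refreshed under lock")

    refresh_network_info_cache()
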
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.664 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Start _get_guest_xml network_info=[{"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-38a07fdf-f182-4202-a84f-220c1b1f0f42', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '38a07fdf-f182-4202-a84f-220c1b1f0f42', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5ae1d2f2-f2e3-43b3-8b37-94095588eb3a', 'attached_at': '', 'detached_at': '', 'volume_id': '38a07fdf-f182-4202-a84f-220c1b1f0f42', 'serial': '38a07fdf-f182-4202-a84f-220c1b1f0f42'}, 'mount_device': '/dev/vda', 'boot_index': 0, 'attachment_id': '3e903a71-9fe4-4d3b-852b-f059add7a90f', 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.670 226109 WARNING nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.684 226109 DEBUG nova.virt.libvirt.host [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.685 226109 DEBUG nova.virt.libvirt.host [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.688 226109 DEBUG nova.virt.libvirt.host [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.688 226109 DEBUG nova.virt.libvirt.host [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
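
The two probes above first look for a CPU controller in cgroups v1 (missing on this host) and then in the unified v2 hierarchy (found). On a v2 host the check reduces to reading the controllers file; a hedged sketch under that assumption (the function name is illustrative, not Nova's implementation):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():   # not a cgroup v2 (unified) mount
            return False
        # cgroup.controllers is a space-separated list, e.g. "cpuset cpu io ..."
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())
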
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.689 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.689 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.690 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.690 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.690 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.691 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.691 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.691 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.691 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.691 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.691 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.692 226109 DEBUG nova.virt.hardware [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
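
With no flavor or image topology constraints (preferred 0:0:0, maximum 65536:65536:65536), the search above can only factor 1 vCPU one way, hence the single 1:1:1 result. The idea can be re-derived with a brute-force sketch (this mirrors the concept behind _get_possible_cpu_topologies, not Nova's actual code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        return [
            (s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus
        ]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log
    print(possible_topologies(4))  # (1, 2, 2), (2, 2, 1), (4, 1, 1), ...
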
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.718 226109 DEBUG nova.storage.rbd_utils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] rbd image 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.722 226109 DEBUG oslo_concurrency.processutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:54 compute-1 nova_compute[226101]: 2025-12-06 07:55:54.759 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:54 compute-1 ceph-mon[81689]: pgmap v3082: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 800 KiB/s rd, 888 KiB/s wr, 78 op/s
Dec 06 07:55:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:54.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:55:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/574396443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.183 226109 DEBUG oslo_concurrency.processutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
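
The mon map fetched above is how Nova learns the monitor addresses it later writes into the guest's RBD disk definitions. The same data can be pulled standalone; this sketch assumes a reachable cluster and the client.openstack keyring seen in the log:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    for mon in json.loads(out)["mons"]:
        # e.g. "compute-1 192.168.122.100:6789/0", field names vary by Ceph release
        print(mon["name"], mon.get("public_addr"))
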
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.219 226109 DEBUG nova.virt.libvirt.vif [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:55:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-375403999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-375403999',id=177,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMlCJgA181fL+hWV96XYAuaaRjR/DFcxIrENEwwuUSLNLg2Wo/zP2WcPtpxKQuFaV64lRGeBPzRnqkTHdlSql81bpyaGplyAnqRHnVLqVTwCxa7e5Tmw+I0TD65PH3Dpw==',key_name='tempest-TestInstancesWithCinderVolumes-1103529456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6164fee998c94b71a37886fe42b4c56c',ramdisk_id='',reservation_id='r-lf3iw94r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-1429596635',owner_user_name='tempest-TestInstancesWithCinderVolumes-1429596635-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:55:49Z,user_data=None,user_id='e685a049c8a74aa8aea831fbdaf2acf8',uuid=5ae1d2f2-f2e3-43b3-8b37-94095588eb3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.220 226109 DEBUG nova.network.os_vif_util [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converting VIF {"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.220 226109 DEBUG nova.network.os_vif_util [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:22:84,bridge_name='br-int',has_traffic_filtering=True,id=803864b4-2921-44f0-8812-bdf6e19aa53d,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap803864b4-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
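
In the converted VIFOpenVSwitch object, vif_name='tap803864b4-29' is simply the Neutron port UUID prefixed with "tap" and truncated so the result stays within the kernel's 15-character interface-name limit (Nova caps device names at 14 characters). The convention in two lines:

    NIC_NAME_LEN = 14  # "tap" + the first 11 characters of the port UUID

    def tap_name(port_id: str) -> str:
        return ("tap" + port_id)[:NIC_NAME_LEN]

    print(tap_name("803864b4-2921-44f0-8812-bdf6e19aa53d"))  # tap803864b4-29
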
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.221 226109 DEBUG nova.objects.instance [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.236 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:55:55 compute-1 nova_compute[226101]:   <uuid>5ae1d2f2-f2e3-43b3-8b37-94095588eb3a</uuid>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   <name>instance-000000b1</name>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <nova:name>tempest-TestInstancesWithCinderVolumes-server-375403999</nova:name>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:55:54</nova:creationTime>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <nova:user uuid="e685a049c8a74aa8aea831fbdaf2acf8">tempest-TestInstancesWithCinderVolumes-1429596635-project-member</nova:user>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <nova:project uuid="6164fee998c94b71a37886fe42b4c56c">tempest-TestInstancesWithCinderVolumes-1429596635</nova:project>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <nova:port uuid="803864b4-2921-44f0-8812-bdf6e19aa53d">
Dec 06 07:55:55 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <system>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <entry name="serial">5ae1d2f2-f2e3-43b3-8b37-94095588eb3a</entry>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <entry name="uuid">5ae1d2f2-f2e3-43b3-8b37-94095588eb3a</entry>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     </system>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   <os>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   </os>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   <features>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   </features>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/5ae1d2f2-f2e3-43b3-8b37-94095588eb3a_disk.config">
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       </source>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <source protocol="rbd" name="volumes/volume-38a07fdf-f182-4202-a84f-220c1b1f0f42">
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       </source>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:55:55 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <serial>38a07fdf-f182-4202-a84f-220c1b1f0f42</serial>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:57:22:84"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <target dev="tap803864b4-29"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/5ae1d2f2-f2e3-43b3-8b37-94095588eb3a/console.log" append="off"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <video>
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     </video>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:55:55 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:55:55 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:55:55 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:55:55 compute-1 nova_compute[226101]: </domain>
Dec 06 07:55:55 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
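
A generated domain XML like the one above can be sanity-checked offline with the standard library before (or after) libvirt defines it. This sketch assumes the XML has been saved to domain.xml; it lists the devices the guest will get:

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    print("name:", root.findtext("name"), "vcpus:", root.findtext("vcpu"))
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        # RBD-backed disks carry the image in source@name, file-backed in @file
        print("disk:", tgt.get("dev"), "<-", src.get("name") or src.get("file"))
    for iface in root.findall("./devices/interface"):
        print("nic:", iface.find("mac").get("address"),
              "->", iface.find("target").get("dev"))
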
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.237 226109 DEBUG nova.compute.manager [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Preparing to wait for external event network-vif-plugged-803864b4-2921-44f0-8812-bdf6e19aa53d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.237 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.238 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.238 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
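
The ordering here matters: the network-vif-plugged event is registered before the VIF is plugged, so a notification that arrives quickly cannot be lost. The same register-then-wait pattern in miniature, using a plain threading.Event as a stand-in for Nova's event machinery:

    import threading

    events = {}

    def prepare_for_event(tag):
        events[tag] = threading.Event()    # register before acting

    def deliver_event(tag):                # Neutron's notification, eventually
        if tag in events:
            events[tag].set()

    tag = "network-vif-plugged-803864b4-2921-44f0-8812-bdf6e19aa53d"
    prepare_for_event(tag)
    deliver_event(tag)                     # may fire before the wait starts
    assert events[tag].wait(timeout=300)   # still succeeds either way
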
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.238 226109 DEBUG nova.virt.libvirt.vif [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:55:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-375403999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-375403999',id=177,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMlCJgA181fL+hWV96XYAuaaRjR/DFcxIrENEwwuUSLNLg2Wo/zP2WcPtpxKQuFaV64lRGeBPzRnqkTHdlSql81bpyaGplyAnqRHnVLqVTwCxa7e5Tmw+I0TD65PH3Dpw==',key_name='tempest-TestInstancesWithCinderVolumes-1103529456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6164fee998c94b71a37886fe42b4c56c',ramdisk_id='',reservation_id='r-lf3iw94r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-1429596635',owner_user_name='tempest-TestInstancesWithCinderVolumes-1429596635-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:55:49Z,user_data=None,user_id='e685a049c8a74aa8aea831fbdaf2acf8',uuid=5ae1d2f2-f2e3-43b3-8b37-94095588eb3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.239 226109 DEBUG nova.network.os_vif_util [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converting VIF {"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.239 226109 DEBUG nova.network.os_vif_util [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:22:84,bridge_name='br-int',has_traffic_filtering=True,id=803864b4-2921-44f0-8812-bdf6e19aa53d,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap803864b4-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.240 226109 DEBUG os_vif [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:22:84,bridge_name='br-int',has_traffic_filtering=True,id=803864b4-2921-44f0-8812-bdf6e19aa53d,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap803864b4-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.240 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.241 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.243 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.243 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap803864b4-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.244 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap803864b4-29, col_values=(('external_ids', {'iface-id': '803864b4-2921-44f0-8812-bdf6e19aa53d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:22:84', 'vm-uuid': '5ae1d2f2-f2e3-43b3-8b37-94095588eb3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:55 compute-1 NetworkManager[49031]: <info>  [1765007755.2464] manager: (tap803864b4-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.248 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.250 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.251 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.252 226109 INFO os_vif [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:22:84,bridge_name='br-int',has_traffic_filtering=True,id=803864b4-2921-44f0-8812-bdf6e19aa53d,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap803864b4-29')
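
The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are ovsdbapp calls issued on behalf of os-vif. A standalone sketch of the same plug, assuming the ovsdbapp package and a local ovsdb-server socket; the names and external_ids values are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap803864b4-29", may_exist=True))
        txn.add(api.db_set("Interface", "tap803864b4-29", ("external_ids", {
            "iface-id": "803864b4-2921-44f0-8812-bdf6e19aa53d",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:57:22:84",
            "vm-uuid": "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a",
        })))
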
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.316 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.317 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.317 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No VIF found with MAC fa:16:3e:57:22:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.318 226109 INFO nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Using config drive
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.342 226109 DEBUG nova.storage.rbd_utils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] rbd image 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:55:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:55.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.751 226109 INFO nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Creating config drive at /var/lib/nova/instances/5ae1d2f2-f2e3-43b3-8b37-94095588eb3a/disk.config
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.758 226109 DEBUG oslo_concurrency.processutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ae1d2f2-f2e3-43b3-8b37-94095588eb3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbr421gj2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.787 226109 DEBUG nova.network.neutron [req-e43a5e63-f6db-4924-9371-08e4e437d7f5 req-0bf5ab4d-ccab-4438-897f-bb9c127ce994 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updated VIF entry in instance network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.788 226109 DEBUG nova.network.neutron [req-e43a5e63-f6db-4924-9371-08e4e437d7f5 req-0bf5ab4d-ccab-4438-897f-bb9c127ce994 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating instance_info_cache with network_info: [{"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.812 226109 DEBUG oslo_concurrency.lockutils [req-e43a5e63-f6db-4924-9371-08e4e437d7f5 req-0bf5ab4d-ccab-4438-897f-bb9c127ce994 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:55:55 compute-1 nova_compute[226101]: 2025-12-06 07:55:55.893 226109 DEBUG oslo_concurrency.processutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ae1d2f2-f2e3-43b3-8b37-94095588eb3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbr421gj2" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
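
The config drive is an ISO 9660 image with the "config-2" volume label that cloud-init probes for. The mkisofs call above can be reproduced standalone; this sketch assumes mkisofs (or the compatible genisoimage) on PATH and a ./cd_dir tree holding the metadata. Note the log prints the publisher string unquoted, but it is a single argument:

    import subprocess

    subprocess.check_call([
        "/usr/bin/mkisofs",
        "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r",
        "-V", "config-2",      # the volume label cloud-init looks for
        "cd_dir",
    ])
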
Dec 06 07:55:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/574396443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:56 compute-1 podman[293956]: 2025-12-06 07:55:56.097760614 +0000 UTC m=+0.065602587 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:55:56 compute-1 podman[293957]: 2025-12-06 07:55:56.120950014 +0000 UTC m=+0.092945649 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 06 07:55:56 compute-1 podman[293955]: 2025-12-06 07:55:56.121080088 +0000 UTC m=+0.097229854 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.142 226109 DEBUG nova.storage.rbd_utils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] rbd image 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.146 226109 DEBUG oslo_concurrency.processutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ae1d2f2-f2e3-43b3-8b37-94095588eb3a/disk.config 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.309 226109 DEBUG oslo_concurrency.processutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ae1d2f2-f2e3-43b3-8b37-94095588eb3a/disk.config 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.310 226109 INFO nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Deleting local config drive /var/lib/nova/instances/5ae1d2f2-f2e3-43b3-8b37-94095588eb3a/disk.config because it was imported into RBD.
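
Because this deployment stores ephemeral disks in Ceph, the freshly built ISO is pushed into the "vms" pool and the local copy removed. The import step as a standalone call, with the same flags the log shows:

    import subprocess

    subprocess.check_call([
        "rbd", "import",
        "--pool", "vms",
        "disk.config",                                       # local source file
        "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a_disk.config",  # RBD image name
        "--image-format=2",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
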
Dec 06 07:55:56 compute-1 NetworkManager[49031]: <info>  [1765007756.3676] manager: (tap803864b4-29): new Tun device (/org/freedesktop/NetworkManager/Devices/318)
Dec 06 07:55:56 compute-1 kernel: tap803864b4-29: entered promiscuous mode
Dec 06 07:55:56 compute-1 ovn_controller[130279]: 2025-12-06T07:55:56Z|00704|binding|INFO|Claiming lport 803864b4-2921-44f0-8812-bdf6e19aa53d for this chassis.
Dec 06 07:55:56 compute-1 ovn_controller[130279]: 2025-12-06T07:55:56Z|00705|binding|INFO|803864b4-2921-44f0-8812-bdf6e19aa53d: Claiming fa:16:3e:57:22:84 10.100.0.13
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.370 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:56 compute-1 ovn_controller[130279]: 2025-12-06T07:55:56Z|00706|binding|INFO|Setting lport 803864b4-2921-44f0-8812-bdf6e19aa53d ovn-installed in OVS
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.385 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.387 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:56 compute-1 systemd-udevd[294060]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:55:56 compute-1 systemd-machined[190302]: New machine qemu-82-instance-000000b1.
Dec 06 07:55:56 compute-1 NetworkManager[49031]: <info>  [1765007756.4126] device (tap803864b4-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:55:56 compute-1 NetworkManager[49031]: <info>  [1765007756.4136] device (tap803864b4-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:55:56 compute-1 systemd[1]: Started Virtual Machine qemu-82-instance-000000b1.
Dec 06 07:55:56 compute-1 ovn_controller[130279]: 2025-12-06T07:55:56Z|00707|binding|INFO|Setting lport 803864b4-2921-44f0-8812-bdf6e19aa53d up in Southbound
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.631 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:22:84 10.100.0.13'], port_security=['fa:16:3e:57:22:84 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5ae1d2f2-f2e3-43b3-8b37-94095588eb3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3764201-4b86-4407-84d2-684bd05a44b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6164fee998c94b71a37886fe42b4c56c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2bb7af25-e3c4-4687-888a-3caf6297e5c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a293aea-136f-4ea2-8198-6213071653ca, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=803864b4-2921-44f0-8812-bdf6e19aa53d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.632 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 803864b4-2921-44f0-8812-bdf6e19aa53d in datapath a3764201-4b86-4407-84d2-684bd05a44b3 bound to our chassis
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.634 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3764201-4b86-4407-84d2-684bd05a44b3
Dec 06 07:55:56 compute-1 sudo[294070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.644 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6f382c-f92c-4465-b7c4-ef025a5610a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.645 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3764201-41 in ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:55:56 compute-1 sudo[294070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.650 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3764201-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.650 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e37b2194-1f8e-4f71-85ac-836c61c3113f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.651 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ab952c24-8262-421c-811b-901d8064d4e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 sudo[294070]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.665 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[593ff8e3-9148-4fc1-aa23-b50ec17f11fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.680 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[78673afa-8e42-4e7d-8000-d642ee37b22d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 sudo[294097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:55:56 compute-1 sudo[294097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:56 compute-1 sudo[294097]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.714 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a18fbe-ebc2-4101-9043-c074ea6308f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 NetworkManager[49031]: <info>  [1765007756.7201] manager: (tapa3764201-40): new Veth device (/org/freedesktop/NetworkManager/Devices/319)
Dec 06 07:55:56 compute-1 systemd-udevd[294063]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.720 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a81b0d89-972f-4b49-850e-082bdf52b093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.758 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5fca4fc7-0ae6-4049-9013-7ce76e7d7ebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.761 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[db3f5590-7051-45c4-8363-ec814c1787ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 sudo[294126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:56 compute-1 sudo[294126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:56 compute-1 sudo[294126]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:56 compute-1 NetworkManager[49031]: <info>  [1765007756.7863] device (tapa3764201-40): carrier: link connected
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.792 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c95292db-5657-453b-9e38-6e18e10712c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.813 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e4efc4a0-4986-4a2e-a2ec-8d6f1448ffaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3764201-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:90:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803629, 'reachable_time': 38038, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294214, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.827 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0ed0e9-5c70-4e40-97fe-c97389eee804]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:90e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803629, 'tstamp': 803629}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294229, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 sudo[294187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:55:56 compute-1 sudo[294187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.847 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb29d48-dc38-450e-9ecf-580368279a25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3764201-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:90:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803629, 'reachable_time': 38038, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294232, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.878 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4de2128e-ac58-47cb-929c-0f84a008c5b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.920 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007756.9195979, 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.920 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] VM Started (Lifecycle Event)
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.937 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[73be50ee-51d4-4777-b33b-2363ac8c2a4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.938 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3764201-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.939 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.939 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3764201-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.941 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:56 compute-1 NetworkManager[49031]: <info>  [1765007756.9420] manager: (tapa3764201-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Dec 06 07:55:56 compute-1 kernel: tapa3764201-40: entered promiscuous mode
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.944 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.946 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3764201-40, col_values=(('external_ids', {'iface-id': '901b0fd3-1832-4628-bbf4-0a14b30cd979'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:56 compute-1 ovn_controller[130279]: 2025-12-06T07:55:56Z|00708|binding|INFO|Releasing lport 901b0fd3-1832-4628-bbf4-0a14b30cd979 from this chassis (sb_readonly=0)
Dec 06 07:55:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:56.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.948 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.950 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3764201-4b86-4407-84d2-684bd05a44b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3764201-4b86-4407-84d2-684bd05a44b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.951 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[69b01df8-b856-4314-a8a6-e72a6107783a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.952 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-a3764201-4b86-4407-84d2-684bd05a44b3
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/a3764201-4b86-4407-84d2-684bd05a44b3.pid.haproxy
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID a3764201-4b86-4407-84d2-684bd05a44b3
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:55:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:55:56.954 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'env', 'PROCESS_TAG=haproxy-a3764201-4b86-4407-84d2-684bd05a44b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3764201-4b86-4407-84d2-684bd05a44b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:55:56 compute-1 nova_compute[226101]: 2025-12-06 07:55:56.962 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:57 compute-1 nova_compute[226101]: 2025-12-06 07:55:57.199 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:57 compute-1 nova_compute[226101]: 2025-12-06 07:55:57.203 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007756.9197242, 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:55:57 compute-1 nova_compute[226101]: 2025-12-06 07:55:57.203 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] VM Paused (Lifecycle Event)
Dec 06 07:55:57 compute-1 podman[294287]: 2025-12-06 07:55:57.303929526 +0000 UTC m=+0.048726496 container create fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:55:57 compute-1 ceph-mon[81689]: pgmap v3083: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 28 KiB/s wr, 80 op/s
Dec 06 07:55:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3251479305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:57 compute-1 systemd[1]: Started libpod-conmon-fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463.scope.
Dec 06 07:55:57 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:55:57 compute-1 podman[294287]: 2025-12-06 07:55:57.27529711 +0000 UTC m=+0.020094100 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:55:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6180fbdd8199a15086c01befabb0b6162338a2f7edd972434cd5f2da08647d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:55:57 compute-1 sudo[294187]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:57 compute-1 podman[294287]: 2025-12-06 07:55:57.386571779 +0000 UTC m=+0.131368769 container init fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:55:57 compute-1 podman[294287]: 2025-12-06 07:55:57.391595804 +0000 UTC m=+0.136392774 container start fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:55:57 compute-1 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[294315]: [NOTICE]   (294321) : New worker (294323) forked
Dec 06 07:55:57 compute-1 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[294315]: [NOTICE]   (294321) : Loading success.
Dec 06 07:55:57 compute-1 nova_compute[226101]: 2025-12-06 07:55:57.449 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:57 compute-1 nova_compute[226101]: 2025-12-06 07:55:57.452 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:55:57 compute-1 nova_compute[226101]: 2025-12-06 07:55:57.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:57.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.240 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.708 226109 DEBUG oslo_concurrency.lockutils [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.708 226109 DEBUG oslo_concurrency.lockutils [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.743 226109 INFO nova.compute.manager [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Detaching volume 4740f75b-6222-44e7-8496-fed71a346d64
Dec 06 07:55:58 compute-1 ceph-mon[81689]: pgmap v3084: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 31 KiB/s wr, 118 op/s
Dec 06 07:55:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:55:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:55:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:55:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:55:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:55:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:55:58 compute-1 ovn_controller[130279]: 2025-12-06T07:55:58Z|00709|binding|INFO|Releasing lport 904065d3-3080-49e2-8707-2794a4ba4e6e from this chassis (sb_readonly=0)
Dec 06 07:55:58 compute-1 ovn_controller[130279]: 2025-12-06T07:55:58Z|00710|binding|INFO|Releasing lport 901b0fd3-1832-4628-bbf4-0a14b30cd979 from this chassis (sb_readonly=0)
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.810 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:58.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.960 226109 DEBUG nova.compute.manager [req-62af066d-09f0-4817-9f9a-c92099e3211b req-9f846eda-ca95-4f63-924a-6c5c4c97f88c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-vif-plugged-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.960 226109 DEBUG oslo_concurrency.lockutils [req-62af066d-09f0-4817-9f9a-c92099e3211b req-9f846eda-ca95-4f63-924a-6c5c4c97f88c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.960 226109 DEBUG oslo_concurrency.lockutils [req-62af066d-09f0-4817-9f9a-c92099e3211b req-9f846eda-ca95-4f63-924a-6c5c4c97f88c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.960 226109 DEBUG oslo_concurrency.lockutils [req-62af066d-09f0-4817-9f9a-c92099e3211b req-9f846eda-ca95-4f63-924a-6c5c4c97f88c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.961 226109 DEBUG nova.compute.manager [req-62af066d-09f0-4817-9f9a-c92099e3211b req-9f846eda-ca95-4f63-924a-6c5c4c97f88c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Processing event network-vif-plugged-803864b4-2921-44f0-8812-bdf6e19aa53d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.961 226109 DEBUG nova.compute.manager [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.964 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007758.9646888, 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.964 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] VM Resumed (Lifecycle Event)
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.966 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.968 226109 INFO nova.virt.libvirt.driver [-] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Instance spawned successfully.
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.969 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.995 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:58 compute-1 nova_compute[226101]: 2025-12-06 07:55:58.997 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.003 226109 INFO nova.virt.block_device [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Attempting to driver detach volume 4740f75b-6222-44e7-8496-fed71a346d64 from mountpoint /dev/vdb
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.008 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.008 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.008 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.009 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.009 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.009 226109 DEBUG nova.virt.libvirt.driver [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.018 226109 DEBUG nova.virt.libvirt.driver [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Attempting to detach device vdb from instance 563d5f63-1ecc-4c65-855c-e375f6f97f29 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.018 226109 DEBUG nova.virt.libvirt.guest [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:55:59 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-4740f75b-6222-44e7-8496-fed71a346d64">
Dec 06 07:55:59 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]:   </source>
Dec 06 07:55:59 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]:   <serial>4740f75b-6222-44e7-8496-fed71a346d64</serial>
Dec 06 07:55:59 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]: </disk>
Dec 06 07:55:59 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.025 226109 INFO nova.virt.libvirt.driver [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully detached device vdb from instance 563d5f63-1ecc-4c65-855c-e375f6f97f29 from the persistent domain config.
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.026 226109 DEBUG nova.virt.libvirt.driver [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 563d5f63-1ecc-4c65-855c-e375f6f97f29 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.026 226109 DEBUG nova.virt.libvirt.guest [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:55:59 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-4740f75b-6222-44e7-8496-fed71a346d64">
Dec 06 07:55:59 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]:   </source>
Dec 06 07:55:59 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]:   <serial>4740f75b-6222-44e7-8496-fed71a346d64</serial>
Dec 06 07:55:59 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec 06 07:55:59 compute-1 nova_compute[226101]: </disk>
Dec 06 07:55:59 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.040 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.076 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765007759.0764835, 563d5f63-1ecc-4c65-855c-e375f6f97f29 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.078 226109 DEBUG nova.virt.libvirt.driver [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 563d5f63-1ecc-4c65-855c-e375f6f97f29 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.081 226109 INFO nova.virt.libvirt.driver [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully detached device vdb from instance 563d5f63-1ecc-4c65-855c-e375f6f97f29 from the live domain config.
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.097 226109 INFO nova.compute.manager [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Took 8.57 seconds to spawn the instance on the hypervisor.
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.098 226109 DEBUG nova.compute.manager [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.157 226109 INFO nova.compute.manager [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Took 10.63 seconds to build instance.
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.177 226109 DEBUG oslo_concurrency.lockutils [None req-46ed70cb-6bfc-4b1f-9d62-196ef39dbb12 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.295 226109 DEBUG nova.objects.instance [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.333 226109 DEBUG oslo_concurrency.lockutils [None req-a0f93d64-f27e-41e2-bebb-6551fb77b1a9 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:55:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:59.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:59 compute-1 nova_compute[226101]: 2025-12-06 07:55:59.761 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:00 compute-1 nova_compute[226101]: 2025-12-06 07:56:00.119 226109 DEBUG oslo_concurrency.lockutils [None req-e1838dfd-cebb-4cad-a8fd-9dc881c9fe5d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:00 compute-1 nova_compute[226101]: 2025-12-06 07:56:00.120 226109 DEBUG oslo_concurrency.lockutils [None req-e1838dfd-cebb-4cad-a8fd-9dc881c9fe5d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:00 compute-1 nova_compute[226101]: 2025-12-06 07:56:00.120 226109 DEBUG nova.compute.manager [None req-e1838dfd-cebb-4cad-a8fd-9dc881c9fe5d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:56:00 compute-1 nova_compute[226101]: 2025-12-06 07:56:00.124 226109 DEBUG nova.compute.manager [None req-e1838dfd-cebb-4cad-a8fd-9dc881c9fe5d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 06 07:56:00 compute-1 nova_compute[226101]: 2025-12-06 07:56:00.125 226109 DEBUG nova.objects.instance [None req-e1838dfd-cebb-4cad-a8fd-9dc881c9fe5d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:00 compute-1 nova_compute[226101]: 2025-12-06 07:56:00.163 226109 DEBUG nova.virt.libvirt.driver [None req-e1838dfd-cebb-4cad-a8fd-9dc881c9fe5d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:56:00 compute-1 nova_compute[226101]: 2025-12-06 07:56:00.276 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:56:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1213676908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:00 compute-1 ceph-mon[81689]: pgmap v3085: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 46 KiB/s wr, 135 op/s
Dec 06 07:56:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1213676908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:00.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:01 compute-1 nova_compute[226101]: 2025-12-06 07:56:01.044 226109 DEBUG nova.compute.manager [req-b4ec496e-322d-4488-b0b4-0a995238e75a req-8548de1d-3e56-4bc8-9fb0-c91eec5460ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-vif-plugged-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:01 compute-1 nova_compute[226101]: 2025-12-06 07:56:01.045 226109 DEBUG oslo_concurrency.lockutils [req-b4ec496e-322d-4488-b0b4-0a995238e75a req-8548de1d-3e56-4bc8-9fb0-c91eec5460ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:01 compute-1 nova_compute[226101]: 2025-12-06 07:56:01.045 226109 DEBUG oslo_concurrency.lockutils [req-b4ec496e-322d-4488-b0b4-0a995238e75a req-8548de1d-3e56-4bc8-9fb0-c91eec5460ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:01 compute-1 nova_compute[226101]: 2025-12-06 07:56:01.045 226109 DEBUG oslo_concurrency.lockutils [req-b4ec496e-322d-4488-b0b4-0a995238e75a req-8548de1d-3e56-4bc8-9fb0-c91eec5460ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:01 compute-1 nova_compute[226101]: 2025-12-06 07:56:01.046 226109 DEBUG nova.compute.manager [req-b4ec496e-322d-4488-b0b4-0a995238e75a req-8548de1d-3e56-4bc8-9fb0-c91eec5460ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] No waiting events found dispatching network-vif-plugged-803864b4-2921-44f0-8812-bdf6e19aa53d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:56:01 compute-1 nova_compute[226101]: 2025-12-06 07:56:01.046 226109 WARNING nova.compute.manager [req-b4ec496e-322d-4488-b0b4-0a995238e75a req-8548de1d-3e56-4bc8-9fb0-c91eec5460ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received unexpected event network-vif-plugged-803864b4-2921-44f0-8812-bdf6e19aa53d for instance with vm_state active and task_state None.
Dec 06 07:56:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:01.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:01.676 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:01.677 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:01.677 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:02 compute-1 kernel: tap73cf92a3-d7 (unregistering): left promiscuous mode
Dec 06 07:56:02 compute-1 NetworkManager[49031]: <info>  [1765007762.8157] device (tap73cf92a3-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:56:02 compute-1 nova_compute[226101]: 2025-12-06 07:56:02.827 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:02 compute-1 ovn_controller[130279]: 2025-12-06T07:56:02Z|00711|binding|INFO|Releasing lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 from this chassis (sb_readonly=0)
Dec 06 07:56:02 compute-1 ovn_controller[130279]: 2025-12-06T07:56:02Z|00712|binding|INFO|Setting lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 down in Southbound
Dec 06 07:56:02 compute-1 ovn_controller[130279]: 2025-12-06T07:56:02Z|00713|binding|INFO|Removing iface tap73cf92a3-d7 ovn-installed in OVS
Dec 06 07:56:02 compute-1 nova_compute[226101]: 2025-12-06 07:56:02.830 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:02.842 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:20:98 10.100.0.5'], port_security=['fa:16:3e:d6:20:98 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '563d5f63-1ecc-4c65-855c-e375f6f97f29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997afd36-d3a2-430f-ba34-f342135a9bb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63df107b8bd14504974c75ba92ae469b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '88b6dfa1-842c-4e82-9a0e-5e2b4d8b3cb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2999ae76-b414-45fb-8813-4039468da309, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=73cf92a3-d7ff-4ec0-950c-c0c31afd8004) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:56:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:02.844 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 in datapath 997afd36-d3a2-430f-ba34-f342135a9bb6 unbound from our chassis
Dec 06 07:56:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:02.845 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 997afd36-d3a2-430f-ba34-f342135a9bb6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:56:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:02.847 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f45069aa-a319-4064-b983-639e708aa19f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:02.847 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 namespace which is not needed anymore
Dec 06 07:56:02 compute-1 nova_compute[226101]: 2025-12-06 07:56:02.852 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:02 compute-1 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000ad.scope: Deactivated successfully.
Dec 06 07:56:02 compute-1 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000ad.scope: Consumed 15.191s CPU time.
Dec 06 07:56:02 compute-1 systemd-machined[190302]: Machine qemu-81-instance-000000ad terminated.
Dec 06 07:56:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:02.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.179 226109 INFO nova.virt.libvirt.driver [None req-e1838dfd-cebb-4cad-a8fd-9dc881c9fe5d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance shutdown successfully after 3 seconds.
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.184 226109 INFO nova.virt.libvirt.driver [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance destroyed successfully.
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.184 226109 DEBUG nova.objects.instance [None req-e1838dfd-cebb-4cad-a8fd-9dc881c9fe5d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'numa_topology' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.198 226109 DEBUG nova.compute.manager [None req-e1838dfd-cebb-4cad-a8fd-9dc881c9fe5d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.235 226109 DEBUG nova.compute.manager [req-0d4e1269-93eb-4897-9b9c-2d8b7a77deae req-600a0e2e-bd7c-43c2-94b1-d16638f40a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-unplugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.235 226109 DEBUG oslo_concurrency.lockutils [req-0d4e1269-93eb-4897-9b9c-2d8b7a77deae req-600a0e2e-bd7c-43c2-94b1-d16638f40a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.235 226109 DEBUG oslo_concurrency.lockutils [req-0d4e1269-93eb-4897-9b9c-2d8b7a77deae req-600a0e2e-bd7c-43c2-94b1-d16638f40a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.236 226109 DEBUG oslo_concurrency.lockutils [req-0d4e1269-93eb-4897-9b9c-2d8b7a77deae req-600a0e2e-bd7c-43c2-94b1-d16638f40a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.236 226109 DEBUG nova.compute.manager [req-0d4e1269-93eb-4897-9b9c-2d8b7a77deae req-600a0e2e-bd7c-43c2-94b1-d16638f40a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] No waiting events found dispatching network-vif-unplugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.236 226109 WARNING nova.compute.manager [req-0d4e1269-93eb-4897-9b9c-2d8b7a77deae req-600a0e2e-bd7c-43c2-94b1-d16638f40a39 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received unexpected event network-vif-unplugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for instance with vm_state active and task_state powering-off.
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.242 226109 DEBUG oslo_concurrency.lockutils [None req-e1838dfd-cebb-4cad-a8fd-9dc881c9fe5d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:03 compute-1 ceph-mon[81689]: pgmap v3086: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 49 KiB/s wr, 191 op/s
Dec 06 07:56:03 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293787]: [NOTICE]   (293839) : haproxy version is 2.8.14-c23fe91
Dec 06 07:56:03 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293787]: [NOTICE]   (293839) : path to executable is /usr/sbin/haproxy
Dec 06 07:56:03 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293787]: [WARNING]  (293839) : Exiting Master process...
Dec 06 07:56:03 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293787]: [ALERT]    (293839) : Current worker (293842) exited with code 143 (Terminated)
Dec 06 07:56:03 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[293787]: [WARNING]  (293839) : All workers exited. Exiting... (0)
Dec 06 07:56:03 compute-1 systemd[1]: libpod-942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67.scope: Deactivated successfully.
Dec 06 07:56:03 compute-1 podman[294358]: 2025-12-06 07:56:03.296697772 +0000 UTC m=+0.354985285 container died 942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:56:03 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67-userdata-shm.mount: Deactivated successfully.
Dec 06 07:56:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-f2085dc81b25e883c54339f0009af0a2a1002ca477a64b7ed7a9a8568188d13e-merged.mount: Deactivated successfully.
Dec 06 07:56:03 compute-1 podman[294358]: 2025-12-06 07:56:03.341672606 +0000 UTC m=+0.399960119 container cleanup 942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:56:03 compute-1 systemd[1]: libpod-conmon-942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67.scope: Deactivated successfully.
Dec 06 07:56:03 compute-1 podman[294399]: 2025-12-06 07:56:03.60216063 +0000 UTC m=+0.241167868 container remove 942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:56:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:03.607 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7553805e-02a2-4a08-aa47-0eb5e532665c]: (4, ('Sat Dec  6 07:56:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 (942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67)\n942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67\nSat Dec  6 07:56:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 (942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67)\n942d130280791e4e0538ddd5729d83f2334ea84318cfecd7db7b3eeaf1224d67\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:03.610 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9ed66e-e63d-4b32-9f10-2d1079a7e2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:03.612 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997afd36-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.614 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:03 compute-1 kernel: tap997afd36-d0: left promiscuous mode
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.633 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:03 compute-1 nova_compute[226101]: 2025-12-06 07:56:03.634 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:03.639 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[afa82eb8-8f4c-4bfd-93e5-5d4dce764128]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:03.655 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1201e160-6462-4e5c-bb19-1df2a35ac176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:03.656 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[80c1813a-5d5e-4064-8e88-5081242c5c4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:03.669 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a6219d72-c177-4154-9ad7-37ce8b10d773]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800378, 'reachable_time': 32387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294419, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:03.672 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:56:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:03.672 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[560faf78-32fc-414e-a041-32670cc7a903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:03 compute-1 systemd[1]: run-netns-ovnmeta\x2d997afd36\x2dd3a2\x2d430f\x2dba34\x2df342135a9bb6.mount: Deactivated successfully.
Dec 06 07:56:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:03.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:04 compute-1 ovn_controller[130279]: 2025-12-06T07:56:04Z|00714|binding|INFO|Releasing lport 901b0fd3-1832-4628-bbf4-0a14b30cd979 from this chassis (sb_readonly=0)
Dec 06 07:56:04 compute-1 nova_compute[226101]: 2025-12-06 07:56:04.205 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:04 compute-1 nova_compute[226101]: 2025-12-06 07:56:04.763 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:04 compute-1 sudo[294421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:04 compute-1 sudo[294421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:04 compute-1 sudo[294421]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:04 compute-1 sudo[294446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:56:04 compute-1 sudo[294446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:04 compute-1 sudo[294446]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:04 compute-1 nova_compute[226101]: 2025-12-06 07:56:04.944 226109 DEBUG nova.objects.instance [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:04.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:04 compute-1 nova_compute[226101]: 2025-12-06 07:56:04.974 226109 DEBUG oslo_concurrency.lockutils [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:56:04 compute-1 nova_compute[226101]: 2025-12-06 07:56:04.974 226109 DEBUG oslo_concurrency.lockutils [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquired lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:56:04 compute-1 nova_compute[226101]: 2025-12-06 07:56:04.974 226109 DEBUG nova.network.neutron [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:56:04 compute-1 nova_compute[226101]: 2025-12-06 07:56:04.975 226109 DEBUG nova.objects.instance [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'info_cache' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:05 compute-1 ceph-mon[81689]: pgmap v3087: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 38 KiB/s wr, 218 op/s
Dec 06 07:56:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:56:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:56:05 compute-1 nova_compute[226101]: 2025-12-06 07:56:05.278 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:05 compute-1 nova_compute[226101]: 2025-12-06 07:56:05.402 226109 DEBUG nova.compute.manager [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:05 compute-1 nova_compute[226101]: 2025-12-06 07:56:05.402 226109 DEBUG oslo_concurrency.lockutils [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:05 compute-1 nova_compute[226101]: 2025-12-06 07:56:05.403 226109 DEBUG oslo_concurrency.lockutils [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:05 compute-1 nova_compute[226101]: 2025-12-06 07:56:05.403 226109 DEBUG oslo_concurrency.lockutils [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:05 compute-1 nova_compute[226101]: 2025-12-06 07:56:05.403 226109 DEBUG nova.compute.manager [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] No waiting events found dispatching network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:56:05 compute-1 nova_compute[226101]: 2025-12-06 07:56:05.404 226109 WARNING nova.compute.manager [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received unexpected event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for instance with vm_state stopped and task_state powering-on.
Dec 06 07:56:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:05.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.755 226109 DEBUG nova.network.neutron [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updating instance_info_cache with network_info: [{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.782 226109 DEBUG oslo_concurrency.lockutils [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Releasing lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.809 226109 INFO nova.virt.libvirt.driver [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance destroyed successfully.
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.810 226109 DEBUG nova.objects.instance [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'numa_topology' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.827 226109 DEBUG nova.objects.instance [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'resources' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.846 226109 DEBUG nova.virt.libvirt.vif [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1295588604',display_name='tempest-AttachVolumeTestJSON-server-1295588604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1295588604',id=173,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkZWnDd2ourzjHlcEK7cHRdQV+nbLAhyX6fgrqVnvZqgS31pND6qBaKlg3yv3dw5xYj9PWy81LB50kUPIrMKeW7VSpCHVdreKc29y6jmgbqwFAxDIw3ZAxRTEFcoovdYA==',key_name='tempest-keypair-1313997358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:54:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-dfyu9z67',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:56:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=563d5f63-1ecc-4c65-855c-e375f6f97f29,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.847 226109 DEBUG nova.network.os_vif_util [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.848 226109 DEBUG nova.network.os_vif_util [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.849 226109 DEBUG os_vif [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.852 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.853 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73cf92a3-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.855 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.857 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.860 226109 INFO os_vif [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7')
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.870 226109 DEBUG nova.virt.libvirt.driver [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Start _get_guest_xml network_info=[{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.874 226109 WARNING nova.virt.libvirt.driver [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.880 226109 DEBUG nova.virt.libvirt.host [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.881 226109 DEBUG nova.virt.libvirt.host [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.884 226109 DEBUG nova.virt.libvirt.host [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.885 226109 DEBUG nova.virt.libvirt.host [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.886 226109 DEBUG nova.virt.libvirt.driver [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.886 226109 DEBUG nova.virt.hardware [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.887 226109 DEBUG nova.virt.hardware [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.887 226109 DEBUG nova.virt.hardware [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.887 226109 DEBUG nova.virt.hardware [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.888 226109 DEBUG nova.virt.hardware [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.888 226109 DEBUG nova.virt.hardware [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.888 226109 DEBUG nova.virt.hardware [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.889 226109 DEBUG nova.virt.hardware [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.889 226109 DEBUG nova.virt.hardware [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.889 226109 DEBUG nova.virt.hardware [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.890 226109 DEBUG nova.virt.hardware [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.890 226109 DEBUG nova.objects.instance [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:06 compute-1 nova_compute[226101]: 2025-12-06 07:56:06.913 226109 DEBUG oslo_concurrency.processutils [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:06.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:07 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Dec 06 07:56:07 compute-1 ceph-mon[81689]: pgmap v3088: 305 pgs: 305 active+clean; 367 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 688 KiB/s wr, 206 op/s
Dec 06 07:56:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:56:07 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3175340184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.396 226109 DEBUG oslo_concurrency.processutils [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
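The round trip above ("Running cmd ... returned: 0 in 0.482s") is nova's RBD image backend shelling out to the ceph CLI to learn the monitor addresses that reappear as the <host> elements of the guest XML further down. A rough standalone equivalent, assuming the same client id, keyring and /etc/ceph/ceph.conf are in place (the "mons"/"name" keys reflect the usual mon dump JSON layout):

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mon_map = json.loads(out)
    print([m["name"] for m in mon_map.get("mons", [])])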
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.427 226109 DEBUG oslo_concurrency.processutils [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:07.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:56:07 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/582007015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.850 226109 DEBUG oslo_concurrency.processutils [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.852 226109 DEBUG nova.virt.libvirt.vif [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1295588604',display_name='tempest-AttachVolumeTestJSON-server-1295588604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1295588604',id=173,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkZWnDd2ourzjHlcEK7cHRdQV+nbLAhyX6fgrqVnvZqgS31pND6qBaKlg3yv3dw5xYj9PWy81LB50kUPIrMKeW7VSpCHVdreKc29y6jmgbqwFAxDIw3ZAxRTEFcoovdYA==',key_name='tempest-keypair-1313997358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:54:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-dfyu9z67',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:56:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=563d5f63-1ecc-4c65-855c-e375f6f97f29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.852 226109 DEBUG nova.network.os_vif_util [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.853 226109 DEBUG nova.network.os_vif_util [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.855 226109 DEBUG nova.objects.instance [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'pci_devices' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.871 226109 DEBUG nova.virt.libvirt.driver [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:56:07 compute-1 nova_compute[226101]:   <uuid>563d5f63-1ecc-4c65-855c-e375f6f97f29</uuid>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   <name>instance-000000ad</name>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <nova:name>tempest-AttachVolumeTestJSON-server-1295588604</nova:name>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:56:06</nova:creationTime>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <nova:user uuid="0ce6d0a8def6432aa60891ea00ef9d8b">tempest-AttachVolumeTestJSON-950214889-project-member</nova:user>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <nova:project uuid="63df107b8bd14504974c75ba92ae469b">tempest-AttachVolumeTestJSON-950214889</nova:project>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <nova:port uuid="73cf92a3-d7ff-4ec0-950c-c0c31afd8004">
Dec 06 07:56:07 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <system>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <entry name="serial">563d5f63-1ecc-4c65-855c-e375f6f97f29</entry>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <entry name="uuid">563d5f63-1ecc-4c65-855c-e375f6f97f29</entry>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     </system>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   <os>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   </os>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   <features>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   </features>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/563d5f63-1ecc-4c65-855c-e375f6f97f29_disk">
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       </source>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/563d5f63-1ecc-4c65-855c-e375f6f97f29_disk.config">
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       </source>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:56:07 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:d6:20:98"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <target dev="tap73cf92a3-d7"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29/console.log" append="off"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <video>
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     </video>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <input type="keyboard" bus="usb"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:56:07 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:56:07 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:56:07 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:56:07 compute-1 nova_compute[226101]: </domain>
Dec 06 07:56:07 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
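With _get_guest_xml finished, nova hands the domain definition above to libvirtd. A minimal sketch of that hand-off with the libvirt Python bindings, assuming the XML above has been saved to domain.xml (nova's real call path wraps this step in event waiting and error cleanup):

    import libvirt

    conn = libvirt.open("qemu:///system")
    with open("domain.xml") as f:          # the XML dumped above
        dom = conn.defineXML(f.read())     # persist the definition
    dom.create()                           # power it on; cf. "VM Resumed" below
    print(dom.name(), dom.isActive())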
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.873 226109 DEBUG nova.virt.libvirt.driver [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.873 226109 DEBUG nova.virt.libvirt.driver [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.874 226109 DEBUG nova.virt.libvirt.vif [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1295588604',display_name='tempest-AttachVolumeTestJSON-server-1295588604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1295588604',id=173,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkZWnDd2ourzjHlcEK7cHRdQV+nbLAhyX6fgrqVnvZqgS31pND6qBaKlg3yv3dw5xYj9PWy81LB50kUPIrMKeW7VSpCHVdreKc29y6jmgbqwFAxDIw3ZAxRTEFcoovdYA==',key_name='tempest-keypair-1313997358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:54:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-dfyu9z67',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:56:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=563d5f63-1ecc-4c65-855c-e375f6f97f29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.874 226109 DEBUG nova.network.os_vif_util [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.875 226109 DEBUG nova.network.os_vif_util [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.875 226109 DEBUG os_vif [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.876 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.876 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.877 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.879 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.880 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73cf92a3-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.880 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73cf92a3-d7, col_values=(('external_ids', {'iface-id': '73cf92a3-d7ff-4ec0-950c-c0c31afd8004', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:20:98', 'vm-uuid': '563d5f63-1ecc-4c65-855c-e375f6f97f29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:07 compute-1 NetworkManager[49031]: <info>  [1765007767.8826] manager: (tap73cf92a3-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.884 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.887 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.888 226109 INFO os_vif [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7')
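The AddPortCommand/DbSetCommand transaction that just succeeded is driven natively through ovsdbapp; the same change expressed as an ovs-vsctl one-liner, with values copied from the log (an illustration of the OVSDB update, not what os-vif actually executes):

    import subprocess

    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap73cf92a3-d7",
        "--", "set", "Interface", "tap73cf92a3-d7",
        "external_ids:iface-id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:d6:20:98",
        "external_ids:vm-uuid=563d5f63-1ecc-4c65-855c-e375f6f97f29",
    ])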
Dec 06 07:56:07 compute-1 kernel: tap73cf92a3-d7: entered promiscuous mode
Dec 06 07:56:07 compute-1 NetworkManager[49031]: <info>  [1765007767.9698] manager: (tap73cf92a3-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Dec 06 07:56:07 compute-1 ovn_controller[130279]: 2025-12-06T07:56:07Z|00715|binding|INFO|Claiming lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for this chassis.
Dec 06 07:56:07 compute-1 ovn_controller[130279]: 2025-12-06T07:56:07Z|00716|binding|INFO|73cf92a3-d7ff-4ec0-950c-c0c31afd8004: Claiming fa:16:3e:d6:20:98 10.100.0.5
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:07.977 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:20:98 10.100.0.5'], port_security=['fa:16:3e:d6:20:98 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '563d5f63-1ecc-4c65-855c-e375f6f97f29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997afd36-d3a2-430f-ba34-f342135a9bb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63df107b8bd14504974c75ba92ae469b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '88b6dfa1-842c-4e82-9a0e-5e2b4d8b3cb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2999ae76-b414-45fb-8813-4039468da309, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=73cf92a3-d7ff-4ec0-950c-c0c31afd8004) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:56:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:07.978 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 in datapath 997afd36-d3a2-430f-ba34-f342135a9bb6 bound to our chassis
Dec 06 07:56:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:07.979 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:56:07 compute-1 ovn_controller[130279]: 2025-12-06T07:56:07Z|00717|binding|INFO|Setting lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 ovn-installed in OVS
Dec 06 07:56:07 compute-1 ovn_controller[130279]: 2025-12-06T07:56:07Z|00718|binding|INFO|Setting lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 up in Southbound
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.989 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:07.993 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[90f20820-b2b6-4396-af29-b974ff23f093]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:07 compute-1 nova_compute[226101]: 2025-12-06 07:56:07.994 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:07.994 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap997afd36-d1 in ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:56:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:07.997 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap997afd36-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:56:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:07.997 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[00625f5e-da53-42b8-9ec0-6208395ae009]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:07.998 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0d3654-3119-4a8a-a9c6-62709c8402be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 systemd-udevd[294548]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.011 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[2f9d82ad-ed1d-4dd1-be31-ae261cbbab43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 systemd-machined[190302]: New machine qemu-83-instance-000000ad.
Dec 06 07:56:08 compute-1 NetworkManager[49031]: <info>  [1765007768.0183] device (tap73cf92a3-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:56:08 compute-1 NetworkManager[49031]: <info>  [1765007768.0193] device (tap73cf92a3-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:56:08 compute-1 systemd[1]: Started Virtual Machine qemu-83-instance-000000ad.
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.026 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[470cd864-8647-416d-9a20-ffe699144a1e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.060 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[75ceff61-8932-4a8e-9138-49c13a836e00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.066 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f79a8ac5-137c-475e-9554-bcbc6fa21b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 NetworkManager[49031]: <info>  [1765007768.0673] manager: (tap997afd36-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.095 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[45a0c7ec-57bc-49c2-b4eb-dd7d6c4733be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.097 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2d345269-b367-4963-86f1-1b019a1951a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 NetworkManager[49031]: <info>  [1765007768.1187] device (tap997afd36-d0): carrier: link connected
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.127 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c965afd2-8aa0-4cac-8890-e5e19328b7d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.146 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd0b395-1e7a-4685-9fc8-c2b86c60a83a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997afd36-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:8b:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804762, 'reachable_time': 17126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294581, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.163 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9f29c3b4-7442-46db-9167-2c5f4d85aa5d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:8b72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 804762, 'tstamp': 804762}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294582, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.179 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[81af4f6b-a052-4048-a49e-7eca1ad7315a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997afd36-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:8b:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804762, 'reachable_time': 17126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294583, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
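The privsep traffic around here is the agent assembling the metadata plumbing: a veth pair whose inner end (tap997afd36-d1, the RTM_NEWLINK dumps above) lives in the ovnmeta- namespace while the outer end (tap997afd36-d0) gets plugged into br-int below. Roughly this iproute2 sequence, sketched with subprocess instead of the agent's pyroute2 calls:

    import subprocess

    ns = "ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6"
    subprocess.run(["ip", "netns", "add", ns])   # may already exist; ignore failure
    subprocess.check_call(["ip", "link", "add", "tap997afd36-d0",
                           "type", "veth", "peer", "name", "tap997afd36-d1"])
    subprocess.check_call(["ip", "link", "set", "tap997afd36-d1", "netns", ns])
    subprocess.check_call(["ip", "link", "set", "tap997afd36-d0", "up"])
    subprocess.check_call(["ip", "-n", ns, "link", "set", "tap997afd36-d1", "up"])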
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.215 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[108aad87-5a94-44c6-8ed5-03f433767a02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.268 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[261a4602-5378-44c5-bd65-31c57facd6e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.269 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997afd36-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.270 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.270 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap997afd36-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.272 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:08 compute-1 NetworkManager[49031]: <info>  [1765007768.2728] manager: (tap997afd36-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Dec 06 07:56:08 compute-1 kernel: tap997afd36-d0: entered promiscuous mode
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.274 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.274 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap997afd36-d0, col_values=(('external_ids', {'iface-id': '904065d3-3080-49e2-8707-2794a4ba4e6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.275 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:08 compute-1 ovn_controller[130279]: 2025-12-06T07:56:08Z|00719|binding|INFO|Releasing lport 904065d3-3080-49e2-8707-2794a4ba4e6e from this chassis (sb_readonly=0)
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.291 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.291 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.292 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.293 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5809f625-473f-40fb-9071-a188f0b236c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.294 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:56:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:08.295 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'env', 'PROCESS_TAG=haproxy-997afd36-d3a2-430f-ba34-f342135a9bb6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/997afd36-d3a2-430f-ba34-f342135a9bb6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:56:08 compute-1 ceph-mon[81689]: pgmap v3089: 305 pgs: 305 active+clean; 380 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.4 MiB/s wr, 218 op/s
Dec 06 07:56:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3175340184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/582007015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.519 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 563d5f63-1ecc-4c65-855c-e375f6f97f29 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.520 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007768.5196526, 563d5f63-1ecc-4c65-855c-e375f6f97f29 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.520 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] VM Resumed (Lifecycle Event)
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.523 226109 DEBUG nova.compute.manager [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.526 226109 INFO nova.virt.libvirt.driver [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance rebooted successfully.
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.526 226109 DEBUG nova.compute.manager [None req-18967329-816b-4123-9091-4fe1812f7c4d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.541 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.544 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.641 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007768.5202823, 563d5f63-1ecc-4c65-855c-e375f6f97f29 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.642 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] VM Started (Lifecycle Event)
Dec 06 07:56:08 compute-1 podman[294657]: 2025-12-06 07:56:08.680579713 +0000 UTC m=+0.048311003 container create 2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:56:08 compute-1 systemd[1]: Started libpod-conmon-2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25.scope.
Dec 06 07:56:08 compute-1 podman[294657]: 2025-12-06 07:56:08.655598826 +0000 UTC m=+0.023330146 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:56:08 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:56:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aecaccacc92ee7ddc1d4e0853deb3c2796570b5532b17cf6b36f823e75c46da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:56:08 compute-1 podman[294657]: 2025-12-06 07:56:08.773896062 +0000 UTC m=+0.141627352 container init 2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:56:08 compute-1 podman[294657]: 2025-12-06 07:56:08.779159643 +0000 UTC m=+0.146890943 container start 2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:56:08 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[294672]: [NOTICE]   (294676) : New worker (294678) forked
Dec 06 07:56:08 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[294672]: [NOTICE]   (294676) : Loading success.
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.814 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:56:08 compute-1 nova_compute[226101]: 2025-12-06 07:56:08.818 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:56:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:08.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2025625938' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2025625938' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:09.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:09 compute-1 nova_compute[226101]: 2025-12-06 07:56:09.765 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.166 226109 DEBUG nova.compute.manager [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.166 226109 DEBUG oslo_concurrency.lockutils [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.167 226109 DEBUG oslo_concurrency.lockutils [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.167 226109 DEBUG oslo_concurrency.lockutils [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.167 226109 DEBUG nova.compute.manager [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] No waiting events found dispatching network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.168 226109 WARNING nova.compute.manager [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received unexpected event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for instance with vm_state active and task_state None.
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.168 226109 DEBUG nova.compute.manager [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.168 226109 DEBUG oslo_concurrency.lockutils [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.168 226109 DEBUG oslo_concurrency.lockutils [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.169 226109 DEBUG oslo_concurrency.lockutils [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.169 226109 DEBUG nova.compute.manager [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] No waiting events found dispatching network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:56:10 compute-1 nova_compute[226101]: 2025-12-06 07:56:10.169 226109 WARNING nova.compute.manager [req-420661b9-eade-40ca-b2b1-a1525e935404 req-284a82f2-da78-4a59-afa0-5e4c2e663709 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received unexpected event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for instance with vm_state active and task_state None.
Dec 06 07:56:10 compute-1 ceph-mon[81689]: pgmap v3090: 305 pgs: 305 active+clean; 394 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 188 op/s
Dec 06 07:56:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:10.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:11.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:12 compute-1 ceph-mon[81689]: pgmap v3091: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.3 MiB/s wr, 289 op/s
Dec 06 07:56:12 compute-1 nova_compute[226101]: 2025-12-06 07:56:12.929 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:12.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:13.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:14 compute-1 nova_compute[226101]: 2025-12-06 07:56:14.769 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:14 compute-1 ceph-mon[81689]: pgmap v3092: 305 pgs: 305 active+clean; 439 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.2 MiB/s wr, 259 op/s
Dec 06 07:56:14 compute-1 ovn_controller[130279]: 2025-12-06T07:56:14Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:22:84 10.100.0.13
Dec 06 07:56:14 compute-1 ovn_controller[130279]: 2025-12-06T07:56:14Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:22:84 10.100.0.13
Dec 06 07:56:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:14.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:15.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:16.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:17 compute-1 ceph-mon[81689]: pgmap v3093: 305 pgs: 305 active+clean; 443 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.8 MiB/s wr, 246 op/s
Dec 06 07:56:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:17.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:17 compute-1 nova_compute[226101]: 2025-12-06 07:56:17.933 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:18 compute-1 nova_compute[226101]: 2025-12-06 07:56:18.492 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:18.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:19 compute-1 ceph-mon[81689]: pgmap v3094: 305 pgs: 305 active+clean; 454 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.9 MiB/s wr, 254 op/s
Dec 06 07:56:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:19.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:19 compute-1 nova_compute[226101]: 2025-12-06 07:56:19.771 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3084393564' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3084393564' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:20.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:21 compute-1 ceph-mon[81689]: pgmap v3095: 305 pgs: 305 active+clean; 461 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.2 MiB/s wr, 239 op/s
Dec 06 07:56:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:21.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:22 compute-1 ovn_controller[130279]: 2025-12-06T07:56:22Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:20:98 10.100.0.5
Dec 06 07:56:22 compute-1 ceph-mon[81689]: pgmap v3096: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 244 op/s
Dec 06 07:56:22 compute-1 nova_compute[226101]: 2025-12-06 07:56:22.597 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:22 compute-1 nova_compute[226101]: 2025-12-06 07:56:22.934 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:22.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:23.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:24 compute-1 nova_compute[226101]: 2025-12-06 07:56:24.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:24 compute-1 nova_compute[226101]: 2025-12-06 07:56:24.614 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:24 compute-1 nova_compute[226101]: 2025-12-06 07:56:24.614 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:24 compute-1 nova_compute[226101]: 2025-12-06 07:56:24.614 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:24 compute-1 nova_compute[226101]: 2025-12-06 07:56:24.614 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:56:24 compute-1 nova_compute[226101]: 2025-12-06 07:56:24.615 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:24 compute-1 nova_compute[226101]: 2025-12-06 07:56:24.776 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:24.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:56:25 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3802332635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.285 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:25 compute-1 ceph-mon[81689]: pgmap v3097: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 143 op/s
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.396 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.397 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.399 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.400 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.553 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.554 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3948MB free_disk=20.942161560058594GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.554 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.554 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:25.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.711 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 563d5f63-1ecc-4c65-855c-e375f6f97f29 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.711 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.712 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.712 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:56:25 compute-1 nova_compute[226101]: 2025-12-06 07:56:25.869 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:56:26 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3501000067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:26 compute-1 nova_compute[226101]: 2025-12-06 07:56:26.317 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:26 compute-1 nova_compute[226101]: 2025-12-06 07:56:26.325 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:56:26 compute-1 nova_compute[226101]: 2025-12-06 07:56:26.342 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:56:26 compute-1 nova_compute[226101]: 2025-12-06 07:56:26.380 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:56:26 compute-1 nova_compute[226101]: 2025-12-06 07:56:26.380 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3802332635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:26 compute-1 ceph-mon[81689]: pgmap v3098: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 728 KiB/s rd, 1.6 MiB/s wr, 118 op/s
Dec 06 07:56:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:56:26 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2046333412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:56:26 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2046333412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:26.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:27 compute-1 podman[294736]: 2025-12-06 07:56:27.105430161 +0000 UTC m=+0.064957531 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 06 07:56:27 compute-1 podman[294738]: 2025-12-06 07:56:27.121519941 +0000 UTC m=+0.088960352 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:56:27 compute-1 podman[294737]: 2025-12-06 07:56:27.121640995 +0000 UTC m=+0.082720056 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 07:56:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:27.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:28 compute-1 nova_compute[226101]: 2025-12-06 07:56:28.000 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:28 compute-1 nova_compute[226101]: 2025-12-06 07:56:28.336 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:28 compute-1 nova_compute[226101]: 2025-12-06 07:56:28.380 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:28 compute-1 nova_compute[226101]: 2025-12-06 07:56:28.380 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:56:28 compute-1 nova_compute[226101]: 2025-12-06 07:56:28.380 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:56:28 compute-1 nova_compute[226101]: 2025-12-06 07:56:28.573 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:56:28 compute-1 nova_compute[226101]: 2025-12-06 07:56:28.573 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:56:28 compute-1 nova_compute[226101]: 2025-12-06 07:56:28.573 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:56:28 compute-1 nova_compute[226101]: 2025-12-06 07:56:28.574 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3501000067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2046333412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2046333412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:29.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:29.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:29 compute-1 nova_compute[226101]: 2025-12-06 07:56:29.756 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updating instance_info_cache with network_info: [{"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:56:29 compute-1 nova_compute[226101]: 2025-12-06 07:56:29.779 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:29 compute-1 nova_compute[226101]: 2025-12-06 07:56:29.782 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-563d5f63-1ecc-4c65-855c-e375f6f97f29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:56:29 compute-1 nova_compute[226101]: 2025-12-06 07:56:29.782 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:56:29 compute-1 ceph-mon[81689]: pgmap v3099: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 700 KiB/s rd, 1.0 MiB/s wr, 105 op/s
Dec 06 07:56:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:31.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:31 compute-1 nova_compute[226101]: 2025-12-06 07:56:31.315 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:31 compute-1 nova_compute[226101]: 2025-12-06 07:56:31.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:31 compute-1 nova_compute[226101]: 2025-12-06 07:56:31.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:56:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:31.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3374446509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:32 compute-1 ceph-mon[81689]: pgmap v3100: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 579 KiB/s rd, 263 KiB/s wr, 89 op/s
Dec 06 07:56:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3455582906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:33 compute-1 nova_compute[226101]: 2025-12-06 07:56:33.004 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:33.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1791126325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:33 compute-1 ceph-mon[81689]: pgmap v3101: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 567 KiB/s rd, 217 KiB/s wr, 80 op/s
Dec 06 07:56:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2624542154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:33.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:34 compute-1 nova_compute[226101]: 2025-12-06 07:56:34.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:34 compute-1 nova_compute[226101]: 2025-12-06 07:56:34.780 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:35.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:35 compute-1 ceph-mon[81689]: pgmap v3102: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 410 KiB/s rd, 33 KiB/s wr, 51 op/s
Dec 06 07:56:35 compute-1 sshd-session[294688]: Connection closed by 165.154.55.146 port 34384 [preauth]
Dec 06 07:56:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:35.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:36 compute-1 ceph-mon[81689]: pgmap v3103: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 21 KiB/s wr, 31 op/s
Dec 06 07:56:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:37.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:37.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3087927977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:38 compute-1 nova_compute[226101]: 2025-12-06 07:56:38.007 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:38 compute-1 nova_compute[226101]: 2025-12-06 07:56:38.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:38 compute-1 ceph-mon[81689]: pgmap v3104: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 18 KiB/s wr, 18 op/s
Dec 06 07:56:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4063988645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:39.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:39.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:39 compute-1 nova_compute[226101]: 2025-12-06 07:56:39.783 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:40 compute-1 sshd[164848]: Timeout before authentication for connection from 14.103.118.136 to 38.102.83.204, pid = 292694
Dec 06 07:56:40 compute-1 nova_compute[226101]: 2025-12-06 07:56:40.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:40 compute-1 ceph-mon[81689]: pgmap v3105: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 16 KiB/s wr, 13 op/s
Dec 06 07:56:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:41.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:41.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2467485622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.695 226109 DEBUG oslo_concurrency.lockutils [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.695 226109 DEBUG oslo_concurrency.lockutils [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.696 226109 DEBUG oslo_concurrency.lockutils [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.696 226109 DEBUG oslo_concurrency.lockutils [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.696 226109 DEBUG oslo_concurrency.lockutils [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.697 226109 INFO nova.compute.manager [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Terminating instance
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.698 226109 DEBUG nova.compute.manager [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:56:42 compute-1 kernel: tap73cf92a3-d7 (unregistering): left promiscuous mode
Dec 06 07:56:42 compute-1 NetworkManager[49031]: <info>  [1765007802.7553] device (tap73cf92a3-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:56:42 compute-1 ovn_controller[130279]: 2025-12-06T07:56:42Z|00720|binding|INFO|Releasing lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 from this chassis (sb_readonly=0)
Dec 06 07:56:42 compute-1 ovn_controller[130279]: 2025-12-06T07:56:42Z|00721|binding|INFO|Setting lport 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 down in Southbound
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.758 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:42 compute-1 ovn_controller[130279]: 2025-12-06T07:56:42Z|00722|binding|INFO|Removing iface tap73cf92a3-d7 ovn-installed in OVS
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.761 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:42.767 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:20:98 10.100.0.5'], port_security=['fa:16:3e:d6:20:98 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '563d5f63-1ecc-4c65-855c-e375f6f97f29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997afd36-d3a2-430f-ba34-f342135a9bb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63df107b8bd14504974c75ba92ae469b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '88b6dfa1-842c-4e82-9a0e-5e2b4d8b3cb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2999ae76-b414-45fb-8813-4039468da309, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=73cf92a3-d7ff-4ec0-950c-c0c31afd8004) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:56:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:42.769 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 73cf92a3-d7ff-4ec0-950c-c0c31afd8004 in datapath 997afd36-d3a2-430f-ba34-f342135a9bb6 unbound from our chassis
Dec 06 07:56:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:42.772 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 997afd36-d3a2-430f-ba34-f342135a9bb6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:56:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:42.773 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d5195307-c039-4d76-902a-cabbc5c78b56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:42.774 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 namespace which is not needed anymore
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.778 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:42 compute-1 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000ad.scope: Deactivated successfully.
Dec 06 07:56:42 compute-1 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000ad.scope: Consumed 14.718s CPU time.
Dec 06 07:56:42 compute-1 systemd-machined[190302]: Machine qemu-83-instance-000000ad terminated.
Dec 06 07:56:42 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[294672]: [NOTICE]   (294676) : haproxy version is 2.8.14-c23fe91
Dec 06 07:56:42 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[294672]: [NOTICE]   (294676) : path to executable is /usr/sbin/haproxy
Dec 06 07:56:42 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[294672]: [WARNING]  (294676) : Exiting Master process...
Dec 06 07:56:42 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[294672]: [ALERT]    (294676) : Current worker (294678) exited with code 143 (Terminated)
Dec 06 07:56:42 compute-1 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[294672]: [WARNING]  (294676) : All workers exited. Exiting... (0)
Dec 06 07:56:42 compute-1 systemd[1]: libpod-2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25.scope: Deactivated successfully.
Dec 06 07:56:42 compute-1 podman[294821]: 2025-12-06 07:56:42.90009569 +0000 UTC m=+0.043524496 container died 2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.932 226109 INFO nova.virt.libvirt.driver [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Instance destroyed successfully.
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.933 226109 DEBUG nova.objects.instance [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'resources' on Instance uuid 563d5f63-1ecc-4c65-855c-e375f6f97f29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:42 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25-userdata-shm.mount: Deactivated successfully.
Dec 06 07:56:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-0aecaccacc92ee7ddc1d4e0853deb3c2796570b5532b17cf6b36f823e75c46da-merged.mount: Deactivated successfully.
Dec 06 07:56:42 compute-1 podman[294821]: 2025-12-06 07:56:42.949620767 +0000 UTC m=+0.093049563 container cleanup 2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:56:42 compute-1 systemd[1]: libpod-conmon-2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25.scope: Deactivated successfully.
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.967 226109 DEBUG nova.virt.libvirt.vif [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-1295588604',display_name='tempest-AttachVolumeTestJSON-server-1295588604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-1295588604',id=173,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkZWnDd2ourzjHlcEK7cHRdQV+nbLAhyX6fgrqVnvZqgS31pND6qBaKlg3yv3dw5xYj9PWy81LB50kUPIrMKeW7VSpCHVdreKc29y6jmgbqwFAxDIw3ZAxRTEFcoovdYA==',key_name='tempest-keypair-1313997358',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:54:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-dfyu9z67',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:56:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=563d5f63-1ecc-4c65-855c-e375f6f97f29,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.968 226109 DEBUG nova.network.os_vif_util [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "address": "fa:16:3e:d6:20:98", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73cf92a3-d7", "ovs_interfaceid": "73cf92a3-d7ff-4ec0-950c-c0c31afd8004", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.969 226109 DEBUG nova.network.os_vif_util [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.970 226109 DEBUG os_vif [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.972 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.972 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73cf92a3-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.976 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.978 226109 DEBUG nova.compute.manager [req-babe5fc3-5a69-428b-bcae-5111b9998be4 req-f7cf8a77-4a7b-4dca-950e-8832dafe398e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-unplugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.979 226109 DEBUG oslo_concurrency.lockutils [req-babe5fc3-5a69-428b-bcae-5111b9998be4 req-f7cf8a77-4a7b-4dca-950e-8832dafe398e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.979 226109 DEBUG oslo_concurrency.lockutils [req-babe5fc3-5a69-428b-bcae-5111b9998be4 req-f7cf8a77-4a7b-4dca-950e-8832dafe398e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.979 226109 DEBUG oslo_concurrency.lockutils [req-babe5fc3-5a69-428b-bcae-5111b9998be4 req-f7cf8a77-4a7b-4dca-950e-8832dafe398e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.979 226109 DEBUG nova.compute.manager [req-babe5fc3-5a69-428b-bcae-5111b9998be4 req-f7cf8a77-4a7b-4dca-950e-8832dafe398e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] No waiting events found dispatching network-vif-unplugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.979 226109 DEBUG nova.compute.manager [req-babe5fc3-5a69-428b-bcae-5111b9998be4 req-f7cf8a77-4a7b-4dca-950e-8832dafe398e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-unplugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:56:42 compute-1 nova_compute[226101]: 2025-12-06 07:56:42.980 226109 INFO os_vif [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:20:98,bridge_name='br-int',has_traffic_filtering=True,id=73cf92a3-d7ff-4ec0-950c-c0c31afd8004,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73cf92a3-d7')
Dec 06 07:56:43 compute-1 podman[294860]: 2025-12-06 07:56:43.015219783 +0000 UTC m=+0.042951881 container remove 2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:56:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:43.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:43.021 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[72dbc8fb-b1fe-47d5-aaad-1b3f5180595e]: (4, ('Sat Dec  6 07:56:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 (2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25)\n2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25\nSat Dec  6 07:56:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 (2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25)\n2989426a3a5547a25881075b0e33633c753a6c3c79d068ffd2226a3439547b25\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:43.022 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[135968df-d556-405c-8c99-27a33d3a8e85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:43.023 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997afd36-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:43 compute-1 nova_compute[226101]: 2025-12-06 07:56:43.025 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:43 compute-1 kernel: tap997afd36-d0: left promiscuous mode
Dec 06 07:56:43 compute-1 nova_compute[226101]: 2025-12-06 07:56:43.040 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:43.042 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[563d9f4e-db3b-4d38-85fb-d18d9dcc1a1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:43.062 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[04dd038b-934b-4b22-8f8f-24fbdf59cc2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:43.064 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3f7177b5-4f47-427e-b83e-7e60395f97b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:43.079 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2474db72-7d91-4a97-881c-e49db3cd1e56]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 804755, 'reachable_time': 16420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294893, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:43.082 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:56:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:43.082 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[59e3d467-7987-4a51-a586-99cb5824c1af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:43 compute-1 systemd[1]: run-netns-ovnmeta\x2d997afd36\x2dd3a2\x2d430f\x2dba34\x2df342135a9bb6.mount: Deactivated successfully.
Dec 06 07:56:43 compute-1 nova_compute[226101]: 2025-12-06 07:56:43.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:43.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:43 compute-1 ceph-mon[81689]: pgmap v3106: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 KiB/s rd, 19 KiB/s wr, 7 op/s
Dec 06 07:56:44 compute-1 nova_compute[226101]: 2025-12-06 07:56:44.079 226109 INFO nova.virt.libvirt.driver [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Deleting instance files /var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29_del
Dec 06 07:56:44 compute-1 nova_compute[226101]: 2025-12-06 07:56:44.080 226109 INFO nova.virt.libvirt.driver [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Deletion of /var/lib/nova/instances/563d5f63-1ecc-4c65-855c-e375f6f97f29_del complete
Dec 06 07:56:44 compute-1 nova_compute[226101]: 2025-12-06 07:56:44.132 226109 INFO nova.compute.manager [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Took 1.43 seconds to destroy the instance on the hypervisor.
Dec 06 07:56:44 compute-1 nova_compute[226101]: 2025-12-06 07:56:44.133 226109 DEBUG oslo.service.loopingcall [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:56:44 compute-1 nova_compute[226101]: 2025-12-06 07:56:44.133 226109 DEBUG nova.compute.manager [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:56:44 compute-1 nova_compute[226101]: 2025-12-06 07:56:44.133 226109 DEBUG nova.network.neutron [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:56:44 compute-1 nova_compute[226101]: 2025-12-06 07:56:44.785 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:45.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:45 compute-1 ceph-mon[81689]: pgmap v3107: 305 pgs: 305 active+clean; 469 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 192 KiB/s wr, 5 op/s
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.081 226109 DEBUG nova.compute.manager [req-8f17967f-cf96-4b8e-808f-fa0017f458d1 req-fa6a5983-71ca-4e1e-9a05-586271867d9e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.082 226109 DEBUG oslo_concurrency.lockutils [req-8f17967f-cf96-4b8e-808f-fa0017f458d1 req-fa6a5983-71ca-4e1e-9a05-586271867d9e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.082 226109 DEBUG oslo_concurrency.lockutils [req-8f17967f-cf96-4b8e-808f-fa0017f458d1 req-fa6a5983-71ca-4e1e-9a05-586271867d9e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.082 226109 DEBUG oslo_concurrency.lockutils [req-8f17967f-cf96-4b8e-808f-fa0017f458d1 req-fa6a5983-71ca-4e1e-9a05-586271867d9e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.082 226109 DEBUG nova.compute.manager [req-8f17967f-cf96-4b8e-808f-fa0017f458d1 req-fa6a5983-71ca-4e1e-9a05-586271867d9e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] No waiting events found dispatching network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.083 226109 WARNING nova.compute.manager [req-8f17967f-cf96-4b8e-808f-fa0017f458d1 req-fa6a5983-71ca-4e1e-9a05-586271867d9e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received unexpected event network-vif-plugged-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 for instance with vm_state active and task_state deleting.
Dec 06 07:56:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:45.238 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:56:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:45.238 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.243 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.290 226109 DEBUG nova.network.neutron [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.310 226109 INFO nova.compute.manager [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Took 1.18 seconds to deallocate network for instance.
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.364 226109 DEBUG oslo_concurrency.lockutils [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.364 226109 DEBUG oslo_concurrency.lockutils [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.446 226109 DEBUG oslo_concurrency.processutils [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:45.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:56:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1981897599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.920 226109 DEBUG oslo_concurrency.processutils [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.926 226109 DEBUG nova.compute.provider_tree [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.949 226109 DEBUG nova.scheduler.client.report [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:56:45 compute-1 nova_compute[226101]: 2025-12-06 07:56:45.973 226109 DEBUG oslo_concurrency.lockutils [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:46 compute-1 nova_compute[226101]: 2025-12-06 07:56:46.003 226109 INFO nova.scheduler.client.report [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Deleted allocations for instance 563d5f63-1ecc-4c65-855c-e375f6f97f29
Dec 06 07:56:46 compute-1 nova_compute[226101]: 2025-12-06 07:56:46.080 226109 DEBUG oslo_concurrency.lockutils [None req-3352c22e-b297-4ba8-bb4f-a36ee3c0c937 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "563d5f63-1ecc-4c65-855c-e375f6f97f29" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1981897599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:47.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:47 compute-1 nova_compute[226101]: 2025-12-06 07:56:47.172 226109 DEBUG nova.compute.manager [req-2b11ea79-4b13-45fb-8b8d-208de4d4bf3d req-d6cbba1f-edda-4f67-bbea-002ffda74fd6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Received event network-vif-deleted-73cf92a3-d7ff-4ec0-950c-c0c31afd8004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:47 compute-1 nova_compute[226101]: 2025-12-06 07:56:47.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:47.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:47 compute-1 ceph-mon[81689]: pgmap v3108: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 895 KiB/s wr, 25 op/s
Dec 06 07:56:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3066351593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2783866018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:47 compute-1 nova_compute[226101]: 2025-12-06 07:56:47.975 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:48 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Dec 06 07:56:48 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:48.979124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:56:48 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Dec 06 07:56:48 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007808979177, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2368, "num_deletes": 262, "total_data_size": 5457712, "memory_usage": 5541184, "flush_reason": "Manual Compaction"}
Dec 06 07:56:48 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Dec 06 07:56:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:49.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007809148345, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 3565501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64549, "largest_seqno": 66912, "table_properties": {"data_size": 3556074, "index_size": 5856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20437, "raw_average_key_size": 20, "raw_value_size": 3536743, "raw_average_value_size": 3543, "num_data_blocks": 254, "num_entries": 998, "num_filter_entries": 998, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007613, "oldest_key_time": 1765007613, "file_creation_time": 1765007808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 169256 microseconds, and 13687 cpu microseconds.
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.148387) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 3565501 bytes OK
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.148405) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.232131) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.232174) EVENT_LOG_v1 {"time_micros": 1765007809232163, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.232199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 5447150, prev total WAL file size 5449101, number of live WAL files 2.
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.233832) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323734' seq:72057594037927935, type:22 .. '6C6F676D0032353238' seq:0, type:0; will stop at (end)
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(3481KB)], [129(10MB)]
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007809233865, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 14653773, "oldest_snapshot_seqno": -1}
Dec 06 07:56:49 compute-1 ceph-mon[81689]: pgmap v3109: 305 pgs: 305 active+clean; 444 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Dec 06 07:56:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:49 compute-1 nova_compute[226101]: 2025-12-06 07:56:49.787 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 9817 keys, 14481927 bytes, temperature: kUnknown
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007809962941, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 14481927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14416042, "index_size": 40236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 258218, "raw_average_key_size": 26, "raw_value_size": 14241174, "raw_average_value_size": 1450, "num_data_blocks": 1547, "num_entries": 9817, "num_filter_entries": 9817, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765007809, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.963236) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 14481927 bytes
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.964490) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 20.1 rd, 19.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 10.6 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(8.2) write-amplify(4.1) OK, records in: 10359, records dropped: 542 output_compression: NoCompression
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.964511) EVENT_LOG_v1 {"time_micros": 1765007809964500, "job": 82, "event": "compaction_finished", "compaction_time_micros": 729136, "compaction_time_cpu_micros": 30707, "output_level": 6, "num_output_files": 1, "total_output_size": 14481927, "num_input_records": 10359, "num_output_records": 9817, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007809965249, "job": 82, "event": "table_file_deletion", "file_number": 131}
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007809967466, "job": 82, "event": "table_file_deletion", "file_number": 129}
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.233701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.967532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.967537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.967539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.967541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:49 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:49.967543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:50 compute-1 ceph-mon[81689]: pgmap v3110: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.0 MiB/s wr, 64 op/s
Dec 06 07:56:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:51.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/826770176' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/826770176' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:51.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:52 compute-1 ceph-mon[81689]: pgmap v3111: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Dec 06 07:56:52 compute-1 nova_compute[226101]: 2025-12-06 07:56:52.995 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:53.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:56:53.240 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.704336) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813704642, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 314, "num_deletes": 251, "total_data_size": 169928, "memory_usage": 176512, "flush_reason": "Manual Compaction"}
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813725937, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 111527, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66917, "largest_seqno": 67226, "table_properties": {"data_size": 109552, "index_size": 202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5114, "raw_average_key_size": 18, "raw_value_size": 105652, "raw_average_value_size": 380, "num_data_blocks": 9, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007809, "oldest_key_time": 1765007809, "file_creation_time": 1765007813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 21720 microseconds, and 1159 cpu microseconds.
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:56:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:53.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.726066) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 111527 bytes OK
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.726111) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.768847) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.768896) EVENT_LOG_v1 {"time_micros": 1765007813768885, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.768923) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 167671, prev total WAL file size 167671, number of live WAL files 2.
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.769727) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(108KB)], [132(13MB)]
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813769794, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 14593454, "oldest_snapshot_seqno": -1}
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 9586 keys, 12657678 bytes, temperature: kUnknown
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813918375, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 12657678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12595068, "index_size": 37537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24005, "raw_key_size": 254113, "raw_average_key_size": 26, "raw_value_size": 12425933, "raw_average_value_size": 1296, "num_data_blocks": 1425, "num_entries": 9586, "num_filter_entries": 9586, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765007813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.918671) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 12657678 bytes
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.920581) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 98.1 rd, 85.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.8 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(244.3) write-amplify(113.5) OK, records in: 10095, records dropped: 509 output_compression: NoCompression
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.920597) EVENT_LOG_v1 {"time_micros": 1765007813920590, "job": 84, "event": "compaction_finished", "compaction_time_micros": 148691, "compaction_time_cpu_micros": 28998, "output_level": 6, "num_output_files": 1, "total_output_size": 12657678, "num_input_records": 10095, "num_output_records": 9586, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813920716, "job": 84, "event": "table_file_deletion", "file_number": 134}
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813922928, "job": 84, "event": "table_file_deletion", "file_number": 132}
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.769613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.923009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.923013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.923014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.923016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:56:53.923017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:54 compute-1 nova_compute[226101]: 2025-12-06 07:56:54.789 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:55.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:55 compute-1 ceph-mon[81689]: pgmap v3112: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Dec 06 07:56:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1180554779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1180554779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/805143102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:55.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:56 compute-1 ceph-mon[81689]: pgmap v3113: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 159 op/s
Dec 06 07:56:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:57.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:57 compute-1 nova_compute[226101]: 2025-12-06 07:56:57.931 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007802.929316, 563d5f63-1ecc-4c65-855c-e375f6f97f29 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:56:57 compute-1 nova_compute[226101]: 2025-12-06 07:56:57.931 226109 INFO nova.compute.manager [-] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] VM Stopped (Lifecycle Event)
Dec 06 07:56:57 compute-1 nova_compute[226101]: 2025-12-06 07:56:57.953 226109 DEBUG nova.compute.manager [None req-7ce152f8-3971-44e3-9f82-4840ad26a7fd - - - - - -] [instance: 563d5f63-1ecc-4c65-855c-e375f6f97f29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:56:57 compute-1 nova_compute[226101]: 2025-12-06 07:56:57.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:58 compute-1 podman[294920]: 2025-12-06 07:56:58.102608443 +0000 UTC m=+0.079727115 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:56:58 compute-1 podman[294918]: 2025-12-06 07:56:58.102909792 +0000 UTC m=+0.084206676 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 07:56:58 compute-1 podman[294919]: 2025-12-06 07:56:58.102685925 +0000 UTC m=+0.083789643 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:56:58 compute-1 ceph-mon[81689]: pgmap v3114: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 152 op/s
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.008 226109 DEBUG oslo_concurrency.lockutils [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.009 226109 DEBUG oslo_concurrency.lockutils [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:59.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.138 226109 DEBUG nova.objects.instance [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.211 226109 DEBUG oslo_concurrency.lockutils [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.711 226109 DEBUG oslo_concurrency.lockutils [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.711 226109 DEBUG oslo_concurrency.lockutils [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.712 226109 INFO nova.compute.manager [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Attaching volume 9cadf3d0-8133-453d-9dbc-ba304e3665f2 to /dev/vdb
Dec 06 07:56:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:56:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:59.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.791 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.919 226109 DEBUG os_brick.utils [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.921 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.931 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.932 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[095fa556-2428-4829-a2a9-619efe98f88f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.933 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.940 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.940 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[b4636cb1-a1d2-4496-b73e-c1d6db4a4c37]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.942 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.950 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.950 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[e45ad328-9cdc-475f-bd99-057583b6002a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.952 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[af3f634c-f0c1-4bc5-b29c-e3d60651c4de]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.953 226109 DEBUG oslo_concurrency.processutils [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.980 226109 DEBUG oslo_concurrency.processutils [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.983 226109 DEBUG os_brick.initiator.connectors.lightos [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.984 226109 DEBUG os_brick.initiator.connectors.lightos [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.984 226109 DEBUG os_brick.initiator.connectors.lightos [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.984 226109 DEBUG os_brick.utils [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:56:59 compute-1 nova_compute[226101]: 2025-12-06 07:56:59.985 226109 DEBUG nova.virt.block_device [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating existing volume attachment record: 59aaa346-0c9c-4675-ad79-24f49c115eba _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:57:00 compute-1 nova_compute[226101]: 2025-12-06 07:57:00.700 226109 DEBUG nova.objects.instance [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:00 compute-1 nova_compute[226101]: 2025-12-06 07:57:00.726 226109 DEBUG nova.virt.libvirt.driver [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Attempting to attach volume 9cadf3d0-8133-453d-9dbc-ba304e3665f2 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:57:00 compute-1 nova_compute[226101]: 2025-12-06 07:57:00.728 226109 DEBUG nova.virt.libvirt.guest [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:57:00 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:00 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-9cadf3d0-8133-453d-9dbc-ba304e3665f2">
Dec 06 07:57:00 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:00 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:00 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:00 compute-1 nova_compute[226101]:   </source>
Dec 06 07:57:00 compute-1 nova_compute[226101]:   <auth username="openstack">
Dec 06 07:57:00 compute-1 nova_compute[226101]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:57:00 compute-1 nova_compute[226101]:   </auth>
Dec 06 07:57:00 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:57:00 compute-1 nova_compute[226101]:   <serial>9cadf3d0-8133-453d-9dbc-ba304e3665f2</serial>
Dec 06 07:57:00 compute-1 nova_compute[226101]: </disk>
Dec 06 07:57:00 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:57:00 compute-1 nova_compute[226101]: 2025-12-06 07:57:00.935 226109 DEBUG nova.virt.libvirt.driver [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:00 compute-1 nova_compute[226101]: 2025-12-06 07:57:00.935 226109 DEBUG nova.virt.libvirt.driver [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:00 compute-1 nova_compute[226101]: 2025-12-06 07:57:00.935 226109 DEBUG nova.virt.libvirt.driver [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:00 compute-1 nova_compute[226101]: 2025-12-06 07:57:00.935 226109 DEBUG nova.virt.libvirt.driver [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No VIF found with MAC fa:16:3e:57:22:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:57:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:01.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:01 compute-1 nova_compute[226101]: 2025-12-06 07:57:01.143 226109 DEBUG oslo_concurrency.lockutils [None req-2e94a754-7c8c-4548-a3ad-f04b6312305d e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:01 compute-1 ceph-mon[81689]: pgmap v3115: 305 pgs: 305 active+clean; 439 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 965 KiB/s wr, 137 op/s
Dec 06 07:57:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2066413510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2001879348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2862865191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:01.677 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:01.678 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:01.679 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:01.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:02 compute-1 ceph-mon[81689]: pgmap v3116: 305 pgs: 305 active+clean; 472 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 139 op/s
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.387 226109 DEBUG oslo_concurrency.lockutils [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.388 226109 DEBUG oslo_concurrency.lockutils [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.403 226109 DEBUG nova.objects.instance [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.450 226109 DEBUG oslo_concurrency.lockutils [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.658 226109 DEBUG oslo_concurrency.lockutils [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.659 226109 DEBUG oslo_concurrency.lockutils [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.660 226109 INFO nova.compute.manager [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Attaching volume 8eb73d4f-1376-44e4-bc55-b6d259ff218b to /dev/vdc
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.771 226109 DEBUG os_brick.utils [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.772 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.782 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.782 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[aefa75fa-1811-4e02-adec-bc003001f5eb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.783 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.790 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.790 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[36cd7d78-1d53-4d97-a109-fdba941db986]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.792 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.798 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.798 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[49d5292b-4b88-4d37-a424-b1abe6f818d9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.799 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[95372aff-7220-43f5-9f40-6496990c9721]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.800 226109 DEBUG oslo_concurrency.processutils [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.825 226109 DEBUG oslo_concurrency.processutils [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.828 226109 DEBUG os_brick.initiator.connectors.lightos [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.828 226109 DEBUG os_brick.initiator.connectors.lightos [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.828 226109 DEBUG os_brick.initiator.connectors.lightos [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.829 226109 DEBUG os_brick.utils [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
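The connector-properties dict in the previous line is what os-brick hands back to nova for the volume attachment record; note the LIGHTOS connector's ECONNREFUSED above is non-fatal and merely leaves 'found_dsc' empty. A sketch of the same lookup through os-brick's public helper (the 'sudo' root helper is an assumption; the IP is the one from the logged dict):

    # Sketch: reproducing the get_connector_properties call traced above.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo',        # privilege escalation helper (assumption)
        my_ip='192.168.122.101',   # matches 'ip' in the logged dict
        multipath=True,            # matches 'multipath': True
        enforce_multipath=False,   # do not fail if multipathd is unusable
    )
    print(props['initiator'])      # iqn.1994-05.com.redhat:...
    print(props.get('nqn'))        # NVMe host NQN, as in the log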
Dec 06 07:57:02 compute-1 nova_compute[226101]: 2025-12-06 07:57:02.829 226109 DEBUG nova.virt.block_device [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating existing volume attachment record: 7e7be027-efe1-4de5-ac6d-30c958af2890 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:57:03 compute-1 nova_compute[226101]: 2025-12-06 07:57:03.001 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:03.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:03.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
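The anonymous HEAD / requests from 192.168.122.100 and .102 recur roughly every two seconds throughout this section; they look like load-balancer health probes against radosgw rather than user traffic. A probe like this can be reproduced with a plain HTTP client (the port is an assumption, it depends on the rgw frontend configuration):

    # Hypothetical health probe against radosgw's beast frontend.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.101', 8080, timeout=5)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 while the gateway is up
    conn.close()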
Dec 06 07:57:04 compute-1 sshd-session[295011]: Received disconnect from 136.112.8.45 port 60418:11: Bye Bye [preauth]
Dec 06 07:57:04 compute-1 sshd-session[295011]: Disconnected from authenticating user root 136.112.8.45 port 60418 [preauth]
Dec 06 07:57:04 compute-1 nova_compute[226101]: 2025-12-06 07:57:04.137 226109 DEBUG nova.objects.instance [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:04 compute-1 nova_compute[226101]: 2025-12-06 07:57:04.198 226109 DEBUG nova.virt.libvirt.driver [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Attempting to attach volume 8eb73d4f-1376-44e4-bc55-b6d259ff218b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:57:04 compute-1 nova_compute[226101]: 2025-12-06 07:57:04.201 226109 DEBUG nova.virt.libvirt.guest [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:57:04 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:04 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-8eb73d4f-1376-44e4-bc55-b6d259ff218b">
Dec 06 07:57:04 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:04 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:04 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:04 compute-1 nova_compute[226101]:   </source>
Dec 06 07:57:04 compute-1 nova_compute[226101]:   <auth username="openstack">
Dec 06 07:57:04 compute-1 nova_compute[226101]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:57:04 compute-1 nova_compute[226101]:   </auth>
Dec 06 07:57:04 compute-1 nova_compute[226101]:   <target dev="vdc" bus="virtio"/>
Dec 06 07:57:04 compute-1 nova_compute[226101]:   <serial>8eb73d4f-1376-44e4-bc55-b6d259ff218b</serial>
Dec 06 07:57:04 compute-1 nova_compute[226101]: </disk>
Dec 06 07:57:04 compute-1 nova_compute[226101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
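The RBD disk XML above is applied to both the live domain and its persistent definition in one call; the earlier warning means discard="unmap" was requested on a virtio-blk target, so trim commands will not actually reach the device. A sketch of the underlying libvirt-python call (the connection URI and the omitted error handling are assumptions):

    # Sketch: the libvirt call behind nova's guest.attach_device.
    import libvirt

    disk_xml = '<disk type="network" device="disk"> ... </disk>'  # XML as logged

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('5ae1d2f2-f2e3-43b3-8b37-94095588eb3a')
    # Live + persistent, matching nova's attach to both domain configs.
    flags = libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG
    dom.attachDeviceFlags(disk_xml, flags)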
Dec 06 07:57:04 compute-1 nova_compute[226101]: 2025-12-06 07:57:04.828 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:05.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:57:05 compute-1 sudo[295042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:05 compute-1 sudo[295042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:05 compute-1 sudo[295042]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:05 compute-1 sudo[295067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:57:05 compute-1 sudo[295067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:05 compute-1 sudo[295067]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:05 compute-1 sudo[295092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:05 compute-1 sudo[295092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:05 compute-1 sudo[295092]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:05 compute-1 sudo[295117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:57:05 compute-1 sudo[295117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:05 compute-1 nova_compute[226101]: 2025-12-06 07:57:05.333 226109 DEBUG nova.virt.libvirt.driver [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:05 compute-1 nova_compute[226101]: 2025-12-06 07:57:05.334 226109 DEBUG nova.virt.libvirt.driver [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:05 compute-1 nova_compute[226101]: 2025-12-06 07:57:05.334 226109 DEBUG nova.virt.libvirt.driver [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:05 compute-1 nova_compute[226101]: 2025-12-06 07:57:05.334 226109 DEBUG nova.virt.libvirt.driver [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:05 compute-1 nova_compute[226101]: 2025-12-06 07:57:05.334 226109 DEBUG nova.virt.libvirt.driver [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No VIF found with MAC fa:16:3e:57:22:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:57:05 compute-1 sshd-session[295020]: Received disconnect from 154.219.116.39 port 33468:11: Bye Bye [preauth]
Dec 06 07:57:05 compute-1 ceph-mon[81689]: pgmap v3117: 305 pgs: 305 active+clean; 479 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 696 KiB/s rd, 2.4 MiB/s wr, 86 op/s
Dec 06 07:57:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3478441018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:05 compute-1 sshd-session[295020]: Disconnected from authenticating user root 154.219.116.39 port 33468 [preauth]
Dec 06 07:57:05 compute-1 sudo[295117]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:05.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:05 compute-1 nova_compute[226101]: 2025-12-06 07:57:05.907 226109 DEBUG oslo_concurrency.lockutils [None req-df063ad8-3de8-490f-a0c0-77df2365e4a2 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
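The 'held 3.248s' line closes the critical section opened when the attach began: nova serializes attach and detach per instance by locking on the instance UUID. The equivalent oslo.concurrency pattern, sketched with the UUID from the log (nova itself wraps do_attach_volume in a synchronized decorator rather than an inline context manager):

    # Sketch of the per-instance lock visible in the lockutils DEBUG lines.
    from oslo_concurrency import lockutils

    instance_uuid = '5ae1d2f2-f2e3-43b3-8b37-94095588eb3a'

    with lockutils.lock(instance_uuid):
        # Volume attach/detach work happens while the lock is held;
        # acquisition and release are what the log lines above record.
        pass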
Dec 06 07:57:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:07.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:57:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:07.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:57:08 compute-1 nova_compute[226101]: 2025-12-06 07:57:08.053 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:08 compute-1 nova_compute[226101]: 2025-12-06 07:57:08.782 226109 DEBUG nova.compute.manager [req-2234fdc1-d261-4a34-ba94-8ad6a1abf29c req-aee46672-6bd1-48eb-a504-eb241dadb653 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:08 compute-1 nova_compute[226101]: 2025-12-06 07:57:08.783 226109 DEBUG nova.compute.manager [req-2234fdc1-d261-4a34-ba94-8ad6a1abf29c req-aee46672-6bd1-48eb-a504-eb241dadb653 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing instance network info cache due to event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:57:08 compute-1 nova_compute[226101]: 2025-12-06 07:57:08.783 226109 DEBUG oslo_concurrency.lockutils [req-2234fdc1-d261-4a34-ba94-8ad6a1abf29c req-aee46672-6bd1-48eb-a504-eb241dadb653 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:08 compute-1 nova_compute[226101]: 2025-12-06 07:57:08.783 226109 DEBUG oslo_concurrency.lockutils [req-2234fdc1-d261-4a34-ba94-8ad6a1abf29c req-aee46672-6bd1-48eb-a504-eb241dadb653 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:08 compute-1 nova_compute[226101]: 2025-12-06 07:57:08.783 226109 DEBUG nova.network.neutron [req-2234fdc1-d261-4a34-ba94-8ad6a1abf29c req-aee46672-6bd1-48eb-a504-eb241dadb653 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:57:08 compute-1 ceph-mon[81689]: pgmap v3118: 305 pgs: 305 active+clean; 501 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 622 KiB/s rd, 3.6 MiB/s wr, 110 op/s
Dec 06 07:57:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:57:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:57:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:57:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:57:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:57:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:09.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:09 compute-1 ceph-mon[81689]: pgmap v3119: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 385 KiB/s rd, 3.9 MiB/s wr, 115 op/s
Dec 06 07:57:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2468210167' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:57:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2468210167' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:57:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:09.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:57:09 compute-1 nova_compute[226101]: 2025-12-06 07:57:09.830 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:10 compute-1 nova_compute[226101]: 2025-12-06 07:57:10.172 226109 DEBUG nova.network.neutron [req-2234fdc1-d261-4a34-ba94-8ad6a1abf29c req-aee46672-6bd1-48eb-a504-eb241dadb653 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updated VIF entry in instance network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:57:10 compute-1 nova_compute[226101]: 2025-12-06 07:57:10.173 226109 DEBUG nova.network.neutron [req-2234fdc1-d261-4a34-ba94-8ad6a1abf29c req-aee46672-6bd1-48eb-a504-eb241dadb653 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating instance_info_cache with network_info: [{"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:10 compute-1 nova_compute[226101]: 2025-12-06 07:57:10.202 226109 DEBUG oslo_concurrency.lockutils [req-2234fdc1-d261-4a34-ba94-8ad6a1abf29c req-aee46672-6bd1-48eb-a504-eb241dadb653 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:11 compute-1 ceph-mon[81689]: pgmap v3120: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 131 op/s
Dec 06 07:57:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:11.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:11 compute-1 nova_compute[226101]: 2025-12-06 07:57:11.101 226109 DEBUG nova.compute.manager [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:11 compute-1 nova_compute[226101]: 2025-12-06 07:57:11.102 226109 DEBUG nova.compute.manager [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing instance network info cache due to event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:57:11 compute-1 nova_compute[226101]: 2025-12-06 07:57:11.102 226109 DEBUG oslo_concurrency.lockutils [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:11 compute-1 nova_compute[226101]: 2025-12-06 07:57:11.102 226109 DEBUG oslo_concurrency.lockutils [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:11 compute-1 nova_compute[226101]: 2025-12-06 07:57:11.102 226109 DEBUG nova.network.neutron [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:57:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:11.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:12 compute-1 nova_compute[226101]: 2025-12-06 07:57:12.124 226109 DEBUG nova.network.neutron [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updated VIF entry in instance network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:57:12 compute-1 nova_compute[226101]: 2025-12-06 07:57:12.124 226109 DEBUG nova.network.neutron [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating instance_info_cache with network_info: [{"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:12 compute-1 nova_compute[226101]: 2025-12-06 07:57:12.138 226109 DEBUG oslo_concurrency.lockutils [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:12 compute-1 nova_compute[226101]: 2025-12-06 07:57:12.139 226109 DEBUG nova.compute.manager [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:12 compute-1 nova_compute[226101]: 2025-12-06 07:57:12.139 226109 DEBUG nova.compute.manager [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing instance network info cache due to event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:57:12 compute-1 nova_compute[226101]: 2025-12-06 07:57:12.139 226109 DEBUG oslo_concurrency.lockutils [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:12 compute-1 nova_compute[226101]: 2025-12-06 07:57:12.140 226109 DEBUG oslo_concurrency.lockutils [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:12 compute-1 nova_compute[226101]: 2025-12-06 07:57:12.140 226109 DEBUG nova.network.neutron [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:57:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:13.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:13 compute-1 nova_compute[226101]: 2025-12-06 07:57:13.055 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:13 compute-1 nova_compute[226101]: 2025-12-06 07:57:13.284 226109 DEBUG nova.compute.manager [req-2b1d22a9-e638-4287-81fa-ede55b858240 req-150fcb95-5179-4682-ad17-9945c398656e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:13 compute-1 nova_compute[226101]: 2025-12-06 07:57:13.284 226109 DEBUG nova.compute.manager [req-2b1d22a9-e638-4287-81fa-ede55b858240 req-150fcb95-5179-4682-ad17-9945c398656e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing instance network info cache due to event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:57:13 compute-1 nova_compute[226101]: 2025-12-06 07:57:13.285 226109 DEBUG oslo_concurrency.lockutils [req-2b1d22a9-e638-4287-81fa-ede55b858240 req-150fcb95-5179-4682-ad17-9945c398656e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:13 compute-1 nova_compute[226101]: 2025-12-06 07:57:13.589 226109 DEBUG nova.network.neutron [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updated VIF entry in instance network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:57:13 compute-1 nova_compute[226101]: 2025-12-06 07:57:13.590 226109 DEBUG nova.network.neutron [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating instance_info_cache with network_info: [{"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:13 compute-1 nova_compute[226101]: 2025-12-06 07:57:13.606 226109 DEBUG oslo_concurrency.lockutils [req-ff79725a-58a9-4ec6-b5e1-acb0f7ab53a2 req-ebf147d6-3975-40d1-9a1f-393ba4405936 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:13 compute-1 nova_compute[226101]: 2025-12-06 07:57:13.607 226109 DEBUG oslo_concurrency.lockutils [req-2b1d22a9-e638-4287-81fa-ede55b858240 req-150fcb95-5179-4682-ad17-9945c398656e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:13 compute-1 nova_compute[226101]: 2025-12-06 07:57:13.607 226109 DEBUG nova.network.neutron [req-2b1d22a9-e638-4287-81fa-ede55b858240 req-150fcb95-5179-4682-ad17-9945c398656e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:57:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:14 compute-1 ceph-mon[81689]: pgmap v3121: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 153 op/s
Dec 06 07:57:14 compute-1 nova_compute[226101]: 2025-12-06 07:57:14.827 226109 DEBUG nova.network.neutron [req-2b1d22a9-e638-4287-81fa-ede55b858240 req-150fcb95-5179-4682-ad17-9945c398656e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updated VIF entry in instance network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:57:14 compute-1 nova_compute[226101]: 2025-12-06 07:57:14.828 226109 DEBUG nova.network.neutron [req-2b1d22a9-e638-4287-81fa-ede55b858240 req-150fcb95-5179-4682-ad17-9945c398656e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating instance_info_cache with network_info: [{"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:14 compute-1 nova_compute[226101]: 2025-12-06 07:57:14.832 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:14 compute-1 nova_compute[226101]: 2025-12-06 07:57:14.850 226109 DEBUG oslo_concurrency.lockutils [req-2b1d22a9-e638-4287-81fa-ede55b858240 req-150fcb95-5179-4682-ad17-9945c398656e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:15.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:15.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:16 compute-1 ceph-mon[81689]: pgmap v3122: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 06 07:57:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:16 compute-1 sudo[295173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:16 compute-1 sudo[295173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:16 compute-1 sudo[295173]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:16 compute-1 sudo[295198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:57:16 compute-1 sudo[295198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:16 compute-1 sudo[295198]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:17.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:57:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:17.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:17 compute-1 ceph-mon[81689]: pgmap v3123: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 123 op/s
Dec 06 07:57:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.058 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.329 226109 DEBUG oslo_concurrency.lockutils [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.329 226109 DEBUG oslo_concurrency.lockutils [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.364 226109 INFO nova.compute.manager [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Detaching volume 9cadf3d0-8133-453d-9dbc-ba304e3665f2
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.533 226109 INFO nova.virt.block_device [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Attempting to driver detach volume 9cadf3d0-8133-453d-9dbc-ba304e3665f2 from mountpoint /dev/vdb
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.544 226109 DEBUG nova.virt.libvirt.driver [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Attempting to detach device vdb from instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.544 226109 DEBUG nova.virt.libvirt.guest [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:57:18 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-9cadf3d0-8133-453d-9dbc-ba304e3665f2">
Dec 06 07:57:18 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]:   </source>
Dec 06 07:57:18 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]:   <serial>9cadf3d0-8133-453d-9dbc-ba304e3665f2</serial>
Dec 06 07:57:18 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]: </disk>
Dec 06 07:57:18 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.561 226109 INFO nova.virt.libvirt.driver [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdb from instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a from the persistent domain config.
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.562 226109 DEBUG nova.virt.libvirt.driver [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.562 226109 DEBUG nova.virt.libvirt.guest [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:57:18 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-9cadf3d0-8133-453d-9dbc-ba304e3665f2">
Dec 06 07:57:18 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]:   </source>
Dec 06 07:57:18 compute-1 nova_compute[226101]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]:   <serial>9cadf3d0-8133-453d-9dbc-ba304e3665f2</serial>
Dec 06 07:57:18 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:57:18 compute-1 nova_compute[226101]: </disk>
Dec 06 07:57:18 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.707 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765007838.7070296, 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.710 226109 DEBUG nova.virt.libvirt.driver [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.712 226109 INFO nova.virt.libvirt.driver [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdb from instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a from the live domain config.
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.919 226109 DEBUG nova.objects.instance [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:18 compute-1 nova_compute[226101]: 2025-12-06 07:57:18.954 226109 DEBUG oslo_concurrency.lockutils [None req-06252268-e7b5-4d23-8af5-84e577982c37 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
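The detach mirrors the attach but in two explicit steps, both visible above: drop the device from the persistent config, then from the live config, and report success only once libvirt delivers the DeviceRemovedEvent for the alias (virtio-disk1 here). A sketch of the libvirt side (URI, lookup, and event plumbing are assumptions; nova waits on the event instead of polling the domain XML):

    # Sketch: the two-step detach sequence logged above.
    import libvirt

    disk_xml = '<disk type="network" device="disk"> ... </disk>'  # XML as logged

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('5ae1d2f2-f2e3-43b3-8b37-94095588eb3a')

    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)  # persistent
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)    # live
    # Completion is signaled asynchronously: nova registers for
    # VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED and waits for alias 'virtio-disk1'.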
Dec 06 07:57:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:19.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:19 compute-1 ceph-mon[81689]: pgmap v3124: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 526 KiB/s wr, 91 op/s
Dec 06 07:57:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:19.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:19 compute-1 nova_compute[226101]: 2025-12-06 07:57:19.835 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:57:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/672821242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:57:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:57:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/672821242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
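These handle_command lines show the client.openstack entity polling cluster capacity ('df') and the quota on the volumes pool, the pattern cinder's RBD driver uses for its periodic capacity reports. The same query can be issued through the rados Python binding (the conffile path and keyring availability are assumptions):

    # Sketch: sending the 'df' mon command seen in the audit log.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b'')
    print(json.loads(outbuf)['stats']['total_bytes'])
    cluster.shutdown()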
Dec 06 07:57:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:21.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:21.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:23.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:23 compute-1 nova_compute[226101]: 2025-12-06 07:57:23.099 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:23.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:24 compute-1 ceph-mon[81689]: pgmap v3125: 305 pgs: 305 active+clean; 513 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 749 KiB/s wr, 77 op/s
Dec 06 07:57:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/672821242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:57:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/672821242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:57:24 compute-1 ceph-mon[81689]: pgmap v3126: 305 pgs: 305 active+clean; 526 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Dec 06 07:57:24 compute-1 nova_compute[226101]: 2025-12-06 07:57:24.838 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:25.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:25.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:25 compute-1 nova_compute[226101]: 2025-12-06 07:57:25.866 226109 DEBUG oslo_concurrency.lockutils [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:25 compute-1 nova_compute[226101]: 2025-12-06 07:57:25.866 226109 DEBUG oslo_concurrency.lockutils [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:25 compute-1 nova_compute[226101]: 2025-12-06 07:57:25.881 226109 INFO nova.compute.manager [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Detaching volume 8eb73d4f-1376-44e4-bc55-b6d259ff218b
Dec 06 07:57:25 compute-1 ceph-mon[81689]: pgmap v3127: 305 pgs: 305 active+clean; 526 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 265 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.017 226109 INFO nova.virt.block_device [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Attempting to driver detach volume 8eb73d4f-1376-44e4-bc55-b6d259ff218b from mountpoint /dev/vdc
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.024 226109 DEBUG nova.virt.libvirt.driver [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Attempting to detach device vdc from instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.025 226109 DEBUG nova.virt.libvirt.guest [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:57:26 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-8eb73d4f-1376-44e4-bc55-b6d259ff218b">
Dec 06 07:57:26 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]:   </source>
Dec 06 07:57:26 compute-1 nova_compute[226101]:   <target dev="vdc" bus="virtio"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]:   <serial>8eb73d4f-1376-44e4-bc55-b6d259ff218b</serial>
Dec 06 07:57:26 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]: </disk>
Dec 06 07:57:26 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.031 226109 INFO nova.virt.libvirt.driver [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdc from instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a from the persistent domain config.
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.031 226109 DEBUG nova.virt.libvirt.driver [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.032 226109 DEBUG nova.virt.libvirt.guest [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:57:26 compute-1 nova_compute[226101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]:   <source protocol="rbd" name="volumes/volume-8eb73d4f-1376-44e4-bc55-b6d259ff218b">
Dec 06 07:57:26 compute-1 nova_compute[226101]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]:   </source>
Dec 06 07:57:26 compute-1 nova_compute[226101]:   <target dev="vdc" bus="virtio"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]:   <serial>8eb73d4f-1376-44e4-bc55-b6d259ff218b</serial>
Dec 06 07:57:26 compute-1 nova_compute[226101]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Dec 06 07:57:26 compute-1 nova_compute[226101]: </disk>
Dec 06 07:57:26 compute-1 nova_compute[226101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.140 226109 DEBUG nova.virt.libvirt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Received event <DeviceRemovedEvent: 1765007846.1401658, 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.142 226109 DEBUG nova.virt.libvirt.driver [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.144 226109 INFO nova.virt.libvirt.driver [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdc from instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a from the live domain config.
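The detach sequence above runs in two steps, persistent config first, then the live domain, with completion confirmed by the DeviceRemovedEvent. A minimal sketch of the same two calls via libvirt-python (not Nova's actual code; the instance UUID comes from the log, and the device XML is abbreviated here since libvirt looks disks up by target dev):

    # Sketch: the two detach_device calls logged above, via libvirt-python.
    import libvirt

    DISK_XML = '<disk type="network" device="disk"><target dev="vdc" bus="virtio"/></disk>'

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('5ae1d2f2-f2e3-43b3-8b37-94095588eb3a')
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)  # persistent definition
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)    # running domain
    conn.close()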
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.297 226109 DEBUG nova.objects.instance [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.341 226109 DEBUG oslo_concurrency.lockutils [None req-debd1771-21ef-4a42-90ff-2c09d99fecac e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.616 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.616 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.617 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
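The Acquiring/acquired/"released" triplets throughout this log are emitted by oslo.concurrency. A minimal sketch of the same pattern, reusing the lock name from the log (the function body is hypothetical):

    # oslo.concurrency locking pattern behind the DEBUG lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs only while the named lock is held; lockutils logs the
        # acquire/release timings seen in the journal.
        pass

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources'):
        pass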
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.617 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:57:26 compute-1 nova_compute[226101]: 2025-12-06 07:57:26.617 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:26 compute-1 ceph-mon[81689]: pgmap v3128: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 2.5 MiB/s wr, 97 op/s
Dec 06 07:57:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:27.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:57:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:57:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/668820867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.090 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
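The storage poll can be reproduced by hand with the exact command logged above; a sketch of running it and reading the cluster totals from its JSON (cluster-wide totals live under "stats", per-pool usage under "pools" in `ceph df --format=json` output):

    # Sketch: run the same storage poll as the resource tracker above
    # and parse the JSON it returns.
    import json
    import subprocess

    cmd = ['ceph', 'df', '--format=json',
           '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])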
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.161 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.162 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.320 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.321 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4148MB free_disk=20.89681625366211GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.321 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.322 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.410 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.411 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.411 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.455 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:27.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:57:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/167430239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.920 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.925 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.946 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
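The inventory above fixes the schedulable capacity via Placement's standard formula, capacity = (total - reserved) * allocation_ratio. With the logged numbers:

    # Capacity implied by the inventory data in the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1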
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.980 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:57:27 compute-1 nova_compute[226101]: 2025-12-06 07:57:27.981 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:28 compute-1 nova_compute[226101]: 2025-12-06 07:57:28.100 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:28 compute-1 nova_compute[226101]: 2025-12-06 07:57:28.553 226109 DEBUG nova.compute.manager [req-20c5172e-9583-49b2-ae4d-08996c69b534 req-38ecfd98-b604-4299-a197-c144c7e55d28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:28 compute-1 nova_compute[226101]: 2025-12-06 07:57:28.554 226109 DEBUG nova.compute.manager [req-20c5172e-9583-49b2-ae4d-08996c69b534 req-38ecfd98-b604-4299-a197-c144c7e55d28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing instance network info cache due to event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:57:28 compute-1 nova_compute[226101]: 2025-12-06 07:57:28.554 226109 DEBUG oslo_concurrency.lockutils [req-20c5172e-9583-49b2-ae4d-08996c69b534 req-38ecfd98-b604-4299-a197-c144c7e55d28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:28 compute-1 nova_compute[226101]: 2025-12-06 07:57:28.554 226109 DEBUG oslo_concurrency.lockutils [req-20c5172e-9583-49b2-ae4d-08996c69b534 req-38ecfd98-b604-4299-a197-c144c7e55d28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:28 compute-1 nova_compute[226101]: 2025-12-06 07:57:28.554 226109 DEBUG nova.network.neutron [req-20c5172e-9583-49b2-ae4d-08996c69b534 req-38ecfd98-b604-4299-a197-c144c7e55d28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:57:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/668820867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/167430239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:28 compute-1 nova_compute[226101]: 2025-12-06 07:57:28.981 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:28 compute-1 nova_compute[226101]: 2025-12-06 07:57:28.981 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:57:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:29.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:29 compute-1 podman[295274]: 2025-12-06 07:57:29.072860033 +0000 UTC m=+0.055753374 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:57:29 compute-1 podman[295273]: 2025-12-06 07:57:29.077187629 +0000 UTC m=+0.061110358 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:57:29 compute-1 podman[295275]: 2025-12-06 07:57:29.099325732 +0000 UTC m=+0.079873481 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 07:57:29 compute-1 nova_compute[226101]: 2025-12-06 07:57:29.232 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:29 compute-1 ceph-mon[81689]: pgmap v3129: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 411 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Dec 06 07:57:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/597368563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:57:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/597368563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:57:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:29.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:29 compute-1 nova_compute[226101]: 2025-12-06 07:57:29.840 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:30 compute-1 nova_compute[226101]: 2025-12-06 07:57:30.099 226109 DEBUG nova.network.neutron [req-20c5172e-9583-49b2-ae4d-08996c69b534 req-38ecfd98-b604-4299-a197-c144c7e55d28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updated VIF entry in instance network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:57:30 compute-1 nova_compute[226101]: 2025-12-06 07:57:30.100 226109 DEBUG nova.network.neutron [req-20c5172e-9583-49b2-ae4d-08996c69b534 req-38ecfd98-b604-4299-a197-c144c7e55d28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating instance_info_cache with network_info: [{"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
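The network_info blob in the cache update above is ordinary JSON once lifted out of the log line; for illustration, walking a copy trimmed to only the fields read here recovers the instance's fixed IP (the full structure is in the log line above):

    # Trimmed copy of the logged network_info; values are from the log.
    network_info = [{
        'id': '803864b4-2921-44f0-8812-bdf6e19aa53d',
        'network': {'subnets': [{'ips': [{'address': '10.100.0.13'}]}]},
    }]
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'])  # -> 803864b4-... 10.100.0.13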
Dec 06 07:57:30 compute-1 nova_compute[226101]: 2025-12-06 07:57:30.119 226109 DEBUG oslo_concurrency.lockutils [req-20c5172e-9583-49b2-ae4d-08996c69b534 req-38ecfd98-b604-4299-a197-c144c7e55d28 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:30 compute-1 nova_compute[226101]: 2025-12-06 07:57:30.120 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:30 compute-1 nova_compute[226101]: 2025-12-06 07:57:30.120 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:57:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:31.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e401 e401: 3 total, 3 up, 3 in
Dec 06 07:57:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:31.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:32 compute-1 nova_compute[226101]: 2025-12-06 07:57:32.321 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating instance_info_cache with network_info: [{"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:32 compute-1 nova_compute[226101]: 2025-12-06 07:57:32.364 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:32 compute-1 nova_compute[226101]: 2025-12-06 07:57:32.365 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:57:32 compute-1 ceph-mon[81689]: pgmap v3130: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 414 KiB/s rd, 2.3 MiB/s wr, 100 op/s
Dec 06 07:57:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3812412658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:32 compute-1 nova_compute[226101]: 2025-12-06 07:57:32.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:32 compute-1 nova_compute[226101]: 2025-12-06 07:57:32.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:57:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:33.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:33 compute-1 nova_compute[226101]: 2025-12-06 07:57:33.102 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:33 compute-1 nova_compute[226101]: 2025-12-06 07:57:33.715 226109 DEBUG nova.compute.manager [req-a1351e29-fcee-4c1b-8cc9-38dfc2ae79a4 req-f9f5d070-32eb-4cf6-90c4-a3b4f8490e94 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:33 compute-1 nova_compute[226101]: 2025-12-06 07:57:33.715 226109 DEBUG nova.compute.manager [req-a1351e29-fcee-4c1b-8cc9-38dfc2ae79a4 req-f9f5d070-32eb-4cf6-90c4-a3b4f8490e94 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing instance network info cache due to event network-changed-803864b4-2921-44f0-8812-bdf6e19aa53d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:57:33 compute-1 nova_compute[226101]: 2025-12-06 07:57:33.715 226109 DEBUG oslo_concurrency.lockutils [req-a1351e29-fcee-4c1b-8cc9-38dfc2ae79a4 req-f9f5d070-32eb-4cf6-90c4-a3b4f8490e94 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:33 compute-1 nova_compute[226101]: 2025-12-06 07:57:33.715 226109 DEBUG oslo_concurrency.lockutils [req-a1351e29-fcee-4c1b-8cc9-38dfc2ae79a4 req-f9f5d070-32eb-4cf6-90c4-a3b4f8490e94 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:33 compute-1 nova_compute[226101]: 2025-12-06 07:57:33.716 226109 DEBUG nova.network.neutron [req-a1351e29-fcee-4c1b-8cc9-38dfc2ae79a4 req-f9f5d070-32eb-4cf6-90c4-a3b4f8490e94 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Refreshing network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:57:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:33.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1802422466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:34 compute-1 ceph-mon[81689]: pgmap v3131: 305 pgs: 305 active+clean; 538 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Dec 06 07:57:34 compute-1 ceph-mon[81689]: osdmap e401: 3 total, 3 up, 3 in
Dec 06 07:57:34 compute-1 nova_compute[226101]: 2025-12-06 07:57:34.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:34 compute-1 nova_compute[226101]: 2025-12-06 07:57:34.842 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:35.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:35.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:35 compute-1 ceph-mon[81689]: pgmap v3133: 305 pgs: 305 active+clean; 538 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 669 KiB/s wr, 58 op/s
Dec 06 07:57:36 compute-1 nova_compute[226101]: 2025-12-06 07:57:36.193 226109 DEBUG nova.network.neutron [req-a1351e29-fcee-4c1b-8cc9-38dfc2ae79a4 req-f9f5d070-32eb-4cf6-90c4-a3b4f8490e94 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updated VIF entry in instance network info cache for port 803864b4-2921-44f0-8812-bdf6e19aa53d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:57:36 compute-1 nova_compute[226101]: 2025-12-06 07:57:36.194 226109 DEBUG nova.network.neutron [req-a1351e29-fcee-4c1b-8cc9-38dfc2ae79a4 req-f9f5d070-32eb-4cf6-90c4-a3b4f8490e94 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating instance_info_cache with network_info: [{"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:36 compute-1 nova_compute[226101]: 2025-12-06 07:57:36.217 226109 DEBUG oslo_concurrency.lockutils [req-a1351e29-fcee-4c1b-8cc9-38dfc2ae79a4 req-f9f5d070-32eb-4cf6-90c4-a3b4f8490e94 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e402 e402: 3 total, 3 up, 3 in
Dec 06 07:57:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:37.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:37 compute-1 ceph-mon[81689]: pgmap v3134: 305 pgs: 305 active+clean; 589 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 94 op/s
Dec 06 07:57:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:37.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:38 compute-1 nova_compute[226101]: 2025-12-06 07:57:38.104 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:38 compute-1 ceph-mon[81689]: osdmap e402: 3 total, 3 up, 3 in
Dec 06 07:57:38 compute-1 ceph-mon[81689]: pgmap v3136: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 119 op/s
Dec 06 07:57:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1035152140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e403 e403: 3 total, 3 up, 3 in
Dec 06 07:57:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:39.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:39.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:39 compute-1 nova_compute[226101]: 2025-12-06 07:57:39.843 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:39 compute-1 ceph-mon[81689]: osdmap e403: 3 total, 3 up, 3 in
Dec 06 07:57:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2114481433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:40 compute-1 nova_compute[226101]: 2025-12-06 07:57:40.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:41 compute-1 ceph-mon[81689]: pgmap v3138: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 118 op/s
Dec 06 07:57:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:41.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:41.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.051 226109 DEBUG oslo_concurrency.lockutils [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.052 226109 DEBUG oslo_concurrency.lockutils [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.052 226109 DEBUG oslo_concurrency.lockutils [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.052 226109 DEBUG oslo_concurrency.lockutils [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.052 226109 DEBUG oslo_concurrency.lockutils [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.054 226109 INFO nova.compute.manager [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Terminating instance
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.055 226109 DEBUG nova.compute.manager [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:42 compute-1 ceph-mon[81689]: pgmap v3139: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 121 op/s
Dec 06 07:57:42 compute-1 kernel: tap803864b4-29 (unregistering): left promiscuous mode
Dec 06 07:57:42 compute-1 NetworkManager[49031]: <info>  [1765007862.8650] device (tap803864b4-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:57:42 compute-1 ovn_controller[130279]: 2025-12-06T07:57:42Z|00723|binding|INFO|Releasing lport 803864b4-2921-44f0-8812-bdf6e19aa53d from this chassis (sb_readonly=0)
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.873 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:42 compute-1 ovn_controller[130279]: 2025-12-06T07:57:42Z|00724|binding|INFO|Setting lport 803864b4-2921-44f0-8812-bdf6e19aa53d down in Southbound
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.875 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:42 compute-1 ovn_controller[130279]: 2025-12-06T07:57:42Z|00725|binding|INFO|Removing iface tap803864b4-29 ovn-installed in OVS
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.879 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:42.892 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:22:84 10.100.0.13'], port_security=['fa:16:3e:57:22:84 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5ae1d2f2-f2e3-43b3-8b37-94095588eb3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3764201-4b86-4407-84d2-684bd05a44b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6164fee998c94b71a37886fe42b4c56c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2bb7af25-e3c4-4687-888a-3caf6297e5c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a293aea-136f-4ea2-8198-6213071653ca, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=803864b4-2921-44f0-8812-bdf6e19aa53d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:57:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:42.895 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 803864b4-2921-44f0-8812-bdf6e19aa53d in datapath a3764201-4b86-4407-84d2-684bd05a44b3 unbound from our chassis
Dec 06 07:57:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:42.897 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3764201-4b86-4407-84d2-684bd05a44b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:57:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:42.899 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b7396ad3-6a28-4e2b-87b2-00fbc027dfdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:42.899 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 namespace which is not needed anymore
Dec 06 07:57:42 compute-1 nova_compute[226101]: 2025-12-06 07:57:42.917 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:42 compute-1 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Dec 06 07:57:42 compute-1 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b1.scope: Consumed 18.786s CPU time.
Dec 06 07:57:42 compute-1 systemd-machined[190302]: Machine qemu-82-instance-000000b1 terminated.
Dec 06 07:57:43 compute-1 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[294315]: [NOTICE]   (294321) : haproxy version is 2.8.14-c23fe91
Dec 06 07:57:43 compute-1 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[294315]: [NOTICE]   (294321) : path to executable is /usr/sbin/haproxy
Dec 06 07:57:43 compute-1 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[294315]: [WARNING]  (294321) : Exiting Master process...
Dec 06 07:57:43 compute-1 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[294315]: [ALERT]    (294321) : Current worker (294323) exited with code 143 (Terminated)
Dec 06 07:57:43 compute-1 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[294315]: [WARNING]  (294321) : All workers exited. Exiting... (0)
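The haproxy shutdown above is orderly: exit code 143 is the usual 128+N encoding for death by signal N, i.e. SIGTERM (15) delivered as the container was stopped, which is why the master then reports all workers exited with overall status 0. The arithmetic, checked in Python:

    import signal

    code = 143
    # Exit codes above 128 conventionally mean "killed by signal (code - 128)".
    assert code - 128 == signal.SIGTERM          # SIGTERM == 15
    print(signal.Signals(code - 128).name)       # -> SIGTERM, an orderly stop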
Dec 06 07:57:43 compute-1 systemd[1]: libpod-fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463.scope: Deactivated successfully.
Dec 06 07:57:43 compute-1 podman[295361]: 2025-12-06 07:57:43.08685666 +0000 UTC m=+0.079790177 container died fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.091 226109 INFO nova.virt.libvirt.driver [-] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Instance destroyed successfully.
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.092 226109 DEBUG nova.objects.instance [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'resources' on Instance uuid 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:43.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
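The recurring anonymous 'HEAD / HTTP/1.0' requests against radosgw's beast frontend (from 192.168.122.100 and .102, always 200 with near-zero latency) are load-balancer health probes rather than tenant S3/Swift traffic; they repeat roughly every two seconds throughout this window. Such a probe can be reproduced with a plain HTTP HEAD; the address and port below are assumptions, so substitute the deployment's real radosgw endpoint:

    import http.client

    # Hypothetical endpoint for illustration only.
    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
    conn.request('HEAD', '/')                    # same request the balancer sends
    print(conn.getresponse().status)             # a healthy gateway answers 200
    conn.close()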
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.104 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.155 226109 DEBUG nova.virt.libvirt.vif [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:55:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-375403999',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-375403999',id=177,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMlCJgA181fL+hWV96XYAuaaRjR/DFcxIrENEwwuUSLNLg2Wo/zP2WcPtpxKQuFaV64lRGeBPzRnqkTHdlSql81bpyaGplyAnqRHnVLqVTwCxa7e5Tmw+I0TD65PH3Dpw==',key_name='tempest-TestInstancesWithCinderVolumes-1103529456',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:55:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6164fee998c94b71a37886fe42b4c56c',ramdisk_id='',reservation_id='r-lf3iw94r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-1429596635',owner_user_name='tempest-TestInstancesWithCinderVolumes-1429596635-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:55:59Z,user_data=None,user_id='e685a049c8a74aa8aea831fbdaf2acf8',uuid=5ae1d2f2-f2e3-43b3-8b37-94095588eb3a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.155 226109 DEBUG nova.network.os_vif_util [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converting VIF {"id": "803864b4-2921-44f0-8812-bdf6e19aa53d", "address": "fa:16:3e:57:22:84", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap803864b4-29", "ovs_interfaceid": "803864b4-2921-44f0-8812-bdf6e19aa53d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.156 226109 DEBUG nova.network.os_vif_util [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:22:84,bridge_name='br-int',has_traffic_filtering=True,id=803864b4-2921-44f0-8812-bdf6e19aa53d,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap803864b4-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.157 226109 DEBUG os_vif [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:22:84,bridge_name='br-int',has_traffic_filtering=True,id=803864b4-2921-44f0-8812-bdf6e19aa53d,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap803864b4-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.159 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.159 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap803864b4-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:43 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463-userdata-shm.mount: Deactivated successfully.
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.163 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-7d6180fbdd8199a15086c01befabb0b6162338a2f7edd972434cd5f2da08647d-merged.mount: Deactivated successfully.
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.166 226109 INFO os_vif [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:22:84,bridge_name='br-int',has_traffic_filtering=True,id=803864b4-2921-44f0-8812-bdf6e19aa53d,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap803864b4-29')
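The unplug sequence above is os-vif's ovs plugin driving ovsdbapp: nova converts its VIF dict to a VIFOpenVSwitch object, then removes tap803864b4-29 from br-int with a DelPortCommand, where if_exists=True keeps the delete idempotent if the port has already gone away. A sketch of the same deletion through ovsdbapp's Open vSwitch schema API, assuming a local ovsdb-server at the usual unix socket:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Socket path is an assumption; deployments vary.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Mirrors DelPortCommand(port=..., bridge='br-int', if_exists=True) in the log.
    ovs.del_port('tap803864b4-29', bridge='br-int', if_exists=True).execute(check_error=True)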
Dec 06 07:57:43 compute-1 podman[295361]: 2025-12-06 07:57:43.170712264 +0000 UTC m=+0.163645751 container cleanup fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:57:43 compute-1 systemd[1]: libpod-conmon-fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463.scope: Deactivated successfully.
Dec 06 07:57:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.607 226109 DEBUG nova.compute.manager [req-a85ddef4-03ac-4e80-894a-b26b5e9d0c70 req-e811b5a8-d184-43de-960d-cc386c775940 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-vif-unplugged-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.608 226109 DEBUG oslo_concurrency.lockutils [req-a85ddef4-03ac-4e80-894a-b26b5e9d0c70 req-e811b5a8-d184-43de-960d-cc386c775940 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.608 226109 DEBUG oslo_concurrency.lockutils [req-a85ddef4-03ac-4e80-894a-b26b5e9d0c70 req-e811b5a8-d184-43de-960d-cc386c775940 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.609 226109 DEBUG oslo_concurrency.lockutils [req-a85ddef4-03ac-4e80-894a-b26b5e9d0c70 req-e811b5a8-d184-43de-960d-cc386c775940 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.609 226109 DEBUG nova.compute.manager [req-a85ddef4-03ac-4e80-894a-b26b5e9d0c70 req-e811b5a8-d184-43de-960d-cc386c775940 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] No waiting events found dispatching network-vif-unplugged-803864b4-2921-44f0-8812-bdf6e19aa53d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:57:43 compute-1 nova_compute[226101]: 2025-12-06 07:57:43.609 226109 DEBUG nova.compute.manager [req-a85ddef4-03ac-4e80-894a-b26b5e9d0c70 req-e811b5a8-d184-43de-960d-cc386c775940 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-vif-unplugged-803864b4-2921-44f0-8812-bdf6e19aa53d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
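The lockutils lines show how nova serializes externally delivered instance events: it takes a per-instance '<uuid>-events' lock, pops any waiter registered for the network-vif-unplugged event, and releases the lock within a millisecond; since the instance is already in task_state deleting, no waiter exists and the event is merely logged. A small sketch of the same pattern with oslo.concurrency, using a hypothetical in-memory waiter table:

    from oslo_concurrency import lockutils

    _waiters = {}  # hypothetical: instance uuid -> {event name: callback}

    def pop_instance_event(instance_uuid, event_name):
        # Same idea as nova's _pop_event: a named lock scopes the table mutation.
        with lockutils.lock('%s-events' % instance_uuid):
            return _waiters.get(instance_uuid, {}).pop(event_name, None)

    # Returns None here, i.e. "No waiting events found dispatching ..."
    print(pop_instance_event('5ae1d2f2-f2e3-43b3-8b37-94095588eb3a',
                             'network-vif-unplugged-803864b4-2921-44f0-8812-bdf6e19aa53d'))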
Dec 06 07:57:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:43.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:44 compute-1 podman[295410]: 2025-12-06 07:57:44.325160873 +0000 UTC m=+1.134081154 container remove fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:57:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e404 e404: 3 total, 3 up, 3 in
Dec 06 07:57:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:44.373 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a890c826-5bb7-4f88-b40c-411393c36647]: (4, ('Sat Dec  6 07:57:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 (fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463)\nfae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463\nSat Dec  6 07:57:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 (fae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463)\nfae4c31567a42f5d6b82fb0c1b4d7b54c10fc90bf9ec8a6d001e836e7abc2463\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:44.377 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d5688d3e-41c3-42c0-9d03-e07f825f571d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:44.378 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3764201-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:44 compute-1 nova_compute[226101]: 2025-12-06 07:57:44.380 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:44 compute-1 kernel: tapa3764201-40: left promiscuous mode
Dec 06 07:57:44 compute-1 nova_compute[226101]: 2025-12-06 07:57:44.402 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:44.405 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9ba1a9-0afc-4212-873d-6a7aa953d702]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:44.418 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f52743-7748-460f-ba42-1accd75db9ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:44.419 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e206fe0a-dbc9-4f2c-88fb-5b2f455ce494]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:44.434 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0368543a-c6c0-4b41-bd53-33adf290e11d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803621, 'reachable_time': 26126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295436, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:44.436 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:57:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:44.436 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe47974-6f85-4966-8ab6-de4ec15cf683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:44 compute-1 systemd[1]: run-netns-ovnmeta\x2da3764201\x2d4b86\x2d4407\x2d84d2\x2d684bd05a44b3.mount: Deactivated successfully.
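Namespace cleanup completes here: the agent deletes the metadata port tapa3764201-40 (the kernel logs it leaving promiscuous mode), the privsep daemon enumerates the links left inside ovnmeta-a3764201-... (the large netlink dump above, showing only 'lo') and then removes the namespace, after which systemd drops the matching run-netns bind mount. The removal itself is a pyroute2 call under the hood; a sketch with a hypothetical namespace name, which requires root:

    from pyroute2 import netns

    ns_name = 'ovnmeta-example'        # hypothetical; agents use ovnmeta-<network uuid>
    if ns_name in netns.listnetns():
        netns.remove(ns_name)          # unlinks /run/netns/<name>, like remove_netns above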
Dec 06 07:57:44 compute-1 nova_compute[226101]: 2025-12-06 07:57:44.539 226109 INFO nova.virt.libvirt.driver [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Deleting instance files /var/lib/nova/instances/5ae1d2f2-f2e3-43b3-8b37-94095588eb3a_del
Dec 06 07:57:44 compute-1 nova_compute[226101]: 2025-12-06 07:57:44.540 226109 INFO nova.virt.libvirt.driver [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Deletion of /var/lib/nova/instances/5ae1d2f2-f2e3-43b3-8b37-94095588eb3a_del complete
Dec 06 07:57:44 compute-1 nova_compute[226101]: 2025-12-06 07:57:44.599 226109 INFO nova.compute.manager [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Took 2.54 seconds to destroy the instance on the hypervisor.
Dec 06 07:57:44 compute-1 nova_compute[226101]: 2025-12-06 07:57:44.600 226109 DEBUG oslo.service.loopingcall [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:57:44 compute-1 nova_compute[226101]: 2025-12-06 07:57:44.600 226109 DEBUG nova.compute.manager [-] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:57:44 compute-1 nova_compute[226101]: 2025-12-06 07:57:44.601 226109 DEBUG nova.network.neutron [-] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:57:44 compute-1 ceph-mon[81689]: pgmap v3140: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 22 op/s
Dec 06 07:57:44 compute-1 ceph-mon[81689]: osdmap e404: 3 total, 3 up, 3 in
Dec 06 07:57:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2598950693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:44 compute-1 nova_compute[226101]: 2025-12-06 07:57:44.846 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:45.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:45 compute-1 nova_compute[226101]: 2025-12-06 07:57:45.697 226109 DEBUG nova.compute.manager [req-37577e27-8cd7-4edc-907e-c3f5a744d3ce req-f3f6d72e-611c-424d-add2-a8eb0c24b390 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-vif-plugged-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:45 compute-1 nova_compute[226101]: 2025-12-06 07:57:45.698 226109 DEBUG oslo_concurrency.lockutils [req-37577e27-8cd7-4edc-907e-c3f5a744d3ce req-f3f6d72e-611c-424d-add2-a8eb0c24b390 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:45 compute-1 nova_compute[226101]: 2025-12-06 07:57:45.698 226109 DEBUG oslo_concurrency.lockutils [req-37577e27-8cd7-4edc-907e-c3f5a744d3ce req-f3f6d72e-611c-424d-add2-a8eb0c24b390 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:45 compute-1 nova_compute[226101]: 2025-12-06 07:57:45.699 226109 DEBUG oslo_concurrency.lockutils [req-37577e27-8cd7-4edc-907e-c3f5a744d3ce req-f3f6d72e-611c-424d-add2-a8eb0c24b390 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:45 compute-1 nova_compute[226101]: 2025-12-06 07:57:45.699 226109 DEBUG nova.compute.manager [req-37577e27-8cd7-4edc-907e-c3f5a744d3ce req-f3f6d72e-611c-424d-add2-a8eb0c24b390 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] No waiting events found dispatching network-vif-plugged-803864b4-2921-44f0-8812-bdf6e19aa53d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:57:45 compute-1 nova_compute[226101]: 2025-12-06 07:57:45.700 226109 WARNING nova.compute.manager [req-37577e27-8cd7-4edc-907e-c3f5a744d3ce req-f3f6d72e-611c-424d-add2-a8eb0c24b390 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received unexpected event network-vif-plugged-803864b4-2921-44f0-8812-bdf6e19aa53d for instance with vm_state active and task_state deleting.
Dec 06 07:57:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:45.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1237685915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:46 compute-1 nova_compute[226101]: 2025-12-06 07:57:46.151 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:46.152 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:57:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:46.153 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:57:46 compute-1 nova_compute[226101]: 2025-12-06 07:57:46.194 226109 DEBUG nova.network.neutron [-] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:46 compute-1 nova_compute[226101]: 2025-12-06 07:57:46.215 226109 INFO nova.compute.manager [-] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Took 1.61 seconds to deallocate network for instance.
Dec 06 07:57:46 compute-1 nova_compute[226101]: 2025-12-06 07:57:46.264 226109 DEBUG nova.compute.manager [req-df4a0a25-8c9e-40b2-9f04-1724636fc33e req-0a898cc7-de6c-47b6-99d8-4ca6fb146c61 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Received event network-vif-deleted-803864b4-2921-44f0-8812-bdf6e19aa53d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:46 compute-1 nova_compute[226101]: 2025-12-06 07:57:46.484 226109 INFO nova.compute.manager [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Took 0.27 seconds to detach 1 volumes for instance.
Dec 06 07:57:46 compute-1 nova_compute[226101]: 2025-12-06 07:57:46.548 226109 DEBUG oslo_concurrency.lockutils [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:46 compute-1 nova_compute[226101]: 2025-12-06 07:57:46.549 226109 DEBUG oslo_concurrency.lockutils [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:46 compute-1 nova_compute[226101]: 2025-12-06 07:57:46.588 226109 DEBUG oslo_concurrency.processutils [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:46 compute-1 nova_compute[226101]: 2025-12-06 07:57:46.619 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:57:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3400634989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:47 compute-1 nova_compute[226101]: 2025-12-06 07:57:47.035 226109 DEBUG oslo_concurrency.processutils [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
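On an RBD-backed compute node the resource tracker derives DISK_GB inventory from the whole Ceph cluster, which is why the instance deletion is followed by a 'ceph df --format=json' subprocess; processutils logs the command, its exit code, and its duration (0.447s here). A sketch of the same call and of pulling the capacity figures out of the JSON; the 'stats' keys below follow the usual ceph df layout, stated as an assumption:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])  # feeds DISK_GB inventory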
Dec 06 07:57:47 compute-1 nova_compute[226101]: 2025-12-06 07:57:47.042 226109 DEBUG nova.compute.provider_tree [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:57:47 compute-1 nova_compute[226101]: 2025-12-06 07:57:47.059 226109 DEBUG nova.scheduler.client.report [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
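The inventory dict above fixes what the scheduler may place on this node: placement's effective capacity per resource class is conventionally (total - reserved) * allocation_ratio, so this host offers 32 schedulable VCPUs, 7168 MB of RAM, and about 17.1 GB of disk. The arithmetic spelled out:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(capacity, 2))   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1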
Dec 06 07:57:47 compute-1 ceph-mon[81689]: pgmap v3142: 305 pgs: 305 active+clean; 573 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 2.9 KiB/s wr, 58 op/s
Dec 06 07:57:47 compute-1 nova_compute[226101]: 2025-12-06 07:57:47.079 226109 DEBUG oslo_concurrency.lockutils [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3400634989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:47.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:47 compute-1 nova_compute[226101]: 2025-12-06 07:57:47.104 226109 INFO nova.scheduler.client.report [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Deleted allocations for instance 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a
Dec 06 07:57:47 compute-1 nova_compute[226101]: 2025-12-06 07:57:47.172 226109 DEBUG oslo_concurrency.lockutils [None req-136b1ca2-cfa5-4899-9bab-760067e3a9a9 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "5ae1d2f2-f2e3-43b3-8b37-94095588eb3a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:47.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:48 compute-1 nova_compute[226101]: 2025-12-06 07:57:48.174 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:49 compute-1 ceph-mon[81689]: pgmap v3143: 305 pgs: 305 active+clean; 553 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 50 KiB/s rd, 3.1 KiB/s wr, 66 op/s
Dec 06 07:57:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:49.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:49 compute-1 nova_compute[226101]: 2025-12-06 07:57:49.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:49.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:49 compute-1 nova_compute[226101]: 2025-12-06 07:57:49.846 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1315147020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3680227484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.098 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.098 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.098 226109 INFO nova.compute.manager [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Unshelving
Dec 06 07:57:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:51.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.192 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.192 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.197 226109 DEBUG nova.objects.instance [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'pci_requests' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.213 226109 DEBUG nova.objects.instance [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'numa_topology' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.226 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.226 226109 INFO nova.compute.claims [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.330 226109 DEBUG oslo_concurrency.processutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:57:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3980298066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.759 226109 DEBUG oslo_concurrency.processutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.764 226109 DEBUG nova.compute.provider_tree [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.782 226109 DEBUG nova.scheduler.client.report [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.807 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:51.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:51 compute-1 ceph-mon[81689]: pgmap v3144: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 4.8 KiB/s wr, 73 op/s
Dec 06 07:57:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3980298066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:51 compute-1 nova_compute[226101]: 2025-12-06 07:57:51.989 226109 INFO nova.network.neutron [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updating port 3b228814-72c3-4086-9740-e056ea1c6d7b with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 06 07:57:53 compute-1 ceph-mon[81689]: pgmap v3145: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 6.0 KiB/s wr, 84 op/s
Dec 06 07:57:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:53.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:53 compute-1 nova_compute[226101]: 2025-12-06 07:57:53.179 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:53 compute-1 nova_compute[226101]: 2025-12-06 07:57:53.374 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:53 compute-1 nova_compute[226101]: 2025-12-06 07:57:53.375 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquired lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:53 compute-1 nova_compute[226101]: 2025-12-06 07:57:53.375 226109 DEBUG nova.network.neutron [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:57:53 compute-1 nova_compute[226101]: 2025-12-06 07:57:53.581 226109 DEBUG nova.compute.manager [req-df4ad31a-6307-4e00-b07f-09ea66dc2660 req-76e57b20-847b-4d9e-a7b4-5d8464a82a22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-changed-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:53 compute-1 nova_compute[226101]: 2025-12-06 07:57:53.581 226109 DEBUG nova.compute.manager [req-df4ad31a-6307-4e00-b07f-09ea66dc2660 req-76e57b20-847b-4d9e-a7b4-5d8464a82a22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Refreshing instance network info cache due to event network-changed-3b228814-72c3-4086-9740-e056ea1c6d7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:57:53 compute-1 nova_compute[226101]: 2025-12-06 07:57:53.582 226109 DEBUG oslo_concurrency.lockutils [req-df4ad31a-6307-4e00-b07f-09ea66dc2660 req-76e57b20-847b-4d9e-a7b4-5d8464a82a22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:53.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:54 compute-1 nova_compute[226101]: 2025-12-06 07:57:54.849 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:55.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:57:55 compute-1 ceph-mon[81689]: pgmap v3146: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 6.0 KiB/s wr, 84 op/s
Dec 06 07:57:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/490756509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:55 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:57:55.156 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
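This DbSetCommand is the agent acknowledging the SB_Global nb_cfg bump from 72 to 73 seen about nine seconds earlier (it had logged 'Delaying updating chassis table for 9 seconds'): writing neutron:ovn-metadata-sb-cfg into its Chassis_Private external_ids is the heartbeat Neutron uses to mark the metadata agent alive. A sketch of an equivalent generic db_set through ovsdbapp's southbound schema API; the endpoint, and to a lesser degree the impl class name, are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Assumed southbound endpoint; deployments vary.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))
    sb.db_set('Chassis_Private', '03fe054d-d727-4af3-9c5e-92e57505f242',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'})
              ).execute(check_error=True)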
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.327 226109 DEBUG nova.network.neutron [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updating instance_info_cache with network_info: [{"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.363 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Releasing lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.364 226109 DEBUG nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.365 226109 INFO nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Creating image(s)
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.386 226109 DEBUG nova.storage.rbd_utils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.390 226109 DEBUG nova.objects.instance [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.392 226109 DEBUG oslo_concurrency.lockutils [req-df4ad31a-6307-4e00-b07f-09ea66dc2660 req-76e57b20-847b-4d9e-a7b4-5d8464a82a22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.393 226109 DEBUG nova.network.neutron [req-df4ad31a-6307-4e00-b07f-09ea66dc2660 req-76e57b20-847b-4d9e-a7b4-5d8464a82a22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Refreshing network info cache for port 3b228814-72c3-4086-9740-e056ea1c6d7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.451 226109 DEBUG nova.storage.rbd_utils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.474 226109 DEBUG nova.storage.rbd_utils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.478 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "269b991f39f2f08907a28adebca0c93583fadcd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.479 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "269b991f39f2f08907a28adebca0c93583fadcd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.817 226109 DEBUG nova.virt.libvirt.imagebackend [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/12e13902-aaee-45d7-b251-bc95170ae31e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/12e13902-aaee-45d7-b251-bc95170ae31e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:57:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:55.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.864 226109 DEBUG nova.virt.libvirt.imagebackend [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/12e13902-aaee-45d7-b251-bc95170ae31e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:57:55 compute-1 nova_compute[226101]: 2025-12-06 07:57:55.864 226109 DEBUG nova.storage.rbd_utils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] cloning images/12e13902-aaee-45d7-b251-bc95170ae31e@snap to None/9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:57:56 compute-1 nova_compute[226101]: 2025-12-06 07:57:56.285 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "269b991f39f2f08907a28adebca0c93583fadcd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:56 compute-1 nova_compute[226101]: 2025-12-06 07:57:56.408 226109 DEBUG nova.objects.instance [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'migration_context' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:56 compute-1 nova_compute[226101]: 2025-12-06 07:57:56.498 226109 DEBUG nova.storage.rbd_utils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] flattening vms/9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:57:56 compute-1 nova_compute[226101]: 2025-12-06 07:57:56.981 226109 DEBUG nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Image rbd:vms/9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Dec 06 07:57:56 compute-1 nova_compute[226101]: 2025-12-06 07:57:56.983 226109 DEBUG nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:57:56 compute-1 nova_compute[226101]: 2025-12-06 07:57:56.983 226109 DEBUG nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Ensure instance console log exists: /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:57:56 compute-1 nova_compute[226101]: 2025-12-06 07:57:56.984 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:56 compute-1 nova_compute[226101]: 2025-12-06 07:57:56.984 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:56 compute-1 nova_compute[226101]: 2025-12-06 07:57:56.985 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:56 compute-1 nova_compute[226101]: 2025-12-06 07:57:56.989 226109 DEBUG nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Start _get_guest_xml network_info=[{"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:57:25Z,direct_url=<?>,disk_format='raw',id=12e13902-aaee-45d7-b251-bc95170ae31e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-760058587-shelved',owner='c3c0564f8e9f4af9ae5b597a275c989f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:57:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:57:56 compute-1 nova_compute[226101]: 2025-12-06 07:57:56.994 226109 WARNING nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.045 226109 DEBUG nova.virt.libvirt.host [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.046 226109 DEBUG nova.virt.libvirt.host [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.050 226109 DEBUG nova.virt.libvirt.host [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.051 226109 DEBUG nova.virt.libvirt.host [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.052 226109 DEBUG nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.052 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:57:25Z,direct_url=<?>,disk_format='raw',id=12e13902-aaee-45d7-b251-bc95170ae31e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-760058587-shelved',owner='c3c0564f8e9f4af9ae5b597a275c989f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:57:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.053 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.053 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.054 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.054 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.054 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.055 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.055 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.055 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.055 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.056 226109 DEBUG nova.virt.hardware [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.056 226109 DEBUG nova.objects.instance [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:57.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.149 226109 DEBUG oslo_concurrency.processutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:57 compute-1 ceph-mon[81689]: pgmap v3147: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 7.1 KiB/s wr, 70 op/s
Dec 06 07:57:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:57:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1565005430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.619 226109 DEBUG oslo_concurrency.processutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.644 226109 DEBUG nova.storage.rbd_utils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.648 226109 DEBUG oslo_concurrency.processutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:57:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1758045960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:57:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:57:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1758045960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:57:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:57.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.851 226109 DEBUG nova.network.neutron [req-df4ad31a-6307-4e00-b07f-09ea66dc2660 req-76e57b20-847b-4d9e-a7b4-5d8464a82a22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updated VIF entry in instance network info cache for port 3b228814-72c3-4086-9740-e056ea1c6d7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:57:57 compute-1 nova_compute[226101]: 2025-12-06 07:57:57.852 226109 DEBUG nova.network.neutron [req-df4ad31a-6307-4e00-b07f-09ea66dc2660 req-76e57b20-847b-4d9e-a7b4-5d8464a82a22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updating instance_info_cache with network_info: [{"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:57:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/998417130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.090 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007863.0890698, 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.091 226109 INFO nova.compute.manager [-] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] VM Stopped (Lifecycle Event)
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.093 226109 DEBUG oslo_concurrency.lockutils [req-df4ad31a-6307-4e00-b07f-09ea66dc2660 req-76e57b20-847b-4d9e-a7b4-5d8464a82a22 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.093 226109 DEBUG oslo_concurrency.processutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.095 226109 DEBUG nova.virt.libvirt.vif [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:56:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-760058587',display_name='tempest-TestShelveInstance-server-760058587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-760058587',id=178,image_ref='12e13902-aaee-45d7-b251-bc95170ae31e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-967718151',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:56:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-g1sdzs5v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member',shelved_at='2025-12-06T07:57:41.125298',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='12e13902-aaee-45d7-b251-bc95170ae31e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:57:51Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=9aa31d67-6e8e-4301-99db-832dd1fe00bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.095 226109 DEBUG nova.network.os_vif_util [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.096 226109 DEBUG nova.network.os_vif_util [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.097 226109 DEBUG nova.objects.instance [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.181 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1565005430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1758045960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:57:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1758045960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:57:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/998417130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.388 226109 DEBUG nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:57:58 compute-1 nova_compute[226101]:   <uuid>9aa31d67-6e8e-4301-99db-832dd1fe00bc</uuid>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   <name>instance-000000b2</name>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <nova:name>tempest-TestShelveInstance-server-760058587</nova:name>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:57:56</nova:creationTime>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <nova:user uuid="98e657096e3f4b528cd461a3dd6a750e">tempest-TestShelveInstance-1863009913-project-member</nova:user>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <nova:project uuid="c3c0564f8e9f4af9ae5b597a275c989f">tempest-TestShelveInstance-1863009913</nova:project>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="12e13902-aaee-45d7-b251-bc95170ae31e"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <nova:port uuid="3b228814-72c3-4086-9740-e056ea1c6d7b">
Dec 06 07:57:58 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <system>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <entry name="serial">9aa31d67-6e8e-4301-99db-832dd1fe00bc</entry>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <entry name="uuid">9aa31d67-6e8e-4301-99db-832dd1fe00bc</entry>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     </system>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   <os>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   </os>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   <features>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   </features>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk">
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       </source>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config">
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       </source>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:57:58 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:b9:91:4c"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <target dev="tap3b228814-72"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/console.log" append="off"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <video>
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     </video>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <input type="keyboard" bus="usb"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:57:58 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:57:58 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:57:58 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:57:58 compute-1 nova_compute[226101]: </domain>
Dec 06 07:57:58 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.390 226109 DEBUG nova.compute.manager [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Preparing to wait for external event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.390 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.390 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.390 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.391 226109 DEBUG nova.virt.libvirt.vif [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:56:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-760058587',display_name='tempest-TestShelveInstance-server-760058587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-760058587',id=178,image_ref='12e13902-aaee-45d7-b251-bc95170ae31e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-967718151',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:56:49Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-g1sdzs5v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member',shelved_at='2025-12-06T07:57:41.125298',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='12e13902-aaee-45d7-b251-bc95170ae31e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:57:51Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=9aa31d67-6e8e-4301-99db-832dd1fe00bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.391 226109 DEBUG nova.network.os_vif_util [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.392 226109 DEBUG nova.network.os_vif_util [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.392 226109 DEBUG os_vif [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.393 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.394 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.394 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.395 226109 DEBUG nova.compute.manager [None req-759b5245-7b48-4076-987f-87ef9e8a21fa - - - - - -] [instance: 5ae1d2f2-f2e3-43b3-8b37-94095588eb3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.396 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.397 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b228814-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.397 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b228814-72, col_values=(('external_ids', {'iface-id': '3b228814-72c3-4086-9740-e056ea1c6d7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:91:4c', 'vm-uuid': '9aa31d67-6e8e-4301-99db-832dd1fe00bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
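
The plug is carried out as idempotent OVSDB transactions: AddBridgeCommand with may_exist=True is a no-op here ("Transaction caused no change") because br-int already exists, then AddPortCommand and DbSetCommand attach the tap device and stamp its Interface row with the iface-id that ovn-controller will match on. Roughly the same operations issued directly through ovsdbapp (the unix-socket endpoint is an assumption; commands and values are the ones logged):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')  # endpoint assumed
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap3b228814-72', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap3b228814-72',
            ('external_ids', {'iface-id': '3b228814-72c3-4086-9740-e056ea1c6d7b',
                              'attached-mac': 'fa:16:3e:b9:91:4c',
                              'vm-uuid': '9aa31d67-6e8e-4301-99db-832dd1fe00bc'})))
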
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.398 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:58 compute-1 NetworkManager[49031]: <info>  [1765007878.3995] manager: (tap3b228814-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.401 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.404 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.405 226109 INFO os_vif [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72')
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.748 226109 DEBUG nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.748 226109 DEBUG nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.748 226109 DEBUG nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] No VIF found with MAC fa:16:3e:b9:91:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.749 226109 INFO nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Using config drive
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.775 226109 DEBUG nova.storage.rbd_utils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.863 226109 DEBUG nova.objects.instance [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:58 compute-1 nova_compute[226101]: 2025-12-06 07:57:58.938 226109 DEBUG nova.objects.instance [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'keypairs' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:59.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:57:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:59.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:59 compute-1 nova_compute[226101]: 2025-12-06 07:57:59.851 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:59 compute-1 ceph-mon[81689]: pgmap v3148: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 7.2 KiB/s wr, 53 op/s
Dec 06 07:58:00 compute-1 podman[295780]: 2025-12-06 07:58:00.081716253 +0000 UTC m=+0.065044433 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:58:00 compute-1 podman[295779]: 2025-12-06 07:58:00.107839562 +0000 UTC m=+0.090770742 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:58:00 compute-1 podman[295781]: 2025-12-06 07:58:00.138186495 +0000 UTC m=+0.118739271 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 06 07:58:00 compute-1 nova_compute[226101]: 2025-12-06 07:58:00.148 226109 INFO nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Creating config drive at /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config
Dec 06 07:58:00 compute-1 nova_compute[226101]: 2025-12-06 07:58:00.153 226109 DEBUG oslo_concurrency.processutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphaudo2xu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:58:00 compute-1 nova_compute[226101]: 2025-12-06 07:58:00.296 226109 DEBUG oslo_concurrency.processutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphaudo2xu" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
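
The config drive is simply an ISO 9660 image volume-labelled config-2, built from a temporary metadata tree (/tmp/tmphaudo2xu above) and attached to the guest read-only. A reduced re-run of the logged command, with a placeholder metadata tree and the -publisher string omitted:

    import json
    import pathlib
    import subprocess
    import tempfile

    tree = pathlib.Path(tempfile.mkdtemp())
    latest = tree / 'openstack' / 'latest'
    latest.mkdir(parents=True)
    (latest / 'meta_data.json').write_text(
        json.dumps({'uuid': '9aa31d67-6e8e-4301-99db-832dd1fe00bc'}))

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', '/tmp/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-quiet', '-J', '-r', '-V', 'config-2', str(tree)],
        check=True)
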
Dec 06 07:58:00 compute-1 nova_compute[226101]: 2025-12-06 07:58:00.324 226109 DEBUG nova.storage.rbd_utils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] rbd image 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:58:00 compute-1 nova_compute[226101]: 2025-12-06 07:58:00.327 226109 DEBUG oslo_concurrency.processutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:58:00 compute-1 nova_compute[226101]: 2025-12-06 07:58:00.766 226109 DEBUG oslo_concurrency.processutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config 9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:58:00 compute-1 nova_compute[226101]: 2025-12-06 07:58:00.767 226109 INFO nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Deleting local config drive /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config because it was imported into RBD.
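
With Ceph-backed ephemeral storage, the freshly built ISO is imported into the vms pool and the local file deleted, so every disk of the instance lives in RBD. The logged rbd import restated with the python-rbd bindings (a hedged equivalent; Nova itself shells out exactly as shown above):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        path = ('/var/lib/nova/instances/'
                '9aa31d67-6e8e-4301-99db-832dd1fe00bc/disk.config')
        name = '9aa31d67-6e8e-4301-99db-832dd1fe00bc_disk.config'
        with open(path, 'rb') as f:
            data = f.read()
        rbd.RBD().create(ioctx, name, len(data), old_format=False)  # --image-format=2
        with rbd.Image(ioctx, name) as image:
            image.write(data, 0)
    finally:
        ioctx.close()
        cluster.shutdown()
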
Dec 06 07:58:00 compute-1 kernel: tap3b228814-72: entered promiscuous mode
Dec 06 07:58:00 compute-1 NetworkManager[49031]: <info>  [1765007880.8251] manager: (tap3b228814-72): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Dec 06 07:58:00 compute-1 ovn_controller[130279]: 2025-12-06T07:58:00Z|00726|binding|INFO|Claiming lport 3b228814-72c3-4086-9740-e056ea1c6d7b for this chassis.
Dec 06 07:58:00 compute-1 ovn_controller[130279]: 2025-12-06T07:58:00Z|00727|binding|INFO|3b228814-72c3-4086-9740-e056ea1c6d7b: Claiming fa:16:3e:b9:91:4c 10.100.0.10
Dec 06 07:58:00 compute-1 nova_compute[226101]: 2025-12-06 07:58:00.826 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:00 compute-1 nova_compute[226101]: 2025-12-06 07:58:00.844 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:00 compute-1 ovn_controller[130279]: 2025-12-06T07:58:00Z|00728|binding|INFO|Setting lport 3b228814-72c3-4086-9740-e056ea1c6d7b ovn-installed in OVS
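
ovn-controller claims a logical port as soon as an OVS interface appears on its chassis whose external_ids:iface-id equals the logical switch port name; these binding messages are the other half of the DbSetCommand issued at 07:57:58. A quick cross-check of that match (a subprocess wrapper around a standard ovs-vsctl query):

    import subprocess

    out = subprocess.run(
        ['ovs-vsctl', 'get', 'Interface', 'tap3b228814-72', 'external_ids'],
        capture_output=True, text=True, check=True).stdout
    # the claimed lport id must appear as iface-id for the binding to happen
    assert '3b228814-72c3-4086-9740-e056ea1c6d7b' in out
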
Dec 06 07:58:00 compute-1 nova_compute[226101]: 2025-12-06 07:58:00.847 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:00 compute-1 nova_compute[226101]: 2025-12-06 07:58:00.850 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:00 compute-1 systemd-udevd[295890]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:58:00 compute-1 NetworkManager[49031]: <info>  [1765007880.8695] device (tap3b228814-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:58:00 compute-1 NetworkManager[49031]: <info>  [1765007880.8705] device (tap3b228814-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:58:00 compute-1 systemd-machined[190302]: New machine qemu-84-instance-000000b2.
Dec 06 07:58:00 compute-1 systemd[1]: Started Virtual Machine qemu-84-instance-000000b2.
Dec 06 07:58:01 compute-1 ovn_controller[130279]: 2025-12-06T07:58:01Z|00729|binding|INFO|Setting lport 3b228814-72c3-4086-9740-e056ea1c6d7b up in Southbound
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.004 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:91:4c 10.100.0.10'], port_security=['fa:16:3e:b9:91:4c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9aa31d67-6e8e-4301-99db-832dd1fe00bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8caa40db-27da-43ab-86ca-042284636e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'neutron:revision_number': '9', 'neutron:security_group_ids': '89072f55-dc9c-4de3-8430-0091f653d55a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ce9c1bf-feee-480c-a359-3eaf272f4b83, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=3b228814-72c3-4086-9740-e056ea1c6d7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.005 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 3b228814-72c3-4086-9740-e056ea1c6d7b in datapath 8caa40db-27da-43ab-86ca-042284636e71 bound to our chassis
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.007 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8caa40db-27da-43ab-86ca-042284636e71
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.016 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5facfb21-50aa-4620-be68-275fc4072696]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.017 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8caa40db-21 in ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
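
Provisioning metadata for a datapath means building a per-network namespace, ovnmeta-<network-uuid>, connected to br-int by a veth pair: tap8caa40db-20 stays in the root namespace as the OVS-plugged end, while tap8caa40db-21 is moved inside to serve the metadata address. The same steps sketched with pyroute2, the library neutron's ip_lib ultimately drives through privsep (simplified sequence; names from the log):

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71'
    netns.create(ns)

    ip = IPRoute()
    ip.link('add', ifname='tap8caa40db-20', kind='veth', peer='tap8caa40db-21')
    idx = ip.link_lookup(ifname='tap8caa40db-21')[0]
    ip.link('set', index=idx, net_ns_fd=ns)  # move one end into the namespace
    ip.close()
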
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.019 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8caa40db-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.019 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[205638d5-57d9-40ce-b070-49cb78d76d60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.019 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7925b0-b74e-4530-9939-61bff7439725]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.033 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[006597c1-6826-40a6-a612-397f5539ab10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.055 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a20c779d-38a2-4901-91ab-47a6655dbb80]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
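
The recurring "privsep: reply[...]" lines are the unprivileged agent receiving marshalled results from its privileged helper process; the (4, ...) tuples are reply messages whose payloads are return values, e.g. the (stdout, stderr, returncode) triple of the sysctl call just above. The mechanism, reduced to an oslo.privsep skeleton (the entrypoint body is illustrative, not neutron's):

    from oslo_privsep import capabilities, priv_context

    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN])

    @default.entrypoint
    def write_sysctl(knob, value):
        # runs in the root daemon; whatever it returns travels back to the
        # caller and is what the agent logs as a "privsep: reply"
        with open('/proc/sys/' + knob.replace('.', '/'), 'w') as f:
            f.write(value)
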
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.083 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[11b1dc93-0c59-419f-8623-fb775942ec51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 systemd-udevd[295895]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.088 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1393a9-9fe0-4de0-bea6-1a3837f198d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 NetworkManager[49031]: <info>  [1765007881.0901] manager: (tap8caa40db-20): new Veth device (/org/freedesktop/NetworkManager/Devices/327)
Dec 06 07:58:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:01.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.124 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d7f082-1459-49d7-8ff1-9e9f8fbef295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.130 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2dcaab95-196a-4593-a09e-5cca52fe616b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 NetworkManager[49031]: <info>  [1765007881.1512] device (tap8caa40db-20): carrier: link connected
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.156 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[1486ed8a-23ea-48ad-9df0-27bc3c917f94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.175 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c8cb33ef-0d12-499a-9279-106973a24dae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8caa40db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:45:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816065, 'reachable_time': 15719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295926, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.190 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad49196-0054-4588-917f-a1165b2e21e1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:4540'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816065, 'tstamp': 816065}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295927, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ceph-mon[81689]: pgmap v3149: 305 pgs: 305 active+clean; 528 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 103 op/s
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.205 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f05ff9e8-c21d-4c46-b729-15b55f4c32dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8caa40db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:45:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816065, 'reachable_time': 15719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295928, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.234 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[262911a2-f1db-44c2-9840-4ab930ac1205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.291 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[99e62f8c-5bc4-4309-bea4-452550a0fe7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.293 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8caa40db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.293 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.293 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8caa40db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:58:01 compute-1 kernel: tap8caa40db-20: entered promiscuous mode
Dec 06 07:58:01 compute-1 nova_compute[226101]: 2025-12-06 07:58:01.295 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:01 compute-1 NetworkManager[49031]: <info>  [1765007881.2958] manager: (tap8caa40db-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Dec 06 07:58:01 compute-1 nova_compute[226101]: 2025-12-06 07:58:01.297 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.300 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8caa40db-20, col_values=(('external_ids', {'iface-id': '13935eb5-7198-4d3b-b91f-e4e0daf2886d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:58:01 compute-1 nova_compute[226101]: 2025-12-06 07:58:01.301 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:01 compute-1 ovn_controller[130279]: 2025-12-06T07:58:01Z|00730|binding|INFO|Releasing lport 13935eb5-7198-4d3b-b91f-e4e0daf2886d from this chassis (sb_readonly=0)
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.305 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.307 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3c15f87e-ab8b-4aee-bde1-d3ae704fbddc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.308 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-8caa40db-27da-43ab-86ca-042284636e71
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/8caa40db-27da-43ab-86ca-042284636e71.pid.haproxy
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 8caa40db-27da-43ab-86ca-042284636e71
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
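
This generated configuration binds 169.254.169.254:80 inside the ovnmeta namespace, forwards every request over the Unix socket /var/lib/neutron/metadata_proxy to the metadata agent, and tags it with X-OVN-Network-ID (plus X-Forwarded-For via "option forwardfor") so the agent can resolve which port is asking. From the guest's point of view the standard endpoint simply answers once the VM is up:

    import requests

    r = requests.get('http://169.254.169.254/openstack/latest/meta_data.json',
                     timeout=5)
    print(r.status_code, r.json().get('uuid'))
    # expected: 200 9aa31d67-6e8e-4301-99db-832dd1fe00bc
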
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.311 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'env', 'PROCESS_TAG=haproxy-8caa40db-27da-43ab-86ca-042284636e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8caa40db-27da-43ab-86ca-042284636e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:58:01 compute-1 nova_compute[226101]: 2025-12-06 07:58:01.316 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:01 compute-1 nova_compute[226101]: 2025-12-06 07:58:01.487 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007881.4865947, 9aa31d67-6e8e-4301-99db-832dd1fe00bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:58:01 compute-1 nova_compute[226101]: 2025-12-06 07:58:01.488 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] VM Started (Lifecycle Event)
Dec 06 07:58:01 compute-1 nova_compute[226101]: 2025-12-06 07:58:01.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.679 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.680 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:01.680 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:01 compute-1 podman[296001]: 2025-12-06 07:58:01.652756155 +0000 UTC m=+0.019058891 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:58:01 compute-1 podman[296001]: 2025-12-06 07:58:01.832159198 +0000 UTC m=+0.198461944 container create 984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:58:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:01.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:01 compute-1 systemd[1]: Started libpod-conmon-984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc.scope.
Dec 06 07:58:01 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:58:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b237a68f77ac411ba32aa2ea1c7acf1ea539fb599ff073b79306ecbf6378d8e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:01 compute-1 podman[296001]: 2025-12-06 07:58:01.930634944 +0000 UTC m=+0.296937700 container init 984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:58:01 compute-1 podman[296001]: 2025-12-06 07:58:01.938141035 +0000 UTC m=+0.304443771 container start 984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:58:01 compute-1 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[296016]: [NOTICE]   (296020) : New worker (296022) forked
Dec 06 07:58:01 compute-1 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[296016]: [NOTICE]   (296020) : Loading success.
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.085 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.093 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007881.4866738, 9aa31d67-6e8e-4301-99db-832dd1fe00bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.093 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] VM Paused (Lifecycle Event)
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.343 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.348 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.553 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] During sync_power_state the instance has a pending task (spawning). Skip.
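
The integers in these sync messages are nova.compute.power_state values: the database still records the shelved instance as SHUTDOWN (4) while libvirt reports the new guest as PAUSED (3), because the guest is created paused and only resumed once networking is confirmed. For reference:

    # Values from nova.compute.power_state:
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    assert POWER_STATES[4] == 'SHUTDOWN'  # current DB power_state
    assert POWER_STATES[3] == 'PAUSED'    # VM power_state while spawning
    assert POWER_STATES[1] == 'RUNNING'   # VM power_state after "Resumed"
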
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.837 226109 DEBUG nova.compute.manager [req-b26e4cf9-9595-47a8-8b0c-de4ced08324d req-1ff86958-242e-4559-96b7-98d1310ff909 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.837 226109 DEBUG oslo_concurrency.lockutils [req-b26e4cf9-9595-47a8-8b0c-de4ced08324d req-1ff86958-242e-4559-96b7-98d1310ff909 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.837 226109 DEBUG oslo_concurrency.lockutils [req-b26e4cf9-9595-47a8-8b0c-de4ced08324d req-1ff86958-242e-4559-96b7-98d1310ff909 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.838 226109 DEBUG oslo_concurrency.lockutils [req-b26e4cf9-9595-47a8-8b0c-de4ced08324d req-1ff86958-242e-4559-96b7-98d1310ff909 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.838 226109 DEBUG nova.compute.manager [req-b26e4cf9-9595-47a8-8b0c-de4ced08324d req-1ff86958-242e-4559-96b7-98d1310ff909 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Processing event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.839 226109 DEBUG nova.compute.manager [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.843 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007882.8428771, 9aa31d67-6e8e-4301-99db-832dd1fe00bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.844 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] VM Resumed (Lifecycle Event)
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.845 226109 DEBUG nova.virt.libvirt.driver [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.848 226109 INFO nova.virt.libvirt.driver [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Instance spawned successfully.
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.924 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.927 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:58:02 compute-1 nova_compute[226101]: 2025-12-06 07:58:02.946 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:58:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:03.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:03 compute-1 nova_compute[226101]: 2025-12-06 07:58:03.401 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:03 compute-1 ceph-mon[81689]: pgmap v3150: 305 pgs: 305 active+clean; 461 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 141 op/s
Dec 06 07:58:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2165652783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2165652783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:03.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:04 compute-1 nova_compute[226101]: 2025-12-06 07:58:04.882 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:04 compute-1 ceph-mon[81689]: pgmap v3151: 305 pgs: 305 active+clean; 461 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Dec 06 07:58:05 compute-1 nova_compute[226101]: 2025-12-06 07:58:05.097 226109 DEBUG nova.compute.manager [req-1c433db4-1756-4f90-b04d-c3a30679f8c0 req-62412f68-682b-4c86-af23-3f4d9ad19347 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:58:05 compute-1 nova_compute[226101]: 2025-12-06 07:58:05.097 226109 DEBUG oslo_concurrency.lockutils [req-1c433db4-1756-4f90-b04d-c3a30679f8c0 req-62412f68-682b-4c86-af23-3f4d9ad19347 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:05 compute-1 nova_compute[226101]: 2025-12-06 07:58:05.098 226109 DEBUG oslo_concurrency.lockutils [req-1c433db4-1756-4f90-b04d-c3a30679f8c0 req-62412f68-682b-4c86-af23-3f4d9ad19347 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:05 compute-1 nova_compute[226101]: 2025-12-06 07:58:05.098 226109 DEBUG oslo_concurrency.lockutils [req-1c433db4-1756-4f90-b04d-c3a30679f8c0 req-62412f68-682b-4c86-af23-3f4d9ad19347 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:05 compute-1 nova_compute[226101]: 2025-12-06 07:58:05.098 226109 DEBUG nova.compute.manager [req-1c433db4-1756-4f90-b04d-c3a30679f8c0 req-62412f68-682b-4c86-af23-3f4d9ad19347 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] No waiting events found dispatching network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:58:05 compute-1 nova_compute[226101]: 2025-12-06 07:58:05.098 226109 WARNING nova.compute.manager [req-1c433db4-1756-4f90-b04d-c3a30679f8c0 req-62412f68-682b-4c86-af23-3f4d9ad19347 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received unexpected event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b for instance with vm_state shelved_offloaded and task_state spawning.
Dec 06 07:58:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:05.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:05.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:07.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/239946977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:07.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:08 compute-1 nova_compute[226101]: 2025-12-06 07:58:08.404 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e405 e405: 3 total, 3 up, 3 in
Dec 06 07:58:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:09.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:09.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:09 compute-1 nova_compute[226101]: 2025-12-06 07:58:09.881 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:10 compute-1 ceph-mon[81689]: pgmap v3152: 305 pgs: 305 active+clean; 378 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 198 op/s
Dec 06 07:58:10 compute-1 ceph-mon[81689]: pgmap v3153: 305 pgs: 305 active+clean; 378 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Dec 06 07:58:10 compute-1 ceph-mon[81689]: osdmap e405: 3 total, 3 up, 3 in
Dec 06 07:58:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:11.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:11.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/362523394' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/362523394' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:12 compute-1 ceph-mon[81689]: pgmap v3155: 305 pgs: 305 active+clean; 378 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.0 MiB/s wr, 203 op/s
Dec 06 07:58:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:13.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:13 compute-1 nova_compute[226101]: 2025-12-06 07:58:13.408 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:13.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:14 compute-1 nova_compute[226101]: 2025-12-06 07:58:14.882 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:15.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:15 compute-1 ceph-mon[81689]: pgmap v3156: 305 pgs: 305 active+clean; 330 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 17 KiB/s wr, 146 op/s
Dec 06 07:58:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:58:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3384756515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:58:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3384756515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:15 compute-1 nova_compute[226101]: 2025-12-06 07:58:15.740 226109 DEBUG nova.compute.manager [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:58:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:15.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:16 compute-1 ceph-mon[81689]: pgmap v3157: 305 pgs: 305 active+clean; 330 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 17 KiB/s wr, 146 op/s
Dec 06 07:58:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3384756515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3384756515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:16 compute-1 nova_compute[226101]: 2025-12-06 07:58:16.137 226109 DEBUG oslo_concurrency.lockutils [None req-77e80ab5-7412-4332-bced-ea34cafd4249 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 25.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:16 compute-1 ovn_controller[130279]: 2025-12-06T07:58:16Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:91:4c 10.100.0.10
Dec 06 07:58:17 compute-1 sudo[296038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:17 compute-1 sudo[296038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:17 compute-1 sudo[296038]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:17.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:17 compute-1 sudo[296063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:17 compute-1 sudo[296063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:17 compute-1 sudo[296063]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:17 compute-1 sudo[296088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:17 compute-1 sudo[296088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:17 compute-1 sudo[296088]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:17 compute-1 sudo[296113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:58:17 compute-1 sudo[296113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:17 compute-1 sudo[296113]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:17.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:17 compute-1 sudo[296169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:17 compute-1 sudo[296169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:17 compute-1 sudo[296169]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:18 compute-1 sshd-session[296036]: Connection reset by authenticating user root 91.202.233.33 port 61894 [preauth]
Dec 06 07:58:18 compute-1 sudo[296194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:18 compute-1 sudo[296194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:18 compute-1 sudo[296194]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:18 compute-1 sudo[296219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:18 compute-1 sudo[296219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:18 compute-1 sudo[296219]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:18 compute-1 sudo[296244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 06 07:58:18 compute-1 sudo[296244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:18 compute-1 ceph-mon[81689]: pgmap v3158: 305 pgs: 305 active+clean; 299 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 KiB/s wr, 86 op/s
Dec 06 07:58:18 compute-1 sudo[296244]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:18 compute-1 nova_compute[226101]: 2025-12-06 07:58:18.410 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:19.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e406 e406: 3 total, 3 up, 3 in
Dec 06 07:58:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:19.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:19 compute-1 nova_compute[226101]: 2025-12-06 07:58:19.886 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:20 compute-1 sshd-session[296265]: Connection reset by authenticating user root 91.202.233.33 port 61898 [preauth]
Dec 06 07:58:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:20 compute-1 ceph-mon[81689]: pgmap v3159: 305 pgs: 305 active+clean; 272 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 791 KiB/s rd, 16 KiB/s wr, 80 op/s
Dec 06 07:58:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2071960946' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2071960946' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:58:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:58:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:58:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:58:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:58:20 compute-1 ceph-mon[81689]: osdmap e406: 3 total, 3 up, 3 in
Dec 06 07:58:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:21.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:21 compute-1 sshd-session[296289]: Received disconnect from 154.219.116.39 port 38254:11: Bye Bye [preauth]
Dec 06 07:58:21 compute-1 sshd-session[296289]: Disconnected from authenticating user root 154.219.116.39 port 38254 [preauth]
Dec 06 07:58:21 compute-1 ceph-mon[81689]: pgmap v3161: 305 pgs: 305 active+clean; 236 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 16 KiB/s wr, 71 op/s
Dec 06 07:58:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:21.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:21 compute-1 sshd-session[296291]: Invalid user ubuntu from 91.202.233.33 port 61914
Dec 06 07:58:22 compute-1 sshd-session[296291]: Connection reset by invalid user ubuntu 91.202.233.33 port 61914 [preauth]
Dec 06 07:58:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:58:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5405.4 total, 600.0 interval
                                           Cumulative writes: 56K writes, 220K keys, 56K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.04 MB/s
                                           Cumulative WAL: 56K writes, 19K syncs, 2.83 writes per sync, written: 0.20 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6731 writes, 25K keys, 6731 commit groups, 1.0 writes per commit group, ingest: 23.87 MB, 0.04 MB/s
                                           Interval WAL: 6731 writes, 2707 syncs, 2.49 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:58:22 compute-1 ceph-mon[81689]: pgmap v3162: 305 pgs: 305 active+clean; 220 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 688 KiB/s rd, 18 KiB/s wr, 116 op/s
Dec 06 07:58:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1654241962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1654241962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:23.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.329 226109 DEBUG nova.compute.manager [req-f35b4098-1016-466a-9e9a-325bff2ad4c6 req-31efa89f-f48d-4ce3-a2de-9ec4f41b199c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-changed-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.329 226109 DEBUG nova.compute.manager [req-f35b4098-1016-466a-9e9a-325bff2ad4c6 req-31efa89f-f48d-4ce3-a2de-9ec4f41b199c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Refreshing instance network info cache due to event network-changed-3b228814-72c3-4086-9740-e056ea1c6d7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.329 226109 DEBUG oslo_concurrency.lockutils [req-f35b4098-1016-466a-9e9a-325bff2ad4c6 req-31efa89f-f48d-4ce3-a2de-9ec4f41b199c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.329 226109 DEBUG oslo_concurrency.lockutils [req-f35b4098-1016-466a-9e9a-325bff2ad4c6 req-31efa89f-f48d-4ce3-a2de-9ec4f41b199c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.330 226109 DEBUG nova.network.neutron [req-f35b4098-1016-466a-9e9a-325bff2ad4c6 req-31efa89f-f48d-4ce3-a2de-9ec4f41b199c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Refreshing network info cache for port 3b228814-72c3-4086-9740-e056ea1c6d7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.414 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.826 226109 DEBUG oslo_concurrency.lockutils [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.826 226109 DEBUG oslo_concurrency.lockutils [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.827 226109 DEBUG oslo_concurrency.lockutils [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.827 226109 DEBUG oslo_concurrency.lockutils [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.827 226109 DEBUG oslo_concurrency.lockutils [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.829 226109 INFO nova.compute.manager [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Terminating instance
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.830 226109 DEBUG nova.compute.manager [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:58:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:23.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:23 compute-1 kernel: tap3b228814-72 (unregistering): left promiscuous mode
Dec 06 07:58:23 compute-1 NetworkManager[49031]: <info>  [1765007903.9093] device (tap3b228814-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:58:23 compute-1 ovn_controller[130279]: 2025-12-06T07:58:23Z|00731|binding|INFO|Releasing lport 3b228814-72c3-4086-9740-e056ea1c6d7b from this chassis (sb_readonly=0)
Dec 06 07:58:23 compute-1 ovn_controller[130279]: 2025-12-06T07:58:23Z|00732|binding|INFO|Setting lport 3b228814-72c3-4086-9740-e056ea1c6d7b down in Southbound
Dec 06 07:58:23 compute-1 ovn_controller[130279]: 2025-12-06T07:58:23Z|00733|binding|INFO|Removing iface tap3b228814-72 ovn-installed in OVS
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.919 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:23.924 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:91:4c 10.100.0.10'], port_security=['fa:16:3e:b9:91:4c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9aa31d67-6e8e-4301-99db-832dd1fe00bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8caa40db-27da-43ab-86ca-042284636e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3c0564f8e9f4af9ae5b597a275c989f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '89072f55-dc9c-4de3-8430-0091f653d55a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ce9c1bf-feee-480c-a359-3eaf272f4b83, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=3b228814-72c3-4086-9740-e056ea1c6d7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:58:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:23.925 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 3b228814-72c3-4086-9740-e056ea1c6d7b in datapath 8caa40db-27da-43ab-86ca-042284636e71 unbound from our chassis
Dec 06 07:58:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:23.926 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8caa40db-27da-43ab-86ca-042284636e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:58:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:23.928 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e32823-e18f-42ef-a151-7c0d23f659fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:23.929 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 namespace which is not needed anymore
Dec 06 07:58:23 compute-1 nova_compute[226101]: 2025-12-06 07:58:23.942 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:23 compute-1 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b2.scope: Deactivated successfully.
Dec 06 07:58:23 compute-1 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b2.scope: Consumed 14.920s CPU time.
Dec 06 07:58:23 compute-1 systemd-machined[190302]: Machine qemu-84-instance-000000b2 terminated.
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.089 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.099 226109 INFO nova.virt.libvirt.driver [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Instance destroyed successfully.
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.099 226109 DEBUG nova.objects.instance [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lazy-loading 'resources' on Instance uuid 9aa31d67-6e8e-4301-99db-832dd1fe00bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:58:24 compute-1 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[296016]: [NOTICE]   (296020) : haproxy version is 2.8.14-c23fe91
Dec 06 07:58:24 compute-1 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[296016]: [NOTICE]   (296020) : path to executable is /usr/sbin/haproxy
Dec 06 07:58:24 compute-1 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[296016]: [WARNING]  (296020) : Exiting Master process...
Dec 06 07:58:24 compute-1 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[296016]: [ALERT]    (296020) : Current worker (296022) exited with code 143 (Terminated)
Dec 06 07:58:24 compute-1 neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71[296016]: [WARNING]  (296020) : All workers exited. Exiting... (0)
Dec 06 07:58:24 compute-1 systemd[1]: libpod-984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc.scope: Deactivated successfully.
Dec 06 07:58:24 compute-1 podman[296318]: 2025-12-06 07:58:24.12604666 +0000 UTC m=+0.098169119 container died 984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.137 226109 DEBUG nova.virt.libvirt.vif [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:56:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-760058587',display_name='tempest-TestShelveInstance-server-760058587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-760058587',id=178,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPMK8f3hkRq1zqoQC83gMxNZg6M7ZAxKrSEKHe3B4eb91L7pfGPJzplK7LQrq84g1u5RR1ZeVyoQ0VCNm5QQfCXxX7TmQ1C0jimNi3kHh1QlwD/S46/iChPS6huTXb5CpA==',key_name='tempest-TestShelveInstance-967718151',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:58:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3c0564f8e9f4af9ae5b597a275c989f',ramdisk_id='',reservation_id='r-g1sdzs5v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1863009913',owner_user_name='tempest-TestShelveInstance-1863009913-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:58:16Z,user_data=None,user_id='98e657096e3f4b528cd461a3dd6a750e',uuid=9aa31d67-6e8e-4301-99db-832dd1fe00bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.137 226109 DEBUG nova.network.os_vif_util [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converting VIF {"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.138 226109 DEBUG nova.network.os_vif_util [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.138 226109 DEBUG os_vif [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.140 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.140 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b228814-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.141 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.143 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.145 226109 INFO os_vif [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:91:4c,bridge_name='br-int',has_traffic_filtering=True,id=3b228814-72c3-4086-9740-e056ea1c6d7b,network=Network(8caa40db-27da-43ab-86ca-042284636e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b228814-72')
Dec 06 07:58:24 compute-1 sudo[296332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:24 compute-1 sudo[296332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:24 compute-1 sudo[296332]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-b237a68f77ac411ba32aa2ea1c7acf1ea539fb599ff073b79306ecbf6378d8e5-merged.mount: Deactivated successfully.
Dec 06 07:58:24 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc-userdata-shm.mount: Deactivated successfully.
Dec 06 07:58:24 compute-1 podman[296318]: 2025-12-06 07:58:24.209494595 +0000 UTC m=+0.181617074 container cleanup 984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:58:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:58:24 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155028513' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:24 compute-1 sudo[296390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:58:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:58:24 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155028513' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:24 compute-1 sudo[296390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:24 compute-1 systemd[1]: libpod-conmon-984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc.scope: Deactivated successfully.
Dec 06 07:58:24 compute-1 sudo[296390]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:24 compute-1 podman[296422]: 2025-12-06 07:58:24.316797398 +0000 UTC m=+0.085776038 container remove 984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:58:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:24.321 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[70dd4bd1-e393-4016-b94d-602d5f0ccc92]: (4, ('Sat Dec  6 07:58:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 (984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc)\n984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc\nSat Dec  6 07:58:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 (984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc)\n984e51492874df039180b02d57c8a90586abbf434d19f15d377359e58ba0b9fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:24.324 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[47f45139-ffa0-4c16-a1fc-45133a3906a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:24.325 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8caa40db-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.326 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:24 compute-1 kernel: tap8caa40db-20: left promiscuous mode
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.329 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:24.333 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a7240daf-e9ea-4d4a-ba81-9a4bc3f140a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.340 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:24.352 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd87d48-7a1b-4f17-82c8-f8314a2dd7e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:24.353 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f14db104-779b-4753-bce0-227a3ed972f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:24.369 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[db3ae707-a4d4-4bd3-984c-96f0c3534e0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816058, 'reachable_time': 42420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296435, 'error': None, 'target': 'ovnmeta-8caa40db-27da-43ab-86ca-042284636e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:24.371 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8caa40db-27da-43ab-86ca-042284636e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:58:24 compute-1 systemd[1]: run-netns-ovnmeta\x2d8caa40db\x2d27da\x2d43ab\x2d86ca\x2d042284636e71.mount: Deactivated successfully.
Dec 06 07:58:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:24.371 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[324ecd5e-910e-4407-94af-e032992d7ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.833 226109 INFO nova.virt.libvirt.driver [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Deleting instance files /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc_del
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.834 226109 INFO nova.virt.libvirt.driver [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Deletion of /var/lib/nova/instances/9aa31d67-6e8e-4301-99db-832dd1fe00bc_del complete
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.887 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:24 compute-1 ceph-mon[81689]: pgmap v3163: 305 pgs: 305 active+clean; 220 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 688 KiB/s rd, 18 KiB/s wr, 116 op/s
Dec 06 07:58:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/155028513' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/155028513' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.974 226109 INFO nova.compute.manager [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Took 1.14 seconds to destroy the instance on the hypervisor.
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.975 226109 DEBUG oslo.service.loopingcall [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.976 226109 DEBUG nova.compute.manager [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:58:24 compute-1 nova_compute[226101]: 2025-12-06 07:58:24.976 226109 DEBUG nova.network.neutron [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:58:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:25.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.505 226109 DEBUG nova.compute.manager [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-unplugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.506 226109 DEBUG oslo_concurrency.lockutils [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.506 226109 DEBUG oslo_concurrency.lockutils [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.507 226109 DEBUG oslo_concurrency.lockutils [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.507 226109 DEBUG nova.compute.manager [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] No waiting events found dispatching network-vif-unplugged-3b228814-72c3-4086-9740-e056ea1c6d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.507 226109 DEBUG nova.compute.manager [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-unplugged-3b228814-72c3-4086-9740-e056ea1c6d7b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.508 226109 DEBUG nova.compute.manager [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.508 226109 DEBUG oslo_concurrency.lockutils [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.509 226109 DEBUG oslo_concurrency.lockutils [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.509 226109 DEBUG oslo_concurrency.lockutils [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.510 226109 DEBUG nova.compute.manager [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] No waiting events found dispatching network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:58:25 compute-1 nova_compute[226101]: 2025-12-06 07:58:25.510 226109 WARNING nova.compute.manager [req-8f13e871-fb07-4410-962f-1c3d7e6f7ca0 req-dfb51f60-dba8-4bbe-a91d-b0f984f83958 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received unexpected event network-vif-plugged-3b228814-72c3-4086-9740-e056ea1c6d7b for instance with vm_state active and task_state deleting.
Dec 06 07:58:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:25.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:26 compute-1 sshd-session[296293]: Connection reset by authenticating user root 91.202.233.33 port 25560 [preauth]
Dec 06 07:58:26 compute-1 nova_compute[226101]: 2025-12-06 07:58:26.513 226109 DEBUG nova.network.neutron [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:58:26 compute-1 nova_compute[226101]: 2025-12-06 07:58:26.625 226109 DEBUG nova.network.neutron [req-f35b4098-1016-466a-9e9a-325bff2ad4c6 req-31efa89f-f48d-4ce3-a2de-9ec4f41b199c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updated VIF entry in instance network info cache for port 3b228814-72c3-4086-9740-e056ea1c6d7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:58:26 compute-1 nova_compute[226101]: 2025-12-06 07:58:26.625 226109 DEBUG nova.network.neutron [req-f35b4098-1016-466a-9e9a-325bff2ad4c6 req-31efa89f-f48d-4ce3-a2de-9ec4f41b199c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updating instance_info_cache with network_info: [{"id": "3b228814-72c3-4086-9740-e056ea1c6d7b", "address": "fa:16:3e:b9:91:4c", "network": {"id": "8caa40db-27da-43ab-86ca-042284636e71", "bridge": "br-int", "label": "tempest-TestShelveInstance-1111687365-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3c0564f8e9f4af9ae5b597a275c989f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b228814-72", "ovs_interfaceid": "3b228814-72c3-4086-9740-e056ea1c6d7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:58:26 compute-1 nova_compute[226101]: 2025-12-06 07:58:26.655 226109 INFO nova.compute.manager [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Took 1.68 seconds to deallocate network for instance.
Dec 06 07:58:26 compute-1 nova_compute[226101]: 2025-12-06 07:58:26.703 226109 DEBUG oslo_concurrency.lockutils [req-f35b4098-1016-466a-9e9a-325bff2ad4c6 req-31efa89f-f48d-4ce3-a2de-9ec4f41b199c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9aa31d67-6e8e-4301-99db-832dd1fe00bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:58:26 compute-1 nova_compute[226101]: 2025-12-06 07:58:26.737 226109 DEBUG oslo_concurrency.lockutils [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:26 compute-1 nova_compute[226101]: 2025-12-06 07:58:26.737 226109 DEBUG oslo_concurrency.lockutils [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:26 compute-1 nova_compute[226101]: 2025-12-06 07:58:26.811 226109 DEBUG oslo_concurrency.processutils [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:58:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:26.970 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:58:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:26.971 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:58:26 compute-1 nova_compute[226101]: 2025-12-06 07:58:26.971 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:26 compute-1 ceph-mon[81689]: pgmap v3164: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 946 KiB/s rd, 31 KiB/s wr, 144 op/s
Dec 06 07:58:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:27.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:58:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2677149162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.222 226109 DEBUG oslo_concurrency.processutils [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.229 226109 DEBUG nova.compute.provider_tree [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.242 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.263 226109 DEBUG nova.scheduler.client.report [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.329 226109 DEBUG oslo_concurrency.lockutils [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.470 226109 INFO nova.scheduler.client.report [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Deleted allocations for instance 9aa31d67-6e8e-4301-99db-832dd1fe00bc
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.505 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.627 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.636 226109 DEBUG oslo_concurrency.lockutils [None req-a121b499-a452-4910-bccc-a074bd1a51a7 98e657096e3f4b528cd461a3dd6a750e c3c0564f8e9f4af9ae5b597a275c989f - - default default] Lock "9aa31d67-6e8e-4301-99db-832dd1fe00bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.664 226109 DEBUG nova.compute.manager [req-7de26fc9-b3f9-4ae1-80d6-a200e0780b2c req-4ca3ee9b-4d79-4f88-ba63-258fd4e6ca1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Received event network-vif-deleted-3b228814-72c3-4086-9740-e056ea1c6d7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.665 226109 INFO nova.compute.manager [req-7de26fc9-b3f9-4ae1-80d6-a200e0780b2c req-4ca3ee9b-4d79-4f88-ba63-258fd4e6ca1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Neutron deleted interface 3b228814-72c3-4086-9740-e056ea1c6d7b; detaching it from the instance and deleting it from the info cache
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.665 226109 DEBUG nova.network.neutron [req-7de26fc9-b3f9-4ae1-80d6-a200e0780b2c req-4ca3ee9b-4d79-4f88-ba63-258fd4e6ca1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:58:27 compute-1 sshd-session[296438]: Invalid user test1 from 91.202.233.33 port 25590
Dec 06 07:58:27 compute-1 nova_compute[226101]: 2025-12-06 07:58:27.819 226109 DEBUG nova.compute.manager [req-7de26fc9-b3f9-4ae1-80d6-a200e0780b2c req-4ca3ee9b-4d79-4f88-ba63-258fd4e6ca1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Detach interface failed, port_id=3b228814-72c3-4086-9740-e056ea1c6d7b, reason: Instance 9aa31d67-6e8e-4301-99db-832dd1fe00bc could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 06 07:58:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:27.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2677149162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:28 compute-1 sshd-session[296438]: Connection reset by invalid user test1 91.202.233.33 port 25590 [preauth]
Dec 06 07:58:28 compute-1 nova_compute[226101]: 2025-12-06 07:58:28.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:28 compute-1 nova_compute[226101]: 2025-12-06 07:58:28.625 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:28 compute-1 nova_compute[226101]: 2025-12-06 07:58:28.626 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:28 compute-1 nova_compute[226101]: 2025-12-06 07:58:28.626 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:28 compute-1 nova_compute[226101]: 2025-12-06 07:58:28.627 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:58:28 compute-1 nova_compute[226101]: 2025-12-06 07:58:28.628 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:58:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:58:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/425738491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 e407: 3 total, 3 up, 3 in
Dec 06 07:58:29 compute-1 nova_compute[226101]: 2025-12-06 07:58:29.147 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:29.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:29 compute-1 nova_compute[226101]: 2025-12-06 07:58:29.156 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:58:29 compute-1 ceph-mon[81689]: pgmap v3165: 305 pgs: 305 active+clean; 150 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 829 KiB/s rd, 17 KiB/s wr, 130 op/s
Dec 06 07:58:29 compute-1 nova_compute[226101]: 2025-12-06 07:58:29.316 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:58:29 compute-1 nova_compute[226101]: 2025-12-06 07:58:29.317 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4304MB free_disk=20.972232818603516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:58:29 compute-1 nova_compute[226101]: 2025-12-06 07:58:29.318 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:29 compute-1 nova_compute[226101]: 2025-12-06 07:58:29.318 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:29 compute-1 nova_compute[226101]: 2025-12-06 07:58:29.515 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:58:29 compute-1 nova_compute[226101]: 2025-12-06 07:58:29.515 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:58:29 compute-1 nova_compute[226101]: 2025-12-06 07:58:29.561 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:58:29 compute-1 nova_compute[226101]: 2025-12-06 07:58:29.934 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:58:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/957644364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:30 compute-1 nova_compute[226101]: 2025-12-06 07:58:30.007 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:58:30 compute-1 nova_compute[226101]: 2025-12-06 07:58:30.015 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:58:30 compute-1 nova_compute[226101]: 2025-12-06 07:58:30.042 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:58:30 compute-1 nova_compute[226101]: 2025-12-06 07:58:30.070 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:58:30 compute-1 nova_compute[226101]: 2025-12-06 07:58:30.071 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/425738491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:30 compute-1 ceph-mon[81689]: osdmap e407: 3 total, 3 up, 3 in
Dec 06 07:58:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/957644364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:58:30.974 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:58:31 compute-1 podman[296511]: 2025-12-06 07:58:31.074582703 +0000 UTC m=+0.050726369 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:58:31 compute-1 podman[296510]: 2025-12-06 07:58:31.080215193 +0000 UTC m=+0.058127316 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:58:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:31 compute-1 podman[296512]: 2025-12-06 07:58:31.246247079 +0000 UTC m=+0.219315743 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:58:31 compute-1 ceph-mon[81689]: pgmap v3167: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 705 KiB/s rd, 17 KiB/s wr, 126 op/s
Dec 06 07:58:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/564199662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:31 compute-1 sshd-session[296506]: Received disconnect from 165.154.55.146 port 44292:11: Bye Bye [preauth]
Dec 06 07:58:31 compute-1 sshd-session[296506]: Disconnected from authenticating user root 165.154.55.146 port 44292 [preauth]
Dec 06 07:58:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:31.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2233188705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:32 compute-1 ceph-mon[81689]: pgmap v3168: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 303 KiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:58:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:33.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:33.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:34 compute-1 nova_compute[226101]: 2025-12-06 07:58:34.150 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:34 compute-1 ceph-mon[81689]: pgmap v3169: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 303 KiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:58:34 compute-1 nova_compute[226101]: 2025-12-06 07:58:34.935 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:35.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:35.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:36 compute-1 nova_compute[226101]: 2025-12-06 07:58:36.071 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:36 compute-1 nova_compute[226101]: 2025-12-06 07:58:36.072 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:58:36 compute-1 nova_compute[226101]: 2025-12-06 07:58:36.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:36 compute-1 ceph-mon[81689]: pgmap v3170: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Dec 06 07:58:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:37.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:37.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:38 compute-1 ceph-mon[81689]: pgmap v3171: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.3 KiB/s wr, 21 op/s
Dec 06 07:58:39 compute-1 nova_compute[226101]: 2025-12-06 07:58:39.096 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007904.0953393, 9aa31d67-6e8e-4301-99db-832dd1fe00bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:58:39 compute-1 nova_compute[226101]: 2025-12-06 07:58:39.097 226109 INFO nova.compute.manager [-] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] VM Stopped (Lifecycle Event)
Dec 06 07:58:39 compute-1 nova_compute[226101]: 2025-12-06 07:58:39.153 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:39.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:39 compute-1 nova_compute[226101]: 2025-12-06 07:58:39.322 226109 DEBUG nova.compute.manager [None req-b933474f-c95b-4a47-aa99-7e4ee778f32f - - - - - -] [instance: 9aa31d67-6e8e-4301-99db-832dd1fe00bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:58:39 compute-1 nova_compute[226101]: 2025-12-06 07:58:39.937 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:39.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:40 compute-1 ceph-mon[81689]: pgmap v3172: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:58:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/570637393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:41.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:41 compute-1 nova_compute[226101]: 2025-12-06 07:58:41.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:41 compute-1 sshd-session[296577]: Received disconnect from 136.112.8.45 port 46918:11: Bye Bye [preauth]
Dec 06 07:58:41 compute-1 sshd-session[296577]: Disconnected from authenticating user root 136.112.8.45 port 46918 [preauth]
Dec 06 07:58:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/319071054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:41.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:42 compute-1 nova_compute[226101]: 2025-12-06 07:58:42.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:42 compute-1 ceph-mon[81689]: pgmap v3173: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Dec 06 07:58:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:43.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:43 compute-1 nova_compute[226101]: 2025-12-06 07:58:43.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:43.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:44 compute-1 nova_compute[226101]: 2025-12-06 07:58:44.155 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:44 compute-1 ceph-mon[81689]: pgmap v3174: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Dec 06 07:58:44 compute-1 nova_compute[226101]: 2025-12-06 07:58:44.939 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:45.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:45.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:46 compute-1 nova_compute[226101]: 2025-12-06 07:58:46.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:46 compute-1 ceph-mon[81689]: pgmap v3175: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 6 op/s
Dec 06 07:58:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:47.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:47.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:49 compute-1 nova_compute[226101]: 2025-12-06 07:58:49.157 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:49.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:49 compute-1 ceph-mon[81689]: pgmap v3176: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Dec 06 07:58:49 compute-1 nova_compute[226101]: 2025-12-06 07:58:49.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:49 compute-1 nova_compute[226101]: 2025-12-06 07:58:49.943 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:49.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:50 compute-1 ceph-mon[81689]: pgmap v3177: 305 pgs: 305 active+clean; 132 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 437 KiB/s wr, 18 op/s
Dec 06 07:58:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:51.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:51.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:52 compute-1 ceph-mon[81689]: pgmap v3178: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:58:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:53.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1376359152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:53.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:54 compute-1 nova_compute[226101]: 2025-12-06 07:58:54.159 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:54 compute-1 ceph-mon[81689]: pgmap v3179: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:58:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2637314742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:58:54 compute-1 nova_compute[226101]: 2025-12-06 07:58:54.945 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:55.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:55.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:56 compute-1 ceph-mon[81689]: pgmap v3180: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:58:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:57.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:57 compute-1 nova_compute[226101]: 2025-12-06 07:58:57.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:57.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:58 compute-1 ceph-mon[81689]: pgmap v3181: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 07:58:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3401036419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:58:59 compute-1 nova_compute[226101]: 2025-12-06 07:58:59.160 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:58:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:59.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:59 compute-1 nova_compute[226101]: 2025-12-06 07:58:59.947 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:59.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:01 compute-1 ceph-mon[81689]: pgmap v3182: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 07:59:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:01.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:01.679 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:01.680 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:01.680 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3125294339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:02.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:02 compute-1 podman[296581]: 2025-12-06 07:59:02.103792183 +0000 UTC m=+0.075773240 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:59:02 compute-1 podman[296580]: 2025-12-06 07:59:02.116160694 +0000 UTC m=+0.081956294 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:59:02 compute-1 podman[296582]: 2025-12-06 07:59:02.147779391 +0000 UTC m=+0.107406057 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:59:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:03.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:03 compute-1 ceph-mon[81689]: pgmap v3183: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Dec 06 07:59:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:04.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:04 compute-1 nova_compute[226101]: 2025-12-06 07:59:04.162 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:04 compute-1 nova_compute[226101]: 2025-12-06 07:59:04.611 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:04 compute-1 nova_compute[226101]: 2025-12-06 07:59:04.611 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:59:04 compute-1 ceph-mon[81689]: pgmap v3184: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 3 op/s
Dec 06 07:59:04 compute-1 nova_compute[226101]: 2025-12-06 07:59:04.633 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:59:04 compute-1 nova_compute[226101]: 2025-12-06 07:59:04.949 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:05.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:06.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:07 compute-1 ceph-mon[81689]: pgmap v3185: 305 pgs: 305 active+clean; 196 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 287 KiB/s rd, 1.2 MiB/s wr, 43 op/s
Dec 06 07:59:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:07.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:08.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:08 compute-1 nova_compute[226101]: 2025-12-06 07:59:08.777 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:08.777 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:59:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:08.778 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:59:08 compute-1 nova_compute[226101]: 2025-12-06 07:59:08.879 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "e150c7b6-c537-4096-abbc-21b5ab35233e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:08 compute-1 nova_compute[226101]: 2025-12-06 07:59:08.880 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:08 compute-1 nova_compute[226101]: 2025-12-06 07:59:08.931 226109 DEBUG nova.compute.manager [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:59:09 compute-1 nova_compute[226101]: 2025-12-06 07:59:09.044 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:09 compute-1 nova_compute[226101]: 2025-12-06 07:59:09.045 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:09 compute-1 nova_compute[226101]: 2025-12-06 07:59:09.051 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:59:09 compute-1 nova_compute[226101]: 2025-12-06 07:59:09.051 226109 INFO nova.compute.claims [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Claim successful on node compute-1.ctlplane.example.com
Dec 06 07:59:09 compute-1 nova_compute[226101]: 2025-12-06 07:59:09.163 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:09 compute-1 nova_compute[226101]: 2025-12-06 07:59:09.221 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:09 compute-1 ceph-mon[81689]: pgmap v3186: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec 06 07:59:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/137019635' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:59:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/137019635' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:59:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:09.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:59:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3664579802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:09 compute-1 nova_compute[226101]: 2025-12-06 07:59:09.849 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:09 compute-1 nova_compute[226101]: 2025-12-06 07:59:09.855 226109 DEBUG nova.compute.provider_tree [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:59:09 compute-1 nova_compute[226101]: 2025-12-06 07:59:09.951 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:10.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3664579802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:11.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:11 compute-1 ceph-mon[81689]: pgmap v3187: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 07:59:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:12.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:12 compute-1 nova_compute[226101]: 2025-12-06 07:59:12.344 226109 DEBUG nova.scheduler.client.report [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:59:12 compute-1 nova_compute[226101]: 2025-12-06 07:59:12.389 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:12 compute-1 nova_compute[226101]: 2025-12-06 07:59:12.390 226109 DEBUG nova.compute.manager [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:59:12 compute-1 ceph-mon[81689]: pgmap v3188: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 07:59:12 compute-1 nova_compute[226101]: 2025-12-06 07:59:12.808 226109 DEBUG nova.compute.manager [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:59:12 compute-1 nova_compute[226101]: 2025-12-06 07:59:12.809 226109 DEBUG nova.network.neutron [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:59:12 compute-1 nova_compute[226101]: 2025-12-06 07:59:12.833 226109 INFO nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:59:12 compute-1 nova_compute[226101]: 2025-12-06 07:59:12.861 226109 DEBUG nova.compute.manager [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.021 226109 DEBUG nova.compute.manager [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.022 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.022 226109 INFO nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Creating image(s)
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.048 226109 DEBUG nova.storage.rbd_utils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image e150c7b6-c537-4096-abbc-21b5ab35233e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.075 226109 DEBUG nova.storage.rbd_utils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image e150c7b6-c537-4096-abbc-21b5ab35233e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.099 226109 DEBUG nova.storage.rbd_utils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image e150c7b6-c537-4096-abbc-21b5ab35233e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.103 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.160 226109 DEBUG nova.policy [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.164 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.164 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.165 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.165 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.189 226109 DEBUG nova.storage.rbd_utils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image e150c7b6-c537-4096-abbc-21b5ab35233e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:13 compute-1 nova_compute[226101]: 2025-12-06 07:59:13.192 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e150c7b6-c537-4096-abbc-21b5ab35233e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:13.781 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:14.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:14 compute-1 nova_compute[226101]: 2025-12-06 07:59:14.164 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:14 compute-1 nova_compute[226101]: 2025-12-06 07:59:14.739 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e150c7b6-c537-4096-abbc-21b5ab35233e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:14 compute-1 nova_compute[226101]: 2025-12-06 07:59:14.822 226109 DEBUG nova.storage.rbd_utils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image e150c7b6-c537-4096-abbc-21b5ab35233e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:59:14 compute-1 nova_compute[226101]: 2025-12-06 07:59:14.955 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:15 compute-1 nova_compute[226101]: 2025-12-06 07:59:15.149 226109 DEBUG nova.objects.instance [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid e150c7b6-c537-4096-abbc-21b5ab35233e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:59:15 compute-1 nova_compute[226101]: 2025-12-06 07:59:15.178 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:59:15 compute-1 nova_compute[226101]: 2025-12-06 07:59:15.178 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Ensure instance console log exists: /var/lib/nova/instances/e150c7b6-c537-4096-abbc-21b5ab35233e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:59:15 compute-1 nova_compute[226101]: 2025-12-06 07:59:15.179 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:15 compute-1 nova_compute[226101]: 2025-12-06 07:59:15.179 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:15 compute-1 nova_compute[226101]: 2025-12-06 07:59:15.179 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:15 compute-1 ceph-mon[81689]: pgmap v3189: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Dec 06 07:59:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:15 compute-1 nova_compute[226101]: 2025-12-06 07:59:15.844 226109 DEBUG nova.network.neutron [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Successfully created port: 8240e8bf-1c61-4830-aa30-d8cc74050992 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:59:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:16.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1601421389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3820747644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:16 compute-1 nova_compute[226101]: 2025-12-06 07:59:16.930 226109 DEBUG nova.network.neutron [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Successfully updated port: 8240e8bf-1c61-4830-aa30-d8cc74050992 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:59:16 compute-1 nova_compute[226101]: 2025-12-06 07:59:16.954 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:59:16 compute-1 nova_compute[226101]: 2025-12-06 07:59:16.954 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:59:16 compute-1 nova_compute[226101]: 2025-12-06 07:59:16.954 226109 DEBUG nova.network.neutron [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:59:17 compute-1 nova_compute[226101]: 2025-12-06 07:59:17.045 226109 DEBUG nova.compute.manager [req-128bfcb5-f5ba-4a07-851a-71aa820fa9f2 req-bf0e9ed4-eebc-4666-99c8-6ae407e42eaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Received event network-changed-8240e8bf-1c61-4830-aa30-d8cc74050992 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:59:17 compute-1 nova_compute[226101]: 2025-12-06 07:59:17.046 226109 DEBUG nova.compute.manager [req-128bfcb5-f5ba-4a07-851a-71aa820fa9f2 req-bf0e9ed4-eebc-4666-99c8-6ae407e42eaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Refreshing instance network info cache due to event network-changed-8240e8bf-1c61-4830-aa30-d8cc74050992. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:59:17 compute-1 nova_compute[226101]: 2025-12-06 07:59:17.046 226109 DEBUG oslo_concurrency.lockutils [req-128bfcb5-f5ba-4a07-851a-71aa820fa9f2 req-bf0e9ed4-eebc-4666-99c8-6ae407e42eaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:59:17 compute-1 nova_compute[226101]: 2025-12-06 07:59:17.116 226109 DEBUG nova.network.neutron [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:59:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:17.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:17 compute-1 ceph-mon[81689]: pgmap v3190: 305 pgs: 305 active+clean; 236 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 121 op/s
Dec 06 07:59:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:18.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.529 226109 DEBUG nova.network.neutron [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updating instance_info_cache with network_info: [{"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.549 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.550 226109 DEBUG nova.compute.manager [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Instance network_info: |[{"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.551 226109 DEBUG oslo_concurrency.lockutils [req-128bfcb5-f5ba-4a07-851a-71aa820fa9f2 req-bf0e9ed4-eebc-4666-99c8-6ae407e42eaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.552 226109 DEBUG nova.network.neutron [req-128bfcb5-f5ba-4a07-851a-71aa820fa9f2 req-bf0e9ed4-eebc-4666-99c8-6ae407e42eaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Refreshing network info cache for port 8240e8bf-1c61-4830-aa30-d8cc74050992 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.562 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Start _get_guest_xml network_info=[{"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.569 226109 WARNING nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:59:18 compute-1 ceph-mon[81689]: pgmap v3191: 305 pgs: 305 active+clean; 266 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 99 op/s
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.585 226109 DEBUG nova.virt.libvirt.host [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.586 226109 DEBUG nova.virt.libvirt.host [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.590 226109 DEBUG nova.virt.libvirt.host [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.591 226109 DEBUG nova.virt.libvirt.host [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.592 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.593 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.593 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.593 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.594 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.594 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.594 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.594 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.595 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.595 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.595 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.596 226109 DEBUG nova.virt.hardware [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
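Annotation: the sequence above shows a preferred topology of 0:0:0 (no preference) under limits of 65536 each collapsing to the only factorization of one vCPU. A simplified sketch of that selection, enumerating factorizations directly rather than following Nova's actual hardware.py code paths:

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # keep every (sockets, cores, threads) factorization of the vCPU
        # count that stays within the limits logged above
        return [
            (s, c, t)
            for s, c, t in product(range(1, vcpus + 1), repeat=3)
            if s * c * t == vcpus
            and s <= max_sockets and c <= max_cores and t <= max_threads
        ]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single topology logged above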
Dec 06 07:59:18 compute-1 nova_compute[226101]: 2025-12-06 07:59:18.599 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:59:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1432958510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.029 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
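Annotation: the driver learns the Ceph monitor addresses by shelling out to exactly the command logged above. A hedged sketch of that call, assuming the ceph CLI and the client.openstack keyring are configured as in this deployment; the helper name is made up:

    import json
    import subprocess

    def monitor_hosts(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json", "--id", client_id, "--conf", conf]
        )
        # each entry in "mons" carries an addr of the form "ip:port/nonce"
        return [mon["addr"].split("/")[0] for mon in json.loads(out).get("mons", [])]

The three host/port pairs that later appear in the guest XML's <source protocol="rbd"> elements come from this dump.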
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.051 226109 DEBUG nova.storage.rbd_utils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image e150c7b6-c537-4096-abbc-21b5ab35233e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.055 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.167 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:19.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:59:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/29171320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.469 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.471 226109 DEBUG nova.virt.libvirt.vif [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:59:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1504657561',display_name='tempest-TestNetworkBasicOps-server-1504657561',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1504657561',id=182,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJAvo6gnJLtQ+iDyQoMhzODl+npZCwBJ411Cs2wEIHduGTuZOEIjC3CuVXxMQXImpzm7G5xXQpEVCsxgPZi7GfUa3TO9i/q7RIfqFiTkcHVDw16PqOFS53sVFMtvwiUpwQ==',key_name='tempest-TestNetworkBasicOps-774491034',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vvq0z9jl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:59:12Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=e150c7b6-c537-4096-abbc-21b5ab35233e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.471 226109 DEBUG nova.network.os_vif_util [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.472 226109 DEBUG nova.network.os_vif_util [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:18:26,bridge_name='br-int',has_traffic_filtering=True,id=8240e8bf-1c61-4830-aa30-d8cc74050992,network=Network(cdb26d0f-aef5-43f2-bb12-cbd355c4f073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8240e8bf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
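Annotation: the Converting/Converted pair above maps a Neutron-style VIF dict onto an os-vif object. Illustrative stand-in only (a plain dataclass, not the real os_vif model), pulling the same fields that show up in the VIFOpenVSwitch repr:

    from dataclasses import dataclass

    @dataclass
    class OVSVif:  # hypothetical stand-in for os_vif's VIFOpenVSwitch
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool

    def nova_vif_to_ovs(vif: dict) -> OVSVif:
        details = vif["details"]
        return OVSVif(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details["bridge_name"],
            vif_name=vif["devname"],  # "tap" + leading chars of the port UUID
            active=vif["active"],
        )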
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.473 226109 DEBUG nova.objects.instance [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid e150c7b6-c537-4096-abbc-21b5ab35233e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.492 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:59:19 compute-1 nova_compute[226101]:   <uuid>e150c7b6-c537-4096-abbc-21b5ab35233e</uuid>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   <name>instance-000000b6</name>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   <metadata>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <nova:name>tempest-TestNetworkBasicOps-server-1504657561</nova:name>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 07:59:18</nova:creationTime>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <nova:port uuid="8240e8bf-1c61-4830-aa30-d8cc74050992">
Dec 06 07:59:19 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   </metadata>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <system>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <entry name="serial">e150c7b6-c537-4096-abbc-21b5ab35233e</entry>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <entry name="uuid">e150c7b6-c537-4096-abbc-21b5ab35233e</entry>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     </system>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   <os>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   </os>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   <features>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <apic/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   </features>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   </clock>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   </cpu>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   <devices>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/e150c7b6-c537-4096-abbc-21b5ab35233e_disk">
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       </source>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/e150c7b6-c537-4096-abbc-21b5ab35233e_disk.config">
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       </source>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 07:59:19 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       </auth>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     </disk>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:e0:18:26"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <target dev="tap8240e8bf-1c"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     </interface>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/e150c7b6-c537-4096-abbc-21b5ab35233e/console.log" append="off"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     </serial>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <video>
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     </video>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     </rng>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 07:59:19 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 07:59:19 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 07:59:19 compute-1 nova_compute[226101]:   </devices>
Dec 06 07:59:19 compute-1 nova_compute[226101]: </domain>
Dec 06 07:59:19 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
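Annotation: the skeleton of the domain XML above can be reproduced with nothing but the standard library. A minimal sketch, assuming ElementTree is acceptable for illustration (Nova builds this through its own config classes, not ElementTree):

    import xml.etree.ElementTree as ET

    def minimal_domain_xml(uuid, name, memory_kib, vcpus):
        dom = ET.Element("domain", type="kvm")
        ET.SubElement(dom, "uuid").text = uuid
        ET.SubElement(dom, "name").text = name
        ET.SubElement(dom, "memory").text = str(memory_kib)  # KiB: 128 MiB -> 131072
        ET.SubElement(dom, "vcpu").text = str(vcpus)
        os_el = ET.SubElement(dom, "os")
        ET.SubElement(os_el, "type", arch="x86_64", machine="q35").text = "hvm"
        ET.SubElement(os_el, "boot", dev="hd")
        return ET.tostring(dom, encoding="unicode")

    print(minimal_domain_xml("e150c7b6-c537-4096-abbc-21b5ab35233e",
                             "instance-000000b6", 131072, 1))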
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.494 226109 DEBUG nova.compute.manager [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Preparing to wait for external event network-vif-plugged-8240e8bf-1c61-4830-aa30-d8cc74050992 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.494 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.494 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.495 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
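Annotation: the Acquiring/acquired/released triplet above is oslo.concurrency's named in-process lock guarding the per-instance event registry. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the registry and function are simplified stand-ins for nova.compute.manager.InstanceEvents:

    from oslo_concurrency import lockutils

    _events = {}

    def prepare_event(instance_uuid, event_name):
        # lock name mirrors the "<uuid>-events" key seen in the log
        with lockutils.lock(f"{instance_uuid}-events"):
            _events.setdefault(instance_uuid, set()).add(event_name)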
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.496 226109 DEBUG nova.virt.libvirt.vif [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:59:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1504657561',display_name='tempest-TestNetworkBasicOps-server-1504657561',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1504657561',id=182,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJAvo6gnJLtQ+iDyQoMhzODl+npZCwBJ411Cs2wEIHduGTuZOEIjC3CuVXxMQXImpzm7G5xXQpEVCsxgPZi7GfUa3TO9i/q7RIfqFiTkcHVDw16PqOFS53sVFMtvwiUpwQ==',key_name='tempest-TestNetworkBasicOps-774491034',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vvq0z9jl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:59:12Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=e150c7b6-c537-4096-abbc-21b5ab35233e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.496 226109 DEBUG nova.network.os_vif_util [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.497 226109 DEBUG nova.network.os_vif_util [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:18:26,bridge_name='br-int',has_traffic_filtering=True,id=8240e8bf-1c61-4830-aa30-d8cc74050992,network=Network(cdb26d0f-aef5-43f2-bb12-cbd355c4f073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8240e8bf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.497 226109 DEBUG os_vif [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:18:26,bridge_name='br-int',has_traffic_filtering=True,id=8240e8bf-1c61-4830-aa30-d8cc74050992,network=Network(cdb26d0f-aef5-43f2-bb12-cbd355c4f073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8240e8bf-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.497 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.498 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.498 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.500 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.500 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8240e8bf-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.501 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8240e8bf-1c, col_values=(('external_ids', {'iface-id': '8240e8bf-1c61-4830-aa30-d8cc74050992', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:18:26', 'vm-uuid': 'e150c7b6-c537-4096-abbc-21b5ab35233e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
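Annotation: the AddPortCommand/DbSetCommand transaction above is roughly equivalent to a single ovs-vsctl invocation. A sketch under that assumption (os-vif actually speaks OVSDB directly through ovsdbapp rather than shelling out):

    import subprocess

    def plug_ovs_port(bridge, port, iface_id, mac, vm_uuid):
        subprocess.check_call([
            "ovs-vsctl",
            "--", "--may-exist", "add-port", bridge, port,
            "--", "set", "Interface", port,
            f"external_ids:iface-id={iface_id}",
            "external_ids:iface-status=active",
            f"external_ids:attached-mac={mac}",
            f"external_ids:vm-uuid={vm_uuid}",
        ])

The external_ids written here are what ovn-controller matches a moment later when it claims the lport for this chassis.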
Dec 06 07:59:19 compute-1 NetworkManager[49031]: <info>  [1765007959.5500] manager: (tap8240e8bf-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.550 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.555 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.558 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.560 226109 INFO os_vif [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:18:26,bridge_name='br-int',has_traffic_filtering=True,id=8240e8bf-1c61-4830-aa30-d8cc74050992,network=Network(cdb26d0f-aef5-43f2-bb12-cbd355c4f073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8240e8bf-1c')
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.655 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.656 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.656 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:e0:18:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.657 226109 INFO nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Using config drive
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.681 226109 DEBUG nova.storage.rbd_utils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image e150c7b6-c537-4096-abbc-21b5ab35233e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1432958510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/29171320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:19 compute-1 nova_compute[226101]: 2025-12-06 07:59:19.957 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:20.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:20 compute-1 nova_compute[226101]: 2025-12-06 07:59:20.292 226109 INFO nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Creating config drive at /var/lib/nova/instances/e150c7b6-c537-4096-abbc-21b5ab35233e/disk.config
Dec 06 07:59:20 compute-1 nova_compute[226101]: 2025-12-06 07:59:20.296 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e150c7b6-c537-4096-abbc-21b5ab35233e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapmepw_k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:20 compute-1 nova_compute[226101]: 2025-12-06 07:59:20.431 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e150c7b6-c537-4096-abbc-21b5ab35233e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapmepw_k" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:20 compute-1 nova_compute[226101]: 2025-12-06 07:59:20.461 226109 DEBUG nova.storage.rbd_utils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image e150c7b6-c537-4096-abbc-21b5ab35233e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:20 compute-1 nova_compute[226101]: 2025-12-06 07:59:20.465 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e150c7b6-c537-4096-abbc-21b5ab35233e/disk.config e150c7b6-c537-4096-abbc-21b5ab35233e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:20 compute-1 nova_compute[226101]: 2025-12-06 07:59:20.613 226109 DEBUG oslo_concurrency.processutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e150c7b6-c537-4096-abbc-21b5ab35233e/disk.config e150c7b6-c537-4096-abbc-21b5ab35233e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:20 compute-1 nova_compute[226101]: 2025-12-06 07:59:20.613 226109 INFO nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Deleting local config drive /var/lib/nova/instances/e150c7b6-c537-4096-abbc-21b5ab35233e/disk.config because it was imported into RBD.
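Annotation: the three steps above (mkisofs, rbd import, local delete) form the whole RBD-backed config-drive flow. A condensed sketch with paths, flags, and pool taken from this log; error handling and the metadata staging step are omitted, and the publisher string is shortened:

    import os
    import subprocess

    def make_config_drive(instance_uuid, staging_dir,
                          inst_dir="/var/lib/nova/instances"):
        iso = f"{inst_dir}/{instance_uuid}/disk.config"
        subprocess.check_call([
            "/usr/bin/mkisofs", "-o", iso,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", "OpenStack Compute", "-quiet", "-J", "-r",
            "-V", "config-2", staging_dir,
        ])
        subprocess.check_call([
            "rbd", "import", "--pool", "vms", iso,
            f"{instance_uuid}_disk.config", "--image-format=2",
            "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
        ])
        os.unlink(iso)  # per the log: deleted once imported into RBD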
Dec 06 07:59:20 compute-1 kernel: tap8240e8bf-1c: entered promiscuous mode
Dec 06 07:59:20 compute-1 NetworkManager[49031]: <info>  [1765007960.6765] manager: (tap8240e8bf-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/330)
Dec 06 07:59:20 compute-1 ovn_controller[130279]: 2025-12-06T07:59:20Z|00734|binding|INFO|Claiming lport 8240e8bf-1c61-4830-aa30-d8cc74050992 for this chassis.
Dec 06 07:59:20 compute-1 nova_compute[226101]: 2025-12-06 07:59:20.709 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:20 compute-1 ovn_controller[130279]: 2025-12-06T07:59:20Z|00735|binding|INFO|8240e8bf-1c61-4830-aa30-d8cc74050992: Claiming fa:16:3e:e0:18:26 10.100.0.11
Dec 06 07:59:20 compute-1 ceph-mon[81689]: pgmap v3192: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 3.5 MiB/s wr, 77 op/s
Dec 06 07:59:20 compute-1 systemd-udevd[296965]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:59:20 compute-1 NetworkManager[49031]: <info>  [1765007960.7434] device (tap8240e8bf-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:59:20 compute-1 NetworkManager[49031]: <info>  [1765007960.7447] device (tap8240e8bf-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:59:20 compute-1 nova_compute[226101]: 2025-12-06 07:59:20.805 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:20 compute-1 ovn_controller[130279]: 2025-12-06T07:59:20Z|00736|binding|INFO|Setting lport 8240e8bf-1c61-4830-aa30-d8cc74050992 ovn-installed in OVS
Dec 06 07:59:20 compute-1 nova_compute[226101]: 2025-12-06 07:59:20.810 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:20 compute-1 systemd-machined[190302]: New machine qemu-85-instance-000000b6.
Dec 06 07:59:20 compute-1 systemd[1]: Started Virtual Machine qemu-85-instance-000000b6.
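Annotation: at this point libvirt has started the transient domain that systemd-machined registers above. The launch step, reduced to the libvirt Python binding (a sketch; Nova drives this through its own host/guest wrappers, and the connection URI is the usual default rather than anything read from this log):

    import libvirt  # python3-libvirt

    def launch_transient(xml: str):
        conn = libvirt.open("qemu:///system")
        try:
            # createXML() defines and starts a transient domain in one call
            return conn.createXML(xml, 0)
        finally:
            conn.close()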
Dec 06 07:59:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:20 compute-1 ovn_controller[130279]: 2025-12-06T07:59:20Z|00737|binding|INFO|Setting lport 8240e8bf-1c61-4830-aa30-d8cc74050992 up in Southbound
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.897 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:18:26 10.100.0.11'], port_security=['fa:16:3e:e0:18:26 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e150c7b6-c537-4096-abbc-21b5ab35233e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdb26d0f-aef5-43f2-bb12-cbd355c4f073', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e06d517-5e8c-42ff-a272-4d739dd550f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15307fb5-6d15-49a7-8b9f-f345c03de2d2, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=8240e8bf-1c61-4830-aa30-d8cc74050992) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.898 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 8240e8bf-1c61-4830-aa30-d8cc74050992 in datapath cdb26d0f-aef5-43f2-bb12-cbd355c4f073 bound to our chassis
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.899 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cdb26d0f-aef5-43f2-bb12-cbd355c4f073
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.909 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4d42b56b-15b2-4ae4-a65d-94cd719cde15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.910 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcdb26d0f-a1 in ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
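Annotation: the namespace and veth names above follow a recognizable pattern. A hedged reconstruction of that naming, derived from this log rather than from Neutron's source, so treat it as illustrative only:

    def metadata_names(network_id: str):
        # "ovnmeta-" + full network UUID for the namespace; the veth pair
        # truncates the UUID so names stay within the 15-char IFNAMSIZ limit
        ns = f"ovnmeta-{network_id}"
        stem = f"tap{network_id[:10]}"
        return ns, f"{stem}0", f"{stem}1"

    ns, outer, inner = metadata_names("cdb26d0f-aef5-43f2-bb12-cbd355c4f073")
    assert (outer, inner) == ("tapcdb26d0f-a0", "tapcdb26d0f-a1")  # matches the log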
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.912 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcdb26d0f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.913 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b69e9b8c-6772-4a22-823c-57cc7bcf8bde]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.913 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a629babc-23fb-4f58-a133-7f81b95242ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.924 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c175f0-918f-4d1f-a780-b175cf2324f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.938 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b82cc59c-4e38-4e24-b404-d9a6745cc457]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.967 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[42638da5-0c81-4497-bcc1-e146b14087f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:20 compute-1 systemd-udevd[296967]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:59:20 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:20.974 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7fbd39-0556-446c-a85b-e90b252e075b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:20 compute-1 NetworkManager[49031]: <info>  [1765007960.9756] manager: (tapcdb26d0f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/331)
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.005 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[797abc0d-37c5-4b31-a163-e680fa7549ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.009 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b8cf4e23-9162-40b4-a917-5ab846199895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:21 compute-1 NetworkManager[49031]: <info>  [1765007961.0354] device (tapcdb26d0f-a0): carrier: link connected
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.042 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f1aa5e8d-8044-40e4-b0e8-7580ac193b17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.059 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[07b5bde2-7981-4c4b-ac8c-afabe7673a39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdb26d0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:8c:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 824053, 'reachable_time': 29072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297001, 'error': None, 'target': 'ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.074 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fb019e32-bbe1-49d0-adc7-2c438e535f52]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:8cf8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 824053, 'tstamp': 824053}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297002, 'error': None, 'target': 'ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.092 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f5583751-d2eb-4c99-802a-b35ea5e3facb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdb26d0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:8c:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 824053, 'reachable_time': 29072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297003, 'error': None, 'target': 'ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.122 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4d614e-8ef4-498e-971a-338dfc40b7b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.160 226109 DEBUG nova.network.neutron [req-128bfcb5-f5ba-4a07-851a-71aa820fa9f2 req-bf0e9ed4-eebc-4666-99c8-6ae407e42eaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updated VIF entry in instance network info cache for port 8240e8bf-1c61-4830-aa30-d8cc74050992. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.161 226109 DEBUG nova.network.neutron [req-128bfcb5-f5ba-4a07-851a-71aa820fa9f2 req-bf0e9ed4-eebc-4666-99c8-6ae407e42eaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updating instance_info_cache with network_info: [{"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.180 226109 DEBUG oslo_concurrency.lockutils [req-128bfcb5-f5ba-4a07-851a-71aa820fa9f2 req-bf0e9ed4-eebc-4666-99c8-6ae407e42eaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.180 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[158244c6-137b-4f44-bcb5-335d2e8bbd81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.181 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdb26d0f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.182 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.182 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdb26d0f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:21 compute-1 kernel: tapcdb26d0f-a0: entered promiscuous mode
Dec 06 07:59:21 compute-1 NetworkManager[49031]: <info>  [1765007961.1863] manager: (tapcdb26d0f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.184 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.188 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcdb26d0f-a0, col_values=(('external_ids', {'iface-id': '6c364430-b8fb-4cd3-a49f-cc09c991f74b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:21 compute-1 ovn_controller[130279]: 2025-12-06T07:59:21Z|00738|binding|INFO|Releasing lport 6c364430-b8fb-4cd3-a49f-cc09c991f74b from this chassis (sb_readonly=0)
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.189 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.201 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.202 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cdb26d0f-aef5-43f2-bb12-cbd355c4f073.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cdb26d0f-aef5-43f2-bb12-cbd355c4f073.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.203 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[44126b97-d347-44a0-bb33-6f1300d66bc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.204 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: global
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-cdb26d0f-aef5-43f2-bb12-cbd355c4f073
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/cdb26d0f-aef5-43f2-bb12-cbd355c4f073.pid.haproxy
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID cdb26d0f-aef5-43f2-bb12-cbd355c4f073
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:59:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:21.205 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073', 'env', 'PROCESS_TAG=haproxy-cdb26d0f-aef5-43f2-bb12-cbd355c4f073', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cdb26d0f-aef5-43f2-bb12-cbd355c4f073.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:59:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:21.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.329 226109 DEBUG nova.compute.manager [req-0da78585-cdab-4cb2-94ea-e2f5369974a5 req-e963af6b-ed1a-48ff-bb94-cd21938d0dd3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Received event network-vif-plugged-8240e8bf-1c61-4830-aa30-d8cc74050992 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.331 226109 DEBUG oslo_concurrency.lockutils [req-0da78585-cdab-4cb2-94ea-e2f5369974a5 req-e963af6b-ed1a-48ff-bb94-cd21938d0dd3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.331 226109 DEBUG oslo_concurrency.lockutils [req-0da78585-cdab-4cb2-94ea-e2f5369974a5 req-e963af6b-ed1a-48ff-bb94-cd21938d0dd3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.331 226109 DEBUG oslo_concurrency.lockutils [req-0da78585-cdab-4cb2-94ea-e2f5369974a5 req-e963af6b-ed1a-48ff-bb94-cd21938d0dd3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.332 226109 DEBUG nova.compute.manager [req-0da78585-cdab-4cb2-94ea-e2f5369974a5 req-e963af6b-ed1a-48ff-bb94-cd21938d0dd3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Processing event network-vif-plugged-8240e8bf-1c61-4830-aa30-d8cc74050992 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.381 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007961.381532, e150c7b6-c537-4096-abbc-21b5ab35233e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.382 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] VM Started (Lifecycle Event)
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.384 226109 DEBUG nova.compute.manager [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.389 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.391 226109 INFO nova.virt.libvirt.driver [-] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Instance spawned successfully.
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.392 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.406 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.409 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.417 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.417 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.418 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.418 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.419 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.419 226109 DEBUG nova.virt.libvirt.driver [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.429 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.429 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007961.3816788, e150c7b6-c537-4096-abbc-21b5ab35233e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.429 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] VM Paused (Lifecycle Event)
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.457 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.460 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765007961.3880045, e150c7b6-c537-4096-abbc-21b5ab35233e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.460 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] VM Resumed (Lifecycle Event)
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.491 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.495 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.504 226109 INFO nova.compute.manager [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Took 8.48 seconds to spawn the instance on the hypervisor.
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.504 226109 DEBUG nova.compute.manager [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.516 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.589 226109 INFO nova.compute.manager [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Took 12.58 seconds to build instance.
Dec 06 07:59:21 compute-1 podman[297075]: 2025-12-06 07:59:21.596515392 +0000 UTC m=+0.054339815 container create 3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:59:21 compute-1 nova_compute[226101]: 2025-12-06 07:59:21.608 226109 DEBUG oslo_concurrency.lockutils [None req-c14bff0e-57cd-434a-86a2-18cd0a4137d5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:21 compute-1 systemd[1]: Started libpod-conmon-3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42.scope.
Dec 06 07:59:21 compute-1 systemd[1]: Started libcrun container.
Dec 06 07:59:21 compute-1 podman[297075]: 2025-12-06 07:59:21.565065641 +0000 UTC m=+0.022890114 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:59:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951ca6e869fba8b3882b88d2515f1d4e5ee1ab2d6b157ec6602f70a5feee3b97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:21 compute-1 podman[297075]: 2025-12-06 07:59:21.696808398 +0000 UTC m=+0.154632841 container init 3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:59:21 compute-1 podman[297075]: 2025-12-06 07:59:21.704332249 +0000 UTC m=+0.162156672 container start 3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:59:21 compute-1 neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073[297090]: [NOTICE]   (297094) : New worker (297096) forked
Dec 06 07:59:21 compute-1 neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073[297090]: [NOTICE]   (297094) : Loading success.
Dec 06 07:59:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:22.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:23 compute-1 ceph-mon[81689]: pgmap v3193: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Dec 06 07:59:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:59:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:23.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:59:23 compute-1 nova_compute[226101]: 2025-12-06 07:59:23.486 226109 DEBUG nova.compute.manager [req-319f2ddc-71f6-41db-a1cf-47c677ab59a8 req-e40e4c9d-8241-4f67-b157-2d949986409b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Received event network-vif-plugged-8240e8bf-1c61-4830-aa30-d8cc74050992 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:59:23 compute-1 nova_compute[226101]: 2025-12-06 07:59:23.486 226109 DEBUG oslo_concurrency.lockutils [req-319f2ddc-71f6-41db-a1cf-47c677ab59a8 req-e40e4c9d-8241-4f67-b157-2d949986409b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:23 compute-1 nova_compute[226101]: 2025-12-06 07:59:23.486 226109 DEBUG oslo_concurrency.lockutils [req-319f2ddc-71f6-41db-a1cf-47c677ab59a8 req-e40e4c9d-8241-4f67-b157-2d949986409b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:23 compute-1 nova_compute[226101]: 2025-12-06 07:59:23.486 226109 DEBUG oslo_concurrency.lockutils [req-319f2ddc-71f6-41db-a1cf-47c677ab59a8 req-e40e4c9d-8241-4f67-b157-2d949986409b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:23 compute-1 nova_compute[226101]: 2025-12-06 07:59:23.487 226109 DEBUG nova.compute.manager [req-319f2ddc-71f6-41db-a1cf-47c677ab59a8 req-e40e4c9d-8241-4f67-b157-2d949986409b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] No waiting events found dispatching network-vif-plugged-8240e8bf-1c61-4830-aa30-d8cc74050992 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:59:23 compute-1 nova_compute[226101]: 2025-12-06 07:59:23.487 226109 WARNING nova.compute.manager [req-319f2ddc-71f6-41db-a1cf-47c677ab59a8 req-e40e4c9d-8241-4f67-b157-2d949986409b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Received unexpected event network-vif-plugged-8240e8bf-1c61-4830-aa30-d8cc74050992 for instance with vm_state active and task_state None.
Dec 06 07:59:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:24.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:24 compute-1 sudo[297105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:24 compute-1 sudo[297105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:24 compute-1 sudo[297105]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:24 compute-1 sudo[297130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:59:24 compute-1 sudo[297130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:24 compute-1 sudo[297130]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:24 compute-1 sudo[297155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:24 compute-1 nova_compute[226101]: 2025-12-06 07:59:24.550 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:24 compute-1 sudo[297155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:24 compute-1 sudo[297155]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:24 compute-1 sudo[297180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:59:24 compute-1 sudo[297180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:24 compute-1 nova_compute[226101]: 2025-12-06 07:59:24.975 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:25 compute-1 ceph-mon[81689]: pgmap v3194: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Dec 06 07:59:25 compute-1 sudo[297180]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:25.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:26.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:27.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:27 compute-1 ceph-mon[81689]: pgmap v3195: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 199 op/s
Dec 06 07:59:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:59:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:59:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:59:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:59:27 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:59:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:28.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:28 compute-1 ceph-mon[81689]: pgmap v3196: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 214 op/s
Dec 06 07:59:28 compute-1 nova_compute[226101]: 2025-12-06 07:59:28.625 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:28 compute-1 nova_compute[226101]: 2025-12-06 07:59:28.626 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:59:28 compute-1 nova_compute[226101]: 2025-12-06 07:59:28.626 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:59:28 compute-1 nova_compute[226101]: 2025-12-06 07:59:28.890 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:59:28 compute-1 nova_compute[226101]: 2025-12-06 07:59:28.890 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:59:28 compute-1 nova_compute[226101]: 2025-12-06 07:59:28.891 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:59:28 compute-1 nova_compute[226101]: 2025-12-06 07:59:28.891 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid e150c7b6-c537-4096-abbc-21b5ab35233e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:59:29 compute-1 nova_compute[226101]: 2025-12-06 07:59:29.195 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:29 compute-1 NetworkManager[49031]: <info>  [1765007969.1961] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Dec 06 07:59:29 compute-1 NetworkManager[49031]: <info>  [1765007969.1971] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Dec 06 07:59:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:29.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:29 compute-1 ovn_controller[130279]: 2025-12-06T07:59:29Z|00739|binding|INFO|Releasing lport 6c364430-b8fb-4cd3-a49f-cc09c991f74b from this chassis (sb_readonly=0)
Dec 06 07:59:29 compute-1 ovn_controller[130279]: 2025-12-06T07:59:29Z|00740|binding|INFO|Releasing lport 6c364430-b8fb-4cd3-a49f-cc09c991f74b from this chassis (sb_readonly=0)
Dec 06 07:59:29 compute-1 nova_compute[226101]: 2025-12-06 07:59:29.410 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:29 compute-1 nova_compute[226101]: 2025-12-06 07:59:29.551 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:29 compute-1 nova_compute[226101]: 2025-12-06 07:59:29.977 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:30.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:59:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.5 total, 600.0 interval
                                           Cumulative writes: 13K writes, 68K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1587 writes, 7956 keys, 1587 commit groups, 1.0 writes per commit group, ingest: 16.10 MB, 0.03 MB/s
                                           Interval WAL: 1587 writes, 1587 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     14.6      5.75              0.25        42    0.137       0      0       0.0       0.0
                                             L6      1/0   12.07 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0     39.2     33.5     12.57              1.12        41    0.307    303K    22K       0.0       0.0
                                            Sum      1/0   12.07 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     26.9     27.6     18.32              1.37        83    0.221    303K    22K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.5     35.9     36.0      2.24              0.21        12    0.187     59K   3127       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     39.2     33.5     12.57              1.12        41    0.307    303K    22K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     14.8      5.70              0.25        41    0.139       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.082, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.49 GB write, 0.09 MB/s write, 0.48 GB read, 0.09 MB/s read, 18.3 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 2.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 54.59 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000333 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3126,52.40 MB,17.2366%) FilterBlock(83,839.55 KB,0.269694%) IndexBlock(83,1.37 MB,0.450696%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 07:59:30 compute-1 ceph-mon[81689]: pgmap v3197: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 196 op/s
Dec 06 07:59:30 compute-1 nova_compute[226101]: 2025-12-06 07:59:30.579 226109 DEBUG nova.compute.manager [req-dfc46bbf-4159-4af6-a023-a2bab0c3e8e3 req-5e772c19-2dfa-4a03-83e0-cf9956b4c997 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Received event network-changed-8240e8bf-1c61-4830-aa30-d8cc74050992 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:59:30 compute-1 nova_compute[226101]: 2025-12-06 07:59:30.579 226109 DEBUG nova.compute.manager [req-dfc46bbf-4159-4af6-a023-a2bab0c3e8e3 req-5e772c19-2dfa-4a03-83e0-cf9956b4c997 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Refreshing instance network info cache due to event network-changed-8240e8bf-1c61-4830-aa30-d8cc74050992. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:59:30 compute-1 nova_compute[226101]: 2025-12-06 07:59:30.580 226109 DEBUG oslo_concurrency.lockutils [req-dfc46bbf-4159-4af6-a023-a2bab0c3e8e3 req-5e772c19-2dfa-4a03-83e0-cf9956b4c997 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:59:30 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:30 compute-1 sshd-session[297237]: Invalid user kali from 45.140.17.124 port 61904
Dec 06 07:59:31 compute-1 sshd-session[297237]: Connection reset by invalid user kali 45.140.17.124 port 61904 [preauth]
Dec 06 07:59:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:31.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:32.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.421 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updating instance_info_cache with network_info: [{"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.451 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.452 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.453 226109 DEBUG oslo_concurrency.lockutils [req-dfc46bbf-4159-4af6-a023-a2bab0c3e8e3 req-5e772c19-2dfa-4a03-83e0-cf9956b4c997 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.454 226109 DEBUG nova.network.neutron [req-dfc46bbf-4159-4af6-a023-a2bab0c3e8e3 req-5e772c19-2dfa-4a03-83e0-cf9956b4c997 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Refreshing network info cache for port 8240e8bf-1c61-4830-aa30-d8cc74050992 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.455 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.494 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.495 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.495 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.495 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.495 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:32 compute-1 ceph-mon[81689]: pgmap v3198: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 495 KiB/s wr, 181 op/s
Dec 06 07:59:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2834288287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:59:32 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2230960405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:32 compute-1 nova_compute[226101]: 2025-12-06 07:59:32.934 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:33 compute-1 podman[297266]: 2025-12-06 07:59:33.052358729 +0000 UTC m=+0.059348329 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 06 07:59:33 compute-1 podman[297265]: 2025-12-06 07:59:33.071301617 +0000 UTC m=+0.090591017 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:59:33 compute-1 nova_compute[226101]: 2025-12-06 07:59:33.127 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:59:33 compute-1 nova_compute[226101]: 2025-12-06 07:59:33.128 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:59:33 compute-1 podman[297267]: 2025-12-06 07:59:33.132162516 +0000 UTC m=+0.147491270 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:59:33 compute-1 nova_compute[226101]: 2025-12-06 07:59:33.297 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:59:33 compute-1 nova_compute[226101]: 2025-12-06 07:59:33.299 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4112MB free_disk=20.946338653564453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:59:33 compute-1 nova_compute[226101]: 2025-12-06 07:59:33.299 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:33 compute-1 nova_compute[226101]: 2025-12-06 07:59:33.299 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:33.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:33 compute-1 sshd-session[297240]: Connection reset by authenticating user root 45.140.17.124 port 61920 [preauth]
Dec 06 07:59:33 compute-1 sudo[297328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:33 compute-1 sudo[297328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:33 compute-1 sudo[297328]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:33 compute-1 nova_compute[226101]: 2025-12-06 07:59:33.414 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance e150c7b6-c537-4096-abbc-21b5ab35233e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:59:33 compute-1 nova_compute[226101]: 2025-12-06 07:59:33.414 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:59:33 compute-1 nova_compute[226101]: 2025-12-06 07:59:33.414 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:59:33 compute-1 sudo[297353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:59:33 compute-1 sudo[297353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:33 compute-1 sudo[297353]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:33 compute-1 nova_compute[226101]: 2025-12-06 07:59:33.474 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2230960405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/285353139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:33.916014) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973916061, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1935, "num_deletes": 253, "total_data_size": 4498222, "memory_usage": 4577480, "flush_reason": "Manual Compaction"}
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973928315, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 1835587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67232, "largest_seqno": 69161, "table_properties": {"data_size": 1829327, "index_size": 3205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16842, "raw_average_key_size": 21, "raw_value_size": 1815433, "raw_average_value_size": 2318, "num_data_blocks": 142, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007814, "oldest_key_time": 1765007814, "file_creation_time": 1765007973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 12355 microseconds, and 4836 cpu microseconds.
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:33.928374) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 1835587 bytes OK
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:33.928395) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:33.930148) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:33.930161) EVENT_LOG_v1 {"time_micros": 1765007973930157, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:33.930176) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 4489368, prev total WAL file size 4489368, number of live WAL files 2.
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:33.931509) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323736' seq:72057594037927935, type:22 .. '6D6772737461740032353237' seq:0, type:0; will stop at (end)
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(1792KB)], [135(12MB)]
Dec 06 07:59:33 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973931582, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 14493265, "oldest_snapshot_seqno": -1}
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9916 keys, 11730578 bytes, temperature: kUnknown
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007974013334, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 11730578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11668206, "index_size": 36467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24837, "raw_key_size": 261496, "raw_average_key_size": 26, "raw_value_size": 11495844, "raw_average_value_size": 1159, "num_data_blocks": 1384, "num_entries": 9916, "num_filter_entries": 9916, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765007973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:34.013559) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 11730578 bytes
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:34.015025) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.2 rd, 143.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.1 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(14.3) write-amplify(6.4) OK, records in: 10369, records dropped: 453 output_compression: NoCompression
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:34.015044) EVENT_LOG_v1 {"time_micros": 1765007974015035, "job": 86, "event": "compaction_finished", "compaction_time_micros": 81796, "compaction_time_cpu_micros": 30748, "output_level": 6, "num_output_files": 1, "total_output_size": 11730578, "num_input_records": 10369, "num_output_records": 9916, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007974015393, "job": 86, "event": "table_file_deletion", "file_number": 137}
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007974017502, "job": 86, "event": "table_file_deletion", "file_number": 135}
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:33.931308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:34.017560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:34.017565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:34.017566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:34.017568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-07:59:34.017569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:59:34 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1606500053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:34.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:34 compute-1 nova_compute[226101]: 2025-12-06 07:59:34.055 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:34 compute-1 nova_compute[226101]: 2025-12-06 07:59:34.061 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:59:34 compute-1 nova_compute[226101]: 2025-12-06 07:59:34.165 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:59:34 compute-1 nova_compute[226101]: 2025-12-06 07:59:34.194 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:59:34 compute-1 nova_compute[226101]: 2025-12-06 07:59:34.194 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:34 compute-1 nova_compute[226101]: 2025-12-06 07:59:34.480 226109 DEBUG nova.network.neutron [req-dfc46bbf-4159-4af6-a023-a2bab0c3e8e3 req-5e772c19-2dfa-4a03-83e0-cf9956b4c997 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updated VIF entry in instance network info cache for port 8240e8bf-1c61-4830-aa30-d8cc74050992. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:59:34 compute-1 nova_compute[226101]: 2025-12-06 07:59:34.480 226109 DEBUG nova.network.neutron [req-dfc46bbf-4159-4af6-a023-a2bab0c3e8e3 req-5e772c19-2dfa-4a03-83e0-cf9956b4c997 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updating instance_info_cache with network_info: [{"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:59:34 compute-1 nova_compute[226101]: 2025-12-06 07:59:34.497 226109 DEBUG oslo_concurrency.lockutils [req-dfc46bbf-4159-4af6-a023-a2bab0c3e8e3 req-5e772c19-2dfa-4a03-83e0-cf9956b4c997 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:59:34 compute-1 nova_compute[226101]: 2025-12-06 07:59:34.603 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:34 compute-1 ceph-mon[81689]: pgmap v3199: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 37 KiB/s wr, 89 op/s
Dec 06 07:59:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1606500053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:34 compute-1 nova_compute[226101]: 2025-12-06 07:59:34.979 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:35 compute-1 ovn_controller[130279]: 2025-12-06T07:59:35Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:18:26 10.100.0.11
Dec 06 07:59:35 compute-1 ovn_controller[130279]: 2025-12-06T07:59:35Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:18:26 10.100.0.11
Dec 06 07:59:35 compute-1 sshd-session[297399]: Received disconnect from 154.219.116.39 port 39948:11: Bye Bye [preauth]
Dec 06 07:59:35 compute-1 sshd-session[297399]: Disconnected from authenticating user root 154.219.116.39 port 39948 [preauth]
Dec 06 07:59:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:35.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:36.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:36 compute-1 ceph-mon[81689]: pgmap v3200: 305 pgs: 305 active+clean; 344 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.9 MiB/s wr, 183 op/s
Dec 06 07:59:37 compute-1 sshd-session[297379]: Connection reset by authenticating user root 45.140.17.124 port 53356 [preauth]
Dec 06 07:59:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:37.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:38.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:38 compute-1 nova_compute[226101]: 2025-12-06 07:59:38.329 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:38 compute-1 nova_compute[226101]: 2025-12-06 07:59:38.330 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:59:38 compute-1 nova_compute[226101]: 2025-12-06 07:59:38.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:39 compute-1 sshd-session[297404]: Connection reset by authenticating user root 45.140.17.124 port 53370 [preauth]
Dec 06 07:59:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:39.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:39 compute-1 ceph-mon[81689]: pgmap v3201: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 172 op/s
Dec 06 07:59:39 compute-1 nova_compute[226101]: 2025-12-06 07:59:39.607 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:39 compute-1 nova_compute[226101]: 2025-12-06 07:59:39.980 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:40.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:41 compute-1 ceph-mon[81689]: pgmap v3202: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 4.3 MiB/s wr, 131 op/s
Dec 06 07:59:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2980456092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:41.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:41 compute-1 nova_compute[226101]: 2025-12-06 07:59:41.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:41 compute-1 nova_compute[226101]: 2025-12-06 07:59:41.801 226109 INFO nova.compute.manager [None req-909504ed-b1b8-4bcd-b6ff-b7717614bfe3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Get console output
Dec 06 07:59:41 compute-1 nova_compute[226101]: 2025-12-06 07:59:41.807 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 07:59:41 compute-1 sshd-session[297406]: Connection reset by authenticating user root 45.140.17.124 port 53372 [preauth]
Dec 06 07:59:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:42.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3364756026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:43.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:43 compute-1 nova_compute[226101]: 2025-12-06 07:59:43.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:43 compute-1 nova_compute[226101]: 2025-12-06 07:59:43.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:59:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:44.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:59:44 compute-1 nova_compute[226101]: 2025-12-06 07:59:44.609 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:45 compute-1 nova_compute[226101]: 2025-12-06 07:59:45.036 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:45.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:46.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:46 compute-1 nova_compute[226101]: 2025-12-06 07:59:46.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:47.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:47 compute-1 ceph-mon[81689]: pgmap v3203: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Dec 06 07:59:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:48.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:48 compute-1 ceph-mon[81689]: pgmap v3204: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Dec 06 07:59:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1719363118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:48 compute-1 ceph-mon[81689]: pgmap v3205: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 663 KiB/s rd, 4.3 MiB/s wr, 143 op/s
Dec 06 07:59:48 compute-1 ceph-mon[81689]: pgmap v3206: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 348 KiB/s rd, 1.4 MiB/s wr, 50 op/s
Dec 06 07:59:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:49.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:49 compute-1 nova_compute[226101]: 2025-12-06 07:59:49.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:49 compute-1 nova_compute[226101]: 2025-12-06 07:59:49.611 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:50 compute-1 nova_compute[226101]: 2025-12-06 07:59:50.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:50.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:51.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:51 compute-1 ceph-mon[81689]: pgmap v3207: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 25 KiB/s wr, 14 op/s
Dec 06 07:59:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:52.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:52 compute-1 ceph-mon[81689]: pgmap v3208: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 31 KiB/s wr, 14 op/s
Dec 06 07:59:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:53.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:54.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:54 compute-1 nova_compute[226101]: 2025-12-06 07:59:54.613 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:55 compute-1 nova_compute[226101]: 2025-12-06 07:59:55.041 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:55 compute-1 ceph-mon[81689]: pgmap v3209: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 6.2 KiB/s wr, 13 op/s
Dec 06 07:59:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:55.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:56.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:57 compute-1 ceph-mon[81689]: pgmap v3210: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 6.2 KiB/s wr, 13 op/s
Dec 06 07:59:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:57.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:58.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2052642224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:59 compute-1 ceph-mon[81689]: pgmap v3211: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 7.7 KiB/s wr, 1 op/s
Dec 06 07:59:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:59.342 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:59:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 07:59:59.344 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:59:59 compute-1 nova_compute[226101]: 2025-12-06 07:59:59.344 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 07:59:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:59.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:59 compute-1 nova_compute[226101]: 2025-12-06 07:59:59.614 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:00 compute-1 nova_compute[226101]: 2025-12-06 08:00:00.043 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:00.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1256096928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:00 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 08:00:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:01.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:01 compute-1 ceph-mon[81689]: pgmap v3212: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 06 08:00:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:00:01.680 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:00:01.681 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:00:01.682 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:02.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3876234082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:02 compute-1 ceph-mon[81689]: pgmap v3213: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 06 08:00:02 compute-1 nova_compute[226101]: 2025-12-06 08:00:02.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:03.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:04 compute-1 podman[297409]: 2025-12-06 08:00:04.081542982 +0000 UTC m=+0.057517271 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 08:00:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:04.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:04 compute-1 podman[297408]: 2025-12-06 08:00:04.114432422 +0000 UTC m=+0.091268674 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:00:04 compute-1 podman[297410]: 2025-12-06 08:00:04.120607998 +0000 UTC m=+0.092313182 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 08:00:04 compute-1 ceph-mon[81689]: pgmap v3214: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 06 08:00:04 compute-1 nova_compute[226101]: 2025-12-06 08:00:04.617 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:05 compute-1 nova_compute[226101]: 2025-12-06 08:00:05.044 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:00:05.345 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:05.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:05 compute-1 ceph-mgr[82049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 08:00:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:06.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:06 compute-1 ceph-mon[81689]: pgmap v3215: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.1 KiB/s wr, 0 op/s
Dec 06 08:00:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:07.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:08.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:08 compute-1 ceph-mon[81689]: pgmap v3216: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 6.4 KiB/s wr, 1 op/s
Dec 06 08:00:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:09.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:09 compute-1 nova_compute[226101]: 2025-12-06 08:00:09.619 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3263341091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:00:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3263341091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:00:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1989904733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:10 compute-1 nova_compute[226101]: 2025-12-06 08:00:10.046 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:10.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:10 compute-1 ceph-mon[81689]: pgmap v3217: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.4 KiB/s wr, 1 op/s
Dec 06 08:00:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:11.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:12.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:12 compute-1 ceph-mon[81689]: pgmap v3218: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 32 KiB/s wr, 4 op/s
Dec 06 08:00:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:13.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:14.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:14 compute-1 ceph-mon[81689]: pgmap v3219: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 32 KiB/s wr, 4 op/s
Dec 06 08:00:14 compute-1 nova_compute[226101]: 2025-12-06 08:00:14.621 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:15 compute-1 nova_compute[226101]: 2025-12-06 08:00:15.049 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:15 compute-1 ovn_controller[130279]: 2025-12-06T08:00:15Z|00741|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 06 08:00:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:15.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/903707780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:16.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:16 compute-1 ceph-mon[81689]: pgmap v3220: 305 pgs: 305 active+clean; 363 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 779 KiB/s rd, 50 KiB/s wr, 43 op/s
Dec 06 08:00:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e408 e408: 3 total, 3 up, 3 in
Dec 06 08:00:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:17.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:17 compute-1 ceph-mon[81689]: osdmap e408: 3 total, 3 up, 3 in
Dec 06 08:00:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3011078489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:18.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:18 compute-1 sshd-session[297470]: Received disconnect from 136.112.8.45 port 37306:11: Bye Bye [preauth]
Dec 06 08:00:18 compute-1 sshd-session[297470]: Disconnected from authenticating user root 136.112.8.45 port 37306 [preauth]
Dec 06 08:00:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:18 compute-1 ceph-mon[81689]: pgmap v3222: 305 pgs: 305 active+clean; 368 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 622 KiB/s wr, 98 op/s
Dec 06 08:00:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/188999392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:19.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:19 compute-1 nova_compute[226101]: 2025-12-06 08:00:19.622 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:20 compute-1 nova_compute[226101]: 2025-12-06 08:00:20.050 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:20.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:20 compute-1 ceph-mon[81689]: pgmap v3223: 305 pgs: 305 active+clean; 368 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 622 KiB/s wr, 98 op/s
Dec 06 08:00:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3158161411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:21.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1327561922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:22.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:22 compute-1 ceph-mon[81689]: pgmap v3224: 305 pgs: 305 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Dec 06 08:00:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:23.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:24.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:24 compute-1 nova_compute[226101]: 2025-12-06 08:00:24.651 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:24 compute-1 ceph-mon[81689]: pgmap v3225: 305 pgs: 305 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Dec 06 08:00:25 compute-1 nova_compute[226101]: 2025-12-06 08:00:25.052 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:25.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e409 e409: 3 total, 3 up, 3 in
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:25.945999) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008025946081, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 756, "num_deletes": 251, "total_data_size": 1355828, "memory_usage": 1376240, "flush_reason": "Manual Compaction"}
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008025956051, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 894187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69166, "largest_seqno": 69917, "table_properties": {"data_size": 890607, "index_size": 1423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8317, "raw_average_key_size": 19, "raw_value_size": 883367, "raw_average_value_size": 2063, "num_data_blocks": 63, "num_entries": 428, "num_filter_entries": 428, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007974, "oldest_key_time": 1765007974, "file_creation_time": 1765008025, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 10088 microseconds, and 4368 cpu microseconds.
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:25.956099) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 894187 bytes OK
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:25.956120) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:25.958893) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:25.958932) EVENT_LOG_v1 {"time_micros": 1765008025958922, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:25.958955) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1351845, prev total WAL file size 1351845, number of live WAL files 2.
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:25.959937) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(873KB)], [138(11MB)]
Dec 06 08:00:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008025959994, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 12624765, "oldest_snapshot_seqno": -1}
Dec 06 08:00:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:26.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 9830 keys, 10656939 bytes, temperature: kUnknown
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008026404992, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10656939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10596132, "index_size": 35158, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 260451, "raw_average_key_size": 26, "raw_value_size": 10426181, "raw_average_value_size": 1060, "num_data_blocks": 1322, "num_entries": 9830, "num_filter_entries": 9830, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765008025, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:26.405396) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10656939 bytes
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:26.559817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 28.4 rd, 23.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(26.0) write-amplify(11.9) OK, records in: 10344, records dropped: 514 output_compression: NoCompression
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:26.559853) EVENT_LOG_v1 {"time_micros": 1765008026559840, "job": 88, "event": "compaction_finished", "compaction_time_micros": 445088, "compaction_time_cpu_micros": 30713, "output_level": 6, "num_output_files": 1, "total_output_size": 10656939, "num_input_records": 10344, "num_output_records": 9830, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008026560185, "job": 88, "event": "table_file_deletion", "file_number": 140}
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008026562113, "job": 88, "event": "table_file_deletion", "file_number": 138}
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:25.959785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:26.562231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:26.562238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:26.562239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:26.562241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:00:26.562242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:27 compute-1 ceph-mon[81689]: pgmap v3226: 305 pgs: 305 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 210 op/s
Dec 06 08:00:27 compute-1 ceph-mon[81689]: osdmap e409: 3 total, 3 up, 3 in
Dec 06 08:00:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/404955260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:27.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:28.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:29 compute-1 ceph-mon[81689]: pgmap v3228: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 204 op/s
Dec 06 08:00:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:29.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:29 compute-1 nova_compute[226101]: 2025-12-06 08:00:29.653 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:30 compute-1 sshd-session[297475]: Received disconnect from 124.18.141.70 port 38696:11: Bye Bye [preauth]
Dec 06 08:00:30 compute-1 sshd-session[297475]: Disconnected from authenticating user root 124.18.141.70 port 38696 [preauth]
Dec 06 08:00:30 compute-1 nova_compute[226101]: 2025-12-06 08:00:30.095 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:30.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:30 compute-1 nova_compute[226101]: 2025-12-06 08:00:30.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:30 compute-1 nova_compute[226101]: 2025-12-06 08:00:30.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:00:30 compute-1 nova_compute[226101]: 2025-12-06 08:00:30.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:00:31 compute-1 ceph-mon[81689]: pgmap v3229: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 204 op/s
Dec 06 08:00:31 compute-1 nova_compute[226101]: 2025-12-06 08:00:31.146 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:00:31 compute-1 nova_compute[226101]: 2025-12-06 08:00:31.146 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:00:31 compute-1 nova_compute[226101]: 2025-12-06 08:00:31.146 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:00:31 compute-1 nova_compute[226101]: 2025-12-06 08:00:31.146 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid e150c7b6-c537-4096-abbc-21b5ab35233e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:00:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:31.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:32.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:32 compute-1 nova_compute[226101]: 2025-12-06 08:00:32.809 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:33 compute-1 ceph-mon[81689]: pgmap v3230: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 33 KiB/s wr, 164 op/s
Dec 06 08:00:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2998638154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:33.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:33 compute-1 sudo[297477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:33 compute-1 sudo[297477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:33 compute-1 sudo[297477]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:33 compute-1 sudo[297502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:33 compute-1 sudo[297502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:33 compute-1 sudo[297502]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:33 compute-1 sudo[297527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:33 compute-1 sudo[297527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:33 compute-1 sudo[297527]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:33 compute-1 sudo[297552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 08:00:33 compute-1 sudo[297552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:33 compute-1 nova_compute[226101]: 2025-12-06 08:00:33.846 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updating instance_info_cache with network_info: [{"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:00:33 compute-1 nova_compute[226101]: 2025-12-06 08:00:33.864 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:00:33 compute-1 nova_compute[226101]: 2025-12-06 08:00:33.864 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:00:33 compute-1 nova_compute[226101]: 2025-12-06 08:00:33.865 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:33 compute-1 nova_compute[226101]: 2025-12-06 08:00:33.897 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:33 compute-1 nova_compute[226101]: 2025-12-06 08:00:33.897 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:33 compute-1 nova_compute[226101]: 2025-12-06 08:00:33.897 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:33 compute-1 nova_compute[226101]: 2025-12-06 08:00:33.898 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:00:33 compute-1 nova_compute[226101]: 2025-12-06 08:00:33.898 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 e410: 3 total, 3 up, 3 in
Dec 06 08:00:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:34.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:34 compute-1 podman[297637]: 2025-12-06 08:00:34.196654021 +0000 UTC m=+0.075231204 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 08:00:34 compute-1 podman[297645]: 2025-12-06 08:00:34.215622659 +0000 UTC m=+0.064177369 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd)
Dec 06 08:00:34 compute-1 podman[297651]: 2025-12-06 08:00:34.264138778 +0000 UTC m=+0.105040124 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:00:34 compute-1 podman[297726]: 2025-12-06 08:00:34.312112553 +0000 UTC m=+0.057260384 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:00:34 compute-1 nova_compute[226101]: 2025-12-06 08:00:34.372 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:34 compute-1 podman[297726]: 2025-12-06 08:00:34.416695482 +0000 UTC m=+0.161843303 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 08:00:34 compute-1 nova_compute[226101]: 2025-12-06 08:00:34.457 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:00:34 compute-1 nova_compute[226101]: 2025-12-06 08:00:34.458 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:00:34 compute-1 nova_compute[226101]: 2025-12-06 08:00:34.606 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:00:34 compute-1 nova_compute[226101]: 2025-12-06 08:00:34.607 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4023MB free_disk=20.87603759765625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:00:34 compute-1 nova_compute[226101]: 2025-12-06 08:00:34.607 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:34 compute-1 nova_compute[226101]: 2025-12-06 08:00:34.608 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:34 compute-1 nova_compute[226101]: 2025-12-06 08:00:34.655 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:34 compute-1 sudo[297552]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:34 compute-1 sudo[297848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:34 compute-1 sudo[297848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:34 compute-1 sudo[297848]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:34 compute-1 ceph-mon[81689]: pgmap v3231: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 33 KiB/s wr, 164 op/s
Dec 06 08:00:34 compute-1 ceph-mon[81689]: osdmap e410: 3 total, 3 up, 3 in
Dec 06 08:00:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/847114911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2233248341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:35 compute-1 sudo[297873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:35 compute-1 sudo[297873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:35 compute-1 sudo[297873]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.083 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance e150c7b6-c537-4096-abbc-21b5ab35233e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.084 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.084 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:00:35 compute-1 sudo[297898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.097 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:35 compute-1 sudo[297898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.103 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:00:35 compute-1 sudo[297898]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.128 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.129 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
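For orientation, the inventory dict repeated in these two lines maps to usable capacity via the usual Placement convention, capacity = (total - reserved) * allocation_ratio; the sketch below reproduces that arithmetic for the logged values (the formula is the standard Placement one, stated here as an assumption rather than read from this log):

    # Inventory exactly as logged for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, capacity)
    # VCPU 32, MEMORY_MB 7168, DISK_GB 17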
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.153 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:00:35 compute-1 sudo[297923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:00:35 compute-1 sudo[297923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.181 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.232 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:35.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:35 compute-1 sudo[297923]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:00:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3524168111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.685 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
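The two processutils lines above bracket nova's periodic Ceph capacity probe (0.454s round trip). The same probe can be reproduced directly; the subprocess invocation copies the logged command verbatim, while the JSON keys read afterwards follow the usual `ceph df --format=json` layout and should be treated as an assumption about the installed Ceph release (running it also presumes the client.openstack keyring is readable):

    import json
    import subprocess

    # Command line copied from the oslo_concurrency.processutils entry above.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    # Cluster-wide figures the pgmap lines summarise as
    # "1.5 GiB used, 19 GiB / 21 GiB avail".
    print(stats["total_bytes"], stats["total_avail_bytes"])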
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.691 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:00:35 compute-1 sudo[297999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:35 compute-1 sudo[297999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:35 compute-1 sudo[297999]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.719 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.720 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:00:35 compute-1 nova_compute[226101]: 2025-12-06 08:00:35.721 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
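The acquire/release pair around this resource-tracker update (waited 0.000s, held 1.113s) is oslo.concurrency's named-lock idiom; a minimal sketch of the same pattern, with the function body elided and the function name hypothetical:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Runs only while "compute_resources" is held, producing the
        # acquired/released log pairs seen above.
        ...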
Dec 06 08:00:35 compute-1 sudo[298026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:35 compute-1 sudo[298026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:35 compute-1 sudo[298026]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:35 compute-1 sudo[298051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:35 compute-1 sudo[298051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:35 compute-1 sudo[298051]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:35 compute-1 sudo[298076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 08:00:35 compute-1 sudo[298076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3524168111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:00:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:36.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:00:36 compute-1 podman[298142]: 2025-12-06 08:00:36.212113021 +0000 UTC m=+0.035093580 container create 5b11697984750a1f4f8e23c5508402ad6c3ef74ebfbbd6b1b9d095ac6f7b0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:00:36 compute-1 systemd[1]: Started libpod-conmon-5b11697984750a1f4f8e23c5508402ad6c3ef74ebfbbd6b1b9d095ac6f7b0dc8.scope.
Dec 06 08:00:36 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:00:36 compute-1 podman[298142]: 2025-12-06 08:00:36.19751808 +0000 UTC m=+0.020498659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:00:36 compute-1 podman[298142]: 2025-12-06 08:00:36.310348261 +0000 UTC m=+0.133328830 container init 5b11697984750a1f4f8e23c5508402ad6c3ef74ebfbbd6b1b9d095ac6f7b0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:00:36 compute-1 podman[298142]: 2025-12-06 08:00:36.319811855 +0000 UTC m=+0.142792424 container start 5b11697984750a1f4f8e23c5508402ad6c3ef74ebfbbd6b1b9d095ac6f7b0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:00:36 compute-1 podman[298142]: 2025-12-06 08:00:36.323993897 +0000 UTC m=+0.146974476 container attach 5b11697984750a1f4f8e23c5508402ad6c3ef74ebfbbd6b1b9d095ac6f7b0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:00:36 compute-1 systemd[1]: libpod-5b11697984750a1f4f8e23c5508402ad6c3ef74ebfbbd6b1b9d095ac6f7b0dc8.scope: Deactivated successfully.
Dec 06 08:00:36 compute-1 angry_tesla[298158]: 167 167
Dec 06 08:00:36 compute-1 conmon[298158]: conmon 5b11697984750a1f4f8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b11697984750a1f4f8e23c5508402ad6c3ef74ebfbbd6b1b9d095ac6f7b0dc8.scope/container/memory.events
Dec 06 08:00:36 compute-1 podman[298142]: 2025-12-06 08:00:36.328786055 +0000 UTC m=+0.151766664 container died 5b11697984750a1f4f8e23c5508402ad6c3ef74ebfbbd6b1b9d095ac6f7b0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 08:00:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-800e7f8ee6e591d8c7c43b91c3ac79f8a6d482cfc6c1f5b0018b691d7e36af79-merged.mount: Deactivated successfully.
Dec 06 08:00:36 compute-1 podman[298142]: 2025-12-06 08:00:36.37904185 +0000 UTC m=+0.202022439 container remove 5b11697984750a1f4f8e23c5508402ad6c3ef74ebfbbd6b1b9d095ac6f7b0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 08:00:36 compute-1 systemd[1]: libpod-conmon-5b11697984750a1f4f8e23c5508402ad6c3ef74ebfbbd6b1b9d095ac6f7b0dc8.scope: Deactivated successfully.
Dec 06 08:00:36 compute-1 podman[298180]: 2025-12-06 08:00:36.603111089 +0000 UTC m=+0.062775622 container create f4a41bce4e335ad064d810d7a5fbd891691c64f8c2640cd74fe9ecb259fac23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:00:36 compute-1 systemd[1]: Started libpod-conmon-f4a41bce4e335ad064d810d7a5fbd891691c64f8c2640cd74fe9ecb259fac23a.scope.
Dec 06 08:00:36 compute-1 podman[298180]: 2025-12-06 08:00:36.576206639 +0000 UTC m=+0.035871202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:00:36 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:00:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4dfae4a72dd86be0dce49e5dbbe9843cefefabf7535c30fce0888bcc97921c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4dfae4a72dd86be0dce49e5dbbe9843cefefabf7535c30fce0888bcc97921c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4dfae4a72dd86be0dce49e5dbbe9843cefefabf7535c30fce0888bcc97921c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4dfae4a72dd86be0dce49e5dbbe9843cefefabf7535c30fce0888bcc97921c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:36 compute-1 podman[298180]: 2025-12-06 08:00:36.72827599 +0000 UTC m=+0.187940543 container init f4a41bce4e335ad064d810d7a5fbd891691c64f8c2640cd74fe9ecb259fac23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 08:00:36 compute-1 podman[298180]: 2025-12-06 08:00:36.73948395 +0000 UTC m=+0.199148473 container start f4a41bce4e335ad064d810d7a5fbd891691c64f8c2640cd74fe9ecb259fac23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:00:36 compute-1 podman[298180]: 2025-12-06 08:00:36.74358361 +0000 UTC m=+0.203248163 container attach f4a41bce4e335ad064d810d7a5fbd891691c64f8c2640cd74fe9ecb259fac23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:00:37 compute-1 ceph-mon[81689]: pgmap v3233: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 55 KiB/s wr, 168 op/s
Dec 06 08:00:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:37.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:37 compute-1 amazing_swanson[298196]: [
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:     {
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:         "available": false,
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:         "ceph_device": false,
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:         "lsm_data": {},
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:         "lvs": [],
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:         "path": "/dev/sr0",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:         "rejected_reasons": [
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "Insufficient space (<5GB)",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "Has a FileSystem"
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:         ],
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:         "sys_api": {
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "actuators": null,
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "device_nodes": "sr0",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "devname": "sr0",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "human_readable_size": "482.00 KB",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "id_bus": "ata",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "model": "QEMU DVD-ROM",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "nr_requests": "2",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "parent": "/dev/sr0",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "partitions": {},
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "path": "/dev/sr0",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "removable": "1",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "rev": "2.5+",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "ro": "0",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "rotational": "1",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "sas_address": "",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "sas_device_handle": "",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "scheduler_mode": "mq-deadline",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "sectors": 0,
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "sectorsize": "2048",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "size": 493568.0,
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "support_discard": "2048",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "type": "disk",
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:             "vendor": "QEMU"
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:         }
Dec 06 08:00:37 compute-1 amazing_swanson[298196]:     }
Dec 06 08:00:37 compute-1 amazing_swanson[298196]: ]
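The JSON array emitted by the amazing_swanson container above is cephadm's `ceph-volume inventory --format=json-pretty --filter-for-batch` report; on this host the only candidate device is the QEMU DVD drive, rejected on two counts. A short sketch of filtering that output (field names taken from the log; `raw` stands in for the captured array, abbreviated to the fields used):

    import json

    raw = """[{"available": false, "path": "/dev/sr0",
               "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"]}]"""
    devices = json.loads(raw)
    usable = [d["path"] for d in devices if d["available"]]
    rejected = {d["path"]: d["rejected_reasons"] for d in devices if not d["available"]}
    print(usable)    # [] -- no OSD-eligible disks on compute-1
    print(rejected)  # {'/dev/sr0': ['Insufficient space (<5GB)', 'Has a FileSystem']}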
Dec 06 08:00:37 compute-1 systemd[1]: libpod-f4a41bce4e335ad064d810d7a5fbd891691c64f8c2640cd74fe9ecb259fac23a.scope: Deactivated successfully.
Dec 06 08:00:37 compute-1 systemd[1]: libpod-f4a41bce4e335ad064d810d7a5fbd891691c64f8c2640cd74fe9ecb259fac23a.scope: Consumed 1.225s CPU time.
Dec 06 08:00:37 compute-1 podman[298180]: 2025-12-06 08:00:37.965550836 +0000 UTC m=+1.425215379 container died f4a41bce4e335ad064d810d7a5fbd891691c64f8c2640cd74fe9ecb259fac23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 08:00:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-1b4dfae4a72dd86be0dce49e5dbbe9843cefefabf7535c30fce0888bcc97921c-merged.mount: Deactivated successfully.
Dec 06 08:00:38 compute-1 podman[298180]: 2025-12-06 08:00:38.013635283 +0000 UTC m=+1.473299796 container remove f4a41bce4e335ad064d810d7a5fbd891691c64f8c2640cd74fe9ecb259fac23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_swanson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Dec 06 08:00:38 compute-1 systemd[1]: libpod-conmon-f4a41bce4e335ad064d810d7a5fbd891691c64f8c2640cd74fe9ecb259fac23a.scope: Deactivated successfully.
Dec 06 08:00:38 compute-1 sudo[298076]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:38.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:38 compute-1 ceph-mon[81689]: pgmap v3234: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 29 KiB/s wr, 136 op/s
Dec 06 08:00:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:00:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:00:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:00:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:00:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:00:38 compute-1 sshd-session[297473]: Connection closed by 165.154.55.146 port 52404 [preauth]
Dec 06 08:00:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:39.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:39 compute-1 nova_compute[226101]: 2025-12-06 08:00:39.445 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:39 compute-1 nova_compute[226101]: 2025-12-06 08:00:39.447 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:39 compute-1 nova_compute[226101]: 2025-12-06 08:00:39.448 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:00:39 compute-1 nova_compute[226101]: 2025-12-06 08:00:39.657 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:40 compute-1 nova_compute[226101]: 2025-12-06 08:00:40.099 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:40.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:40 compute-1 ceph-mon[81689]: pgmap v3235: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 29 KiB/s wr, 136 op/s
Dec 06 08:00:40 compute-1 sshd[164848]: Timeout before authentication for connection from 14.103.118.136 to 38.102.83.204, pid = 296576
Dec 06 08:00:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:00:41.153 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:00:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:00:41.155 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:00:41 compute-1 nova_compute[226101]: 2025-12-06 08:00:41.185 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:41.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:42.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:42 compute-1 nova_compute[226101]: 2025-12-06 08:00:42.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:42 compute-1 ceph-mon[81689]: pgmap v3236: 305 pgs: 305 active+clean; 441 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 973 KiB/s rd, 2.6 MiB/s wr, 129 op/s
Dec 06 08:00:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/744438241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:43.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:43 compute-1 nova_compute[226101]: 2025-12-06 08:00:43.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1692009206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:44.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:44 compute-1 nova_compute[226101]: 2025-12-06 08:00:44.704 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:45 compute-1 nova_compute[226101]: 2025-12-06 08:00:45.101 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:45 compute-1 ceph-mon[81689]: pgmap v3237: 305 pgs: 305 active+clean; 441 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 973 KiB/s rd, 2.6 MiB/s wr, 129 op/s
Dec 06 08:00:45 compute-1 sudo[299537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:45 compute-1 sudo[299537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:45 compute-1 sudo[299537]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:45.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:45 compute-1 sudo[299562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:00:45 compute-1 sudo[299562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:45 compute-1 sudo[299562]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:45 compute-1 nova_compute[226101]: 2025-12-06 08:00:45.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:46.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:46 compute-1 nova_compute[226101]: 2025-12-06 08:00:46.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:46 compute-1 sshd-session[299587]: Received disconnect from 154.219.116.39 port 41246:11: Bye Bye [preauth]
Dec 06 08:00:46 compute-1 sshd-session[299587]: Disconnected from authenticating user root 154.219.116.39 port 41246 [preauth]
Dec 06 08:00:47 compute-1 ceph-mon[81689]: pgmap v3238: 305 pgs: 305 active+clean; 380 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 868 KiB/s rd, 2.3 MiB/s wr, 141 op/s
Dec 06 08:00:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:47.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:48 compute-1 ovn_controller[130279]: 2025-12-06T08:00:48Z|00742|binding|INFO|Releasing lport 6c364430-b8fb-4cd3-a49f-cc09c991f74b from this chassis (sb_readonly=0)
Dec 06 08:00:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:48.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:48 compute-1 nova_compute[226101]: 2025-12-06 08:00:48.208 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:49 compute-1 ceph-mon[81689]: pgmap v3239: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Dec 06 08:00:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:49.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:49 compute-1 nova_compute[226101]: 2025-12-06 08:00:49.706 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:50 compute-1 nova_compute[226101]: 2025-12-06 08:00:50.104 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:00:50.156 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:50.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1536474158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:50 compute-1 nova_compute[226101]: 2025-12-06 08:00:50.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:00:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3781078514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:00:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:00:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3781078514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:00:51 compute-1 ceph-mon[81689]: pgmap v3240: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Dec 06 08:00:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2562147690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3781078514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:00:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3781078514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:00:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:51.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:52.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:52 compute-1 ceph-mon[81689]: pgmap v3241: 305 pgs: 305 active+clean; 245 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 548 KiB/s rd, 2.2 MiB/s wr, 147 op/s
Dec 06 08:00:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:53.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:54.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:54 compute-1 ceph-mon[81689]: pgmap v3242: 305 pgs: 305 active+clean; 245 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 20 KiB/s wr, 86 op/s
Dec 06 08:00:54 compute-1 nova_compute[226101]: 2025-12-06 08:00:54.745 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:55 compute-1 nova_compute[226101]: 2025-12-06 08:00:55.106 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:55.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:56.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:56 compute-1 ceph-mon[81689]: pgmap v3243: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 20 KiB/s wr, 92 op/s
Dec 06 08:00:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:57.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:58.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:58 compute-1 ceph-mon[81689]: pgmap v3244: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 252 KiB/s rd, 5.3 KiB/s wr, 64 op/s
Dec 06 08:00:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:00:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:59.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:59 compute-1 nova_compute[226101]: 2025-12-06 08:00:59.748 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:00 compute-1 nova_compute[226101]: 2025-12-06 08:01:00.109 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:00.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:00 compute-1 ceph-mon[81689]: pgmap v3245: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 4.3 KiB/s wr, 62 op/s
Dec 06 08:01:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:01.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:01.681 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:01.681 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:01.682 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
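
The three ovn_metadata_agent lines above are oslo.concurrency's standard accounting around a named lock: acquiring, acquired after the wait time, then released with the hold time. A sketch of the same pattern with the public lockutils API; 'demo-check' is a hypothetical lock name, not one from this log:

from oslo_concurrency import lockutils

# Named-lock pattern matching the _check_child_processes entries above.
@lockutils.synchronized('demo-check')
def check_child_processes():
    pass  # runs with the named lock held; lockutils logs the acquire/release timing

check_child_processes()
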
Dec 06 08:01:01 compute-1 CROND[299590]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 08:01:01 compute-1 run-parts[299593]: (/etc/cron.hourly) starting 0anacron
Dec 06 08:01:01 compute-1 run-parts[299599]: (/etc/cron.hourly) finished 0anacron
Dec 06 08:01:01 compute-1 CROND[299589]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 08:01:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:02.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:02 compute-1 ceph-mon[81689]: pgmap v3246: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 4.3 KiB/s wr, 62 op/s
Dec 06 08:01:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:03.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:01:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:04.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:01:04 compute-1 ceph-mon[81689]: pgmap v3247: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 KiB/s rd, 255 B/s wr, 5 op/s
Dec 06 08:01:04 compute-1 nova_compute[226101]: 2025-12-06 08:01:04.751 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:04 compute-1 nova_compute[226101]: 2025-12-06 08:01:04.890 226109 DEBUG nova.compute.manager [req-aec2fad6-6eb4-4e60-ad32-f9e7bd5df038 req-3c2f6e80-0587-4f0c-a38b-0988cc07bb57 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Received event network-changed-8240e8bf-1c61-4830-aa30-d8cc74050992 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:01:04 compute-1 nova_compute[226101]: 2025-12-06 08:01:04.891 226109 DEBUG nova.compute.manager [req-aec2fad6-6eb4-4e60-ad32-f9e7bd5df038 req-3c2f6e80-0587-4f0c-a38b-0988cc07bb57 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Refreshing instance network info cache due to event network-changed-8240e8bf-1c61-4830-aa30-d8cc74050992. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:01:04 compute-1 nova_compute[226101]: 2025-12-06 08:01:04.892 226109 DEBUG oslo_concurrency.lockutils [req-aec2fad6-6eb4-4e60-ad32-f9e7bd5df038 req-3c2f6e80-0587-4f0c-a38b-0988cc07bb57 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:01:04 compute-1 nova_compute[226101]: 2025-12-06 08:01:04.892 226109 DEBUG oslo_concurrency.lockutils [req-aec2fad6-6eb4-4e60-ad32-f9e7bd5df038 req-3c2f6e80-0587-4f0c-a38b-0988cc07bb57 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:01:04 compute-1 nova_compute[226101]: 2025-12-06 08:01:04.893 226109 DEBUG nova.network.neutron [req-aec2fad6-6eb4-4e60-ad32-f9e7bd5df038 req-3c2f6e80-0587-4f0c-a38b-0988cc07bb57 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Refreshing network info cache for port 8240e8bf-1c61-4830-aa30-d8cc74050992 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:01:05 compute-1 podman[299601]: 2025-12-06 08:01:05.104538438 +0000 UTC m=+0.073911600 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 08:01:05 compute-1 podman[299600]: 2025-12-06 08:01:05.118165713 +0000 UTC m=+0.093266198 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.168 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:05 compute-1 podman[299602]: 2025-12-06 08:01:05.195232107 +0000 UTC m=+0.157267451 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.348 226109 DEBUG oslo_concurrency.lockutils [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "e150c7b6-c537-4096-abbc-21b5ab35233e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.348 226109 DEBUG oslo_concurrency.lockutils [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.349 226109 DEBUG oslo_concurrency.lockutils [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.349 226109 DEBUG oslo_concurrency.lockutils [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.350 226109 DEBUG oslo_concurrency.lockutils [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.351 226109 INFO nova.compute.manager [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Terminating instance
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.352 226109 DEBUG nova.compute.manager [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:01:05 compute-1 kernel: tap8240e8bf-1c (unregistering): left promiscuous mode
Dec 06 08:01:05 compute-1 NetworkManager[49031]: <info>  [1765008065.4388] device (tap8240e8bf-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.450 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:05 compute-1 ovn_controller[130279]: 2025-12-06T08:01:05Z|00743|binding|INFO|Releasing lport 8240e8bf-1c61-4830-aa30-d8cc74050992 from this chassis (sb_readonly=0)
Dec 06 08:01:05 compute-1 ovn_controller[130279]: 2025-12-06T08:01:05Z|00744|binding|INFO|Setting lport 8240e8bf-1c61-4830-aa30-d8cc74050992 down in Southbound
Dec 06 08:01:05 compute-1 ovn_controller[130279]: 2025-12-06T08:01:05Z|00745|binding|INFO|Removing iface tap8240e8bf-1c ovn-installed in OVS
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.452 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:05.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.476 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:05 compute-1 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b6.scope: Deactivated successfully.
Dec 06 08:01:05 compute-1 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b6.scope: Consumed 18.026s CPU time.
Dec 06 08:01:05 compute-1 systemd-machined[190302]: Machine qemu-85-instance-000000b6 terminated.
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.582 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.588 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.604 226109 INFO nova.virt.libvirt.driver [-] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Instance destroyed successfully.
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.605 226109 DEBUG nova.objects.instance [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid e150c7b6-c537-4096-abbc-21b5ab35233e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.630 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:18:26 10.100.0.11'], port_security=['fa:16:3e:e0:18:26 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e150c7b6-c537-4096-abbc-21b5ab35233e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdb26d0f-aef5-43f2-bb12-cbd355c4f073', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e06d517-5e8c-42ff-a272-4d739dd550f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15307fb5-6d15-49a7-8b9f-f345c03de2d2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=8240e8bf-1c61-4830-aa30-d8cc74050992) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.633 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 8240e8bf-1c61-4830-aa30-d8cc74050992 in datapath cdb26d0f-aef5-43f2-bb12-cbd355c4f073 unbound from our chassis
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.634 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cdb26d0f-aef5-43f2-bb12-cbd355c4f073, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.636 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc2e583-5469-47cf-a246-aba04c3ba28f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.637 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073 namespace which is not needed anymore
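
The PortBindingUpdatedEvent repr above shows what drives this cleanup: an 'update' event on the Port_Binding table whose row flips from up=[True] to up=[False], after which the agent reports the port unbound and tears the namespace down. A hedged sketch of an event class with that shape, built on ovsdbapp's RowEvent; treat the hook names and the match condition as assumptions rather than the agent's exact code:

from ovsdbapp.backend.ovs_idl import event as row_event

# Mirrors the matched event above: events=('update',), table='Port_Binding'.
class PortDownEvent(row_event.RowEvent):
    def __init__(self):
        super().__init__(('update',), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Fire on the up=[True] -> up=[False] transition seen in the log entry.
        return getattr(old, 'up', None) == [True] and row.up == [False]

    def run(self, event, row, old):
        print('port went down:', row.logical_port)
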
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.708 226109 DEBUG nova.virt.libvirt.vif [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:59:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1504657561',display_name='tempest-TestNetworkBasicOps-server-1504657561',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1504657561',id=182,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJAvo6gnJLtQ+iDyQoMhzODl+npZCwBJ411Cs2wEIHduGTuZOEIjC3CuVXxMQXImpzm7G5xXQpEVCsxgPZi7GfUa3TO9i/q7RIfqFiTkcHVDw16PqOFS53sVFMtvwiUpwQ==',key_name='tempest-TestNetworkBasicOps-774491034',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:59:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-vvq0z9jl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:59:21Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=e150c7b6-c537-4096-abbc-21b5ab35233e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.710 226109 DEBUG nova.network.os_vif_util [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.711 226109 DEBUG nova.network.os_vif_util [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:18:26,bridge_name='br-int',has_traffic_filtering=True,id=8240e8bf-1c61-4830-aa30-d8cc74050992,network=Network(cdb26d0f-aef5-43f2-bb12-cbd355c4f073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8240e8bf-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.712 226109 DEBUG os_vif [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:18:26,bridge_name='br-int',has_traffic_filtering=True,id=8240e8bf-1c61-4830-aa30-d8cc74050992,network=Network(cdb26d0f-aef5-43f2-bb12-cbd355c4f073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8240e8bf-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.715 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.715 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8240e8bf-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.723 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.726 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.730 226109 INFO os_vif [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:18:26,bridge_name='br-int',has_traffic_filtering=True,id=8240e8bf-1c61-4830-aa30-d8cc74050992,network=Network(cdb26d0f-aef5-43f2-bb12-cbd355c4f073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8240e8bf-1c')
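
The "Successfully unplugged vif" above reduces to the single OVSDB transaction logged at 08:01:05.715: DelPortCommand(port=tap8240e8bf-1c, bridge=br-int, if_exists=True). A sketch of the same operation from outside the agent, via the ovs-vsctl CLI; --if-exists corresponds to the if_exists=True flag in that transaction, and the port and bridge names come from the log entries:

import subprocess

# Equivalent of the DelPortCommand transaction above.
subprocess.run(
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap8240e8bf-1c"],
    check=True,
)
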
Dec 06 08:01:05 compute-1 neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073[297090]: [NOTICE]   (297094) : haproxy version is 2.8.14-c23fe91
Dec 06 08:01:05 compute-1 neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073[297090]: [NOTICE]   (297094) : path to executable is /usr/sbin/haproxy
Dec 06 08:01:05 compute-1 neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073[297090]: [WARNING]  (297094) : Exiting Master process...
Dec 06 08:01:05 compute-1 neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073[297090]: [WARNING]  (297094) : Exiting Master process...
Dec 06 08:01:05 compute-1 neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073[297090]: [ALERT]    (297094) : Current worker (297096) exited with code 143 (Terminated)
Dec 06 08:01:05 compute-1 neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073[297090]: [WARNING]  (297094) : All workers exited. Exiting... (0)
Dec 06 08:01:05 compute-1 systemd[1]: libpod-3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42.scope: Deactivated successfully.
Dec 06 08:01:05 compute-1 podman[299716]: 2025-12-06 08:01:05.838817078 +0000 UTC m=+0.050844223 container died 3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 08:01:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-951ca6e869fba8b3882b88d2515f1d4e5ee1ab2d6b157ec6602f70a5feee3b97-merged.mount: Deactivated successfully.
Dec 06 08:01:05 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42-userdata-shm.mount: Deactivated successfully.
Dec 06 08:01:05 compute-1 podman[299716]: 2025-12-06 08:01:05.879315151 +0000 UTC m=+0.091342306 container cleanup 3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:01:05 compute-1 systemd[1]: libpod-conmon-3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42.scope: Deactivated successfully.
Dec 06 08:01:05 compute-1 podman[299751]: 2025-12-06 08:01:05.938751373 +0000 UTC m=+0.042636453 container remove 3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.944 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[36f9a252-96f6-49ac-8259-df457f275907]: (4, ('Sat Dec  6 08:01:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073 (3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42)\n3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42\nSat Dec  6 08:01:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073 (3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42)\n3c6fd5fa3a70ffd5603e7187d0fcbcf5bc603a79f31869371e022e8b1f6ccc42\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
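
The wrapper output embedded in that privsep reply shows the two container operations the agent ran: stop the per-network haproxy container, then delete it. A minimal sketch of those two steps with the podman CLI, using the container name from the log:

import subprocess

name = "neutron-haproxy-ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073"
# Stop, then delete, mirroring the "Stopping container" / "Deleting container"
# lines captured in the privsep reply above.
subprocess.run(["podman", "stop", name], check=True)
subprocess.run(["podman", "rm", name], check=True)
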
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.946 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e8746015-64cb-407f-b4f1-2847a1f33c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.949 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdb26d0f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:01:05 compute-1 kernel: tapcdb26d0f-a0: left promiscuous mode
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.952 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.957 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[64ea4f58-e14c-41c4-bd19-eb00a1edbbe8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:01:05 compute-1 nova_compute[226101]: 2025-12-06 08:01:05.966 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.970 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[83702a84-0f16-4680-8c66-19d18c3ce866]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.971 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[20573e36-87cc-4c82-823d-f09db1cebaf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.988 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b3f31a-e24b-4ba9-bfcd-2bb03b0fdbf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 824046, 'reachable_time': 37548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299766, 'error': None, 'target': 'ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.990 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:01:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:05.991 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[832d38a3-853e-4218-9697-26743ba0d217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:01:05 compute-1 systemd[1]: run-netns-ovnmeta\x2dcdb26d0f\x2daef5\x2d43f2\x2dbb12\x2dcbd355c4f073.mount: Deactivated successfully.
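
The teardown finishes with the ovnmeta- namespace itself: neutron's privileged ip_lib reports it deleted, and systemd then releases the matching /run/netns bind mount. The equivalent operation outside the agent is a single ip(8) call (root required); a sketch with the namespace name taken from the log:

import subprocess

# Same end state as the remove_netns entry above.
subprocess.run(
    ["ip", "netns", "delete", "ovnmeta-cdb26d0f-aef5-43f2-bb12-cbd355c4f073"],
    check=True,
)
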
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.199 226109 DEBUG nova.compute.manager [req-9b3cf3fe-a02e-456b-aecf-9cd4aee90331 req-1358b481-dea6-404d-bfeb-e9ed31458292 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Received event network-vif-unplugged-8240e8bf-1c61-4830-aa30-d8cc74050992 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.199 226109 DEBUG oslo_concurrency.lockutils [req-9b3cf3fe-a02e-456b-aecf-9cd4aee90331 req-1358b481-dea6-404d-bfeb-e9ed31458292 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.200 226109 DEBUG oslo_concurrency.lockutils [req-9b3cf3fe-a02e-456b-aecf-9cd4aee90331 req-1358b481-dea6-404d-bfeb-e9ed31458292 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.200 226109 DEBUG oslo_concurrency.lockutils [req-9b3cf3fe-a02e-456b-aecf-9cd4aee90331 req-1358b481-dea6-404d-bfeb-e9ed31458292 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.200 226109 DEBUG nova.compute.manager [req-9b3cf3fe-a02e-456b-aecf-9cd4aee90331 req-1358b481-dea6-404d-bfeb-e9ed31458292 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] No waiting events found dispatching network-vif-unplugged-8240e8bf-1c61-4830-aa30-d8cc74050992 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.200 226109 DEBUG nova.compute.manager [req-9b3cf3fe-a02e-456b-aecf-9cd4aee90331 req-1358b481-dea6-404d-bfeb-e9ed31458292 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Received event network-vif-unplugged-8240e8bf-1c61-4830-aa30-d8cc74050992 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:01:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:06.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.315 226109 INFO nova.virt.libvirt.driver [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Deleting instance files /var/lib/nova/instances/e150c7b6-c537-4096-abbc-21b5ab35233e_del
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.317 226109 INFO nova.virt.libvirt.driver [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Deletion of /var/lib/nova/instances/e150c7b6-c537-4096-abbc-21b5ab35233e_del complete
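
The _del suffix in those two lines reflects a rename-then-remove pattern for instance files: the instance directory is first renamed out of the live path, then deleted. A sketch of that pattern under the path from the log; that nova performs exactly these two calls is an assumption drawn from the suffix:

import os
import shutil

inst_dir = "/var/lib/nova/instances/e150c7b6-c537-4096-abbc-21b5ab35233e"
target = inst_dir + "_del"
if os.path.isdir(inst_dir):
    os.rename(inst_dir, target)            # move out of the live path first
shutil.rmtree(target, ignore_errors=True)  # then delete the renamed copy
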
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.659 226109 INFO nova.compute.manager [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Took 1.31 seconds to destroy the instance on the hypervisor.
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.659 226109 DEBUG oslo.service.loopingcall [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.660 226109 DEBUG nova.compute.manager [-] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:01:06 compute-1 nova_compute[226101]: 2025-12-06 08:01:06.660 226109 DEBUG nova.network.neutron [-] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:01:06 compute-1 ceph-mon[81689]: pgmap v3248: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 KiB/s rd, 597 B/s wr, 6 op/s
Dec 06 08:01:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:01:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:07.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:01:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 08:01:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:08.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 08:01:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:08 compute-1 ceph-mon[81689]: pgmap v3249: 305 pgs: 305 active+clean; 165 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 3 op/s
Dec 06 08:01:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:09.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2492647070' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:01:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2492647070' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:01:10 compute-1 nova_compute[226101]: 2025-12-06 08:01:10.205 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:10.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:10 compute-1 nova_compute[226101]: 2025-12-06 08:01:10.687 226109 DEBUG nova.compute.manager [req-68516ab2-faa0-436e-b1da-4e7f04e49733 req-0c1d14a3-8acc-4731-956b-718b297729cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Received event network-vif-plugged-8240e8bf-1c61-4830-aa30-d8cc74050992 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:01:10 compute-1 nova_compute[226101]: 2025-12-06 08:01:10.687 226109 DEBUG oslo_concurrency.lockutils [req-68516ab2-faa0-436e-b1da-4e7f04e49733 req-0c1d14a3-8acc-4731-956b-718b297729cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:10 compute-1 nova_compute[226101]: 2025-12-06 08:01:10.688 226109 DEBUG oslo_concurrency.lockutils [req-68516ab2-faa0-436e-b1da-4e7f04e49733 req-0c1d14a3-8acc-4731-956b-718b297729cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:10 compute-1 nova_compute[226101]: 2025-12-06 08:01:10.688 226109 DEBUG oslo_concurrency.lockutils [req-68516ab2-faa0-436e-b1da-4e7f04e49733 req-0c1d14a3-8acc-4731-956b-718b297729cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:10 compute-1 nova_compute[226101]: 2025-12-06 08:01:10.688 226109 DEBUG nova.compute.manager [req-68516ab2-faa0-436e-b1da-4e7f04e49733 req-0c1d14a3-8acc-4731-956b-718b297729cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] No waiting events found dispatching network-vif-plugged-8240e8bf-1c61-4830-aa30-d8cc74050992 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:01:10 compute-1 nova_compute[226101]: 2025-12-06 08:01:10.688 226109 WARNING nova.compute.manager [req-68516ab2-faa0-436e-b1da-4e7f04e49733 req-0c1d14a3-8acc-4731-956b-718b297729cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Received unexpected event network-vif-plugged-8240e8bf-1c61-4830-aa30-d8cc74050992 for instance with vm_state active and task_state deleting.
Dec 06 08:01:10 compute-1 nova_compute[226101]: 2025-12-06 08:01:10.719 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:10 compute-1 ceph-mon[81689]: pgmap v3250: 305 pgs: 305 active+clean; 165 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 3 op/s
Dec 06 08:01:10 compute-1 nova_compute[226101]: 2025-12-06 08:01:10.851 226109 DEBUG nova.network.neutron [req-aec2fad6-6eb4-4e60-ad32-f9e7bd5df038 req-3c2f6e80-0587-4f0c-a38b-0988cc07bb57 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updated VIF entry in instance network info cache for port 8240e8bf-1c61-4830-aa30-d8cc74050992. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:01:10 compute-1 nova_compute[226101]: 2025-12-06 08:01:10.852 226109 DEBUG nova.network.neutron [req-aec2fad6-6eb4-4e60-ad32-f9e7bd5df038 req-3c2f6e80-0587-4f0c-a38b-0988cc07bb57 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updating instance_info_cache with network_info: [{"id": "8240e8bf-1c61-4830-aa30-d8cc74050992", "address": "fa:16:3e:e0:18:26", "network": {"id": "cdb26d0f-aef5-43f2-bb12-cbd355c4f073", "bridge": "br-int", "label": "tempest-network-smoke--1687055238", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8240e8bf-1c", "ovs_interfaceid": "8240e8bf-1c61-4830-aa30-d8cc74050992", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:01:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:11.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:12.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:12 compute-1 ceph-mon[81689]: pgmap v3251: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 1.5 KiB/s wr, 75 op/s
Dec 06 08:01:13 compute-1 nova_compute[226101]: 2025-12-06 08:01:13.084 226109 DEBUG oslo_concurrency.lockutils [req-aec2fad6-6eb4-4e60-ad32-f9e7bd5df038 req-3c2f6e80-0587-4f0c-a38b-0988cc07bb57 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e150c7b6-c537-4096-abbc-21b5ab35233e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:01:13 compute-1 nova_compute[226101]: 2025-12-06 08:01:13.274 226109 DEBUG nova.network.neutron [-] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:01:13 compute-1 nova_compute[226101]: 2025-12-06 08:01:13.328 226109 INFO nova.compute.manager [-] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Took 6.67 seconds to deallocate network for instance.
Dec 06 08:01:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:13.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:13 compute-1 nova_compute[226101]: 2025-12-06 08:01:13.832 226109 DEBUG nova.compute.manager [req-a52936b0-b8ab-475d-ba52-ec5856adf273 req-eea98f5e-2d41-4b9b-b943-9d5e8ea1dde2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Received event network-vif-deleted-8240e8bf-1c61-4830-aa30-d8cc74050992 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:01:13 compute-1 nova_compute[226101]: 2025-12-06 08:01:13.879 226109 DEBUG oslo_concurrency.lockutils [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:13 compute-1 nova_compute[226101]: 2025-12-06 08:01:13.879 226109 DEBUG oslo_concurrency.lockutils [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:13 compute-1 nova_compute[226101]: 2025-12-06 08:01:13.954 226109 DEBUG oslo_concurrency.processutils [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:01:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:14.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:14 compute-1 nova_compute[226101]: 2025-12-06 08:01:14.282 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:14 compute-1 nova_compute[226101]: 2025-12-06 08:01:14.439 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:01:14 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3347912175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:14 compute-1 nova_compute[226101]: 2025-12-06 08:01:14.465 226109 DEBUG oslo_concurrency.processutils [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
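
[annotation] Nova's resource tracker shells out to `ceph df --format=json` through oslo_concurrency.processutils to size the RBD-backed DISK_GB inventory; the same command returns in 0.510 s here and takes over 2 s later in this window, so its latency directly stretches the time the "compute_resources" lock is held. A hedged sketch reproducing the call out-of-band — it assumes a reachable cluster plus the client.openstack keyring, and the JSON key names follow the `ceph df -f json` output, whose exact schema varies by Ceph release:

    import json
    import subprocess

    # Same command nova runs above; needs /etc/ceph/ceph.conf and the
    # client.openstack keyring present on this host.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        timeout=30)
    stats = json.loads(out)["stats"]
    # total_bytes / total_avail_bytes are illustrative key names; verify
    # against your release's output before relying on them.
    gib = 1024 ** 3
    print("total: %.1f GiB, avail: %.1f GiB"
          % (stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib))
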
Dec 06 08:01:14 compute-1 nova_compute[226101]: 2025-12-06 08:01:14.471 226109 DEBUG nova.compute.provider_tree [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:01:14 compute-1 nova_compute[226101]: 2025-12-06 08:01:14.647 226109 DEBUG nova.scheduler.client.report [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
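
[annotation] The inventory dict in the line above is what nova reports to Placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio, consumed in min_unit/step_size quanta. Worked through with the exact values logged here:

    # Placement's effective capacity: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
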
Dec 06 08:01:14 compute-1 nova_compute[226101]: 2025-12-06 08:01:14.688 226109 DEBUG oslo_concurrency.lockutils [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:14 compute-1 nova_compute[226101]: 2025-12-06 08:01:14.737 226109 INFO nova.scheduler.client.report [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance e150c7b6-c537-4096-abbc-21b5ab35233e
Dec 06 08:01:14 compute-1 nova_compute[226101]: 2025-12-06 08:01:14.830 226109 DEBUG oslo_concurrency.lockutils [None req-b503ff16-e7b3-453c-9784-cb8606c0a420 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "e150c7b6-c537-4096-abbc-21b5ab35233e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
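
[annotation] The Acquiring/acquired/released triplets come from oslo.concurrency's lockutils, which nova uses to serialize the resource tracker and per-instance operations — the terminate path above held the instance lock for 9.482 s end to end. A minimal sketch of the same pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Decorator form: the body runs under the named semaphore, mirroring
    # the "compute_resources" lock seen in the log.
    @lockutils.synchronized("compute_resources")
    def update_usage():
        ...  # mutate shared resource-tracker state here

    # Context-manager form, equivalent to the inner() wrapper in the log.
    with lockutils.lock("compute_resources"):
        update_usage.__wrapped__ if hasattr(update_usage, "__wrapped__") else None
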
Dec 06 08:01:14 compute-1 ceph-mon[81689]: pgmap v3252: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 1.5 KiB/s wr, 75 op/s
Dec 06 08:01:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3347912175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:15 compute-1 nova_compute[226101]: 2025-12-06 08:01:15.207 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:15.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:15 compute-1 nova_compute[226101]: 2025-12-06 08:01:15.721 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:16.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:17 compute-1 ceph-mon[81689]: pgmap v3253: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 1.5 KiB/s wr, 96 op/s
Dec 06 08:01:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:17.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:18.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:19 compute-1 ceph-mon[81689]: pgmap v3254: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 1.2 KiB/s wr, 104 op/s
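
[annotation] The ceph-mon pgmap lines summarize cluster state every couple of seconds: 305 PGs all active+clean, 120 MiB of data on a 21 GiB cluster, plus instantaneous client I/O rates when there is traffic. A small sketch extracting the headline numbers; the regex is written against the exact shape of these lines, which can differ between Ceph releases:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v3254: 305 pgs: 305 active+clean; 120 MiB data, "
            "1.3 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 1.2 KiB/s wr, 104 op/s")
    m = PGMAP_RE.search(line)
    if m:
        print(m.group("ver"), m.group("pgs"), m.group("used"), m.group("avail"))
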
Dec 06 08:01:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:19.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 08:01:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:20.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 08:01:20 compute-1 nova_compute[226101]: 2025-12-06 08:01:20.238 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:20 compute-1 ceph-mon[81689]: pgmap v3255: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 1023 B/s wr, 101 op/s
Dec 06 08:01:20 compute-1 nova_compute[226101]: 2025-12-06 08:01:20.603 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008065.6011639, e150c7b6-c537-4096-abbc-21b5ab35233e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:01:20 compute-1 nova_compute[226101]: 2025-12-06 08:01:20.604 226109 INFO nova.compute.manager [-] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] VM Stopped (Lifecycle Event)
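
[annotation] The "VM Stopped (Lifecycle Event)" line is nova's libvirt driver relaying a domain lifecycle event emitted after the guest was torn down. A hedged sketch subscribing to the same events directly with libvirt-python — the event-loop wiring is the standard virEventRegisterDefaultImpl pattern, and it assumes a local qemu:///system socket:

    import libvirt

    def on_lifecycle(conn, dom, event, detail, opaque):
        # event indexes the VIR_DOMAIN_EVENT_* enum (STOPPED == 5).
        print("domain %s lifecycle event=%d detail=%d"
              % (dom.name(), event, detail))

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open("qemu:///system")
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)
    while True:                      # drive the default event loop
        libvirt.virEventRunDefaultImpl()
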
Dec 06 08:01:20 compute-1 nova_compute[226101]: 2025-12-06 08:01:20.723 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:21.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:22.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:22 compute-1 ceph-mon[81689]: pgmap v3256: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 122 KiB/s rd, 1023 B/s wr, 198 op/s
Dec 06 08:01:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:23.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:24.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:24 compute-1 ceph-mon[81689]: pgmap v3257: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 0 B/s wr, 126 op/s
Dec 06 08:01:24 compute-1 nova_compute[226101]: 2025-12-06 08:01:24.927 226109 DEBUG nova.compute.manager [None req-b5d3b55d-1ffd-44fc-a314-9316ad11ea33 - - - - - -] [instance: e150c7b6-c537-4096-abbc-21b5ab35233e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:01:25 compute-1 nova_compute[226101]: 2025-12-06 08:01:25.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:25.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:25 compute-1 nova_compute[226101]: 2025-12-06 08:01:25.725 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:26.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:26 compute-1 ceph-mon[81689]: pgmap v3258: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 0 B/s wr, 131 op/s
Dec 06 08:01:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:27.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:28.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:28 compute-1 ceph-mon[81689]: pgmap v3259: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 0 B/s wr, 109 op/s
Dec 06 08:01:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
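
[annotation] The recurring _set_new_cache_sizes line is ceph-mon's memory autotuner splitting its cache budget between incremental osdmaps (inc_alloc), full osdmaps (full_alloc), and the RocksDB cache (kv_alloc); the exact policy is internal to the monitor, but the three allocations logged here nearly exhaust the ~0.95 GiB cache_size target. A quick check of the split using the logged values:

    # Numbers taken verbatim from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    parts = {"inc": 343932928, "full": 348127232, "kv": 318767104}
    print(sum(parts.values()), "of", cache_size)   # 1010827264 of 1020054731
    for name, val in parts.items():
        print(name, f"{val / cache_size:.1%}")     # inc 33.7%, full 34.1%, kv 31.3%
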
Dec 06 08:01:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:29.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:30.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:30 compute-1 nova_compute[226101]: 2025-12-06 08:01:30.280 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:30 compute-1 nova_compute[226101]: 2025-12-06 08:01:30.727 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:31.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:31 compute-1 nova_compute[226101]: 2025-12-06 08:01:31.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:31 compute-1 ceph-mon[81689]: pgmap v3260: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 0 B/s wr, 101 op/s
Dec 06 08:01:31 compute-1 nova_compute[226101]: 2025-12-06 08:01:31.999 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:31 compute-1 nova_compute[226101]: 2025-12-06 08:01:31.999 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:32 compute-1 nova_compute[226101]: 2025-12-06 08:01:31.999 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:32 compute-1 nova_compute[226101]: 2025-12-06 08:01:32.000 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:01:32 compute-1 nova_compute[226101]: 2025-12-06 08:01:32.000 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:01:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:32.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:01:32 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1832219241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:32 compute-1 nova_compute[226101]: 2025-12-06 08:01:32.916 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.916s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:01:32 compute-1 ceph-mon[81689]: pgmap v3261: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 0 B/s wr, 101 op/s
Dec 06 08:01:33 compute-1 nova_compute[226101]: 2025-12-06 08:01:33.128 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:01:33 compute-1 nova_compute[226101]: 2025-12-06 08:01:33.130 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4295MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
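
[annotation] The hypervisor resource view above embeds the host's PCI device list as JSON; vendor_id 8086 is Intel (the emulated chipset functions) and 1af4 is Red Hat virtio. A short sketch grouping that list by vendor — the JSON here is a truncated copy of the logged list, kept to three entries for brevity:

    import json
    from collections import Counter

    pci_json = '''[
      {"address": "0000:00:01.3", "vendor_id": "8086", "product_id": "7113"},
      {"address": "0000:00:03.0", "vendor_id": "1af4", "product_id": "1000"},
      {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"}
    ]'''
    devices = json.loads(pci_json)
    by_vendor = Counter(d["vendor_id"] for d in devices)
    print(by_vendor)   # Counter({'1af4': 2, '8086': 1})
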
Dec 06 08:01:33 compute-1 nova_compute[226101]: 2025-12-06 08:01:33.131 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:33 compute-1 nova_compute[226101]: 2025-12-06 08:01:33.131 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:33.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1832219241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:34.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:34 compute-1 nova_compute[226101]: 2025-12-06 08:01:34.903 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:01:34 compute-1 nova_compute[226101]: 2025-12-06 08:01:34.903 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:01:35 compute-1 nova_compute[226101]: 2025-12-06 08:01:35.075 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:01:35 compute-1 ceph-mon[81689]: pgmap v3262: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:01:35 compute-1 nova_compute[226101]: 2025-12-06 08:01:35.287 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:01:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2160125574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:35.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:35 compute-1 nova_compute[226101]: 2025-12-06 08:01:35.519 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:01:35 compute-1 nova_compute[226101]: 2025-12-06 08:01:35.524 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:01:35 compute-1 nova_compute[226101]: 2025-12-06 08:01:35.558 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:01:35 compute-1 nova_compute[226101]: 2025-12-06 08:01:35.593 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:01:35 compute-1 nova_compute[226101]: 2025-12-06 08:01:35.593 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:35 compute-1 nova_compute[226101]: 2025-12-06 08:01:35.728 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:36 compute-1 podman[299837]: 2025-12-06 08:01:36.086041323 +0000 UTC m=+0.066050879 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:01:36 compute-1 podman[299838]: 2025-12-06 08:01:36.099113833 +0000 UTC m=+0.072568854 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 08:01:36 compute-1 podman[299839]: 2025-12-06 08:01:36.120809814 +0000 UTC m=+0.098282202 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
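
[annotation] The three podman lines are periodic healthcheck transients: each container (multipathd, ovn_metadata_agent, ovn_controller) runs its configured '/openstack/healthcheck' test and reports health_status=healthy with a zero failing streak. A hedged sketch polling the same status out-of-band, assuming podman is on PATH and the caller can read container state:

    import subprocess

    def health_status(name: str) -> str:
        # Go-template query against podman's container state; the
        # .State.Health.Status field only exists for containers that
        # define a healthcheck.
        out = subprocess.check_output(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", name])
        return out.decode().strip()

    for name in ("multipathd", "ovn_metadata_agent", "ovn_controller"):
        print(name, health_status(name))
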
Dec 06 08:01:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2059535779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2160125574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/785130296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:36.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:36 compute-1 nova_compute[226101]: 2025-12-06 08:01:36.594 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:36 compute-1 nova_compute[226101]: 2025-12-06 08:01:36.595 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:01:36 compute-1 nova_compute[226101]: 2025-12-06 08:01:36.595 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:01:36 compute-1 nova_compute[226101]: 2025-12-06 08:01:36.615 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:01:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:37.113 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:01:37 compute-1 nova_compute[226101]: 2025-12-06 08:01:37.114 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:37.114 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
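
[annotation] "Delaying updating chassis table for 7 seconds" is the metadata agent spreading its SB_Global nb_cfg acknowledgement over a randomized window so many chassis do not write back to the southbound DB simultaneously; the transaction that finally bumps neutron:ovn-metadata-sb-cfg to 78 lands at 08:01:44, seven seconds later. A sketch of that jitter pattern — the 0-10 s window is an assumption for illustration, not neutron's exact constant:

    import random
    import threading

    def ack_nb_cfg(nb_cfg: int) -> None:
        print(f"writing neutron:ovn-metadata-sb-cfg={nb_cfg}")

    def delayed_ack(nb_cfg: int, max_delay: float = 10.0) -> None:
        delay = random.uniform(0, max_delay)   # spread writers over the window
        print(f"Delaying updating chassis table for {delay:.0f} seconds")
        threading.Timer(delay, ack_nb_cfg, args=(nb_cfg,)).start()

    delayed_ack(78)
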
Dec 06 08:01:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:37.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:37 compute-1 nova_compute[226101]: 2025-12-06 08:01:37.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:37 compute-1 nova_compute[226101]: 2025-12-06 08:01:37.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:01:38 compute-1 ceph-mon[81689]: pgmap v3263: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:01:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:38.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:39.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:39 compute-1 ceph-mon[81689]: pgmap v3264: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:39 compute-1 nova_compute[226101]: 2025-12-06 08:01:39.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:40.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:40 compute-1 nova_compute[226101]: 2025-12-06 08:01:40.327 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:40 compute-1 ceph-mon[81689]: pgmap v3265: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:40 compute-1 nova_compute[226101]: 2025-12-06 08:01:40.730 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:41.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:42.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:42 compute-1 ceph-mon[81689]: pgmap v3266: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:43.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1929047662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:44 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:01:44.115 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:01:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:44.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:44 compute-1 nova_compute[226101]: 2025-12-06 08:01:44.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:45 compute-1 nova_compute[226101]: 2025-12-06 08:01:45.328 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:45.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:45 compute-1 nova_compute[226101]: 2025-12-06 08:01:45.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:45 compute-1 sudo[299902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:45 compute-1 sudo[299902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:45 compute-1 sudo[299902]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:45 compute-1 nova_compute[226101]: 2025-12-06 08:01:45.731 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:45 compute-1 sudo[299927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:01:45 compute-1 sudo[299927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:45 compute-1 sudo[299927]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:45 compute-1 ceph-mon[81689]: pgmap v3267: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3042858999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:45 compute-1 sudo[299952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:45 compute-1 sudo[299952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:45 compute-1 sudo[299952]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:45 compute-1 sudo[299977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:01:45 compute-1 sudo[299977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:46.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:46 compute-1 sudo[299977]: pam_unix(sudo:session): session closed for user root
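
[annotation] The ceph-admin sudo bursts are the cephadm orchestrator probing this host over SSH: /bin/true as a reachability check, `which python3` to pick an interpreter, then the staged cephadm binary run with `gather-facts` under a timeout. A hedged sketch invoking the same subcommand directly; gather-facts emits a JSON document of host facts, but the key names below are assumptions for illustration, so .get() is used defensively:

    import json
    import subprocess

    # cephadm must be installed or staged locally; the orchestrator above
    # runs the same subcommand through sudo from the ceph-admin account.
    out = subprocess.check_output(["cephadm", "gather-facts"])
    facts = json.loads(out)
    # "hostname" / "kernel" are assumed key names -- inspect the output
    # of your cephadm build before depending on them.
    print(facts.get("hostname"), facts.get("kernel"))
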
Dec 06 08:01:47 compute-1 ceph-mon[81689]: pgmap v3268: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:01:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:01:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:01:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:01:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:01:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:01:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:47.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:47 compute-1 nova_compute[226101]: 2025-12-06 08:01:47.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:48.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:48 compute-1 nova_compute[226101]: 2025-12-06 08:01:48.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:49.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:49 compute-1 ceph-mon[81689]: pgmap v3269: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:50.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:50 compute-1 nova_compute[226101]: 2025-12-06 08:01:50.374 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:50 compute-1 sshd-session[300034]: Received disconnect from 136.112.8.45 port 54634:11: Bye Bye [preauth]
Dec 06 08:01:50 compute-1 sshd-session[300034]: Disconnected from authenticating user root 136.112.8.45 port 54634 [preauth]
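
[annotation] Unlike everything else in this window, the two sshd-session lines are an unauthenticated root login attempt from 136.112.8.45 that disconnected during preauth — background SSH scanning noise rather than cluster traffic. A small sketch counting such attempts per source address from an exported journal; the input path and plain-text export are assumptions for illustration (e.g. `journalctl > journal.txt`):

    import re
    from collections import Counter

    DISCONNECT_RE = re.compile(
        r"Disconnected from authenticating user (?P<user>\S+) "
        r"(?P<ip>[0-9.]+) port \d+ \[preauth\]")

    hits = Counter()
    with open("journal.txt") as fh:
        for line in fh:
            m = DISCONNECT_RE.search(line)
            if m:
                hits[(m.group("ip"), m.group("user"))] += 1

    for (ip, user), n in hits.most_common(10):
        print(f"{n:5d}  {ip}  user={user}")
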
Dec 06 08:01:50 compute-1 nova_compute[226101]: 2025-12-06 08:01:50.733 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:50 compute-1 ceph-mon[81689]: pgmap v3270: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:51.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:51 compute-1 nova_compute[226101]: 2025-12-06 08:01:51.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:52 compute-1 nova_compute[226101]: 2025-12-06 08:01:52.005 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:52 compute-1 nova_compute[226101]: 2025-12-06 08:01:52.006 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:52 compute-1 nova_compute[226101]: 2025-12-06 08:01:52.060 226109 DEBUG nova.compute.manager [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:01:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:52.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:52 compute-1 nova_compute[226101]: 2025-12-06 08:01:52.484 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:52 compute-1 nova_compute[226101]: 2025-12-06 08:01:52.485 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:52 compute-1 nova_compute[226101]: 2025-12-06 08:01:52.497 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:01:52 compute-1 nova_compute[226101]: 2025-12-06 08:01:52.498 226109 INFO nova.compute.claims [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Claim successful on node compute-1.ctlplane.example.com
Dec 06 08:01:53 compute-1 nova_compute[226101]: 2025-12-06 08:01:53.052 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:01:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:53.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:54 compute-1 ceph-mon[81689]: pgmap v3271: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:54.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:01:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2554890083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:55 compute-1 sudo[300056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:55 compute-1 ceph-mon[81689]: pgmap v3272: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:01:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:01:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2554890083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:55 compute-1 sudo[300056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.084 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
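[editor's note] The 2.032s `ceph df` call above is how the RBD image backend samples cluster capacity: it shells out through oslo.concurrency's processutils and parses the JSON. A hedged sketch of that call pattern — the function name and return handling are illustrative, not nova's exact code:

    import json
    from oslo_concurrency import processutils

    def ceph_df(rados_id='openstack', conf='/etc/ceph/ceph.conf'):
        # Same command line as the log: ceph df --format=json --id ... --conf ...
        out, _err = processutils.execute(
            'ceph', 'df', '--format=json', '--id', rados_id, '--conf', conf)
        return json.loads(out)  # cluster/pool byte counts feed the disk inventory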
Dec 06 08:01:55 compute-1 sudo[300056]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.089 226109 DEBUG nova.compute.provider_tree [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.108 226109 DEBUG nova.scheduler.client.report [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
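[editor's note] Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class. Worked out for the values logged above:

    # VCPU:      (8    - 0)   * 4.0 = 32 schedulable vCPUs
    # MEMORY_MB: (7680 - 512) * 1.0 = 7168 MB
    # DISK_GB:   (20   - 1)   * 0.9 = 17 GB (17.1 truncated)
    for rc, (total, reserved, ratio) in {
            'VCPU': (8, 0, 4.0),
            'MEMORY_MB': (7680, 512, 1.0),
            'DISK_GB': (20, 1, 0.9)}.items():
        print(rc, int((total - reserved) * ratio))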
Dec 06 08:01:55 compute-1 sudo[300083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:01:55 compute-1 sudo[300083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:55 compute-1 sudo[300083]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.152 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.152 226109 DEBUG nova.compute.manager [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.204 226109 DEBUG nova.compute.manager [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.204 226109 DEBUG nova.network.neutron [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.284 226109 INFO nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.302 226109 DEBUG nova.compute.manager [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.375 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:55.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.562 226109 DEBUG nova.compute.manager [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.563 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.563 226109 INFO nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Creating image(s)
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.591 226109 DEBUG nova.storage.rbd_utils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.624 226109 DEBUG nova.storage.rbd_utils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.654 226109 DEBUG nova.storage.rbd_utils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.658 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.721 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
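[editor's note] Note the guard rails on that qemu-img run: it executes under oslo_concurrency.prlimit with a 1 GiB address-space cap (--as=1073741824) and a 30 s CPU cap (--cpu=30), so probing an untrusted image cannot exhaust the host. A sketch of issuing the same bounded call via processutils — the wrapper name is illustrative, and the LC_ALL=C environment from the log is omitted for brevity:

    import json
    from oslo_concurrency import processutils

    def qemu_img_info(path):
        out, _err = processutils.execute(
            'qemu-img', 'info', path, '--force-share', '--output=json',
            prlimit=processutils.ProcessLimits(address_space=1024 ** 3,  # --as
                                               cpu_time=30))             # --cpu
        return json.loads(out)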
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.722 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.723 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.723 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.752 226109 DEBUG nova.storage.rbd_utils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.755 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:01:55 compute-1 nova_compute[226101]: 2025-12-06 08:01:55.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:56 compute-1 nova_compute[226101]: 2025-12-06 08:01:56.166 226109 DEBUG nova.policy [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed2d17026504d70b893923a85cece4d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:01:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:56.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:57.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:57 compute-1 nova_compute[226101]: 2025-12-06 08:01:57.948 226109 DEBUG nova.network.neutron [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Successfully created port: 7be900e8-79cd-473a-8f1d-df5029d9e773 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:01:58 compute-1 sshd-session[300199]: Received disconnect from 154.219.116.39 port 60062:11: Bye Bye [preauth]
Dec 06 08:01:58 compute-1 sshd-session[300199]: Disconnected from authenticating user root 154.219.116.39 port 60062 [preauth]
Dec 06 08:01:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:58.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:59 compute-1 ceph-mon[81689]: pgmap v3273: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:59 compute-1 nova_compute[226101]: 2025-12-06 08:01:59.267 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:01:59 compute-1 nova_compute[226101]: 2025-12-06 08:01:59.344 226109 DEBUG nova.storage.rbd_utils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] resizing rbd image a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
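[editor's note] Taken together with the earlier "rbd image ... does not exist" probes, the sequence is: the cached base file is imported into the vms pool (the 3.511s CMD above), then the image is grown to the flavor's root disk, root_gb=1, i.e. 1073741824 bytes. A condensed sketch of that flow, assuming the python-rados/python-rbd bindings are available — nova's rbd_utils does the resize through the binding; the literals are copied from the log and error handling is omitted:

    import rados
    import rbd
    from oslo_concurrency import processutils

    BASE = '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef'
    DISK = 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk'

    # Step 1: import the flat base image as a format-2 RBD image (CLI, as logged).
    processutils.execute('rbd', 'import', '--pool', 'vms', BASE, DISK,
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')

    # Step 2: resize to the flavor's 1 GiB root disk (binding, as logged).
    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, DISK) as image:
                image.resize(1 * 1024 ** 3)  # matches "resizing ... to 1073741824"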
Dec 06 08:01:59 compute-1 nova_compute[226101]: 2025-12-06 08:01:59.429 226109 DEBUG nova.objects.instance [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:01:59 compute-1 nova_compute[226101]: 2025-12-06 08:01:59.484 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:01:59 compute-1 nova_compute[226101]: 2025-12-06 08:01:59.485 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Ensure instance console log exists: /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:01:59 compute-1 nova_compute[226101]: 2025-12-06 08:01:59.485 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:59 compute-1 nova_compute[226101]: 2025-12-06 08:01:59.485 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:59 compute-1 nova_compute[226101]: 2025-12-06 08:01:59.486 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:01:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:59.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:59 compute-1 ceph-mon[81689]: pgmap v3274: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:02:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:00.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:00 compute-1 nova_compute[226101]: 2025-12-06 08:02:00.378 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:00 compute-1 nova_compute[226101]: 2025-12-06 08:02:00.785 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:01 compute-1 nova_compute[226101]: 2025-12-06 08:02:01.267 226109 DEBUG nova.network.neutron [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Successfully updated port: 7be900e8-79cd-473a-8f1d-df5029d9e773 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:02:01 compute-1 nova_compute[226101]: 2025-12-06 08:02:01.284 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:02:01 compute-1 nova_compute[226101]: 2025-12-06 08:02:01.285 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:02:01 compute-1 nova_compute[226101]: 2025-12-06 08:02:01.285 226109 DEBUG nova.network.neutron [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:02:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:01.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:01.681 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:01.682 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:01.682 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:01 compute-1 nova_compute[226101]: 2025-12-06 08:02:01.685 226109 DEBUG nova.compute.manager [req-25e775ac-3d19-4f14-b4ad-ddd896279112 req-9343858b-4268-4d30-9c0b-9849c0fa7524 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:01 compute-1 nova_compute[226101]: 2025-12-06 08:02:01.686 226109 DEBUG nova.compute.manager [req-25e775ac-3d19-4f14-b4ad-ddd896279112 req-9343858b-4268-4d30-9c0b-9849c0fa7524 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing instance network info cache due to event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:02:01 compute-1 nova_compute[226101]: 2025-12-06 08:02:01.686 226109 DEBUG oslo_concurrency.lockutils [req-25e775ac-3d19-4f14-b4ad-ddd896279112 req-9343858b-4268-4d30-9c0b-9849c0fa7524 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:02:01 compute-1 ceph-mon[81689]: pgmap v3275: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:02:02 compute-1 nova_compute[226101]: 2025-12-06 08:02:02.160 226109 DEBUG nova.network.neutron [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:02:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:02.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:03 compute-1 ceph-mon[81689]: pgmap v3276: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:03.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.280 226109 DEBUG nova.network.neutron [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:02:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:04.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.340 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.340 226109 DEBUG nova.compute.manager [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance network_info: |[{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.341 226109 DEBUG oslo_concurrency.lockutils [req-25e775ac-3d19-4f14-b4ad-ddd896279112 req-9343858b-4268-4d30-9c0b-9849c0fa7524 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.341 226109 DEBUG nova.network.neutron [req-25e775ac-3d19-4f14-b4ad-ddd896279112 req-9343858b-4268-4d30-9c0b-9849c0fa7524 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.343 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Start _get_guest_xml network_info=[{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.347 226109 WARNING nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.396 226109 DEBUG nova.virt.libvirt.host [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.397 226109 DEBUG nova.virt.libvirt.host [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.411 226109 DEBUG nova.virt.libvirt.host [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.413 226109 DEBUG nova.virt.libvirt.host [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.416 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.417 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.418 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.418 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.419 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.420 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.420 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.421 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.421 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.422 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.422 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.423 226109 DEBUG nova.virt.hardware [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
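[editor's note] The topology lines above are nova.virt.hardware searching for (sockets, cores, threads) factorizations of the vCPU count under the 65536 per-axis maxima; with one vCPU and no flavor/image preferences, 1:1:1 is the only candidate. A small sketch of that enumeration idea — it mirrors the logged behaviour, not nova's exact code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield every (sockets, cores, threads) whose product is exactly vcpus.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- "Got 1 possible topologies"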
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.427 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:02:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3338388719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.838 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.871 226109 DEBUG nova.storage.rbd_utils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:02:04 compute-1 nova_compute[226101]: 2025-12-06 08:02:04.875 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:02:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/513943934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.280 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
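[editor's note] Those two `ceph mon dump` calls are how nova discovers the monitor addresses that reappear as the <host> elements of the RBD <source> stanzas in the guest XML below. A rough sketch of the parsing step — the exact JSON layout varies across Ceph releases, so treat the field names as an assumption:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute('ceph', 'mon', 'dump', '--format=json',
                                     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)
    # 'public_addr' is typically "IP:PORT/NONCE"; keep only the IPv4 part.
    hosts = [m['public_addr'].split(':', 1)[0] for m in mons['mons']]
    # On this cluster: ['192.168.122.100', '192.168.122.102', '192.168.122.101']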
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.283 226109 DEBUG nova.virt.libvirt.vif [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:01:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1312966031',display_name='tempest-TestNetworkAdvancedServerOps-server-1312966031',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1312966031',id=184,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJW2ZsdJB9rXQtXWcXFfgPIxPNFvSSCKBWIMfh01jJc1P8HsIFfpY6rfx3BE/xpkRnGjfmas+KX3ri+dmiYMlm6kvXvL38+thL7RipAL0y0ulvcYl+qu5q/CwooSB+nYg==',key_name='tempest-TestNetworkAdvancedServerOps-1347378077',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-9vk08kgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:01:55Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=a00396fd-1a78-4cad-9c38-7b0905ab5b9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.284 226109 DEBUG nova.network.os_vif_util [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.285 226109 DEBUG nova.network.os_vif_util [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.287 226109 DEBUG nova.objects.instance [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.381 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.388 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:02:05 compute-1 nova_compute[226101]:   <uuid>a00396fd-1a78-4cad-9c38-7b0905ab5b9f</uuid>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   <name>instance-000000b8</name>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1312966031</nova:name>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:02:04</nova:creationTime>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <nova:port uuid="7be900e8-79cd-473a-8f1d-df5029d9e773">
Dec 06 08:02:05 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <system>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <entry name="serial">a00396fd-1a78-4cad-9c38-7b0905ab5b9f</entry>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <entry name="uuid">a00396fd-1a78-4cad-9c38-7b0905ab5b9f</entry>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     </system>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   <os>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   </os>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   <features>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   </features>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk">
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       </source>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk.config">
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       </source>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:02:05 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:eb:70:ef"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <target dev="tap7be900e8-79"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/console.log" append="off"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <video>
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     </video>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:02:05 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:02:05 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:02:05 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:02:05 compute-1 nova_compute[226101]: </domain>
Dec 06 08:02:05 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.390 226109 DEBUG nova.compute.manager [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Preparing to wait for external event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.390 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.391 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.392 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.393 226109 DEBUG nova.virt.libvirt.vif [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:01:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1312966031',display_name='tempest-TestNetworkAdvancedServerOps-server-1312966031',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1312966031',id=184,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJW2ZsdJB9rXQtXWcXFfgPIxPNFvSSCKBWIMfh01jJc1P8HsIFfpY6rfx3BE/xpkRnGjfmas+KX3ri+dmiYMlm6kvXvL38+thL7RipAL0y0ulvcYl+qu5q/CwooSB+nYg==',key_name='tempest-TestNetworkAdvancedServerOps-1347378077',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-9vk08kgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:01:55Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=a00396fd-1a78-4cad-9c38-7b0905ab5b9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.394 226109 DEBUG nova.network.os_vif_util [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.395 226109 DEBUG nova.network.os_vif_util [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.396 226109 DEBUG os_vif [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.397 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.398 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.398 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.403 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.403 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7be900e8-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.404 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7be900e8-79, col_values=(('external_ids', {'iface-id': '7be900e8-79cd-473a-8f1d-df5029d9e773', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:70:ef', 'vm-uuid': 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.406 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:05 compute-1 NetworkManager[49031]: <info>  [1765008125.4076] manager: (tap7be900e8-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.409 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.417 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:05 compute-1 nova_compute[226101]: 2025-12-06 08:02:05.418 226109 INFO os_vif [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79')
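The AddBridgeCommand/AddPortCommand/DbSetCommand transactions logged above are issued by os-vif through ovsdbapp. A minimal sketch of the same port plug, assuming a local OVSDB unix socket (os-vif actually reads its own ovsdb_connection setting and manages the connection itself):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed socket path for a local Open vSwitch database.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Mirrors the AddPortCommand and DbSetCommand in the log above.
        txn.add(api.add_port('br-int', 'tap7be900e8-79', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap7be900e8-79',
            ('external_ids', {
                'iface-id': '7be900e8-79cd-473a-8f1d-df5029d9e773',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:eb:70:ef',
                'vm-uuid': 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f'})))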
Dec 06 08:02:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:05.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:06.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:06 compute-1 nova_compute[226101]: 2025-12-06 08:02:06.456 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:02:06 compute-1 nova_compute[226101]: 2025-12-06 08:02:06.457 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:02:06 compute-1 nova_compute[226101]: 2025-12-06 08:02:06.457 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No VIF found with MAC fa:16:3e:eb:70:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:02:06 compute-1 nova_compute[226101]: 2025-12-06 08:02:06.458 226109 INFO nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Using config drive
Dec 06 08:02:06 compute-1 nova_compute[226101]: 2025-12-06 08:02:06.484 226109 DEBUG nova.storage.rbd_utils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:02:06 compute-1 ceph-mon[81689]: pgmap v3277: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3338388719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:07 compute-1 podman[300359]: 2025-12-06 08:02:07.085291496 +0000 UTC m=+0.060046868 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 08:02:07 compute-1 podman[300358]: 2025-12-06 08:02:07.089263273 +0000 UTC m=+0.062766562 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 08:02:07 compute-1 podman[300360]: 2025-12-06 08:02:07.114229791 +0000 UTC m=+0.088923851 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 08:02:07 compute-1 nova_compute[226101]: 2025-12-06 08:02:07.306 226109 INFO nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Creating config drive at /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/disk.config
Dec 06 08:02:07 compute-1 nova_compute[226101]: 2025-12-06 08:02:07.312 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnkpfafb_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:07 compute-1 nova_compute[226101]: 2025-12-06 08:02:07.449 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnkpfafb_" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:02:07 compute-1 nova_compute[226101]: 2025-12-06 08:02:07.478 226109 DEBUG nova.storage.rbd_utils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:02:07 compute-1 nova_compute[226101]: 2025-12-06 08:02:07.483 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/disk.config a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:07.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:07 compute-1 nova_compute[226101]: 2025-12-06 08:02:07.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/513943934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:07 compute-1 ceph-mon[81689]: pgmap v3278: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.190 226109 DEBUG nova.network.neutron [req-25e775ac-3d19-4f14-b4ad-ddd896279112 req-9343858b-4268-4d30-9c0b-9849c0fa7524 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updated VIF entry in instance network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.191 226109 DEBUG nova.network.neutron [req-25e775ac-3d19-4f14-b4ad-ddd896279112 req-9343858b-4268-4d30-9c0b-9849c0fa7524 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.217 226109 DEBUG oslo_concurrency.processutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/disk.config a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.734s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.218 226109 INFO nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Deleting local config drive /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/disk.config because it was imported into RBD.
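Config drive assembly here is two subprocess calls, both visible verbatim above: mkisofs builds the ISO9660 image locally, then `rbd import` copies it into the Ceph vms pool so the local file can be deleted. A stripped-down replay of those two commands (the metadata source directory and publisher string are illustrative; Nova uses a temp dir and its full version string):

    import subprocess

    instance = 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f'
    iso = f'/var/lib/nova/instances/{instance}/disk.config'

    # Build the config drive ISO (volume label config-2, Joliet + Rock Ridge).
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-publisher', 'OpenStack Compute',
                    '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/metadata_dir'],
                   check=True)

    # Import it as an RBD image so the guest attaches it from Ceph, not disk.
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{instance}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)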
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.219 226109 DEBUG oslo_concurrency.lockutils [req-25e775ac-3d19-4f14-b4ad-ddd896279112 req-9343858b-4268-4d30-9c0b-9849c0fa7524 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:02:08 compute-1 kernel: tap7be900e8-79: entered promiscuous mode
Dec 06 08:02:08 compute-1 NetworkManager[49031]: <info>  [1765008128.2686] manager: (tap7be900e8-79): new Tun device (/org/freedesktop/NetworkManager/Devices/336)
Dec 06 08:02:08 compute-1 ovn_controller[130279]: 2025-12-06T08:02:08Z|00746|binding|INFO|Claiming lport 7be900e8-79cd-473a-8f1d-df5029d9e773 for this chassis.
Dec 06 08:02:08 compute-1 ovn_controller[130279]: 2025-12-06T08:02:08Z|00747|binding|INFO|7be900e8-79cd-473a-8f1d-df5029d9e773: Claiming fa:16:3e:eb:70:ef 10.100.0.4
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.269 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.286 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:70:ef 10.100.0.4'], port_security=['fa:16:3e:eb:70:ef 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89003807-49b2-48b6-9510-52c4e2235abf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64cba72f-9df1-4bbf-91d5-2ef412c84dfa, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=7be900e8-79cd-473a-8f1d-df5029d9e773) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.288 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 7be900e8-79cd-473a-8f1d-df5029d9e773 in datapath 26d75c28-bf40-4c60-9e29-1a7b2fb696a0 bound to our chassis
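The PortBindingUpdatedEvent match above is ovsdbapp's row-event mechanism: the agent registers a handler on the southbound Port_Binding table and reacts when a port gains a chassis. A bare-bones handler of that shape (class name and body are illustrative, not neutron's actual implementation) might look like:

    from ovsdbapp import event as row_event

    class PortBoundEvent(row_event.RowEvent):
        """Fire when a Port_Binding row gets bound to a chassis."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the event signature printed in the log above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # 'old' carries only the changed columns; a chassis column going
            # from empty to set means the port was just claimed.
            return hasattr(old, 'chassis') and bool(row.chassis)

        def run(self, event, row, old):
            print(f'port {row.logical_port} bound; provision metadata namespace')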
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.289 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26d75c28-bf40-4c60-9e29-1a7b2fb696a0
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.298 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[49102768-3c75-4c98-8ed1-1b455ba94907]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.299 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26d75c28-b1 in ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.302 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26d75c28-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.302 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0e2b0e8c-c4bd-4cb8-9eba-a73b755654ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.304 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c1e70b-72d8-4630-980c-10af99e5610b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 systemd-udevd[300471]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:02:08 compute-1 systemd-machined[190302]: New machine qemu-86-instance-000000b8.
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.316 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[258eeefb-0c7d-483f-96d3-8313bbf0cb36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 NetworkManager[49031]: <info>  [1765008128.3204] device (tap7be900e8-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:02:08 compute-1 NetworkManager[49031]: <info>  [1765008128.3211] device (tap7be900e8-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:02:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:08.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:08 compute-1 systemd[1]: Started Virtual Machine qemu-86-instance-000000b8.
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.349 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f0983fe1-026f-4947-afb4-926f8d849057]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.350 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:08 compute-1 ovn_controller[130279]: 2025-12-06T08:02:08Z|00748|binding|INFO|Setting lport 7be900e8-79cd-473a-8f1d-df5029d9e773 ovn-installed in OVS
Dec 06 08:02:08 compute-1 ovn_controller[130279]: 2025-12-06T08:02:08Z|00749|binding|INFO|Setting lport 7be900e8-79cd-473a-8f1d-df5029d9e773 up in Southbound
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.360 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.387 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d98650af-eb71-4b9d-b4fd-e8247665d2f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 NetworkManager[49031]: <info>  [1765008128.3942] manager: (tap26d75c28-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/337)
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.392 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[923eb9e4-bb4b-4645-b7b9-5bd1c9fc4bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.423 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2efecda4-7271-4d5a-8a8c-39af63d720ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.426 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[22793467-de44-41fa-b153-30dea19fa6f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 NetworkManager[49031]: <info>  [1765008128.4495] device (tap26d75c28-b0): carrier: link connected
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.455 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[12ffc2ba-1048-48b0-ad12-33b32aaa7d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.471 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a553af-ead1-4d7c-b24d-e9bad3bdc0ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d75c28-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:c0:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840795, 'reachable_time': 41458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300504, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.486 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd7148f-cf1a-4ec8-a352-be227b02c7f4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:c04e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840795, 'tstamp': 840795}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300505, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.502 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a3bc3c46-c0a2-444d-89f1-e4f2ad90bdbc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d75c28-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:c0:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840795, 'reachable_time': 41458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300506, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.531 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8033fe-8269-47c7-b783-a64ac8f494a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.576 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[35e378be-2b3b-4147-bf3c-349a068c561f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.577 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d75c28-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.578 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.578 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26d75c28-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.579 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:08 compute-1 kernel: tap26d75c28-b0: entered promiscuous mode
Dec 06 08:02:08 compute-1 NetworkManager[49031]: <info>  [1765008128.5813] manager: (tap26d75c28-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.581 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.581 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26d75c28-b0, col_values=(('external_ids', {'iface-id': '3e000649-26bb-4d0b-8171-d46f3570081b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.585 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.586 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.586 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbcbe0a-c732-478f-9a6f-d37c4889e282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.588 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-26d75c28-bf40-4c60-9e29-1a7b2fb696a0
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.pid.haproxy
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 26d75c28-bf40-4c60-9e29-1a7b2fb696a0
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
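The agent logs the full rendered haproxy config before create_config_file writes it out. A small sketch (a hypothetical helper, not part of neutron) that pulls the key directives back out of such a dump when checking what was generated:

```python
# Extract single-value directives from a logged haproxy config dump.
def haproxy_directives(cfg_text):
    wanted = ("pidfile", "bind", "log-tag", "maxconn")
    found = {}
    for line in cfg_text.splitlines():
        parts = line.split()
        if parts and parts[0] in wanted:
            found[parts[0]] = " ".join(parts[1:])
    return found

cfg = """
global
    maxconn     1024
    pidfile     /var/lib/neutron/external/pids/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.pid.haproxy

listen listener
    bind 169.254.169.254:80
"""
print(haproxy_directives(cfg))
# {'maxconn': '1024', 'pidfile': '/var/lib/neutron/external/pids/...', 'bind': '169.254.169.254:80'}
```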
Dec 06 08:02:08 compute-1 ovn_controller[130279]: 2025-12-06T08:02:08Z|00750|binding|INFO|Releasing lport 3e000649-26bb-4d0b-8171-d46f3570081b from this chassis (sb_readonly=0)
Dec 06 08:02:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:08.590 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'env', 'PROCESS_TAG=haproxy-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
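The create_process line shows exactly how the proxy is launched: rootwrap runs haproxy inside the ovnmeta-&lt;network&gt; namespace with a PROCESS_TAG marker in its environment. Reproducing the same invocation by hand (the tokens are copied from the log line; running it requires the same rootwrap filters to be installed):

```python
# Rebuild and run the spawn command logged above.
import subprocess

network_id = "26d75c28-bf40-4c60-9e29-1a7b2fb696a0"
cmd = [
    "sudo", "neutron-rootwrap", "/etc/neutron/rootwrap.conf",
    "ip", "netns", "exec", f"ovnmeta-{network_id}",
    "env", f"PROCESS_TAG=haproxy-{network_id}",
    "haproxy", "-f",
    f"/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf",
]
subprocess.run(cmd, check=True)
```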
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.606 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:08 compute-1 podman[300575]: 2025-12-06 08:02:08.950303768 +0000 UTC m=+0.064604120 container create 23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.974 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008128.9741685, a00396fd-1a78-4cad-9c38-7b0905ab5b9f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:02:08 compute-1 nova_compute[226101]: 2025-12-06 08:02:08.975 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] VM Started (Lifecycle Event)
Dec 06 08:02:09 compute-1 systemd[1]: Started libpod-conmon-23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a.scope.
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.003 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:02:09 compute-1 podman[300575]: 2025-12-06 08:02:08.914751717 +0000 UTC m=+0.029052159 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.007 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008128.9743385, a00396fd-1a78-4cad-9c38-7b0905ab5b9f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.007 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] VM Paused (Lifecycle Event)
Dec 06 08:02:09 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:02:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298576a2a325dd8c7226d226d341356f1b5ea838dedab26775a02f4b590e568e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.049 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.051 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:02:09 compute-1 podman[300575]: 2025-12-06 08:02:09.056816261 +0000 UTC m=+0.171116653 container init 23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 08:02:09 compute-1 podman[300575]: 2025-12-06 08:02:09.062333499 +0000 UTC m=+0.176633861 container start 23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:02:09 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[300592]: [NOTICE]   (300596) : New worker (300598) forked
Dec 06 08:02:09 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[300592]: [NOTICE]   (300596) : Loading success.
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.085 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] During sync_power_state the instance has a pending task (spawning). Skip.
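The power-state sync above compares the DB value 0 against the hypervisor value 3, and the later "Resumed" sync sees 1. These integers are nova's power-state constants; the mapping below is quoted from memory of nova/compute/power_state.py, so treat it as an assumption:

```python
# Decode the power_state integers seen in the sync_power_state log lines.
POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

for label, value in (("DB power_state", 0), ("VM power_state", 3)):
    print(f"{label} = {value} ({POWER_STATES[value]})")
# The later "Resumed" sync reports VM power_state: 1, i.e. RUNNING.
```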
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.249 226109 DEBUG nova.compute.manager [req-7c36616d-812c-461d-8e6e-28dc2f8ee594 req-9c94a53f-4751-44b7-ae69-0ee9f58914d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.250 226109 DEBUG oslo_concurrency.lockutils [req-7c36616d-812c-461d-8e6e-28dc2f8ee594 req-9c94a53f-4751-44b7-ae69-0ee9f58914d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.250 226109 DEBUG oslo_concurrency.lockutils [req-7c36616d-812c-461d-8e6e-28dc2f8ee594 req-9c94a53f-4751-44b7-ae69-0ee9f58914d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.251 226109 DEBUG oslo_concurrency.lockutils [req-7c36616d-812c-461d-8e6e-28dc2f8ee594 req-9c94a53f-4751-44b7-ae69-0ee9f58914d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.251 226109 DEBUG nova.compute.manager [req-7c36616d-812c-461d-8e6e-28dc2f8ee594 req-9c94a53f-4751-44b7-ae69-0ee9f58914d6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Processing event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.251 226109 DEBUG nova.compute.manager [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
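The lockutils lines show nova serializing external events per instance through a lock named "&lt;uuid&gt;-events", acquired and released around pop_instance_event. The same pattern reduced to a sketch with oslo.concurrency (illustrative only, not nova's actual code):

```python
# Guard a per-instance event list the way the logged lock names suggest.
from oslo_concurrency import lockutils

instance_uuid = "a00396fd-1a78-4cad-9c38-7b0905ab5b9f"

with lockutils.lock(f"{instance_uuid}-events"):
    # pop the waiter registered for network-vif-plugged-<port-id>
    # and signal it; the lock is held only for this brief window,
    # matching the "waited 0.000s ... held 0.000s" pairs above.
    pass
```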
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.257 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008129.2563813, a00396fd-1a78-4cad-9c38-7b0905ab5b9f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.257 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] VM Resumed (Lifecycle Event)
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.260 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.263 226109 INFO nova.virt.libvirt.driver [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance spawned successfully.
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.264 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.314 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.374 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.379 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.380 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.381 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.382 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.383 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.384 226109 DEBUG nova.virt.libvirt.driver [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.456 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.504 226109 INFO nova.compute.manager [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Took 13.94 seconds to spawn the instance on the hypervisor.
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.505 226109 DEBUG nova.compute.manager [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:02:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:09.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
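The beast lines radosgw emits every second or two are load-balancer health checks (anonymous HEAD / from the 192.168.122.100 and .102 frontends). Their access-log layout is regular enough to parse; the pattern below is derived from the samples in this journal rather than from radosgw documentation:

```python
# Hedged parser for the radosgw "beast" access-log lines in this journal.
import re

BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous '
        '[06/Dec/2025:08:02:09.549 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')
m = BEAST.match(line)
print(m.group("ip"), m.group("status"), m.group("latency"))
# 192.168.122.100 200 0.000000000
```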
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.599 226109 INFO nova.compute.manager [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Took 17.30 seconds to build instance.
Dec 06 08:02:09 compute-1 nova_compute[226101]: 2025-12-06 08:02:09.668 226109 DEBUG oslo_concurrency.lockutils [None req-94ec1a03-0765-4068-8dce-1e48f1931abc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:09 compute-1 ceph-mon[81689]: pgmap v3279: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2644919661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:02:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2644919661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:02:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
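The recurring ceph-mon pgmap summaries carry the cluster's PG states, usage, and throughput in a single line. A parser for that format, again inferred from the log lines themselves:

```python
# Parse the one-line pgmap summaries that ceph-mon logs above.
import re

PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

line = ("pgmap v3279: 305 pgs: 305 active+clean; 167 MiB data, "
        "1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s")
m = PGMAP.match(line)
print(m.group("pgs"), m.group("states").strip(), m.group("avail"))
# 305 305 active+clean 20 GiB
```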
Dec 06 08:02:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:10.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:10 compute-1 nova_compute[226101]: 2025-12-06 08:02:10.430 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:10 compute-1 ceph-mon[81689]: pgmap v3280: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:11 compute-1 nova_compute[226101]: 2025-12-06 08:02:11.515 226109 DEBUG nova.compute.manager [req-7efeb21e-b694-4d04-a0c7-0ce80ed6c4b0 req-96f575f9-c8d3-48a2-aaa5-850f5dafd58a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:11 compute-1 nova_compute[226101]: 2025-12-06 08:02:11.516 226109 DEBUG oslo_concurrency.lockutils [req-7efeb21e-b694-4d04-a0c7-0ce80ed6c4b0 req-96f575f9-c8d3-48a2-aaa5-850f5dafd58a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:11 compute-1 nova_compute[226101]: 2025-12-06 08:02:11.516 226109 DEBUG oslo_concurrency.lockutils [req-7efeb21e-b694-4d04-a0c7-0ce80ed6c4b0 req-96f575f9-c8d3-48a2-aaa5-850f5dafd58a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:11 compute-1 nova_compute[226101]: 2025-12-06 08:02:11.516 226109 DEBUG oslo_concurrency.lockutils [req-7efeb21e-b694-4d04-a0c7-0ce80ed6c4b0 req-96f575f9-c8d3-48a2-aaa5-850f5dafd58a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:11 compute-1 nova_compute[226101]: 2025-12-06 08:02:11.516 226109 DEBUG nova.compute.manager [req-7efeb21e-b694-4d04-a0c7-0ce80ed6c4b0 req-96f575f9-c8d3-48a2-aaa5-850f5dafd58a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:02:11 compute-1 nova_compute[226101]: 2025-12-06 08:02:11.516 226109 WARNING nova.compute.manager [req-7efeb21e-b694-4d04-a0c7-0ce80ed6c4b0 req-96f575f9-c8d3-48a2-aaa5-850f5dafd58a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state active and task_state None.
Dec 06 08:02:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:11.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:02:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:12.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:02:12 compute-1 ceph-mon[81689]: pgmap v3281: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 785 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 06 08:02:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:13.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:14.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:14 compute-1 ceph-mon[81689]: pgmap v3282: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 768 KiB/s rd, 12 KiB/s wr, 35 op/s
Dec 06 08:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:15.037 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:02:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:15.038 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
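On each SB_Global nb_cfg bump the metadata agent waits a few seconds before writing its chassis row back, presumably so that every chassis does not hit the southbound DB at once; here the draw was 6 seconds, and the matching Chassis_Private write appears at 08:02:21. A toy version of that delayed write-back pattern:

```python
# Simplified sketch of the "delay, then update chassis" behaviour above;
# the delay bound of 10 s is an assumption for illustration.
import random
import threading

def update_chassis():
    print("writing neutron:ovn-metadata-sb-cfg back to Chassis_Private")

delay = random.randint(0, 10)   # the log shows a 6 s draw
threading.Timer(delay, update_chassis).start()
```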
Dec 06 08:02:15 compute-1 nova_compute[226101]: 2025-12-06 08:02:15.037 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:15 compute-1 nova_compute[226101]: 2025-12-06 08:02:15.432 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:15.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:15 compute-1 NetworkManager[49031]: <info>  [1765008135.8445] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Dec 06 08:02:15 compute-1 NetworkManager[49031]: <info>  [1765008135.8453] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Dec 06 08:02:15 compute-1 nova_compute[226101]: 2025-12-06 08:02:15.843 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:15 compute-1 nova_compute[226101]: 2025-12-06 08:02:15.916 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:15 compute-1 ovn_controller[130279]: 2025-12-06T08:02:15Z|00751|binding|INFO|Releasing lport 3e000649-26bb-4d0b-8171-d46f3570081b from this chassis (sb_readonly=0)
Dec 06 08:02:15 compute-1 nova_compute[226101]: 2025-12-06 08:02:15.926 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:16.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:16 compute-1 nova_compute[226101]: 2025-12-06 08:02:16.412 226109 DEBUG nova.compute.manager [req-6401c511-38fe-43db-b979-ecea8f2e260c req-9fe0c102-0652-4630-b0fc-a35be68a5d34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:16 compute-1 nova_compute[226101]: 2025-12-06 08:02:16.412 226109 DEBUG nova.compute.manager [req-6401c511-38fe-43db-b979-ecea8f2e260c req-9fe0c102-0652-4630-b0fc-a35be68a5d34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing instance network info cache due to event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:02:16 compute-1 nova_compute[226101]: 2025-12-06 08:02:16.412 226109 DEBUG oslo_concurrency.lockutils [req-6401c511-38fe-43db-b979-ecea8f2e260c req-9fe0c102-0652-4630-b0fc-a35be68a5d34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:02:16 compute-1 nova_compute[226101]: 2025-12-06 08:02:16.413 226109 DEBUG oslo_concurrency.lockutils [req-6401c511-38fe-43db-b979-ecea8f2e260c req-9fe0c102-0652-4630-b0fc-a35be68a5d34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:02:16 compute-1 nova_compute[226101]: 2025-12-06 08:02:16.413 226109 DEBUG nova.network.neutron [req-6401c511-38fe-43db-b979-ecea8f2e260c req-9fe0c102-0652-4630-b0fc-a35be68a5d34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:02:16 compute-1 ceph-mon[81689]: pgmap v3283: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:02:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1528343113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:17.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:18 compute-1 nova_compute[226101]: 2025-12-06 08:02:18.290 226109 DEBUG nova.network.neutron [req-6401c511-38fe-43db-b979-ecea8f2e260c req-9fe0c102-0652-4630-b0fc-a35be68a5d34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updated VIF entry in instance network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:02:18 compute-1 nova_compute[226101]: 2025-12-06 08:02:18.290 226109 DEBUG nova.network.neutron [req-6401c511-38fe-43db-b979-ecea8f2e260c req-9fe0c102-0652-4630-b0fc-a35be68a5d34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:02:18 compute-1 nova_compute[226101]: 2025-12-06 08:02:18.326 226109 DEBUG oslo_concurrency.lockutils [req-6401c511-38fe-43db-b979-ecea8f2e260c req-9fe0c102-0652-4630-b0fc-a35be68a5d34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
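The instance_info_cache update above embeds the whole VIF model as JSON-like data: port 7be900e8 with fixed IP 10.100.0.4 and floating IP 192.168.122.245. Walking that structure (trimmed here to the fields used) recovers the addresses:

```python
# Trimmed copy of the network_info nova logged above, plus a walk over it.
network_info = [{
    "id": "7be900e8-79cd-473a-8f1d-df5029d9e773",
    "address": "fa:16:3e:eb:70:ef",
    "network": {"subnets": [{
        "cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.4", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.245",
                                   "type": "floating"}]}],
    }]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], "->", floats)
# 7be900e8-79cd-473a-8f1d-df5029d9e773 10.100.0.4 -> ['192.168.122.245']
```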
Dec 06 08:02:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.003000078s ======
Dec 06 08:02:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:18.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Dec 06 08:02:18 compute-1 ceph-mon[81689]: pgmap v3284: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:02:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:19.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:20.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:20 compute-1 nova_compute[226101]: 2025-12-06 08:02:20.434 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:21 compute-1 ceph-mon[81689]: pgmap v3285: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:02:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:21.040 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:21.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:21 compute-1 ovn_controller[130279]: 2025-12-06T08:02:21Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:70:ef 10.100.0.4
Dec 06 08:02:21 compute-1 ovn_controller[130279]: 2025-12-06T08:02:21Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:70:ef 10.100.0.4
Dec 06 08:02:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:22.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:23 compute-1 ceph-mon[81689]: pgmap v3286: 305 pgs: 305 active+clean; 219 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec 06 08:02:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:23.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2139978182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:24.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:25 compute-1 ceph-mon[81689]: pgmap v3287: 305 pgs: 305 active+clean; 219 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 72 op/s
Dec 06 08:02:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3856105960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:25 compute-1 nova_compute[226101]: 2025-12-06 08:02:25.437 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:02:25 compute-1 nova_compute[226101]: 2025-12-06 08:02:25.439 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:02:25 compute-1 nova_compute[226101]: 2025-12-06 08:02:25.439 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 08:02:25 compute-1 nova_compute[226101]: 2025-12-06 08:02:25.439 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 08:02:25 compute-1 nova_compute[226101]: 2025-12-06 08:02:25.475 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:25 compute-1 nova_compute[226101]: 2025-12-06 08:02:25.476 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
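The four vlog lines above are one full inactivity-probe cycle: roughly 5 s idle on tcp:127.0.0.1:6640, a probe is sent and the connection drops to IDLE, then the reply arrives as POLLIN and it returns to ACTIVE. A toy version of that state machine (the real one lives in ovs/reconnect.py; this is a simplification):

```python
# Simplified inactivity-probe FSM matching the logged transitions.
PROBE_INTERVAL_MS = 5000

def next_state(state, idle_ms, got_pollin):
    if got_pollin:
        return "ACTIVE"
    if state == "ACTIVE" and idle_ms >= PROBE_INTERVAL_MS:
        return "IDLE"           # probe sent, now waiting for a reply
    return state

state = "ACTIVE"
state = next_state(state, idle_ms=5003, got_pollin=False)   # -> IDLE
state = next_state(state, idle_ms=0, got_pollin=True)       # -> ACTIVE
print(state)
```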
Dec 06 08:02:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:25.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:26.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:27 compute-1 ceph-mon[81689]: pgmap v3288: 305 pgs: 305 active+clean; 244 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 115 op/s
Dec 06 08:02:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:27.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:28 compute-1 sshd-session[300609]: Received disconnect from 186.87.166.141 port 37224:11: Bye Bye [preauth]
Dec 06 08:02:28 compute-1 sshd-session[300609]: Disconnected from authenticating user root 186.87.166.141 port 37224 [preauth]
Dec 06 08:02:28 compute-1 nova_compute[226101]: 2025-12-06 08:02:28.228 226109 INFO nova.compute.manager [None req-f0f2c3ef-c913-4559-a70d-f9f57a9ac627 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Get console output
Dec 06 08:02:28 compute-1 nova_compute[226101]: 2025-12-06 08:02:28.234 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
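The "can't concat NoneType to bytes" message is a TypeError that nova's privsep helper catches and deliberately ignores when the console pty read returns no data. The error itself is plain Python:

```python
# Minimal reproduction of the ignored error in the log line above.
buf = b""
chunk = None   # a read that produced nothing
try:
    buf += chunk
except TypeError as exc:
    print(exc)   # can't concat NoneType to bytes
```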
Dec 06 08:02:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:28.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:29 compute-1 ceph-mon[81689]: pgmap v3289: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:02:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:29.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:30.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:30 compute-1 nova_compute[226101]: 2025-12-06 08:02:30.477 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:30 compute-1 nova_compute[226101]: 2025-12-06 08:02:30.478 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:31 compute-1 nova_compute[226101]: 2025-12-06 08:02:31.255 226109 INFO nova.compute.manager [None req-f2ea57d6-cd90-44ce-980d-b058451645ae 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Get console output
Dec 06 08:02:31 compute-1 nova_compute[226101]: 2025-12-06 08:02:31.259 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:02:31 compute-1 ceph-mon[81689]: pgmap v3290: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:02:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:31.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:31 compute-1 nova_compute[226101]: 2025-12-06 08:02:31.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:31 compute-1 nova_compute[226101]: 2025-12-06 08:02:31.645 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:31 compute-1 nova_compute[226101]: 2025-12-06 08:02:31.646 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:31 compute-1 nova_compute[226101]: 2025-12-06 08:02:31.646 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:31 compute-1 nova_compute[226101]: 2025-12-06 08:02:31.646 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:02:31 compute-1 nova_compute[226101]: 2025-12-06 08:02:31.646 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:02:32 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3778702547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:32 compute-1 nova_compute[226101]: 2025-12-06 08:02:32.069 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
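update_available_resource shells out to ceph df (the processutils lines above) to size the RBD-backed disk pool. A standalone equivalent; the JSON key names ("stats", "total_avail_bytes") are assumptions based on ceph's usual output, not taken from this log:

```python
# Run the same command nova logs above and read the cluster-wide stats.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"]
)
stats = json.loads(out)["stats"]          # key names assumed
print("avail GiB:", stats["total_avail_bytes"] / 1024**3)
```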
Dec 06 08:02:32 compute-1 nova_compute[226101]: 2025-12-06 08:02:32.160 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:02:32 compute-1 nova_compute[226101]: 2025-12-06 08:02:32.160 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:02:32 compute-1 nova_compute[226101]: 2025-12-06 08:02:32.300 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:02:32 compute-1 nova_compute[226101]: 2025-12-06 08:02:32.301 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4095MB free_disk=20.922134399414062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:02:32 compute-1 nova_compute[226101]: 2025-12-06 08:02:32.302 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:32 compute-1 nova_compute[226101]: 2025-12-06 08:02:32.302 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:32.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:32 compute-1 nova_compute[226101]: 2025-12-06 08:02:32.440 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance a00396fd-1a78-4cad-9c38-7b0905ab5b9f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:02:32 compute-1 nova_compute[226101]: 2025-12-06 08:02:32.441 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:02:32 compute-1 nova_compute[226101]: 2025-12-06 08:02:32.441 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:02:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3778702547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:32 compute-1 nova_compute[226101]: 2025-12-06 08:02:32.619 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:02:33 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2859200518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:33 compute-1 nova_compute[226101]: 2025-12-06 08:02:33.041 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:02:33 compute-1 nova_compute[226101]: 2025-12-06 08:02:33.050 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:02:33 compute-1 nova_compute[226101]: 2025-12-06 08:02:33.076 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
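The inventory nova reports to placement combines totals, reservations, and allocation ratios; placement treats schedulable capacity as (total - reserved) * allocation_ratio. Applied to the logged values:

```python
# Effective capacity implied by the inventory data in the log line above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)
# VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
```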
Dec 06 08:02:33 compute-1 nova_compute[226101]: 2025-12-06 08:02:33.125 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:02:33 compute-1 nova_compute[226101]: 2025-12-06 08:02:33.126 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:33.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:33 compute-1 ceph-mon[81689]: pgmap v3291: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Dec 06 08:02:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2859200518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:34.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:34 compute-1 ceph-mon[81689]: pgmap v3292: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.126 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.127 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.127 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.154 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.154 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.154 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.154 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.479 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.481 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.481 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.481 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.518 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:35 compute-1 nova_compute[226101]: 2025-12-06 08:02:35.520 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 08:02:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:35.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:36.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:36 compute-1 ceph-mon[81689]: pgmap v3293: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:02:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2405441602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:37.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:37 compute-1 nova_compute[226101]: 2025-12-06 08:02:37.859 226109 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:02:38 compute-1 podman[300661]: 2025-12-06 08:02:38.100340733 +0000 UTC m=+0.073544241 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:02:38 compute-1 podman[300660]: 2025-12-06 08:02:38.11220309 +0000 UTC m=+0.088986133 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:02:38 compute-1 podman[300662]: 2025-12-06 08:02:38.19025474 +0000 UTC m=+0.155024882 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:02:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2406994185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1008802938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:38.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:38 compute-1 sshd-session[300611]: Connection closed by 165.154.55.146 port 37806 [preauth]
Dec 06 08:02:39 compute-1 nova_compute[226101]: 2025-12-06 08:02:39.176 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:02:39 compute-1 nova_compute[226101]: 2025-12-06 08:02:39.208 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:02:39 compute-1 nova_compute[226101]: 2025-12-06 08:02:39.208 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:02:39 compute-1 nova_compute[226101]: 2025-12-06 08:02:39.209 226109 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:02:39 compute-1 nova_compute[226101]: 2025-12-06 08:02:39.209 226109 DEBUG nova.network.neutron [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:02:39 compute-1 nova_compute[226101]: 2025-12-06 08:02:39.210 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:39 compute-1 nova_compute[226101]: 2025-12-06 08:02:39.210 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:02:39 compute-1 ceph-mon[81689]: pgmap v3294: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 73 KiB/s wr, 87 op/s
Dec 06 08:02:39 compute-1 nova_compute[226101]: 2025-12-06 08:02:39.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:39.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:39 compute-1 sshd-session[300719]: Received disconnect from 154.209.4.183 port 35666:11: Bye Bye [preauth]
Dec 06 08:02:39 compute-1 sshd-session[300719]: Disconnected from authenticating user root 154.209.4.183 port 35666 [preauth]
Dec 06 08:02:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:40.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:40 compute-1 nova_compute[226101]: 2025-12-06 08:02:40.520 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:40 compute-1 ceph-mon[81689]: pgmap v3295: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Dec 06 08:02:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:41.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:42.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:42 compute-1 nova_compute[226101]: 2025-12-06 08:02:42.727 226109 DEBUG nova.network.neutron [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:02:42 compute-1 nova_compute[226101]: 2025-12-06 08:02:42.759 226109 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:02:42 compute-1 nova_compute[226101]: 2025-12-06 08:02:42.896 226109 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 08:02:42 compute-1 nova_compute[226101]: 2025-12-06 08:02:42.897 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Creating file /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/d2d5c5cc21c147d2967250a67f22c1a2.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Dec 06 08:02:42 compute-1 nova_compute[226101]: 2025-12-06 08:02:42.897 226109 DEBUG oslo_concurrency.processutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/d2d5c5cc21c147d2967250a67f22c1a2.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:43 compute-1 ceph-mon[81689]: pgmap v3296: 305 pgs: 305 active+clean; 272 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Dec 06 08:02:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2170586488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:43 compute-1 nova_compute[226101]: 2025-12-06 08:02:43.375 226109 DEBUG oslo_concurrency.processutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/d2d5c5cc21c147d2967250a67f22c1a2.tmp" returned: 1 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:02:43 compute-1 nova_compute[226101]: 2025-12-06 08:02:43.376 226109 DEBUG oslo_concurrency.processutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/d2d5c5cc21c147d2967250a67f22c1a2.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 08:02:43 compute-1 nova_compute[226101]: 2025-12-06 08:02:43.376 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Creating directory /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Dec 06 08:02:43 compute-1 nova_compute[226101]: 2025-12-06 08:02:43.377 226109 DEBUG oslo_concurrency.processutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:43.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:43 compute-1 nova_compute[226101]: 2025-12-06 08:02:43.610 226109 DEBUG oslo_concurrency.processutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:02:43 compute-1 nova_compute[226101]: 2025-12-06 08:02:43.614 226109 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 08:02:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/702947871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:44.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:45 compute-1 nova_compute[226101]: 2025-12-06 08:02:45.522 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:45 compute-1 nova_compute[226101]: 2025-12-06 08:02:45.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:45.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:46 compute-1 kernel: tap7be900e8-79 (unregistering): left promiscuous mode
Dec 06 08:02:46 compute-1 NetworkManager[49031]: <info>  [1765008166.1631] device (tap7be900e8-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.174 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:46 compute-1 ovn_controller[130279]: 2025-12-06T08:02:46Z|00752|binding|INFO|Releasing lport 7be900e8-79cd-473a-8f1d-df5029d9e773 from this chassis (sb_readonly=0)
Dec 06 08:02:46 compute-1 ovn_controller[130279]: 2025-12-06T08:02:46Z|00753|binding|INFO|Setting lport 7be900e8-79cd-473a-8f1d-df5029d9e773 down in Southbound
Dec 06 08:02:46 compute-1 ovn_controller[130279]: 2025-12-06T08:02:46Z|00754|binding|INFO|Removing iface tap7be900e8-79 ovn-installed in OVS
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.178 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:46.183 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:70:ef 10.100.0.4'], port_security=['fa:16:3e:eb:70:ef 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89003807-49b2-48b6-9510-52c4e2235abf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64cba72f-9df1-4bbf-91d5-2ef412c84dfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=7be900e8-79cd-473a-8f1d-df5029d9e773) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:02:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:46.184 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 7be900e8-79cd-473a-8f1d-df5029d9e773 in datapath 26d75c28-bf40-4c60-9e29-1a7b2fb696a0 unbound from our chassis
Dec 06 08:02:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:46.185 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26d75c28-bf40-4c60-9e29-1a7b2fb696a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:02:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:46.187 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cfba60ae-1f66-42a9-93dc-bae7498165f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:46.188 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 namespace which is not needed anymore
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.196 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:46 compute-1 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Dec 06 08:02:46 compute-1 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b8.scope: Consumed 14.644s CPU time.
Dec 06 08:02:46 compute-1 systemd-machined[190302]: Machine qemu-86-instance-000000b8 terminated.
Dec 06 08:02:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:46.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.400 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.407 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.632 226109 INFO nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance shutdown successfully after 3 seconds.
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.642 226109 INFO nova.virt.libvirt.driver [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance destroyed successfully.
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.644 226109 DEBUG nova.virt.libvirt.vif [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:01:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1312966031',display_name='tempest-TestNetworkAdvancedServerOps-server-1312966031',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1312966031',id=184,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJW2ZsdJB9rXQtXWcXFfgPIxPNFvSSCKBWIMfh01jJc1P8HsIFfpY6rfx3BE/xpkRnGjfmas+KX3ri+dmiYMlm6kvXvL38+thL7RipAL0y0ulvcYl+qu5q/CwooSB+nYg==',key_name='tempest-TestNetworkAdvancedServerOps-1347378077',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-9vk08kgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:02:37Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=a00396fd-1a78-4cad-9c38-7b0905ab5b9f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1508194701", "vif_mac": "fa:16:3e:eb:70:ef"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.645 226109 DEBUG nova.network.os_vif_util [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converting VIF {"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1508194701", "vif_mac": "fa:16:3e:eb:70:ef"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.646 226109 DEBUG nova.network.os_vif_util [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.647 226109 DEBUG os_vif [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.650 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.651 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7be900e8-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.708 226109 DEBUG nova.compute.manager [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.709 226109 DEBUG oslo_concurrency.lockutils [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.710 226109 DEBUG oslo_concurrency.lockutils [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.710 226109 DEBUG oslo_concurrency.lockutils [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.711 226109 DEBUG nova.compute.manager [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.711 226109 WARNING nova.compute.manager [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state active and task_state resize_migrating.
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.712 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.715 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.719 226109 INFO os_vif [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79')
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.727 226109 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:02:46 compute-1 nova_compute[226101]: 2025-12-06 08:02:46.727 226109 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:02:46 compute-1 ceph-mon[81689]: pgmap v3297: 305 pgs: 305 active+clean; 272 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 240 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Dec 06 08:02:46 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[300592]: [NOTICE]   (300596) : haproxy version is 2.8.14-c23fe91
Dec 06 08:02:46 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[300592]: [NOTICE]   (300596) : path to executable is /usr/sbin/haproxy
Dec 06 08:02:46 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[300592]: [WARNING]  (300596) : Exiting Master process...
Dec 06 08:02:46 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[300592]: [WARNING]  (300596) : Exiting Master process...
Dec 06 08:02:46 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[300592]: [ALERT]    (300596) : Current worker (300598) exited with code 143 (Terminated)
Dec 06 08:02:46 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[300592]: [WARNING]  (300596) : All workers exited. Exiting... (0)
Dec 06 08:02:46 compute-1 systemd[1]: libpod-23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a.scope: Deactivated successfully.
Dec 06 08:02:47 compute-1 podman[300748]: 2025-12-06 08:02:47.003913987 +0000 UTC m=+0.713006280 container died 23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 08:02:47 compute-1 nova_compute[226101]: 2025-12-06 08:02:47.311 226109 DEBUG neutronclient.v2_0.client [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 7be900e8-79cd-473a-8f1d-df5029d9e773 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Dec 06 08:02:47 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a-userdata-shm.mount: Deactivated successfully.
Dec 06 08:02:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-298576a2a325dd8c7226d226d341356f1b5ea838dedab26775a02f4b590e568e-merged.mount: Deactivated successfully.
Dec 06 08:02:47 compute-1 nova_compute[226101]: 2025-12-06 08:02:47.454 226109 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:47 compute-1 nova_compute[226101]: 2025-12-06 08:02:47.455 226109 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:47 compute-1 nova_compute[226101]: 2025-12-06 08:02:47.455 226109 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:47 compute-1 podman[300748]: 2025-12-06 08:02:47.456499434 +0000 UTC m=+1.165591727 container cleanup 23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 08:02:47 compute-1 podman[300788]: 2025-12-06 08:02:47.521817683 +0000 UTC m=+0.044769170 container remove 23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 08:02:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:47.527 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[daef9dda-1136-402f-8752-4880d2928699]: (4, ('Sat Dec  6 08:02:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 (23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a)\n23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a\nSat Dec  6 08:02:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 (23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a)\n23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:47.529 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6b01629d-ef43-4657-ba74-b804b017e8f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:47.530 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d75c28-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:47 compute-1 nova_compute[226101]: 2025-12-06 08:02:47.531 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:47 compute-1 kernel: tap26d75c28-b0: left promiscuous mode
Dec 06 08:02:47 compute-1 nova_compute[226101]: 2025-12-06 08:02:47.544 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:47.546 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a5341815-8997-4800-b82f-38e1f7da513d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:47.560 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[49cb5135-99cd-4301-8afe-6ec7085b4cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:47.561 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[19573c92-f5a1-4118-af3e-da02b1898217]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:47 compute-1 systemd[1]: libpod-conmon-23a3e2191d0368445a1a9cbc5cb07e3794a97736aec45e9a88acb3b0a7231b3a.scope: Deactivated successfully.
Dec 06 08:02:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:47.577 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[80b6e1f0-c172-411e-803c-14cad654ea7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840788, 'reachable_time': 30051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300805, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:47.579 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:02:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:02:47.579 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[8033c136-f644-4c92-bc9d-962e6de22c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:47 compute-1 systemd[1]: run-netns-ovnmeta\x2d26d75c28\x2dbf40\x2d4c60\x2d9e29\x2d1a7b2fb696a0.mount: Deactivated successfully.
Dec 06 08:02:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:47.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:47 compute-1 ceph-mon[81689]: pgmap v3298: 305 pgs: 305 active+clean; 277 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 296 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Dec 06 08:02:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:48.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:48 compute-1 ceph-mon[81689]: pgmap v3299: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:02:48 compute-1 nova_compute[226101]: 2025-12-06 08:02:48.959 226109 DEBUG nova.compute.manager [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:48 compute-1 nova_compute[226101]: 2025-12-06 08:02:48.959 226109 DEBUG oslo_concurrency.lockutils [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:48 compute-1 nova_compute[226101]: 2025-12-06 08:02:48.959 226109 DEBUG oslo_concurrency.lockutils [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:48 compute-1 nova_compute[226101]: 2025-12-06 08:02:48.960 226109 DEBUG oslo_concurrency.lockutils [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:48 compute-1 nova_compute[226101]: 2025-12-06 08:02:48.960 226109 DEBUG nova.compute.manager [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:02:48 compute-1 nova_compute[226101]: 2025-12-06 08:02:48.960 226109 WARNING nova.compute.manager [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state active and task_state resize_migrated.
Dec 06 08:02:49 compute-1 nova_compute[226101]: 2025-12-06 08:02:49.519 226109 DEBUG nova.compute.manager [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:49 compute-1 nova_compute[226101]: 2025-12-06 08:02:49.519 226109 DEBUG nova.compute.manager [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing instance network info cache due to event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:02:49 compute-1 nova_compute[226101]: 2025-12-06 08:02:49.520 226109 DEBUG oslo_concurrency.lockutils [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:02:49 compute-1 nova_compute[226101]: 2025-12-06 08:02:49.520 226109 DEBUG oslo_concurrency.lockutils [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:02:49 compute-1 nova_compute[226101]: 2025-12-06 08:02:49.520 226109 DEBUG nova.network.neutron [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:02:49 compute-1 nova_compute[226101]: 2025-12-06 08:02:49.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:49.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:50.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:50 compute-1 nova_compute[226101]: 2025-12-06 08:02:50.523 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:50 compute-1 nova_compute[226101]: 2025-12-06 08:02:50.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:50 compute-1 ceph-mon[81689]: pgmap v3300: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:02:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:51.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:51 compute-1 nova_compute[226101]: 2025-12-06 08:02:51.703 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:52.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:53 compute-1 ceph-mon[81689]: pgmap v3301: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Dec 06 08:02:53 compute-1 nova_compute[226101]: 2025-12-06 08:02:53.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:53.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:54.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:55 compute-1 sudo[300806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:55 compute-1 sudo[300806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:55 compute-1 sudo[300806]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:55 compute-1 nova_compute[226101]: 2025-12-06 08:02:55.354 226109 DEBUG nova.network.neutron [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updated VIF entry in instance network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:02:55 compute-1 nova_compute[226101]: 2025-12-06 08:02:55.355 226109 DEBUG nova.network.neutron [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:02:55 compute-1 sudo[300831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:02:55 compute-1 sudo[300831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:55 compute-1 sudo[300831]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:55 compute-1 nova_compute[226101]: 2025-12-06 08:02:55.428 226109 DEBUG oslo_concurrency.lockutils [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:02:55 compute-1 sudo[300856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:55 compute-1 sudo[300856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:55 compute-1 sudo[300856]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:55 compute-1 sudo[300881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 08:02:55 compute-1 sudo[300881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:55 compute-1 nova_compute[226101]: 2025-12-06 08:02:55.526 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:55.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:55 compute-1 sudo[300881]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:55 compute-1 ceph-mon[81689]: pgmap v3302: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 102 KiB/s wr, 17 op/s
Dec 06 08:02:56 compute-1 sudo[300926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:56 compute-1 sudo[300926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:56 compute-1 sudo[300926]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:56 compute-1 sudo[300951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:02:56 compute-1 sudo[300951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:56 compute-1 sudo[300951]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:56 compute-1 sudo[300976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:56 compute-1 sudo[300976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:56 compute-1 sudo[300976]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:56 compute-1 sudo[301001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:02:56 compute-1 sudo[301001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e411 e411: 3 total, 3 up, 3 in
Dec 06 08:02:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:56.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:56 compute-1 nova_compute[226101]: 2025-12-06 08:02:56.752 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:56 compute-1 sudo[301001]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:57 compute-1 ceph-mon[81689]: pgmap v3303: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 102 KiB/s wr, 17 op/s
Dec 06 08:02:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:02:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:02:57 compute-1 ceph-mon[81689]: osdmap e411: 3 total, 3 up, 3 in
Dec 06 08:02:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:02:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:02:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:02:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:02:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:02:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:02:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:57.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/200200904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/16996870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:58.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:59 compute-1 ceph-mon[81689]: pgmap v3305: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 204 B/s rd, 15 KiB/s wr, 0 op/s
Dec 06 08:02:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:02:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:59.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:59 compute-1 nova_compute[226101]: 2025-12-06 08:02:59.677 226109 DEBUG nova.compute.manager [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:59 compute-1 nova_compute[226101]: 2025-12-06 08:02:59.677 226109 DEBUG oslo_concurrency.lockutils [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:59 compute-1 nova_compute[226101]: 2025-12-06 08:02:59.677 226109 DEBUG oslo_concurrency.lockutils [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:59 compute-1 nova_compute[226101]: 2025-12-06 08:02:59.678 226109 DEBUG oslo_concurrency.lockutils [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:59 compute-1 nova_compute[226101]: 2025-12-06 08:02:59.678 226109 DEBUG nova.compute.manager [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:02:59 compute-1 nova_compute[226101]: 2025-12-06 08:02:59.678 226109 WARNING nova.compute.manager [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state active and task_state resize_finish.
Dec 06 08:02:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:59 compute-1 sshd-session[301057]: Received disconnect from 101.100.194.199 port 49912:11: Bye Bye [preauth]
Dec 06 08:02:59 compute-1 sshd-session[301057]: Disconnected from authenticating user root 101.100.194.199 port 49912 [preauth]
Dec 06 08:03:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:00.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:00 compute-1 ceph-mon[81689]: pgmap v3306: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 204 B/s rd, 15 KiB/s wr, 0 op/s
Dec 06 08:03:00 compute-1 nova_compute[226101]: 2025-12-06 08:03:00.528 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:01 compute-1 nova_compute[226101]: 2025-12-06 08:03:01.422 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008166.4214618, a00396fd-1a78-4cad-9c38-7b0905ab5b9f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:03:01 compute-1 nova_compute[226101]: 2025-12-06 08:03:01.422 226109 INFO nova.compute.manager [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] VM Stopped (Lifecycle Event)
Dec 06 08:03:01 compute-1 nova_compute[226101]: 2025-12-06 08:03:01.492 226109 DEBUG nova.compute.manager [None req-46eec6ea-ac94-40eb-a6df-1b5044217393 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:03:01 compute-1 nova_compute[226101]: 2025-12-06 08:03:01.494 226109 DEBUG nova.compute.manager [None req-46eec6ea-ac94-40eb-a6df-1b5044217393 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:03:01 compute-1 nova_compute[226101]: 2025-12-06 08:03:01.525 226109 INFO nova.compute.manager [None req-46eec6ea-ac94-40eb-a6df-1b5044217393 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Dec 06 08:03:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:01.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:01.682 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:01.682 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:01.683 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:01 compute-1 nova_compute[226101]: 2025-12-06 08:03:01.755 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:02 compute-1 ceph-mon[81689]: pgmap v3307: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 689 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Dec 06 08:03:03 compute-1 nova_compute[226101]: 2025-12-06 08:03:03.587 226109 DEBUG nova.compute.manager [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:03 compute-1 nova_compute[226101]: 2025-12-06 08:03:03.588 226109 DEBUG oslo_concurrency.lockutils [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:03 compute-1 nova_compute[226101]: 2025-12-06 08:03:03.588 226109 DEBUG oslo_concurrency.lockutils [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:03 compute-1 nova_compute[226101]: 2025-12-06 08:03:03.588 226109 DEBUG oslo_concurrency.lockutils [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:03 compute-1 nova_compute[226101]: 2025-12-06 08:03:03.588 226109 DEBUG nova.compute.manager [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:03:03 compute-1 nova_compute[226101]: 2025-12-06 08:03:03.589 226109 WARNING nova.compute.manager [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state resized and task_state None.
Dec 06 08:03:03 compute-1 sudo[301059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:03:03 compute-1 sudo[301059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:03 compute-1 sudo[301059]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:03.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:03 compute-1 sudo[301084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:03:03 compute-1 sudo[301084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:03 compute-1 sudo[301084]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:03:04 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:03:04 compute-1 ceph-mon[81689]: pgmap v3308: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 689 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Dec 06 08:03:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:05 compute-1 nova_compute[226101]: 2025-12-06 08:03:05.530 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:05.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:06.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:06 compute-1 ceph-mon[81689]: pgmap v3309: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 KiB/s wr, 104 op/s
Dec 06 08:03:06 compute-1 nova_compute[226101]: 2025-12-06 08:03:06.792 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:07.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:08.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:08 compute-1 nova_compute[226101]: 2025-12-06 08:03:08.708 226109 DEBUG nova.compute.manager [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:08 compute-1 nova_compute[226101]: 2025-12-06 08:03:08.708 226109 DEBUG oslo_concurrency.lockutils [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:08 compute-1 nova_compute[226101]: 2025-12-06 08:03:08.709 226109 DEBUG oslo_concurrency.lockutils [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:08 compute-1 nova_compute[226101]: 2025-12-06 08:03:08.709 226109 DEBUG oslo_concurrency.lockutils [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:08 compute-1 nova_compute[226101]: 2025-12-06 08:03:08.709 226109 DEBUG nova.compute.manager [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:03:08 compute-1 nova_compute[226101]: 2025-12-06 08:03:08.709 226109 WARNING nova.compute.manager [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state resized and task_state resize_reverting.
Dec 06 08:03:08 compute-1 ceph-mon[81689]: pgmap v3310: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 KiB/s wr, 93 op/s
Dec 06 08:03:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:03:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3763747548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:03:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:03:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3763747548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:03:09 compute-1 podman[301110]: 2025-12-06 08:03:09.093846909 +0000 UTC m=+0.073682263 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:03:09 compute-1 podman[301109]: 2025-12-06 08:03:09.098335859 +0000 UTC m=+0.078634596 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Dec 06 08:03:09 compute-1 podman[301111]: 2025-12-06 08:03:09.164264154 +0000 UTC m=+0.145124756 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 06 08:03:09 compute-1 nova_compute[226101]: 2025-12-06 08:03:09.336 226109 INFO nova.compute.manager [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Swapping old allocation on dict_keys(['466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83']) held by migration c6765a8d-aa3f-4525-b048-7289f8cbbc8a for instance
Dec 06 08:03:09 compute-1 nova_compute[226101]: 2025-12-06 08:03:09.420 226109 DEBUG nova.scheduler.client.report [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Overwriting current allocation {'allocations': {'e75da5bf-16fa-49b1-b5e1-3aa61daf0433': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}, 'generation': 83}}, 'project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'user_id': '2ed2d17026504d70b893923a85cece4d', 'consumer_generation': 1} on consumer a00396fd-1a78-4cad-9c38-7b0905ab5b9f move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Dec 06 08:03:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:09.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:09 compute-1 nova_compute[226101]: 2025-12-06 08:03:09.877 226109 INFO nova.network.neutron [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating port 7be900e8-79cd-473a-8f1d-df5029d9e773 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 06 08:03:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1554085838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3763747548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:03:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3763747548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:03:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:03:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:10.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:03:10 compute-1 nova_compute[226101]: 2025-12-06 08:03:10.534 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:10 compute-1 nova_compute[226101]: 2025-12-06 08:03:10.973 226109 DEBUG nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:10 compute-1 nova_compute[226101]: 2025-12-06 08:03:10.974 226109 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:10 compute-1 nova_compute[226101]: 2025-12-06 08:03:10.975 226109 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:10 compute-1 nova_compute[226101]: 2025-12-06 08:03:10.975 226109 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:10 compute-1 nova_compute[226101]: 2025-12-06 08:03:10.976 226109 DEBUG nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:03:10 compute-1 nova_compute[226101]: 2025-12-06 08:03:10.976 226109 WARNING nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state resized and task_state resize_reverting.
Dec 06 08:03:11 compute-1 ceph-mon[81689]: pgmap v3311: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 86 op/s
Dec 06 08:03:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:11.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:11 compute-1 sshd-session[301172]: Received disconnect from 154.219.116.39 port 41480:11: Bye Bye [preauth]
Dec 06 08:03:11 compute-1 sshd-session[301172]: Disconnected from authenticating user root 154.219.116.39 port 41480 [preauth]
Dec 06 08:03:11 compute-1 nova_compute[226101]: 2025-12-06 08:03:11.774 226109 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:03:11 compute-1 nova_compute[226101]: 2025-12-06 08:03:11.775 226109 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:03:11 compute-1 nova_compute[226101]: 2025-12-06 08:03:11.775 226109 DEBUG nova.network.neutron [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:03:11 compute-1 nova_compute[226101]: 2025-12-06 08:03:11.796 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:12.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:13 compute-1 ceph-mon[81689]: pgmap v3312: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 86 op/s
Dec 06 08:03:13 compute-1 nova_compute[226101]: 2025-12-06 08:03:13.617 226109 DEBUG nova.compute.manager [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:13 compute-1 nova_compute[226101]: 2025-12-06 08:03:13.619 226109 DEBUG nova.compute.manager [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing instance network info cache due to event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:03:13 compute-1 nova_compute[226101]: 2025-12-06 08:03:13.619 226109 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:03:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:13.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:14.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:15 compute-1 ceph-mon[81689]: pgmap v3313: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 45 op/s
Dec 06 08:03:15 compute-1 nova_compute[226101]: 2025-12-06 08:03:15.221 226109 DEBUG nova.network.neutron [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:03:15 compute-1 nova_compute[226101]: 2025-12-06 08:03:15.265 226109 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:03:15 compute-1 nova_compute[226101]: 2025-12-06 08:03:15.265 226109 DEBUG nova.virt.libvirt.driver [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Dec 06 08:03:15 compute-1 nova_compute[226101]: 2025-12-06 08:03:15.297 226109 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:03:15 compute-1 nova_compute[226101]: 2025-12-06 08:03:15.298 226109 DEBUG nova.network.neutron [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:03:15 compute-1 nova_compute[226101]: 2025-12-06 08:03:15.343 226109 DEBUG nova.storage.rbd_utils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rolling back rbd image(a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Dec 06 08:03:15 compute-1 nova_compute[226101]: 2025-12-06 08:03:15.436 226109 DEBUG nova.storage.rbd_utils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] removing snapshot(nova-resize) on rbd image(a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
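The two rbd_utils lines above are the storage half of the resize revert: the instance disk is rolled back to the nova-resize snapshot and the snapshot is then removed. A hedged sketch of the equivalent operation with the python-rados/python-rbd bindings; the pool ('vms', per the guest XML below), image and snapshot names are copied from the log, while the conffile path and client name mirror the ceph command nova runs shortly afterwards and are assumptions here:

    # Sketch of the snapshot rollback rbd_utils performs during a resize
    # revert, using the rados/rbd Python bindings. Pool, image and snapshot
    # names come from the log; conffile and client name are assumptions.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            image = rbd.Image(ioctx, 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk')
            try:
                image.rollback_to_snap('nova-resize')  # revert disk contents
                image.remove_snap('nova-resize')       # drop the snapshot
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()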
Dec 06 08:03:15 compute-1 nova_compute[226101]: 2025-12-06 08:03:15.535 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:15.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:16 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:16.184 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.185 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:16 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:16.187 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:03:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e412 e412: 3 total, 3 up, 3 in
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.290 226109 DEBUG nova.virt.libvirt.driver [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Start _get_guest_xml network_info=[{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.295 226109 WARNING nova.virt.libvirt.driver [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.304 226109 DEBUG nova.virt.libvirt.host [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.305 226109 DEBUG nova.virt.libvirt.host [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.309 226109 DEBUG nova.virt.libvirt.host [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.310 226109 DEBUG nova.virt.libvirt.host [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.311 226109 DEBUG nova.virt.libvirt.driver [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.311 226109 DEBUG nova.virt.hardware [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.312 226109 DEBUG nova.virt.hardware [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.312 226109 DEBUG nova.virt.hardware [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.312 226109 DEBUG nova.virt.hardware [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.313 226109 DEBUG nova.virt.hardware [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.313 226109 DEBUG nova.virt.hardware [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.313 226109 DEBUG nova.virt.hardware [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.313 226109 DEBUG nova.virt.hardware [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.314 226109 DEBUG nova.virt.hardware [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.314 226109 DEBUG nova.virt.hardware [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.314 226109 DEBUG nova.virt.hardware [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.314 226109 DEBUG nova.objects.instance [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'vcpu_model' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.407 226109 DEBUG oslo_concurrency.processutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:03:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:16.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.829 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:03:16 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1183410053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.867 226109 DEBUG oslo_concurrency.processutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:03:16 compute-1 nova_compute[226101]: 2025-12-06 08:03:16.907 226109 DEBUG oslo_concurrency.processutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:03:17 compute-1 ceph-mon[81689]: pgmap v3314: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 7.3 KiB/s wr, 48 op/s
Dec 06 08:03:17 compute-1 ceph-mon[81689]: osdmap e412: 3 total, 3 up, 3 in
Dec 06 08:03:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1183410053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:03:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:03:17 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3691660085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.344 226109 DEBUG oslo_concurrency.processutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
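Before building the guest definition, nova shells out to "ceph mon dump --format=json" (twice here, once per RBD connection) to discover the monitor addresses; the three hosts it learns are the ones that reappear in the <host> elements of the disk <source> blocks in the XML below. A hedged sketch of that discovery step; the command line is copied from the log, while the JSON field layout assumed below matches current Ceph releases but is not shown in the log itself:

    # Sketch of the monitor discovery nova performs before generating the
    # guest XML: run "ceph mon dump --format=json" and extract the public
    # addresses. The 'mons'/'public_addr' layout is an assumption here.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    dump = json.loads(out)
    hosts = [m['public_addr'].split('/')[0] for m in dump['mons']]
    print(hosts)   # e.g. ['192.168.122.100:6789', ...]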
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.346 226109 DEBUG nova.virt.libvirt.vif [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:01:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1312966031',display_name='tempest-TestNetworkAdvancedServerOps-server-1312966031',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1312966031',id=184,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJW2ZsdJB9rXQtXWcXFfgPIxPNFvSSCKBWIMfh01jJc1P8HsIFfpY6rfx3BE/xpkRnGjfmas+KX3ri+dmiYMlm6kvXvL38+thL7RipAL0y0ulvcYl+qu5q/CwooSB+nYg==',key_name='tempest-TestNetworkAdvancedServerOps-1347378077',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-9vk08kgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:03:03Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=a00396fd-1a78-4cad-9c38-7b0905ab5b9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.347 226109 DEBUG nova.network.os_vif_util [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.348 226109 DEBUG nova.network.os_vif_util [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.350 226109 DEBUG nova.virt.libvirt.driver [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:03:17 compute-1 nova_compute[226101]:   <uuid>a00396fd-1a78-4cad-9c38-7b0905ab5b9f</uuid>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   <name>instance-000000b8</name>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1312966031</nova:name>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:03:16</nova:creationTime>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <nova:port uuid="7be900e8-79cd-473a-8f1d-df5029d9e773">
Dec 06 08:03:17 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <system>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <entry name="serial">a00396fd-1a78-4cad-9c38-7b0905ab5b9f</entry>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <entry name="uuid">a00396fd-1a78-4cad-9c38-7b0905ab5b9f</entry>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     </system>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   <os>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   </os>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   <features>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   </features>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk">
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       </source>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk.config">
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       </source>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:03:17 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:eb:70:ef"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <target dev="tap7be900e8-79"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/console.log" append="off"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <video>
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     </video>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <input type="keyboard" bus="usb"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:03:17 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:03:17 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:03:17 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:03:17 compute-1 nova_compute[226101]: </domain>
Dec 06 08:03:17 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
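Having rolled the disk back, finish_revert_migration regenerates the domain document above and hands it to libvirt, which is why systemd reports "Started Virtual Machine qemu-87-instance-000000b8" a few lines later. A hedged sketch of that final step with the libvirt-python bindings; the file name and connection URI are assumptions (nova goes through its own host API rather than this shape):

    # Sketch of what the libvirt driver ultimately does with the XML above:
    # define the domain and start it. 'instance-000000b8.xml' is a
    # hypothetical file holding that document; the URI is the conventional
    # system connection and an assumption here.
    import libvirt

    xml = open('instance-000000b8.xml').read()
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)    # persist the domain definition
        dom.create()                 # boot it
    finally:
        conn.close()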
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.351 226109 DEBUG nova.compute.manager [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Preparing to wait for external event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.352 226109 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.352 226109 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.352 226109 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
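The acquire/release pair around prepare_for_instance_event shows oslo.concurrency's named-lock pattern: every mutation of the per-instance event registry serializes on the "<instance-uuid>-events" lock, so the waiter registered here cannot race the event dispatch seen earlier. A hedged sketch of the same pattern; the lock name mirrors the convention in the log, and the registry body is illustrative rather than nova's actual structure:

    # Sketch of the string-keyed lock pattern visible above. The lock name
    # follows the "<uuid>-events" convention from the log; the dict is a
    # stand-in for nova's real per-instance event registry.
    from oslo_concurrency import lockutils

    instance_uuid = 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f'
    events = {}

    @lockutils.synchronized(f'{instance_uuid}-events')
    def create_or_get_event(name):
        # Only one caller at a time can touch the registry for this instance.
        return events.setdefault(name, object())

    create_or_get_event('network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773')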
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.353 226109 DEBUG nova.virt.libvirt.vif [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:01:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1312966031',display_name='tempest-TestNetworkAdvancedServerOps-server-1312966031',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1312966031',id=184,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJW2ZsdJB9rXQtXWcXFfgPIxPNFvSSCKBWIMfh01jJc1P8HsIFfpY6rfx3BE/xpkRnGjfmas+KX3ri+dmiYMlm6kvXvL38+thL7RipAL0y0ulvcYl+qu5q/CwooSB+nYg==',key_name='tempest-TestNetworkAdvancedServerOps-1347378077',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-9vk08kgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:03:03Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=a00396fd-1a78-4cad-9c38-7b0905ab5b9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.353 226109 DEBUG nova.network.os_vif_util [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.354 226109 DEBUG nova.network.os_vif_util [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.354 226109 DEBUG os_vif [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.354 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.355 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.355 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.357 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.358 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7be900e8-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.358 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7be900e8-79, col_values=(('external_ids', {'iface-id': '7be900e8-79cd-473a-8f1d-df5029d9e773', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:70:ef', 'vm-uuid': 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.360 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:17 compute-1 NetworkManager[49031]: <info>  [1765008197.3609] manager: (tap7be900e8-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.364 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.366 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.367 226109 INFO os_vif [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79')
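The two ovsdbapp commands in the transaction above (AddPortCommand, then DbSetCommand on the Interface row) are the programmatic form of an ovs-vsctl add-port plus the Neutron external_ids that ovn-controller matches when it claims the lport a moment later. A hedged sketch of the CLI equivalent, with all values copied from the log (os-vif itself uses the ovsdbapp IDL, not the CLI):

    # CLI equivalent of the plug transaction above: add the tap port to
    # br-int and stamp the external_ids ovn-controller binds against.
    import subprocess

    subprocess.check_call([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap7be900e8-79',
        '--', 'set', 'Interface', 'tap7be900e8-79',
        'external_ids:iface-id=7be900e8-79cd-473a-8f1d-df5029d9e773',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:eb:70:ef',
        'external_ids:vm-uuid=a00396fd-1a78-4cad-9c38-7b0905ab5b9f',
    ])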
Dec 06 08:03:17 compute-1 kernel: tap7be900e8-79: entered promiscuous mode
Dec 06 08:03:17 compute-1 NetworkManager[49031]: <info>  [1765008197.4409] manager: (tap7be900e8-79): new Tun device (/org/freedesktop/NetworkManager/Devices/342)
Dec 06 08:03:17 compute-1 ovn_controller[130279]: 2025-12-06T08:03:17Z|00755|binding|INFO|Claiming lport 7be900e8-79cd-473a-8f1d-df5029d9e773 for this chassis.
Dec 06 08:03:17 compute-1 ovn_controller[130279]: 2025-12-06T08:03:17Z|00756|binding|INFO|7be900e8-79cd-473a-8f1d-df5029d9e773: Claiming fa:16:3e:eb:70:ef 10.100.0.4
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.441 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.450 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:70:ef 10.100.0.4'], port_security=['fa:16:3e:eb:70:ef 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '89003807-49b2-48b6-9510-52c4e2235abf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64cba72f-9df1-4bbf-91d5-2ef412c84dfa, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=7be900e8-79cd-473a-8f1d-df5029d9e773) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.451 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 7be900e8-79cd-473a-8f1d-df5029d9e773 in datapath 26d75c28-bf40-4c60-9e29-1a7b2fb696a0 bound to our chassis
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.452 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26d75c28-bf40-4c60-9e29-1a7b2fb696a0
Dec 06 08:03:17 compute-1 ovn_controller[130279]: 2025-12-06T08:03:17Z|00757|binding|INFO|Setting lport 7be900e8-79cd-473a-8f1d-df5029d9e773 ovn-installed in OVS
Dec 06 08:03:17 compute-1 ovn_controller[130279]: 2025-12-06T08:03:17Z|00758|binding|INFO|Setting lport 7be900e8-79cd-473a-8f1d-df5029d9e773 up in Southbound
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.457 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.460 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.463 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fcad9eb8-4172-48a5-86ea-cba21a738403]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.464 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26d75c28-b1 in ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.467 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26d75c28-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.467 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7df5b6a0-f143-48f1-b921-6a1f11193ee6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.468 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[deb0617a-59c3-477e-b5d4-f8439a81dd0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
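Here the metadata agent provisions the network's metadata datapath: a veth pair whose inner end (tap26d75c28-b1) lives in the ovnmeta-26d75c28-... namespace, with the host end (tap26d75c28-b0) appearing to NetworkManager a few lines below. A hedged sketch of that plumbing as the equivalent iproute2 calls; interface and namespace names are taken from the log, and neutron actually performs this through its privsep daemon rather than by shelling out:

    # Sketch of the namespace plumbing the metadata agent is doing here,
    # expressed as equivalent iproute2 commands. Names come from the log;
    # addressing and the haproxy spawn that follow are omitted.
    import subprocess

    ns = 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0'
    subprocess.check_call(['ip', 'netns', 'add', ns])
    subprocess.check_call(['ip', 'link', 'add', 'tap26d75c28-b0',
                           'type', 'veth', 'peer', 'name', 'tap26d75c28-b1'])
    subprocess.check_call(['ip', 'link', 'set', 'tap26d75c28-b1', 'netns', ns])
    subprocess.check_call(['ip', '-n', ns, 'link', 'set', 'tap26d75c28-b1', 'up'])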
Dec 06 08:03:17 compute-1 systemd-udevd[301308]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:03:17 compute-1 systemd-machined[190302]: New machine qemu-87-instance-000000b8.
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.480 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[2c457297-441c-425e-b5ef-31b08d2dc8d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 NetworkManager[49031]: <info>  [1765008197.4940] device (tap7be900e8-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:03:17 compute-1 NetworkManager[49031]: <info>  [1765008197.4949] device (tap7be900e8-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:03:17 compute-1 systemd[1]: Started Virtual Machine qemu-87-instance-000000b8.
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.501 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[518222ef-d3fe-4481-b941-bd422277fcef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.527 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[9e154b69-2b51-488b-b13d-7d79ab5b69e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 systemd-udevd[301311]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.532 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c430829b-18ac-4cf1-91c0-e4980b9d6dc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 NetworkManager[49031]: <info>  [1765008197.5333] manager: (tap26d75c28-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/343)
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.560 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3c604b-df1d-4d39-87c1-670557ab0be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.563 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[36031ba3-a499-443d-80ac-06946a46dd22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 NetworkManager[49031]: <info>  [1765008197.5895] device (tap26d75c28-b0): carrier: link connected
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.594 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d9685854-fd96-43c2-b3c6-d53ae8c5a6ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.609 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[26b26bb9-8023-4384-a584-975db23e32c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d75c28-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:c0:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847709, 'reachable_time': 37192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301339, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.623 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[115ac920-ecd0-40cb-9f94-6c349c3f32d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:c04e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 847709, 'tstamp': 847709}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301340, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.639 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4436c5-31ce-4d80-abcd-999bee2cab4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d75c28-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:c0:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847709, 'reachable_time': 37192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301341, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:17.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.665 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[40fb46c4-c940-4bcc-996b-bd11231800c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.819 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9e04aef4-b0f1-414d-beed-4b6456befcb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.821 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d75c28-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.821 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.822 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26d75c28-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.823 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:17 compute-1 NetworkManager[49031]: <info>  [1765008197.8239] manager: (tap26d75c28-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Dec 06 08:03:17 compute-1 kernel: tap26d75c28-b0: entered promiscuous mode
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.825 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.825 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26d75c28-b0, col_values=(('external_ids', {'iface-id': '3e000649-26bb-4d0b-8171-d46f3570081b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.826 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:17 compute-1 ovn_controller[130279]: 2025-12-06T08:03:17Z|00759|binding|INFO|Releasing lport 3e000649-26bb-4d0b-8171-d46f3570081b from this chassis (sb_readonly=0)
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.838 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.840 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.841 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1f26379a-7b05-4814-b6ce-9d3adfdba623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.841 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-26d75c28-bf40-4c60-9e29-1a7b2fb696a0
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.pid.haproxy
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 26d75c28-bf40-4c60-9e29-1a7b2fb696a0
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:03:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:17.842 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'env', 'PROCESS_TAG=haproxy-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.928 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008197.9280872, a00396fd-1a78-4cad-9c38-7b0905ab5b9f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.929 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] VM Started (Lifecycle Event)
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.976 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.980 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008197.9309812, a00396fd-1a78-4cad-9c38-7b0905ab5b9f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:03:17 compute-1 nova_compute[226101]: 2025-12-06 08:03:17.980 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] VM Paused (Lifecycle Event)
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.009 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.015 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.026 226109 DEBUG nova.compute.manager [req-e8c12e66-397b-4b3b-ac89-c5caef36375d req-9e87bd3b-d84b-4c52-a73d-62140a6a65e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.027 226109 DEBUG oslo_concurrency.lockutils [req-e8c12e66-397b-4b3b-ac89-c5caef36375d req-9e87bd3b-d84b-4c52-a73d-62140a6a65e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.027 226109 DEBUG oslo_concurrency.lockutils [req-e8c12e66-397b-4b3b-ac89-c5caef36375d req-9e87bd3b-d84b-4c52-a73d-62140a6a65e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.028 226109 DEBUG oslo_concurrency.lockutils [req-e8c12e66-397b-4b3b-ac89-c5caef36375d req-9e87bd3b-d84b-4c52-a73d-62140a6a65e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.029 226109 DEBUG nova.compute.manager [req-e8c12e66-397b-4b3b-ac89-c5caef36375d req-9e87bd3b-d84b-4c52-a73d-62140a6a65e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Processing event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.031 226109 DEBUG nova.compute.manager [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.042 226109 INFO nova.virt.libvirt.driver [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance running successfully.
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.043 226109 DEBUG nova.virt.libvirt.driver [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.102 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.105 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008198.0377371, a00396fd-1a78-4cad-9c38-7b0905ab5b9f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.105 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] VM Resumed (Lifecycle Event)
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.197 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.203 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.232 226109 INFO nova.compute.manager [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance to original state: 'active'
Dec 06 08:03:18 compute-1 nova_compute[226101]: 2025-12-06 08:03:18.236 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Dec 06 08:03:18 compute-1 podman[301415]: 2025-12-06 08:03:18.24118662 +0000 UTC m=+0.061125788 container create 96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 08:03:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3691660085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:03:18 compute-1 systemd[1]: Started libpod-conmon-96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3.scope.
Dec 06 08:03:18 compute-1 podman[301415]: 2025-12-06 08:03:18.210673323 +0000 UTC m=+0.030612501 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:03:18 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:03:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb53260c39f0287f7e649a4ba426b75fa73844491db104a8f09471b28ed82bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:03:18 compute-1 podman[301415]: 2025-12-06 08:03:18.380738567 +0000 UTC m=+0.200677735 container init 96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:03:18 compute-1 podman[301415]: 2025-12-06 08:03:18.386057618 +0000 UTC m=+0.205996756 container start 96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:03:18 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[301432]: [NOTICE]   (301436) : New worker (301438) forked
Dec 06 08:03:18 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[301432]: [NOTICE]   (301436) : Loading success.
Dec 06 08:03:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:19 compute-1 ceph-mon[81689]: pgmap v3316: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 8.8 KiB/s wr, 2 op/s
Dec 06 08:03:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:19.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:19 compute-1 sshd-session[301428]: Received disconnect from 136.112.8.45 port 45480:11: Bye Bye [preauth]
Dec 06 08:03:19 compute-1 sshd-session[301428]: Disconnected from authenticating user root 136.112.8.45 port 45480 [preauth]
Dec 06 08:03:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:20 compute-1 nova_compute[226101]: 2025-12-06 08:03:20.417 226109 DEBUG nova.compute.manager [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:20 compute-1 nova_compute[226101]: 2025-12-06 08:03:20.417 226109 DEBUG oslo_concurrency.lockutils [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:20 compute-1 nova_compute[226101]: 2025-12-06 08:03:20.418 226109 DEBUG oslo_concurrency.lockutils [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:20 compute-1 nova_compute[226101]: 2025-12-06 08:03:20.418 226109 DEBUG oslo_concurrency.lockutils [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:20 compute-1 nova_compute[226101]: 2025-12-06 08:03:20.419 226109 DEBUG nova.compute.manager [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:03:20 compute-1 nova_compute[226101]: 2025-12-06 08:03:20.419 226109 WARNING nova.compute.manager [req-a72d1fb4-02a2-4f85-a19f-7461cf649d2a req-414a5afb-c0f3-445f-b5a3-66dc252c863f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state active and task_state None.
Dec 06 08:03:20 compute-1 nova_compute[226101]: 2025-12-06 08:03:20.421 226109 DEBUG nova.network.neutron [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updated VIF entry in instance network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:03:20 compute-1 nova_compute[226101]: 2025-12-06 08:03:20.422 226109 DEBUG nova.network.neutron [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:03:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:20.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:20 compute-1 nova_compute[226101]: 2025-12-06 08:03:20.445 226109 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:03:20 compute-1 nova_compute[226101]: 2025-12-06 08:03:20.538 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:21 compute-1 ceph-mon[81689]: pgmap v3317: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 8.8 KiB/s wr, 2 op/s
Dec 06 08:03:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:21.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:22.190 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:22 compute-1 nova_compute[226101]: 2025-12-06 08:03:22.360 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:22.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:23 compute-1 ceph-mon[81689]: pgmap v3318: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 11 KiB/s wr, 135 op/s
Dec 06 08:03:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3120929876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:23.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 e413: 3 total, 3 up, 3 in
Dec 06 08:03:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:24.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:24 compute-1 ceph-mon[81689]: pgmap v3319: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 11 KiB/s wr, 135 op/s
Dec 06 08:03:24 compute-1 ceph-mon[81689]: osdmap e413: 3 total, 3 up, 3 in
Dec 06 08:03:25 compute-1 nova_compute[226101]: 2025-12-06 08:03:25.541 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:25.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:26 compute-1 sshd-session[301449]: Received disconnect from 186.96.151.198 port 51742:11: Bye Bye [preauth]
Dec 06 08:03:26 compute-1 sshd-session[301449]: Disconnected from authenticating user root 186.96.151.198 port 51742 [preauth]
Dec 06 08:03:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:26.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:26 compute-1 sshd-session[301248]: error: kex_exchange_identification: read: Connection timed out
Dec 06 08:03:26 compute-1 sshd-session[301248]: banner exchange: Connection from 27.128.170.160 port 45344: Connection timed out
Dec 06 08:03:27 compute-1 ceph-mon[81689]: pgmap v3321: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 KiB/s wr, 143 op/s
Dec 06 08:03:27 compute-1 nova_compute[226101]: 2025-12-06 08:03:27.361 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:27.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:28.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:29 compute-1 ceph-mon[81689]: pgmap v3322: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 KiB/s wr, 133 op/s
Dec 06 08:03:29 compute-1 ovn_controller[130279]: 2025-12-06T08:03:29Z|00760|binding|INFO|Releasing lport 3e000649-26bb-4d0b-8171-d46f3570081b from this chassis (sb_readonly=0)
Dec 06 08:03:29 compute-1 nova_compute[226101]: 2025-12-06 08:03:29.565 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:29.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:30.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:30 compute-1 nova_compute[226101]: 2025-12-06 08:03:30.542 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:30 compute-1 ovn_controller[130279]: 2025-12-06T08:03:30Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:70:ef 10.100.0.4
Dec 06 08:03:30 compute-1 sshd-session[301447]: error: kex_exchange_identification: read: Connection timed out
Dec 06 08:03:30 compute-1 sshd-session[301447]: banner exchange: Connection from 14.103.110.123 port 55056: Connection timed out
Dec 06 08:03:31 compute-1 ceph-mon[81689]: pgmap v3323: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 KiB/s wr, 133 op/s
Dec 06 08:03:31 compute-1 nova_compute[226101]: 2025-12-06 08:03:31.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:31 compute-1 nova_compute[226101]: 2025-12-06 08:03:31.644 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:31 compute-1 nova_compute[226101]: 2025-12-06 08:03:31.645 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:31 compute-1 nova_compute[226101]: 2025-12-06 08:03:31.645 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:31 compute-1 nova_compute[226101]: 2025-12-06 08:03:31.645 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:03:31 compute-1 nova_compute[226101]: 2025-12-06 08:03:31.645 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:03:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:31.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:32 compute-1 nova_compute[226101]: 2025-12-06 08:03:32.362 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:32.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:33.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:34 compute-1 nova_compute[226101]: 2025-12-06 08:03:34.769 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:03:34 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1238206177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:34 compute-1 ceph-mon[81689]: pgmap v3324: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 14 KiB/s wr, 30 op/s
Dec 06 08:03:34 compute-1 nova_compute[226101]: 2025-12-06 08:03:34.939 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.067 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.068 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.216 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.217 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4073MB free_disk=20.94266128540039GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.218 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.218 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.297 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance a00396fd-1a78-4cad-9c38-7b0905ab5b9f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.298 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.298 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.334 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.545 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:35.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:03:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/969753334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.762 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.767 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.782 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.784 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:03:35 compute-1 nova_compute[226101]: 2025-12-06 08:03:35.785 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:35 compute-1 ceph-mon[81689]: pgmap v3325: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 14 KiB/s wr, 30 op/s
Dec 06 08:03:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1238206177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/969753334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:36.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:37 compute-1 ceph-mon[81689]: pgmap v3326: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 13 KiB/s wr, 45 op/s
Dec 06 08:03:37 compute-1 nova_compute[226101]: 2025-12-06 08:03:37.365 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:37 compute-1 nova_compute[226101]: 2025-12-06 08:03:37.384 226109 INFO nova.compute.manager [None req-03c7b05d-b51f-4c7d-882f-36fa2f432905 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Get console output
Dec 06 08:03:37 compute-1 nova_compute[226101]: 2025-12-06 08:03:37.394 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:03:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:37.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2015708727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1661303541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:38.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:38 compute-1 nova_compute[226101]: 2025-12-06 08:03:38.648 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:38 compute-1 nova_compute[226101]: 2025-12-06 08:03:38.786 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:38 compute-1 nova_compute[226101]: 2025-12-06 08:03:38.786 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:03:38 compute-1 nova_compute[226101]: 2025-12-06 08:03:38.786 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.028 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.028 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.029 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.029 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.047 226109 DEBUG nova.compute.manager [req-e51f5ae2-9415-402a-a5ef-5bf399d79a72 req-ed254f2a-7f04-4a14-a40d-437b7692a3a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.048 226109 DEBUG nova.compute.manager [req-e51f5ae2-9415-402a-a5ef-5bf399d79a72 req-ed254f2a-7f04-4a14-a40d-437b7692a3a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing instance network info cache due to event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.048 226109 DEBUG oslo_concurrency.lockutils [req-e51f5ae2-9415-402a-a5ef-5bf399d79a72 req-ed254f2a-7f04-4a14-a40d-437b7692a3a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.116 226109 DEBUG oslo_concurrency.lockutils [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.117 226109 DEBUG oslo_concurrency.lockutils [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.117 226109 DEBUG oslo_concurrency.lockutils [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.117 226109 DEBUG oslo_concurrency.lockutils [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.118 226109 DEBUG oslo_concurrency.lockutils [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.119 226109 INFO nova.compute.manager [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Terminating instance
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.120 226109 DEBUG nova.compute.manager [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:03:39 compute-1 kernel: tap7be900e8-79 (unregistering): left promiscuous mode
Dec 06 08:03:39 compute-1 NetworkManager[49031]: <info>  [1765008219.1757] device (tap7be900e8-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:03:39 compute-1 ovn_controller[130279]: 2025-12-06T08:03:39Z|00761|binding|INFO|Releasing lport 7be900e8-79cd-473a-8f1d-df5029d9e773 from this chassis (sb_readonly=0)
Dec 06 08:03:39 compute-1 ovn_controller[130279]: 2025-12-06T08:03:39Z|00762|binding|INFO|Setting lport 7be900e8-79cd-473a-8f1d-df5029d9e773 down in Southbound
Dec 06 08:03:39 compute-1 ovn_controller[130279]: 2025-12-06T08:03:39Z|00763|binding|INFO|Removing iface tap7be900e8-79 ovn-installed in OVS
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.191 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.194 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:39.202 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:70:ef 10.100.0.4'], port_security=['fa:16:3e:eb:70:ef 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '12', 'neutron:security_group_ids': '89003807-49b2-48b6-9510-52c4e2235abf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64cba72f-9df1-4bbf-91d5-2ef412c84dfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=7be900e8-79cd-473a-8f1d-df5029d9e773) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:03:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:39.204 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 7be900e8-79cd-473a-8f1d-df5029d9e773 in datapath 26d75c28-bf40-4c60-9e29-1a7b2fb696a0 unbound from our chassis
Dec 06 08:03:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:39.207 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26d75c28-bf40-4c60-9e29-1a7b2fb696a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:03:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:39.209 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[82cd1e18-6f88-467b-a5db-6d784c14ca7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:39.210 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 namespace which is not needed anymore
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.227 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:39 compute-1 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Dec 06 08:03:39 compute-1 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b8.scope: Consumed 14.174s CPU time.
Dec 06 08:03:39 compute-1 systemd-machined[190302]: Machine qemu-87-instance-000000b8 terminated.
Dec 06 08:03:39 compute-1 podman[301497]: 2025-12-06 08:03:39.290520443 +0000 UTC m=+0.080466386 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:03:39 compute-1 podman[301500]: 2025-12-06 08:03:39.300177081 +0000 UTC m=+0.091654175 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.355 226109 INFO nova.virt.libvirt.driver [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance destroyed successfully.
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.356 226109 DEBUG nova.objects.instance [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.379 226109 DEBUG nova.virt.libvirt.vif [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:01:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1312966031',display_name='tempest-TestNetworkAdvancedServerOps-server-1312966031',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1312966031',id=184,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJW2ZsdJB9rXQtXWcXFfgPIxPNFvSSCKBWIMfh01jJc1P8HsIFfpY6rfx3BE/xpkRnGjfmas+KX3ri+dmiYMlm6kvXvL38+thL7RipAL0y0ulvcYl+qu5q/CwooSB+nYg==',key_name='tempest-TestNetworkAdvancedServerOps-1347378077',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:03:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-9vk08kgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:03:18Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=a00396fd-1a78-4cad-9c38-7b0905ab5b9f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.379 226109 DEBUG nova.network.os_vif_util [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.380 226109 DEBUG nova.network.os_vif_util [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.380 226109 DEBUG os_vif [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.382 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.382 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7be900e8-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.384 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.387 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:03:39 compute-1 nova_compute[226101]: 2025-12-06 08:03:39.390 226109 INFO os_vif [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79')
Dec 06 08:03:39 compute-1 podman[301501]: 2025-12-06 08:03:39.49546877 +0000 UTC m=+0.251595847 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 08:03:39 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[301432]: [NOTICE]   (301436) : haproxy version is 2.8.14-c23fe91
Dec 06 08:03:39 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[301432]: [NOTICE]   (301436) : path to executable is /usr/sbin/haproxy
Dec 06 08:03:39 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[301432]: [WARNING]  (301436) : Exiting Master process...
Dec 06 08:03:39 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[301432]: [WARNING]  (301436) : Exiting Master process...
Dec 06 08:03:39 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[301432]: [ALERT]    (301436) : Current worker (301438) exited with code 143 (Terminated)
Dec 06 08:03:39 compute-1 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[301432]: [WARNING]  (301436) : All workers exited. Exiting... (0)
Dec 06 08:03:39 compute-1 systemd[1]: libpod-96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3.scope: Deactivated successfully.
Dec 06 08:03:39 compute-1 podman[301578]: 2025-12-06 08:03:39.514387476 +0000 UTC m=+0.197600951 container died 96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:03:39 compute-1 ceph-mon[81689]: pgmap v3327: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 14 KiB/s wr, 43 op/s
Dec 06 08:03:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:39.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:39 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3-userdata-shm.mount: Deactivated successfully.
Dec 06 08:03:39 compute-1 systemd[1]: var-lib-containers-storage-overlay-9fb53260c39f0287f7e649a4ba426b75fa73844491db104a8f09471b28ed82bf-merged.mount: Deactivated successfully.
Dec 06 08:03:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:40 compute-1 podman[301578]: 2025-12-06 08:03:40.270939742 +0000 UTC m=+0.954153207 container cleanup 96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 08:03:40 compute-1 systemd[1]: libpod-conmon-96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3.scope: Deactivated successfully.
Dec 06 08:03:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:40.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:40 compute-1 nova_compute[226101]: 2025-12-06 08:03:40.547 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:40 compute-1 podman[301646]: 2025-12-06 08:03:40.76763227 +0000 UTC m=+0.459620077 container remove 96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:03:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:40.777 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e81a1278-2310-4e2c-87ae-0c957cb979a2]: (4, ('Sat Dec  6 08:03:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 (96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3)\n96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3\nSat Dec  6 08:03:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 (96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3)\n96c09443d811b98bf874acf0286b559c5c55e2cb13a32d405e41b78394059ca3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:40.779 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e0ed47-4c50-4efe-8397-2a40c233008d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:40.782 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d75c28-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:40 compute-1 nova_compute[226101]: 2025-12-06 08:03:40.785 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:40 compute-1 kernel: tap26d75c28-b0: left promiscuous mode
Dec 06 08:03:40 compute-1 nova_compute[226101]: 2025-12-06 08:03:40.814 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:40.817 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cb0129d4-fe78-4af1-a49d-f3bc8e822c5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:40.838 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8b196444-7c56-46c5-ab3f-22c3db21c394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:40.840 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[948a40f7-4aff-4731-ab81-a25643f794ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:40.864 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a906130a-cb87-4093-ba9e-2264424f6e54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847702, 'reachable_time': 32162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301663, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:40.867 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:03:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:40.868 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[43a4832a-bf45-4a4e-b3d8-f983a1070ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:40 compute-1 systemd[1]: run-netns-ovnmeta\x2d26d75c28\x2dbf40\x2d4c60\x2d9e29\x2d1a7b2fb696a0.mount: Deactivated successfully.
Dec 06 08:03:41 compute-1 ceph-mon[81689]: pgmap v3328: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 14 KiB/s wr, 43 op/s
Dec 06 08:03:41 compute-1 nova_compute[226101]: 2025-12-06 08:03:41.396 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:03:41 compute-1 nova_compute[226101]: 2025-12-06 08:03:41.421 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:03:41 compute-1 nova_compute[226101]: 2025-12-06 08:03:41.421 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:03:41 compute-1 nova_compute[226101]: 2025-12-06 08:03:41.421 226109 DEBUG oslo_concurrency.lockutils [req-e51f5ae2-9415-402a-a5ef-5bf399d79a72 req-ed254f2a-7f04-4a14-a40d-437b7692a3a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:03:41 compute-1 nova_compute[226101]: 2025-12-06 08:03:41.422 226109 DEBUG nova.network.neutron [req-e51f5ae2-9415-402a-a5ef-5bf399d79a72 req-ed254f2a-7f04-4a14-a40d-437b7692a3a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:03:41 compute-1 nova_compute[226101]: 2025-12-06 08:03:41.422 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:41 compute-1 nova_compute[226101]: 2025-12-06 08:03:41.423 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:41 compute-1 nova_compute[226101]: 2025-12-06 08:03:41.423 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:03:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:41.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:42 compute-1 nova_compute[226101]: 2025-12-06 08:03:42.212 226109 INFO nova.virt.libvirt.driver [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Deleting instance files /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f_del
Dec 06 08:03:42 compute-1 nova_compute[226101]: 2025-12-06 08:03:42.213 226109 INFO nova.virt.libvirt.driver [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Deletion of /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f_del complete
Dec 06 08:03:42 compute-1 nova_compute[226101]: 2025-12-06 08:03:42.275 226109 INFO nova.compute.manager [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Took 3.16 seconds to destroy the instance on the hypervisor.
Dec 06 08:03:42 compute-1 nova_compute[226101]: 2025-12-06 08:03:42.276 226109 DEBUG oslo.service.loopingcall [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:03:42 compute-1 nova_compute[226101]: 2025-12-06 08:03:42.276 226109 DEBUG nova.compute.manager [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:03:42 compute-1 nova_compute[226101]: 2025-12-06 08:03:42.276 226109 DEBUG nova.network.neutron [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:03:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:42.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:43 compute-1 ceph-mon[81689]: pgmap v3329: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 23 KiB/s wr, 58 op/s
Dec 06 08:03:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:43.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.387 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:44.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.693 226109 DEBUG nova.network.neutron [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.719 226109 INFO nova.compute.manager [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Took 2.44 seconds to deallocate network for instance.
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.776 226109 DEBUG oslo_concurrency.lockutils [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.776 226109 DEBUG oslo_concurrency.lockutils [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.824 226109 DEBUG nova.compute.manager [req-d5f2603c-a5de-41d9-91bc-874b996af508 req-30d5e5e9-2f40-4519-bbf3-702b15fa20a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.825 226109 DEBUG oslo_concurrency.lockutils [req-d5f2603c-a5de-41d9-91bc-874b996af508 req-30d5e5e9-2f40-4519-bbf3-702b15fa20a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.826 226109 DEBUG oslo_concurrency.lockutils [req-d5f2603c-a5de-41d9-91bc-874b996af508 req-30d5e5e9-2f40-4519-bbf3-702b15fa20a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.826 226109 DEBUG oslo_concurrency.lockutils [req-d5f2603c-a5de-41d9-91bc-874b996af508 req-30d5e5e9-2f40-4519-bbf3-702b15fa20a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.827 226109 DEBUG nova.compute.manager [req-d5f2603c-a5de-41d9-91bc-874b996af508 req-30d5e5e9-2f40-4519-bbf3-702b15fa20a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.827 226109 WARNING nova.compute.manager [req-d5f2603c-a5de-41d9-91bc-874b996af508 req-30d5e5e9-2f40-4519-bbf3-702b15fa20a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state deleted and task_state None.
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.831 226109 DEBUG nova.compute.manager [req-0a6534b0-1770-4c74-9e96-c1a4061ae4ba req-9b67db85-8e46-4e10-82b3-b6878f405407 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-deleted-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:44 compute-1 nova_compute[226101]: 2025-12-06 08:03:44.837 226109 DEBUG oslo_concurrency.processutils [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:03:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:45 compute-1 nova_compute[226101]: 2025-12-06 08:03:45.168 226109 DEBUG nova.network.neutron [req-e51f5ae2-9415-402a-a5ef-5bf399d79a72 req-ed254f2a-7f04-4a14-a40d-437b7692a3a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updated VIF entry in instance network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:03:45 compute-1 nova_compute[226101]: 2025-12-06 08:03:45.170 226109 DEBUG nova.network.neutron [req-e51f5ae2-9415-402a-a5ef-5bf399d79a72 req-ed254f2a-7f04-4a14-a40d-437b7692a3a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:03:45 compute-1 nova_compute[226101]: 2025-12-06 08:03:45.208 226109 DEBUG oslo_concurrency.lockutils [req-e51f5ae2-9415-402a-a5ef-5bf399d79a72 req-ed254f2a-7f04-4a14-a40d-437b7692a3a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:03:45 compute-1 nova_compute[226101]: 2025-12-06 08:03:45.549 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:45 compute-1 nova_compute[226101]: 2025-12-06 08:03:45.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:45.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:03:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3888401866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:46 compute-1 ceph-mon[81689]: pgmap v3330: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 11 KiB/s wr, 33 op/s
Dec 06 08:03:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1425635196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:46 compute-1 nova_compute[226101]: 2025-12-06 08:03:46.242 226109 DEBUG oslo_concurrency.processutils [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:03:46 compute-1 nova_compute[226101]: 2025-12-06 08:03:46.249 226109 DEBUG nova.compute.provider_tree [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:03:46 compute-1 nova_compute[226101]: 2025-12-06 08:03:46.267 226109 DEBUG nova.scheduler.client.report [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:03:46 compute-1 nova_compute[226101]: 2025-12-06 08:03:46.293 226109 DEBUG oslo_concurrency.lockutils [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:46 compute-1 nova_compute[226101]: 2025-12-06 08:03:46.334 226109 INFO nova.scheduler.client.report [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Deleted allocations for instance a00396fd-1a78-4cad-9c38-7b0905ab5b9f
Dec 06 08:03:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:46.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:46 compute-1 nova_compute[226101]: 2025-12-06 08:03:46.480 226109 DEBUG oslo_concurrency.lockutils [None req-bd8d831f-2965-4f60-9606-bee72a65952f 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1345692704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:47 compute-1 ceph-mon[81689]: pgmap v3331: 305 pgs: 305 active+clean; 121 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 11 KiB/s wr, 47 op/s
Dec 06 08:03:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3888401866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:47.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:48.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:48 compute-1 nova_compute[226101]: 2025-12-06 08:03:48.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:48 compute-1 ceph-mon[81689]: pgmap v3332: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Dec 06 08:03:49 compute-1 nova_compute[226101]: 2025-12-06 08:03:49.435 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:49 compute-1 nova_compute[226101]: 2025-12-06 08:03:49.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:49.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:50.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:50 compute-1 nova_compute[226101]: 2025-12-06 08:03:50.594 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:50 compute-1 ceph-mon[81689]: pgmap v3333: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Dec 06 08:03:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2935238790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:51 compute-1 nova_compute[226101]: 2025-12-06 08:03:51.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:51.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:52.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:52 compute-1 ceph-mon[81689]: pgmap v3334: 305 pgs: 305 active+clean; 126 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 185 KiB/s wr, 29 op/s
Dec 06 08:03:53 compute-1 nova_compute[226101]: 2025-12-06 08:03:53.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:53.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:53 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Dec 06 08:03:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:53.989596) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:03:53 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Dec 06 08:03:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008233989656, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2371, "num_deletes": 257, "total_data_size": 5880790, "memory_usage": 5953136, "flush_reason": "Manual Compaction"}
Dec 06 08:03:53 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234017390, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 3819360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69922, "largest_seqno": 72288, "table_properties": {"data_size": 3809665, "index_size": 6124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19924, "raw_average_key_size": 20, "raw_value_size": 3790269, "raw_average_value_size": 3867, "num_data_blocks": 267, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008027, "oldest_key_time": 1765008027, "file_creation_time": 1765008233, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 27868 microseconds, and 9125 cpu microseconds.
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.017471) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 3819360 bytes OK
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.017490) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.022081) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.022098) EVENT_LOG_v1 {"time_micros": 1765008234022092, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.022118) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 5870296, prev total WAL file size 5870296, number of live WAL files 2.
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.023764) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353237' seq:72057594037927935, type:22 .. '6C6F676D0032373738' seq:0, type:0; will stop at (end)
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(3729KB)], [141(10MB)]
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234023843, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 14476299, "oldest_snapshot_seqno": -1}
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 10275 keys, 14316630 bytes, temperature: kUnknown
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234147079, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 14316630, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14249212, "index_size": 40617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25733, "raw_key_size": 270718, "raw_average_key_size": 26, "raw_value_size": 14068012, "raw_average_value_size": 1369, "num_data_blocks": 1554, "num_entries": 10275, "num_filter_entries": 10275, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765008234, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.147467) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 14316630 bytes
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.149342) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.4 rd, 116.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.2 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.5) write-amplify(3.7) OK, records in: 10810, records dropped: 535 output_compression: NoCompression
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.149363) EVENT_LOG_v1 {"time_micros": 1765008234149353, "job": 90, "event": "compaction_finished", "compaction_time_micros": 123354, "compaction_time_cpu_micros": 42072, "output_level": 6, "num_output_files": 1, "total_output_size": 14316630, "num_input_records": 10810, "num_output_records": 10275, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234150523, "job": 90, "event": "table_file_deletion", "file_number": 143}
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234152987, "job": 90, "event": "table_file_deletion", "file_number": 141}
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.023616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.153142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.153148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.153150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.153153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:54.153155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-1 nova_compute[226101]: 2025-12-06 08:03:54.354 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008219.3530662, a00396fd-1a78-4cad-9c38-7b0905ab5b9f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:03:54 compute-1 nova_compute[226101]: 2025-12-06 08:03:54.355 226109 INFO nova.compute.manager [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] VM Stopped (Lifecycle Event)
Dec 06 08:03:54 compute-1 nova_compute[226101]: 2025-12-06 08:03:54.440 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:54 compute-1 nova_compute[226101]: 2025-12-06 08:03:54.452 226109 DEBUG nova.compute.manager [None req-0bef454f-c9ba-4cd0-a720-f5707a01ee90 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:03:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:54.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:54 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:54.759 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:03:54 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:03:54.760 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:03:54 compute-1 nova_compute[226101]: 2025-12-06 08:03:54.760 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:54 compute-1 ceph-mon[81689]: pgmap v3335: 305 pgs: 305 active+clean; 126 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 176 KiB/s wr, 14 op/s
Dec 06 08:03:55 compute-1 nova_compute[226101]: 2025-12-06 08:03:55.170 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:55 compute-1 nova_compute[226101]: 2025-12-06 08:03:55.302 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:55 compute-1 nova_compute[226101]: 2025-12-06 08:03:55.596 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:55.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.017084) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236017178, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 274, "num_deletes": 251, "total_data_size": 69933, "memory_usage": 75744, "flush_reason": "Manual Compaction"}
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236020221, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 45661, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72293, "largest_seqno": 72562, "table_properties": {"data_size": 43777, "index_size": 112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4856, "raw_average_key_size": 18, "raw_value_size": 40126, "raw_average_value_size": 151, "num_data_blocks": 5, "num_entries": 265, "num_filter_entries": 265, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008234, "oldest_key_time": 1765008234, "file_creation_time": 1765008236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 3379 microseconds, and 1026 cpu microseconds.
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.020455) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 45661 bytes OK
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.020574) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.022055) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.022085) EVENT_LOG_v1 {"time_micros": 1765008236022074, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.022110) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 67848, prev total WAL file size 67848, number of live WAL files 2.
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.023382) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(44KB)], [144(13MB)]
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236023711, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 14362291, "oldest_snapshot_seqno": -1}
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 10030 keys, 12479242 bytes, temperature: kUnknown
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236140930, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 12479242, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12414954, "index_size": 38127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 266320, "raw_average_key_size": 26, "raw_value_size": 12239420, "raw_average_value_size": 1220, "num_data_blocks": 1442, "num_entries": 10030, "num_filter_entries": 10030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765008236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.141243) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 12479242 bytes
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.143284) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.5 rd, 106.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.7 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(587.8) write-amplify(273.3) OK, records in: 10540, records dropped: 510 output_compression: NoCompression
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.143306) EVENT_LOG_v1 {"time_micros": 1765008236143296, "job": 92, "event": "compaction_finished", "compaction_time_micros": 117244, "compaction_time_cpu_micros": 58055, "output_level": 6, "num_output_files": 1, "total_output_size": 12479242, "num_input_records": 10540, "num_output_records": 10030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236143449, "job": 92, "event": "table_file_deletion", "file_number": 146}
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236146315, "job": 92, "event": "table_file_deletion", "file_number": 144}
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.023287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.146407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.146445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.146447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.146449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:03:56.146451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:56.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:57 compute-1 ceph-mon[81689]: pgmap v3336: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Dec 06 08:03:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:03:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:57.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:03:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2113418163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:03:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:59 compute-1 ceph-mon[81689]: pgmap v3337: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:03:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3306292249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:03:59 compute-1 nova_compute[226101]: 2025-12-06 08:03:59.444 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:59 compute-1 nova_compute[226101]: 2025-12-06 08:03:59.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:03:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:59.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:00.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:00 compute-1 nova_compute[226101]: 2025-12-06 08:04:00.599 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:01 compute-1 ceph-mon[81689]: pgmap v3338: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:04:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:04:01.683 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:04:01.683 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:04:01.683 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:01.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:04:01.763 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:04:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:02.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:03 compute-1 ceph-mon[81689]: pgmap v3339: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 06 08:04:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:03.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:03 compute-1 sudo[301687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:03 compute-1 sudo[301687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:03 compute-1 sudo[301687]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:03 compute-1 sudo[301712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:04:03 compute-1 sudo[301712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:03 compute-1 sudo[301712]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:04 compute-1 sudo[301737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:04 compute-1 sudo[301737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:04 compute-1 sudo[301737]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:04 compute-1 sudo[301762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:04:04 compute-1 sudo[301762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:04 compute-1 nova_compute[226101]: 2025-12-06 08:04:04.448 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:04.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:04 compute-1 sudo[301762]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:04 compute-1 ceph-mon[81689]: pgmap v3340: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.6 MiB/s wr, 36 op/s
Dec 06 08:04:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:05 compute-1 nova_compute[226101]: 2025-12-06 08:04:05.601 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:05.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:04:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:04:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:06.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:06 compute-1 ceph-mon[81689]: pgmap v3341: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 94 op/s
Dec 06 08:04:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:07.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:08.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:09 compute-1 nova_compute[226101]: 2025-12-06 08:04:09.452 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:09 compute-1 nova_compute[226101]: 2025-12-06 08:04:09.608 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:09 compute-1 nova_compute[226101]: 2025-12-06 08:04:09.608 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:04:09 compute-1 ceph-mon[81689]: pgmap v3342: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:04:09 compute-1 nova_compute[226101]: 2025-12-06 08:04:09.647 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:04:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:09.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:10 compute-1 podman[301819]: 2025-12-06 08:04:10.089614087 +0000 UTC m=+0.057001347 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 08:04:10 compute-1 podman[301818]: 2025-12-06 08:04:10.093874181 +0000 UTC m=+0.065392022 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 06 08:04:10 compute-1 podman[301820]: 2025-12-06 08:04:10.122195359 +0000 UTC m=+0.085928801 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
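
Each podman health_status line above is podman running the healthcheck configured in config_data ('test': '/openstack/healthcheck', mounted read-only into the container) on its timer and recording health_status=healthy with a zero failing streak. A sketch of triggering the same check by hand, using the real `podman healthcheck run` subcommand and the container names taken from these log lines; it assumes the podman CLI is on PATH and the caller is allowed to run it.

    import subprocess

    def healthcheck(container: str) -> bool:
        """Return True if `podman healthcheck run` reports the container healthy."""
        # `podman healthcheck run NAME` exits 0 when the configured test passes.
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    for name in ("ovn_metadata_agent", "multipathd", "ovn_controller"):
        print(name, "healthy" if healthcheck(name) else "unhealthy")
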
Dec 06 08:04:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:10.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:10 compute-1 nova_compute[226101]: 2025-12-06 08:04:10.604 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:10 compute-1 ceph-mon[81689]: pgmap v3343: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:04:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2412057235' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:04:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2412057235' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:04:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:04:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:04:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:04:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:04:10 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
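
Every ceph-mon audit line in this burst records a mon command dispatched as a JSON document, e.g. cmd=[{"prefix":"df", "format":"json"}] from entity='client.openstack'. A minimal sketch of producing the same dispatch with the python-rados binding, assuming a readable /etc/ceph/ceph.conf and a keyring for client.openstack on the host:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        # Equivalent of the audited cmd=[{"prefix":"df", "format":"json"}] above.
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b""
        )
        if ret == 0:
            df = json.loads(outbuf)
            print(df["stats"]["total_avail_bytes"])
    finally:
        cluster.shutdown()
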
Dec 06 08:04:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:11.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:12.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:12 compute-1 nova_compute[226101]: 2025-12-06 08:04:12.622 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:13 compute-1 ceph-mon[81689]: pgmap v3344: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:04:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:13.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4174871040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:14 compute-1 nova_compute[226101]: 2025-12-06 08:04:14.456 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:14.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:15 compute-1 ceph-mon[81689]: pgmap v3345: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 64 op/s
Dec 06 08:04:15 compute-1 nova_compute[226101]: 2025-12-06 08:04:15.607 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:15.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:16.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:17 compute-1 sudo[301884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:17 compute-1 sudo[301884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:17 compute-1 sudo[301884]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:17 compute-1 ceph-mon[81689]: pgmap v3346: 305 pgs: 305 active+clean; 211 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 133 op/s
Dec 06 08:04:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:17 compute-1 sudo[301909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:04:17 compute-1 sudo[301909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:17 compute-1 sudo[301909]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:17.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:18.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:19 compute-1 ceph-mon[81689]: pgmap v3347: 305 pgs: 305 active+clean; 243 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 527 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 06 08:04:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/854582321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:04:19 compute-1 nova_compute[226101]: 2025-12-06 08:04:19.460 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:19.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3257155745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:04:20 compute-1 ceph-mon[81689]: pgmap v3348: 305 pgs: 305 active+clean; 243 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Dec 06 08:04:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:20.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:20 compute-1 nova_compute[226101]: 2025-12-06 08:04:20.610 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:21.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:22.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:23 compute-1 ceph-mon[81689]: pgmap v3349: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:04:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:23.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:24 compute-1 nova_compute[226101]: 2025-12-06 08:04:24.465 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:24.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:24 compute-1 nova_compute[226101]: 2025-12-06 08:04:24.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:24 compute-1 nova_compute[226101]: 2025-12-06 08:04:24.591 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:04:24 compute-1 ceph-mon[81689]: pgmap v3350: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:04:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:25 compute-1 nova_compute[226101]: 2025-12-06 08:04:25.611 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:25.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:25 compute-1 nova_compute[226101]: 2025-12-06 08:04:25.944 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:26.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:26 compute-1 ceph-mon[81689]: pgmap v3351: 305 pgs: 305 active+clean; 222 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Dec 06 08:04:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:27.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/848864160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:27 compute-1 sshd-session[301935]: Received disconnect from 154.219.116.39 port 44894:11: Bye Bye [preauth]
Dec 06 08:04:27 compute-1 sshd-session[301935]: Disconnected from authenticating user root 154.219.116.39 port 44894 [preauth]
Dec 06 08:04:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:28.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:28 compute-1 ceph-mon[81689]: pgmap v3352: 305 pgs: 305 active+clean; 201 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Dec 06 08:04:29 compute-1 nova_compute[226101]: 2025-12-06 08:04:29.511 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:29 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:04:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:29.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:30.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:30 compute-1 nova_compute[226101]: 2025-12-06 08:04:30.641 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:30 compute-1 ceph-mon[81689]: pgmap v3353: 305 pgs: 305 active+clean; 201 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 28 KiB/s wr, 95 op/s
Dec 06 08:04:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:31.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:32.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:32 compute-1 nova_compute[226101]: 2025-12-06 08:04:32.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:32 compute-1 nova_compute[226101]: 2025-12-06 08:04:32.653 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:32 compute-1 nova_compute[226101]: 2025-12-06 08:04:32.653 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:32 compute-1 nova_compute[226101]: 2025-12-06 08:04:32.653 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:32 compute-1 nova_compute[226101]: 2025-12-06 08:04:32.654 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:04:32 compute-1 nova_compute[226101]: 2025-12-06 08:04:32.654 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:04:32 compute-1 ceph-mon[81689]: pgmap v3354: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 109 op/s
Dec 06 08:04:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:04:33 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2893058911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.082 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.239 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.240 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4284MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.240 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.240 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.293 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.293 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.308 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:04:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:04:33 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1055536290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.721 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.727 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.746 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.769 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:04:33 compute-1 nova_compute[226101]: 2025-12-06 08:04:33.770 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
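
The 08:04:32-33 block above is one pass of nova's update_available_resource periodic task: under the "compute_resources" lock it shells out twice to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (0.428 s and 0.413 s) to size the RBD-backed disk pool that yields free_disk=20.96 GB in the hypervisor resource view. A standalone sketch of that shell-out, assuming the ceph CLI and the client.openstack keyring are present on the host:

    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", client_id="openstack") -> dict:
        """Run the same command the log shows and parse its JSON output."""
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf]
        )
        return json.loads(out)

    stats = ceph_df()["stats"]
    free_gib = stats["total_avail_bytes"] / 1024**3
    print(f"free_disk={free_gib:.2f}GiB")  # cf. free_disk=20.96... in the log
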
Dec 06 08:04:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:33.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2893058911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1055536290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:34 compute-1 nova_compute[226101]: 2025-12-06 08:04:34.517 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:34.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:34 compute-1 ceph-mon[81689]: pgmap v3355: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Dec 06 08:04:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:35 compute-1 nova_compute[226101]: 2025-12-06 08:04:35.643 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:35.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:36.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:36 compute-1 ceph-mon[81689]: pgmap v3356: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Dec 06 08:04:37 compute-1 nova_compute[226101]: 2025-12-06 08:04:37.770 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:37 compute-1 nova_compute[226101]: 2025-12-06 08:04:37.771 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:04:37 compute-1 nova_compute[226101]: 2025-12-06 08:04:37.771 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:04:37 compute-1 nova_compute[226101]: 2025-12-06 08:04:37.786 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:04:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:37.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1616088116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:37 compute-1 sshd-session[301937]: Connection closed by 165.154.55.146 port 54850 [preauth]
Dec 06 08:04:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:38.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:38 compute-1 ceph-mon[81689]: pgmap v3357: 305 pgs: 305 active+clean; 178 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 877 KiB/s wr, 53 op/s
Dec 06 08:04:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2876630916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:39 compute-1 sshd[164848]: Timeout before authentication for connection from 14.103.118.136 to 38.102.83.204, pid = 300658
Dec 06 08:04:39 compute-1 nova_compute[226101]: 2025-12-06 08:04:39.521 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:39.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:40.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:40 compute-1 nova_compute[226101]: 2025-12-06 08:04:40.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:40 compute-1 nova_compute[226101]: 2025-12-06 08:04:40.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:04:40 compute-1 nova_compute[226101]: 2025-12-06 08:04:40.646 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:40 compute-1 ceph-mon[81689]: pgmap v3358: 305 pgs: 305 active+clean; 178 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 441 KiB/s rd, 877 KiB/s wr, 28 op/s
Dec 06 08:04:41 compute-1 podman[301986]: 2025-12-06 08:04:41.091360297 +0000 UTC m=+0.069718517 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 06 08:04:41 compute-1 podman[301985]: 2025-12-06 08:04:41.092075636 +0000 UTC m=+0.072214925 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 08:04:41 compute-1 podman[301987]: 2025-12-06 08:04:41.143263717 +0000 UTC m=+0.109224545 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:04:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:41.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:42.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:42 compute-1 nova_compute[226101]: 2025-12-06 08:04:42.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:43.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:44 compute-1 ceph-mon[81689]: pgmap v3359: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Dec 06 08:04:44 compute-1 nova_compute[226101]: 2025-12-06 08:04:44.526 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:44.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:45 compute-1 ceph-mon[81689]: pgmap v3360: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:04:45 compute-1 nova_compute[226101]: 2025-12-06 08:04:45.705 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:45.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/826595747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:46.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:47 compute-1 ceph-mon[81689]: pgmap v3361: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:04:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2339609487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:47 compute-1 nova_compute[226101]: 2025-12-06 08:04:47.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:47.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:48.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:48 compute-1 nova_compute[226101]: 2025-12-06 08:04:48.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:49 compute-1 ceph-mon[81689]: pgmap v3362: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:04:49 compute-1 nova_compute[226101]: 2025-12-06 08:04:49.528 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:49.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4333110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:50.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:50 compute-1 nova_compute[226101]: 2025-12-06 08:04:50.736 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:51 compute-1 ceph-mon[81689]: pgmap v3363: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 241 KiB/s rd, 1.3 MiB/s wr, 48 op/s
Dec 06 08:04:51 compute-1 nova_compute[226101]: 2025-12-06 08:04:51.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:51.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:52.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:52 compute-1 nova_compute[226101]: 2025-12-06 08:04:52.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:52 compute-1 sshd-session[302048]: Received disconnect from 124.18.141.70 port 59634:11: Bye Bye [preauth]
Dec 06 08:04:52 compute-1 sshd-session[302048]: Disconnected from authenticating user root 124.18.141.70 port 59634 [preauth]
Dec 06 08:04:53 compute-1 ceph-mon[81689]: pgmap v3364: 305 pgs: 305 active+clean; 207 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 247 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Dec 06 08:04:53 compute-1 sshd-session[302050]: Received disconnect from 136.112.8.45 port 46636:11: Bye Bye [preauth]
Dec 06 08:04:53 compute-1 sshd-session[302050]: Disconnected from authenticating user root 136.112.8.45 port 46636 [preauth]
Dec 06 08:04:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:53.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:54 compute-1 nova_compute[226101]: 2025-12-06 08:04:54.532 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:54.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:55 compute-1 ceph-mon[81689]: pgmap v3365: 305 pgs: 305 active+clean; 207 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 KiB/s rd, 344 KiB/s wr, 12 op/s
Dec 06 08:04:55 compute-1 nova_compute[226101]: 2025-12-06 08:04:55.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:55 compute-1 nova_compute[226101]: 2025-12-06 08:04:55.738 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:55.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:04:56.167 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:04:56 compute-1 nova_compute[226101]: 2025-12-06 08:04:56.168 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:04:56.169 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:04:56 compute-1 ceph-mon[81689]: pgmap v3366: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:04:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:56.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3888507031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:04:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4063916969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:04:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:57.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:58 compute-1 ceph-mon[81689]: pgmap v3367: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:04:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:58.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:59 compute-1 nova_compute[226101]: 2025-12-06 08:04:59.535 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:04:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:59.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:00.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:00 compute-1 ceph-mon[81689]: pgmap v3368: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:05:00 compute-1 nova_compute[226101]: 2025-12-06 08:05:00.785 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:01 compute-1 ovn_controller[130279]: 2025-12-06T08:05:01Z|00764|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 08:05:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:01.684 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:01.684 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:01.684 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:01.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:02.171 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:02.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:02 compute-1 ceph-mon[81689]: pgmap v3369: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 607 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Dec 06 08:05:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:03.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:04 compute-1 nova_compute[226101]: 2025-12-06 08:05:04.540 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:04.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:04 compute-1 ceph-mon[81689]: pgmap v3370: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 601 KiB/s rd, 1.5 MiB/s wr, 49 op/s
Dec 06 08:05:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:05 compute-1 nova_compute[226101]: 2025-12-06 08:05:05.787 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:05.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:06.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:07 compute-1 ceph-mon[81689]: pgmap v3371: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 94 op/s
Dec 06 08:05:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:07.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4284473718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:08.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:09 compute-1 ceph-mon[81689]: pgmap v3372: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Dec 06 08:05:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2107243988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:05:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2107243988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:05:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1560372794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:09 compute-1 nova_compute[226101]: 2025-12-06 08:05:09.567 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:09.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:10.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:10 compute-1 nova_compute[226101]: 2025-12-06 08:05:10.789 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:11 compute-1 ceph-mon[81689]: pgmap v3373: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Dec 06 08:05:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:11.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:12 compute-1 podman[302053]: 2025-12-06 08:05:12.08430568 +0000 UTC m=+0.067072137 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:05:12 compute-1 podman[302052]: 2025-12-06 08:05:12.087658819 +0000 UTC m=+0.073989301 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 06 08:05:12 compute-1 podman[302054]: 2025-12-06 08:05:12.133938598 +0000 UTC m=+0.106320576 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:05:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:12.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:13 compute-1 ceph-mon[81689]: pgmap v3374: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 80 op/s
Dec 06 08:05:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:13.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:14 compute-1 nova_compute[226101]: 2025-12-06 08:05:14.609 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:14.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:15 compute-1 ceph-mon[81689]: pgmap v3375: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 46 op/s
Dec 06 08:05:15 compute-1 nova_compute[226101]: 2025-12-06 08:05:15.791 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:15.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:16.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:17 compute-1 ceph-mon[81689]: pgmap v3376: 305 pgs: 305 active+clean; 228 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 114 op/s
Dec 06 08:05:17 compute-1 sudo[302115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:17 compute-1 sudo[302115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:17 compute-1 sudo[302115]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:17 compute-1 sudo[302140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:05:17 compute-1 sudo[302140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:17 compute-1 sudo[302140]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:17 compute-1 sudo[302165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:17 compute-1 sudo[302165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:17 compute-1 sudo[302165]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:17 compute-1 sudo[302190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:05:17 compute-1 sudo[302190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:17.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:18 compute-1 sudo[302190]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:18.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:19 compute-1 ceph-mon[81689]: pgmap v3377: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:05:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:05:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:05:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:05:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:05:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:05:19 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:05:19 compute-1 nova_compute[226101]: 2025-12-06 08:05:19.613 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:19.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:20 compute-1 ceph-mon[81689]: pgmap v3378: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:05:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:20.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:20 compute-1 nova_compute[226101]: 2025-12-06 08:05:20.792 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:21.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:22 compute-1 sshd-session[302246]: Received disconnect from 186.87.166.141 port 53646:11: Bye Bye [preauth]
Dec 06 08:05:22 compute-1 sshd-session[302246]: Disconnected from authenticating user root 186.87.166.141 port 53646 [preauth]
Dec 06 08:05:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:22.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:22 compute-1 ceph-mon[81689]: pgmap v3379: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 06 08:05:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:23.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:24 compute-1 sudo[302248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:24 compute-1 sudo[302248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:24 compute-1 sudo[302248]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:24 compute-1 sudo[302273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:05:24 compute-1 sudo[302273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:24 compute-1 sudo[302273]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:24.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:24 compute-1 nova_compute[226101]: 2025-12-06 08:05:24.656 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:25 compute-1 ceph-mon[81689]: pgmap v3380: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:05:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:05:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:05:25 compute-1 nova_compute[226101]: 2025-12-06 08:05:25.794 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:25.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:26 compute-1 ceph-mon[81689]: pgmap v3381: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 06 08:05:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:26.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:27 compute-1 sshd-session[302298]: Connection closed by authenticating user root 47.237.163.130 port 56164 [preauth]
Dec 06 08:05:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:27.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:28 compute-1 ceph-mon[81689]: pgmap v3382: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 123 KiB/s rd, 257 KiB/s wr, 26 op/s
Dec 06 08:05:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:28.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:29 compute-1 nova_compute[226101]: 2025-12-06 08:05:29.693 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:29.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:30.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:30 compute-1 nova_compute[226101]: 2025-12-06 08:05:30.846 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:31 compute-1 ceph-mon[81689]: pgmap v3383: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 13 KiB/s wr, 1 op/s
Dec 06 08:05:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:31.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:32 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2427222586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:32 compute-1 nova_compute[226101]: 2025-12-06 08:05:32.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:32 compute-1 nova_compute[226101]: 2025-12-06 08:05:32.624 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:32 compute-1 nova_compute[226101]: 2025-12-06 08:05:32.624 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:32 compute-1 nova_compute[226101]: 2025-12-06 08:05:32.625 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:32 compute-1 nova_compute[226101]: 2025-12-06 08:05:32.625 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:05:32 compute-1 nova_compute[226101]: 2025-12-06 08:05:32.626 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:32.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:05:33 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4130761412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.125 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:33 compute-1 ceph-mon[81689]: pgmap v3384: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 06 08:05:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4130761412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.264 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.265 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4288MB free_disk=20.94271469116211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.265 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.266 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.329 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.329 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.380 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:05:33 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/310815550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.794 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.804 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.828 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.829 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:05:33 compute-1 nova_compute[226101]: 2025-12-06 08:05:33.830 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:33.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/310815550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:34.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:34 compute-1 nova_compute[226101]: 2025-12-06 08:05:34.695 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:35 compute-1 ceph-mon[81689]: pgmap v3385: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 06 08:05:35 compute-1 nova_compute[226101]: 2025-12-06 08:05:35.849 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:35.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:36.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.231 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "51170699-7556-40a4-95f3-b2d8b499f1b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.231 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.251 226109 DEBUG nova.compute.manager [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:05:37 compute-1 ceph-mon[81689]: pgmap v3386: 305 pgs: 305 active+clean; 224 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 733 KiB/s wr, 16 op/s
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.330 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.330 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.338 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.338 226109 INFO nova.compute.claims [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Claim successful on node compute-1.ctlplane.example.com
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.443 226109 DEBUG nova.scheduler.client.report [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.465 226109 DEBUG nova.scheduler.client.report [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.465 226109 DEBUG nova.compute.provider_tree [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.490 226109 DEBUG nova.scheduler.client.report [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.515 226109 DEBUG nova.scheduler.client.report [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.557 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.831 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.832 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.832 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.856 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 08:05:37 compute-1 nova_compute[226101]: 2025-12-06 08:05:37.857 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:05:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:37.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
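The anonymous "HEAD / HTTP/1.0" entries recurring every second or two from the three 192.168.122.x hosts are consistent with load-balancer health probes against radosgw. A hedged sketch of such a probe; the port is an assumption, since the log does not show which one beast listens on, and http.client speaks HTTP/1.1 rather than the probes' HTTP/1.0:

    import http.client

    # Hypothetical health probe like the ones logged above; 8080 is an
    # assumed beast port, not taken from the log.
    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # radosgw answers 200 with an empty body
    conn.close()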
Dec 06 08:05:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:05:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3336857505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.028 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
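The 0.472 s round trip above is nova shelling out to ceph df to learn pool capacity for its DISK_GB inventory. A standalone sketch of the same query; JSON field names vary slightly across Ceph releases, hence the defensive .get():

    import json
    import subprocess

    # Same pool-usage query nova runs above, issued directly.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)
    print(df['stats']['total_bytes'])                     # cluster capacity
    for pool in df['pools']:
        print(pool['name'], pool['stats'].get('stored'))  # per-pool usage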
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.035 226109 DEBUG nova.compute.provider_tree [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.063 226109 DEBUG nova.scheduler.client.report [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.094 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
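The Acquiring/acquired/released triple around the 0.764 s hold comes from oslo.concurrency's lock wrapper. A minimal sketch of the pattern; the function body is a stand-in, not nova's code:

    from oslo_concurrency import lockutils

    # Serializes resource accounting; emits the same DEBUG lines as above
    # ('Lock "compute_resources" acquired/"released" by ...') when
    # oslo.concurrency logging is at DEBUG level.
    @lockutils.synchronized('compute_resources')
    def instance_claim():
        pass  # claim CPU/RAM/disk against the tracked totals here

    instance_claim()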
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.094 226109 DEBUG nova.compute.manager [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:05:38 compute-1 sshd-session[302345]: Received disconnect from 186.96.151.198 port 41900:11: Bye Bye [preauth]
Dec 06 08:05:38 compute-1 sshd-session[302345]: Disconnected from authenticating user root 186.96.151.198 port 41900 [preauth]
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.177 226109 DEBUG nova.compute.manager [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.178 226109 DEBUG nova.network.neutron [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.202 226109 INFO nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.218 226109 DEBUG nova.compute.manager [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.312 226109 DEBUG nova.compute.manager [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.314 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.314 226109 INFO nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Creating image(s)
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.346 226109 DEBUG nova.storage.rbd_utils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 51170699-7556-40a4-95f3-b2d8b499f1b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.373 226109 DEBUG nova.storage.rbd_utils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 51170699-7556-40a4-95f3-b2d8b499f1b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.395 226109 DEBUG nova.storage.rbd_utils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 51170699-7556-40a4-95f3-b2d8b499f1b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.398 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.478 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
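The qemu-img probe above is wrapped in oslo_concurrency.prlimit so a malformed image cannot exhaust memory or spin the CPU while being inspected. A roughly equivalent call through the library API; the limits mirror the logged --as=1073741824 --cpu=30:

    from oslo_concurrency import processutils

    # Bounded image inspection: 1 GiB address space, 30 s CPU time.
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,
                                           cpu_time=30))
    print(out)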
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.479 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.480 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.480 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.502 226109 DEBUG nova.storage.rbd_utils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 51170699-7556-40a4-95f3-b2d8b499f1b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.506 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 51170699-7556-40a4-95f3-b2d8b499f1b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3552322534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/107381487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3336857505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3964016892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:38 compute-1 nova_compute[226101]: 2025-12-06 08:05:38.535 226109 DEBUG nova.policy [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed2d17026504d70b893923a85cece4d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
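The "Policy check ... failed" line is expected for an unprivileged tenant: with only the reader and member roles, an admin-only rule evaluates to False and the build simply proceeds without external-network attach rights. A sketch of that check in isolation; the rule body here is an assumption standing in for nova's default, which is expressed slightly differently:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.ConfigOpts())
    # Assumed admin-only rule standing in for nova's default.
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'],
             'user_id': '2ed2d17026504d70b893923a85cece4d',
             'project_id': 'fd8e24e430c64364ace789d88a68ba5f'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False, matching the logged failure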
Dec 06 08:05:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:38.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:39 compute-1 nova_compute[226101]: 2025-12-06 08:05:39.415 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 51170699-7556-40a4-95f3-b2d8b499f1b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.909s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:39 compute-1 nova_compute[226101]: 2025-12-06 08:05:39.491 226109 DEBUG nova.storage.rbd_utils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] resizing rbd image 51170699-7556-40a4-95f3-b2d8b499f1b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
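The import at 08:05:38 plus the resize above are how the RBD backend materializes the root disk: push the cached base image into the vms pool, then grow it to the flavor's root_gb (1 GiB = 1073741824 bytes, exactly the logged target). A hypothetical CLI replay; nova itself performs the resize through librbd rather than the rbd binary:

    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            '890368a5690a3dbdbb6650dcb9de9e2c9dc5acef')
    disk = '51170699-7556-40a4-95f3-b2d8b499f1b0_disk'
    ceph_args = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Step 1: import the flat base image as a format-2 RBD image.
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, disk,
                           '--image-format=2'] + ceph_args)
    # Step 2: grow it to the flavor root disk (1 GiB).
    assert 1 * 1024 ** 3 == 1073741824
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', disk,
                           '--size', '1G'] + ceph_args)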
Dec 06 08:05:39 compute-1 ceph-mon[81689]: pgmap v3387: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:05:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3101918147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:39 compute-1 nova_compute[226101]: 2025-12-06 08:05:39.725 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:39 compute-1 nova_compute[226101]: 2025-12-06 08:05:39.733 226109 DEBUG nova.objects.instance [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid 51170699-7556-40a4-95f3-b2d8b499f1b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:05:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:39.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:39 compute-1 nova_compute[226101]: 2025-12-06 08:05:39.903 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:05:39 compute-1 nova_compute[226101]: 2025-12-06 08:05:39.904 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Ensure instance console log exists: /var/lib/nova/instances/51170699-7556-40a4-95f3-b2d8b499f1b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:05:39 compute-1 nova_compute[226101]: 2025-12-06 08:05:39.904 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:39 compute-1 nova_compute[226101]: 2025-12-06 08:05:39.905 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:39 compute-1 nova_compute[226101]: 2025-12-06 08:05:39.905 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:40.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:40 compute-1 ceph-mon[81689]: pgmap v3388: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:05:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:40.732 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:05:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:40.734 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:05:40 compute-1 nova_compute[226101]: 2025-12-06 08:05:40.772 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:40 compute-1 nova_compute[226101]: 2025-12-06 08:05:40.850 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:41 compute-1 sshd-session[302516]: Received disconnect from 101.100.194.199 port 54530:11: Bye Bye [preauth]
Dec 06 08:05:41 compute-1 sshd-session[302516]: Disconnected from authenticating user root 101.100.194.199 port 54530 [preauth]
Dec 06 08:05:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:41 compute-1 nova_compute[226101]: 2025-12-06 08:05:41.195 226109 DEBUG nova.network.neutron [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Successfully created port: 174801fe-dd50-44bc-a386-88a83f955c86 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:05:41 compute-1 nova_compute[226101]: 2025-12-06 08:05:41.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:41 compute-1 nova_compute[226101]: 2025-12-06 08:05:41.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:05:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:41.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:42.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:42 compute-1 ceph-mon[81689]: pgmap v3389: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Dec 06 08:05:43 compute-1 podman[302537]: 2025-12-06 08:05:43.11307554 +0000 UTC m=+0.079743046 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 08:05:43 compute-1 podman[302536]: 2025-12-06 08:05:43.123706515 +0000 UTC m=+0.098164109 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:05:43 compute-1 podman[302538]: 2025-12-06 08:05:43.156087172 +0000 UTC m=+0.109525764 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 08:05:43 compute-1 nova_compute[226101]: 2025-12-06 08:05:43.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:43 compute-1 nova_compute[226101]: 2025-12-06 08:05:43.824 226109 DEBUG nova.network.neutron [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Successfully updated port: 174801fe-dd50-44bc-a386-88a83f955c86 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:05:43 compute-1 nova_compute[226101]: 2025-12-06 08:05:43.854 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:05:43 compute-1 nova_compute[226101]: 2025-12-06 08:05:43.854 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:05:43 compute-1 nova_compute[226101]: 2025-12-06 08:05:43.855 226109 DEBUG nova.network.neutron [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:05:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:43.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:43 compute-1 nova_compute[226101]: 2025-12-06 08:05:43.956 226109 DEBUG nova.compute.manager [req-7a1cf3b7-1296-42ff-a400-bf5c977eec30 req-91aa2602-a9c2-4b66-9b47-cc18631edb88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Received event network-changed-174801fe-dd50-44bc-a386-88a83f955c86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:43 compute-1 nova_compute[226101]: 2025-12-06 08:05:43.956 226109 DEBUG nova.compute.manager [req-7a1cf3b7-1296-42ff-a400-bf5c977eec30 req-91aa2602-a9c2-4b66-9b47-cc18631edb88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Refreshing instance network info cache due to event network-changed-174801fe-dd50-44bc-a386-88a83f955c86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:05:43 compute-1 nova_compute[226101]: 2025-12-06 08:05:43.956 226109 DEBUG oslo_concurrency.lockutils [req-7a1cf3b7-1296-42ff-a400-bf5c977eec30 req-91aa2602-a9c2-4b66-9b47-cc18631edb88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:05:44 compute-1 nova_compute[226101]: 2025-12-06 08:05:44.333 226109 DEBUG nova.network.neutron [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:05:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:44.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:44 compute-1 nova_compute[226101]: 2025-12-06 08:05:44.764 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.262 226109 DEBUG nova.network.neutron [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Updating instance_info_cache with network_info: [{"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.290 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.290 226109 DEBUG nova.compute.manager [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Instance network_info: |[{"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
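The network_info blob above carries everything the virt driver needs for the VIF. A small sketch that extracts the operative fields; the literal is trimmed to just what the sketch reads:

    import json

    # Subset of the network_info logged above.
    network_info = json.loads('''[{"id": "174801fe-dd50-44bc-a386-88a83f955c86",
      "address": "fa:16:3e:9d:4b:07", "devname": "tap174801fe-dd",
      "network": {"meta": {"mtu": 1442},
                  "subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.6"}]}]}}]''')

    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['address'], ips,
              'mtu', vif['network']['meta']['mtu'])
    # -> tap174801fe-dd fa:16:3e:9d:4b:07 ['10.100.0.6'] mtu 1442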
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.291 226109 DEBUG oslo_concurrency.lockutils [req-7a1cf3b7-1296-42ff-a400-bf5c977eec30 req-91aa2602-a9c2-4b66-9b47-cc18631edb88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.291 226109 DEBUG nova.network.neutron [req-7a1cf3b7-1296-42ff-a400-bf5c977eec30 req-91aa2602-a9c2-4b66-9b47-cc18631edb88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Refreshing network info cache for port 174801fe-dd50-44bc-a386-88a83f955c86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.293 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Start _get_guest_xml network_info=[{"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.297 226109 WARNING nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.335 226109 DEBUG nova.virt.libvirt.host [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.336 226109 DEBUG nova.virt.libvirt.host [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.341 226109 DEBUG nova.virt.libvirt.host [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.342 226109 DEBUG nova.virt.libvirt.host [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.343 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.343 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.344 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.344 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.344 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.344 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.344 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.344 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.345 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.345 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.345 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.345 226109 DEBUG nova.virt.hardware [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
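The topology lines above reduce to a constraint search: every (sockets, cores, threads) triple whose product equals the vCPU count and which fits under the limits (65536 apiece here, i.e. effectively unconstrained). For one vCPU only 1:1:1 qualifies, hence "Got 1 possible topologies". A sketch of that enumeration; nova's real code additionally orders the results by preference:

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) whose product is vcpus
        # and which stays within the per-dimension limits.
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if (s * c * t == vcpus and s <= max_sockets
                    and c <= max_cores and t <= max_threads):
                yield s, c, t

    print(list(possible_topologies(1)))  # -> [(1, 1, 1)]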
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.348 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:45 compute-1 ceph-mon[81689]: pgmap v3390: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Dec 06 08:05:45 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:45.736 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:05:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/825215880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.759 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.798 226109 DEBUG nova.storage.rbd_utils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 51170699-7556-40a4-95f3-b2d8b499f1b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.802 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:45 compute-1 nova_compute[226101]: 2025-12-06 08:05:45.893 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:45.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:05:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2786519022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.239 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.241 226109 DEBUG nova.virt.libvirt.vif [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:05:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1191883859',display_name='tempest-TestNetworkAdvancedServerOps-server-1191883859',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1191883859',id=190,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLPh67iVlMNBwVIFOg+K6SNA2aP27r1EHLDEGSOfvk9eDH8d5wwuJ4kOqTmKVBrgDAV0izmfVrlh1RNi6QP+tXbAf20JyAnTGYkLX2hIIzOE8kvF4A0s67F/yJUxPQuW5A==',key_name='tempest-TestNetworkAdvancedServerOps-633807334',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-10lcag3z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:05:38Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=51170699-7556-40a4-95f3-b2d8b499f1b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.242 226109 DEBUG nova.network.os_vif_util [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.243 226109 DEBUG nova.network.os_vif_util [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4b:07,bridge_name='br-int',has_traffic_filtering=True,id=174801fe-dd50-44bc-a386-88a83f955c86,network=Network(25fac239-4a45-4b5a-b8cc-e7aa2e8096f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap174801fe-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.245 226109 DEBUG nova.objects.instance [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 51170699-7556-40a4-95f3-b2d8b499f1b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.266 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:05:46 compute-1 nova_compute[226101]:   <uuid>51170699-7556-40a4-95f3-b2d8b499f1b0</uuid>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   <name>instance-000000be</name>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1191883859</nova:name>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:05:45</nova:creationTime>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <nova:port uuid="174801fe-dd50-44bc-a386-88a83f955c86">
Dec 06 08:05:46 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <system>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <entry name="serial">51170699-7556-40a4-95f3-b2d8b499f1b0</entry>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <entry name="uuid">51170699-7556-40a4-95f3-b2d8b499f1b0</entry>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     </system>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   <os>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   </os>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   <features>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   </features>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/51170699-7556-40a4-95f3-b2d8b499f1b0_disk">
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       </source>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/51170699-7556-40a4-95f3-b2d8b499f1b0_disk.config">
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       </source>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:05:46 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:9d:4b:07"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <target dev="tap174801fe-dd"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/51170699-7556-40a4-95f3-b2d8b499f1b0/console.log" append="off"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <video>
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     </video>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:05:46 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:05:46 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:05:46 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:05:46 compute-1 nova_compute[226101]: </domain>
Dec 06 08:05:46 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
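
The domain XML dumped above embeds Nova's own metadata under the http://openstack.org/xmlns/libvirt/nova/1.1 namespace. A standard-library sketch for pulling those fields back out of a saved dump (assumes the XML between <domain> and </domain> was written to domain.xml):

# Extract the Nova metadata block from a dumped libvirt domain XML.
import xml.etree.ElementTree as ET

NOVA_NS = {'nova': 'http://openstack.org/xmlns/libvirt/nova/1.1'}

root = ET.parse('domain.xml').getroot()
meta = root.find('metadata/nova:instance', NOVA_NS)
print(meta.findtext('nova:name', namespaces=NOVA_NS))    # display name
flavor = meta.find('nova:flavor', NOVA_NS)
print(flavor.get('name'),
      flavor.findtext('nova:vcpus', namespaces=NOVA_NS))  # m1.nano 1
for port in meta.findall('nova:ports/nova:port', NOVA_NS):
    for ip in port.findall('nova:ip', NOVA_NS):
        print(port.get('uuid'), ip.get('address'))        # port -> 10.100.0.6
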
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.267 226109 DEBUG nova.compute.manager [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Preparing to wait for external event network-vif-plugged-174801fe-dd50-44bc-a386-88a83f955c86 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.268 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.268 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.268 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
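
prepare_for_instance_event registers a waiter under the per-instance "-events" lock before the VIF is plugged, so the later network-vif-plugged notification from Neutron cannot race past the waiter. Stripped to its essentials, the pattern is the following (hedged sketch; Nova's real implementation is eventlet-based, threading.Event is used here only for illustration):

# Sketch of the prepare-then-wait pattern behind the external event above.
import threading

events = {}
events_lock = threading.Lock()

def prepare_for_instance_event(instance_uuid, name, tag):
    key = (instance_uuid, name, tag)
    with events_lock:                        # "Acquiring lock ...-events"
        return events.setdefault(key, threading.Event())

def emit_event(instance_uuid, name, tag):    # called when Neutron notifies
    key = (instance_uuid, name, tag)
    with events_lock:
        ev = events.pop(key, None)
    if ev:
        ev.set()

# register first, plug the VIF, then block: waiter.wait(timeout=300)
waiter = prepare_for_instance_event(
    '51170699-7556-40a4-95f3-b2d8b499f1b0',
    'network-vif-plugged', '174801fe-dd50-44bc-a386-88a83f955c86')
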
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.269 226109 DEBUG nova.virt.libvirt.vif [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:05:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1191883859',display_name='tempest-TestNetworkAdvancedServerOps-server-1191883859',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1191883859',id=190,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLPh67iVlMNBwVIFOg+K6SNA2aP27r1EHLDEGSOfvk9eDH8d5wwuJ4kOqTmKVBrgDAV0izmfVrlh1RNi6QP+tXbAf20JyAnTGYkLX2hIIzOE8kvF4A0s67F/yJUxPQuW5A==',key_name='tempest-TestNetworkAdvancedServerOps-633807334',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-10lcag3z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:05:38Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=51170699-7556-40a4-95f3-b2d8b499f1b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.270 226109 DEBUG nova.network.os_vif_util [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.270 226109 DEBUG nova.network.os_vif_util [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4b:07,bridge_name='br-int',has_traffic_filtering=True,id=174801fe-dd50-44bc-a386-88a83f955c86,network=Network(25fac239-4a45-4b5a-b8cc-e7aa2e8096f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap174801fe-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.271 226109 DEBUG os_vif [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4b:07,bridge_name='br-int',has_traffic_filtering=True,id=174801fe-dd50-44bc-a386-88a83f955c86,network=Network(25fac239-4a45-4b5a-b8cc-e7aa2e8096f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap174801fe-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
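
os-vif's public surface is small: initialize once, then plug with the converted VIF plus an InstanceInfo. A hedged sketch of the call being logged here (assumes vif is the VIFOpenVSwitch object from the conversion sketch above):

# Minimal os-vif plug call, mirroring the "Plugging vif ..." entry above.
import os_vif
from os_vif.objects.instance_info import InstanceInfo

os_vif.initialize()                 # loads the 'ovs' plugin via stevedore
instance_info = InstanceInfo(
    uuid='51170699-7556-40a4-95f3-b2d8b499f1b0',
    name='instance-000000be')
os_vif.plug(vif, instance_info)     # idempotent; drives the OVSDB txns below
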
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.271 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.271 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.272 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.275 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.275 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap174801fe-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.275 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap174801fe-dd, col_values=(('external_ids', {'iface-id': '174801fe-dd50-44bc-a386-88a83f955c86', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:4b:07', 'vm-uuid': '51170699-7556-40a4-95f3-b2d8b499f1b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
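
Those AddBridgeCommand, AddPortCommand and DbSetCommand entries are the whole OVS side of the plug: ensure br-int exists, add the tap port, and stamp its Interface row with the Neutron port identity. The same transaction can be reproduced with ovsdbapp's Open_vSwitch API (hedged sketch; the database socket path is an assumption for a local switch, the column values are copied from the log):

# Reproducing the logged OVSDB transaction with ovsdbapp.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
    txn.add(api.add_port('br-int', 'tap174801fe-dd', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tap174801fe-dd',
        ('external_ids',
         {'iface-id': '174801fe-dd50-44bc-a386-88a83f955c86',
          'iface-status': 'active',
          'attached-mac': 'fa:16:3e:9d:4b:07',
          'vm-uuid': '51170699-7556-40a4-95f3-b2d8b499f1b0'})))

The iface-id in external_ids is what lets ovn-controller later match this Interface row to the Port_Binding and claim the logical port.
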
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.277 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:46 compute-1 NetworkManager[49031]: <info>  [1765008346.2782] manager: (tap174801fe-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.279 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.284 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.285 226109 INFO os_vif [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4b:07,bridge_name='br-int',has_traffic_filtering=True,id=174801fe-dd50-44bc-a386-88a83f955c86,network=Network(25fac239-4a45-4b5a-b8cc-e7aa2e8096f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap174801fe-dd')
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.334 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.335 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.335 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No VIF found with MAC fa:16:3e:9d:4b:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.336 226109 INFO nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Using config drive
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.366 226109 DEBUG nova.storage.rbd_utils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 51170699-7556-40a4-95f3-b2d8b499f1b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/825215880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2786519022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:46.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:46 compute-1 sshd-session[302599]: Received disconnect from 154.219.116.39 port 35334:11: Bye Bye [preauth]
Dec 06 08:05:46 compute-1 sshd-session[302599]: Disconnected from authenticating user root 154.219.116.39 port 35334 [preauth]
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.937 226109 INFO nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Creating config drive at /var/lib/nova/instances/51170699-7556-40a4-95f3-b2d8b499f1b0/disk.config
Dec 06 08:05:46 compute-1 nova_compute[226101]: 2025-12-06 08:05:46.943 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51170699-7556-40a4-95f3-b2d8b499f1b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw2y2x1dd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:47 compute-1 nova_compute[226101]: 2025-12-06 08:05:47.098 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51170699-7556-40a4-95f3-b2d8b499f1b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw2y2x1dd" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:47 compute-1 nova_compute[226101]: 2025-12-06 08:05:47.137 226109 DEBUG nova.storage.rbd_utils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 51170699-7556-40a4-95f3-b2d8b499f1b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:47 compute-1 nova_compute[226101]: 2025-12-06 08:05:47.142 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51170699-7556-40a4-95f3-b2d8b499f1b0/disk.config 51170699-7556-40a4-95f3-b2d8b499f1b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:47 compute-1 ceph-mon[81689]: pgmap v3391: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Dec 06 08:05:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/641287599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:47 compute-1 nova_compute[226101]: 2025-12-06 08:05:47.837 226109 DEBUG nova.network.neutron [req-7a1cf3b7-1296-42ff-a400-bf5c977eec30 req-91aa2602-a9c2-4b66-9b47-cc18631edb88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Updated VIF entry in instance network info cache for port 174801fe-dd50-44bc-a386-88a83f955c86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:05:47 compute-1 nova_compute[226101]: 2025-12-06 08:05:47.839 226109 DEBUG nova.network.neutron [req-7a1cf3b7-1296-42ff-a400-bf5c977eec30 req-91aa2602-a9c2-4b66-9b47-cc18631edb88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Updating instance_info_cache with network_info: [{"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:05:47 compute-1 nova_compute[226101]: 2025-12-06 08:05:47.871 226109 DEBUG oslo_concurrency.lockutils [req-7a1cf3b7-1296-42ff-a400-bf5c977eec30 req-91aa2602-a9c2-4b66-9b47-cc18631edb88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:05:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:47.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:48 compute-1 nova_compute[226101]: 2025-12-06 08:05:48.189 226109 DEBUG oslo_concurrency.processutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51170699-7556-40a4-95f3-b2d8b499f1b0/disk.config 51170699-7556-40a4-95f3-b2d8b499f1b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:48 compute-1 nova_compute[226101]: 2025-12-06 08:05:48.190 226109 INFO nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Deleting local config drive /var/lib/nova/instances/51170699-7556-40a4-95f3-b2d8b499f1b0/disk.config because it was imported into RBD.
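
The entries from 08:05:46.943 to 08:05:48.190 are the complete config-drive round trip: build the ISO with mkisofs, rbd-import it into the vms pool, then delete the local file. Replayed through oslo.concurrency the way Nova runs it (the argv is copied verbatim from the two CMD lines above; the /tmp/tmpw2y2x1dd staging dir was Nova's temporary directory):

# Config-drive build + RBD import, as executed above via oslo.concurrency.
import os
from oslo_concurrency import processutils

iso = '/var/lib/nova/instances/51170699-7556-40a4-95f3-b2d8b499f1b0/disk.config'
processutils.execute(
    '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
    '-allow-multidot', '-l', '-publisher',
    'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
    '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpw2y2x1dd')
processutils.execute(
    'rbd', 'import', '--pool', 'vms', iso,
    '51170699-7556-40a4-95f3-b2d8b499f1b0_disk.config',
    '--image-format=2', '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf')
os.unlink(iso)  # "Deleting local config drive ... imported into RBD"
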
Dec 06 08:05:48 compute-1 kernel: tap174801fe-dd: entered promiscuous mode
Dec 06 08:05:48 compute-1 NetworkManager[49031]: <info>  [1765008348.2730] manager: (tap174801fe-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Dec 06 08:05:48 compute-1 ovn_controller[130279]: 2025-12-06T08:05:48Z|00765|binding|INFO|Claiming lport 174801fe-dd50-44bc-a386-88a83f955c86 for this chassis.
Dec 06 08:05:48 compute-1 ovn_controller[130279]: 2025-12-06T08:05:48Z|00766|binding|INFO|174801fe-dd50-44bc-a386-88a83f955c86: Claiming fa:16:3e:9d:4b:07 10.100.0.6
Dec 06 08:05:48 compute-1 nova_compute[226101]: 2025-12-06 08:05:48.276 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:48 compute-1 nova_compute[226101]: 2025-12-06 08:05:48.282 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:48 compute-1 systemd-udevd[302732]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:05:48 compute-1 NetworkManager[49031]: <info>  [1765008348.3291] device (tap174801fe-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:05:48 compute-1 NetworkManager[49031]: <info>  [1765008348.3305] device (tap174801fe-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:05:48 compute-1 systemd-machined[190302]: New machine qemu-88-instance-000000be.
Dec 06 08:05:48 compute-1 systemd[1]: Started Virtual Machine qemu-88-instance-000000be.
Dec 06 08:05:48 compute-1 ovn_controller[130279]: 2025-12-06T08:05:48Z|00767|binding|INFO|Setting lport 174801fe-dd50-44bc-a386-88a83f955c86 ovn-installed in OVS
Dec 06 08:05:48 compute-1 nova_compute[226101]: 2025-12-06 08:05:48.415 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:48 compute-1 ovn_controller[130279]: 2025-12-06T08:05:48Z|00768|binding|INFO|Setting lport 174801fe-dd50-44bc-a386-88a83f955c86 up in Southbound
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.442 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:4b:07 10.100.0.6'], port_security=['fa:16:3e:9d:4b:07 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '51170699-7556-40a4-95f3-b2d8b499f1b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c97cc05a-848a-4069-80b3-8e90d28fbcc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5131d9-0bd3-410a-948f-4280459ef61c, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=174801fe-dd50-44bc-a386-88a83f955c86) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.442 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 174801fe-dd50-44bc-a386-88a83f955c86 in datapath 25fac239-4a45-4b5a-b8cc-e7aa2e8096f2 bound to our chassis
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.443 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25fac239-4a45-4b5a-b8cc-e7aa2e8096f2
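
The metadata agent reacted because ovsdbapp's event framework matched a Port_Binding update whose chassis column had just been set. A hedged outline of such a handler (the production class lives in neutron.agent.ovn.metadata.agent; agent and provision_datapath below are stand-ins for illustration):

# Sketch of an ovsdbapp RowEvent like the PortBindingUpdatedEvent above.
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self, agent):
        self.agent = agent                       # hypothetical agent object
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Fire only when the port has just been bound to a chassis.
        return hasattr(old, 'chassis') and bool(row.chassis)

    def run(self, event, row, old):
        # "Port ... bound to our chassis" -> provision the metadata namespace
        self.agent.provision_datapath(row)       # stand-in method name
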
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.456 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0d0f10-ff51-4ffc-a773-a62c22d9474c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.457 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap25fac239-41 in ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.459 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap25fac239-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.459 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[86220807-422c-4d17-990a-2152ded7badb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.461 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d3547658-8db9-4292-8240-339b8f514011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.472 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4a36a7-b8e0-4a66-bf91-43e521b548ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.494 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[251d7223-13cc-400e-bf45-227bb770d9f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.523 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[f5cc98f8-3e8c-46db-97f3-39bb5bbac103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 NetworkManager[49031]: <info>  [1765008348.5304] manager: (tap25fac239-40): new Veth device (/org/freedesktop/NetworkManager/Devices/347)
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.529 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[222767d1-3ec3-424f-87c8-5da6fc423932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.561 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f47513-09ae-4d10-bb2f-6570f9275896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.564 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[1682eee0-9849-4ae6-b503-6fc199289b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 NetworkManager[49031]: <info>  [1765008348.5889] device (tap25fac239-40): carrier: link connected
Dec 06 08:05:48 compute-1 nova_compute[226101]: 2025-12-06 08:05:48.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.596 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b69fb06e-9c49-4656-87cf-e60765a161aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.612 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[20a598fe-79d7-41c7-901a-e4203f0199e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25fac239-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862809, 'reachable_time': 42319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302768, 'error': None, 'target': 'ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.624 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4c1b2b-d2c7-4cab-bb16-5d48b82045ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:2acc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 862809, 'tstamp': 862809}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302769, 'error': None, 'target': 'ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.640 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1cfd80-003d-4fec-8117-5afcb73b0bbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25fac239-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862809, 'reachable_time': 42319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302770, 'error': None, 'target': 'ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.670 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5dccc7c9-2b5a-4604-b661-cd5eaa495f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
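
The privsep replies above correspond to the agent creating the tap25fac239-40/tap25fac239-41 veth pair and wiring the -41 end into the ovnmeta- namespace. Roughly the same operations done directly with pyroute2, which is the library neutron's privileged ip_lib wraps (hedged sketch; interface and namespace names taken from the log):

# Hedged sketch: veth pair with one end inside the metadata namespace.
from pyroute2 import IPRoute, netns

ns = 'ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2'
netns.create(ns)                    # raises OSError if it already exists
ip = IPRoute()
ip.link('add', ifname='tap25fac239-40', kind='veth',
        peer={'ifname': 'tap25fac239-41', 'net_ns_fd': ns})
idx = ip.link_lookup(ifname='tap25fac239-40')[0]
ip.link('set', index=idx, state='up')
ip.close()
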
Dec 06 08:05:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:48.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.745 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4133bdb1-9580-4f0b-94e1-540814866b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.747 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25fac239-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.747 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.747 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25fac239-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:48 compute-1 kernel: tap25fac239-40: entered promiscuous mode
Dec 06 08:05:48 compute-1 NetworkManager[49031]: <info>  [1765008348.7498] manager: (tap25fac239-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Dec 06 08:05:48 compute-1 nova_compute[226101]: 2025-12-06 08:05:48.750 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.754 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25fac239-40, col_values=(('external_ids', {'iface-id': 'c3150ab9-4a73-4058-9e42-35c5cfea4fb5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:48 compute-1 ovn_controller[130279]: 2025-12-06T08:05:48Z|00769|binding|INFO|Releasing lport c3150ab9-4a73-4058-9e42-35c5cfea4fb5 from this chassis (sb_readonly=0)
Dec 06 08:05:48 compute-1 nova_compute[226101]: 2025-12-06 08:05:48.755 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.758 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/25fac239-4a45-4b5a-b8cc-e7aa2e8096f2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/25fac239-4a45-4b5a-b8cc-e7aa2e8096f2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.759 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1284125a-f686-40db-9a58-eaf60bd2c985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.759 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/25fac239-4a45-4b5a-b8cc-e7aa2e8096f2.pid.haproxy
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 25fac239-4a45-4b5a-b8cc-e7aa2e8096f2
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:05:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:05:48.761 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2', 'env', 'PROCESS_TAG=haproxy-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/25fac239-4a45-4b5a-b8cc-e7aa2e8096f2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
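The rendered config binds 169.254.169.254:80 inside the ovnmeta- namespace and proxies to the agent's unix socket (haproxy treats a server address beginning with '/' as a unix socket path). Two sanity checks an operator might run by hand, sketched below; the config path and namespace name are copied from the log lines above:

    import subprocess

    NETNS = 'ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2'
    CFG = ('/var/lib/neutron/ovn-metadata-proxy/'
           '25fac239-4a45-4b5a-b8cc-e7aa2e8096f2.conf')

    # 'haproxy -c' only parses the configuration and exits, so it validates
    # what the agent just rendered without starting a second proxy.
    subprocess.run(['haproxy', '-c', '-f', CFG], check=True)

    # The 169.254.169.254 bind lives inside the namespace, not the root one;
    # listing addresses there confirms the wiring.
    subprocess.run(['ip', 'netns', 'exec', NETNS, 'ip', 'addr'], check=True)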
Dec 06 08:05:48 compute-1 nova_compute[226101]: 2025-12-06 08:05:48.769 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:48 compute-1 nova_compute[226101]: 2025-12-06 08:05:48.969 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008348.968605, 51170699-7556-40a4-95f3-b2d8b499f1b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:05:48 compute-1 nova_compute[226101]: 2025-12-06 08:05:48.970 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] VM Started (Lifecycle Event)
Dec 06 08:05:49 compute-1 ceph-mon[81689]: pgmap v3392: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 113 op/s
Dec 06 08:05:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/795117279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:49 compute-1 podman[302844]: 2025-12-06 08:05:49.141351894 +0000 UTC m=+0.067122488 container create 845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 08:05:49 compute-1 systemd[1]: Started libpod-conmon-845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586.scope.
Dec 06 08:05:49 compute-1 podman[302844]: 2025-12-06 08:05:49.112670515 +0000 UTC m=+0.038441139 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:05:49 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:05:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc1819cbeff52805b9acaa42e948e19234be551028266650d2d8455a61a57d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:49 compute-1 podman[302844]: 2025-12-06 08:05:49.241095044 +0000 UTC m=+0.166865668 container init 845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 08:05:49 compute-1 podman[302844]: 2025-12-06 08:05:49.249017946 +0000 UTC m=+0.174788550 container start 845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:05:49 compute-1 neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2[302859]: [NOTICE]   (302863) : New worker (302865) forked
Dec 06 08:05:49 compute-1 neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2[302859]: [NOTICE]   (302863) : Loading success.
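The podman create/init/start events show that 'haproxy' here is not the bare binary: the agent invokes a wrapper (mounted as /usr/local/bin/haproxy, per the ovn_metadata_agent volume list logged at 08:06:14 below) that launches haproxy in a per-network container named neutron-haproxy-ovnmeta-<network-uuid>; the two NOTICE lines are haproxy's master process forking its worker. An illustrative way to list these proxies:

    import subprocess

    # List the per-network metadata proxies by their name prefix.
    subprocess.run(['podman', 'ps', '--filter', 'name=neutron-haproxy-ovnmeta',
                    '--format', '{{.Names}} {{.Status}}'], check=True)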
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.433 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.437 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008348.969778, 51170699-7556-40a4-95f3-b2d8b499f1b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.437 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] VM Paused (Lifecycle Event)
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.520 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.524 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.546 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] During sync_power_state the instance has a pending task (spawning). Skip.
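The mismatch in the sync line reads directly off nova.compute.power_state: the database still records 0 while libvirt already reports 3, i.e. the guest paused mid-spawn. The constants, copied here for reference (values as defined in nova/compute/power_state.py):

    # nova.compute.power_state values
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    STATE_NAME = {NOSTATE: 'NOSTATE', RUNNING: 'RUNNING', PAUSED: 'PAUSED',
                  SHUTDOWN: 'SHUTDOWN', CRASHED: 'CRASHED',
                  SUSPENDED: 'SUSPENDED'}

    # The sync line above: DB power_state 0, VM power_state 3.
    print(STATE_NAME[0], '->', STATE_NAME[3])   # NOSTATE -> PAUSED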
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:49.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.920 226109 DEBUG nova.compute.manager [req-bb555163-dd8d-4c2b-a524-61405ebeb82e req-84fc81fc-8e30-4319-97e4-da84abc6cdbc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Received event network-vif-plugged-174801fe-dd50-44bc-a386-88a83f955c86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.921 226109 DEBUG oslo_concurrency.lockutils [req-bb555163-dd8d-4c2b-a524-61405ebeb82e req-84fc81fc-8e30-4319-97e4-da84abc6cdbc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.921 226109 DEBUG oslo_concurrency.lockutils [req-bb555163-dd8d-4c2b-a524-61405ebeb82e req-84fc81fc-8e30-4319-97e4-da84abc6cdbc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.922 226109 DEBUG oslo_concurrency.lockutils [req-bb555163-dd8d-4c2b-a524-61405ebeb82e req-84fc81fc-8e30-4319-97e4-da84abc6cdbc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.922 226109 DEBUG nova.compute.manager [req-bb555163-dd8d-4c2b-a524-61405ebeb82e req-84fc81fc-8e30-4319-97e4-da84abc6cdbc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Processing event network-vif-plugged-174801fe-dd50-44bc-a386-88a83f955c86 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.922 226109 DEBUG nova.compute.manager [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
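The lock-protected pop above is the delivery half of Nova's external-event handshake: the spawn thread registers the events it expects (network-vif-plugged), then blocks in wait_for_instance_event until Neutron's notification pops and fires them, here completing in 0 seconds. A deliberately simplified sketch of the pattern; the names are illustrative, not Nova's actual classes:

    import threading

    _events = {}
    _lock = threading.Lock()

    def prepare_event(instance, name):
        # spawn side: register what we expect before plugging the VIF
        ev = threading.Event()
        with _lock:
            _events[(instance, name)] = ev
        return ev

    def deliver_event(instance, name):
        # external-event side: pop under the lock, as in the lines above
        with _lock:
            ev = _events.pop((instance, name), None)
        if ev is None:
            print('Received unexpected event', name)  # the 08:05:52 WARNING case
        else:
            ev.set()

    waiter = prepare_event('51170699', 'network-vif-plugged-174801fe')
    deliver_event('51170699', 'network-vif-plugged-174801fe')
    waiter.wait(timeout=300)   # nova's vif_plugging_timeout defaults to 300s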
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.926 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008349.926062, 51170699-7556-40a4-95f3-b2d8b499f1b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.926 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] VM Resumed (Lifecycle Event)
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.928 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.932 226109 INFO nova.virt.libvirt.driver [-] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Instance spawned successfully.
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.933 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.945 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:05:49 compute-1 nova_compute[226101]: 2025-12-06 08:05:49.949 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.027 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.031 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.032 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.032 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.033 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.033 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.034 226109 DEBUG nova.virt.libvirt.driver [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.215 226109 INFO nova.compute.manager [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Took 11.90 seconds to spawn the instance on the hypervisor.
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.218 226109 DEBUG nova.compute.manager [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.281 226109 INFO nova.compute.manager [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Took 12.97 seconds to build instance.
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.644 226109 DEBUG oslo_concurrency.lockutils [None req-a96a9186-b3ba-4deb-9d2d-6609b70a7a42 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:50.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:50 compute-1 nova_compute[226101]: 2025-12-06 08:05:50.897 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:51 compute-1 ceph-mon[81689]: pgmap v3393: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 08:05:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:51 compute-1 nova_compute[226101]: 2025-12-06 08:05:51.277 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:51.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:52 compute-1 nova_compute[226101]: 2025-12-06 08:05:52.010 226109 DEBUG nova.compute.manager [req-8adb97df-aef2-4f46-a5cd-5eaa5e830719 req-4e5cc35a-d993-4cb2-84d6-de130d47986b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Received event network-vif-plugged-174801fe-dd50-44bc-a386-88a83f955c86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:52 compute-1 nova_compute[226101]: 2025-12-06 08:05:52.010 226109 DEBUG oslo_concurrency.lockutils [req-8adb97df-aef2-4f46-a5cd-5eaa5e830719 req-4e5cc35a-d993-4cb2-84d6-de130d47986b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:52 compute-1 nova_compute[226101]: 2025-12-06 08:05:52.011 226109 DEBUG oslo_concurrency.lockutils [req-8adb97df-aef2-4f46-a5cd-5eaa5e830719 req-4e5cc35a-d993-4cb2-84d6-de130d47986b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:52 compute-1 nova_compute[226101]: 2025-12-06 08:05:52.011 226109 DEBUG oslo_concurrency.lockutils [req-8adb97df-aef2-4f46-a5cd-5eaa5e830719 req-4e5cc35a-d993-4cb2-84d6-de130d47986b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:52 compute-1 nova_compute[226101]: 2025-12-06 08:05:52.011 226109 DEBUG nova.compute.manager [req-8adb97df-aef2-4f46-a5cd-5eaa5e830719 req-4e5cc35a-d993-4cb2-84d6-de130d47986b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] No waiting events found dispatching network-vif-plugged-174801fe-dd50-44bc-a386-88a83f955c86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:05:52 compute-1 nova_compute[226101]: 2025-12-06 08:05:52.012 226109 WARNING nova.compute.manager [req-8adb97df-aef2-4f46-a5cd-5eaa5e830719 req-4e5cc35a-d993-4cb2-84d6-de130d47986b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Received unexpected event network-vif-plugged-174801fe-dd50-44bc-a386-88a83f955c86 for instance with vm_state active and task_state None.
Dec 06 08:05:52 compute-1 nova_compute[226101]: 2025-12-06 08:05:52.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:52.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:53 compute-1 NetworkManager[49031]: <info>  [1765008353.2676] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Dec 06 08:05:53 compute-1 NetworkManager[49031]: <info>  [1765008353.2685] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Dec 06 08:05:53 compute-1 nova_compute[226101]: 2025-12-06 08:05:53.268 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:53 compute-1 nova_compute[226101]: 2025-12-06 08:05:53.333 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:53 compute-1 ovn_controller[130279]: 2025-12-06T08:05:53Z|00770|binding|INFO|Releasing lport c3150ab9-4a73-4058-9e42-35c5cfea4fb5 from this chassis (sb_readonly=0)
Dec 06 08:05:53 compute-1 nova_compute[226101]: 2025-12-06 08:05:53.343 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:53 compute-1 nova_compute[226101]: 2025-12-06 08:05:53.572 226109 DEBUG nova.compute.manager [req-9a954071-6bb2-47d5-8629-6537f9a915be req-61b3d311-8b4b-4b2e-b505-ca731f56137f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Received event network-changed-174801fe-dd50-44bc-a386-88a83f955c86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:53 compute-1 nova_compute[226101]: 2025-12-06 08:05:53.572 226109 DEBUG nova.compute.manager [req-9a954071-6bb2-47d5-8629-6537f9a915be req-61b3d311-8b4b-4b2e-b505-ca731f56137f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Refreshing instance network info cache due to event network-changed-174801fe-dd50-44bc-a386-88a83f955c86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:05:53 compute-1 nova_compute[226101]: 2025-12-06 08:05:53.572 226109 DEBUG oslo_concurrency.lockutils [req-9a954071-6bb2-47d5-8629-6537f9a915be req-61b3d311-8b4b-4b2e-b505-ca731f56137f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:05:53 compute-1 nova_compute[226101]: 2025-12-06 08:05:53.572 226109 DEBUG oslo_concurrency.lockutils [req-9a954071-6bb2-47d5-8629-6537f9a915be req-61b3d311-8b4b-4b2e-b505-ca731f56137f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:05:53 compute-1 nova_compute[226101]: 2025-12-06 08:05:53.573 226109 DEBUG nova.network.neutron [req-9a954071-6bb2-47d5-8629-6537f9a915be req-61b3d311-8b4b-4b2e-b505-ca731f56137f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Refreshing network info cache for port 174801fe-dd50-44bc-a386-88a83f955c86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:05:53 compute-1 nova_compute[226101]: 2025-12-06 08:05:53.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:53.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:53 compute-1 ceph-mon[81689]: pgmap v3394: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 06 08:05:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:54.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:54 compute-1 nova_compute[226101]: 2025-12-06 08:05:54.944 226109 DEBUG nova.network.neutron [req-9a954071-6bb2-47d5-8629-6537f9a915be req-61b3d311-8b4b-4b2e-b505-ca731f56137f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Updated VIF entry in instance network info cache for port 174801fe-dd50-44bc-a386-88a83f955c86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:05:54 compute-1 nova_compute[226101]: 2025-12-06 08:05:54.946 226109 DEBUG nova.network.neutron [req-9a954071-6bb2-47d5-8629-6537f9a915be req-61b3d311-8b4b-4b2e-b505-ca731f56137f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Updating instance_info_cache with network_info: [{"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:05:54 compute-1 nova_compute[226101]: 2025-12-06 08:05:54.969 226109 DEBUG oslo_concurrency.lockutils [req-9a954071-6bb2-47d5-8629-6537f9a915be req-61b3d311-8b4b-4b2e-b505-ca731f56137f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
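The cache entry logged above is plain JSON, so it can be picked apart directly when debugging address assignment. A trimmed, runnable example keeping just the fields needed to recover the fixed and floating IPs:

    import json

    # Trimmed copy of the logged instance_info_cache entry.
    nw_info = json.loads('''
    [{"id": "174801fe-dd50-44bc-a386-88a83f955c86",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.6",
                 "floating_ips": [{"address": "192.168.122.177"}]}]}]}}]
    ''')

    for vif in nw_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats)
    # -> 174801fe-dd50-44bc-a386-88a83f955c86 10.100.0.6 ['192.168.122.177']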
Dec 06 08:05:55 compute-1 ceph-mon[81689]: pgmap v3395: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 80 op/s
Dec 06 08:05:55 compute-1 nova_compute[226101]: 2025-12-06 08:05:55.899 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:55.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:56 compute-1 nova_compute[226101]: 2025-12-06 08:05:56.279 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:56 compute-1 nova_compute[226101]: 2025-12-06 08:05:56.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:56.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:57 compute-1 ceph-mon[81689]: pgmap v3396: 305 pgs: 305 active+clean; 305 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 154 op/s
Dec 06 08:05:57 compute-1 sshd-session[302875]: Received disconnect from 154.209.4.183 port 44606:11: Bye Bye [preauth]
Dec 06 08:05:57 compute-1 sshd-session[302875]: Disconnected from authenticating user root 154.209.4.183 port 44606 [preauth]
Dec 06 08:05:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:57.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:58.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:58 compute-1 ceph-mon[81689]: pgmap v3397: 305 pgs: 305 active+clean; 305 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.4 MiB/s wr, 131 op/s
Dec 06 08:05:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:05:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:59.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:00.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:00 compute-1 nova_compute[226101]: 2025-12-06 08:06:00.936 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:00 compute-1 ceph-mon[81689]: pgmap v3398: 305 pgs: 305 active+clean; 305 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 97 op/s
Dec 06 08:06:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:01 compute-1 nova_compute[226101]: 2025-12-06 08:06:01.281 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:01.685 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:01.686 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:01.686 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:01.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:02.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:03 compute-1 ceph-mon[81689]: pgmap v3399: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec 06 08:06:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:03.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:04.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:05 compute-1 ovn_controller[130279]: 2025-12-06T08:06:05Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:4b:07 10.100.0.6
Dec 06 08:06:05 compute-1 ovn_controller[130279]: 2025-12-06T08:06:05Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:4b:07 10.100.0.6
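No dnsmasq is involved here: ovn-controller's pinctrl thread answers DHCP itself, handing 10.100.0.6 to the instance MAC straight from the logical-port data. The options served come from the northbound DHCP_Options table; one way to inspect them, assuming the NB database is reachable from where this runs (remote-db flags omitted):

    import subprocess

    # Dump the northbound DHCP_Options rows backing the OFFER/ACK above.
    subprocess.run(['ovn-nbctl', 'list', 'DHCP_Options'], check=True)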
Dec 06 08:06:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:05.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:05 compute-1 nova_compute[226101]: 2025-12-06 08:06:05.958 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:05 compute-1 ceph-mon[81689]: pgmap v3400: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Dec 06 08:06:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:06 compute-1 nova_compute[226101]: 2025-12-06 08:06:06.283 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:06.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:07 compute-1 ceph-mon[81689]: pgmap v3401: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Dec 06 08:06:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:07.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:08.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:08 compute-1 sshd-session[302878]: error: kex_exchange_identification: read: Connection timed out
Dec 06 08:06:08 compute-1 sshd-session[302878]: banner exchange: Connection from 120.240.95.27 port 42932: Connection timed out
Dec 06 08:06:09 compute-1 ceph-mon[81689]: pgmap v3402: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 621 KiB/s rd, 3.2 MiB/s wr, 102 op/s
Dec 06 08:06:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:09.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3387559026' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:06:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3387559026' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:06:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:10.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:10 compute-1 nova_compute[226101]: 2025-12-06 08:06:10.850 226109 INFO nova.compute.manager [None req-abffe1a2-757c-4ac7-ab3b-2f9dd3c161a4 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Get console output
Dec 06 08:06:10 compute-1 nova_compute[226101]: 2025-12-06 08:06:10.856 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
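The "Ignored error" is nova's console-read helper hitting an empty pty buffer and concatenating None onto bytes; the text is the stock CPython TypeError message, which nova deliberately downgrades to INFO and carries on:

    try:
        b'console output so far' + None
    except TypeError as exc:
        print(exc)   # can't concat NoneType to bytes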
Dec 06 08:06:10 compute-1 nova_compute[226101]: 2025-12-06 08:06:10.960 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:11 compute-1 nova_compute[226101]: 2025-12-06 08:06:11.163 226109 INFO nova.compute.manager [None req-88e0a94d-a042-4eb4-8ab6-00d616d3a41a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Pausing
Dec 06 08:06:11 compute-1 nova_compute[226101]: 2025-12-06 08:06:11.164 226109 DEBUG nova.objects.instance [None req-88e0a94d-a042-4eb4-8ab6-00d616d3a41a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'flavor' on Instance uuid 51170699-7556-40a4-95f3-b2d8b499f1b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:06:11 compute-1 nova_compute[226101]: 2025-12-06 08:06:11.195 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008371.1946225, 51170699-7556-40a4-95f3-b2d8b499f1b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:06:11 compute-1 nova_compute[226101]: 2025-12-06 08:06:11.196 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] VM Paused (Lifecycle Event)
Dec 06 08:06:11 compute-1 nova_compute[226101]: 2025-12-06 08:06:11.199 226109 DEBUG nova.compute.manager [None req-88e0a94d-a042-4eb4-8ab6-00d616d3a41a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:11 compute-1 nova_compute[226101]: 2025-12-06 08:06:11.245 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:11 compute-1 nova_compute[226101]: 2025-12-06 08:06:11.249 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:06:11 compute-1 nova_compute[226101]: 2025-12-06 08:06:11.286 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:11 compute-1 nova_compute[226101]: 2025-12-06 08:06:11.292 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] During sync_power_state the instance has a pending task (pausing). Skip.
Dec 06 08:06:11 compute-1 ceph-mon[81689]: pgmap v3403: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 2.8 MiB/s wr, 95 op/s
Dec 06 08:06:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:12.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:13 compute-1 ceph-mon[81689]: pgmap v3404: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 643 KiB/s rd, 2.9 MiB/s wr, 132 op/s
Dec 06 08:06:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1533816120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:13 compute-1 nova_compute[226101]: 2025-12-06 08:06:13.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:13 compute-1 sshd-session[302880]: Received disconnect from 124.18.141.70 port 57494:11: Bye Bye [preauth]
Dec 06 08:06:13 compute-1 sshd-session[302880]: Disconnected from authenticating user root 124.18.141.70 port 57494 [preauth]
Dec 06 08:06:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:13.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:14 compute-1 podman[302883]: 2025-12-06 08:06:14.099348524 +0000 UTC m=+0.067567200 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:06:14 compute-1 podman[302882]: 2025-12-06 08:06:14.114442068 +0000 UTC m=+0.095603971 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:06:14 compute-1 podman[302884]: 2025-12-06 08:06:14.144641746 +0000 UTC m=+0.108630939 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
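The three health_status=healthy events are the periodic checks declared in each container's config_data ('healthcheck': {'test': '/openstack/healthcheck'}). The same check can be triggered on demand; exit status 0 means healthy. A small sketch using the container names from the events above:

    import subprocess

    for name in ('ovn_metadata_agent', 'multipathd', 'ovn_controller'):
        # runs the configured check once, outside the timer
        subprocess.run(['podman', 'healthcheck', 'run', name], check=False)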
Dec 06 08:06:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:14.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.155 226109 INFO nova.compute.manager [None req-4d88725f-536e-459c-9129-2d19f9e099ac 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Get console output
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.160 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.353 226109 INFO nova.compute.manager [None req-f13c2866-6a9e-4772-952c-11cc7b6832b3 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Unpausing
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.354 226109 DEBUG nova.objects.instance [None req-f13c2866-6a9e-4772-952c-11cc7b6832b3 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'flavor' on Instance uuid 51170699-7556-40a4-95f3-b2d8b499f1b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:06:15 compute-1 ceph-mon[81689]: pgmap v3405: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.386 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008375.3863728, 51170699-7556-40a4-95f3-b2d8b499f1b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.386 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] VM Resumed (Lifecycle Event)
Dec 06 08:06:15 compute-1 virtqemud[225710]: argument unsupported: QEMU guest agent is not configured
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.390 226109 DEBUG nova.virt.libvirt.guest [None req-f13c2866-6a9e-4772-952c-11cc7b6832b3 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.391 226109 DEBUG nova.compute.manager [None req-f13c2866-6a9e-4772-952c-11cc7b6832b3 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.436 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.439 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.463 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] During sync_power_state the instance has a pending task (unpausing). Skip.
Dec 06 08:06:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:15 compute-1 nova_compute[226101]: 2025-12-06 08:06:15.964 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:16 compute-1 nova_compute[226101]: 2025-12-06 08:06:16.288 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1944957738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:16.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:17 compute-1 ceph-mon[81689]: pgmap v3406: 305 pgs: 305 active+clean; 209 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 116 op/s
Dec 06 08:06:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:17.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:18 compute-1 nova_compute[226101]: 2025-12-06 08:06:18.175 226109 INFO nova.compute.manager [None req-27a6eaf6-8fc4-4516-8d20-d9ce6e7a5a4b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Get console output
Dec 06 08:06:18 compute-1 nova_compute[226101]: 2025-12-06 08:06:18.180 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:06:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:18.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.194 226109 DEBUG nova.compute.manager [req-602049b9-6035-4581-b1f9-1b8f4ac8e2aa req-e4926e0f-be8f-4c9a-a57f-ea78e95a0ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Received event network-changed-174801fe-dd50-44bc-a386-88a83f955c86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.195 226109 DEBUG nova.compute.manager [req-602049b9-6035-4581-b1f9-1b8f4ac8e2aa req-e4926e0f-be8f-4c9a-a57f-ea78e95a0ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Refreshing instance network info cache due to event network-changed-174801fe-dd50-44bc-a386-88a83f955c86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.195 226109 DEBUG oslo_concurrency.lockutils [req-602049b9-6035-4581-b1f9-1b8f4ac8e2aa req-e4926e0f-be8f-4c9a-a57f-ea78e95a0ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.195 226109 DEBUG oslo_concurrency.lockutils [req-602049b9-6035-4581-b1f9-1b8f4ac8e2aa req-e4926e0f-be8f-4c9a-a57f-ea78e95a0ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.195 226109 DEBUG nova.network.neutron [req-602049b9-6035-4581-b1f9-1b8f4ac8e2aa req-e4926e0f-be8f-4c9a-a57f-ea78e95a0ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Refreshing network info cache for port 174801fe-dd50-44bc-a386-88a83f955c86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.290 226109 DEBUG oslo_concurrency.lockutils [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "51170699-7556-40a4-95f3-b2d8b499f1b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.290 226109 DEBUG oslo_concurrency.lockutils [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.291 226109 DEBUG oslo_concurrency.lockutils [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.291 226109 DEBUG oslo_concurrency.lockutils [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.291 226109 DEBUG oslo_concurrency.lockutils [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.292 226109 INFO nova.compute.manager [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Terminating instance
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.293 226109 DEBUG nova.compute.manager [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:06:19 compute-1 kernel: tap174801fe-dd (unregistering): left promiscuous mode
Dec 06 08:06:19 compute-1 NetworkManager[49031]: <info>  [1765008379.3537] device (tap174801fe-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:06:19 compute-1 ovn_controller[130279]: 2025-12-06T08:06:19Z|00771|binding|INFO|Releasing lport 174801fe-dd50-44bc-a386-88a83f955c86 from this chassis (sb_readonly=0)
Dec 06 08:06:19 compute-1 ovn_controller[130279]: 2025-12-06T08:06:19Z|00772|binding|INFO|Setting lport 174801fe-dd50-44bc-a386-88a83f955c86 down in Southbound
Dec 06 08:06:19 compute-1 ovn_controller[130279]: 2025-12-06T08:06:19Z|00773|binding|INFO|Removing iface tap174801fe-dd ovn-installed in OVS
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.358 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.376 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:19 compute-1 ceph-mon[81689]: pgmap v3407: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 274 KiB/s wr, 75 op/s
Dec 06 08:06:19 compute-1 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000be.scope: Deactivated successfully.
Dec 06 08:06:19 compute-1 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000be.scope: Consumed 14.111s CPU time.
Dec 06 08:06:19 compute-1 systemd-machined[190302]: Machine qemu-88-instance-000000be terminated.
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.530 226109 INFO nova.virt.libvirt.driver [-] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Instance destroyed successfully.
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.531 226109 DEBUG nova.objects.instance [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid 51170699-7556-40a4-95f3-b2d8b499f1b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.549 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:4b:07 10.100.0.6'], port_security=['fa:16:3e:9d:4b:07 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '51170699-7556-40a4-95f3-b2d8b499f1b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c97cc05a-848a-4069-80b3-8e90d28fbcc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5131d9-0bd3-410a-948f-4280459ef61c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=174801fe-dd50-44bc-a386-88a83f955c86) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.550 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 174801fe-dd50-44bc-a386-88a83f955c86 in datapath 25fac239-4a45-4b5a-b8cc-e7aa2e8096f2 unbound from our chassis
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.552 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 25fac239-4a45-4b5a-b8cc-e7aa2e8096f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.553 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c21a4e17-1269-42b3-a20b-8a71b3430727]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.554 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2 namespace which is not needed anymore
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.582 226109 DEBUG nova.virt.libvirt.vif [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:05:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1191883859',display_name='tempest-TestNetworkAdvancedServerOps-server-1191883859',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1191883859',id=190,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLPh67iVlMNBwVIFOg+K6SNA2aP27r1EHLDEGSOfvk9eDH8d5wwuJ4kOqTmKVBrgDAV0izmfVrlh1RNi6QP+tXbAf20JyAnTGYkLX2hIIzOE8kvF4A0s67F/yJUxPQuW5A==',key_name='tempest-TestNetworkAdvancedServerOps-633807334',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:05:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-10lcag3z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:06:15Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=51170699-7556-40a4-95f3-b2d8b499f1b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.583 226109 DEBUG nova.network.os_vif_util [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.583 226109 DEBUG nova.network.os_vif_util [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:4b:07,bridge_name='br-int',has_traffic_filtering=True,id=174801fe-dd50-44bc-a386-88a83f955c86,network=Network(25fac239-4a45-4b5a-b8cc-e7aa2e8096f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap174801fe-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.584 226109 DEBUG os_vif [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:4b:07,bridge_name='br-int',has_traffic_filtering=True,id=174801fe-dd50-44bc-a386-88a83f955c86,network=Network(25fac239-4a45-4b5a-b8cc-e7aa2e8096f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap174801fe-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.585 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.586 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap174801fe-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.587 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.588 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.591 226109 INFO os_vif [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:4b:07,bridge_name='br-int',has_traffic_filtering=True,id=174801fe-dd50-44bc-a386-88a83f955c86,network=Network(25fac239-4a45-4b5a-b8cc-e7aa2e8096f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap174801fe-dd')
Dec 06 08:06:19 compute-1 neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2[302859]: [NOTICE]   (302863) : haproxy version is 2.8.14-c23fe91
Dec 06 08:06:19 compute-1 neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2[302859]: [NOTICE]   (302863) : path to executable is /usr/sbin/haproxy
Dec 06 08:06:19 compute-1 neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2[302859]: [WARNING]  (302863) : Exiting Master process...
Dec 06 08:06:19 compute-1 neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2[302859]: [ALERT]    (302863) : Current worker (302865) exited with code 143 (Terminated)
Dec 06 08:06:19 compute-1 neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2[302859]: [WARNING]  (302863) : All workers exited. Exiting... (0)
Dec 06 08:06:19 compute-1 systemd[1]: libpod-845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586.scope: Deactivated successfully.
Dec 06 08:06:19 compute-1 podman[302998]: 2025-12-06 08:06:19.679307846 +0000 UTC m=+0.041955364 container died 845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:06:19 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586-userdata-shm.mount: Deactivated successfully.
Dec 06 08:06:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-6cc1819cbeff52805b9acaa42e948e19234be551028266650d2d8455a61a57d8-merged.mount: Deactivated successfully.
Dec 06 08:06:19 compute-1 podman[302998]: 2025-12-06 08:06:19.719491652 +0000 UTC m=+0.082139170 container cleanup 845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 08:06:19 compute-1 systemd[1]: libpod-conmon-845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586.scope: Deactivated successfully.
Dec 06 08:06:19 compute-1 podman[303030]: 2025-12-06 08:06:19.781307987 +0000 UTC m=+0.041704908 container remove 845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.788 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[23c568a0-6773-48cf-bf06-0ea51eef2da3]: (4, ('Sat Dec  6 08:06:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2 (845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586)\n845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586\nSat Dec  6 08:06:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2 (845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586)\n845e5caa7c1c10b2049a102d76565fbd88f6914f4aaea6271583629abac5e586\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.789 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d8cf59-45c0-4b3e-85d9-9071742069e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.790 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25fac239-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:19 compute-1 kernel: tap25fac239-40: left promiscuous mode
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.794 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.804 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.806 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a3995a5f-19b4-4210-b5dc-be6dc9dfaf9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.819 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad1dfa3-cc9f-439f-a19a-97f5ba863749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.820 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6202fd46-e1aa-483f-bd82-5511a39121ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.837 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[acf13c48-c938-453b-be09-de8c132a05c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862802, 'reachable_time': 16250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303046, 'error': None, 'target': 'ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:19 compute-1 systemd[1]: run-netns-ovnmeta\x2d25fac239\x2d4a45\x2d4b5a\x2db8cc\x2de7aa2e8096f2.mount: Deactivated successfully.
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.840 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-25fac239-4a45-4b5a-b8cc-e7aa2e8096f2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:06:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:19.840 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3377e4-fff3-451f-9fb7-ec913572fbb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.907 226109 DEBUG nova.compute.manager [req-f5e1e2d3-b9a8-4b9e-9821-059486f0f070 req-39ad5c2a-8f07-45d8-a190-ba5d2cf749ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Received event network-vif-unplugged-174801fe-dd50-44bc-a386-88a83f955c86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.908 226109 DEBUG oslo_concurrency.lockutils [req-f5e1e2d3-b9a8-4b9e-9821-059486f0f070 req-39ad5c2a-8f07-45d8-a190-ba5d2cf749ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.909 226109 DEBUG oslo_concurrency.lockutils [req-f5e1e2d3-b9a8-4b9e-9821-059486f0f070 req-39ad5c2a-8f07-45d8-a190-ba5d2cf749ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.909 226109 DEBUG oslo_concurrency.lockutils [req-f5e1e2d3-b9a8-4b9e-9821-059486f0f070 req-39ad5c2a-8f07-45d8-a190-ba5d2cf749ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.909 226109 DEBUG nova.compute.manager [req-f5e1e2d3-b9a8-4b9e-9821-059486f0f070 req-39ad5c2a-8f07-45d8-a190-ba5d2cf749ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] No waiting events found dispatching network-vif-unplugged-174801fe-dd50-44bc-a386-88a83f955c86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:06:19 compute-1 nova_compute[226101]: 2025-12-06 08:06:19.909 226109 DEBUG nova.compute.manager [req-f5e1e2d3-b9a8-4b9e-9821-059486f0f070 req-39ad5c2a-8f07-45d8-a190-ba5d2cf749ae 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Received event network-vif-unplugged-174801fe-dd50-44bc-a386-88a83f955c86 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:06:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:19.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:20 compute-1 nova_compute[226101]: 2025-12-06 08:06:20.006 226109 INFO nova.virt.libvirt.driver [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Deleting instance files /var/lib/nova/instances/51170699-7556-40a4-95f3-b2d8b499f1b0_del
Dec 06 08:06:20 compute-1 nova_compute[226101]: 2025-12-06 08:06:20.007 226109 INFO nova.virt.libvirt.driver [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Deletion of /var/lib/nova/instances/51170699-7556-40a4-95f3-b2d8b499f1b0_del complete
Dec 06 08:06:20 compute-1 nova_compute[226101]: 2025-12-06 08:06:20.075 226109 INFO nova.compute.manager [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Took 0.78 seconds to destroy the instance on the hypervisor.
Dec 06 08:06:20 compute-1 nova_compute[226101]: 2025-12-06 08:06:20.076 226109 DEBUG oslo.service.loopingcall [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:06:20 compute-1 nova_compute[226101]: 2025-12-06 08:06:20.076 226109 DEBUG nova.compute.manager [-] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:06:20 compute-1 nova_compute[226101]: 2025-12-06 08:06:20.076 226109 DEBUG nova.network.neutron [-] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:06:20 compute-1 ceph-mon[81689]: pgmap v3408: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 62 KiB/s wr, 68 op/s
Dec 06 08:06:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:20.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:20 compute-1 nova_compute[226101]: 2025-12-06 08:06:20.937 226109 DEBUG nova.network.neutron [-] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:06:20 compute-1 nova_compute[226101]: 2025-12-06 08:06:20.976 226109 INFO nova.compute.manager [-] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Took 0.90 seconds to deallocate network for instance.
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.001 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.017 226109 DEBUG oslo_concurrency.lockutils [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.018 226109 DEBUG oslo_concurrency.lockutils [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.054 226109 DEBUG nova.compute.manager [req-f062f9d5-3a38-4c20-9499-4e046a16b94b req-c27829c1-5420-4287-b6d6-7a6115595535 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Received event network-vif-deleted-174801fe-dd50-44bc-a386-88a83f955c86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.082 226109 DEBUG oslo_concurrency.processutils [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.180 226109 DEBUG nova.network.neutron [req-602049b9-6035-4581-b1f9-1b8f4ac8e2aa req-e4926e0f-be8f-4c9a-a57f-ea78e95a0ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Updated VIF entry in instance network info cache for port 174801fe-dd50-44bc-a386-88a83f955c86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.181 226109 DEBUG nova.network.neutron [req-602049b9-6035-4581-b1f9-1b8f4ac8e2aa req-e4926e0f-be8f-4c9a-a57f-ea78e95a0ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Updating instance_info_cache with network_info: [{"id": "174801fe-dd50-44bc-a386-88a83f955c86", "address": "fa:16:3e:9d:4b:07", "network": {"id": "25fac239-4a45-4b5a-b8cc-e7aa2e8096f2", "bridge": "br-int", "label": "tempest-network-smoke--124731404", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap174801fe-dd", "ovs_interfaceid": "174801fe-dd50-44bc-a386-88a83f955c86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.201 226109 DEBUG oslo_concurrency.lockutils [req-602049b9-6035-4581-b1f9-1b8f4ac8e2aa req-e4926e0f-be8f-4c9a-a57f-ea78e95a0ae3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-51170699-7556-40a4-95f3-b2d8b499f1b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.360 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.471 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:06:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/98723092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.542 226109 DEBUG oslo_concurrency.processutils [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.552 226109 DEBUG nova.compute.provider_tree [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.575 226109 DEBUG nova.scheduler.client.report [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.612 226109 DEBUG oslo_concurrency.lockutils [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.651 226109 INFO nova.scheduler.client.report [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Deleted allocations for instance 51170699-7556-40a4-95f3-b2d8b499f1b0
Dec 06 08:06:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/98723092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:21 compute-1 nova_compute[226101]: 2025-12-06 08:06:21.787 226109 DEBUG oslo_concurrency.lockutils [None req-a28d93ec-1315-462c-96f3-4a490871e522 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:22 compute-1 nova_compute[226101]: 2025-12-06 08:06:22.011 226109 DEBUG nova.compute.manager [req-fa22179a-f24a-40dc-8b56-50b4de7d2444 req-242c3299-ba93-4d5a-8d41-3c68d0d31518 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Received event network-vif-plugged-174801fe-dd50-44bc-a386-88a83f955c86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:22 compute-1 nova_compute[226101]: 2025-12-06 08:06:22.012 226109 DEBUG oslo_concurrency.lockutils [req-fa22179a-f24a-40dc-8b56-50b4de7d2444 req-242c3299-ba93-4d5a-8d41-3c68d0d31518 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:22 compute-1 nova_compute[226101]: 2025-12-06 08:06:22.012 226109 DEBUG oslo_concurrency.lockutils [req-fa22179a-f24a-40dc-8b56-50b4de7d2444 req-242c3299-ba93-4d5a-8d41-3c68d0d31518 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:22 compute-1 nova_compute[226101]: 2025-12-06 08:06:22.012 226109 DEBUG oslo_concurrency.lockutils [req-fa22179a-f24a-40dc-8b56-50b4de7d2444 req-242c3299-ba93-4d5a-8d41-3c68d0d31518 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "51170699-7556-40a4-95f3-b2d8b499f1b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:22 compute-1 nova_compute[226101]: 2025-12-06 08:06:22.012 226109 DEBUG nova.compute.manager [req-fa22179a-f24a-40dc-8b56-50b4de7d2444 req-242c3299-ba93-4d5a-8d41-3c68d0d31518 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] No waiting events found dispatching network-vif-plugged-174801fe-dd50-44bc-a386-88a83f955c86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:06:22 compute-1 nova_compute[226101]: 2025-12-06 08:06:22.013 226109 WARNING nova.compute.manager [req-fa22179a-f24a-40dc-8b56-50b4de7d2444 req-242c3299-ba93-4d5a-8d41-3c68d0d31518 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Received unexpected event network-vif-plugged-174801fe-dd50-44bc-a386-88a83f955c86 for instance with vm_state deleted and task_state None.
Dec 06 08:06:22 compute-1 ceph-mon[81689]: pgmap v3409: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 75 KiB/s wr, 95 op/s
Dec 06 08:06:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:22.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:23.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:24 compute-1 nova_compute[226101]: 2025-12-06 08:06:24.589 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:24 compute-1 ceph-mon[81689]: pgmap v3410: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 57 op/s
Dec 06 08:06:24 compute-1 sudo[303070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:24 compute-1 sudo[303070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:24 compute-1 sudo[303070]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:24.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:24 compute-1 sudo[303095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:06:24 compute-1 sudo[303095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:24 compute-1 sudo[303095]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:24 compute-1 sudo[303120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:24 compute-1 sudo[303120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:24 compute-1 sudo[303120]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:24 compute-1 sudo[303145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:06:24 compute-1 sudo[303145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:25 compute-1 sudo[303145]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:06:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:06:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:06:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:06:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:06:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:06:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:25.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:26 compute-1 nova_compute[226101]: 2025-12-06 08:06:26.002 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:26.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:26 compute-1 ceph-mon[81689]: pgmap v3411: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 15 KiB/s wr, 59 op/s
Dec 06 08:06:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:27.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:28.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:29 compute-1 ceph-mon[81689]: pgmap v3412: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 31 op/s
Dec 06 08:06:29 compute-1 nova_compute[226101]: 2025-12-06 08:06:29.592 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:29.643 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:06:29 compute-1 nova_compute[226101]: 2025-12-06 08:06:29.644 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:29.644 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:06:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:29.645 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:29.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:30.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:31 compute-1 nova_compute[226101]: 2025-12-06 08:06:31.003 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:31 compute-1 ceph-mon[81689]: pgmap v3413: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:06:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:31.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:32 compute-1 sudo[303204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:32 compute-1 sudo[303204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:32 compute-1 sudo[303204]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:32 compute-1 sudo[303229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:06:32 compute-1 sudo[303229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:32 compute-1 sudo[303229]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:32 compute-1 nova_compute[226101]: 2025-12-06 08:06:32.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:32 compute-1 nova_compute[226101]: 2025-12-06 08:06:32.624 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:32 compute-1 nova_compute[226101]: 2025-12-06 08:06:32.624 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:32 compute-1 nova_compute[226101]: 2025-12-06 08:06:32.624 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:32 compute-1 nova_compute[226101]: 2025-12-06 08:06:32.625 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:06:32 compute-1 nova_compute[226101]: 2025-12-06 08:06:32.625 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:32.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:06:33 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2064629927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.041 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:33 compute-1 ceph-mon[81689]: pgmap v3414: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:06:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:06:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:06:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2064629927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.187 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.189 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4292MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.189 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.189 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.250 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.251 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.265 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:06:33 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4058813300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.681 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.686 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.844 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.927 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:06:33 compute-1 nova_compute[226101]: 2025-12-06 08:06:33.928 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4058813300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:34 compute-1 nova_compute[226101]: 2025-12-06 08:06:34.530 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008379.528618, 51170699-7556-40a4-95f3-b2d8b499f1b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:06:34 compute-1 nova_compute[226101]: 2025-12-06 08:06:34.530 226109 INFO nova.compute.manager [-] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] VM Stopped (Lifecycle Event)
Dec 06 08:06:34 compute-1 nova_compute[226101]: 2025-12-06 08:06:34.595 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:34.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:35 compute-1 nova_compute[226101]: 2025-12-06 08:06:35.100 226109 DEBUG nova.compute.manager [None req-1b458527-353d-430a-b4d7-52e8785c6ab8 - - - - - -] [instance: 51170699-7556-40a4-95f3-b2d8b499f1b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:35 compute-1 ceph-mon[81689]: pgmap v3415: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 08:06:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:35.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:36 compute-1 nova_compute[226101]: 2025-12-06 08:06:36.039 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:36.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:37.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:38 compute-1 ceph-mon[81689]: pgmap v3416: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 08:06:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:38.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:38 compute-1 nova_compute[226101]: 2025-12-06 08:06:38.929 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:38 compute-1 nova_compute[226101]: 2025-12-06 08:06:38.930 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:06:38 compute-1 nova_compute[226101]: 2025-12-06 08:06:38.930 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:06:38 compute-1 nova_compute[226101]: 2025-12-06 08:06:38.949 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:06:39 compute-1 ceph-mon[81689]: pgmap v3417: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:39 compute-1 sshd-session[303301]: Received disconnect from 136.112.8.45 port 41136:11: Bye Bye [preauth]
Dec 06 08:06:39 compute-1 sshd-session[303301]: Disconnected from authenticating user root 136.112.8.45 port 41136 [preauth]
Dec 06 08:06:39 compute-1 nova_compute[226101]: 2025-12-06 08:06:39.598 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:39.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2401945160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:40.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:41 compute-1 nova_compute[226101]: 2025-12-06 08:06:41.041 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:41 compute-1 ceph-mon[81689]: pgmap v3418: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/378523446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:41 compute-1 nova_compute[226101]: 2025-12-06 08:06:41.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:41 compute-1 nova_compute[226101]: 2025-12-06 08:06:41.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:06:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:41.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:42 compute-1 ceph-mon[81689]: pgmap v3419: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:42.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:43.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2229911353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:44 compute-1 nova_compute[226101]: 2025-12-06 08:06:44.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:44 compute-1 nova_compute[226101]: 2025-12-06 08:06:44.602 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:44.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:45 compute-1 podman[303303]: 2025-12-06 08:06:45.100974347 +0000 UTC m=+0.077133046 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:06:45 compute-1 podman[303304]: 2025-12-06 08:06:45.117476708 +0000 UTC m=+0.092973540 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:06:45 compute-1 podman[303306]: 2025-12-06 08:06:45.149931177 +0000 UTC m=+0.109205284 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:06:45 compute-1 ceph-mon[81689]: pgmap v3420: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:46 compute-1 nova_compute[226101]: 2025-12-06 08:06:46.092 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:46 compute-1 nova_compute[226101]: 2025-12-06 08:06:46.666 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:46 compute-1 nova_compute[226101]: 2025-12-06 08:06:46.666 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:46 compute-1 nova_compute[226101]: 2025-12-06 08:06:46.692 226109 DEBUG nova.compute.manager [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:06:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:46.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:46 compute-1 nova_compute[226101]: 2025-12-06 08:06:46.811 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:46 compute-1 nova_compute[226101]: 2025-12-06 08:06:46.812 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:46 compute-1 nova_compute[226101]: 2025-12-06 08:06:46.820 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:06:46 compute-1 nova_compute[226101]: 2025-12-06 08:06:46.820 226109 INFO nova.compute.claims [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Claim successful on node compute-1.ctlplane.example.com
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.014 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:47 compute-1 ceph-mon[81689]: pgmap v3421: 305 pgs: 305 active+clean; 154 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 893 KiB/s wr, 24 op/s
Dec 06 08:06:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:06:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1111685073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.498 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.505 226109 DEBUG nova.compute.provider_tree [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.527 226109 DEBUG nova.scheduler.client.report [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.577 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.579 226109 DEBUG nova.compute.manager [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.637 226109 DEBUG nova.compute.manager [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.638 226109 DEBUG nova.network.neutron [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.669 226109 INFO nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.696 226109 DEBUG nova.compute.manager [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.824 226109 DEBUG nova.compute.manager [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.826 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.826 226109 INFO nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Creating image(s)
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.854 226109 DEBUG nova.storage.rbd_utils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.888 226109 DEBUG nova.storage.rbd_utils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.915 226109 DEBUG nova.storage.rbd_utils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.920 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:47 compute-1 nova_compute[226101]: 2025-12-06 08:06:47.960 226109 DEBUG nova.policy [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed2d17026504d70b893923a85cece4d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:06:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.011 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.012 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.012 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.013 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.033 226109 DEBUG nova.storage.rbd_utils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.036 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1111685073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.338 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.413 226109 DEBUG nova.storage.rbd_utils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] resizing rbd image 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.524 226109 DEBUG nova.objects.instance [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.545 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.545 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Ensure instance console log exists: /var/lib/nova/instances/926a54a1-96cf-4d45-a783-3b13dc5fbbe1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.546 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.546 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:48 compute-1 nova_compute[226101]: 2025-12-06 08:06:48.546 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:48.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:49 compute-1 ceph-mon[81689]: pgmap v3422: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Dec 06 08:06:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/27461616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:49 compute-1 nova_compute[226101]: 2025-12-06 08:06:49.545 226109 DEBUG nova.network.neutron [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Successfully created port: 580b9a18-5cd3-4fa5-95ba-d260b21e0197 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:06:49 compute-1 nova_compute[226101]: 2025-12-06 08:06:49.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:49 compute-1 nova_compute[226101]: 2025-12-06 08:06:49.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:49 compute-1 nova_compute[226101]: 2025-12-06 08:06:49.606 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1249092426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/756859241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3203557611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:50.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:50 compute-1 nova_compute[226101]: 2025-12-06 08:06:50.871 226109 DEBUG nova.network.neutron [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Successfully updated port: 580b9a18-5cd3-4fa5-95ba-d260b21e0197 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:06:50 compute-1 nova_compute[226101]: 2025-12-06 08:06:50.886 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:06:50 compute-1 nova_compute[226101]: 2025-12-06 08:06:50.886 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:06:50 compute-1 nova_compute[226101]: 2025-12-06 08:06:50.887 226109 DEBUG nova.network.neutron [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:06:50 compute-1 nova_compute[226101]: 2025-12-06 08:06:50.942 226109 DEBUG nova.compute.manager [req-6d7282f2-76fe-422a-9c38-2c184d8b5c41 req-e2e77924-6336-4fbd-9f14-7d84208dc5a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-changed-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:50 compute-1 nova_compute[226101]: 2025-12-06 08:06:50.943 226109 DEBUG nova.compute.manager [req-6d7282f2-76fe-422a-9c38-2c184d8b5c41 req-e2e77924-6336-4fbd-9f14-7d84208dc5a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Refreshing instance network info cache due to event network-changed-580b9a18-5cd3-4fa5-95ba-d260b21e0197. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:06:50 compute-1 nova_compute[226101]: 2025-12-06 08:06:50.943 226109 DEBUG oslo_concurrency.lockutils [req-6d7282f2-76fe-422a-9c38-2c184d8b5c41 req-e2e77924-6336-4fbd-9f14-7d84208dc5a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:06:51 compute-1 nova_compute[226101]: 2025-12-06 08:06:51.094 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:51 compute-1 ceph-mon[81689]: pgmap v3423: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Dec 06 08:06:51 compute-1 nova_compute[226101]: 2025-12-06 08:06:51.687 226109 DEBUG nova.network.neutron [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:06:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:51.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:52.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:53 compute-1 sshd-session[303556]: Received disconnect from 186.87.166.141 port 54754:11: Bye Bye [preauth]
Dec 06 08:06:53 compute-1 sshd-session[303556]: Disconnected from authenticating user root 186.87.166.141 port 54754 [preauth]
Dec 06 08:06:53 compute-1 ceph-mon[81689]: pgmap v3424: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 56 op/s
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.879 226109 DEBUG nova.network.neutron [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Updating instance_info_cache with network_info: [{"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.903 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.903 226109 DEBUG nova.compute.manager [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Instance network_info: |[{"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
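The network_info payload logged above is plain JSON. A small sketch of pulling the fixed IP, MAC, and tap device out of it; the literal below is abbreviated to the fields shown in the log, nothing added:

    # Parse the network_info structure logged above and extract addressing.
    import json

    network_info = json.loads("""[{"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197",
      "address": "fa:16:3e:48:6c:4e",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4}]}]},
      "devname": "tap580b9a18-5c"}]""")

    vif = network_info[0]
    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(vif["address"], vif["devname"], fixed_ips)
    # fa:16:3e:48:6c:4e tap580b9a18-5c ['10.100.0.12']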
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.904 226109 DEBUG oslo_concurrency.lockutils [req-6d7282f2-76fe-422a-9c38-2c184d8b5c41 req-e2e77924-6336-4fbd-9f14-7d84208dc5a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.904 226109 DEBUG nova.network.neutron [req-6d7282f2-76fe-422a-9c38-2c184d8b5c41 req-e2e77924-6336-4fbd-9f14-7d84208dc5a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Refreshing network info cache for port 580b9a18-5cd3-4fa5-95ba-d260b21e0197 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.908 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Start _get_guest_xml network_info=[{"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.914 226109 WARNING nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.920 226109 DEBUG nova.virt.libvirt.host [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.921 226109 DEBUG nova.virt.libvirt.host [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.927 226109 DEBUG nova.virt.libvirt.host [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.927 226109 DEBUG nova.virt.libvirt.host [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.930 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.931 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.931 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.932 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.932 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.933 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.933 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.933 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.934 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.934 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.934 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.935 226109 DEBUG nova.virt.hardware [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:06:53 compute-1 nova_compute[226101]: 2025-12-06 08:06:53.939 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:53.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:06:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/614127371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.370 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
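The driver shells out to the ceph CLI here via oslo.concurrency's processutils. A sketch of the same call, with the command taken verbatim from the log line above:

    # Re-run the exact command executed above; processutils.execute() returns
    # (stdout, stderr) and raises ProcessExecutionError on a nonzero exit code.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    mon_map = json.loads(out)
    print([m["name"] for m in mon_map.get("mons", [])])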
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.404 226109 DEBUG nova.storage.rbd_utils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
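The "does not exist" probe above amounts to opening the image through the RBD Python bindings and treating ImageNotFound as absence. A minimal sketch, assuming the `vms` pool (the pool name appears later in the generated domain XML) and the client/conf values from elsewhere in this log:

    # Existence check matching the log line above: try to open the image and
    # treat rbd.ImageNotFound as "absent". Pool/conf/client are assumptions
    # taken from other lines in this log.
    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            try:
                with rbd.Image(ioctx, "926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk.config"):
                    print("image exists")
            except rbd.ImageNotFound:
                print("image does not exist")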
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.410 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/614127371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.643 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:54.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:06:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1753148160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.840 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.841 226109 DEBUG nova.virt.libvirt.vif [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-39574177',display_name='tempest-TestNetworkAdvancedServerOps-server-39574177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-39574177',id=192,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGEn9VmFG5AKDWUUMC+9jjLchOz9ok8ZoheVEL8Lpk5Ek0wEhTQ3QYoYfW2hmtWVDKNVdYvLeyszMS7A3hcKtZg/qa2spqV12IhYYvt9ANdB3mURsEw+igU2XgRBC9k0QQ==',key_name='tempest-TestNetworkAdvancedServerOps-829698560',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-7a8dmqq3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:06:47Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=926a54a1-96cf-4d45-a783-3b13dc5fbbe1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.841 226109 DEBUG nova.network.os_vif_util [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.842 226109 DEBUG nova.network.os_vif_util [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:6c:4e,bridge_name='br-int',has_traffic_filtering=True,id=580b9a18-5cd3-4fa5-95ba-d260b21e0197,network=Network(b77bbda0-3ec0-4eca-bc98-409d33462f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580b9a18-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.843 226109 DEBUG nova.objects.instance [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.865 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:06:54 compute-1 nova_compute[226101]:   <uuid>926a54a1-96cf-4d45-a783-3b13dc5fbbe1</uuid>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   <name>instance-000000c0</name>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-39574177</nova:name>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:06:53</nova:creationTime>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <nova:port uuid="580b9a18-5cd3-4fa5-95ba-d260b21e0197">
Dec 06 08:06:54 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <system>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <entry name="serial">926a54a1-96cf-4d45-a783-3b13dc5fbbe1</entry>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <entry name="uuid">926a54a1-96cf-4d45-a783-3b13dc5fbbe1</entry>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     </system>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   <os>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   </os>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   <features>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   </features>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk">
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       </source>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk.config">
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       </source>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:06:54 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:48:6c:4e"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <target dev="tap580b9a18-5c"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/926a54a1-96cf-4d45-a783-3b13dc5fbbe1/console.log" append="off"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <video>
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     </video>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:06:54 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:06:54 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:06:54 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:06:54 compute-1 nova_compute[226101]: </domain>
Dec 06 08:06:54 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
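The domain XML dumped above is what nova hands to libvirt. A small sketch of sanity-checking it with the standard library; the XML string below is abbreviated to the disk and interface elements shown in the dump:

    # Inspect the generated domain XML above: list disk sources and the
    # interface target device.
    import xml.etree.ElementTree as ET

    xml_doc = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk"/>
          <target dev="vda" bus="virtio"/>
        </disk>
        <interface type="ethernet">
          <target dev="tap580b9a18-5c"/>
        </interface>
      </devices>
    </domain>"""

    root = ET.fromstring(xml_doc)
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        print(src.get("protocol"), src.get("name"), "->", tgt.get("dev"))
    for iface in root.findall("./devices/interface"):
        print("vif target:", iface.find("target").get("dev"))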
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.868 226109 DEBUG nova.compute.manager [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Preparing to wait for external event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.868 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.869 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.869 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.870 226109 DEBUG nova.virt.libvirt.vif [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-39574177',display_name='tempest-TestNetworkAdvancedServerOps-server-39574177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-39574177',id=192,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGEn9VmFG5AKDWUUMC+9jjLchOz9ok8ZoheVEL8Lpk5Ek0wEhTQ3QYoYfW2hmtWVDKNVdYvLeyszMS7A3hcKtZg/qa2spqV12IhYYvt9ANdB3mURsEw+igU2XgRBC9k0QQ==',key_name='tempest-TestNetworkAdvancedServerOps-829698560',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-7a8dmqq3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:06:47Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=926a54a1-96cf-4d45-a783-3b13dc5fbbe1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.871 226109 DEBUG nova.network.os_vif_util [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.872 226109 DEBUG nova.network.os_vif_util [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:6c:4e,bridge_name='br-int',has_traffic_filtering=True,id=580b9a18-5cd3-4fa5-95ba-d260b21e0197,network=Network(b77bbda0-3ec0-4eca-bc98-409d33462f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580b9a18-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.872 226109 DEBUG os_vif [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:6c:4e,bridge_name='br-int',has_traffic_filtering=True,id=580b9a18-5cd3-4fa5-95ba-d260b21e0197,network=Network(b77bbda0-3ec0-4eca-bc98-409d33462f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580b9a18-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.873 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.874 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.875 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.879 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.879 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580b9a18-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.880 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap580b9a18-5c, col_values=(('external_ids', {'iface-id': '580b9a18-5cd3-4fa5-95ba-d260b21e0197', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:6c:4e', 'vm-uuid': '926a54a1-96cf-4d45-a783-3b13dc5fbbe1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:54 compute-1 NetworkManager[49031]: <info>  [1765008414.8834] manager: (tap580b9a18-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.883 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.885 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.889 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.890 226109 INFO os_vif [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:6c:4e,bridge_name='br-int',has_traffic_filtering=True,id=580b9a18-5cd3-4fa5-95ba-d260b21e0197,network=Network(b77bbda0-3ec0-4eca-bc98-409d33462f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580b9a18-5c')
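The AddPortCommand/DbSetCommand transaction above is os-vif driving OVSDB through ovsdbapp. For reproducing the plug by hand, the equivalent ovs-vsctl invocation (values copied from the transaction log) can be run like this:

    # CLI equivalent of the ovsdbapp transaction above: AddPortCommand with
    # may_exist=True, plus the external_ids written by DbSetCommand.
    import subprocess

    subprocess.run([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tap580b9a18-5c",
        "--", "set", "Interface", "tap580b9a18-5c",
        "external_ids:iface-id=580b9a18-5cd3-4fa5-95ba-d260b21e0197",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:48:6c:4e",
        "external_ids:vm-uuid=926a54a1-96cf-4d45-a783-3b13dc5fbbe1",
    ], check=True)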
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.956 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.956 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.956 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No VIF found with MAC fa:16:3e:48:6c:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.957 226109 INFO nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Using config drive
Dec 06 08:06:54 compute-1 nova_compute[226101]: 2025-12-06 08:06:54.982 226109 DEBUG nova.storage.rbd_utils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:55 compute-1 ceph-mon[81689]: pgmap v3425: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 56 op/s
Dec 06 08:06:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1753148160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:55 compute-1 nova_compute[226101]: 2025-12-06 08:06:55.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:55 compute-1 nova_compute[226101]: 2025-12-06 08:06:55.737 226109 DEBUG nova.network.neutron [req-6d7282f2-76fe-422a-9c38-2c184d8b5c41 req-e2e77924-6336-4fbd-9f14-7d84208dc5a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Updated VIF entry in instance network info cache for port 580b9a18-5cd3-4fa5-95ba-d260b21e0197. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:06:55 compute-1 nova_compute[226101]: 2025-12-06 08:06:55.737 226109 DEBUG nova.network.neutron [req-6d7282f2-76fe-422a-9c38-2c184d8b5c41 req-e2e77924-6336-4fbd-9f14-7d84208dc5a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Updating instance_info_cache with network_info: [{"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:06:55 compute-1 nova_compute[226101]: 2025-12-06 08:06:55.754 226109 DEBUG oslo_concurrency.lockutils [req-6d7282f2-76fe-422a-9c38-2c184d8b5c41 req-e2e77924-6336-4fbd-9f14-7d84208dc5a0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:06:55 compute-1 nova_compute[226101]: 2025-12-06 08:06:55.908 226109 INFO nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Creating config drive at /var/lib/nova/instances/926a54a1-96cf-4d45-a783-3b13dc5fbbe1/disk.config
Dec 06 08:06:55 compute-1 nova_compute[226101]: 2025-12-06 08:06:55.915 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/926a54a1-96cf-4d45-a783-3b13dc5fbbe1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplqspx9h3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:56.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:56 compute-1 sshd-session[303640]: Received disconnect from 186.96.151.198 port 35512:11: Bye Bye [preauth]
Dec 06 08:06:56 compute-1 sshd-session[303640]: Disconnected from authenticating user root 186.96.151.198 port 35512 [preauth]
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.058 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/926a54a1-96cf-4d45-a783-3b13dc5fbbe1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplqspx9h3" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.098 226109 DEBUG nova.storage.rbd_utils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.104 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/926a54a1-96cf-4d45-a783-3b13dc5fbbe1/disk.config 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.134 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.332 226109 DEBUG oslo_concurrency.processutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/926a54a1-96cf-4d45-a783-3b13dc5fbbe1/disk.config 926a54a1-96cf-4d45-a783-3b13dc5fbbe1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.332 226109 INFO nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Deleting local config drive /var/lib/nova/instances/926a54a1-96cf-4d45-a783-3b13dc5fbbe1/disk.config because it was imported into RBD.
Dec 06 08:06:56 compute-1 kernel: tap580b9a18-5c: entered promiscuous mode
Dec 06 08:06:56 compute-1 NetworkManager[49031]: <info>  [1765008416.3832] manager: (tap580b9a18-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/352)
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.382 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:56 compute-1 ovn_controller[130279]: 2025-12-06T08:06:56Z|00774|binding|INFO|Claiming lport 580b9a18-5cd3-4fa5-95ba-d260b21e0197 for this chassis.
Dec 06 08:06:56 compute-1 ovn_controller[130279]: 2025-12-06T08:06:56Z|00775|binding|INFO|580b9a18-5cd3-4fa5-95ba-d260b21e0197: Claiming fa:16:3e:48:6c:4e 10.100.0.12
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.385 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.388 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.393 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.398 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:6c:4e 10.100.0.12'], port_security=['fa:16:3e:48:6c:4e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '926a54a1-96cf-4d45-a783-3b13dc5fbbe1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '62277ff5-e918-4041-90af-6e386137855d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b94bd56a-ca1d-4102-be16-616915bb0b8a, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=580b9a18-5cd3-4fa5-95ba-d260b21e0197) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.399 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 580b9a18-5cd3-4fa5-95ba-d260b21e0197 in datapath b77bbda0-3ec0-4eca-bc98-409d33462f81 bound to our chassis
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.401 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b77bbda0-3ec0-4eca-bc98-409d33462f81
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.414 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[14d0edae-ef19-4c09-97d7-6655e3303766]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.415 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb77bbda0-31 in ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:06:56 compute-1 systemd-machined[190302]: New machine qemu-89-instance-000000c0.
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.417 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb77bbda0-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.417 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e84e7827-c958-4963-a09e-5e8662bff689]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.418 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bd969449-0dcc-4a48-839b-749462fd5579]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ceph-mon[81689]: pgmap v3426: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 948 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.434 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[28ba2c34-6558-4abc-a855-f17993b99b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 systemd[1]: Started Virtual Machine qemu-89-instance-000000c0.
Dec 06 08:06:56 compute-1 ovn_controller[130279]: 2025-12-06T08:06:56Z|00776|binding|INFO|Setting lport 580b9a18-5cd3-4fa5-95ba-d260b21e0197 ovn-installed in OVS
Dec 06 08:06:56 compute-1 ovn_controller[130279]: 2025-12-06T08:06:56Z|00777|binding|INFO|Setting lport 580b9a18-5cd3-4fa5-95ba-d260b21e0197 up in Southbound
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.451 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.457 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[89307eba-ba3e-466a-b0cb-003827b80428]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 systemd-udevd[303700]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:06:56 compute-1 NetworkManager[49031]: <info>  [1765008416.4732] device (tap580b9a18-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:06:56 compute-1 NetworkManager[49031]: <info>  [1765008416.4743] device (tap580b9a18-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.485 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b346aa-affa-4881-85a3-2c23990f9f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.490 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[abd16fee-bfd3-4f9d-9129-04d7bc4bf631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 systemd-udevd[303704]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:06:56 compute-1 NetworkManager[49031]: <info>  [1765008416.4916] manager: (tapb77bbda0-30): new Veth device (/org/freedesktop/NetworkManager/Devices/353)
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.518 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[98f7a636-c275-41c1-9cf8-13c4389c6894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.521 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d65a9614-bc34-43b7-8eaa-c85c6fe7f4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 NetworkManager[49031]: <info>  [1765008416.5458] device (tapb77bbda0-30): carrier: link connected
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.553 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9e6678-4dc3-4b0f-bf28-c82b96a24db6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.573 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[32320c41-1c05-4f63-99a8-803bb282e15d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb77bbda0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:e0:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869604, 'reachable_time': 28654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303730, 'error': None, 'target': 'ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.586 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7bddd94d-4911-4bb9-a99c-6534077d539f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:e0e0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869604, 'tstamp': 869604}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303731, 'error': None, 'target': 'ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.601 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[90132905-96bf-41f7-9a19-207e938532c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb77bbda0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:e0:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869604, 'reachable_time': 28654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303732, 'error': None, 'target': 'ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.628 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f94d02d2-7808-4d70-bf0c-ec12e834c04b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.679 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e7561d06-5453-466d-9d10-175905d2c938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.681 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb77bbda0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.681 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.681 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb77bbda0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:56 compute-1 NetworkManager[49031]: <info>  [1765008416.6839] manager: (tapb77bbda0-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Dec 06 08:06:56 compute-1 kernel: tapb77bbda0-30: entered promiscuous mode
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.683 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.685 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.686 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb77bbda0-30, col_values=(('external_ids', {'iface-id': '1ecd9e10-10b5-4b9c-ba67-e57c9f44b5b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:56 compute-1 ovn_controller[130279]: 2025-12-06T08:06:56Z|00778|binding|INFO|Releasing lport 1ecd9e10-10b5-4b9c-ba67-e57c9f44b5b1 from this chassis (sb_readonly=0)
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.700 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.701 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b77bbda0-3ec0-4eca-bc98-409d33462f81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b77bbda0-3ec0-4eca-bc98-409d33462f81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.702 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d8009f-4512-45af-bf7e-c37ef843bb74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.703 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-b77bbda0-3ec0-4eca-bc98-409d33462f81
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/b77bbda0-3ec0-4eca-bc98-409d33462f81.pid.haproxy
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID b77bbda0-3ec0-4eca-bc98-409d33462f81
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:06:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:06:56.703 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'env', 'PROCESS_TAG=haproxy-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b77bbda0-3ec0-4eca-bc98-409d33462f81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:06:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:56.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.891 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008416.8905282, 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.891 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] VM Started (Lifecycle Event)
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.911 226109 DEBUG nova.compute.manager [req-be41a314-698a-4708-8519-2234ab697136 req-de4eff56-aa69-4714-a0cb-c8d814b20a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.911 226109 DEBUG oslo_concurrency.lockutils [req-be41a314-698a-4708-8519-2234ab697136 req-de4eff56-aa69-4714-a0cb-c8d814b20a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.912 226109 DEBUG oslo_concurrency.lockutils [req-be41a314-698a-4708-8519-2234ab697136 req-de4eff56-aa69-4714-a0cb-c8d814b20a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.912 226109 DEBUG oslo_concurrency.lockutils [req-be41a314-698a-4708-8519-2234ab697136 req-de4eff56-aa69-4714-a0cb-c8d814b20a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.912 226109 DEBUG nova.compute.manager [req-be41a314-698a-4708-8519-2234ab697136 req-de4eff56-aa69-4714-a0cb-c8d814b20a88 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Processing event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.913 226109 DEBUG nova.compute.manager [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.913 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.918 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.920 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.928 226109 INFO nova.virt.libvirt.driver [-] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Instance spawned successfully.
Dec 06 08:06:56 compute-1 nova_compute[226101]: 2025-12-06 08:06:56.928 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.037 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.037 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008416.8910897, 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.038 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] VM Paused (Lifecycle Event)
Dec 06 08:06:57 compute-1 sshd-session[303642]: Received disconnect from 101.100.194.199 port 58508:11: Bye Bye [preauth]
Dec 06 08:06:57 compute-1 sshd-session[303642]: Disconnected from authenticating user root 101.100.194.199 port 58508 [preauth]
Dec 06 08:06:57 compute-1 podman[303806]: 2025-12-06 08:06:57.080652039 +0000 UTC m=+0.046658291 container create d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:06:57 compute-1 systemd[1]: Started libpod-conmon-d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479.scope.
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.113 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.119 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.119 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.120 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.120 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.121 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.121 226109 DEBUG nova.virt.libvirt.driver [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.125 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008416.9164553, 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.126 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] VM Resumed (Lifecycle Event)
Dec 06 08:06:57 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:06:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15400b9c272acdc9be43f94412a1d9dcc439b0ba64b90e5160ccf5be963997b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:57 compute-1 podman[303806]: 2025-12-06 08:06:57.055378122 +0000 UTC m=+0.021384394 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:06:57 compute-1 podman[303806]: 2025-12-06 08:06:57.158715568 +0000 UTC m=+0.124721830 container init d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.162 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.166 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:06:57 compute-1 podman[303806]: 2025-12-06 08:06:57.167205106 +0000 UTC m=+0.133211358 container start d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:06:57 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[303821]: [NOTICE]   (303825) : New worker (303827) forked
Dec 06 08:06:57 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[303821]: [NOTICE]   (303825) : Loading success.
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.204 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.225 226109 INFO nova.compute.manager [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Took 9.40 seconds to spawn the instance on the hypervisor.
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.225 226109 DEBUG nova.compute.manager [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.293 226109 INFO nova.compute.manager [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Took 10.52 seconds to build instance.
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.307 226109 DEBUG oslo_concurrency.lockutils [None req-1eb8e013-5beb-46ec-82d7-e8f52476562e 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:57 compute-1 nova_compute[226101]: 2025-12-06 08:06:57.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:58.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:58 compute-1 ceph-mon[81689]: pgmap v3427: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 104 op/s
Dec 06 08:06:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:06:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:58.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:59 compute-1 nova_compute[226101]: 2025-12-06 08:06:59.018 226109 DEBUG nova.compute.manager [req-0f481d37-4496-427a-a681-dfe99127471c req-f4713ab7-2027-422b-be3e-2c822f4edc9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:59 compute-1 nova_compute[226101]: 2025-12-06 08:06:59.019 226109 DEBUG oslo_concurrency.lockutils [req-0f481d37-4496-427a-a681-dfe99127471c req-f4713ab7-2027-422b-be3e-2c822f4edc9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:59 compute-1 nova_compute[226101]: 2025-12-06 08:06:59.019 226109 DEBUG oslo_concurrency.lockutils [req-0f481d37-4496-427a-a681-dfe99127471c req-f4713ab7-2027-422b-be3e-2c822f4edc9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:59 compute-1 nova_compute[226101]: 2025-12-06 08:06:59.020 226109 DEBUG oslo_concurrency.lockutils [req-0f481d37-4496-427a-a681-dfe99127471c req-f4713ab7-2027-422b-be3e-2c822f4edc9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:59 compute-1 nova_compute[226101]: 2025-12-06 08:06:59.020 226109 DEBUG nova.compute.manager [req-0f481d37-4496-427a-a681-dfe99127471c req-f4713ab7-2027-422b-be3e-2c822f4edc9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] No waiting events found dispatching network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:06:59 compute-1 nova_compute[226101]: 2025-12-06 08:06:59.021 226109 WARNING nova.compute.manager [req-0f481d37-4496-427a-a681-dfe99127471c req-f4713ab7-2027-422b-be3e-2c822f4edc9b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received unexpected event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 for instance with vm_state active and task_state None.
Dec 06 08:06:59 compute-1 nova_compute[226101]: 2025-12-06 08:06:59.884 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:00.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:00 compute-1 ceph-mon[81689]: pgmap v3428: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec 06 08:07:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:00.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:01 compute-1 nova_compute[226101]: 2025-12-06 08:07:01.101 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:01 compute-1 NetworkManager[49031]: <info>  [1765008421.1227] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Dec 06 08:07:01 compute-1 NetworkManager[49031]: <info>  [1765008421.1261] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Dec 06 08:07:01 compute-1 nova_compute[226101]: 2025-12-06 08:07:01.130 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:01 compute-1 nova_compute[226101]: 2025-12-06 08:07:01.232 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:01 compute-1 ovn_controller[130279]: 2025-12-06T08:07:01Z|00779|binding|INFO|Releasing lport 1ecd9e10-10b5-4b9c-ba67-e57c9f44b5b1 from this chassis (sb_readonly=0)
Dec 06 08:07:01 compute-1 nova_compute[226101]: 2025-12-06 08:07:01.241 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:01.686 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:01.688 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:01.689 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:01 compute-1 nova_compute[226101]: 2025-12-06 08:07:01.837 226109 DEBUG nova.compute.manager [req-9b56929c-a2d4-4433-8a91-df2ae05b63fa req-1750bf21-fdfc-4a12-878b-25e2a93ab71d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-changed-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:01 compute-1 nova_compute[226101]: 2025-12-06 08:07:01.838 226109 DEBUG nova.compute.manager [req-9b56929c-a2d4-4433-8a91-df2ae05b63fa req-1750bf21-fdfc-4a12-878b-25e2a93ab71d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Refreshing instance network info cache due to event network-changed-580b9a18-5cd3-4fa5-95ba-d260b21e0197. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:07:01 compute-1 nova_compute[226101]: 2025-12-06 08:07:01.838 226109 DEBUG oslo_concurrency.lockutils [req-9b56929c-a2d4-4433-8a91-df2ae05b63fa req-1750bf21-fdfc-4a12-878b-25e2a93ab71d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:07:01 compute-1 nova_compute[226101]: 2025-12-06 08:07:01.838 226109 DEBUG oslo_concurrency.lockutils [req-9b56929c-a2d4-4433-8a91-df2ae05b63fa req-1750bf21-fdfc-4a12-878b-25e2a93ab71d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:07:01 compute-1 nova_compute[226101]: 2025-12-06 08:07:01.838 226109 DEBUG nova.network.neutron [req-9b56929c-a2d4-4433-8a91-df2ae05b63fa req-1750bf21-fdfc-4a12-878b-25e2a93ab71d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Refreshing network info cache for port 580b9a18-5cd3-4fa5-95ba-d260b21e0197 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:07:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:02 compute-1 ceph-mon[81689]: pgmap v3429: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Dec 06 08:07:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:02.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:03 compute-1 nova_compute[226101]: 2025-12-06 08:07:03.294 226109 DEBUG nova.network.neutron [req-9b56929c-a2d4-4433-8a91-df2ae05b63fa req-1750bf21-fdfc-4a12-878b-25e2a93ab71d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Updated VIF entry in instance network info cache for port 580b9a18-5cd3-4fa5-95ba-d260b21e0197. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:07:03 compute-1 nova_compute[226101]: 2025-12-06 08:07:03.295 226109 DEBUG nova.network.neutron [req-9b56929c-a2d4-4433-8a91-df2ae05b63fa req-1750bf21-fdfc-4a12-878b-25e2a93ab71d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Updating instance_info_cache with network_info: [{"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:07:03 compute-1 nova_compute[226101]: 2025-12-06 08:07:03.313 226109 DEBUG oslo_concurrency.lockutils [req-9b56929c-a2d4-4433-8a91-df2ae05b63fa req-1750bf21-fdfc-4a12-878b-25e2a93ab71d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:07:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:04.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:04 compute-1 sshd-session[303838]: Received disconnect from 154.219.116.39 port 48444:11: Bye Bye [preauth]
Dec 06 08:07:04 compute-1 sshd-session[303838]: Disconnected from authenticating user root 154.219.116.39 port 48444 [preauth]
Dec 06 08:07:04 compute-1 ceph-mon[81689]: pgmap v3430: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 144 op/s
Dec 06 08:07:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:04.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:04 compute-1 nova_compute[226101]: 2025-12-06 08:07:04.888 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:04 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Dec 06 08:07:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:06.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:06 compute-1 nova_compute[226101]: 2025-12-06 08:07:06.101 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:06 compute-1 ceph-mon[81689]: pgmap v3431: 305 pgs: 305 active+clean; 219 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 653 KiB/s wr, 161 op/s
Dec 06 08:07:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:06.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:08.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:08.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:09 compute-1 ceph-mon[81689]: pgmap v3432: 305 pgs: 305 active+clean; 239 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 147 op/s
Dec 06 08:07:09 compute-1 nova_compute[226101]: 2025-12-06 08:07:09.893 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:10.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2782371719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:07:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2782371719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:07:10 compute-1 ceph-mon[81689]: pgmap v3433: 305 pgs: 305 active+clean; 239 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 111 op/s
Dec 06 08:07:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:10.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:10 compute-1 ovn_controller[130279]: 2025-12-06T08:07:10Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:6c:4e 10.100.0.12
Dec 06 08:07:10 compute-1 ovn_controller[130279]: 2025-12-06T08:07:10Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:6c:4e 10.100.0.12
Dec 06 08:07:11 compute-1 nova_compute[226101]: 2025-12-06 08:07:11.102 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:12.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:12 compute-1 ceph-mon[81689]: pgmap v3434: 305 pgs: 305 active+clean; 272 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 181 op/s
Dec 06 08:07:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:12.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:14.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:14 compute-1 ceph-mon[81689]: pgmap v3435: 305 pgs: 305 active+clean; 272 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 508 KiB/s rd, 4.1 MiB/s wr, 108 op/s
Dec 06 08:07:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:14.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:14 compute-1 nova_compute[226101]: 2025-12-06 08:07:14.896 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:16 compute-1 podman[303840]: 2025-12-06 08:07:16.09617282 +0000 UTC m=+0.062776202 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 06 08:07:16 compute-1 nova_compute[226101]: 2025-12-06 08:07:16.104 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:16 compute-1 podman[303842]: 2025-12-06 08:07:16.127783835 +0000 UTC m=+0.097652954 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:07:16 compute-1 podman[303841]: 2025-12-06 08:07:16.127963701 +0000 UTC m=+0.092184139 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 06 08:07:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:16 compute-1 ceph-mon[81689]: pgmap v3436: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Dec 06 08:07:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:16.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:17 compute-1 nova_compute[226101]: 2025-12-06 08:07:17.375 226109 INFO nova.compute.manager [None req-e54d22ff-7ffa-4706-9d93-dd4f28d089d0 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Get console output
Dec 06 08:07:17 compute-1 nova_compute[226101]: 2025-12-06 08:07:17.382 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:07:17 compute-1 nova_compute[226101]: 2025-12-06 08:07:17.824 226109 DEBUG oslo_concurrency.lockutils [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:17 compute-1 nova_compute[226101]: 2025-12-06 08:07:17.825 226109 DEBUG oslo_concurrency.lockutils [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:17 compute-1 nova_compute[226101]: 2025-12-06 08:07:17.825 226109 INFO nova.compute.manager [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Rebooting instance
Dec 06 08:07:17 compute-1 nova_compute[226101]: 2025-12-06 08:07:17.846 226109 DEBUG oslo_concurrency.lockutils [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:07:17 compute-1 nova_compute[226101]: 2025-12-06 08:07:17.847 226109 DEBUG oslo_concurrency.lockutils [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:07:17 compute-1 nova_compute[226101]: 2025-12-06 08:07:17.847 226109 DEBUG nova.network.neutron [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:07:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:18.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:18 compute-1 ceph-mon[81689]: pgmap v3437: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 3.6 MiB/s wr, 110 op/s
Dec 06 08:07:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:18.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:19 compute-1 nova_compute[226101]: 2025-12-06 08:07:19.901 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:20.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:20 compute-1 nova_compute[226101]: 2025-12-06 08:07:20.271 226109 DEBUG nova.network.neutron [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Updating instance_info_cache with network_info: [{"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:07:20 compute-1 nova_compute[226101]: 2025-12-06 08:07:20.301 226109 DEBUG oslo_concurrency.lockutils [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:07:20 compute-1 nova_compute[226101]: 2025-12-06 08:07:20.303 226109 DEBUG nova.compute.manager [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:07:20 compute-1 ceph-mon[81689]: pgmap v3438: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 2.7 MiB/s wr, 88 op/s
Dec 06 08:07:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:20.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:21 compute-1 nova_compute[226101]: 2025-12-06 08:07:21.107 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:22 compute-1 ceph-mon[81689]: pgmap v3439: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 478 KiB/s rd, 2.8 MiB/s wr, 91 op/s
Dec 06 08:07:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 08:07:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:22.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 08:07:23 compute-1 kernel: tap580b9a18-5c (unregistering): left promiscuous mode
Dec 06 08:07:23 compute-1 NetworkManager[49031]: <info>  [1765008443.0142] device (tap580b9a18-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:07:23 compute-1 ovn_controller[130279]: 2025-12-06T08:07:23Z|00780|binding|INFO|Releasing lport 580b9a18-5cd3-4fa5-95ba-d260b21e0197 from this chassis (sb_readonly=0)
Dec 06 08:07:23 compute-1 nova_compute[226101]: 2025-12-06 08:07:23.027 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:23 compute-1 ovn_controller[130279]: 2025-12-06T08:07:23Z|00781|binding|INFO|Setting lport 580b9a18-5cd3-4fa5-95ba-d260b21e0197 down in Southbound
Dec 06 08:07:23 compute-1 ovn_controller[130279]: 2025-12-06T08:07:23Z|00782|binding|INFO|Removing iface tap580b9a18-5c ovn-installed in OVS
Dec 06 08:07:23 compute-1 nova_compute[226101]: 2025-12-06 08:07:23.031 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.035 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:6c:4e 10.100.0.12'], port_security=['fa:16:3e:48:6c:4e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '926a54a1-96cf-4d45-a783-3b13dc5fbbe1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62277ff5-e918-4041-90af-6e386137855d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b94bd56a-ca1d-4102-be16-616915bb0b8a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=580b9a18-5cd3-4fa5-95ba-d260b21e0197) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.037 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 580b9a18-5cd3-4fa5-95ba-d260b21e0197 in datapath b77bbda0-3ec0-4eca-bc98-409d33462f81 unbound from our chassis
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.038 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b77bbda0-3ec0-4eca-bc98-409d33462f81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.039 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d6952e11-9864-4a2c-816c-6fd4fa91161e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.040 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81 namespace which is not needed anymore
Dec 06 08:07:23 compute-1 nova_compute[226101]: 2025-12-06 08:07:23.053 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:23 compute-1 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c0.scope: Deactivated successfully.
Dec 06 08:07:23 compute-1 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c0.scope: Consumed 14.373s CPU time.
Dec 06 08:07:23 compute-1 systemd-machined[190302]: Machine qemu-89-instance-000000c0 terminated.
Dec 06 08:07:23 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[303821]: [NOTICE]   (303825) : haproxy version is 2.8.14-c23fe91
Dec 06 08:07:23 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[303821]: [NOTICE]   (303825) : path to executable is /usr/sbin/haproxy
Dec 06 08:07:23 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[303821]: [WARNING]  (303825) : Exiting Master process...
Dec 06 08:07:23 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[303821]: [ALERT]    (303825) : Current worker (303827) exited with code 143 (Terminated)
Dec 06 08:07:23 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[303821]: [WARNING]  (303825) : All workers exited. Exiting... (0)
Dec 06 08:07:23 compute-1 systemd[1]: libpod-d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479.scope: Deactivated successfully.
Dec 06 08:07:23 compute-1 podman[303930]: 2025-12-06 08:07:23.234620238 +0000 UTC m=+0.067537469 container died d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:07:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-e15400b9c272acdc9be43f94412a1d9dcc439b0ba64b90e5160ccf5be963997b-merged.mount: Deactivated successfully.
Dec 06 08:07:23 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479-userdata-shm.mount: Deactivated successfully.
Dec 06 08:07:23 compute-1 podman[303930]: 2025-12-06 08:07:23.274557857 +0000 UTC m=+0.107475088 container cleanup d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:07:23 compute-1 systemd[1]: libpod-conmon-d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479.scope: Deactivated successfully.
Dec 06 08:07:23 compute-1 podman[303971]: 2025-12-06 08:07:23.342917938 +0000 UTC m=+0.046093185 container remove d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.348 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cdafa715-ca27-4284-ba36-6c6083334ce5]: (4, ('Sat Dec  6 08:07:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81 (d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479)\nd35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479\nSat Dec  6 08:07:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81 (d35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479)\nd35adee394b54a56c0329bad6dc91c30aff09b1050ff8061d2b08bf8dcb97479\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.349 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ad6730-a2cd-4abd-b8cb-423bc525afaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.350 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb77bbda0-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:23 compute-1 nova_compute[226101]: 2025-12-06 08:07:23.352 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:23 compute-1 kernel: tapb77bbda0-30: left promiscuous mode
Dec 06 08:07:23 compute-1 nova_compute[226101]: 2025-12-06 08:07:23.368 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:23 compute-1 nova_compute[226101]: 2025-12-06 08:07:23.370 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.370 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ca8123-e4c5-453a-af89-28e1eba93ffe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.385 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[54708eae-fa83-499e-bf7f-fb5b7cc4971f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.387 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1b2e59-2811-4669-9e87-c053963ec150]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.403 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[272ede09-365a-4173-aa52-ee7ce0c37921]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869598, 'reachable_time': 43000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303990, 'error': None, 'target': 'ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.405 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:07:23 compute-1 systemd[1]: run-netns-ovnmeta\x2db77bbda0\x2d3ec0\x2d4eca\x2dbc98\x2d409d33462f81.mount: Deactivated successfully.
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.405 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[233e9006-fbb2-4a1d-a91a-645e139c0950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 nova_compute[226101]: 2025-12-06 08:07:23.662 226109 INFO nova.virt.libvirt.driver [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Instance shutdown successfully.
Dec 06 08:07:23 compute-1 kernel: tap580b9a18-5c: entered promiscuous mode
Dec 06 08:07:23 compute-1 systemd-udevd[303909]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:07:23 compute-1 NetworkManager[49031]: <info>  [1765008443.7264] manager: (tap580b9a18-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/357)
Dec 06 08:07:23 compute-1 NetworkManager[49031]: <info>  [1765008443.7640] device (tap580b9a18-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:07:23 compute-1 NetworkManager[49031]: <info>  [1765008443.7647] device (tap580b9a18-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:07:23 compute-1 ovn_controller[130279]: 2025-12-06T08:07:23Z|00783|binding|INFO|Claiming lport 580b9a18-5cd3-4fa5-95ba-d260b21e0197 for this chassis.
Dec 06 08:07:23 compute-1 ovn_controller[130279]: 2025-12-06T08:07:23Z|00784|binding|INFO|580b9a18-5cd3-4fa5-95ba-d260b21e0197: Claiming fa:16:3e:48:6c:4e 10.100.0.12
Dec 06 08:07:23 compute-1 nova_compute[226101]: 2025-12-06 08:07:23.766 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.772 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:6c:4e 10.100.0.12'], port_security=['fa:16:3e:48:6c:4e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '926a54a1-96cf-4d45-a783-3b13dc5fbbe1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '62277ff5-e918-4041-90af-6e386137855d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b94bd56a-ca1d-4102-be16-616915bb0b8a, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=580b9a18-5cd3-4fa5-95ba-d260b21e0197) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.773 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 580b9a18-5cd3-4fa5-95ba-d260b21e0197 in datapath b77bbda0-3ec0-4eca-bc98-409d33462f81 bound to our chassis
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.774 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b77bbda0-3ec0-4eca-bc98-409d33462f81
Dec 06 08:07:23 compute-1 ovn_controller[130279]: 2025-12-06T08:07:23Z|00785|binding|INFO|Setting lport 580b9a18-5cd3-4fa5-95ba-d260b21e0197 ovn-installed in OVS
Dec 06 08:07:23 compute-1 ovn_controller[130279]: 2025-12-06T08:07:23Z|00786|binding|INFO|Setting lport 580b9a18-5cd3-4fa5-95ba-d260b21e0197 up in Southbound
Dec 06 08:07:23 compute-1 nova_compute[226101]: 2025-12-06 08:07:23.780 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:23 compute-1 nova_compute[226101]: 2025-12-06 08:07:23.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.786 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[40e6bc21-a671-4169-a3dd-97d84427a632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.787 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb77bbda0-31 in ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.789 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb77bbda0-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.789 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b905138e-cb07-4ef3-b77a-4cd4a1518ee5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.790 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7307970c-d184-40ab-ba2d-dbada3dba715]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 systemd-machined[190302]: New machine qemu-90-instance-000000c0.
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.802 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[c155b0ea-16d3-4022-9b18-c3ce6c834649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 systemd[1]: Started Virtual Machine qemu-90-instance-000000c0.
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.827 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[51187077-69ba-469a-82ce-e183e855804c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.856 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[efd3c924-93b9-4017-b736-0c4762d232c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 NetworkManager[49031]: <info>  [1765008443.8627] manager: (tapb77bbda0-30): new Veth device (/org/freedesktop/NetworkManager/Devices/358)
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.862 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[83ebb810-cb4b-4527-bee6-18bd5f2ef8de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.894 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[00be8a56-83d9-4211-84b3-c7a180e83122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.897 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[9551b4b7-0253-46f0-886e-3040bb8f30fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 NetworkManager[49031]: <info>  [1765008443.9173] device (tapb77bbda0-30): carrier: link connected
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.925 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[26f3bdab-6206-47f8-aef6-5be46e2a5b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.940 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[70f9d243-fe63-4be6-9176-7dfddd96e795]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb77bbda0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:e0:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 872342, 'reachable_time': 32767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304034, 'error': None, 'target': 'ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.954 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[21c4b59e-9ba9-48f0-ac48-7efdbcbb575a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:e0e0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 872342, 'tstamp': 872342}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304035, 'error': None, 'target': 'ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.968 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9c916c82-ed50-4b1d-a399-77207b63d658]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb77bbda0-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:e0:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 872342, 'reachable_time': 32767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304036, 'error': None, 'target': 'ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:23.994 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0d9ed0bc-24ad-4644-a5cc-d407b4b48bf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:24.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:24.045 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a119b8d6-e863-4cd0-9717-a333e64f314b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:24.046 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb77bbda0-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:24.047 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:07:24 compute-1 kernel: tapb77bbda0-30: entered promiscuous mode
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:24.047 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb77bbda0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:24 compute-1 NetworkManager[49031]: <info>  [1765008444.0494] manager: (tapb77bbda0-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.048 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.050 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:24.050 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb77bbda0-30, col_values=(('external_ids', {'iface-id': '1ecd9e10-10b5-4b9c-ba67-e57c9f44b5b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:24 compute-1 ovn_controller[130279]: 2025-12-06T08:07:24Z|00787|binding|INFO|Releasing lport 1ecd9e10-10b5-4b9c-ba67-e57c9f44b5b1 from this chassis (sb_readonly=0)
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.051 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.063 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:24.063 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b77bbda0-3ec0-4eca-bc98-409d33462f81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b77bbda0-3ec0-4eca-bc98-409d33462f81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:24.065 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e7246895-9f48-49d2-a8b0-4a4b48c38573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:24.065 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-b77bbda0-3ec0-4eca-bc98-409d33462f81
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/b77bbda0-3ec0-4eca-bc98-409d33462f81.pid.haproxy
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID b77bbda0-3ec0-4eca-bc98-409d33462f81
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:07:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:24.066 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'env', 'PROCESS_TAG=haproxy-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b77bbda0-3ec0-4eca-bc98-409d33462f81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.252 226109 DEBUG nova.compute.manager [req-6db4a0b1-7009-4394-b811-a7d187a132ed req-2549edd7-3694-460e-9b97-5cf8f4e187d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-vif-unplugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.253 226109 DEBUG oslo_concurrency.lockutils [req-6db4a0b1-7009-4394-b811-a7d187a132ed req-2549edd7-3694-460e-9b97-5cf8f4e187d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.253 226109 DEBUG oslo_concurrency.lockutils [req-6db4a0b1-7009-4394-b811-a7d187a132ed req-2549edd7-3694-460e-9b97-5cf8f4e187d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.254 226109 DEBUG oslo_concurrency.lockutils [req-6db4a0b1-7009-4394-b811-a7d187a132ed req-2549edd7-3694-460e-9b97-5cf8f4e187d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.254 226109 DEBUG nova.compute.manager [req-6db4a0b1-7009-4394-b811-a7d187a132ed req-2549edd7-3694-460e-9b97-5cf8f4e187d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] No waiting events found dispatching network-vif-unplugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.254 226109 WARNING nova.compute.manager [req-6db4a0b1-7009-4394-b811-a7d187a132ed req-2549edd7-3694-460e-9b97-5cf8f4e187d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received unexpected event network-vif-unplugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 for instance with vm_state active and task_state reboot_started.
Dec 06 08:07:24 compute-1 podman[304110]: 2025-12-06 08:07:24.395142009 +0000 UTC m=+0.041917644 container create 3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.398 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.399 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008444.3979645, 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.399 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] VM Resumed (Lifecycle Event)
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.405 226109 INFO nova.virt.libvirt.driver [-] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Instance running successfully.
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.405 226109 INFO nova.virt.libvirt.driver [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Instance soft rebooted successfully.
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.405 226109 DEBUG nova.compute.manager [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:07:24 compute-1 systemd[1]: Started libpod-conmon-3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233.scope.
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.444 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.447 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.456 226109 DEBUG oslo_concurrency.lockutils [None req-db2ef104-ed7d-4ee3-a57e-9be8af1fc14b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:24 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:07:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98e4dbf89d8d5856ece9902c1f98a4c92698dde91c0d835ea377245536623a1e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:24 compute-1 podman[304110]: 2025-12-06 08:07:24.372933824 +0000 UTC m=+0.019709479 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.470 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008444.4024806, 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.470 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] VM Started (Lifecycle Event)
Dec 06 08:07:24 compute-1 podman[304110]: 2025-12-06 08:07:24.482375654 +0000 UTC m=+0.129151309 container init 3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.489 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:07:24 compute-1 podman[304110]: 2025-12-06 08:07:24.493888623 +0000 UTC m=+0.140664258 container start 3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.493 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:07:24 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[304126]: [NOTICE]   (304130) : New worker (304132) forked
Dec 06 08:07:24 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[304126]: [NOTICE]   (304130) : Loading success.
Dec 06 08:07:24 compute-1 ceph-mon[81689]: pgmap v3440: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 192 KiB/s wr, 20 op/s
Dec 06 08:07:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:24.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:24 compute-1 nova_compute[226101]: 2025-12-06 08:07:24.946 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:26.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.151 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.367 226109 DEBUG nova.compute.manager [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.367 226109 DEBUG oslo_concurrency.lockutils [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.368 226109 DEBUG oslo_concurrency.lockutils [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.368 226109 DEBUG oslo_concurrency.lockutils [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.368 226109 DEBUG nova.compute.manager [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] No waiting events found dispatching network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.368 226109 WARNING nova.compute.manager [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received unexpected event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 for instance with vm_state active and task_state None.
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.368 226109 DEBUG nova.compute.manager [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.369 226109 DEBUG oslo_concurrency.lockutils [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.369 226109 DEBUG oslo_concurrency.lockutils [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.369 226109 DEBUG oslo_concurrency.lockutils [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.369 226109 DEBUG nova.compute.manager [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] No waiting events found dispatching network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.370 226109 WARNING nova.compute.manager [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received unexpected event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 for instance with vm_state active and task_state None.
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.370 226109 DEBUG nova.compute.manager [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.370 226109 DEBUG oslo_concurrency.lockutils [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.370 226109 DEBUG oslo_concurrency.lockutils [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.371 226109 DEBUG oslo_concurrency.lockutils [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.371 226109 DEBUG nova.compute.manager [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] No waiting events found dispatching network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:07:26 compute-1 nova_compute[226101]: 2025-12-06 08:07:26.371 226109 WARNING nova.compute.manager [req-ce0a47e5-f2a1-488f-bf62-0cda94d08fd8 req-14427713-f9c5-4762-8347-121ef4405e60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received unexpected event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 for instance with vm_state active and task_state None.
Dec 06 08:07:26 compute-1 ceph-mon[81689]: pgmap v3441: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 200 KiB/s wr, 35 op/s
Dec 06 08:07:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:26.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:27 compute-1 nova_compute[226101]: 2025-12-06 08:07:27.395 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:27.395 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:07:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:27.396 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.834396) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447834490, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2355, "num_deletes": 251, "total_data_size": 5709439, "memory_usage": 5759616, "flush_reason": "Manual Compaction"}
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447857922, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 3722802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72567, "largest_seqno": 74917, "table_properties": {"data_size": 3713404, "index_size": 5891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19496, "raw_average_key_size": 20, "raw_value_size": 3694645, "raw_average_value_size": 3848, "num_data_blocks": 258, "num_entries": 960, "num_filter_entries": 960, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008237, "oldest_key_time": 1765008237, "file_creation_time": 1765008447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 23534 microseconds, and 8158 cpu microseconds.
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.857972) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 3722802 bytes OK
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.857994) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.859802) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.859817) EVENT_LOG_v1 {"time_micros": 1765008447859811, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.859836) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 5699099, prev total WAL file size 5699099, number of live WAL files 2.
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.861240) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(3635KB)], [147(11MB)]
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447861314, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 16202044, "oldest_snapshot_seqno": -1}
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 10473 keys, 14220628 bytes, temperature: kUnknown
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447956249, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 14220628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14152040, "index_size": 41275, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26245, "raw_key_size": 276246, "raw_average_key_size": 26, "raw_value_size": 13967566, "raw_average_value_size": 1333, "num_data_blocks": 1573, "num_entries": 10473, "num_filter_entries": 10473, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765008447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.956558) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 14220628 bytes
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.958111) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.5 rd, 149.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 11.9 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 10990, records dropped: 517 output_compression: NoCompression
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.958136) EVENT_LOG_v1 {"time_micros": 1765008447958125, "job": 94, "event": "compaction_finished", "compaction_time_micros": 95014, "compaction_time_cpu_micros": 31361, "output_level": 6, "num_output_files": 1, "total_output_size": 14220628, "num_input_records": 10990, "num_output_records": 10473, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447959177, "job": 94, "event": "table_file_deletion", "file_number": 149}
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447962333, "job": 94, "event": "table_file_deletion", "file_number": 147}
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.861097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.962381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.962387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.962389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.962390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:27 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:27.962391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:28.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:28 compute-1 ceph-mon[81689]: pgmap v3442: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 848 KiB/s rd, 34 KiB/s wr, 36 op/s
Dec 06 08:07:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:28.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:29 compute-1 nova_compute[226101]: 2025-12-06 08:07:29.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:30.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:30.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:30 compute-1 ceph-mon[81689]: pgmap v3443: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 847 KiB/s rd, 34 KiB/s wr, 36 op/s
Dec 06 08:07:31 compute-1 nova_compute[226101]: 2025-12-06 08:07:31.153 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:32.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:32 compute-1 sudo[304141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:32 compute-1 sudo[304141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:32 compute-1 sudo[304141]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:32 compute-1 nova_compute[226101]: 2025-12-06 08:07:32.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:32 compute-1 sudo[304166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:07:32 compute-1 sudo[304166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:32 compute-1 sudo[304166]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:32 compute-1 sudo[304191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:32 compute-1 sudo[304191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:32 compute-1 sudo[304191]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:32 compute-1 sudo[304216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:07:32 compute-1 sudo[304216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:32 compute-1 nova_compute[226101]: 2025-12-06 08:07:32.732 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:32 compute-1 nova_compute[226101]: 2025-12-06 08:07:32.732 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:32 compute-1 nova_compute[226101]: 2025-12-06 08:07:32.732 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:32 compute-1 nova_compute[226101]: 2025-12-06 08:07:32.732 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:07:32 compute-1 nova_compute[226101]: 2025-12-06 08:07:32.732 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:07:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:32.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:32 compute-1 ceph-mon[81689]: pgmap v3444: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 73 op/s
Dec 06 08:07:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:07:33 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1761730012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:33 compute-1 nova_compute[226101]: 2025-12-06 08:07:33.169 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:07:33 compute-1 sudo[304216]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:33 compute-1 nova_compute[226101]: 2025-12-06 08:07:33.292 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000c0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:07:33 compute-1 nova_compute[226101]: 2025-12-06 08:07:33.293 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000c0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:07:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:33.398 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:33 compute-1 nova_compute[226101]: 2025-12-06 08:07:33.429 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:07:33 compute-1 nova_compute[226101]: 2025-12-06 08:07:33.430 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4122MB free_disk=20.897109985351562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:07:33 compute-1 nova_compute[226101]: 2025-12-06 08:07:33.430 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:33 compute-1 nova_compute[226101]: 2025-12-06 08:07:33.431 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:33 compute-1 nova_compute[226101]: 2025-12-06 08:07:33.619 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:07:33 compute-1 nova_compute[226101]: 2025-12-06 08:07:33.619 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:07:33 compute-1 nova_compute[226101]: 2025-12-06 08:07:33.619 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:07:33 compute-1 nova_compute[226101]: 2025-12-06 08:07:33.704 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:07:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1761730012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:07:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:07:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:07:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:07:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:07:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:07:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:07:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:34.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:07:34 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2083970358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:34 compute-1 nova_compute[226101]: 2025-12-06 08:07:34.122 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:07:34 compute-1 nova_compute[226101]: 2025-12-06 08:07:34.128 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:07:34 compute-1 nova_compute[226101]: 2025-12-06 08:07:34.149 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:07:34 compute-1 nova_compute[226101]: 2025-12-06 08:07:34.177 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:07:34 compute-1 nova_compute[226101]: 2025-12-06 08:07:34.177 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:34.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:34 compute-1 ceph-mon[81689]: pgmap v3445: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.0 KiB/s wr, 70 op/s
Dec 06 08:07:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2083970358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:35 compute-1 nova_compute[226101]: 2025-12-06 08:07:35.031 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:36.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:36 compute-1 nova_compute[226101]: 2025-12-06 08:07:36.203 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:36.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:36 compute-1 ceph-mon[81689]: pgmap v3446: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Dec 06 08:07:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:38.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:38 compute-1 ovn_controller[130279]: 2025-12-06T08:07:38Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:6c:4e 10.100.0.12
Dec 06 08:07:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:38.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:38 compute-1 ceph-mon[81689]: pgmap v3447: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.3 KiB/s wr, 61 op/s
Dec 06 08:07:39 compute-1 nova_compute[226101]: 2025-12-06 08:07:39.177 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:39 compute-1 nova_compute[226101]: 2025-12-06 08:07:39.178 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:07:39 compute-1 nova_compute[226101]: 2025-12-06 08:07:39.178 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:07:39 compute-1 sudo[304318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:39 compute-1 sudo[304318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:39 compute-1 sudo[304318]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:39 compute-1 sudo[304343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:07:39 compute-1 sudo[304343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:39 compute-1 sudo[304343]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:40 compute-1 nova_compute[226101]: 2025-12-06 08:07:40.035 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:40.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:07:40 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:07:40 compute-1 nova_compute[226101]: 2025-12-06 08:07:40.442 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:07:40 compute-1 nova_compute[226101]: 2025-12-06 08:07:40.443 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:07:40 compute-1 nova_compute[226101]: 2025-12-06 08:07:40.443 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:07:40 compute-1 nova_compute[226101]: 2025-12-06 08:07:40.443 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:07:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:40.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:41 compute-1 nova_compute[226101]: 2025-12-06 08:07:41.206 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:41 compute-1 ceph-mon[81689]: pgmap v3448: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.3 KiB/s wr, 42 op/s
Dec 06 08:07:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3044460620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:42.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3191702422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/847495634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:42 compute-1 ceph-mon[81689]: pgmap v3449: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 21 KiB/s wr, 81 op/s
Dec 06 08:07:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:42.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:43 compute-1 nova_compute[226101]: 2025-12-06 08:07:43.309 226109 INFO nova.compute.manager [None req-1813cd28-5638-4ff0-852e-2a6ed5840e50 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Get console output
Dec 06 08:07:43 compute-1 nova_compute[226101]: 2025-12-06 08:07:43.316 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:07:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:44.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:44 compute-1 ceph-mon[81689]: pgmap v3450: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 20 KiB/s wr, 44 op/s
Dec 06 08:07:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:44.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:45 compute-1 nova_compute[226101]: 2025-12-06 08:07:45.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.015 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Updating instance_info_cache with network_info: [{"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:07:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:46.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.134 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.135 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.136 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.136 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:07:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.207 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:46 compute-1 sshd-session[304368]: Received disconnect from 124.18.141.70 port 48650:11: Bye Bye [preauth]
Dec 06 08:07:46 compute-1 sshd-session[304368]: Disconnected from authenticating user root 124.18.141.70 port 48650 [preauth]
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:46 compute-1 ceph-mon[81689]: pgmap v3451: 305 pgs: 305 active+clean; 316 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 537 KiB/s rd, 1.5 MiB/s wr, 59 op/s
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.858 226109 DEBUG nova.compute.manager [req-94ce853e-8fd3-4bdd-9c9d-aae5e0f9a4fa req-09747d31-fce0-4ff0-a4c4-592f1f916d44 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-changed-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.859 226109 DEBUG nova.compute.manager [req-94ce853e-8fd3-4bdd-9c9d-aae5e0f9a4fa req-09747d31-fce0-4ff0-a4c4-592f1f916d44 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Refreshing instance network info cache due to event network-changed-580b9a18-5cd3-4fa5-95ba-d260b21e0197. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.859 226109 DEBUG oslo_concurrency.lockutils [req-94ce853e-8fd3-4bdd-9c9d-aae5e0f9a4fa req-09747d31-fce0-4ff0-a4c4-592f1f916d44 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.859 226109 DEBUG oslo_concurrency.lockutils [req-94ce853e-8fd3-4bdd-9c9d-aae5e0f9a4fa req-09747d31-fce0-4ff0-a4c4-592f1f916d44 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:07:46 compute-1 nova_compute[226101]: 2025-12-06 08:07:46.860 226109 DEBUG nova.network.neutron [req-94ce853e-8fd3-4bdd-9c9d-aae5e0f9a4fa req-09747d31-fce0-4ff0-a4c4-592f1f916d44 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Refreshing network info cache for port 580b9a18-5cd3-4fa5-95ba-d260b21e0197 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:07:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:46.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:47 compute-1 podman[304371]: 2025-12-06 08:07:47.080395658 +0000 UTC m=+0.060517251 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 08:07:47 compute-1 podman[304370]: 2025-12-06 08:07:47.087222791 +0000 UTC m=+0.066969844 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:07:47 compute-1 podman[304372]: 2025-12-06 08:07:47.125469605 +0000 UTC m=+0.094143621 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.295 226109 DEBUG oslo_concurrency.lockutils [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.295 226109 DEBUG oslo_concurrency.lockutils [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.296 226109 DEBUG oslo_concurrency.lockutils [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.297 226109 DEBUG oslo_concurrency.lockutils [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.297 226109 DEBUG oslo_concurrency.lockutils [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.299 226109 INFO nova.compute.manager [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Terminating instance
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.301 226109 DEBUG nova.compute.manager [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:07:47 compute-1 kernel: tap580b9a18-5c (unregistering): left promiscuous mode
Dec 06 08:07:47 compute-1 NetworkManager[49031]: <info>  [1765008467.3592] device (tap580b9a18-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:07:47 compute-1 ovn_controller[130279]: 2025-12-06T08:07:47Z|00788|binding|INFO|Releasing lport 580b9a18-5cd3-4fa5-95ba-d260b21e0197 from this chassis (sb_readonly=0)
Dec 06 08:07:47 compute-1 ovn_controller[130279]: 2025-12-06T08:07:47Z|00789|binding|INFO|Setting lport 580b9a18-5cd3-4fa5-95ba-d260b21e0197 down in Southbound
Dec 06 08:07:47 compute-1 ovn_controller[130279]: 2025-12-06T08:07:47Z|00790|binding|INFO|Removing iface tap580b9a18-5c ovn-installed in OVS
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.370 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.385 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:6c:4e 10.100.0.12'], port_security=['fa:16:3e:48:6c:4e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '926a54a1-96cf-4d45-a783-3b13dc5fbbe1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '62277ff5-e918-4041-90af-6e386137855d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b94bd56a-ca1d-4102-be16-616915bb0b8a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=580b9a18-5cd3-4fa5-95ba-d260b21e0197) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.387 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 580b9a18-5cd3-4fa5-95ba-d260b21e0197 in datapath b77bbda0-3ec0-4eca-bc98-409d33462f81 unbound from our chassis
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.390 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b77bbda0-3ec0-4eca-bc98-409d33462f81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.393 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.392 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[08bad68c-fd0c-4220-a93e-fdfc7c5e188f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.393 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81 namespace which is not needed anymore
Dec 06 08:07:47 compute-1 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c0.scope: Deactivated successfully.
Dec 06 08:07:47 compute-1 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c0.scope: Consumed 13.769s CPU time.
Dec 06 08:07:47 compute-1 systemd-machined[190302]: Machine qemu-90-instance-000000c0 terminated.
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.538 226109 INFO nova.virt.libvirt.driver [-] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Instance destroyed successfully.
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.539 226109 DEBUG nova.objects.instance [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.580 226109 DEBUG nova.virt.libvirt.vif [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-39574177',display_name='tempest-TestNetworkAdvancedServerOps-server-39574177',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-39574177',id=192,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGEn9VmFG5AKDWUUMC+9jjLchOz9ok8ZoheVEL8Lpk5Ek0wEhTQ3QYoYfW2hmtWVDKNVdYvLeyszMS7A3hcKtZg/qa2spqV12IhYYvt9ANdB3mURsEw+igU2XgRBC9k0QQ==',key_name='tempest-TestNetworkAdvancedServerOps-829698560',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:06:57Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-7a8dmqq3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:07:24Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=926a54a1-96cf-4d45-a783-3b13dc5fbbe1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.581 226109 DEBUG nova.network.os_vif_util [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.582 226109 DEBUG nova.network.os_vif_util [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:6c:4e,bridge_name='br-int',has_traffic_filtering=True,id=580b9a18-5cd3-4fa5-95ba-d260b21e0197,network=Network(b77bbda0-3ec0-4eca-bc98-409d33462f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580b9a18-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.583 226109 DEBUG os_vif [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:6c:4e,bridge_name='br-int',has_traffic_filtering=True,id=580b9a18-5cd3-4fa5-95ba-d260b21e0197,network=Network(b77bbda0-3ec0-4eca-bc98-409d33462f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580b9a18-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.586 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.587 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580b9a18-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.636 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.639 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.642 226109 INFO os_vif [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:6c:4e,bridge_name='br-int',has_traffic_filtering=True,id=580b9a18-5cd3-4fa5-95ba-d260b21e0197,network=Network(b77bbda0-3ec0-4eca-bc98-409d33462f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580b9a18-5c')
Dec 06 08:07:47 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[304126]: [NOTICE]   (304130) : haproxy version is 2.8.14-c23fe91
Dec 06 08:07:47 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[304126]: [NOTICE]   (304130) : path to executable is /usr/sbin/haproxy
Dec 06 08:07:47 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[304126]: [WARNING]  (304130) : Exiting Master process...
Dec 06 08:07:47 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[304126]: [WARNING]  (304130) : Exiting Master process...
Dec 06 08:07:47 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[304126]: [ALERT]    (304130) : Current worker (304132) exited with code 143 (Terminated)
Dec 06 08:07:47 compute-1 neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81[304126]: [WARNING]  (304130) : All workers exited. Exiting... (0)
Dec 06 08:07:47 compute-1 systemd[1]: libpod-3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233.scope: Deactivated successfully.
Dec 06 08:07:47 compute-1 podman[304458]: 2025-12-06 08:07:47.753335645 +0000 UTC m=+0.267111682 container died 3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.761 226109 DEBUG nova.compute.manager [req-03e8a740-df07-4483-ae6e-e3f45dcddcf7 req-fa7a559f-461a-4c9f-949e-f59bb0051162 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-vif-unplugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.761 226109 DEBUG oslo_concurrency.lockutils [req-03e8a740-df07-4483-ae6e-e3f45dcddcf7 req-fa7a559f-461a-4c9f-949e-f59bb0051162 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.761 226109 DEBUG oslo_concurrency.lockutils [req-03e8a740-df07-4483-ae6e-e3f45dcddcf7 req-fa7a559f-461a-4c9f-949e-f59bb0051162 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.761 226109 DEBUG oslo_concurrency.lockutils [req-03e8a740-df07-4483-ae6e-e3f45dcddcf7 req-fa7a559f-461a-4c9f-949e-f59bb0051162 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.761 226109 DEBUG nova.compute.manager [req-03e8a740-df07-4483-ae6e-e3f45dcddcf7 req-fa7a559f-461a-4c9f-949e-f59bb0051162 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] No waiting events found dispatching network-vif-unplugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.762 226109 DEBUG nova.compute.manager [req-03e8a740-df07-4483-ae6e-e3f45dcddcf7 req-fa7a559f-461a-4c9f-949e-f59bb0051162 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-vif-unplugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:07:47 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233-userdata-shm.mount: Deactivated successfully.
Dec 06 08:07:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-98e4dbf89d8d5856ece9902c1f98a4c92698dde91c0d835ea377245536623a1e-merged.mount: Deactivated successfully.
Dec 06 08:07:47 compute-1 podman[304458]: 2025-12-06 08:07:47.810513576 +0000 UTC m=+0.324289573 container cleanup 3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:07:47 compute-1 systemd[1]: libpod-conmon-3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233.scope: Deactivated successfully.
Dec 06 08:07:47 compute-1 podman[304515]: 2025-12-06 08:07:47.896766245 +0000 UTC m=+0.062183365 container remove 3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.904 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[046e3415-cffa-470e-a2ff-de34648f5371]: (4, ('Sat Dec  6 08:07:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81 (3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233)\n3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233\nSat Dec  6 08:07:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81 (3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233)\n3d0d6bdb59442065ef623fb389f075a9087d9b6f81d54df212b2f7cb213f3233\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.907 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[76c592c4-3cb6-4581-b1e2-09b6ef5da889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.909 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb77bbda0-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.913 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:47 compute-1 kernel: tapb77bbda0-30: left promiscuous mode
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.916 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.919 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7d228cfd-0e99-4711-8cf7-aab646aab841]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:47 compute-1 nova_compute[226101]: 2025-12-06 08:07:47.931 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.939 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9a849109-6a43-45a6-bc64-1de756ca62e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.940 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa9d34f-9824-412b-80cc-831a8dd903da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.964 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf3c765-cb4c-4bf4-9fee-9a60a077246a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 872335, 'reachable_time': 15026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304530, 'error': None, 'target': 'ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:47 compute-1 systemd[1]: run-netns-ovnmeta\x2db77bbda0\x2d3ec0\x2d4eca\x2dbc98\x2d409d33462f81.mount: Deactivated successfully.
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.968 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b77bbda0-3ec0-4eca-bc98-409d33462f81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:07:47 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:07:47.968 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[e1380d3a-092f-47ef-9e8e-211e7175bc1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:48.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:48 compute-1 nova_compute[226101]: 2025-12-06 08:07:48.151 226109 INFO nova.virt.libvirt.driver [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Deleting instance files /var/lib/nova/instances/926a54a1-96cf-4d45-a783-3b13dc5fbbe1_del
Dec 06 08:07:48 compute-1 nova_compute[226101]: 2025-12-06 08:07:48.153 226109 INFO nova.virt.libvirt.driver [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Deletion of /var/lib/nova/instances/926a54a1-96cf-4d45-a783-3b13dc5fbbe1_del complete
Dec 06 08:07:48 compute-1 nova_compute[226101]: 2025-12-06 08:07:48.289 226109 INFO nova.compute.manager [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Took 0.99 seconds to destroy the instance on the hypervisor.
Dec 06 08:07:48 compute-1 nova_compute[226101]: 2025-12-06 08:07:48.290 226109 DEBUG oslo.service.loopingcall [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:07:48 compute-1 nova_compute[226101]: 2025-12-06 08:07:48.290 226109 DEBUG nova.compute.manager [-] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:07:48 compute-1 nova_compute[226101]: 2025-12-06 08:07:48.290 226109 DEBUG nova.network.neutron [-] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:07:48 compute-1 nova_compute[226101]: 2025-12-06 08:07:48.357 226109 DEBUG nova.network.neutron [req-94ce853e-8fd3-4bdd-9c9d-aae5e0f9a4fa req-09747d31-fce0-4ff0-a4c4-592f1f916d44 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Updated VIF entry in instance network info cache for port 580b9a18-5cd3-4fa5-95ba-d260b21e0197. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:07:48 compute-1 nova_compute[226101]: 2025-12-06 08:07:48.358 226109 DEBUG nova.network.neutron [req-94ce853e-8fd3-4bdd-9c9d-aae5e0f9a4fa req-09747d31-fce0-4ff0-a4c4-592f1f916d44 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Updating instance_info_cache with network_info: [{"id": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "address": "fa:16:3e:48:6c:4e", "network": {"id": "b77bbda0-3ec0-4eca-bc98-409d33462f81", "bridge": "br-int", "label": "tempest-network-smoke--267491019", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580b9a18-5c", "ovs_interfaceid": "580b9a18-5cd3-4fa5-95ba-d260b21e0197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:07:48 compute-1 nova_compute[226101]: 2025-12-06 08:07:48.386 226109 DEBUG oslo_concurrency.lockutils [req-94ce853e-8fd3-4bdd-9c9d-aae5e0f9a4fa req-09747d31-fce0-4ff0-a4c4-592f1f916d44 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-926a54a1-96cf-4d45-a783-3b13dc5fbbe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:07:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:48.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:48 compute-1 ceph-mon[81689]: pgmap v3452: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 548 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.259 226109 DEBUG nova.network.neutron [-] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.289 226109 INFO nova.compute.manager [-] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Took 1.00 seconds to deallocate network for instance.
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.354 226109 DEBUG oslo_concurrency.lockutils [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.354 226109 DEBUG oslo_concurrency.lockutils [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.380 226109 DEBUG nova.compute.manager [req-e58c2c21-956d-43f6-96de-d98d70033a8c req-b53544df-db45-4f2e-b341-5c988459a5fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-vif-deleted-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.417 226109 DEBUG oslo_concurrency.processutils [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:07:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2436695836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.844 226109 DEBUG oslo_concurrency.processutils [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.853 226109 DEBUG nova.compute.manager [req-9378377b-9234-484d-9e00-c16d9352d03e req-2f0da674-2849-4baa-9f86-425b55ee0f35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.854 226109 DEBUG oslo_concurrency.lockutils [req-9378377b-9234-484d-9e00-c16d9352d03e req-2f0da674-2849-4baa-9f86-425b55ee0f35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.854 226109 DEBUG oslo_concurrency.lockutils [req-9378377b-9234-484d-9e00-c16d9352d03e req-2f0da674-2849-4baa-9f86-425b55ee0f35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.855 226109 DEBUG oslo_concurrency.lockutils [req-9378377b-9234-484d-9e00-c16d9352d03e req-2f0da674-2849-4baa-9f86-425b55ee0f35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.855 226109 DEBUG nova.compute.manager [req-9378377b-9234-484d-9e00-c16d9352d03e req-2f0da674-2849-4baa-9f86-425b55ee0f35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] No waiting events found dispatching network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.855 226109 WARNING nova.compute.manager [req-9378377b-9234-484d-9e00-c16d9352d03e req-2f0da674-2849-4baa-9f86-425b55ee0f35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Received unexpected event network-vif-plugged-580b9a18-5cd3-4fa5-95ba-d260b21e0197 for instance with vm_state deleted and task_state None.
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.860 226109 DEBUG nova.compute.provider_tree [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.875 226109 DEBUG nova.scheduler.client.report [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:07:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/856270045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:07:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3251160969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2436695836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.908 226109 DEBUG oslo_concurrency.lockutils [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:49 compute-1 nova_compute[226101]: 2025-12-06 08:07:49.933 226109 INFO nova.scheduler.client.report [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Deleted allocations for instance 926a54a1-96cf-4d45-a783-3b13dc5fbbe1
Dec 06 08:07:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:07:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/419021482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:07:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:50.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:50 compute-1 nova_compute[226101]: 2025-12-06 08:07:50.124 226109 DEBUG oslo_concurrency.lockutils [None req-71201bbe-d875-462e-8e8c-1f50294d797b 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "926a54a1-96cf-4d45-a783-3b13dc5fbbe1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:50 compute-1 nova_compute[226101]: 2025-12-06 08:07:50.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:50.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:50 compute-1 ceph-mon[81689]: pgmap v3453: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 498 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:07:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/419021482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:07:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/228762705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:51 compute-1 nova_compute[226101]: 2025-12-06 08:07:51.209 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:52.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:52 compute-1 nova_compute[226101]: 2025-12-06 08:07:52.638 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:52.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:52 compute-1 ceph-mon[81689]: pgmap v3454: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 517 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Dec 06 08:07:53 compute-1 nova_compute[226101]: 2025-12-06 08:07:53.453 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:53 compute-1 nova_compute[226101]: 2025-12-06 08:07:53.584 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:54.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:54.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:55 compute-1 ceph-mon[81689]: pgmap v3455: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Dec 06 08:07:55 compute-1 nova_compute[226101]: 2025-12-06 08:07:55.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:56.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:56 compute-1 nova_compute[226101]: 2025-12-06 08:07:56.211 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:56.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:57 compute-1 ceph-mon[81689]: pgmap v3456: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Dec 06 08:07:57 compute-1 nova_compute[226101]: 2025-12-06 08:07:57.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:57 compute-1 nova_compute[226101]: 2025-12-06 08:07:57.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:57 compute-1 nova_compute[226101]: 2025-12-06 08:07:57.642 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:58.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:07:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:58.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:59 compute-1 ceph-mon[81689]: pgmap v3457: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 305 KiB/s wr, 115 op/s
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.332787) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479332851, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 576, "num_deletes": 251, "total_data_size": 856435, "memory_usage": 867024, "flush_reason": "Manual Compaction"}
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479337582, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 426863, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74922, "largest_seqno": 75493, "table_properties": {"data_size": 424076, "index_size": 758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7616, "raw_average_key_size": 20, "raw_value_size": 418292, "raw_average_value_size": 1142, "num_data_blocks": 33, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008448, "oldest_key_time": 1765008448, "file_creation_time": 1765008479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 4815 microseconds, and 1682 cpu microseconds.
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.337616) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 426863 bytes OK
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.337629) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.339827) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.339847) EVENT_LOG_v1 {"time_micros": 1765008479339840, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.339867) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 853124, prev total WAL file size 853124, number of live WAL files 2.
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.340652) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353236' seq:72057594037927935, type:22 .. '6D6772737461740032373738' seq:0, type:0; will stop at (end)
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(416KB)], [150(13MB)]
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479340780, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 14647491, "oldest_snapshot_seqno": -1}
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 10333 keys, 10917334 bytes, temperature: kUnknown
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479419749, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 10917334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10854178, "index_size": 36227, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25861, "raw_key_size": 273574, "raw_average_key_size": 26, "raw_value_size": 10676593, "raw_average_value_size": 1033, "num_data_blocks": 1362, "num_entries": 10333, "num_filter_entries": 10333, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765008479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.420086) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10917334 bytes
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.422045) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.2 rd, 138.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.6 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(59.9) write-amplify(25.6) OK, records in: 10839, records dropped: 506 output_compression: NoCompression
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.422074) EVENT_LOG_v1 {"time_micros": 1765008479422061, "job": 96, "event": "compaction_finished", "compaction_time_micros": 79074, "compaction_time_cpu_micros": 32857, "output_level": 6, "num_output_files": 1, "total_output_size": 10917334, "num_input_records": 10839, "num_output_records": 10333, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479422344, "job": 96, "event": "table_file_deletion", "file_number": 152}
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479426776, "job": 96, "event": "table_file_deletion", "file_number": 150}
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.340437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.426901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.426907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.426909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.426911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:07:59.426912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:08:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:00.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:00.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:01 compute-1 nova_compute[226101]: 2025-12-06 08:08:01.212 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:01 compute-1 ceph-mon[81689]: pgmap v3458: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 102 op/s
Dec 06 08:08:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:01.687 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:01.687 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:01.687 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:02.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:02 compute-1 nova_compute[226101]: 2025-12-06 08:08:02.538 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008467.5355377, 926a54a1-96cf-4d45-a783-3b13dc5fbbe1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:08:02 compute-1 nova_compute[226101]: 2025-12-06 08:08:02.538 226109 INFO nova.compute.manager [-] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] VM Stopped (Lifecycle Event)
Dec 06 08:08:02 compute-1 nova_compute[226101]: 2025-12-06 08:08:02.645 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:02 compute-1 nova_compute[226101]: 2025-12-06 08:08:02.651 226109 DEBUG nova.compute.manager [None req-a895545f-086a-4c02-82e3-af0bbcb866d7 - - - - - -] [instance: 926a54a1-96cf-4d45-a783-3b13dc5fbbe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:08:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:02.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:03 compute-1 ceph-mon[81689]: pgmap v3459: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 102 op/s
Dec 06 08:08:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:04.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:04.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:05 compute-1 ceph-mon[81689]: pgmap v3460: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 08:08:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:06.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:06 compute-1 nova_compute[226101]: 2025-12-06 08:08:06.252 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:06.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:07 compute-1 sshd-session[304556]: Received disconnect from 186.96.151.198 port 40038:11: Bye Bye [preauth]
Dec 06 08:08:07 compute-1 sshd-session[304556]: Disconnected from authenticating user root 186.96.151.198 port 40038 [preauth]
Dec 06 08:08:07 compute-1 ceph-mon[81689]: pgmap v3461: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 127 op/s
Dec 06 08:08:07 compute-1 nova_compute[226101]: 2025-12-06 08:08:07.690 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:08.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:08 compute-1 ceph-mon[81689]: pgmap v3462: 305 pgs: 305 active+clean; 275 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 672 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Dec 06 08:08:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:08.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/840833753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:08:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/840833753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:08:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:10.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:10 compute-1 ceph-mon[81689]: pgmap v3463: 305 pgs: 305 active+clean; 275 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 06 08:08:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:10.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:11 compute-1 sshd-session[304558]: Received disconnect from 101.100.194.199 port 50304:11: Bye Bye [preauth]
Dec 06 08:08:11 compute-1 sshd-session[304558]: Disconnected from authenticating user root 101.100.194.199 port 50304 [preauth]
Dec 06 08:08:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:11 compute-1 nova_compute[226101]: 2025-12-06 08:08:11.255 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:12.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:12 compute-1 nova_compute[226101]: 2025-12-06 08:08:12.694 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:12 compute-1 ceph-mon[81689]: pgmap v3464: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 06 08:08:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:12.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:13 compute-1 sshd-session[304560]: Received disconnect from 136.112.8.45 port 46230:11: Bye Bye [preauth]
Dec 06 08:08:13 compute-1 sshd-session[304560]: Disconnected from authenticating user root 136.112.8.45 port 46230 [preauth]
Dec 06 08:08:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:08:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:14.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:08:14 compute-1 ceph-mon[81689]: pgmap v3465: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 08:08:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:14.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:16.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:16 compute-1 nova_compute[226101]: 2025-12-06 08:08:16.306 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:16 compute-1 ceph-mon[81689]: pgmap v3466: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 06 08:08:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:16.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:17 compute-1 nova_compute[226101]: 2025-12-06 08:08:17.740 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:18 compute-1 podman[304563]: 2025-12-06 08:08:18.110835505 +0000 UTC m=+0.079777597 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:08:18 compute-1 podman[304562]: 2025-12-06 08:08:18.114856083 +0000 UTC m=+0.093359540 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:08:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:18.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:18 compute-1 podman[304564]: 2025-12-06 08:08:18.142176865 +0000 UTC m=+0.109221325 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 08:08:18 compute-1 nova_compute[226101]: 2025-12-06 08:08:18.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:18 compute-1 ceph-mon[81689]: pgmap v3467: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 290 KiB/s wr, 12 op/s
Dec 06 08:08:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:18.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:20.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:20 compute-1 ceph-mon[81689]: pgmap v3468: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 64 KiB/s wr, 6 op/s
Dec 06 08:08:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:20.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
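The _set_new_cache_sizes line reports how the mon splits its autotuned cache budget between incremental osdmaps, full osdmaps, and the RocksDB KV cache. As a quick check, the three allocations printed above account for essentially the whole cache_size:

# Values copied from the journal line above.
cache_size = 1020054731
inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104

total = inc_alloc + full_alloc + kv_alloc
print(total, f"{total / cache_size:.1%}")  # 1010827264, 99.1%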
Dec 06 08:08:21 compute-1 nova_compute[226101]: 2025-12-06 08:08:21.343 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:21 compute-1 sshd-session[304628]: Received disconnect from 154.219.116.39 port 38114:11: Bye Bye [preauth]
Dec 06 08:08:21 compute-1 sshd-session[304628]: Disconnected from authenticating user root 154.219.116.39 port 38114 [preauth]
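The sshd-session [preauth] disconnects scattered through this journal (154.219.116.39 here, 186.87.166.141 and 165.154.55.146 a few seconds later) are failed login probes against the node. A sketch that tallies them per source address; it assumes only that a [preauth] line contains an IPv4 address followed by a port, as all three variants above do:

import re
from collections import Counter

PREAUTH_RE = re.compile(r"(?P<ip>\d{1,3}(?:\.\d{1,3}){3}) port \d+.*\[preauth\]")

def count_preauth(lines):
    hits = Counter()
    for line in lines:
        m = PREAUTH_RE.search(line)
        if m:
            hits[m.group("ip")] += 1
    return hits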
Dec 06 08:08:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:22.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:08:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6005.4 total, 600.0 interval
                                           Cumulative writes: 60K writes, 237K keys, 60K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.04 MB/s
                                           Cumulative WAL: 60K writes, 21K syncs, 2.80 writes per sync, written: 0.22 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4238 writes, 16K keys, 4238 commit groups, 1.0 writes per commit group, ingest: 17.71 MB, 0.03 MB/s
                                           Interval WAL: 4238 writes, 1708 syncs, 2.48 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
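The rocksdb dump above is one multi-line journal message from the OSD, emitted on the 600-second DB Stats interval, with one Compaction Stats section per column family (default, m-0..2, p-0..2, O-0..2, L, P). For trending, the headline numbers are the cumulative write/ingest figures and the stall counters; a minimal extractor for dumps shaped like this one:

import re

WRITES_RE = re.compile(
    r"Cumulative writes: (?P<writes>\S+) writes, (?P<keys>\S+) keys.*?"
    r"ingest: (?P<ingest_gb>[\d.]+) GB, (?P<rate_mb_s>[\d.]+) MB/s"
)
STALL_RE = re.compile(
    r"Cumulative stall: (?P<stall>[\d:.]+) H:M:S, (?P<stall_pct>[\d.]+) percent"
)

def parse_db_stats(dump_text):
    stats = {}
    for rx in (WRITES_RE, STALL_RE):
        m = rx.search(dump_text)
        if m:
            stats.update(m.groupdict())
    return stats

On this dump it reports 60K writes, 0.22 GB ingested, and a 0.0 percent stall rate over 6005 seconds of uptime, i.e. this OSD's metadata store is essentially idle.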
Dec 06 08:08:22 compute-1 nova_compute[226101]: 2025-12-06 08:08:22.795 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:22 compute-1 ceph-mon[81689]: pgmap v3469: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 66 KiB/s wr, 6 op/s
Dec 06 08:08:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:22.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:23 compute-1 sshd-session[304630]: Received disconnect from 186.87.166.141 port 55872:11: Bye Bye [preauth]
Dec 06 08:08:23 compute-1 sshd-session[304630]: Disconnected from authenticating user root 186.87.166.141 port 55872 [preauth]
Dec 06 08:08:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:24.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:24 compute-1 ceph-mon[81689]: pgmap v3470: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 06 08:08:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:24.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:25 compute-1 sshd-session[303202]: Connection reset by 165.154.55.146 port 44974 [preauth]
Dec 06 08:08:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:26.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:26 compute-1 nova_compute[226101]: 2025-12-06 08:08:26.391 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:26 compute-1 nova_compute[226101]: 2025-12-06 08:08:26.504 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:26 compute-1 nova_compute[226101]: 2025-12-06 08:08:26.504 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:26 compute-1 nova_compute[226101]: 2025-12-06 08:08:26.711 226109 DEBUG nova.compute.manager [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
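The pair of lockutils messages above is nova serializing the build on the instance UUID: the lock is requested and acquired after 0.000s, and it is released once _do_build_and_run_instance returns. A minimal sketch of the same serialize-per-UUID idea using oslo.concurrency directly; the function names are illustrative, not nova's, and mirror the closure pattern visible in the lock name logged above:

from oslo_concurrency import lockutils

def locked_do_build(instance_uuid, build_fn):
    # Serializes concurrent build/teardown for one instance; with oslo
    # logging at DEBUG, lockutils emits acquire/release lines like the
    # journal entries above.
    @lockutils.synchronized(instance_uuid)
    def _locked():
        return build_fn()
    return _locked()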
Dec 06 08:08:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:26.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:26 compute-1 ceph-mon[81689]: pgmap v3471: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.3 KiB/s wr, 1 op/s
Dec 06 08:08:27 compute-1 nova_compute[226101]: 2025-12-06 08:08:27.061 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:27 compute-1 nova_compute[226101]: 2025-12-06 08:08:27.061 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:27 compute-1 nova_compute[226101]: 2025-12-06 08:08:27.071 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:08:27 compute-1 nova_compute[226101]: 2025-12-06 08:08:27.072 226109 INFO nova.compute.claims [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Claim successful on node compute-1.ctlplane.example.com
Dec 06 08:08:27 compute-1 nova_compute[226101]: 2025-12-06 08:08:27.433 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:27 compute-1 nova_compute[226101]: 2025-12-06 08:08:27.829 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:08:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/118683916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:27 compute-1 nova_compute[226101]: 2025-12-06 08:08:27.878 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:08:27 compute-1 nova_compute[226101]: 2025-12-06 08:08:27.886 226109 DEBUG nova.compute.provider_tree [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:08:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/118683916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
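To size the RBD-backed disk inventory, nova shells out to the ceph CLI; the command line and its 0.445s round trip are logged verbatim above, and the mon's audit log shows the matching df dispatch from client.openstack. A sketch reproducing the probe (same command, output parsed as JSON):

import json
import subprocess

def ceph_df(conf="/etc/ceph/ceph.conf", client_id="openstack"):
    # Same invocation as the oslo_concurrency.processutils call logged above.
    proc = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
        capture_output=True, check=True, text=True,
    )
    return json.loads(proc.stdout)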
Dec 06 08:08:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:28.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.216 226109 DEBUG nova.scheduler.client.report [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
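The inventory dict nova reports to placement fixes this node's schedulable capacity: placement exposes (total - reserved) * allocation_ratio per resource class. Plugging in the logged values:

# Inventory copied from the journal line above.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
# VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1

So CPU is overcommitted 4x (32 schedulable vCPUs on 8 cores) while disk is deliberately undercommitted at a 0.9 ratio.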
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.417 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.418 226109 DEBUG nova.compute.manager [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.483 226109 DEBUG nova.compute.manager [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.484 226109 DEBUG nova.network.neutron [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.509 226109 INFO nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.538 226109 DEBUG nova.compute.manager [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.672 226109 DEBUG nova.compute.manager [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.675 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.676 226109 INFO nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Creating image(s)
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.723 226109 DEBUG nova.storage.rbd_utils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.753 226109 DEBUG nova.storage.rbd_utils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.787 226109 DEBUG nova.storage.rbd_utils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.791 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.826 226109 DEBUG nova.policy [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed2d17026504d70b893923a85cece4d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.873 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.874 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.875 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.876 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.924 226109 DEBUG nova.storage.rbd_utils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:08:28 compute-1 nova_compute[226101]: 2025-12-06 08:08:28.929 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef edc85e5c-cfda-452d-85a8-773bfd82fa80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:28 compute-1 ceph-mon[81689]: pgmap v3472: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Dec 06 08:08:29 compute-1 nova_compute[226101]: 2025-12-06 08:08:29.294 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef edc85e5c-cfda-452d-85a8-773bfd82fa80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:08:29 compute-1 nova_compute[226101]: 2025-12-06 08:08:29.417 226109 DEBUG nova.storage.rbd_utils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] resizing rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:08:29 compute-1 nova_compute[226101]: 2025-12-06 08:08:29.575 226109 DEBUG nova.objects.instance [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:08:29 compute-1 nova_compute[226101]: 2025-12-06 08:08:29.592 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:08:29 compute-1 nova_compute[226101]: 2025-12-06 08:08:29.592 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Ensure instance console log exists: /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:08:29 compute-1 nova_compute[226101]: 2025-12-06 08:08:29.593 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:29 compute-1 nova_compute[226101]: 2025-12-06 08:08:29.593 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:29 compute-1 nova_compute[226101]: 2025-12-06 08:08:29.593 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:30.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:30 compute-1 nova_compute[226101]: 2025-12-06 08:08:30.585 226109 DEBUG nova.network.neutron [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Successfully created port: a19b9290-ac1a-4a64-a4c4-8486895f7d27 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:08:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:30.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:31 compute-1 ceph-mon[81689]: pgmap v3473: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Dec 06 08:08:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:31 compute-1 nova_compute[226101]: 2025-12-06 08:08:31.427 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:31.719 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:08:31 compute-1 nova_compute[226101]: 2025-12-06 08:08:31.720 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:31.720 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:08:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:32.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:32.722 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:32 compute-1 nova_compute[226101]: 2025-12-06 08:08:32.768 226109 DEBUG nova.network.neutron [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Successfully updated port: a19b9290-ac1a-4a64-a4c4-8486895f7d27 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:08:32 compute-1 nova_compute[226101]: 2025-12-06 08:08:32.784 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:08:32 compute-1 nova_compute[226101]: 2025-12-06 08:08:32.784 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:08:32 compute-1 nova_compute[226101]: 2025-12-06 08:08:32.785 226109 DEBUG nova.network.neutron [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:08:32 compute-1 nova_compute[226101]: 2025-12-06 08:08:32.860 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:32 compute-1 nova_compute[226101]: 2025-12-06 08:08:32.886 226109 DEBUG nova.compute.manager [req-13d6c7f5-2d07-4c36-bfa6-a232b2a6b042 req-86e4eaee-0f51-48cc-83bb-453b6b129397 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-changed-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:32 compute-1 nova_compute[226101]: 2025-12-06 08:08:32.886 226109 DEBUG nova.compute.manager [req-13d6c7f5-2d07-4c36-bfa6-a232b2a6b042 req-86e4eaee-0f51-48cc-83bb-453b6b129397 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Refreshing instance network info cache due to event network-changed-a19b9290-ac1a-4a64-a4c4-8486895f7d27. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:08:32 compute-1 nova_compute[226101]: 2025-12-06 08:08:32.886 226109 DEBUG oslo_concurrency.lockutils [req-13d6c7f5-2d07-4c36-bfa6-a232b2a6b042 req-86e4eaee-0f51-48cc-83bb-453b6b129397 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:08:32 compute-1 nova_compute[226101]: 2025-12-06 08:08:32.954 226109 DEBUG nova.network.neutron [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:08:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:32.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:33 compute-1 ceph-mon[81689]: pgmap v3474: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:33 compute-1 nova_compute[226101]: 2025-12-06 08:08:33.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:33 compute-1 nova_compute[226101]: 2025-12-06 08:08:33.654 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:33 compute-1 nova_compute[226101]: 2025-12-06 08:08:33.655 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:33 compute-1 nova_compute[226101]: 2025-12-06 08:08:33.656 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:33 compute-1 nova_compute[226101]: 2025-12-06 08:08:33.656 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:08:33 compute-1 nova_compute[226101]: 2025-12-06 08:08:33.657 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:08:34 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1624875607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.124 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:08:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:34.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.278 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.279 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4291MB free_disk=20.87622833251953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.280 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.280 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.462 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance edc85e5c-cfda-452d-85a8-773bfd82fa80 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.462 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.462 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.506 226109 DEBUG nova.network.neutron [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Updating instance_info_cache with network_info: [{"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.508 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.540 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.540 226109 DEBUG nova.compute.manager [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Instance network_info: |[{"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.541 226109 DEBUG oslo_concurrency.lockutils [req-13d6c7f5-2d07-4c36-bfa6-a232b2a6b042 req-86e4eaee-0f51-48cc-83bb-453b6b129397 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.542 226109 DEBUG nova.network.neutron [req-13d6c7f5-2d07-4c36-bfa6-a232b2a6b042 req-86e4eaee-0f51-48cc-83bb-453b6b129397 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Refreshing network info cache for port a19b9290-ac1a-4a64-a4c4-8486895f7d27 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.546 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Start _get_guest_xml network_info=[{"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.551 226109 WARNING nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.556 226109 DEBUG nova.virt.libvirt.host [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.557 226109 DEBUG nova.virt.libvirt.host [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.561 226109 DEBUG nova.virt.libvirt.host [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.561 226109 DEBUG nova.virt.libvirt.host [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.562 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.563 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.563 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.563 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.563 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.564 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.564 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.564 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.564 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.564 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.565 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.565 226109 DEBUG nova.virt.hardware [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.568 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:08:34 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3781957031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.958 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:08:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:08:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:34.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.967 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:08:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:08:34 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/282098574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:08:34 compute-1 nova_compute[226101]: 2025-12-06 08:08:34.996 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.033 226109 DEBUG nova.storage.rbd_utils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:08:35 compute-1 ceph-mon[81689]: pgmap v3475: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1624875607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3781957031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/282098574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.037 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.071 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.171 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.171 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:08:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2223726682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.470 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.472 226109 DEBUG nova.virt.libvirt.vif [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-467980544',display_name='tempest-TestNetworkAdvancedServerOps-server-467980544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-467980544',id=194,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKEdmmh0VM1iBaqWaluaeF2j9ngqe4njuJkat/3uLhqjfhHgbzuAtvYzitHcxwF2qXQ9t7nIFqmkQTt1LeMRmkC4aUsNP90Y/I2kcGpPkdrg5U1kMm16WjPnKC0Thty5+w==',key_name='tempest-TestNetworkAdvancedServerOps-115705315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-87k00tve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:08:28Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=edc85e5c-cfda-452d-85a8-773bfd82fa80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.472 226109 DEBUG nova.network.os_vif_util [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.473 226109 DEBUG nova.network.os_vif_util [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.474 226109 DEBUG nova.objects.instance [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.557 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:08:35 compute-1 nova_compute[226101]:   <uuid>edc85e5c-cfda-452d-85a8-773bfd82fa80</uuid>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   <name>instance-000000c2</name>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-467980544</nova:name>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:08:34</nova:creationTime>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <nova:port uuid="a19b9290-ac1a-4a64-a4c4-8486895f7d27">
Dec 06 08:08:35 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <system>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <entry name="serial">edc85e5c-cfda-452d-85a8-773bfd82fa80</entry>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <entry name="uuid">edc85e5c-cfda-452d-85a8-773bfd82fa80</entry>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     </system>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   <os>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   </os>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   <features>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   </features>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/edc85e5c-cfda-452d-85a8-773bfd82fa80_disk">
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       </source>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config">
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       </source>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:08:35 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:33:b4:12"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <target dev="tapa19b9290-ac"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/console.log" append="off"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <video>
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     </video>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:08:35 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:08:35 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:08:35 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:08:35 compute-1 nova_compute[226101]: </domain>
Dec 06 08:08:35 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
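The XML block above is the complete guest definition handed to libvirt. When picking one of these apart offline, stdlib ElementTree is enough; note the nova metadata sits under its own XML namespace. A self-contained sketch against a trimmed copy of the domain above:

    import xml.etree.ElementTree as ET

    # Trimmed copy of the logged domain XML; enough to show the queries.
    XML = """<domain type="kvm">
      <uuid>edc85e5c-cfda-452d-85a8-773bfd82fa80</uuid>
      <name>instance-000000c2</name>
      <metadata>
        <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
          <nova:name>tempest-TestNetworkAdvancedServerOps-server-467980544</nova:name>
        </nova:instance>
      </metadata>
      <devices>
        <disk type="network" device="disk"><target dev="vda" bus="virtio"/></disk>
        <disk type="network" device="cdrom"><target dev="sda" bus="sata"/></disk>
      </devices>
    </domain>"""

    root = ET.fromstring(XML)
    print(root.findtext("uuid"), root.findtext("name"))
    ns = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}
    print(root.findtext("./metadata/nova:instance/nova:name", namespaces=ns))
    for disk in root.findall("./devices/disk"):
        tgt = disk.find("target")
        print(disk.get("device"), "->", tgt.get("dev"), tgt.get("bus"))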
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.559 226109 DEBUG nova.compute.manager [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Preparing to wait for external event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.559 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.560 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.560 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
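The prepare_for_instance_event / lock dance above exists so the network-vif-plugged notification cannot race the plug itself: the waiter is registered under the per-instance events lock before any plugging starts. A stdlib sketch of that pattern (names are illustrative, not Nova's):

    import threading

    _events: dict[str, threading.Event] = {}
    _events_lock = threading.Lock()      # cf. the "...-events" lock in the log

    def prepare_for_event(name: str) -> threading.Event:
        # Register the waiter *before* starting the operation that triggers it.
        with _events_lock:
            return _events.setdefault(name, threading.Event())

    def deliver_event(name: str) -> None:
        # Called when the external notification (e.g. from Neutron) arrives.
        with _events_lock:
            ev = _events.get(name)
        if ev:
            ev.set()

    waiter = prepare_for_event("network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27")
    # ... plug the VIF here ...
    deliver_event("network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27")  # simulated callback
    assert waiter.wait(timeout=300)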
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.560 226109 DEBUG nova.virt.libvirt.vif [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-467980544',display_name='tempest-TestNetworkAdvancedServerOps-server-467980544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-467980544',id=194,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKEdmmh0VM1iBaqWaluaeF2j9ngqe4njuJkat/3uLhqjfhHgbzuAtvYzitHcxwF2qXQ9t7nIFqmkQTt1LeMRmkC4aUsNP90Y/I2kcGpPkdrg5U1kMm16WjPnKC0Thty5+w==',key_name='tempest-TestNetworkAdvancedServerOps-115705315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-87k00tve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:08:28Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=edc85e5c-cfda-452d-85a8-773bfd82fa80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.561 226109 DEBUG nova.network.os_vif_util [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.561 226109 DEBUG nova.network.os_vif_util [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.562 226109 DEBUG os_vif [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.562 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.563 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.563 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.565 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.565 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19b9290-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.566 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa19b9290-ac, col_values=(('external_ids', {'iface-id': 'a19b9290-ac1a-4a64-a4c4-8486895f7d27', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:b4:12', 'vm-uuid': 'edc85e5c-cfda-452d-85a8-773bfd82fa80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
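The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are committed through the OVSDB IDL; their net effect is equivalent to the following ovs-vsctl invocation (shown via subprocess purely for illustration; os-vif does not shell out):

    import subprocess

    port = "tapa19b9290-ac"
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=a19b9290-ac1a-4a64-a4c4-8486895f7d27",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:33:b4:12",
         "external_ids:vm-uuid=edc85e5c-cfda-452d-85a8-773bfd82fa80"],
        check=True,
    )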
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.567 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:35 compute-1 NetworkManager[49031]: <info>  [1765008515.5679] manager: (tapa19b9290-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.569 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.575 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.576 226109 INFO os_vif [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac')
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.624 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.624 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.625 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No VIF found with MAC fa:16:3e:33:b4:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.625 226109 INFO nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Using config drive
Dec 06 08:08:35 compute-1 nova_compute[226101]: 2025-12-06 08:08:35.655 226109 DEBUG nova.storage.rbd_utils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
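This "does not exist" probe comes from nova.storage.rbd_utils opening the image and catching the not-found error. A sketch of the same check with the python-rados/python-rbd bindings, assuming the "--id openstack" credentials and /etc/ceph/ceph.conf used elsewhere in this log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config"):
            print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")   # the case logged here
    finally:
        ioctx.close()
        cluster.shutdown()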
Dec 06 08:08:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2223726682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:08:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:36.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.186 226109 INFO nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Creating config drive at /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.191 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpviz8k6z8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.309 226109 DEBUG nova.network.neutron [req-13d6c7f5-2d07-4c36-bfa6-a232b2a6b042 req-86e4eaee-0f51-48cc-83bb-453b6b129397 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Updated VIF entry in instance network info cache for port a19b9290-ac1a-4a64-a4c4-8486895f7d27. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.310 226109 DEBUG nova.network.neutron [req-13d6c7f5-2d07-4c36-bfa6-a232b2a6b042 req-86e4eaee-0f51-48cc-83bb-453b6b129397 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Updating instance_info_cache with network_info: [{"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.345 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpviz8k6z8" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
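The config drive is just an ISO9660 image built from a staging directory; the flag list below is copied verbatim from the command Nova ran above (mkisofs must be installed, and the staging path is the throwaway tmpdir from the log):

    import subprocess

    out = "/var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config"
    staging = "/tmp/tmpviz8k6z8"   # directory Nova populated with the metadata trees
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", out,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", staging],
        check=True,
    )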
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.381 226109 DEBUG nova.storage.rbd_utils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.385 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.421 226109 DEBUG oslo_concurrency.lockutils [req-13d6c7f5-2d07-4c36-bfa6-a232b2a6b042 req-86e4eaee-0f51-48cc-83bb-453b6b129397 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.472 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:36 compute-1 sshd[164848]: Timeout before authentication for connection from 14.103.118.136 to 38.102.83.204, pid = 303299
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.570 226109 DEBUG oslo_concurrency.processutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.571 226109 INFO nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Deleting local config drive /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config because it was imported into RBD.
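Import-then-delete, as logged: the ISO is pushed into the vms pool and the local copy removed once RBD holds it. The same two steps as a sketch (arguments mirror the logged rbd command):

    import os
    import subprocess

    iso = "/var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config"
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(iso)   # safe now that the image lives in RBD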
Dec 06 08:08:36 compute-1 kernel: tapa19b9290-ac: entered promiscuous mode
Dec 06 08:08:36 compute-1 NetworkManager[49031]: <info>  [1765008516.6400] manager: (tapa19b9290-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/361)
Dec 06 08:08:36 compute-1 ovn_controller[130279]: 2025-12-06T08:08:36Z|00791|binding|INFO|Claiming lport a19b9290-ac1a-4a64-a4c4-8486895f7d27 for this chassis.
Dec 06 08:08:36 compute-1 ovn_controller[130279]: 2025-12-06T08:08:36Z|00792|binding|INFO|a19b9290-ac1a-4a64-a4c4-8486895f7d27: Claiming fa:16:3e:33:b4:12 10.100.0.6
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.640 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:36 compute-1 systemd-udevd[305003]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:08:36 compute-1 systemd-machined[190302]: New machine qemu-91-instance-000000c2.
Dec 06 08:08:36 compute-1 NetworkManager[49031]: <info>  [1765008516.6870] device (tapa19b9290-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:08:36 compute-1 NetworkManager[49031]: <info>  [1765008516.6884] device (tapa19b9290-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:08:36 compute-1 systemd[1]: Started Virtual Machine qemu-91-instance-000000c2.
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.738 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:36 compute-1 ovn_controller[130279]: 2025-12-06T08:08:36Z|00793|binding|INFO|Setting lport a19b9290-ac1a-4a64-a4c4-8486895f7d27 ovn-installed in OVS
Dec 06 08:08:36 compute-1 nova_compute[226101]: 2025-12-06 08:08:36.746 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:36 compute-1 ovn_controller[130279]: 2025-12-06T08:08:36Z|00794|binding|INFO|Setting lport a19b9290-ac1a-4a64-a4c4-8486895f7d27 up in Southbound
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.847 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:b4:12 10.100.0.6'], port_security=['fa:16:3e:33:b4:12 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'edc85e5c-cfda-452d-85a8-773bfd82fa80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53432c94-36c7-4c90-893c-889139ac292d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '36e97ae2-762b-40b3-9b92-6fd63e91a888', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03bf42ee-2199-4635-9d21-101eaed85f1c, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=a19b9290-ac1a-4a64-a4c4-8486895f7d27) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.848 139580 INFO neutron.agent.ovn.metadata.agent [-] Port a19b9290-ac1a-4a64-a4c4-8486895f7d27 in datapath 53432c94-36c7-4c90-893c-889139ac292d bound to our chassis
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.850 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 53432c94-36c7-4c90-893c-889139ac292d
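The PortBindingUpdatedEvent match above is ovsdbapp's row-event machinery: the agent subscribes to updates on the Port_Binding table and reacts when a port lands on its chassis. A minimal subclass with the same shape, assuming ovsdbapp's RowEvent interface (the real handler goes on to provision the datapath, elided here):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Matches the repr in the log: events=('update',), table='Port_Binding'
            super().__init__(("update",), "Port_Binding", None)

        def run(self, event, row, old):
            # Invoked once matches() accepts the row; here just report the binding.
            print("port %s bound to our chassis" % row.logical_port)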
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.860 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3218a6-39ae-446d-b229-c388852a3a6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.861 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap53432c94-31 in ovnmeta-53432c94-36c7-4c90-893c-889139ac292d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.863 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap53432c94-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.863 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[87e233b5-ed4e-4c70-a757-1500f8a1d01f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.864 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb45fce-1943-47de-af26-1333d9ea532f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.875 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[a25ef476-e2d7-447c-8eda-fe4e58e5f72f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.892 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d0287079-1a05-423b-be95-37d7d5f50b8e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.929 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbce412-71d3-4512-bebc-4746c32e64b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:36 compute-1 systemd-udevd[305007]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.935 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[393c9b39-6894-4387-8b93-e2af9543f058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:36 compute-1 NetworkManager[49031]: <info>  [1765008516.9363] manager: (tap53432c94-30): new Veth device (/org/freedesktop/NetworkManager/Devices/362)
Dec 06 08:08:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:36.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.968 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[36b8709e-5b52-4ade-bf3f-73f74e1dde78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:36 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:36.972 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4eae61-28b2-4f64-aa9a-d00155f24312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:37 compute-1 NetworkManager[49031]: <info>  [1765008517.0033] device (tap53432c94-30): carrier: link connected
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.015 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[279abf1b-ff47-4111-9fc1-45b698f04d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.032 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[149b5604-333f-4542-a5ac-a2320e5f254e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53432c94-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:81:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879650, 'reachable_time': 28070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305038, 'error': None, 'target': 'ovnmeta-53432c94-36c7-4c90-893c-889139ac292d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.049 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[638511c9-4cec-450b-9b8d-6771a6483483]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:8182'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 879650, 'tstamp': 879650}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305039, 'error': None, 'target': 'ovnmeta-53432c94-36c7-4c90-893c-889139ac292d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:37 compute-1 ceph-mon[81689]: pgmap v3476: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.068 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[69c69337-8a25-492d-81c0-8e9b2b3f2ba7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53432c94-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:81:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879650, 'reachable_time': 28070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305040, 'error': None, 'target': 'ovnmeta-53432c94-36c7-4c90-893c-889139ac292d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.107 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[670a74b5-4afb-43b3-ad4b-f25e3b101a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.169 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2c142c-5917-4a59-9c9b-1989367a9d8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.171 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53432c94-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.171 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.172 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53432c94-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.174 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:37 compute-1 NetworkManager[49031]: <info>  [1765008517.1754] manager: (tap53432c94-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Dec 06 08:08:37 compute-1 kernel: tap53432c94-30: entered promiscuous mode
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.176 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.180 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap53432c94-30, col_values=(('external_ids', {'iface-id': 'f2a51215-e436-4c42-97c7-62a155b1a3d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.181 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:37 compute-1 ovn_controller[130279]: 2025-12-06T08:08:37Z|00795|binding|INFO|Releasing lport f2a51215-e436-4c42-97c7-62a155b1a3d5 from this chassis (sb_readonly=0)
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.184 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/53432c94-36c7-4c90-893c-889139ac292d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/53432c94-36c7-4c90-893c-889139ac292d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.185 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[17e91c81-ee48-4681-b01b-1982906b3019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.186 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-53432c94-36c7-4c90-893c-889139ac292d
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/53432c94-36c7-4c90-893c-889139ac292d.pid.haproxy
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 53432c94-36c7-4c90-893c-889139ac292d
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
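create_config_file renders the haproxy configuration dumped above from a template; the interesting parts are the fixed metadata address, the unix-socket backend, and the per-network header. A cut-down illustrative render (the agent's real template also carries the global/defaults sections shown in the log):

    NETWORK_ID = "53432c94-36c7-4c90-893c-889139ac292d"
    TEMPLATE = """\
    listen listener
        bind 169.254.169.254:80
        server metadata {socket_path}
        http-request add-header X-OVN-Network-ID {network_id}
    """

    print(TEMPLATE.format(
        socket_path="/var/lib/neutron/metadata_proxy",
        network_id=NETWORK_ID,
    ))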
Dec 06 08:08:37 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:08:37.187 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-53432c94-36c7-4c90-893c-889139ac292d', 'env', 'PROCESS_TAG=haproxy-53432c94-36c7-4c90-893c-889139ac292d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/53432c94-36c7-4c90-893c-889139ac292d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.194 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:37 compute-1 podman[305109]: 2025-12-06 08:08:37.579479327 +0000 UTC m=+0.044416351 container create 764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:08:37 compute-1 systemd[1]: Started libpod-conmon-764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969.scope.
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.632 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008517.631894, edc85e5c-cfda-452d-85a8-773bfd82fa80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.633 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] VM Started (Lifecycle Event)
Dec 06 08:08:37 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:08:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb25ed6c05d8b4e43f388cb9486b726bf63978b3d53d2365027d43fc46b8e25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:37 compute-1 podman[305109]: 2025-12-06 08:08:37.556566284 +0000 UTC m=+0.021503338 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:08:37 compute-1 podman[305109]: 2025-12-06 08:08:37.654434304 +0000 UTC m=+0.119371378 container init 764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 08:08:37 compute-1 podman[305109]: 2025-12-06 08:08:37.659678504 +0000 UTC m=+0.124615548 container start 764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 08:08:37 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305127]: [NOTICE]   (305131) : New worker (305133) forked
Dec 06 08:08:37 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305127]: [NOTICE]   (305131) : Loading success.
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.699 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.704 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008517.6321871, edc85e5c-cfda-452d-85a8-773bfd82fa80 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.705 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] VM Paused (Lifecycle Event)
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.755 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.760 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.790 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.904 226109 DEBUG nova.compute.manager [req-e01faf99-2b56-4e35-8a8c-e8e22eb7cf59 req-4e845702-68f6-4d60-a4dd-3256c0077d60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.905 226109 DEBUG oslo_concurrency.lockutils [req-e01faf99-2b56-4e35-8a8c-e8e22eb7cf59 req-4e845702-68f6-4d60-a4dd-3256c0077d60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.906 226109 DEBUG oslo_concurrency.lockutils [req-e01faf99-2b56-4e35-8a8c-e8e22eb7cf59 req-4e845702-68f6-4d60-a4dd-3256c0077d60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.907 226109 DEBUG oslo_concurrency.lockutils [req-e01faf99-2b56-4e35-8a8c-e8e22eb7cf59 req-4e845702-68f6-4d60-a4dd-3256c0077d60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.907 226109 DEBUG nova.compute.manager [req-e01faf99-2b56-4e35-8a8c-e8e22eb7cf59 req-4e845702-68f6-4d60-a4dd-3256c0077d60 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Processing event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.908 226109 DEBUG nova.compute.manager [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.912 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008517.9119856, edc85e5c-cfda-452d-85a8-773bfd82fa80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.912 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] VM Resumed (Lifecycle Event)
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.915 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.919 226109 INFO nova.virt.libvirt.driver [-] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Instance spawned successfully.
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.919 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.939 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.946 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.946 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.947 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.947 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.948 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.948 226109 DEBUG nova.virt.libvirt.driver [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.953 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:08:37 compute-1 nova_compute[226101]: 2025-12-06 08:08:37.988 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:08:38 compute-1 nova_compute[226101]: 2025-12-06 08:08:38.115 226109 INFO nova.compute.manager [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Took 9.44 seconds to spawn the instance on the hypervisor.
Dec 06 08:08:38 compute-1 nova_compute[226101]: 2025-12-06 08:08:38.115 226109 DEBUG nova.compute.manager [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:08:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:38.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:38 compute-1 nova_compute[226101]: 2025-12-06 08:08:38.387 226109 INFO nova.compute.manager [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Took 11.37 seconds to build instance.
Dec 06 08:08:38 compute-1 nova_compute[226101]: 2025-12-06 08:08:38.436 226109 DEBUG oslo_concurrency.lockutils [None req-2531a320-7f30-491e-b7ea-ad5e3617911c 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:08:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:38.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:08:39 compute-1 ceph-mon[81689]: pgmap v3477: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:39 compute-1 sudo[305142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:39 compute-1 sudo[305142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:39 compute-1 sudo[305142]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:39 compute-1 sudo[305167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:08:39 compute-1 sudo[305167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:39 compute-1 sudo[305167]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:39 compute-1 sudo[305192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:39 compute-1 sudo[305192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:39 compute-1 sudo[305192]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:39 compute-1 sudo[305217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:08:39 compute-1 sudo[305217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:39 compute-1 nova_compute[226101]: 2025-12-06 08:08:39.974 226109 DEBUG nova.compute.manager [req-f4af8cfc-22ce-4db5-a623-6847e2873cad req-44e8a8d3-9ebc-43b3-a9e0-3f73542bfd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:39 compute-1 nova_compute[226101]: 2025-12-06 08:08:39.975 226109 DEBUG oslo_concurrency.lockutils [req-f4af8cfc-22ce-4db5-a623-6847e2873cad req-44e8a8d3-9ebc-43b3-a9e0-3f73542bfd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:39 compute-1 nova_compute[226101]: 2025-12-06 08:08:39.975 226109 DEBUG oslo_concurrency.lockutils [req-f4af8cfc-22ce-4db5-a623-6847e2873cad req-44e8a8d3-9ebc-43b3-a9e0-3f73542bfd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:39 compute-1 nova_compute[226101]: 2025-12-06 08:08:39.979 226109 DEBUG oslo_concurrency.lockutils [req-f4af8cfc-22ce-4db5-a623-6847e2873cad req-44e8a8d3-9ebc-43b3-a9e0-3f73542bfd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:39 compute-1 nova_compute[226101]: 2025-12-06 08:08:39.980 226109 DEBUG nova.compute.manager [req-f4af8cfc-22ce-4db5-a623-6847e2873cad req-44e8a8d3-9ebc-43b3-a9e0-3f73542bfd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] No waiting events found dispatching network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:08:39 compute-1 nova_compute[226101]: 2025-12-06 08:08:39.980 226109 WARNING nova.compute.manager [req-f4af8cfc-22ce-4db5-a623-6847e2873cad req-44e8a8d3-9ebc-43b3-a9e0-3f73542bfd6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received unexpected event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 for instance with vm_state active and task_state None.
Dec 06 08:08:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:40.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:40 compute-1 nova_compute[226101]: 2025-12-06 08:08:40.171 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:40 compute-1 nova_compute[226101]: 2025-12-06 08:08:40.172 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:08:40 compute-1 nova_compute[226101]: 2025-12-06 08:08:40.172 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:08:40 compute-1 sudo[305217]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:40 compute-1 nova_compute[226101]: 2025-12-06 08:08:40.462 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:08:40 compute-1 nova_compute[226101]: 2025-12-06 08:08:40.463 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:08:40 compute-1 nova_compute[226101]: 2025-12-06 08:08:40.463 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:08:40 compute-1 nova_compute[226101]: 2025-12-06 08:08:40.464 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:08:40 compute-1 nova_compute[226101]: 2025-12-06 08:08:40.567 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:40.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:41 compute-1 ceph-mon[81689]: pgmap v3478: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:08:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2677702468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:08:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:08:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:08:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:08:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:08:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:08:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:41 compute-1 nova_compute[226101]: 2025-12-06 08:08:41.513 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:42.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2671105623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:42 compute-1 nova_compute[226101]: 2025-12-06 08:08:42.879 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Updating instance_info_cache with network_info: [{"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:08:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:42.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:43 compute-1 nova_compute[226101]: 2025-12-06 08:08:43.043 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:08:43 compute-1 nova_compute[226101]: 2025-12-06 08:08:43.043 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:08:43 compute-1 ceph-mon[81689]: pgmap v3479: 305 pgs: 305 active+clean; 308 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Dec 06 08:08:43 compute-1 nova_compute[226101]: 2025-12-06 08:08:43.976 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:43 compute-1 NetworkManager[49031]: <info>  [1765008523.9773] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Dec 06 08:08:43 compute-1 NetworkManager[49031]: <info>  [1765008523.9786] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Dec 06 08:08:44 compute-1 nova_compute[226101]: 2025-12-06 08:08:44.061 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:44 compute-1 ovn_controller[130279]: 2025-12-06T08:08:44Z|00796|binding|INFO|Releasing lport f2a51215-e436-4c42-97c7-62a155b1a3d5 from this chassis (sb_readonly=0)
Dec 06 08:08:44 compute-1 nova_compute[226101]: 2025-12-06 08:08:44.069 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:44.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:44 compute-1 sshd-session[304821]: Connection closed by 165.154.55.146 port 53874 [preauth]
Dec 06 08:08:44 compute-1 nova_compute[226101]: 2025-12-06 08:08:44.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:44 compute-1 nova_compute[226101]: 2025-12-06 08:08:44.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:08:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1075791896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:44.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:45 compute-1 nova_compute[226101]: 2025-12-06 08:08:45.244 226109 DEBUG nova.compute.manager [req-f71219d6-4ac2-416b-9c32-e98725032fae req-41397fe3-ff1a-423b-a5e4-40a1ef6e86da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-changed-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:45 compute-1 nova_compute[226101]: 2025-12-06 08:08:45.245 226109 DEBUG nova.compute.manager [req-f71219d6-4ac2-416b-9c32-e98725032fae req-41397fe3-ff1a-423b-a5e4-40a1ef6e86da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Refreshing instance network info cache due to event network-changed-a19b9290-ac1a-4a64-a4c4-8486895f7d27. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:08:45 compute-1 nova_compute[226101]: 2025-12-06 08:08:45.245 226109 DEBUG oslo_concurrency.lockutils [req-f71219d6-4ac2-416b-9c32-e98725032fae req-41397fe3-ff1a-423b-a5e4-40a1ef6e86da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:08:45 compute-1 nova_compute[226101]: 2025-12-06 08:08:45.245 226109 DEBUG oslo_concurrency.lockutils [req-f71219d6-4ac2-416b-9c32-e98725032fae req-41397fe3-ff1a-423b-a5e4-40a1ef6e86da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:08:45 compute-1 nova_compute[226101]: 2025-12-06 08:08:45.245 226109 DEBUG nova.network.neutron [req-f71219d6-4ac2-416b-9c32-e98725032fae req-41397fe3-ff1a-423b-a5e4-40a1ef6e86da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Refreshing network info cache for port a19b9290-ac1a-4a64-a4c4-8486895f7d27 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:08:45 compute-1 nova_compute[226101]: 2025-12-06 08:08:45.568 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:46 compute-1 ceph-mon[81689]: pgmap v3480: 305 pgs: 305 active+clean; 308 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 83 op/s
Dec 06 08:08:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:46.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:46 compute-1 nova_compute[226101]: 2025-12-06 08:08:46.550 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:46.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:47 compute-1 nova_compute[226101]: 2025-12-06 08:08:47.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:47 compute-1 nova_compute[226101]: 2025-12-06 08:08:47.755 226109 DEBUG nova.network.neutron [req-f71219d6-4ac2-416b-9c32-e98725032fae req-41397fe3-ff1a-423b-a5e4-40a1ef6e86da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Updated VIF entry in instance network info cache for port a19b9290-ac1a-4a64-a4c4-8486895f7d27. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:08:47 compute-1 nova_compute[226101]: 2025-12-06 08:08:47.756 226109 DEBUG nova.network.neutron [req-f71219d6-4ac2-416b-9c32-e98725032fae req-41397fe3-ff1a-423b-a5e4-40a1ef6e86da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Updating instance_info_cache with network_info: [{"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:08:47 compute-1 nova_compute[226101]: 2025-12-06 08:08:47.781 226109 DEBUG oslo_concurrency.lockutils [req-f71219d6-4ac2-416b-9c32-e98725032fae req-41397fe3-ff1a-423b-a5e4-40a1ef6e86da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:08:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:48.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:48 compute-1 ceph-mon[81689]: pgmap v3481: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 102 op/s
Dec 06 08:08:48 compute-1 sudo[305275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:48 compute-1 sudo[305275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:48 compute-1 sudo[305275]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:48 compute-1 podman[305299]: 2025-12-06 08:08:48.940283599 +0000 UTC m=+0.067780445 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:08:48 compute-1 sudo[305314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:08:48 compute-1 sudo[305314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:48 compute-1 sudo[305314]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:48 compute-1 podman[305300]: 2025-12-06 08:08:48.961486017 +0000 UTC m=+0.086888267 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec 06 08:08:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:48.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:49 compute-1 podman[305301]: 2025-12-06 08:08:49.05237087 +0000 UTC m=+0.164269549 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:08:49 compute-1 ceph-mon[81689]: pgmap v3482: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 102 op/s
Dec 06 08:08:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:08:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:08:49 compute-1 nova_compute[226101]: 2025-12-06 08:08:49.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:50.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:50 compute-1 ceph-mon[81689]: pgmap v3483: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 102 op/s
Dec 06 08:08:50 compute-1 nova_compute[226101]: 2025-12-06 08:08:50.571 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:50 compute-1 nova_compute[226101]: 2025-12-06 08:08:50.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:50.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/6449198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2217186686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:51 compute-1 nova_compute[226101]: 2025-12-06 08:08:51.552 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:52 compute-1 ovn_controller[130279]: 2025-12-06T08:08:52Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:b4:12 10.100.0.6
Dec 06 08:08:52 compute-1 ovn_controller[130279]: 2025-12-06T08:08:52Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:b4:12 10.100.0.6
Dec 06 08:08:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:52.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:52 compute-1 ceph-mon[81689]: pgmap v3484: 305 pgs: 305 active+clean; 271 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 139 op/s
Dec 06 08:08:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:52.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:54.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:54 compute-1 ceph-mon[81689]: pgmap v3485: 305 pgs: 305 active+clean; 271 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 2.0 MiB/s wr, 56 op/s
Dec 06 08:08:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1854929407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:54.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:55 compute-1 nova_compute[226101]: 2025-12-06 08:08:55.572 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:56.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:56 compute-1 nova_compute[226101]: 2025-12-06 08:08:56.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:56 compute-1 nova_compute[226101]: 2025-12-06 08:08:56.592 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:56 compute-1 ceph-mon[81689]: pgmap v3486: 305 pgs: 305 active+clean; 232 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Dec 06 08:08:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:56.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:57 compute-1 nova_compute[226101]: 2025-12-06 08:08:57.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:58.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:58 compute-1 nova_compute[226101]: 2025-12-06 08:08:58.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:58 compute-1 ceph-mon[81689]: pgmap v3487: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 06 08:08:58 compute-1 nova_compute[226101]: 2025-12-06 08:08:58.879 226109 INFO nova.compute.manager [None req-f47a4976-a19c-46f3-a2da-d9dee69c68d0 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Get console output
Dec 06 08:08:58 compute-1 nova_compute[226101]: 2025-12-06 08:08:58.886 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:08:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:08:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:58.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:59 compute-1 nova_compute[226101]: 2025-12-06 08:08:59.961 226109 INFO nova.compute.manager [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Rebuilding instance
Dec 06 08:09:00 compute-1 ovn_controller[130279]: 2025-12-06T08:09:00Z|00797|binding|INFO|Releasing lport f2a51215-e436-4c42-97c7-62a155b1a3d5 from this chassis (sb_readonly=0)
Dec 06 08:09:00 compute-1 nova_compute[226101]: 2025-12-06 08:09:00.137 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:00 compute-1 nova_compute[226101]: 2025-12-06 08:09:00.193 226109 DEBUG nova.objects.instance [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'trusted_certs' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:09:00 compute-1 nova_compute[226101]: 2025-12-06 08:09:00.209 226109 DEBUG nova.compute.manager [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:09:00 compute-1 nova_compute[226101]: 2025-12-06 08:09:00.272 226109 DEBUG nova.objects.instance [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_requests' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:09:00 compute-1 nova_compute[226101]: 2025-12-06 08:09:00.286 226109 DEBUG nova.objects.instance [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:09:00 compute-1 nova_compute[226101]: 2025-12-06 08:09:00.301 226109 DEBUG nova.objects.instance [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:09:00 compute-1 nova_compute[226101]: 2025-12-06 08:09:00.311 226109 DEBUG nova.objects.instance [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:09:00 compute-1 nova_compute[226101]: 2025-12-06 08:09:00.323 226109 DEBUG nova.objects.instance [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 08:09:00 compute-1 nova_compute[226101]: 2025-12-06 08:09:00.326 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 08:09:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:00.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:00 compute-1 nova_compute[226101]: 2025-12-06 08:09:00.574 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:00 compute-1 ceph-mon[81689]: pgmap v3488: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 06 08:09:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:01.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:01 compute-1 nova_compute[226101]: 2025-12-06 08:09:01.596 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:01.687 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:01.688 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:01.688 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:02.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:02 compute-1 kernel: tapa19b9290-ac (unregistering): left promiscuous mode
Dec 06 08:09:02 compute-1 ceph-mon[81689]: pgmap v3489: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:09:02 compute-1 NetworkManager[49031]: <info>  [1765008542.8447] device (tapa19b9290-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:09:02 compute-1 nova_compute[226101]: 2025-12-06 08:09:02.908 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:02 compute-1 ovn_controller[130279]: 2025-12-06T08:09:02Z|00798|binding|INFO|Releasing lport a19b9290-ac1a-4a64-a4c4-8486895f7d27 from this chassis (sb_readonly=0)
Dec 06 08:09:02 compute-1 ovn_controller[130279]: 2025-12-06T08:09:02Z|00799|binding|INFO|Setting lport a19b9290-ac1a-4a64-a4c4-8486895f7d27 down in Southbound
Dec 06 08:09:02 compute-1 ovn_controller[130279]: 2025-12-06T08:09:02Z|00800|binding|INFO|Removing iface tapa19b9290-ac ovn-installed in OVS
Dec 06 08:09:02 compute-1 nova_compute[226101]: 2025-12-06 08:09:02.911 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:02.916 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:b4:12 10.100.0.6'], port_security=['fa:16:3e:33:b4:12 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'edc85e5c-cfda-452d-85a8-773bfd82fa80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53432c94-36c7-4c90-893c-889139ac292d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '36e97ae2-762b-40b3-9b92-6fd63e91a888', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03bf42ee-2199-4635-9d21-101eaed85f1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=a19b9290-ac1a-4a64-a4c4-8486895f7d27) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:09:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:02.917 139580 INFO neutron.agent.ovn.metadata.agent [-] Port a19b9290-ac1a-4a64-a4c4-8486895f7d27 in datapath 53432c94-36c7-4c90-893c-889139ac292d unbound from our chassis
Dec 06 08:09:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:02.919 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 53432c94-36c7-4c90-893c-889139ac292d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:09:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:02.920 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3833e8-3f31-4568-b82d-78e0aea53485]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:02.920 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-53432c94-36c7-4c90-893c-889139ac292d namespace which is not needed anymore
Dec 06 08:09:02 compute-1 nova_compute[226101]: 2025-12-06 08:09:02.929 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:02 compute-1 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c2.scope: Deactivated successfully.
Dec 06 08:09:02 compute-1 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c2.scope: Consumed 14.544s CPU time.
Dec 06 08:09:02 compute-1 systemd-machined[190302]: Machine qemu-91-instance-000000c2 terminated.
Dec 06 08:09:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:03.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:03 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305127]: [NOTICE]   (305131) : haproxy version is 2.8.14-c23fe91
Dec 06 08:09:03 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305127]: [NOTICE]   (305131) : path to executable is /usr/sbin/haproxy
Dec 06 08:09:03 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305127]: [WARNING]  (305131) : Exiting Master process...
Dec 06 08:09:03 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305127]: [ALERT]    (305131) : Current worker (305133) exited with code 143 (Terminated)
Dec 06 08:09:03 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305127]: [WARNING]  (305131) : All workers exited. Exiting... (0)
Dec 06 08:09:03 compute-1 systemd[1]: libpod-764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969.scope: Deactivated successfully.
Dec 06 08:09:03 compute-1 podman[305413]: 2025-12-06 08:09:03.190401524 +0000 UTC m=+0.191085137 container died 764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:09:03 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969-userdata-shm.mount: Deactivated successfully.
Dec 06 08:09:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-8cb25ed6c05d8b4e43f388cb9486b726bf63978b3d53d2365027d43fc46b8e25-merged.mount: Deactivated successfully.
Dec 06 08:09:03 compute-1 podman[305413]: 2025-12-06 08:09:03.225391341 +0000 UTC m=+0.226074954 container cleanup 764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:09:03 compute-1 systemd[1]: libpod-conmon-764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969.scope: Deactivated successfully.
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.340 226109 INFO nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Instance shutdown successfully after 3 seconds.
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.345 226109 INFO nova.virt.libvirt.driver [-] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Instance destroyed successfully.
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.349 226109 INFO nova.virt.libvirt.driver [-] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Instance destroyed successfully.
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.350 226109 DEBUG nova.virt.libvirt.vif [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-467980544',display_name='tempest-TestNetworkAdvancedServerOps-server-467980544',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-467980544',id=194,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKEdmmh0VM1iBaqWaluaeF2j9ngqe4njuJkat/3uLhqjfhHgbzuAtvYzitHcxwF2qXQ9t7nIFqmkQTt1LeMRmkC4aUsNP90Y/I2kcGpPkdrg5U1kMm16WjPnKC0Thty5+w==',key_name='tempest-TestNetworkAdvancedServerOps-115705315',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:08:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-87k00tve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:08:59Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=edc85e5c-cfda-452d-85a8-773bfd82fa80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.350 226109 DEBUG nova.network.os_vif_util [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.351 226109 DEBUG nova.network.os_vif_util [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.351 226109 DEBUG os_vif [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.353 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.353 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19b9290-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.354 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.357 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.359 226109 INFO os_vif [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac')
Dec 06 08:09:03 compute-1 podman[305455]: 2025-12-06 08:09:03.370511925 +0000 UTC m=+0.125841939 container remove 764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 08:09:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:03.376 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[284d1e02-601c-4590-8c77-88e6d214a74b]: (4, ('Sat Dec  6 08:09:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d (764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969)\n764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969\nSat Dec  6 08:09:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d (764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969)\n764f35f92488bca061e05de5dd80ab443421fce54f43e2ed80df21666aeaa969\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:03.378 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7cf3a3-ce64-4a4c-a1b3-1f237096bfdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:03.379 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53432c94-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:09:03 compute-1 kernel: tap53432c94-30: left promiscuous mode
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.380 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.392 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:03.396 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1ccf68-ce4f-4d30-9bb0-a7012fd9a581]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:03.416 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[600e034b-b2c0-4f11-9b2c-fd3c89d42ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:03.417 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[04752d82-1ce1-4d95-a997-1540971b6cf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:03.434 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[37ff0eaa-d968-4b5d-b4db-c01d5f082506]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879642, 'reachable_time': 34303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305488, 'error': None, 'target': 'ovnmeta-53432c94-36c7-4c90-893c-889139ac292d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:03 compute-1 systemd[1]: run-netns-ovnmeta\x2d53432c94\x2d36c7\x2d4c90\x2d893c\x2d889139ac292d.mount: Deactivated successfully.
Dec 06 08:09:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:03.438 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-53432c94-36c7-4c90-893c-889139ac292d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:09:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:03.439 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ab9daf02-80b4-4589-8b25-2959c32eacaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.975 226109 DEBUG nova.compute.manager [req-644feeb6-e708-4aed-8488-d93d8b40acc9 req-cd02e715-dd9d-4bab-9575-eeb256f6b8c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-vif-unplugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.977 226109 DEBUG oslo_concurrency.lockutils [req-644feeb6-e708-4aed-8488-d93d8b40acc9 req-cd02e715-dd9d-4bab-9575-eeb256f6b8c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.977 226109 DEBUG oslo_concurrency.lockutils [req-644feeb6-e708-4aed-8488-d93d8b40acc9 req-cd02e715-dd9d-4bab-9575-eeb256f6b8c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.978 226109 DEBUG oslo_concurrency.lockutils [req-644feeb6-e708-4aed-8488-d93d8b40acc9 req-cd02e715-dd9d-4bab-9575-eeb256f6b8c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.978 226109 DEBUG nova.compute.manager [req-644feeb6-e708-4aed-8488-d93d8b40acc9 req-cd02e715-dd9d-4bab-9575-eeb256f6b8c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] No waiting events found dispatching network-vif-unplugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:09:03 compute-1 nova_compute[226101]: 2025-12-06 08:09:03.979 226109 WARNING nova.compute.manager [req-644feeb6-e708-4aed-8488-d93d8b40acc9 req-cd02e715-dd9d-4bab-9575-eeb256f6b8c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received unexpected event network-vif-unplugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 for instance with vm_state active and task_state rebuilding.
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.125 226109 INFO nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Deleting instance files /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80_del
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.125 226109 INFO nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Deletion of /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80_del complete
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.277 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.278 226109 INFO nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Creating image(s)
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.307 226109 DEBUG nova.storage.rbd_utils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.336 226109 DEBUG nova.storage.rbd_utils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.370 226109 DEBUG nova.storage.rbd_utils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.374 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:09:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:04.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.444 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.445 226109 DEBUG oslo_concurrency.lockutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.446 226109 DEBUG oslo_concurrency.lockutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.447 226109 DEBUG oslo_concurrency.lockutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.479 226109 DEBUG nova.storage.rbd_utils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.483 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 edc85e5c-cfda-452d-85a8-773bfd82fa80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.639 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.753 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 edc85e5c-cfda-452d-85a8-773bfd82fa80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.837 226109 DEBUG nova.storage.rbd_utils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] resizing rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:09:04 compute-1 ceph-mon[81689]: pgmap v3490: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 247 KiB/s rd, 193 KiB/s wr, 55 op/s
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.976 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.978 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Ensure instance console log exists: /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.978 226109 DEBUG oslo_concurrency.lockutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.979 226109 DEBUG oslo_concurrency.lockutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.980 226109 DEBUG oslo_concurrency.lockutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.984 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Start _get_guest_xml network_info=[{"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:09:04 compute-1 nova_compute[226101]: 2025-12-06 08:09:04.990 226109 WARNING nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.001 226109 DEBUG nova.virt.libvirt.host [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.003 226109 DEBUG nova.virt.libvirt.host [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:09:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:05.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.010 226109 DEBUG nova.virt.libvirt.host [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.011 226109 DEBUG nova.virt.libvirt.host [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.012 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.013 226109 DEBUG nova.virt.hardware [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.013 226109 DEBUG nova.virt.hardware [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.013 226109 DEBUG nova.virt.hardware [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.014 226109 DEBUG nova.virt.hardware [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.014 226109 DEBUG nova.virt.hardware [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.014 226109 DEBUG nova.virt.hardware [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.015 226109 DEBUG nova.virt.hardware [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.015 226109 DEBUG nova.virt.hardware [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.015 226109 DEBUG nova.virt.hardware [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.016 226109 DEBUG nova.virt.hardware [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.016 226109 DEBUG nova.virt.hardware [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.016 226109 DEBUG nova.objects.instance [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'vcpu_model' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.048 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:09:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:09:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1965870762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.487 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.538 226109 DEBUG nova.storage.rbd_utils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:09:05 compute-1 nova_compute[226101]: 2025-12-06 08:09:05.545 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:09:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1965870762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:09:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:09:06 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2693546772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.095 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.096 226109 DEBUG nova.virt.libvirt.vif [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T08:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-467980544',display_name='tempest-TestNetworkAdvancedServerOps-server-467980544',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-467980544',id=194,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKEdmmh0VM1iBaqWaluaeF2j9ngqe4njuJkat/3uLhqjfhHgbzuAtvYzitHcxwF2qXQ9t7nIFqmkQTt1LeMRmkC4aUsNP90Y/I2kcGpPkdrg5U1kMm16WjPnKC0Thty5+w==',key_name='tempest-TestNetworkAdvancedServerOps-115705315',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:08:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-87k00tve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:09:04Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=edc85e5c-cfda-452d-85a8-773bfd82fa80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.097 226109 DEBUG nova.network.os_vif_util [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.097 226109 DEBUG nova.network.os_vif_util [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
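The Converting/Converted pair above is nova.network.os_vif_util translating Nova's VIF dict into an os-vif versioned object. A rough sketch of building the same VIFOpenVSwitch by hand; the field names are taken from the log output, and the exact constructor keywords should be treated as assumptions about the os_vif object model rather than a verified API reference:

    # Rough sketch: construct the os-vif object that os_vif_util logged
    # above. Field names come from the log line; treat the keywords as
    # assumptions about os_vif's object model.
    from os_vif.objects import vif as osv_vif

    vif = osv_vif.VIFOpenVSwitch(
        id='a19b9290-ac1a-4a64-a4c4-8486895f7d27',
        address='fa:16:3e:33:b4:12',
        bridge_name='br-int',
        vif_name='tapa19b9290-ac',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=True)
    print(vif)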
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.100 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:09:06 compute-1 nova_compute[226101]:   <uuid>edc85e5c-cfda-452d-85a8-773bfd82fa80</uuid>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   <name>instance-000000c2</name>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-467980544</nova:name>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:09:04</nova:creationTime>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <nova:port uuid="a19b9290-ac1a-4a64-a4c4-8486895f7d27">
Dec 06 08:09:06 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <system>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <entry name="serial">edc85e5c-cfda-452d-85a8-773bfd82fa80</entry>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <entry name="uuid">edc85e5c-cfda-452d-85a8-773bfd82fa80</entry>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     </system>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   <os>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   </os>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   <features>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   </features>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/edc85e5c-cfda-452d-85a8-773bfd82fa80_disk">
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       </source>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config">
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       </source>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:09:06 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:33:b4:12"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <target dev="tapa19b9290-ac"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/console.log" append="off"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <video>
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     </video>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:09:06 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:09:06 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:09:06 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:09:06 compute-1 nova_compute[226101]: </domain>
Dec 06 08:09:06 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
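With _get_guest_xml finished, the driver hands the <domain> document above to libvirt to define and boot the guest. Nova wraps this in its own Guest/Host abstractions; stripped of those, the underlying libvirt-python calls look roughly like this (the XML file path is illustrative):

    # Minimal sketch: feed a domain XML like the one logged above to libvirt.
    import libvirt

    xml = open('domain.xml').read()        # e.g. the <domain> document above
    conn = libvirt.open('qemu:///system')  # assumes a local libvirt daemon
    try:
        dom = conn.defineXML(xml)          # persist the domain definition
        dom.create()                       # boot instance-000000c2
    finally:
        conn.close()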
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.101 226109 DEBUG nova.virt.libvirt.vif [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T08:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-467980544',display_name='tempest-TestNetworkAdvancedServerOps-server-467980544',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-467980544',id=194,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKEdmmh0VM1iBaqWaluaeF2j9ngqe4njuJkat/3uLhqjfhHgbzuAtvYzitHcxwF2qXQ9t7nIFqmkQTt1LeMRmkC4aUsNP90Y/I2kcGpPkdrg5U1kMm16WjPnKC0Thty5+w==',key_name='tempest-TestNetworkAdvancedServerOps-115705315',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:08:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-87k00tve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:09:04Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=edc85e5c-cfda-452d-85a8-773bfd82fa80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.102 226109 DEBUG nova.network.os_vif_util [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.102 226109 DEBUG nova.network.os_vif_util [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.103 226109 DEBUG os_vif [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.103 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.104 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.104 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.107 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.108 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19b9290-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.109 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa19b9290-ac, col_values=(('external_ids', {'iface-id': 'a19b9290-ac1a-4a64-a4c4-8486895f7d27', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:b4:12', 'vm-uuid': 'edc85e5c-cfda-452d-85a8-773bfd82fa80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
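The AddPortCommand/DbSetCommand transaction above is os-vif driving ovsdbapp's IDL-based API: add the tap port to br-int, then stamp the Interface row with the external_ids that OVN matches on. A condensed sketch of issuing the same two commands directly (the ovsdb endpoint and timeout are assumptions for illustration):

    # Condensed sketch of the ovsdbapp transaction logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapa19b9290-ac', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapa19b9290-ac',
            ('external_ids',
             {'iface-id': 'a19b9290-ac1a-4a64-a4c4-8486895f7d27',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:33:b4:12'})))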
Dec 06 08:09:06 compute-1 NetworkManager[49031]: <info>  [1765008546.1131] manager: (tapa19b9290-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.117 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.122 226109 DEBUG nova.compute.manager [req-f2449eec-1d8d-4cea-acc2-b31413868bd1 req-021cc4a9-bd8d-4a4c-937d-02dbf269bbb9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.123 226109 DEBUG oslo_concurrency.lockutils [req-f2449eec-1d8d-4cea-acc2-b31413868bd1 req-021cc4a9-bd8d-4a4c-937d-02dbf269bbb9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.123 226109 DEBUG oslo_concurrency.lockutils [req-f2449eec-1d8d-4cea-acc2-b31413868bd1 req-021cc4a9-bd8d-4a4c-937d-02dbf269bbb9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.124 226109 DEBUG oslo_concurrency.lockutils [req-f2449eec-1d8d-4cea-acc2-b31413868bd1 req-021cc4a9-bd8d-4a4c-937d-02dbf269bbb9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.124 226109 DEBUG nova.compute.manager [req-f2449eec-1d8d-4cea-acc2-b31413868bd1 req-021cc4a9-bd8d-4a4c-937d-02dbf269bbb9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] No waiting events found dispatching network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.125 226109 WARNING nova.compute.manager [req-f2449eec-1d8d-4cea-acc2-b31413868bd1 req-021cc4a9-bd8d-4a4c-937d-02dbf269bbb9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received unexpected event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 for instance with vm_state active and task_state rebuild_spawning.
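The lock acquire/release bracketing the event dispatch above is oslo.concurrency's named-lock helper; the WARNING only means the network-vif-plugged event arrived while no waiter was registered for it, which is expected mid-rebuild. A tiny sketch of the same pattern:

    # Tiny sketch of the per-instance named-lock pattern logged above,
    # using oslo_concurrency.lockutils.
    from oslo_concurrency import lockutils

    instance_uuid = 'edc85e5c-cfda-452d-85a8-773bfd82fa80'
    waiting_events = {}  # instance uuid -> list of pending event names

    with lockutils.lock(f'{instance_uuid}-events'):
        pending = waiting_events.setdefault(instance_uuid, [])
        if not pending:
            print('No waiting events found')  # mirrors the lines above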
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.126 226109 INFO os_vif [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac')
Dec 06 08:09:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.213 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.214 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.214 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No VIF found with MAC fa:16:3e:33:b4:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.215 226109 INFO nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Using config drive
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.243 226109 DEBUG nova.storage.rbd_utils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.335 226109 DEBUG nova.objects.instance [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'ec2_ids' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:09:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:06.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.449 226109 DEBUG nova.objects.instance [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'keypairs' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.598 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.932 226109 INFO nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Creating config drive at /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config
Dec 06 08:09:06 compute-1 nova_compute[226101]: 2025-12-06 08:09:06.941 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8x_43193 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:09:06 compute-1 ceph-mon[81689]: pgmap v3491: 305 pgs: 305 active+clean; 165 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 1.2 MiB/s wr, 111 op/s
Dec 06 08:09:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2693546772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:09:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:07.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:07 compute-1 nova_compute[226101]: 2025-12-06 08:09:07.085 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8x_43193" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
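The mkisofs run above packs the staged metadata files into the config-drive ISO. The same invocation with the flags annotated (paths are the ones from the log; mkisofs must be installed):

    # The config-drive build logged above, with the mkisofs flags annotated.
    import subprocess

    inst = '/var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80'
    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', f'{inst}/disk.config',    # output ISO, attached as the cdrom
         '-ldots', '-allow-lowercase',   # relax ISO9660 naming restrictions
         '-allow-multidot', '-l',
         '-publisher',
         'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r',           # Joliet + Rock Ridge extensions
         '-V', 'config-2',               # the volume label cloud-init probes
         '/tmp/tmp8x_43193'],            # staging directory from the log
        check=True)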
Dec 06 08:09:07 compute-1 nova_compute[226101]: 2025-12-06 08:09:07.113 226109 DEBUG nova.storage.rbd_utils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:09:07 compute-1 nova_compute[226101]: 2025-12-06 08:09:07.117 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:09:07 compute-1 nova_compute[226101]: 2025-12-06 08:09:07.498 226109 DEBUG oslo_concurrency.processutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:09:07 compute-1 nova_compute[226101]: 2025-12-06 08:09:07.499 226109 INFO nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Deleting local config drive /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config because it was imported into RBD.
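The rbd import above copies the local ISO into the vms pool so the cdrom device defined in the domain XML can reach it over RBD. The same step can be done with the rados/rbd Python bindings instead of the CLI; a sketch, assuming client.openstack can write to the vms pool:

    # Sketch of the 'rbd import' step above using the Python bindings.
    import rados
    import rbd

    SRC = ('/var/lib/nova/instances/'
           'edc85e5c-cfda-452d-85a8-773bfd82fa80/disk.config')
    NAME = 'edc85e5c-cfda-452d-85a8-773bfd82fa80_disk.config'

    data = open(SRC, 'rb').read()
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        rbd.RBD().create(ioctx, NAME, len(data))  # create the image
        with rbd.Image(ioctx, NAME) as image:
            image.write(data, 0)                  # copy the ISO contents in
    finally:
        ioctx.close()
        cluster.shutdown()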
Dec 06 08:09:07 compute-1 kernel: tapa19b9290-ac: entered promiscuous mode
Dec 06 08:09:07 compute-1 NetworkManager[49031]: <info>  [1765008547.5433] manager: (tapa19b9290-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/367)
Dec 06 08:09:07 compute-1 systemd-udevd[305791]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:09:07 compute-1 ovn_controller[130279]: 2025-12-06T08:09:07Z|00801|binding|INFO|Claiming lport a19b9290-ac1a-4a64-a4c4-8486895f7d27 for this chassis.
Dec 06 08:09:07 compute-1 ovn_controller[130279]: 2025-12-06T08:09:07Z|00802|binding|INFO|a19b9290-ac1a-4a64-a4c4-8486895f7d27: Claiming fa:16:3e:33:b4:12 10.100.0.6
Dec 06 08:09:07 compute-1 nova_compute[226101]: 2025-12-06 08:09:07.587 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.597 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:b4:12 10.100.0.6'], port_security=['fa:16:3e:33:b4:12 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'edc85e5c-cfda-452d-85a8-773bfd82fa80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53432c94-36c7-4c90-893c-889139ac292d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '36e97ae2-762b-40b3-9b92-6fd63e91a888', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03bf42ee-2199-4635-9d21-101eaed85f1c, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=a19b9290-ac1a-4a64-a4c4-8486895f7d27) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:09:07 compute-1 NetworkManager[49031]: <info>  [1765008547.5998] device (tapa19b9290-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.598 139580 INFO neutron.agent.ovn.metadata.agent [-] Port a19b9290-ac1a-4a64-a4c4-8486895f7d27 in datapath 53432c94-36c7-4c90-893c-889139ac292d bound to our chassis
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.600 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 53432c94-36c7-4c90-893c-889139ac292d
Dec 06 08:09:07 compute-1 NetworkManager[49031]: <info>  [1765008547.6010] device (tapa19b9290-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:09:07 compute-1 ovn_controller[130279]: 2025-12-06T08:09:07Z|00803|binding|INFO|Setting lport a19b9290-ac1a-4a64-a4c4-8486895f7d27 ovn-installed in OVS
Dec 06 08:09:07 compute-1 ovn_controller[130279]: 2025-12-06T08:09:07Z|00804|binding|INFO|Setting lport a19b9290-ac1a-4a64-a4c4-8486895f7d27 up in Southbound
Dec 06 08:09:07 compute-1 nova_compute[226101]: 2025-12-06 08:09:07.604 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.611 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b854c5d1-7f4b-4a7d-8672-331a28fa2fc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.611 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap53432c94-31 in ovnmeta-53432c94-36c7-4c90-893c-889139ac292d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
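Provisioning metadata for the datapath means building a veth pair with one end inside the ovnmeta- namespace, which neutron performs through its privsep daemon (the pyroute2 replies that follow). A bare-bones pyroute2 sketch of that step; MAC/IP setup and error handling are omitted:

    # Bare-bones sketch of the VETH-into-namespace step logged above,
    # using pyroute2 directly instead of neutron's privsep wrappers.
    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-53432c94-36c7-4c90-893c-889139ac292d'
    netns.create(NS)  # assumes the namespace does not already exist

    ipr = IPRoute()
    try:
        ipr.link('add', ifname='tap53432c94-30', kind='veth',
                 peer='tap53432c94-31')
        idx = ipr.link_lookup(ifname='tap53432c94-31')[0]
        ipr.link('set', index=idx, net_ns_fd=NS)  # move inner end into NS
    finally:
        ipr.close()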
Dec 06 08:09:07 compute-1 systemd-machined[190302]: New machine qemu-92-instance-000000c2.
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.613 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap53432c94-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.614 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8a7feb-6449-4339-abce-28b2d229670f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.615 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f725af75-2872-4601-861b-3d48e2de06c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 systemd[1]: Started Virtual Machine qemu-92-instance-000000c2.
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.625 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[8c35ac3d-2368-444a-b901-c70f6c56332e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.636 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[748a83f1-c2a0-49f9-b220-dd1a53844b9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.665 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[411df2e4-a4a0-4dc5-bf23-69715c019614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.669 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[aca06c15-7649-49fd-a211-f193cacddc73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 NetworkManager[49031]: <info>  [1765008547.6705] manager: (tap53432c94-30): new Veth device (/org/freedesktop/NetworkManager/Devices/368)
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.701 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[44fd5345-fccc-4a78-b385-a5e600dad371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.704 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5a42c206-b905-46c9-9950-1bac6e959988]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 NetworkManager[49031]: <info>  [1765008547.7227] device (tap53432c94-30): carrier: link connected
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.727 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3415dd-d4d2-4b3a-8288-c21dff3bde91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.744 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[961dc27c-f466-4fbe-b967-f5abdeddf906]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53432c94-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:81:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 239], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 882722, 'reachable_time': 25592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305827, 'error': None, 'target': 'ovnmeta-53432c94-36c7-4c90-893c-889139ac292d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.761 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ffe1b5-50b0-4fb1-8478-0e48b366fd13]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:8182'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 882722, 'tstamp': 882722}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305828, 'error': None, 'target': 'ovnmeta-53432c94-36c7-4c90-893c-889139ac292d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.777 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f73a5adb-8e6e-41d5-b10c-1b3fcd10d86a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53432c94-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:81:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 239], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 882722, 'reachable_time': 25592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305829, 'error': None, 'target': 'ovnmeta-53432c94-36c7-4c90-893c-889139ac292d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.805 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7c6513-5753-421a-a03e-e84dd2fea4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.863 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[de1cc273-e113-482d-9f84-bef1b8c39563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.865 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53432c94-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.865 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.865 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53432c94-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:09:07 compute-1 nova_compute[226101]: 2025-12-06 08:09:07.867 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:07 compute-1 NetworkManager[49031]: <info>  [1765008547.8679] manager: (tap53432c94-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Dec 06 08:09:07 compute-1 kernel: tap53432c94-30: entered promiscuous mode
Dec 06 08:09:07 compute-1 nova_compute[226101]: 2025-12-06 08:09:07.870 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.870 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap53432c94-30, col_values=(('external_ids', {'iface-id': 'f2a51215-e436-4c42-97c7-62a155b1a3d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:09:07 compute-1 ovn_controller[130279]: 2025-12-06T08:09:07Z|00805|binding|INFO|Releasing lport f2a51215-e436-4c42-97c7-62a155b1a3d5 from this chassis (sb_readonly=0)
Dec 06 08:09:07 compute-1 nova_compute[226101]: 2025-12-06 08:09:07.873 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.873 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/53432c94-36c7-4c90-893c-889139ac292d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/53432c94-36c7-4c90-893c-889139ac292d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.874 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9102de22-47c5-440e-ad9b-eb9fba946503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.876 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-53432c94-36c7-4c90-893c-889139ac292d
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/53432c94-36c7-4c90-893c-889139ac292d.pid.haproxy
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 53432c94-36c7-4c90-893c-889139ac292d
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
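
The generated config above makes haproxy, running inside the ovnmeta- namespace, listen on the link-local metadata address 169.254.169.254:80, tag each request with the network UUID via the X-OVN-Network-ID header, and forward it to the metadata agent backend at /var/lib/neutron/metadata_proxy. A guest on that network reaches it with a plain HTTP request; a hedged Python example (assumes the standard OpenStack metadata paths and the requests package inside the guest, neither of which appears in the log itself):

    import requests

    BASE = 'http://169.254.169.254'   # served by the haproxy listener above

    # OpenStack-native and EC2-style endpoints are both routed to the agent.
    print(requests.get(f'{BASE}/openstack/latest/meta_data.json', timeout=5).json())
    print(requests.get(f'{BASE}/latest/meta-data/instance-id', timeout=5).text)
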
Dec 06 08:09:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:07.877 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-53432c94-36c7-4c90-893c-889139ac292d', 'env', 'PROCESS_TAG=haproxy-53432c94-36c7-4c90-893c-889139ac292d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/53432c94-36c7-4c90-893c-889139ac292d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:09:07 compute-1 nova_compute[226101]: 2025-12-06 08:09:07.885 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.239 226109 DEBUG nova.compute.manager [req-284014c2-f5be-4372-9393-5c1da55da733 req-3fcfb35c-5cf1-420b-aef3-8ef5137d1870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.240 226109 DEBUG oslo_concurrency.lockutils [req-284014c2-f5be-4372-9393-5c1da55da733 req-3fcfb35c-5cf1-420b-aef3-8ef5137d1870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.240 226109 DEBUG oslo_concurrency.lockutils [req-284014c2-f5be-4372-9393-5c1da55da733 req-3fcfb35c-5cf1-420b-aef3-8ef5137d1870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.241 226109 DEBUG oslo_concurrency.lockutils [req-284014c2-f5be-4372-9393-5c1da55da733 req-3fcfb35c-5cf1-420b-aef3-8ef5137d1870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.241 226109 DEBUG nova.compute.manager [req-284014c2-f5be-4372-9393-5c1da55da733 req-3fcfb35c-5cf1-420b-aef3-8ef5137d1870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] No waiting events found dispatching network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.241 226109 WARNING nova.compute.manager [req-284014c2-f5be-4372-9393-5c1da55da733 req-3fcfb35c-5cf1-420b-aef3-8ef5137d1870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received unexpected event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 for instance with vm_state active and task_state rebuild_spawning.
Dec 06 08:09:08 compute-1 podman[305898]: 2025-12-06 08:09:08.417899758 +0000 UTC m=+0.230224944 container create 19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 08:09:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:08.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:08 compute-1 systemd[1]: Started libpod-conmon-19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03.scope.
Dec 06 08:09:08 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:09:08 compute-1 podman[305898]: 2025-12-06 08:09:08.389688393 +0000 UTC m=+0.202013609 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:09:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26847922542ea8943d1d7d89fc1b41e4a900d993f1820cc85f0ef9194b959ad9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.490 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for edc85e5c-cfda-452d-85a8-773bfd82fa80 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.491 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008548.4897718, edc85e5c-cfda-452d-85a8-773bfd82fa80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.491 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] VM Resumed (Lifecycle Event)
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.494 226109 DEBUG nova.compute.manager [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.494 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.497 226109 INFO nova.virt.libvirt.driver [-] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Instance spawned successfully.
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.497 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:09:08 compute-1 podman[305898]: 2025-12-06 08:09:08.498904057 +0000 UTC m=+0.311229263 container init 19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 08:09:08 compute-1 podman[305898]: 2025-12-06 08:09:08.504692852 +0000 UTC m=+0.317018038 container start 19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:09:08 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305920]: [NOTICE]   (305924) : New worker (305926) forked
Dec 06 08:09:08 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305920]: [NOTICE]   (305924) : Loading success.
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.525 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.530 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.533 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.534 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.534 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.535 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.535 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.535 226109 DEBUG nova.virt.libvirt.driver [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.585 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.585 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008548.4919677, edc85e5c-cfda-452d-85a8-773bfd82fa80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.585 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] VM Started (Lifecycle Event)
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.621 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.625 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.632 226109 DEBUG nova.compute.manager [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.645 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.695 226109 DEBUG oslo_concurrency.lockutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.696 226109 DEBUG oslo_concurrency.lockutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.696 226109 DEBUG nova.objects.instance [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 08:09:08 compute-1 nova_compute[226101]: 2025-12-06 08:09:08.767 226109 DEBUG oslo_concurrency.lockutils [None req-6b23c8d2-abf8-4a7a-b781-abc2f9106579 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:09.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:09 compute-1 ceph-mon[81689]: pgmap v3492: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:09:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4283153530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:09:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4283153530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:09:09 compute-1 nova_compute[226101]: 2025-12-06 08:09:09.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:09 compute-1 nova_compute[226101]: 2025-12-06 08:09:09.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:09:09 compute-1 nova_compute[226101]: 2025-12-06 08:09:09.608 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:09:10 compute-1 nova_compute[226101]: 2025-12-06 08:09:10.358 226109 DEBUG nova.compute.manager [req-64dd0f08-7e9a-4797-b936-bf12d8335f7a req-38b6ef55-9406-4187-b27a-ab5bd51cd763 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:09:10 compute-1 nova_compute[226101]: 2025-12-06 08:09:10.358 226109 DEBUG oslo_concurrency.lockutils [req-64dd0f08-7e9a-4797-b936-bf12d8335f7a req-38b6ef55-9406-4187-b27a-ab5bd51cd763 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:10 compute-1 nova_compute[226101]: 2025-12-06 08:09:10.359 226109 DEBUG oslo_concurrency.lockutils [req-64dd0f08-7e9a-4797-b936-bf12d8335f7a req-38b6ef55-9406-4187-b27a-ab5bd51cd763 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:10 compute-1 nova_compute[226101]: 2025-12-06 08:09:10.359 226109 DEBUG oslo_concurrency.lockutils [req-64dd0f08-7e9a-4797-b936-bf12d8335f7a req-38b6ef55-9406-4187-b27a-ab5bd51cd763 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:10 compute-1 nova_compute[226101]: 2025-12-06 08:09:10.359 226109 DEBUG nova.compute.manager [req-64dd0f08-7e9a-4797-b936-bf12d8335f7a req-38b6ef55-9406-4187-b27a-ab5bd51cd763 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] No waiting events found dispatching network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:09:10 compute-1 nova_compute[226101]: 2025-12-06 08:09:10.359 226109 WARNING nova.compute.manager [req-64dd0f08-7e9a-4797-b936-bf12d8335f7a req-38b6ef55-9406-4187-b27a-ab5bd51cd763 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received unexpected event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 for instance with vm_state active and task_state None.
Dec 06 08:09:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:10.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:10 compute-1 nova_compute[226101]: 2025-12-06 08:09:10.781 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:11.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:11 compute-1 nova_compute[226101]: 2025-12-06 08:09:11.112 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:11 compute-1 ceph-mon[81689]: pgmap v3493: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 08:09:11 compute-1 nova_compute[226101]: 2025-12-06 08:09:11.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:11 compute-1 nova_compute[226101]: 2025-12-06 08:09:11.643 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:12.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:12 compute-1 ceph-mon[81689]: pgmap v3494: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Dec 06 08:09:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:13.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:13 compute-1 sshd-session[305935]: Received disconnect from 154.209.4.183 port 36832:11: Bye Bye [preauth]
Dec 06 08:09:13 compute-1 sshd-session[305935]: Disconnected from authenticating user root 154.209.4.183 port 36832 [preauth]
Dec 06 08:09:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:14.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:15 compute-1 ceph-mon[81689]: pgmap v3495: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:09:16 compute-1 sshd-session[305490]: ssh_dispatch_run_fatal: Connection from 14.103.118.198 port 58774: Connection timed out [preauth]
Dec 06 08:09:16 compute-1 nova_compute[226101]: 2025-12-06 08:09:16.114 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:16.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:16 compute-1 nova_compute[226101]: 2025-12-06 08:09:16.690 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:17.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:17 compute-1 ceph-mon[81689]: pgmap v3496: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:09:18 compute-1 sshd-session[305939]: Received disconnect from 186.96.151.198 port 46382:11: Bye Bye [preauth]
Dec 06 08:09:18 compute-1 sshd-session[305939]: Disconnected from authenticating user root 186.96.151.198 port 46382 [preauth]
Dec 06 08:09:18 compute-1 sshd-session[305937]: Received disconnect from 124.18.141.70 port 38666:11: Bye Bye [preauth]
Dec 06 08:09:18 compute-1 sshd-session[305937]: Disconnected from authenticating user root 124.18.141.70 port 38666 [preauth]
Dec 06 08:09:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:19.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:19 compute-1 ceph-mon[81689]: pgmap v3497: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 766 KiB/s wr, 74 op/s
Dec 06 08:09:19 compute-1 podman[305942]: 2025-12-06 08:09:19.107618624 +0000 UTC m=+0.086129816 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 08:09:19 compute-1 podman[305941]: 2025-12-06 08:09:19.117681603 +0000 UTC m=+0.091278664 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:09:19 compute-1 podman[305978]: 2025-12-06 08:09:19.22734261 +0000 UTC m=+0.115833963 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
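
The three health_status=healthy records above come from podman's periodic container healthchecks: per the logged config_data, each edpm-managed container mounts its check script read-only at /openstack/healthcheck, and podman runs it and records the outcome. The same check can be triggered by hand; a sketch assuming the podman CLI is on PATH (container name taken from the log above):

    import subprocess

    # 'podman healthcheck run' executes the container's configured check once;
    # exit status 0 corresponds to health_status=healthy in the log.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
        capture_output=True, text=True,
    )
    print(result.returncode, result.stdout.strip())
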
Dec 06 08:09:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:20.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:21.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:21 compute-1 ceph-mon[81689]: pgmap v3498: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:09:21 compute-1 nova_compute[226101]: 2025-12-06 08:09:21.115 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:21 compute-1 nova_compute[226101]: 2025-12-06 08:09:21.691 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/600799336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:22 compute-1 ovn_controller[130279]: 2025-12-06T08:09:22Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:b4:12 10.100.0.6
Dec 06 08:09:22 compute-1 ovn_controller[130279]: 2025-12-06T08:09:22Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:b4:12 10.100.0.6
Dec 06 08:09:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:22.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:23 compute-1 ceph-mon[81689]: pgmap v3499: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 899 KiB/s wr, 91 op/s
Dec 06 08:09:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:24.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:25 compute-1 ceph-mon[81689]: pgmap v3500: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 887 KiB/s wr, 17 op/s
Dec 06 08:09:25 compute-1 sshd-session[306008]: Received disconnect from 101.100.194.199 port 43252:11: Bye Bye [preauth]
Dec 06 08:09:25 compute-1 sshd-session[306008]: Disconnected from authenticating user root 101.100.194.199 port 43252 [preauth]
Dec 06 08:09:26 compute-1 nova_compute[226101]: 2025-12-06 08:09:26.118 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:26.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/295627267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:09:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1005039998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:09:26 compute-1 nova_compute[226101]: 2025-12-06 08:09:26.746 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:27.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:27 compute-1 ceph-mon[81689]: pgmap v3501: 305 pgs: 305 active+clean; 231 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 205 KiB/s rd, 3.7 MiB/s wr, 70 op/s
Dec 06 08:09:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:28.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:29 compute-1 nova_compute[226101]: 2025-12-06 08:09:29.038 226109 INFO nova.compute.manager [None req-75bd9799-3c7d-4858-9c48-7e0e992a9528 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Get console output
Dec 06 08:09:29 compute-1 nova_compute[226101]: 2025-12-06 08:09:29.044 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:09:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:29.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:29 compute-1 ceph-mon[81689]: pgmap v3502: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Dec 06 08:09:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:09:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.5 total, 600.0 interval
                                           Cumulative writes: 14K writes, 76K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1513 writes, 7491 keys, 1513 commit groups, 1.0 writes per commit group, ingest: 15.53 MB, 0.03 MB/s
                                           Interval WAL: 1513 writes, 1513 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     16.2      5.83              0.28        48    0.122       0      0       0.0       0.0
                                             L6      1/0   10.41 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2     42.6     36.4     13.51              1.35        47    0.288    367K    25K       0.0       0.0
                                            Sum      1/0   10.41 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     29.8     30.3     19.35              1.63        95    0.204    367K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9     80.9     79.3      1.02              0.26        12    0.085     63K   3035       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     42.6     36.4     13.51              1.35        47    0.288    367K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     16.3      5.79              0.28        47    0.123       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.092, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.57 GB write, 0.10 MB/s write, 0.56 GB read, 0.10 MB/s read, 19.3 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 62.91 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000545 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3590,60.29 MB,19.8339%) FilterBlock(95,1005.80 KB,0.3231%) IndexBlock(95,1.63 MB,0.536321%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
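
The ceph-mon RocksDB dump above is internally consistent and can be spot-checked: cumulative ingest over uptime reproduces the logged write rate, and compaction write volume over flushed volume approximates the Sum row's W-Amp. A quick arithmetic sketch (figures transcribed from the dump; rounding accounts for the small residuals, and the W-Amp ratio is an approximation of RocksDB's own accounting):

    # Figures transcribed from the DB Stats / Compaction Stats blocks above.
    ingest_gb = 0.15          # Cumulative writes ... ingest: 0.15 GB
    uptime_s = 6000.5         # Uptime(secs): 6000.5 total
    print(ingest_gb * 1024 / uptime_s)    # ~0.026 MB/s, logged as 0.03 MB/s

    flush_gb = 0.092          # Flush(GB): cumulative 0.092
    compact_write_gb = 0.57   # Cumulative compaction: 0.57 GB write
    print(compact_write_gb / flush_gb)    # ~6.2, the Sum row's W-Amp
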
Dec 06 08:09:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:30.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:30 compute-1 ceph-mon[81689]: pgmap v3503: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Dec 06 08:09:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:31.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.119 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.909 226109 DEBUG nova.compute.manager [req-7a4f0395-4f1c-4ef1-b7d8-12cf3ab83ce5 req-ea56b3c8-60b1-455b-8600-e9bd152befd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-changed-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.910 226109 DEBUG nova.compute.manager [req-7a4f0395-4f1c-4ef1-b7d8-12cf3ab83ce5 req-ea56b3c8-60b1-455b-8600-e9bd152befd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Refreshing instance network info cache due to event network-changed-a19b9290-ac1a-4a64-a4c4-8486895f7d27. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.911 226109 DEBUG oslo_concurrency.lockutils [req-7a4f0395-4f1c-4ef1-b7d8-12cf3ab83ce5 req-ea56b3c8-60b1-455b-8600-e9bd152befd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.911 226109 DEBUG oslo_concurrency.lockutils [req-7a4f0395-4f1c-4ef1-b7d8-12cf3ab83ce5 req-ea56b3c8-60b1-455b-8600-e9bd152befd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.912 226109 DEBUG nova.network.neutron [req-7a4f0395-4f1c-4ef1-b7d8-12cf3ab83ce5 req-ea56b3c8-60b1-455b-8600-e9bd152befd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Refreshing network info cache for port a19b9290-ac1a-4a64-a4c4-8486895f7d27 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.989 226109 DEBUG oslo_concurrency.lockutils [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.990 226109 DEBUG oslo_concurrency.lockutils [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.990 226109 DEBUG oslo_concurrency.lockutils [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.990 226109 DEBUG oslo_concurrency.lockutils [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.990 226109 DEBUG oslo_concurrency.lockutils [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.991 226109 INFO nova.compute.manager [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Terminating instance
Dec 06 08:09:31 compute-1 nova_compute[226101]: 2025-12-06 08:09:31.992 226109 DEBUG nova.compute.manager [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.052 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.054 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.053 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:32 compute-1 kernel: tapa19b9290-ac (unregistering): left promiscuous mode
Dec 06 08:09:32 compute-1 NetworkManager[49031]: <info>  [1765008572.0750] device (tapa19b9290-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:09:32 compute-1 ovn_controller[130279]: 2025-12-06T08:09:32Z|00806|binding|INFO|Releasing lport a19b9290-ac1a-4a64-a4c4-8486895f7d27 from this chassis (sb_readonly=0)
Dec 06 08:09:32 compute-1 ovn_controller[130279]: 2025-12-06T08:09:32Z|00807|binding|INFO|Setting lport a19b9290-ac1a-4a64-a4c4-8486895f7d27 down in Southbound
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.080 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:32 compute-1 ovn_controller[130279]: 2025-12-06T08:09:32Z|00808|binding|INFO|Removing iface tapa19b9290-ac ovn-installed in OVS
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.083 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.086 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:b4:12 10.100.0.6'], port_security=['fa:16:3e:33:b4:12 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'edc85e5c-cfda-452d-85a8-773bfd82fa80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53432c94-36c7-4c90-893c-889139ac292d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '36e97ae2-762b-40b3-9b92-6fd63e91a888', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03bf42ee-2199-4635-9d21-101eaed85f1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=a19b9290-ac1a-4a64-a4c4-8486895f7d27) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.087 139580 INFO neutron.agent.ovn.metadata.agent [-] Port a19b9290-ac1a-4a64-a4c4-8486895f7d27 in datapath 53432c94-36c7-4c90-893c-889139ac292d unbound from our chassis
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.088 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 53432c94-36c7-4c90-893c-889139ac292d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.090 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[98116095-afdf-4d30-90ed-db14f9e75ccc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.090 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-53432c94-36c7-4c90-893c-889139ac292d namespace which is not needed anymore
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.124 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:32 compute-1 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000c2.scope: Deactivated successfully.
Dec 06 08:09:32 compute-1 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000c2.scope: Consumed 13.726s CPU time.
Dec 06 08:09:32 compute-1 systemd-machined[190302]: Machine qemu-92-instance-000000c2 terminated.
Dec 06 08:09:32 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305920]: [NOTICE]   (305924) : haproxy version is 2.8.14-c23fe91
Dec 06 08:09:32 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305920]: [NOTICE]   (305924) : path to executable is /usr/sbin/haproxy
Dec 06 08:09:32 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305920]: [WARNING]  (305924) : Exiting Master process...
Dec 06 08:09:32 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305920]: [WARNING]  (305924) : Exiting Master process...
Dec 06 08:09:32 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305920]: [ALERT]    (305924) : Current worker (305926) exited with code 143 (Terminated)
Dec 06 08:09:32 compute-1 neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d[305920]: [WARNING]  (305924) : All workers exited. Exiting... (0)
Dec 06 08:09:32 compute-1 systemd[1]: libpod-19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03.scope: Deactivated successfully.
Dec 06 08:09:32 compute-1 podman[306035]: 2025-12-06 08:09:32.23230522 +0000 UTC m=+0.049421194 container died 19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.232 226109 INFO nova.virt.libvirt.driver [-] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Instance destroyed successfully.
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.233 226109 DEBUG nova.objects.instance [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid edc85e5c-cfda-452d-85a8-773bfd82fa80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:09:32 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03-userdata-shm.mount: Deactivated successfully.
Dec 06 08:09:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-26847922542ea8943d1d7d89fc1b41e4a900d993f1820cc85f0ef9194b959ad9-merged.mount: Deactivated successfully.
Dec 06 08:09:32 compute-1 podman[306035]: 2025-12-06 08:09:32.271018707 +0000 UTC m=+0.088134681 container cleanup 19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:09:32 compute-1 systemd[1]: libpod-conmon-19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03.scope: Deactivated successfully.
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.327 226109 DEBUG nova.virt.libvirt.vif [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T08:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-467980544',display_name='tempest-TestNetworkAdvancedServerOps-server-467980544',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-467980544',id=194,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKEdmmh0VM1iBaqWaluaeF2j9ngqe4njuJkat/3uLhqjfhHgbzuAtvYzitHcxwF2qXQ9t7nIFqmkQTt1LeMRmkC4aUsNP90Y/I2kcGpPkdrg5U1kMm16WjPnKC0Thty5+w==',key_name='tempest-TestNetworkAdvancedServerOps-115705315',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:09:08Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-87k00tve',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:09:08Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=edc85e5c-cfda-452d-85a8-773bfd82fa80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.328 226109 DEBUG nova.network.os_vif_util [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.328 226109 DEBUG nova.network.os_vif_util [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.329 226109 DEBUG os_vif [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.330 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.331 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19b9290-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.332 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.335 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:09:32 compute-1 podman[306075]: 2025-12-06 08:09:32.337379753 +0000 UTC m=+0.044989635 container remove 19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.337 226109 INFO os_vif [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:b4:12,bridge_name='br-int',has_traffic_filtering=True,id=a19b9290-ac1a-4a64-a4c4-8486895f7d27,network=Network(53432c94-36c7-4c90-893c-889139ac292d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa19b9290-ac')
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.347 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[affad3ea-08c7-438d-bfd8-9c589f7ed899]: (4, ('Sat Dec  6 08:09:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d (19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03)\n19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03\nSat Dec  6 08:09:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-53432c94-36c7-4c90-893c-889139ac292d (19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03)\n19ba30273573513f6eff4a1b1dca35a67c5541f847abcb3c048fc31c13bfed03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.348 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba72708-e017-46bd-8ea2-69c4367311f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.350 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53432c94-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:09:32 compute-1 kernel: tap53432c94-30: left promiscuous mode
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.355 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.365 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.366 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.368 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a48efa-e83b-4c1c-a571-1329ec5fcb69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.385 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[729a8e23-4079-4195-be8f-6f2de620ddc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.386 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[17e2fc38-e592-45e8-9093-667fe2580a61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.400 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[eff7a572-a367-428d-a176-38e77276cda2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 882716, 'reachable_time': 26373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306108, 'error': None, 'target': 'ovnmeta-53432c94-36c7-4c90-893c-889139ac292d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:32 compute-1 systemd[1]: run-netns-ovnmeta\x2d53432c94\x2d36c7\x2d4c90\x2d893c\x2d889139ac292d.mount: Deactivated successfully.
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.404 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-53432c94-36c7-4c90-893c-889139ac292d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:09:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:32.404 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6ec863-a77c-4e5f-a21f-030f51166162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:09:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:32.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.525 226109 DEBUG nova.compute.manager [req-d52ea848-a73e-4781-b07a-b2dc79b994fc req-96ea5a53-a111-4005-8eed-511a765cd4fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-vif-unplugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.525 226109 DEBUG oslo_concurrency.lockutils [req-d52ea848-a73e-4781-b07a-b2dc79b994fc req-96ea5a53-a111-4005-8eed-511a765cd4fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.526 226109 DEBUG oslo_concurrency.lockutils [req-d52ea848-a73e-4781-b07a-b2dc79b994fc req-96ea5a53-a111-4005-8eed-511a765cd4fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.526 226109 DEBUG oslo_concurrency.lockutils [req-d52ea848-a73e-4781-b07a-b2dc79b994fc req-96ea5a53-a111-4005-8eed-511a765cd4fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.526 226109 DEBUG nova.compute.manager [req-d52ea848-a73e-4781-b07a-b2dc79b994fc req-96ea5a53-a111-4005-8eed-511a765cd4fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] No waiting events found dispatching network-vif-unplugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:09:32 compute-1 nova_compute[226101]: 2025-12-06 08:09:32.526 226109 DEBUG nova.compute.manager [req-d52ea848-a73e-4781-b07a-b2dc79b994fc req-96ea5a53-a111-4005-8eed-511a765cd4fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-vif-unplugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:09:32 compute-1 ceph-mon[81689]: pgmap v3504: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 445 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Dec 06 08:09:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:33.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.131 226109 INFO nova.virt.libvirt.driver [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Deleting instance files /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80_del
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.132 226109 INFO nova.virt.libvirt.driver [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Deletion of /var/lib/nova/instances/edc85e5c-cfda-452d-85a8-773bfd82fa80_del complete
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.187 226109 INFO nova.compute.manager [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Took 1.19 seconds to destroy the instance on the hypervisor.
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.187 226109 DEBUG oslo.service.loopingcall [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.188 226109 DEBUG nova.compute.manager [-] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.188 226109 DEBUG nova.network.neutron [-] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.645 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.674 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.674 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.675 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.675 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:09:33 compute-1 nova_compute[226101]: 2025-12-06 08:09:33.675 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:09:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:09:34 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1780192475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.136 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.321 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.322 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4296MB free_disk=20.92186737060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.322 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.323 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.455 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance edc85e5c-cfda-452d-85a8-773bfd82fa80 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.456 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.457 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:09:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:34.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.615 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.672 226109 DEBUG nova.compute.manager [req-0f679a33-9a4e-4dbf-85f2-0433457dcb26 req-649b7961-467d-4234-965c-16d8dfbe8d7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.673 226109 DEBUG oslo_concurrency.lockutils [req-0f679a33-9a4e-4dbf-85f2-0433457dcb26 req-649b7961-467d-4234-965c-16d8dfbe8d7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.673 226109 DEBUG oslo_concurrency.lockutils [req-0f679a33-9a4e-4dbf-85f2-0433457dcb26 req-649b7961-467d-4234-965c-16d8dfbe8d7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.673 226109 DEBUG oslo_concurrency.lockutils [req-0f679a33-9a4e-4dbf-85f2-0433457dcb26 req-649b7961-467d-4234-965c-16d8dfbe8d7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.673 226109 DEBUG nova.compute.manager [req-0f679a33-9a4e-4dbf-85f2-0433457dcb26 req-649b7961-467d-4234-965c-16d8dfbe8d7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] No waiting events found dispatching network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.674 226109 WARNING nova.compute.manager [req-0f679a33-9a4e-4dbf-85f2-0433457dcb26 req-649b7961-467d-4234-965c-16d8dfbe8d7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received unexpected event network-vif-plugged-a19b9290-ac1a-4a64-a4c4-8486895f7d27 for instance with vm_state active and task_state deleting.
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.825 226109 DEBUG nova.network.neutron [req-7a4f0395-4f1c-4ef1-b7d8-12cf3ab83ce5 req-ea56b3c8-60b1-455b-8600-e9bd152befd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Updated VIF entry in instance network info cache for port a19b9290-ac1a-4a64-a4c4-8486895f7d27. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.826 226109 DEBUG nova.network.neutron [req-7a4f0395-4f1c-4ef1-b7d8-12cf3ab83ce5 req-ea56b3c8-60b1-455b-8600-e9bd152befd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Updating instance_info_cache with network_info: [{"id": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "address": "fa:16:3e:33:b4:12", "network": {"id": "53432c94-36c7-4c90-893c-889139ac292d", "bridge": "br-int", "label": "tempest-network-smoke--1307142518", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19b9290-ac", "ovs_interfaceid": "a19b9290-ac1a-4a64-a4c4-8486895f7d27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:09:34 compute-1 ceph-mon[81689]: pgmap v3505: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 3.1 MiB/s wr, 85 op/s
Dec 06 08:09:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1780192475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.891 226109 DEBUG oslo_concurrency.lockutils [req-7a4f0395-4f1c-4ef1-b7d8-12cf3ab83ce5 req-ea56b3c8-60b1-455b-8600-e9bd152befd0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-edc85e5c-cfda-452d-85a8-773bfd82fa80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.892 226109 DEBUG nova.network.neutron [-] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.991 226109 DEBUG nova.compute.manager [req-8d18dcf6-04e5-493e-9606-4bd2f04206e9 req-4d75ef4d-38b1-4f09-a506-5d5c27724b86 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Received event network-vif-deleted-a19b9290-ac1a-4a64-a4c4-8486895f7d27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.992 226109 INFO nova.compute.manager [req-8d18dcf6-04e5-493e-9606-4bd2f04206e9 req-4d75ef4d-38b1-4f09-a506-5d5c27724b86 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Neutron deleted interface a19b9290-ac1a-4a64-a4c4-8486895f7d27; detaching it from the instance and deleting it from the info cache
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.992 226109 DEBUG nova.network.neutron [req-8d18dcf6-04e5-493e-9606-4bd2f04206e9 req-4d75ef4d-38b1-4f09-a506-5d5c27724b86 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:09:34 compute-1 nova_compute[226101]: 2025-12-06 08:09:34.995 226109 INFO nova.compute.manager [-] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Took 1.81 seconds to deallocate network for instance.
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.051 226109 DEBUG nova.compute.manager [req-8d18dcf6-04e5-493e-9606-4bd2f04206e9 req-4d75ef4d-38b1-4f09-a506-5d5c27724b86 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Detach interface failed, port_id=a19b9290-ac1a-4a64-a4c4-8486895f7d27, reason: Instance edc85e5c-cfda-452d-85a8-773bfd82fa80 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 06 08:09:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:35.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:09:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/136787216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.079 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.084 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.094 226109 DEBUG oslo_concurrency.lockutils [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.098 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.117 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.117 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.118 226109 DEBUG oslo_concurrency.lockutils [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.119 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.119 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.164 226109 DEBUG oslo_concurrency.processutils [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:09:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:09:35 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/512540478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.587 226109 DEBUG oslo_concurrency.processutils [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.592 226109 DEBUG nova.compute.provider_tree [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.608 226109 DEBUG nova.scheduler.client.report [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.626 226109 DEBUG oslo_concurrency.lockutils [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.651 226109 INFO nova.scheduler.client.report [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Deleted allocations for instance edc85e5c-cfda-452d-85a8-773bfd82fa80
Dec 06 08:09:35 compute-1 nova_compute[226101]: 2025-12-06 08:09:35.728 226109 DEBUG oslo_concurrency.lockutils [None req-74eccebd-efe9-4380-86f0-3f8c8491b85d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "edc85e5c-cfda-452d-85a8-773bfd82fa80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/136787216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/512540478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:36.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:36 compute-1 sshd[164848]: Timeout before authentication for connection from 154.209.4.183 to 38.102.83.204, pid = 304294
Dec 06 08:09:36 compute-1 nova_compute[226101]: 2025-12-06 08:09:36.852 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:36 compute-1 ceph-mon[81689]: pgmap v3506: 305 pgs: 305 active+clean; 184 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 165 op/s
Dec 06 08:09:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:37.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:37 compute-1 nova_compute[226101]: 2025-12-06 08:09:37.332 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:37 compute-1 sshd-session[306175]: Connection reset by authenticating user games 45.135.232.92 port 30888 [preauth]
Dec 06 08:09:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:38.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:38 compute-1 ceph-mon[81689]: pgmap v3507: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 255 KiB/s wr, 116 op/s
Dec 06 08:09:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:39.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:39 compute-1 sshd-session[306179]: Connection reset by authenticating user root 45.135.232.92 port 30898 [preauth]
Dec 06 08:09:40 compute-1 sshd-session[306181]: Received disconnect from 154.219.116.39 port 42478:11: Bye Bye [preauth]
Dec 06 08:09:40 compute-1 sshd-session[306181]: Disconnected from authenticating user root 154.219.116.39 port 42478 [preauth]
Dec 06 08:09:40 compute-1 nova_compute[226101]: 2025-12-06 08:09:40.079 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:40 compute-1 nova_compute[226101]: 2025-12-06 08:09:40.079 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:09:40 compute-1 nova_compute[226101]: 2025-12-06 08:09:40.079 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:09:40 compute-1 nova_compute[226101]: 2025-12-06 08:09:40.337 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:09:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:40.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:40 compute-1 ceph-mon[81689]: pgmap v3508: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 101 op/s
Dec 06 08:09:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:09:41.056 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:09:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:41.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:41 compute-1 nova_compute[226101]: 2025-12-06 08:09:41.853 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:42 compute-1 nova_compute[226101]: 2025-12-06 08:09:42.335 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:42.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:43 compute-1 ceph-mon[81689]: pgmap v3509: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 128 op/s
Dec 06 08:09:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1139757730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1052256903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:44.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:44 compute-1 nova_compute[226101]: 2025-12-06 08:09:44.602 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:44 compute-1 nova_compute[226101]: 2025-12-06 08:09:44.761 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:44 compute-1 sshd-session[306183]: Invalid user user from 45.135.232.92 port 30904
Dec 06 08:09:45 compute-1 sshd-session[306183]: Connection reset by invalid user user 45.135.232.92 port 30904 [preauth]
Dec 06 08:09:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:45.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:45 compute-1 ceph-mon[81689]: pgmap v3510: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.3 KiB/s wr, 110 op/s
Dec 06 08:09:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2057202737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:45 compute-1 sshd-session[306185]: Received disconnect from 106.51.92.114 port 35257:11: Bye Bye [preauth]
Dec 06 08:09:45 compute-1 sshd-session[306185]: Disconnected from authenticating user root 106.51.92.114 port 35257 [preauth]
Dec 06 08:09:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:46.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:46 compute-1 nova_compute[226101]: 2025-12-06 08:09:46.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:46 compute-1 nova_compute[226101]: 2025-12-06 08:09:46.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:09:46 compute-1 nova_compute[226101]: 2025-12-06 08:09:46.893 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:47.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:47 compute-1 nova_compute[226101]: 2025-12-06 08:09:47.230 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008572.2295716, edc85e5c-cfda-452d-85a8-773bfd82fa80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:09:47 compute-1 nova_compute[226101]: 2025-12-06 08:09:47.231 226109 INFO nova.compute.manager [-] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] VM Stopped (Lifecycle Event)
Dec 06 08:09:47 compute-1 nova_compute[226101]: 2025-12-06 08:09:47.296 226109 DEBUG nova.compute.manager [None req-adc566c4-edee-452c-ac87-23feeeaca925 - - - - - -] [instance: edc85e5c-cfda-452d-85a8-773bfd82fa80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:09:47 compute-1 nova_compute[226101]: 2025-12-06 08:09:47.336 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:47 compute-1 ceph-mon[81689]: pgmap v3511: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.3 KiB/s wr, 110 op/s
Dec 06 08:09:47 compute-1 nova_compute[226101]: 2025-12-06 08:09:47.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:48.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:48 compute-1 ceph-mon[81689]: pgmap v3512: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Dec 06 08:09:48 compute-1 sshd-session[306188]: Connection reset by authenticating user root 45.135.232.92 port 44828 [preauth]
Dec 06 08:09:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:49 compute-1 sudo[306191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:49 compute-1 sudo[306191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:49 compute-1 sudo[306191]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:49 compute-1 podman[306215]: 2025-12-06 08:09:49.287325332 +0000 UTC m=+0.057474239 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 08:09:49 compute-1 podman[306216]: 2025-12-06 08:09:49.28724942 +0000 UTC m=+0.055932888 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 08:09:49 compute-1 sudo[306237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:09:49 compute-1 sudo[306237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:49 compute-1 sudo[306237]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:49 compute-1 sudo[306284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:49 compute-1 sudo[306284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:49 compute-1 sudo[306284]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:49 compute-1 podman[306276]: 2025-12-06 08:09:49.444794288 +0000 UTC m=+0.133625068 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 08:09:49 compute-1 sudo[306325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:09:49 compute-1 sudo[306325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:49 compute-1 sudo[306325]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:50.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:50 compute-1 ceph-mon[81689]: pgmap v3513: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:09:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1999819615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:51 compute-1 sshd-session[306190]: Invalid user user from 45.135.232.92 port 44846
Dec 06 08:09:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:51.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:51 compute-1 sshd-session[306190]: Connection reset by invalid user user 45.135.232.92 port 44846 [preauth]
Dec 06 08:09:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:51 compute-1 nova_compute[226101]: 2025-12-06 08:09:51.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:51 compute-1 nova_compute[226101]: 2025-12-06 08:09:51.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:51 compute-1 nova_compute[226101]: 2025-12-06 08:09:51.896 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1936916397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:09:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:09:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:09:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:09:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:09:52 compute-1 sshd-session[306386]: Received disconnect from 136.112.8.45 port 43952:11: Bye Bye [preauth]
Dec 06 08:09:52 compute-1 sshd-session[306386]: Disconnected from authenticating user root 136.112.8.45 port 43952 [preauth]
Dec 06 08:09:52 compute-1 nova_compute[226101]: 2025-12-06 08:09:52.339 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:52.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:53 compute-1 ceph-mon[81689]: pgmap v3514: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:09:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:53.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:54 compute-1 sshd-session[306388]: Received disconnect from 186.87.166.141 port 56988:11: Bye Bye [preauth]
Dec 06 08:09:54 compute-1 sshd-session[306388]: Disconnected from authenticating user root 186.87.166.141 port 56988 [preauth]
Dec 06 08:09:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:54.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:55 compute-1 ceph-mon[81689]: pgmap v3515: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:09:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:55.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:56.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:56 compute-1 nova_compute[226101]: 2025-12-06 08:09:56.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:56 compute-1 nova_compute[226101]: 2025-12-06 08:09:56.898 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:57 compute-1 ceph-mon[81689]: pgmap v3516: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:09:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3535291815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:57.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:57 compute-1 nova_compute[226101]: 2025-12-06 08:09:57.342 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:57 compute-1 nova_compute[226101]: 2025-12-06 08:09:57.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:57 compute-1 sudo[306390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:57 compute-1 sudo[306390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:57 compute-1 sudo[306390]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:57 compute-1 sudo[306415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:09:57 compute-1 sudo[306415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:57 compute-1 sudo[306415]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:58.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:58 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:58 compute-1 ceph-mon[81689]: pgmap v3517: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:09:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:09:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:59.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:00.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:00 compute-1 nova_compute[226101]: 2025-12-06 08:10:00.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:00 compute-1 ceph-mon[81689]: pgmap v3518: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:10:00 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 08:10:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:01.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:01.688 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:01.689 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:01.689 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:01 compute-1 nova_compute[226101]: 2025-12-06 08:10:01.905 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:02 compute-1 nova_compute[226101]: 2025-12-06 08:10:02.344 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:02.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:02 compute-1 ceph-mon[81689]: pgmap v3519: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:10:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2883649163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:10:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:03.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:10:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3216410970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:04.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:04 compute-1 nova_compute[226101]: 2025-12-06 08:10:04.783 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:04 compute-1 nova_compute[226101]: 2025-12-06 08:10:04.784 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:04 compute-1 nova_compute[226101]: 2025-12-06 08:10:04.801 226109 DEBUG nova.compute.manager [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:10:04 compute-1 ceph-mon[81689]: pgmap v3520: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:10:04 compute-1 nova_compute[226101]: 2025-12-06 08:10:04.879 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:04 compute-1 nova_compute[226101]: 2025-12-06 08:10:04.880 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:04 compute-1 nova_compute[226101]: 2025-12-06 08:10:04.886 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:10:04 compute-1 nova_compute[226101]: 2025-12-06 08:10:04.887 226109 INFO nova.compute.claims [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Claim successful on node compute-1.ctlplane.example.com
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.028 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:05.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:10:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/230461093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.476 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.482 226109 DEBUG nova.compute.provider_tree [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.499 226109 DEBUG nova.scheduler.client.report [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.522 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.522 226109 DEBUG nova.compute.manager [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.588 226109 DEBUG nova.compute.manager [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.588 226109 DEBUG nova.network.neutron [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.614 226109 INFO nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.635 226109 DEBUG nova.compute.manager [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.741 226109 DEBUG nova.compute.manager [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.743 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.743 226109 INFO nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Creating image(s)
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.771 226109 DEBUG nova.storage.rbd_utils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 5f0de650-9c62-4323-9354-1e018f4f06df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.797 226109 DEBUG nova.storage.rbd_utils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 5f0de650-9c62-4323-9354-1e018f4f06df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.827 226109 DEBUG nova.storage.rbd_utils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 5f0de650-9c62-4323-9354-1e018f4f06df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:10:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/230461093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.832 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.867 226109 DEBUG nova.policy [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed2d17026504d70b893923a85cece4d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.917 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.917 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.918 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.918 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.948 226109 DEBUG nova.storage.rbd_utils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 5f0de650-9c62-4323-9354-1e018f4f06df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:10:05 compute-1 nova_compute[226101]: 2025-12-06 08:10:05.953 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5f0de650-9c62-4323-9354-1e018f4f06df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:06 compute-1 nova_compute[226101]: 2025-12-06 08:10:06.257 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5f0de650-9c62-4323-9354-1e018f4f06df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:06 compute-1 nova_compute[226101]: 2025-12-06 08:10:06.329 226109 DEBUG nova.storage.rbd_utils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] resizing rbd image 5f0de650-9c62-4323-9354-1e018f4f06df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:10:06 compute-1 nova_compute[226101]: 2025-12-06 08:10:06.424 226109 DEBUG nova.objects.instance [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid 5f0de650-9c62-4323-9354-1e018f4f06df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:10:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:06 compute-1 nova_compute[226101]: 2025-12-06 08:10:06.438 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:10:06 compute-1 nova_compute[226101]: 2025-12-06 08:10:06.438 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Ensure instance console log exists: /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:10:06 compute-1 nova_compute[226101]: 2025-12-06 08:10:06.439 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:06 compute-1 nova_compute[226101]: 2025-12-06 08:10:06.439 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:06 compute-1 nova_compute[226101]: 2025-12-06 08:10:06.439 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:06.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:06 compute-1 ceph-mon[81689]: pgmap v3521: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Dec 06 08:10:06 compute-1 nova_compute[226101]: 2025-12-06 08:10:06.907 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:06 compute-1 nova_compute[226101]: 2025-12-06 08:10:06.946 226109 DEBUG nova.network.neutron [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Successfully created port: 77949a7e-19a2-4c1f-812b-a7d9452af1e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:10:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:07.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:07 compute-1 nova_compute[226101]: 2025-12-06 08:10:07.345 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:08 compute-1 nova_compute[226101]: 2025-12-06 08:10:08.001 226109 DEBUG nova.network.neutron [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Successfully updated port: 77949a7e-19a2-4c1f-812b-a7d9452af1e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:10:08 compute-1 nova_compute[226101]: 2025-12-06 08:10:08.015 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:10:08 compute-1 nova_compute[226101]: 2025-12-06 08:10:08.015 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:10:08 compute-1 nova_compute[226101]: 2025-12-06 08:10:08.015 226109 DEBUG nova.network.neutron [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:10:08 compute-1 nova_compute[226101]: 2025-12-06 08:10:08.135 226109 DEBUG nova.compute.manager [req-6dbd6e1a-8037-4131-bcf8-d3e6b708d81c req-af071817-1b33-4dc4-bea3-2e8e15d1680b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-changed-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:10:08 compute-1 nova_compute[226101]: 2025-12-06 08:10:08.136 226109 DEBUG nova.compute.manager [req-6dbd6e1a-8037-4131-bcf8-d3e6b708d81c req-af071817-1b33-4dc4-bea3-2e8e15d1680b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Refreshing instance network info cache due to event network-changed-77949a7e-19a2-4c1f-812b-a7d9452af1e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:10:08 compute-1 nova_compute[226101]: 2025-12-06 08:10:08.136 226109 DEBUG oslo_concurrency.lockutils [req-6dbd6e1a-8037-4131-bcf8-d3e6b708d81c req-af071817-1b33-4dc4-bea3-2e8e15d1680b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:10:08 compute-1 nova_compute[226101]: 2025-12-06 08:10:08.223 226109 DEBUG nova.network.neutron [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:10:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:08.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:08 compute-1 ceph-mon[81689]: pgmap v3522: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Dec 06 08:10:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:09.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.679 226109 DEBUG nova.network.neutron [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating instance_info_cache with network_info: [{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.703 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.704 226109 DEBUG nova.compute.manager [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Instance network_info: |[{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.704 226109 DEBUG oslo_concurrency.lockutils [req-6dbd6e1a-8037-4131-bcf8-d3e6b708d81c req-af071817-1b33-4dc4-bea3-2e8e15d1680b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.704 226109 DEBUG nova.network.neutron [req-6dbd6e1a-8037-4131-bcf8-d3e6b708d81c req-af071817-1b33-4dc4-bea3-2e8e15d1680b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Refreshing network info cache for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
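The network_info blob logged at 08:10:09.679 is plain JSON once the surrounding |[...]| markers are stripped. A short sketch pulling out the fields the libvirt driver needs; the literal below is abbreviated to the relevant keys, with values copied from the log:

    # Sketch: extract the VIF fields used later in the trace from the
    # logged network_info JSON (abbreviated to the relevant keys).
    import json

    network_info = json.loads('''[{
        "id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3",
        "address": "fa:16:3e:f6:d1:7b",
        "devname": "tap77949a7e-19",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.12"}]}],
                    "meta": {"mtu": 1442}}}]''')
    vif = network_info[0]
    print(vif["devname"], vif["address"],
          vif["network"]["subnets"][0]["ips"][0]["address"],
          vif["network"]["meta"]["mtu"])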
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.707 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Start _get_guest_xml network_info=[{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.710 226109 WARNING nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.721 226109 DEBUG nova.virt.libvirt.host [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.722 226109 DEBUG nova.virt.libvirt.host [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.729 226109 DEBUG nova.virt.libvirt.host [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.730 226109 DEBUG nova.virt.libvirt.host [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.731 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.731 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.731 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.731 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.732 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.732 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.732 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.732 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.732 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.733 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.733 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.733 226109 DEBUG nova.virt.hardware [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
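The hardware.py lines above walk a topology search: with no flavor or image constraints (all limits default to 65536), nova enumerates (sockets, cores, threads) combinations for the flavor's single vCPU and finds exactly one, 1:1:1. A simplified sketch of that search, not nova's exact implementation:

    # Sketch of the topology enumeration described above: yield every
    # (sockets, cores, threads) whose product equals the vCPU count,
    # capped by the logged limits. Simplified from nova's hardware.py.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # -> [(1, 1, 1)], as logged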
Dec 06 08:10:09 compute-1 nova_compute[226101]: 2025-12-06 08:10:09.736 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1560501208' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:10:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1560501208' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:10:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:10:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1048770933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.150 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.174 226109 DEBUG nova.storage.rbd_utils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 5f0de650-9c62-4323-9354-1e018f4f06df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.178 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:10:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/918454138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.572 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
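The repeated "ceph mon dump --format=json" calls are how nova discovers the monitor addresses that end up as <host> elements in the disk XML further down. A sketch of the same call and parse using only the standard library; it assumes a reachable cluster and the client.openstack keyring implied by "--id openstack":

    # Sketch: the mon-dump call nova issues, parsed with the stdlib.
    # Assumes the cluster and the client.openstack keyring from the log.
    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    mons = json.loads(raw)['mons']
    print([m['name'] for m in mons])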
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.575 226109 DEBUG nova.virt.libvirt.vif [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-522215966',display_name='tempest-TestNetworkAdvancedServerOps-server-522215966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-522215966',id=197,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBGFC8f+teCAymzAzt+sw9RK9rNVgD4R9/B96GYFflUw/uMoFI69FGiUrAFSq23wDgsBI/tn77SEM4/cRaOgXgCYvCjaKqwg+7LU55D+cQXQ9YzNukb2X32Bw/xb16Ugg==',key_name='tempest-TestNetworkAdvancedServerOps-562647975',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-5fgxfuhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:10:05Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=5f0de650-9c62-4323-9354-1e018f4f06df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.576 226109 DEBUG nova.network.os_vif_util [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.578 226109 DEBUG nova.network.os_vif_util [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.579 226109 DEBUG nova.objects.instance [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f0de650-9c62-4323-9354-1e018f4f06df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.603 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:10:10 compute-1 nova_compute[226101]:   <uuid>5f0de650-9c62-4323-9354-1e018f4f06df</uuid>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   <name>instance-000000c5</name>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-522215966</nova:name>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:10:09</nova:creationTime>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <nova:port uuid="77949a7e-19a2-4c1f-812b-a7d9452af1e3">
Dec 06 08:10:10 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <system>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <entry name="serial">5f0de650-9c62-4323-9354-1e018f4f06df</entry>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <entry name="uuid">5f0de650-9c62-4323-9354-1e018f4f06df</entry>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     </system>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   <os>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   </os>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   <features>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   </features>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/5f0de650-9c62-4323-9354-1e018f4f06df_disk">
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       </source>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/5f0de650-9c62-4323-9354-1e018f4f06df_disk.config">
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       </source>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:10:10 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:f6:d1:7b"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <target dev="tap77949a7e-19"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/console.log" append="off"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <video>
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     </video>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:10:10 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:10:10 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:10:10 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:10:10 compute-1 nova_compute[226101]: </domain>
Dec 06 08:10:10 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
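The domain XML above encodes the three monitor endpoints fetched earlier as <host> elements on each rbd-backed disk. A sketch reading them back out with the standard library; it assumes the XML was saved to a local guest.xml, since the trace only logs it:

    # Sketch: recover the rbd pool/image and monitor endpoints from the
    # generated domain XML (assumes it was saved to guest.xml).
    import xml.etree.ElementTree as ET

    tree = ET.parse('guest.xml')
    for disk in tree.findall('./devices/disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            hosts = ['%s:%s' % (h.get('name'), h.get('port'))
                     for h in src.findall('host')]
            print(src.get('name'), hosts)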
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.604 226109 DEBUG nova.compute.manager [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Preparing to wait for external event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.605 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.606 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.606 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.607 226109 DEBUG nova.virt.libvirt.vif [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-522215966',display_name='tempest-TestNetworkAdvancedServerOps-server-522215966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-522215966',id=197,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBGFC8f+teCAymzAzt+sw9RK9rNVgD4R9/B96GYFflUw/uMoFI69FGiUrAFSq23wDgsBI/tn77SEM4/cRaOgXgCYvCjaKqwg+7LU55D+cQXQ9YzNukb2X32Bw/xb16Ugg==',key_name='tempest-TestNetworkAdvancedServerOps-562647975',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-5fgxfuhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:10:05Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=5f0de650-9c62-4323-9354-1e018f4f06df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.607 226109 DEBUG nova.network.os_vif_util [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.608 226109 DEBUG nova.network.os_vif_util [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.609 226109 DEBUG os_vif [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.609 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.610 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.611 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.613 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.614 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77949a7e-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.614 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77949a7e-19, col_values=(('external_ids', {'iface-id': '77949a7e-19a2-4c1f-812b-a7d9452af1e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:d1:7b', 'vm-uuid': '5f0de650-9c62-4323-9354-1e018f4f06df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.649 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:10 compute-1 NetworkManager[49031]: <info>  [1765008610.6500] manager: (tap77949a7e-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.655 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.659 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.659 226109 INFO os_vif [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19')
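The two-command OVSDB transaction at 08:10:10.614 (AddPortCommand followed by DbSetCommand on the Interface row) corresponds to a single ovs-vsctl invocation. A sketch via subprocess, with the port name and external_ids values copied from the logged col_values; os-vif itself drives the same tables over the OVSDB protocol rather than shelling out:

    # Sketch: the ovs-vsctl equivalent of the logged AddPortCommand +
    # DbSetCommand transaction (values copied from the log).
    import subprocess

    subprocess.check_call([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap77949a7e-19',
        '--', 'set', 'Interface', 'tap77949a7e-19',
        'external_ids:iface-id=77949a7e-19a2-4c1f-812b-a7d9452af1e3',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:f6:d1:7b',
        'external_ids:vm-uuid=5f0de650-9c62-4323-9354-1e018f4f06df'])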
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.724 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.724 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.725 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No VIF found with MAC fa:16:3e:f6:d1:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.725 226109 INFO nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Using config drive
Dec 06 08:10:10 compute-1 nova_compute[226101]: 2025-12-06 08:10:10.748 226109 DEBUG nova.storage.rbd_utils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 5f0de650-9c62-4323-9354-1e018f4f06df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:10:10 compute-1 ceph-mon[81689]: pgmap v3523: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Dec 06 08:10:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1048770933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2754319733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/918454138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:11.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:11 compute-1 nova_compute[226101]: 2025-12-06 08:10:11.909 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:12.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:12 compute-1 nova_compute[226101]: 2025-12-06 08:10:12.804 226109 INFO nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Creating config drive at /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/disk.config
Dec 06 08:10:12 compute-1 nova_compute[226101]: 2025-12-06 08:10:12.814 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgcfrpemk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:12 compute-1 nova_compute[226101]: 2025-12-06 08:10:12.852 226109 DEBUG nova.network.neutron [req-6dbd6e1a-8037-4131-bcf8-d3e6b708d81c req-af071817-1b33-4dc4-bea3-2e8e15d1680b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updated VIF entry in instance network info cache for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:10:12 compute-1 nova_compute[226101]: 2025-12-06 08:10:12.853 226109 DEBUG nova.network.neutron [req-6dbd6e1a-8037-4131-bcf8-d3e6b708d81c req-af071817-1b33-4dc4-bea3-2e8e15d1680b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating instance_info_cache with network_info: [{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:10:12 compute-1 ceph-mon[81689]: pgmap v3524: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Dec 06 08:10:12 compute-1 nova_compute[226101]: 2025-12-06 08:10:12.925 226109 DEBUG oslo_concurrency.lockutils [req-6dbd6e1a-8037-4131-bcf8-d3e6b708d81c req-af071817-1b33-4dc4-bea3-2e8e15d1680b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:10:12 compute-1 nova_compute[226101]: 2025-12-06 08:10:12.955 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgcfrpemk" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:12 compute-1 nova_compute[226101]: 2025-12-06 08:10:12.996 226109 DEBUG nova.storage.rbd_utils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 5f0de650-9c62-4323-9354-1e018f4f06df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.001 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/disk.config 5f0de650-9c62-4323-9354-1e018f4f06df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.158 226109 DEBUG oslo_concurrency.processutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/disk.config 5f0de650-9c62-4323-9354-1e018f4f06df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.159 226109 INFO nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Deleting local config drive /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/disk.config because it was imported into RBD.
Dec 06 08:10:13 compute-1 kernel: tap77949a7e-19: entered promiscuous mode
Dec 06 08:10:13 compute-1 NetworkManager[49031]: <info>  [1765008613.2244] manager: (tap77949a7e-19): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Dec 06 08:10:13 compute-1 ovn_controller[130279]: 2025-12-06T08:10:13Z|00809|binding|INFO|Claiming lport 77949a7e-19a2-4c1f-812b-a7d9452af1e3 for this chassis.
Dec 06 08:10:13 compute-1 ovn_controller[130279]: 2025-12-06T08:10:13Z|00810|binding|INFO|77949a7e-19a2-4c1f-812b-a7d9452af1e3: Claiming fa:16:3e:f6:d1:7b 10.100.0.12
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.225 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.228 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.236 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:10:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:13.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:10:13 compute-1 systemd-udevd[306761]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:10:13 compute-1 NetworkManager[49031]: <info>  [1765008613.2763] device (tap77949a7e-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:10:13 compute-1 NetworkManager[49031]: <info>  [1765008613.2776] device (tap77949a7e-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:10:13 compute-1 systemd-machined[190302]: New machine qemu-93-instance-000000c5.
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.294 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:13 compute-1 ovn_controller[130279]: 2025-12-06T08:10:13Z|00811|binding|INFO|Setting lport 77949a7e-19a2-4c1f-812b-a7d9452af1e3 ovn-installed in OVS
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.299 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:13 compute-1 ovn_controller[130279]: 2025-12-06T08:10:13Z|00812|binding|INFO|Setting lport 77949a7e-19a2-4c1f-812b-a7d9452af1e3 up in Southbound
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.310 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:d1:7b 10.100.0.12'], port_security=['fa:16:3e:f6:d1:7b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5f0de650-9c62-4323-9354-1e018f4f06df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2de988d7-fe23-4a58-9aaa-b9003f6854a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ca9842-9e49-4df6-a044-c7e848c3bb8a, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=77949a7e-19a2-4c1f-812b-a7d9452af1e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.311 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 in datapath 69c999d2-fbd1-484e-8d21-c2d6762854f2 bound to our chassis
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.311 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69c999d2-fbd1-484e-8d21-c2d6762854f2
Dec 06 08:10:13 compute-1 systemd[1]: Started Virtual Machine qemu-93-instance-000000c5.
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.322 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae9fdf5-212c-4934-afa4-48e93b229a8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.323 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap69c999d2-f1 in ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.324 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap69c999d2-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.325 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3f0f7b-ee49-4f24-a238-95f2db94ad8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.325 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ed91f9ab-44c6-43ca-9d6a-2e310358e025]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.338 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[6667f55f-3f25-4414-941a-7b0191a53f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.365 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e72477b0-e623-47ea-b221-f8ea8beb51c3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.397 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[6db5554f-d360-4bab-a065-0a775e48716d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.404 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f920b35d-8577-4ad0-adba-302fb9674706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 NetworkManager[49031]: <info>  [1765008613.4053] manager: (tap69c999d2-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/372)
Dec 06 08:10:13 compute-1 systemd-udevd[306764]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.435 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5ccbc4-85f6-4fd5-b672-8198bfd0e0a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.439 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[75e34adc-8fa1-41e1-b1e7-e5857b5ad2d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 NetworkManager[49031]: <info>  [1765008613.4625] device (tap69c999d2-f0): carrier: link connected
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.467 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[a5314b01-f3f2-48a4-8122-38bb81dff675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.485 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5abc65c1-95da-4ff2-9444-a632ea256f02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69c999d2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:6d:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889296, 'reachable_time': 18480, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306799, 'error': None, 'target': 'ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.502 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0563af61-1d48-4583-a281-5b810fab4f7b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:6df6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889296, 'tstamp': 889296}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306800, 'error': None, 'target': 'ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.521 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9795ea3a-2707-4d65-9268-064e6e897ddc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69c999d2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:6d:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889296, 'reachable_time': 18480, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306801, 'error': None, 'target': 'ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.557 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a80cee31-627e-4b6f-8a81-82362fcffd34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.614 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca117ca-d61b-4e0f-add6-1cfdb0761799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.616 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69c999d2-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.616 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.616 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69c999d2-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.618 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:13 compute-1 NetworkManager[49031]: <info>  [1765008613.6187] manager: (tap69c999d2-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Dec 06 08:10:13 compute-1 kernel: tap69c999d2-f0: entered promiscuous mode
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.619 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.621 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69c999d2-f0, col_values=(('external_ids', {'iface-id': '471bbdbd-5382-4586-8184-aed859e8406d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.622 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:13 compute-1 ovn_controller[130279]: 2025-12-06T08:10:13Z|00813|binding|INFO|Releasing lport 471bbdbd-5382-4586-8184-aed859e8406d from this chassis (sb_readonly=0)
Dec 06 08:10:13 compute-1 nova_compute[226101]: 2025-12-06 08:10:13.633 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.634 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/69c999d2-fbd1-484e-8d21-c2d6762854f2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/69c999d2-fbd1-484e-8d21-c2d6762854f2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.635 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef9d28b-286e-41a6-926e-4b93c007a796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.636 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-69c999d2-fbd1-484e-8d21-c2d6762854f2
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/69c999d2-fbd1-484e-8d21-c2d6762854f2.pid.haproxy
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 69c999d2-fbd1-484e-8d21-c2d6762854f2
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:10:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:13.637 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'env', 'PROCESS_TAG=haproxy-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/69c999d2-fbd1-484e-8d21-c2d6762854f2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:10:13 compute-1 podman[306849]: 2025-12-06 08:10:13.991153307 +0000 UTC m=+0.052760514 container create 104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:10:14 compute-1 systemd[1]: Started libpod-conmon-104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a.scope.
Dec 06 08:10:14 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:10:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8d1202d497e470fc25a7bf370c0f93b7da5af2fa3df567ca860a3f0c4ab368/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:10:14 compute-1 podman[306849]: 2025-12-06 08:10:13.964784251 +0000 UTC m=+0.026391478 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.063 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008614.0632162, 5f0de650-9c62-4323-9354-1e018f4f06df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.064 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] VM Started (Lifecycle Event)
Dec 06 08:10:14 compute-1 podman[306849]: 2025-12-06 08:10:14.069306378 +0000 UTC m=+0.130913625 container init 104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:10:14 compute-1 podman[306849]: 2025-12-06 08:10:14.075834313 +0000 UTC m=+0.137441530 container start 104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.089 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.093 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008614.063313, 5f0de650-9c62-4323-9354-1e018f4f06df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.093 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] VM Paused (Lifecycle Event)
Dec 06 08:10:14 compute-1 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[306887]: [NOTICE]   (306893) : New worker (306895) forked
Dec 06 08:10:14 compute-1 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[306887]: [NOTICE]   (306893) : Loading success.
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.117 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.121 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.140 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.359 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:14.362 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:10:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:14.364 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:10:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:14.365 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.520 226109 DEBUG nova.compute.manager [req-9fee4448-6f2f-4a1b-b169-a5011a8b1c75 req-8b9017ae-837c-4450-9672-0a40a114ff1a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:10:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:14.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.522 226109 DEBUG oslo_concurrency.lockutils [req-9fee4448-6f2f-4a1b-b169-a5011a8b1c75 req-8b9017ae-837c-4450-9672-0a40a114ff1a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.522 226109 DEBUG oslo_concurrency.lockutils [req-9fee4448-6f2f-4a1b-b169-a5011a8b1c75 req-8b9017ae-837c-4450-9672-0a40a114ff1a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.522 226109 DEBUG oslo_concurrency.lockutils [req-9fee4448-6f2f-4a1b-b169-a5011a8b1c75 req-8b9017ae-837c-4450-9672-0a40a114ff1a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.523 226109 DEBUG nova.compute.manager [req-9fee4448-6f2f-4a1b-b169-a5011a8b1c75 req-8b9017ae-837c-4450-9672-0a40a114ff1a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Processing event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.524 226109 DEBUG nova.compute.manager [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.526 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008614.5264997, 5f0de650-9c62-4323-9354-1e018f4f06df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.527 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] VM Resumed (Lifecycle Event)
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.529 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.531 226109 INFO nova.virt.libvirt.driver [-] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Instance spawned successfully.
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.531 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.691 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.697 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.698 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.698 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.698 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.699 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.699 226109 DEBUG nova.virt.libvirt.driver [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.703 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.745 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.773 226109 INFO nova.compute.manager [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Took 9.03 seconds to spawn the instance on the hypervisor.
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.773 226109 DEBUG nova.compute.manager [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.836 226109 INFO nova.compute.manager [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Took 9.99 seconds to build instance.
Dec 06 08:10:14 compute-1 nova_compute[226101]: 2025-12-06 08:10:14.854 226109 DEBUG oslo_concurrency.lockutils [None req-a169d3a0-1824-4275-b69e-3d05a587f1cf 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:14 compute-1 ceph-mon[81689]: pgmap v3525: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Dec 06 08:10:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:15.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:15 compute-1 nova_compute[226101]: 2025-12-06 08:10:15.650 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:10:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:16.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:10:16 compute-1 nova_compute[226101]: 2025-12-06 08:10:16.643 226109 DEBUG nova.compute.manager [req-7c91417a-bd4c-4f5e-b370-3e85af2b5c3d req-b41c7193-ce53-4e8d-a969-85d020a0d356 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:10:16 compute-1 nova_compute[226101]: 2025-12-06 08:10:16.643 226109 DEBUG oslo_concurrency.lockutils [req-7c91417a-bd4c-4f5e-b370-3e85af2b5c3d req-b41c7193-ce53-4e8d-a969-85d020a0d356 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:16 compute-1 nova_compute[226101]: 2025-12-06 08:10:16.643 226109 DEBUG oslo_concurrency.lockutils [req-7c91417a-bd4c-4f5e-b370-3e85af2b5c3d req-b41c7193-ce53-4e8d-a969-85d020a0d356 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:16 compute-1 nova_compute[226101]: 2025-12-06 08:10:16.644 226109 DEBUG oslo_concurrency.lockutils [req-7c91417a-bd4c-4f5e-b370-3e85af2b5c3d req-b41c7193-ce53-4e8d-a969-85d020a0d356 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:16 compute-1 nova_compute[226101]: 2025-12-06 08:10:16.644 226109 DEBUG nova.compute.manager [req-7c91417a-bd4c-4f5e-b370-3e85af2b5c3d req-b41c7193-ce53-4e8d-a969-85d020a0d356 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] No waiting events found dispatching network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:10:16 compute-1 nova_compute[226101]: 2025-12-06 08:10:16.644 226109 WARNING nova.compute.manager [req-7c91417a-bd4c-4f5e-b370-3e85af2b5c3d req-b41c7193-ce53-4e8d-a969-85d020a0d356 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received unexpected event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 for instance with vm_state active and task_state None.
Dec 06 08:10:16 compute-1 nova_compute[226101]: 2025-12-06 08:10:16.911 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:16 compute-1 ceph-mon[81689]: pgmap v3526: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Dec 06 08:10:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:17.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
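This three-line radosgw pattern (starting new request / req done / beast access line) repeats roughly every second for the rest of the log: anonymous "HEAD / HTTP/1.0" probes alternating between 192.168.122.100 and 192.168.122.102, which look like load-balancer health checks. A sketch that pulls the useful fields out of a beast access line; the field layout (remote IP, user, timestamp, request, status, bytes, latency) is inferred from the sample lines here, not from radosgw documentation:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous '
            '[06/Dec/2025:08:10:17.246 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group('ip'), m.group('status'), m.group('latency'))
    # -> 192.168.122.102 200 0.000000000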
Dec 06 08:10:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:18.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:19 compute-1 ceph-mon[81689]: pgmap v3527: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 159 op/s
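ceph-mon publishes one of these pgmap summaries every couple of seconds: placement-group states (here all 305 active+clean), data and raw-capacity totals, and current throughput. A sketch that tracks the cluster fill level from these recurring lines; the field layout is inferred from the samples in this log:

    import re

    PGMAP_RE = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; '
        r'(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, '
        r'(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail')

    line = ('pgmap v3527: 305 pgs: 305 active+clean; 167 MiB data, '
            '1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, '
            '1.8 MiB/s wr, 159 op/s')
    m = PGMAP_RE.search(line)
    if m:
        print(m.group('ver'), m.group('used'), 'of', m.group('total'))
    # -> 3527 1.4 GiB of 21 GiB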
Dec 06 08:10:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:19.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:20 compute-1 nova_compute[226101]: 2025-12-06 08:10:20.002 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:20 compute-1 NetworkManager[49031]: <info>  [1765008620.0157] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Dec 06 08:10:20 compute-1 NetworkManager[49031]: <info>  [1765008620.0202] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Dec 06 08:10:20 compute-1 nova_compute[226101]: 2025-12-06 08:10:20.063 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:20 compute-1 ovn_controller[130279]: 2025-12-06T08:10:20Z|00814|binding|INFO|Releasing lport 471bbdbd-5382-4586-8184-aed859e8406d from this chassis (sb_readonly=0)
Dec 06 08:10:20 compute-1 nova_compute[226101]: 2025-12-06 08:10:20.069 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:20 compute-1 podman[306904]: 2025-12-06 08:10:20.08774877 +0000 UTC m=+0.057562852 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:10:20 compute-1 podman[306905]: 2025-12-06 08:10:20.108707302 +0000 UTC m=+0.065188497 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true)
Dec 06 08:10:20 compute-1 podman[306906]: 2025-12-06 08:10:20.152406981 +0000 UTC m=+0.116390667 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
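The three podman events above are periodic container healthchecks firing within ~70 ms of each other; each container's config carries 'healthcheck': {'test': '/openstack/healthcheck'}, and all three report health_status=healthy with a zero failing streak. The same check can be run by hand; a sketch, assuming the podman CLI is on PATH and the container names match this log:

    import subprocess

    for name in ("multipathd", "ovn_metadata_agent", "ovn_controller"):
        # `podman healthcheck run` executes the container's configured
        # test command; exit status 0 means healthy.
        result = subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True, text=True)
        status = "healthy" if result.returncode == 0 else "unhealthy"
        print(f"{name}: {status}")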
Dec 06 08:10:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:20.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:20 compute-1 nova_compute[226101]: 2025-12-06 08:10:20.688 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:21 compute-1 ceph-mon[81689]: pgmap v3528: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 118 op/s
Dec 06 08:10:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:21.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:21 compute-1 nova_compute[226101]: 2025-12-06 08:10:21.913 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:22.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:22 compute-1 nova_compute[226101]: 2025-12-06 08:10:22.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
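The "Running periodic task ComputeManager._..." lines scattered through this log (_sync_scheduler_instance_info here, update_available_resource, _heal_instance_info_cache, _reclaim_queued_deletes, _poll_rescued_instances later) all come from oslo.service's periodic-task machinery. A minimal sketch of how such a task is declared; the 60-second spacing is made up, not Nova's setting:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        # run_immediately=True so the first run_periodic_tasks() call
        # fires the task instead of waiting out the interval
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _sync_scheduler_instance_info(self, context):
            # the real task pushes the instance list to the scheduler
            print("running _sync_scheduler_instance_info")

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)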
Dec 06 08:10:23 compute-1 ceph-mon[81689]: pgmap v3529: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.1 MiB/s wr, 150 op/s
Dec 06 08:10:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:23.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:24.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:24 compute-1 nova_compute[226101]: 2025-12-06 08:10:24.698 226109 DEBUG nova.compute.manager [req-1587bac3-b92f-4b04-af6e-30475b48fd63 req-00799617-aa84-4a72-a27f-05412f47c6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-changed-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:10:24 compute-1 nova_compute[226101]: 2025-12-06 08:10:24.699 226109 DEBUG nova.compute.manager [req-1587bac3-b92f-4b04-af6e-30475b48fd63 req-00799617-aa84-4a72-a27f-05412f47c6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Refreshing instance network info cache due to event network-changed-77949a7e-19a2-4c1f-812b-a7d9452af1e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:10:24 compute-1 nova_compute[226101]: 2025-12-06 08:10:24.700 226109 DEBUG oslo_concurrency.lockutils [req-1587bac3-b92f-4b04-af6e-30475b48fd63 req-00799617-aa84-4a72-a27f-05412f47c6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:10:24 compute-1 nova_compute[226101]: 2025-12-06 08:10:24.700 226109 DEBUG oslo_concurrency.lockutils [req-1587bac3-b92f-4b04-af6e-30475b48fd63 req-00799617-aa84-4a72-a27f-05412f47c6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:10:24 compute-1 nova_compute[226101]: 2025-12-06 08:10:24.701 226109 DEBUG nova.network.neutron [req-1587bac3-b92f-4b04-af6e-30475b48fd63 req-00799617-aa84-4a72-a27f-05412f47c6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Refreshing network info cache for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:10:25 compute-1 ceph-mon[81689]: pgmap v3530: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:10:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:25.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:25 compute-1 nova_compute[226101]: 2025-12-06 08:10:25.692 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:26.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:26 compute-1 nova_compute[226101]: 2025-12-06 08:10:26.956 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:27 compute-1 ceph-mon[81689]: pgmap v3531: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:10:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:27.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:28 compute-1 ovn_controller[130279]: 2025-12-06T08:10:28Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:d1:7b 10.100.0.12
Dec 06 08:10:28 compute-1 ovn_controller[130279]: 2025-12-06T08:10:28Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:d1:7b 10.100.0.12
Dec 06 08:10:28 compute-1 ovn_controller[130279]: 2025-12-06T08:10:28Z|00815|binding|INFO|Releasing lport 471bbdbd-5382-4586-8184-aed859e8406d from this chassis (sb_readonly=0)
Dec 06 08:10:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:28.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:28 compute-1 nova_compute[226101]: 2025-12-06 08:10:28.544 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:28 compute-1 nova_compute[226101]: 2025-12-06 08:10:28.633 226109 DEBUG nova.network.neutron [req-1587bac3-b92f-4b04-af6e-30475b48fd63 req-00799617-aa84-4a72-a27f-05412f47c6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updated VIF entry in instance network info cache for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:10:28 compute-1 nova_compute[226101]: 2025-12-06 08:10:28.634 226109 DEBUG nova.network.neutron [req-1587bac3-b92f-4b04-af6e-30475b48fd63 req-00799617-aa84-4a72-a27f-05412f47c6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating instance_info_cache with network_info: [{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:10:28 compute-1 nova_compute[226101]: 2025-12-06 08:10:28.660 226109 DEBUG oslo_concurrency.lockutils [req-1587bac3-b92f-4b04-af6e-30475b48fd63 req-00799617-aa84-4a72-a27f-05412f47c6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
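The cache entry written above records a single OVN-bound OVS port on br-int with fixed IP 10.100.0.12 and floating IP 192.168.122.176 behind gateway 10.100.0.1. A sketch walking that instance_info_cache network_info structure (abbreviated to the fields used) to list the port's addresses:

    # Data abbreviated from the "Updating instance_info_cache" entry above.
    network_info = [{
        "id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.12", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.176",
                                       "type": "floating"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["type"], ip["address"])
                for fip in ip.get("floating_ips", []):
                    print(vif["id"], fip["type"], fip["address"])
    # -> fixed 10.100.0.12, then floating 192.168.122.176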
Dec 06 08:10:29 compute-1 ceph-mon[81689]: pgmap v3532: 305 pgs: 305 active+clean; 170 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 226 KiB/s wr, 59 op/s
Dec 06 08:10:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:29.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:30.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:30 compute-1 nova_compute[226101]: 2025-12-06 08:10:30.697 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:31 compute-1 sshd-session[306966]: Received disconnect from 45.120.216.232 port 40072:11: Bye Bye [preauth]
Dec 06 08:10:31 compute-1 sshd-session[306966]: Disconnected from authenticating user root 45.120.216.232 port 40072 [preauth]
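Preauth disconnect pairs like this one recur through the log from unrelated internet addresses (45.120.216.232, 186.96.151.198, 101.100.194.199, 14.225.3.79, 124.18.141.70), all probing the root account: routine ssh brute-force noise rather than anything cluster-related. A sketch tallying offenders from a saved journal extract; the file name is hypothetical:

    import re
    from collections import Counter

    pattern = re.compile(
        r'Disconnected from authenticating user (\S+) (\S+) port \d+')
    ips = Counter()
    with open("compute-1.log") as fh:   # hypothetical journal extract
        for line in fh:
            m = pattern.search(line)
            if m:
                ips[m.group(2)] += 1    # group 2 is the source address
    for ip, count in ips.most_common(10):
        print(ip, count)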
Dec 06 08:10:31 compute-1 ceph-mon[81689]: pgmap v3533: 305 pgs: 305 active+clean; 170 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 994 KiB/s rd, 226 KiB/s wr, 35 op/s
Dec 06 08:10:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:31.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:31 compute-1 nova_compute[226101]: 2025-12-06 08:10:31.958 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:32.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:33.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:33 compute-1 sshd-session[306968]: Received disconnect from 186.96.151.198 port 59798:11: Bye Bye [preauth]
Dec 06 08:10:33 compute-1 sshd-session[306968]: Disconnected from authenticating user root 186.96.151.198 port 59798 [preauth]
Dec 06 08:10:33 compute-1 ceph-mon[81689]: pgmap v3534: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:10:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:34.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:34 compute-1 ceph-mon[81689]: pgmap v3535: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 06 08:10:35 compute-1 nova_compute[226101]: 2025-12-06 08:10:35.195 226109 INFO nova.compute.manager [None req-f790afa8-0c3f-45e1-81e9-6a4b1fac3171 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Get console output
Dec 06 08:10:35 compute-1 nova_compute[226101]: 2025-12-06 08:10:35.201 269846 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
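The ignored error above is Python's TypeError text for appending None to a bytes buffer: a console-pty read came back empty and the reader logged and skipped it rather than failing the "Get console output" request. An illustration of that tolerated failure mode (not Nova's actual code):

    def read_console(chunks):
        data = b""
        for chunk in chunks:
            try:
                data += chunk   # raises TypeError when chunk is None
            except TypeError as exc:
                # matches the log: "can't concat NoneType to bytes"
                print(f"Ignored error while reading console pty: {exc}")
        return data

    print(read_console([b"login: ", None, b"\n"]))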
Dec 06 08:10:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:35.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:35 compute-1 nova_compute[226101]: 2025-12-06 08:10:35.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:35 compute-1 nova_compute[226101]: 2025-12-06 08:10:35.642 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:35 compute-1 nova_compute[226101]: 2025-12-06 08:10:35.643 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:35 compute-1 nova_compute[226101]: 2025-12-06 08:10:35.643 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
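The Acquiring/acquired/"released" triplets with waited/held timings, here and throughout this log, are oslo.concurrency's standard lock tracing. A sketch of the two usual ways to take such a lock; both default to in-process locks:

    from oslo_concurrency import lockutils

    # decorator form, as used by the resource tracker methods above
    @lockutils.synchronized("compute_resources")
    def update_resources():
        print("holding compute_resources")

    update_resources()

    # context-manager form, as used for the refresh_cache-<uuid> locks
    with lockutils.lock("compute_resources"):
        print("holding compute_resources again")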
Dec 06 08:10:35 compute-1 nova_compute[226101]: 2025-12-06 08:10:35.643 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:10:35 compute-1 nova_compute[226101]: 2025-12-06 08:10:35.643 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:35 compute-1 nova_compute[226101]: 2025-12-06 08:10:35.702 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:10:36 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1969955887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.091 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
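The resource audit shells out to `ceph df --format=json` exactly as logged (0.448 s here, dispatched to the co-located mon as the handle_command lines show). A sketch running the same command and reading cluster totals from the JSON; the "stats" field names are standard `ceph df` JSON output:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print("total: %.1f GiB, avail: %.1f GiB" %
          (stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib))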
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.330 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.331 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.332 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.487 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.488 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4066MB free_disk=20.942764282226562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
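The hypervisor view above inventories eleven PCI functions on this KVM guest; vendor_id 1af4 is Red Hat/virtio, 8086 is Intel chipset emulation, and every device reports numa_node null. A sketch filtering that list (abbreviated from the log entry) for virtio devices:

    pci_devices = [
        {"address": "0000:00:03.0", "vendor_id": "1af4", "product_id": "1000"},
        {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"},
        {"address": "0000:00:01.1", "vendor_id": "8086", "product_id": "7010"},
    ]  # abbreviated from the resource view above

    virtio = [d["address"] for d in pci_devices if d["vendor_id"] == "1af4"]
    print(virtio)   # ['0000:00:03.0', '0000:00:04.0']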
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.488 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.489 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:36.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.793 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 5f0de650-9c62-4323-9354-1e018f4f06df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.794 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.794 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.878 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:36 compute-1 nova_compute[226101]: 2025-12-06 08:10:36.959 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:37 compute-1 ceph-mon[81689]: pgmap v3536: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 06 08:10:37 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1969955887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:37.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:10:37 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3610654472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:37 compute-1 nova_compute[226101]: 2025-12-06 08:10:37.388 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:37 compute-1 nova_compute[226101]: 2025-12-06 08:10:37.396 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:10:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3610654472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:38.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:38 compute-1 nova_compute[226101]: 2025-12-06 08:10:38.873 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
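The inventory above also explains the earlier resource views: schedulable capacity per resource class is (total - reserved) * allocation_ratio, and used_ram=640MB in the final view is the 512 MB reservation plus the one instance's 128 MB allocation. A worked sketch using the numbers from this log:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1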
Dec 06 08:10:39 compute-1 sshd[164848]: Timeout before authentication for connection from 14.103.118.136 to 38.102.83.204, pid = 304953
Dec 06 08:10:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:39.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:39 compute-1 ceph-mon[81689]: pgmap v3537: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 06 08:10:39 compute-1 nova_compute[226101]: 2025-12-06 08:10:39.563 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:10:39 compute-1 nova_compute[226101]: 2025-12-06 08:10:39.564 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:40 compute-1 sshd-session[307015]: Received disconnect from 101.100.194.199 port 59986:11: Bye Bye [preauth]
Dec 06 08:10:40 compute-1 sshd-session[307015]: Disconnected from authenticating user root 101.100.194.199 port 59986 [preauth]
Dec 06 08:10:40 compute-1 ceph-mon[81689]: pgmap v3538: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Dec 06 08:10:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:40.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:40 compute-1 nova_compute[226101]: 2025-12-06 08:10:40.705 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:41 compute-1 sshd-session[307017]: Received disconnect from 14.225.3.79 port 49780:11: Bye Bye [preauth]
Dec 06 08:10:41 compute-1 sshd-session[307017]: Disconnected from authenticating user root 14.225.3.79 port 49780 [preauth]
Dec 06 08:10:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:41.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:41 compute-1 nova_compute[226101]: 2025-12-06 08:10:41.962 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:42.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:42 compute-1 nova_compute[226101]: 2025-12-06 08:10:42.565 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:42 compute-1 nova_compute[226101]: 2025-12-06 08:10:42.565 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:10:42 compute-1 nova_compute[226101]: 2025-12-06 08:10:42.566 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:10:42 compute-1 ceph-mon[81689]: pgmap v3539: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Dec 06 08:10:42 compute-1 sshd-session[307019]: Received disconnect from 124.18.141.70 port 38408:11: Bye Bye [preauth]
Dec 06 08:10:42 compute-1 sshd-session[307019]: Disconnected from authenticating user root 124.18.141.70 port 38408 [preauth]
Dec 06 08:10:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:43.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:43 compute-1 nova_compute[226101]: 2025-12-06 08:10:43.998 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:10:43 compute-1 nova_compute[226101]: 2025-12-06 08:10:43.998 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:10:43 compute-1 nova_compute[226101]: 2025-12-06 08:10:43.999 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:10:43 compute-1 nova_compute[226101]: 2025-12-06 08:10:43.999 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5f0de650-9c62-4323-9354-1e018f4f06df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:10:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:44.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:44 compute-1 ceph-mon[81689]: pgmap v3540: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Dec 06 08:10:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2941166648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:45.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:45 compute-1 nova_compute[226101]: 2025-12-06 08:10:45.709 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:46 compute-1 nova_compute[226101]: 2025-12-06 08:10:46.187 226109 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:10:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:46.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:46 compute-1 nova_compute[226101]: 2025-12-06 08:10:46.843 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating instance_info_cache with network_info: [{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:10:46 compute-1 nova_compute[226101]: 2025-12-06 08:10:46.863 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:10:46 compute-1 nova_compute[226101]: 2025-12-06 08:10:46.863 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:10:46 compute-1 nova_compute[226101]: 2025-12-06 08:10:46.864 226109 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:10:46 compute-1 nova_compute[226101]: 2025-12-06 08:10:46.864 226109 DEBUG nova.network.neutron [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:10:47 compute-1 nova_compute[226101]: 2025-12-06 08:10:47.007 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:47.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:47 compute-1 ceph-mon[81689]: pgmap v3541: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Dec 06 08:10:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3041411772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2452691386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:47 compute-1 nova_compute[226101]: 2025-12-06 08:10:47.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:47 compute-1 nova_compute[226101]: 2025-12-06 08:10:47.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:10:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:48.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:48 compute-1 nova_compute[226101]: 2025-12-06 08:10:48.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:49 compute-1 nova_compute[226101]: 2025-12-06 08:10:49.055 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:49.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:49 compute-1 ceph-mon[81689]: pgmap v3542: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 KiB/s wr, 0 op/s
Dec 06 08:10:49 compute-1 sshd[164848]: drop connection #0 from [154.209.4.183]:37408 on [38.102.83.204]:22 penalty: exceeded LoginGraceTime
Dec 06 08:10:49 compute-1 nova_compute[226101]: 2025-12-06 08:10:49.846 226109 DEBUG nova.network.neutron [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating instance_info_cache with network_info: [{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:10:49 compute-1 nova_compute[226101]: 2025-12-06 08:10:49.888 226109 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:10:50 compute-1 nova_compute[226101]: 2025-12-06 08:10:50.063 226109 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 08:10:50 compute-1 nova_compute[226101]: 2025-12-06 08:10:50.063 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Creating file /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/50707ae67b2641578789b98cf38b4b32.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Dec 06 08:10:50 compute-1 nova_compute[226101]: 2025-12-06 08:10:50.064 226109 DEBUG oslo_concurrency.processutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/50707ae67b2641578789b98cf38b4b32.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:50 compute-1 nova_compute[226101]: 2025-12-06 08:10:50.475 226109 DEBUG oslo_concurrency.processutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/50707ae67b2641578789b98cf38b4b32.tmp" returned: 1 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:50 compute-1 nova_compute[226101]: 2025-12-06 08:10:50.476 226109 DEBUG oslo_concurrency.processutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df/50707ae67b2641578789b98cf38b4b32.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 08:10:50 compute-1 nova_compute[226101]: 2025-12-06 08:10:50.477 226109 DEBUG nova.virt.libvirt.volume.remotefs [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Creating directory /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Dec 06 08:10:50 compute-1 nova_compute[226101]: 2025-12-06 08:10:50.478 226109 DEBUG oslo_concurrency.processutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:50.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:50 compute-1 nova_compute[226101]: 2025-12-06 08:10:50.744 226109 DEBUG oslo_concurrency.processutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
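
The touch/mkdir exchange above is the migrate_disk_and_power_off pre-flight visible in nova.virt.libvirt.volume.remotefs: a marker file is touched on the destination over ssh, the non-zero exit (the instance directory does not exist there yet) triggers the create_dir fallback, and mkdir -p then succeeds. A minimal sketch of that probe-then-create pattern, mirroring the exact commands logged (illustrative only, not nova's code):

    import subprocess
    import uuid

    DEST = "192.168.122.102"
    INST_DIR = "/var/lib/nova/instances/5f0de650-9c62-4323-9354-1e018f4f06df"

    def ssh(*args: str) -> int:
        # BatchMode=yes matches the logged commands: fail fast instead of
        # prompting for a password.
        return subprocess.run(("ssh", "-o", "BatchMode=yes", DEST) + args).returncode

    marker = f"{INST_DIR}/{uuid.uuid4().hex}.tmp"
    if ssh("touch", marker) != 0:
        # rc=1 as in the log: the directory is missing on the destination,
        # so create it before retrying any file operations there.
        ssh("mkdir", "-p", INST_DIR)
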
Dec 06 08:10:50 compute-1 nova_compute[226101]: 2025-12-06 08:10:50.745 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:50 compute-1 nova_compute[226101]: 2025-12-06 08:10:50.748 226109 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
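
The _clean_shutdown entry (state 1 is libvirt's "running") begins a graceful ACPI shutdown that the driver then polls; the "Instance shutdown successfully after 3 seconds" line further down reports the poll result. A minimal sketch of that request-and-poll shape with libvirt-python (assumed available on the host; nova's real loop adds retries and a configurable timeout):

    import time
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-000000c5")
    dom.shutdown()  # ACPI shutdown request; returns immediately
    for waited in range(1, 61):
        time.sleep(1)
        state, _reason = dom.state()
        if state == libvirt.VIR_DOMAIN_SHUTOFF:
            print(f"instance shut down after {waited} seconds")
            break
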
Dec 06 08:10:51 compute-1 podman[307023]: 2025-12-06 08:10:51.081401382 +0000 UTC m=+0.066112441 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:10:51 compute-1 podman[307024]: 2025-12-06 08:10:51.093209158 +0000 UTC m=+0.066005838 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 08:10:51 compute-1 podman[307025]: 2025-12-06 08:10:51.139141357 +0000 UTC m=+0.116660163 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 08:10:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:51.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:51 compute-1 ceph-mon[81689]: pgmap v3543: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 KiB/s wr, 0 op/s
Dec 06 08:10:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:51 compute-1 nova_compute[226101]: 2025-12-06 08:10:51.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:51 compute-1 nova_compute[226101]: 2025-12-06 08:10:51.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:52 compute-1 nova_compute[226101]: 2025-12-06 08:10:52.008 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4129445202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2490917646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
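
The two "df" dispatches above are cluster-capacity queries arriving under the client.openstack key; OpenStack's RBD-backed drivers poll usage this way. The same query can be reproduced from any node with a suitable keyring; a sketch, assuming a default admin environment:

    import json
    import subprocess

    raw = subprocess.run(["ceph", "df", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    df = json.loads(raw)
    # Matches the pgmap lines nearby: ~21 GiB total, ~20 GiB available.
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
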
Dec 06 08:10:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:52.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
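
The radosgw triplets recurring throughout this window (anonymous "HEAD / HTTP/1.0" from 192.168.122.100/.102, answered 200 with an empty body) have the shape of load-balancer health probes rather than user traffic. An equivalent probe as a sketch; the port is an assumption here, since the log does not record which port beast is serving:

    import http.client

    # Port 8080 is an assumption (a common beast frontend port); adjust to
    # the deployment's rgw frontend. http.client speaks HTTP/1.1 rather than
    # the probes' HTTP/1.0, which makes no difference for a HEAD /.
    conn = http.client.HTTPConnection("192.168.122.102", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200 with an empty body, as logged
    conn.close()
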
Dec 06 08:10:53 compute-1 kernel: tap77949a7e-19 (unregistering): left promiscuous mode
Dec 06 08:10:53 compute-1 NetworkManager[49031]: <info>  [1765008653.0661] device (tap77949a7e-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:10:53 compute-1 ovn_controller[130279]: 2025-12-06T08:10:53Z|00816|binding|INFO|Releasing lport 77949a7e-19a2-4c1f-812b-a7d9452af1e3 from this chassis (sb_readonly=0)
Dec 06 08:10:53 compute-1 ovn_controller[130279]: 2025-12-06T08:10:53Z|00817|binding|INFO|Setting lport 77949a7e-19a2-4c1f-812b-a7d9452af1e3 down in Southbound
Dec 06 08:10:53 compute-1 ovn_controller[130279]: 2025-12-06T08:10:53Z|00818|binding|INFO|Removing iface tap77949a7e-19 ovn-installed in OVS
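
These three binding|INFO lines are ovn-controller reacting to the tap device going away: the logical port is released from this chassis, marked down in the Southbound database, and its ovn-installed marker is removed from the OVS interface. The resulting Southbound state can be checked directly; a diagnostic sketch (requires ovn-sbctl and access to the Southbound DB):

    import subprocess

    LPORT = "77949a7e-19a2-4c1f-812b-a7d9452af1e3"
    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=up,chassis", "find",
         "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True).stdout
    # After the release above: up=false and an empty chassis column (the
    # PortBindingUpdatedEvent a few lines down shows the same transition:
    # up [True] -> [False], chassis cleared).
    print(out)
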
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.076 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.079 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.102 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:53 compute-1 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c5.scope: Deactivated successfully.
Dec 06 08:10:53 compute-1 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c5.scope: Consumed 15.149s CPU time.
Dec 06 08:10:53 compute-1 systemd-machined[190302]: Machine qemu-93-instance-000000c5 terminated.
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.163 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:d1:7b 10.100.0.12'], port_security=['fa:16:3e:f6:d1:7b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5f0de650-9c62-4323-9354-1e018f4f06df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2de988d7-fe23-4a58-9aaa-b9003f6854a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ca9842-9e49-4df6-a044-c7e848c3bb8a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=77949a7e-19a2-4c1f-812b-a7d9452af1e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.164 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 in datapath 69c999d2-fbd1-484e-8d21-c2d6762854f2 unbound from our chassis
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.165 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69c999d2-fbd1-484e-8d21-c2d6762854f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.166 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d577dbb4-1835-439d-aeb3-9707e2bc15e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.167 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2 namespace which is not needed anymore
Dec 06 08:10:53 compute-1 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[306887]: [NOTICE]   (306893) : haproxy version is 2.8.14-c23fe91
Dec 06 08:10:53 compute-1 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[306887]: [NOTICE]   (306893) : path to executable is /usr/sbin/haproxy
Dec 06 08:10:53 compute-1 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[306887]: [WARNING]  (306893) : Exiting Master process...
Dec 06 08:10:53 compute-1 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[306887]: [ALERT]    (306893) : Current worker (306895) exited with code 143 (Terminated)
Dec 06 08:10:53 compute-1 neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2[306887]: [WARNING]  (306893) : All workers exited. Exiting... (0)
Dec 06 08:10:53 compute-1 systemd[1]: libpod-104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a.scope: Deactivated successfully.
Dec 06 08:10:53 compute-1 conmon[306887]: conmon 104c765ad7d57eeeee5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a.scope/container/memory.events
Dec 06 08:10:53 compute-1 podman[307107]: 2025-12-06 08:10:53.294457881 +0000 UTC m=+0.046613689 container died 104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 08:10:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:53.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:53 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a-userdata-shm.mount: Deactivated successfully.
Dec 06 08:10:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-fc8d1202d497e470fc25a7bf370c0f93b7da5af2fa3df567ca860a3f0c4ab368-merged.mount: Deactivated successfully.
Dec 06 08:10:53 compute-1 podman[307107]: 2025-12-06 08:10:53.331154504 +0000 UTC m=+0.083310312 container cleanup 104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 08:10:53 compute-1 systemd[1]: libpod-conmon-104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a.scope: Deactivated successfully.
Dec 06 08:10:53 compute-1 podman[307149]: 2025-12-06 08:10:53.38670018 +0000 UTC m=+0.037196196 container remove 104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.391 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[bdcd44b2-b8d3-44fc-b2fa-f1937ec4af3f]: (4, ('Sat Dec  6 08:10:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2 (104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a)\n104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a\nSat Dec  6 08:10:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2 (104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a)\n104c765ad7d57eeeee5ea8a2320ae6cd86e3b3857175eee1f4ad4cd73433cd2a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.394 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9b75b27b-7f2c-4e6c-8f54-e32955bf0057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.395 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69c999d2-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.397 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:53 compute-1 kernel: tap69c999d2-f0: left promiscuous mode
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.412 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.414 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6969c2-e811-4ca2-92e9-3c89191b1ec1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.439 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7e6c89-f8e7-4df0-aba6-25d7bab18ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.441 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4476e253-7c60-41eb-b2a1-18c8ad1ec0d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:53 compute-1 ceph-mon[81689]: pgmap v3544: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.453 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b337d05e-6ab1-4b91-8179-23f53a46e8d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889289, 'reachable_time': 42891, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307167, 'error': None, 'target': 'ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.455 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:10:53 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:10:53.455 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2cc5ec-b7b0-4ea3-a979-ca25086889d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:10:53 compute-1 systemd[1]: run-netns-ovnmeta\x2d69c999d2\x2dfbd1\x2d484e\x2d8d21\x2dc2d6762854f2.mount: Deactivated successfully.
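
With the last VIF on datapath 69c999d2 gone from this chassis, the metadata agent has now torn everything down in order: stopped and removed the haproxy sidecar container, deleted tap69c999d2-f0 from OVS, and removed the ovnmeta- namespace. A quick verification sketch (needs the same privileges the agent uses):

    import subprocess

    NS = "ovnmeta-69c999d2-fbd1-484e-8d21-c2d6762854f2"
    listing = subprocess.run(["ip", "netns", "list"],
                             capture_output=True, text=True).stdout
    print(NS not in listing)  # True once the cleanup above has completed
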
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.765 226109 INFO nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Instance shutdown successfully after 3 seconds.
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.772 226109 INFO nova.virt.libvirt.driver [-] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Instance destroyed successfully.
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.773 226109 DEBUG nova.virt.libvirt.vif [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-522215966',display_name='tempest-TestNetworkAdvancedServerOps-server-522215966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-522215966',id=197,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBGFC8f+teCAymzAzt+sw9RK9rNVgD4R9/B96GYFflUw/uMoFI69FGiUrAFSq23wDgsBI/tn77SEM4/cRaOgXgCYvCjaKqwg+7LU55D+cQXQ9YzNukb2X32Bw/xb16Ugg==',key_name='tempest-TestNetworkAdvancedServerOps-562647975',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:10:14Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-5fgxfuhj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:10:45Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=5f0de650-9c62-4323-9354-1e018f4f06df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1487736673", "vif_mac": "fa:16:3e:f6:d1:7b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.773 226109 DEBUG nova.network.os_vif_util [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1487736673", "vif_mac": "fa:16:3e:f6:d1:7b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.774 226109 DEBUG nova.network.os_vif_util [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.774 226109 DEBUG os_vif [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.776 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.776 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77949a7e-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.778 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.779 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.782 226109 INFO os_vif [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19')
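
The DelPortCommand/unplug sequence above is os-vif removing the instance's tap from the integration bridge through ovsdbapp; if_exists=True makes the deletion idempotent. The CLI equivalent, wrapped in a sketch for completeness:

    import subprocess

    # Same effect as DelPortCommand(port=tap77949a7e-19, bridge=br-int,
    # if_exists=True): --if-exists turns a missing port into a no-op.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap77949a7e-19"],
        check=True)
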
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.786 226109 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:10:53 compute-1 nova_compute[226101]: 2025-12-06 08:10:53.787 226109 DEBUG nova.virt.libvirt.driver [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:10:54 compute-1 nova_compute[226101]: 2025-12-06 08:10:54.024 226109 DEBUG neutronclient.v2_0.client [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
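
The PortBindingNotFound fault is a 404-class NeutronError returned while nova manipulates the port's binding for the destination host during the resize; no binding exists for compute-2 at this point, so the client logs the fault and the operation continues. A sketch of branching on that payload (the JSON shape is taken verbatim from the line above):

    import json

    body = json.loads(
        '{"NeutronError": {"type": "PortBindingNotFound", '
        '"message": "Binding for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 '
        'for host compute-2.ctlplane.example.com could not be found.", '
        '"detail": ""}}')
    err = body["NeutronError"]
    if err["type"] == "PortBindingNotFound":
        # Benign during resize/migration: treat the binding as already absent.
        pass
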
Dec 06 08:10:54 compute-1 nova_compute[226101]: 2025-12-06 08:10:54.148 226109 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:54 compute-1 nova_compute[226101]: 2025-12-06 08:10:54.149 226109 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:54 compute-1 nova_compute[226101]: 2025-12-06 08:10:54.149 226109 DEBUG oslo_concurrency.lockutils [None req-ed1544f6-01a9-4295-be8c-f51c732b91cc 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.396023) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654396068, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1925, "num_deletes": 258, "total_data_size": 4545311, "memory_usage": 4590648, "flush_reason": "Manual Compaction"}
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654412475, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 2977267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75498, "largest_seqno": 77418, "table_properties": {"data_size": 2969422, "index_size": 4659, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16291, "raw_average_key_size": 19, "raw_value_size": 2953709, "raw_average_value_size": 3602, "num_data_blocks": 205, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008480, "oldest_key_time": 1765008480, "file_creation_time": 1765008654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 16479 microseconds, and 5592 cpu microseconds.
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.412505) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 2977267 bytes OK
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.412519) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.413938) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.413955) EVENT_LOG_v1 {"time_micros": 1765008654413951, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.413971) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 4536666, prev total WAL file size 4536666, number of live WAL files 2.
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.414931) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373737' seq:72057594037927935, type:22 .. '6C6F676D0033303331' seq:0, type:0; will stop at (end)
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(2907KB)], [153(10MB)]
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654414987, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 13894601, "oldest_snapshot_seqno": -1}
Dec 06 08:10:54 compute-1 nova_compute[226101]: 2025-12-06 08:10:54.485 226109 DEBUG nova.compute.manager [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-unplugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:10:54 compute-1 nova_compute[226101]: 2025-12-06 08:10:54.486 226109 DEBUG oslo_concurrency.lockutils [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:54 compute-1 nova_compute[226101]: 2025-12-06 08:10:54.486 226109 DEBUG oslo_concurrency.lockutils [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:54 compute-1 nova_compute[226101]: 2025-12-06 08:10:54.486 226109 DEBUG oslo_concurrency.lockutils [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:54 compute-1 nova_compute[226101]: 2025-12-06 08:10:54.486 226109 DEBUG nova.compute.manager [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] No waiting events found dispatching network-vif-unplugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:10:54 compute-1 nova_compute[226101]: 2025-12-06 08:10:54.487 226109 WARNING nova.compute.manager [req-102e1324-4ac5-459d-babc-34421d02f335 req-6abffd97-c2ca-451e-a648-4aacc2f0437d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received unexpected event network-vif-unplugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 for instance with vm_state active and task_state resize_migrating.
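
The "No waiting events found" / "Received unexpected event" pair shows nova's external-event dispatch: neutron sent network-vif-unplugged, but no code path had registered a waiter for it (during resize the unplug is driven by nova itself), so the event is logged at WARNING and dropped. A toy sketch of that registry shape, with hypothetical names (prepare/dispatch are not nova's API):

    import threading

    _waiters: dict[str, threading.Event] = {}
    _lock = threading.Lock()

    def prepare(tag: str) -> threading.Event:
        # A code path expecting an event registers interest first.
        with _lock:
            ev = _waiters[tag] = threading.Event()
        return ev

    def dispatch(tag: str) -> None:
        with _lock:
            ev = _waiters.pop(tag, None)
        if ev is None:
            print(f"unexpected event {tag}")  # nova logs a WARNING here
        else:
            ev.set()  # wakes whoever is blocked on the Event from prepare()
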
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 10624 keys, 13755228 bytes, temperature: kUnknown
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654523356, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 13755228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13687221, "index_size": 40374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 280618, "raw_average_key_size": 26, "raw_value_size": 13501691, "raw_average_value_size": 1270, "num_data_blocks": 1537, "num_entries": 10624, "num_filter_entries": 10624, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765008654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.523625) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 13755228 bytes
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.525840) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.1 rd, 126.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 10.4 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(9.3) write-amplify(4.6) OK, records in: 11153, records dropped: 529 output_compression: NoCompression
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.525879) EVENT_LOG_v1 {"time_micros": 1765008654525862, "job": 98, "event": "compaction_finished", "compaction_time_micros": 108453, "compaction_time_cpu_micros": 41551, "output_level": 6, "num_output_files": 1, "total_output_size": 13755228, "num_input_records": 11153, "num_output_records": 10624, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
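
JOB 98's amplification figures follow directly from the byte counts in the surrounding events: table #155 (the flushed memtable) is the 2,977,267-byte L0 input, input_data_size covers the L0 plus L6 inputs read, and total_output_size is the new L6 file. Reproduced as arithmetic:

    l0_in = 2_977_267       # flushed L0 table #155
    total_in = 13_894_601   # input_data_size: L0 + L6 inputs read
    out = 13_755_228        # total_output_size written to L6
    write_amp = out / l0_in                # 4.62 -> logged as write-amplify(4.6)
    rw_amp = (total_in + out) / l0_in      # 9.29 -> logged as read-write-amplify(9.3)
    print(f"{write_amp:.1f} {rw_amp:.1f}")
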
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654527850, "job": 98, "event": "table_file_deletion", "file_number": 155}
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654532022, "job": 98, "event": "table_file_deletion", "file_number": 153}
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.414831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.532363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.532372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.532375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.532378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:10:54.532381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:10:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:54.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:10:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:55.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:55 compute-1 ceph-mon[81689]: pgmap v3545: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Dec 06 08:10:55 compute-1 nova_compute[226101]: 2025-12-06 08:10:55.644 226109 DEBUG nova.compute.manager [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-changed-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:10:55 compute-1 nova_compute[226101]: 2025-12-06 08:10:55.644 226109 DEBUG nova.compute.manager [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Refreshing instance network info cache due to event network-changed-77949a7e-19a2-4c1f-812b-a7d9452af1e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:10:55 compute-1 nova_compute[226101]: 2025-12-06 08:10:55.645 226109 DEBUG oslo_concurrency.lockutils [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:10:55 compute-1 nova_compute[226101]: 2025-12-06 08:10:55.645 226109 DEBUG oslo_concurrency.lockutils [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:10:55 compute-1 nova_compute[226101]: 2025-12-06 08:10:55.645 226109 DEBUG nova.network.neutron [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Refreshing network info cache for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:10:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:56.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:56 compute-1 nova_compute[226101]: 2025-12-06 08:10:56.649 226109 DEBUG nova.compute.manager [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:10:56 compute-1 nova_compute[226101]: 2025-12-06 08:10:56.650 226109 DEBUG oslo_concurrency.lockutils [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:56 compute-1 nova_compute[226101]: 2025-12-06 08:10:56.650 226109 DEBUG oslo_concurrency.lockutils [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:56 compute-1 nova_compute[226101]: 2025-12-06 08:10:56.650 226109 DEBUG oslo_concurrency.lockutils [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:56 compute-1 nova_compute[226101]: 2025-12-06 08:10:56.650 226109 DEBUG nova.compute.manager [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] No waiting events found dispatching network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:10:56 compute-1 nova_compute[226101]: 2025-12-06 08:10:56.650 226109 WARNING nova.compute.manager [req-2f19a4f5-2ff3-40c0-a999-b72b3ec3806d req-d533477c-59ff-4308-81f7-82ea814c4f2f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received unexpected event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 for instance with vm_state active and task_state resize_migrated.
Dec 06 08:10:57 compute-1 nova_compute[226101]: 2025-12-06 08:10:57.012 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:10:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:57.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
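[Annotation] Each radosgw probe shows up as a three-line group: "starting new request", "req done", and a beast access-log line. A hedged sketch of extracting fields from the beast line — the field layout here is inferred from these log lines only, not from a documented grammar:

    import re

    # Regex inferred from the access-log lines above (illustrative).
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous '
            '[06/Dec/2025:08:10:57.309 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000026s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group('ip'), m.group('status'), m.group('latency'))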
Dec 06 08:10:57 compute-1 ceph-mon[81689]: pgmap v3546: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 18 KiB/s wr, 3 op/s
Dec 06 08:10:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2075003298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e414 e414: 3 total, 3 up, 3 in
Dec 06 08:10:57 compute-1 nova_compute[226101]: 2025-12-06 08:10:57.584 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:58 compute-1 sudo[307168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:58 compute-1 sudo[307168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-1 sudo[307168]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 compute-1 sudo[307193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:10:58 compute-1 sudo[307193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-1 sudo[307193]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 compute-1 sudo[307218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:58 compute-1 sudo[307218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-1 sudo[307218]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 compute-1 sudo[307243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 08:10:58 compute-1 sudo[307243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-1 ceph-mon[81689]: osdmap e414: 3 total, 3 up, 3 in
Dec 06 08:10:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/19922362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:58.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:58 compute-1 nova_compute[226101]: 2025-12-06 08:10:58.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:58 compute-1 nova_compute[226101]: 2025-12-06 08:10:58.778 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:58 compute-1 podman[307343]: 2025-12-06 08:10:58.848853429 +0000 UTC m=+0.071906786 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 08:10:58 compute-1 nova_compute[226101]: 2025-12-06 08:10:58.915 226109 DEBUG nova.network.neutron [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updated VIF entry in instance network info cache for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:10:58 compute-1 nova_compute[226101]: 2025-12-06 08:10:58.915 226109 DEBUG nova.network.neutron [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating instance_info_cache with network_info: [{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:10:58 compute-1 nova_compute[226101]: 2025-12-06 08:10:58.971 226109 DEBUG oslo_concurrency.lockutils [req-4fd2735f-6f96-4a49-8705-3b55bc38c0ed req-91e2bd8f-dcc1-4112-b82c-0aee181c5111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:10:59 compute-1 podman[307343]: 2025-12-06 08:10:59.000845839 +0000 UTC m=+0.223899196 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:10:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:10:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:59.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:59 compute-1 sudo[307243]: pam_unix(sudo:session): session closed for user root
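[Annotation] The sudo burst above is the cephadm orchestrator probing the host (/bin/true connectivity checks, locating python3, then running the deployed cephadm binary). A sketch of the "ls" invocation under the assumption that cephadm ls prints a JSON array of daemon records; the fsid-based path is copied from the log line and is environment-specific, and the record keys may vary by release:

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "ls"],
        check=True, capture_output=True, text=True,
    ).stdout
    for daemon in json.loads(out):
        # Daemon records carry at least a name; other keys are
        # release-dependent, so .get() is used defensively here.
        print(daemon.get("name"), daemon.get("systemd_unit"))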
Dec 06 08:10:59 compute-1 ceph-mon[81689]: pgmap v3548: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 21 KiB/s wr, 4 op/s
Dec 06 08:10:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:10:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:10:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2171125071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:59 compute-1 sudo[307467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:59 compute-1 sudo[307467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:59 compute-1 sudo[307467]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:59 compute-1 sudo[307492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:10:59 compute-1 sudo[307492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:59 compute-1 sudo[307492]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:59 compute-1 sudo[307517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:59 compute-1 sudo[307517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:59 compute-1 sudo[307517]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:59 compute-1 sudo[307542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:10:59 compute-1 sudo[307542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:00 compute-1 sudo[307542]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:11:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:11:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:11:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:11:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.508615) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660508703, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 383, "num_deletes": 251, "total_data_size": 382837, "memory_usage": 391496, "flush_reason": "Manual Compaction"}
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660513477, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 252984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77423, "largest_seqno": 77801, "table_properties": {"data_size": 250560, "index_size": 523, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6131, "raw_average_key_size": 19, "raw_value_size": 245661, "raw_average_value_size": 777, "num_data_blocks": 21, "num_entries": 316, "num_filter_entries": 316, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008654, "oldest_key_time": 1765008654, "file_creation_time": 1765008660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 4903 microseconds, and 2251 cpu microseconds.
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.513529) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 252984 bytes OK
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.513550) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.515540) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.515560) EVENT_LOG_v1 {"time_micros": 1765008660515555, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.515576) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 380265, prev total WAL file size 380265, number of live WAL files 2.
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.516059) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(247KB)], [156(13MB)]
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660516186, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 14008212, "oldest_snapshot_seqno": -1}
Dec 06 08:11:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:00.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:00 compute-1 nova_compute[226101]: 2025-12-06 08:11:00.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 10420 keys, 11996637 bytes, temperature: kUnknown
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660599557, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 11996637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11931381, "index_size": 38071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26117, "raw_key_size": 277095, "raw_average_key_size": 26, "raw_value_size": 11750901, "raw_average_value_size": 1127, "num_data_blocks": 1432, "num_entries": 10420, "num_filter_entries": 10420, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765008660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.599957) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 11996637 bytes
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.602204) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.8 rd, 143.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.1 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(102.8) write-amplify(47.4) OK, records in: 10940, records dropped: 520 output_compression: NoCompression
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.602237) EVENT_LOG_v1 {"time_micros": 1765008660602223, "job": 100, "event": "compaction_finished", "compaction_time_micros": 83493, "compaction_time_cpu_micros": 35588, "output_level": 6, "num_output_files": 1, "total_output_size": 11996637, "num_input_records": 10940, "num_output_records": 10420, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660602591, "job": 100, "event": "table_file_deletion", "file_number": 158}
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660607387, "job": 100, "event": "table_file_deletion", "file_number": 156}
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.515990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.607517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.607524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.607528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.607531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:11:00 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:11:00.607534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
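[Annotation] The rocksdb flush/compaction sequence above embeds machine-readable EVENT_LOG_v1 JSON payloads. A small sketch that pulls those payloads out of the journal and summarizes compactions; the arithmetic matches the log ("records in: 10940, records dropped: 520" for job 100):

    import json
    import re

    EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})')

    def summarize(lines):
        for line in lines:
            m = EVENT_RE.search(line)
            if not m:
                continue
            ev = json.loads(m.group(1))
            if ev.get("event") == "compaction_finished":
                dropped = ev["num_input_records"] - ev["num_output_records"]
                # e.g. job 100: 10940 - 10420 = 520 records dropped
                # in 83493 compaction microseconds.
                print("job", ev["job"], "dropped", dropped,
                      "records in", ev["compaction_time_micros"], "us")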
Dec 06 08:11:00 compute-1 sshd-session[307465]: Received disconnect from 154.219.116.39 port 40946:11: Bye Bye [preauth]
Dec 06 08:11:00 compute-1 sshd-session[307465]: Disconnected from authenticating user root 154.219.116.39 port 40946 [preauth]
Dec 06 08:11:00 compute-1 nova_compute[226101]: 2025-12-06 08:11:00.953 226109 DEBUG nova.compute.manager [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:11:00 compute-1 nova_compute[226101]: 2025-12-06 08:11:00.954 226109 DEBUG oslo_concurrency.lockutils [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:00 compute-1 nova_compute[226101]: 2025-12-06 08:11:00.954 226109 DEBUG oslo_concurrency.lockutils [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:00 compute-1 nova_compute[226101]: 2025-12-06 08:11:00.955 226109 DEBUG oslo_concurrency.lockutils [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:00 compute-1 nova_compute[226101]: 2025-12-06 08:11:00.955 226109 DEBUG nova.compute.manager [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] No waiting events found dispatching network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:11:00 compute-1 nova_compute[226101]: 2025-12-06 08:11:00.955 226109 WARNING nova.compute.manager [req-0463687c-11b3-487a-8a60-79a62785d3f0 req-e46d64f6-af56-4738-968b-31ae108daee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received unexpected event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 for instance with vm_state resized and task_state None.
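[Annotation] The repeated "No waiting events found ... Received unexpected event" pairs follow a simple waiter/dispatcher pattern: external events from Neutron either wake a registered waiter or get logged as unexpected (harmless here, since the resize is driving VIF plug events the manager is not waiting on). A minimal illustrative sketch of that pattern — this is not Nova's actual code:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            # instance_uuid -> {event_name: threading.Event}
            self._events = {}

        def pop_instance_event(self, instance_uuid, event_name):
            with self._lock:  # "Acquiring lock ...-events" in the log
                waiter = self._events.get(instance_uuid, {}).pop(event_name, None)
            if waiter is None:
                # Matches "No waiting events found dispatching ..."
                # followed by the WARNING about an unexpected event.
                print(f"unexpected event {event_name} for {instance_uuid}")
                return
            waiter.set()  # wake the thread waiting on this event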
Dec 06 08:11:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:01.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:01 compute-1 ceph-mon[81689]: pgmap v3549: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 21 KiB/s wr, 4 op/s
Dec 06 08:11:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:11:01.689 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:11:01.690 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:11:01.690 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:02 compute-1 nova_compute[226101]: 2025-12-06 08:11:02.019 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:02 compute-1 ceph-mon[81689]: pgmap v3550: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 916 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:11:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:02.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:03 compute-1 nova_compute[226101]: 2025-12-06 08:11:03.098 226109 DEBUG nova.compute.manager [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:11:03 compute-1 nova_compute[226101]: 2025-12-06 08:11:03.099 226109 DEBUG oslo_concurrency.lockutils [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:03 compute-1 nova_compute[226101]: 2025-12-06 08:11:03.100 226109 DEBUG oslo_concurrency.lockutils [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:03 compute-1 nova_compute[226101]: 2025-12-06 08:11:03.100 226109 DEBUG oslo_concurrency.lockutils [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:03 compute-1 nova_compute[226101]: 2025-12-06 08:11:03.101 226109 DEBUG nova.compute.manager [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] No waiting events found dispatching network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:11:03 compute-1 nova_compute[226101]: 2025-12-06 08:11:03.101 226109 WARNING nova.compute.manager [req-085feaf0-7a88-4a57-b435-5b15c3c3411d req-0b10a99b-f20f-45a9-9000-804ce0bdc640 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Received unexpected event network-vif-plugged-77949a7e-19a2-4c1f-812b-a7d9452af1e3 for instance with vm_state resized and task_state None.
Dec 06 08:11:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:03.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:03 compute-1 nova_compute[226101]: 2025-12-06 08:11:03.441 226109 DEBUG oslo_concurrency.lockutils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "5f0de650-9c62-4323-9354-1e018f4f06df" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:03 compute-1 nova_compute[226101]: 2025-12-06 08:11:03.442 226109 DEBUG oslo_concurrency.lockutils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:03 compute-1 nova_compute[226101]: 2025-12-06 08:11:03.442 226109 DEBUG nova.compute.manager [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Going to confirm migration 24 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Dec 06 08:11:03 compute-1 nova_compute[226101]: 2025-12-06 08:11:03.781 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:04 compute-1 nova_compute[226101]: 2025-12-06 08:11:04.096 226109 DEBUG neutronclient.v2_0.client [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 77949a7e-19a2-4c1f-812b-a7d9452af1e3 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
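[Annotation] The 404 body above is the standard NeutronError JSON envelope; the subsequent lines show Nova proceeding to rebuild the network info cache, so the missing source-host port binding is treated as non-fatal during confirm_resize. A sketch of unpacking that envelope (the message is abbreviated with "..." here for readability):

    import json

    body = ('{"NeutronError": {"type": "PortBindingNotFound", '
            '"message": "Binding for port ... could not be found.", '
            '"detail": ""}}')
    err = json.loads(body)["NeutronError"]
    if err["type"] == "PortBindingNotFound":
        # The source-host binding is already gone after the resize,
        # so the caller can continue rather than fail.
        print(err["message"])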
Dec 06 08:11:04 compute-1 nova_compute[226101]: 2025-12-06 08:11:04.096 226109 DEBUG oslo_concurrency.lockutils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:11:04 compute-1 nova_compute[226101]: 2025-12-06 08:11:04.097 226109 DEBUG oslo_concurrency.lockutils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:11:04 compute-1 nova_compute[226101]: 2025-12-06 08:11:04.097 226109 DEBUG nova.network.neutron [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:11:04 compute-1 nova_compute[226101]: 2025-12-06 08:11:04.097 226109 DEBUG nova.objects.instance [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'info_cache' on Instance uuid 5f0de650-9c62-4323-9354-1e018f4f06df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:11:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:04.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:04 compute-1 ceph-mon[81689]: pgmap v3551: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 916 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:11:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:05.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:05 compute-1 nova_compute[226101]: 2025-12-06 08:11:05.851 226109 DEBUG nova.network.neutron [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Updating instance_info_cache with network_info: [{"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:11:05 compute-1 nova_compute[226101]: 2025-12-06 08:11:05.876 226109 DEBUG oslo_concurrency.lockutils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-5f0de650-9c62-4323-9354-1e018f4f06df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:11:05 compute-1 nova_compute[226101]: 2025-12-06 08:11:05.876 226109 DEBUG nova.objects.instance [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid 5f0de650-9c62-4323-9354-1e018f4f06df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:11:05 compute-1 sshd-session[307597]: Received disconnect from 107.150.106.178 port 50410:11: Bye Bye [preauth]
Dec 06 08:11:05 compute-1 sshd-session[307597]: Disconnected from authenticating user root 107.150.106.178 port 50410 [preauth]
Dec 06 08:11:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2928991331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:11:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2064527638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:11:06 compute-1 nova_compute[226101]: 2025-12-06 08:11:06.295 226109 DEBUG nova.storage.rbd_utils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] removing snapshot(nova-resize) on rbd image(5f0de650-9c62-4323-9354-1e018f4f06df_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 08:11:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:06.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:06 compute-1 sudo[307635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:06 compute-1 sudo[307635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:06 compute-1 sudo[307635]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:06 compute-1 sudo[307660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:11:06 compute-1 sudo[307660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:06 compute-1 sudo[307660]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.022 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e415 e415: 3 total, 3 up, 3 in
Dec 06 08:11:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:07.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.375 226109 DEBUG nova.virt.libvirt.vif [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-522215966',display_name='tempest-TestNetworkAdvancedServerOps-server-522215966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-522215966',id=197,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBGFC8f+teCAymzAzt+sw9RK9rNVgD4R9/B96GYFflUw/uMoFI69FGiUrAFSq23wDgsBI/tn77SEM4/cRaOgXgCYvCjaKqwg+7LU55D+cQXQ9YzNukb2X32Bw/xb16Ugg==',key_name='tempest-TestNetworkAdvancedServerOps-562647975',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:11:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-5fgxfuhj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:11:00Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=5f0de650-9c62-4323-9354-1e018f4f06df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.376 226109 DEBUG nova.network.os_vif_util [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "address": "fa:16:3e:f6:d1:7b", "network": {"id": "69c999d2-fbd1-484e-8d21-c2d6762854f2", "bridge": "br-int", "label": "tempest-network-smoke--1487736673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77949a7e-19", "ovs_interfaceid": "77949a7e-19a2-4c1f-812b-a7d9452af1e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.377 226109 DEBUG nova.network.os_vif_util [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.378 226109 DEBUG os_vif [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.382 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.382 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77949a7e-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.383 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.385 226109 INFO os_vif [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:d1:7b,bridge_name='br-int',has_traffic_filtering=True,id=77949a7e-19a2-4c1f-812b-a7d9452af1e3,network=Network(69c999d2-fbd1-484e-8d21-c2d6762854f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77949a7e-19')
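[Annotation] The unplug above goes through the public os-vif entry points, which dispatch to the "ovs" plugin and issue the DelPortCommand(tap77949a7e-19, br-int) visible in the ovsdbapp transaction log (a no-op here, since the port was already removed). A sketch assuming the os_vif initialize/unplug API; construction of the VIFOpenVSwitch object shown in the log is omitted:

    import os_vif
    from os_vif.objects import instance_info as osv_instance

    os_vif.initialize()  # load plugins once per process

    def unplug(vif, instance_uuid, instance_name):
        info = osv_instance.InstanceInfo(uuid=instance_uuid,
                                         name=instance_name)
        # Delegates to the plugin named in vif (here "ovs"), which
        # deletes the tap port from br-int via ovsdb.
        os_vif.unplug(vif, info)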
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.386 226109 DEBUG oslo_concurrency.lockutils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.386 226109 DEBUG oslo_concurrency.lockutils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:07 compute-1 ceph-mon[81689]: pgmap v3552: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 06 08:11:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:07 compute-1 ceph-mon[81689]: osdmap e415: 3 total, 3 up, 3 in
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.636 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:11:07.636 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:11:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:11:07.638 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.649 226109 DEBUG nova.scheduler.client.report [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.704 226109 DEBUG nova.scheduler.client.report [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.705 226109 DEBUG nova.compute.provider_tree [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.737 226109 DEBUG nova.scheduler.client.report [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.763 226109 DEBUG nova.scheduler.client.report [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:11:07 compute-1 nova_compute[226101]: 2025-12-06 08:11:07.805 226109 DEBUG oslo_concurrency.processutils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:11:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:11:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2249248437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:08 compute-1 nova_compute[226101]: 2025-12-06 08:11:08.284 226109 DEBUG oslo_concurrency.processutils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:11:08 compute-1 nova_compute[226101]: 2025-12-06 08:11:08.291 226109 DEBUG nova.compute.provider_tree [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:11:08 compute-1 nova_compute[226101]: 2025-12-06 08:11:08.308 226109 DEBUG nova.scheduler.client.report [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:11:08 compute-1 nova_compute[226101]: 2025-12-06 08:11:08.311 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008653.3107562, 5f0de650-9c62-4323-9354-1e018f4f06df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:11:08 compute-1 nova_compute[226101]: 2025-12-06 08:11:08.312 226109 INFO nova.compute.manager [-] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] VM Stopped (Lifecycle Event)
Dec 06 08:11:08 compute-1 nova_compute[226101]: 2025-12-06 08:11:08.334 226109 DEBUG nova.compute.manager [None req-68a4aed1-24db-4d74-88bc-9a05fc6f9a1e - - - - - -] [instance: 5f0de650-9c62-4323-9354-1e018f4f06df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:11:08 compute-1 nova_compute[226101]: 2025-12-06 08:11:08.363 226109 DEBUG oslo_concurrency.lockutils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2249248437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:08 compute-1 nova_compute[226101]: 2025-12-06 08:11:08.554 226109 INFO nova.scheduler.client.report [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Deleted allocation for migration 3c119e16-f7b2-44e7-8cac-2d8634d372fd
Dec 06 08:11:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:08.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:08 compute-1 nova_compute[226101]: 2025-12-06 08:11:08.637 226109 DEBUG oslo_concurrency.lockutils [None req-7db18def-052e-4558-a67b-c6641102067d 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "5f0de650-9c62-4323-9354-1e018f4f06df" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:08 compute-1 nova_compute[226101]: 2025-12-06 08:11:08.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:09.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:09 compute-1 ceph-mon[81689]: pgmap v3554: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Dec 06 08:11:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1874445790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:11:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1874445790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:11:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:10.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:10 compute-1 ceph-mon[81689]: pgmap v3555: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Dec 06 08:11:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:11.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:12 compute-1 nova_compute[226101]: 2025-12-06 08:11:12.023 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:12.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:12 compute-1 ceph-mon[81689]: pgmap v3556: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 190 op/s
Dec 06 08:11:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:13.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:11:13.640 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:13 compute-1 nova_compute[226101]: 2025-12-06 08:11:13.784 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 e416: 3 total, 3 up, 3 in
Dec 06 08:11:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:14.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:15.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:15 compute-1 ceph-mon[81689]: pgmap v3557: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 190 op/s
Dec 06 08:11:15 compute-1 ceph-mon[81689]: osdmap e416: 3 total, 3 up, 3 in
Dec 06 08:11:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:17 compute-1 nova_compute[226101]: 2025-12-06 08:11:17.025 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:17.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:17 compute-1 ceph-mon[81689]: pgmap v3559: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 34 KiB/s wr, 266 op/s
Dec 06 08:11:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:18.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:18 compute-1 nova_compute[226101]: 2025-12-06 08:11:18.785 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:18 compute-1 ceph-mon[81689]: pgmap v3560: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 30 KiB/s wr, 261 op/s
Dec 06 08:11:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:19.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:20.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:20 compute-1 ceph-mon[81689]: pgmap v3561: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 30 KiB/s wr, 261 op/s
Dec 06 08:11:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:21.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:22 compute-1 nova_compute[226101]: 2025-12-06 08:11:22.027 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:22 compute-1 podman[307708]: 2025-12-06 08:11:22.096494367 +0000 UTC m=+0.062427064 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 08:11:22 compute-1 podman[307707]: 2025-12-06 08:11:22.104670636 +0000 UTC m=+0.072330340 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 08:11:22 compute-1 podman[307709]: 2025-12-06 08:11:22.142930902 +0000 UTC m=+0.102998522 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:11:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:22.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:22 compute-1 ceph-mon[81689]: pgmap v3562: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 947 KiB/s rd, 2.5 MiB/s wr, 269 op/s
Dec 06 08:11:23 compute-1 sshd-session[307766]: Received disconnect from 186.87.166.141 port 58100:11: Bye Bye [preauth]
Dec 06 08:11:23 compute-1 sshd-session[307766]: Disconnected from authenticating user root 186.87.166.141 port 58100 [preauth]
Dec 06 08:11:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:23.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:23 compute-1 nova_compute[226101]: 2025-12-06 08:11:23.788 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:24.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:25.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:25 compute-1 ceph-mon[81689]: pgmap v3563: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 947 KiB/s rd, 2.5 MiB/s wr, 269 op/s
Dec 06 08:11:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:26.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:27 compute-1 nova_compute[226101]: 2025-12-06 08:11:27.029 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:27.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:27 compute-1 ceph-mon[81689]: pgmap v3564: 305 pgs: 305 active+clean; 215 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1014 KiB/s rd, 2.3 MiB/s wr, 282 op/s
Dec 06 08:11:28 compute-1 nova_compute[226101]: 2025-12-06 08:11:28.610 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:28.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:28 compute-1 ceph-mon[81689]: pgmap v3565: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 506 KiB/s rd, 2.2 MiB/s wr, 206 op/s
Dec 06 08:11:28 compute-1 nova_compute[226101]: 2025-12-06 08:11:28.727 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:28 compute-1 nova_compute[226101]: 2025-12-06 08:11:28.791 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:29.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:30.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:30 compute-1 ceph-mon[81689]: pgmap v3566: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 397 KiB/s rd, 2.2 MiB/s wr, 175 op/s
Dec 06 08:11:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:31.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:32 compute-1 nova_compute[226101]: 2025-12-06 08:11:32.030 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:32.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:33 compute-1 ceph-mon[81689]: pgmap v3567: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 400 KiB/s rd, 2.2 MiB/s wr, 176 op/s
Dec 06 08:11:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:33.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:33 compute-1 sshd-session[307769]: Received disconnect from 136.112.8.45 port 55804:11: Bye Bye [preauth]
Dec 06 08:11:33 compute-1 sshd-session[307769]: Disconnected from authenticating user root 136.112.8.45 port 55804 [preauth]
Dec 06 08:11:33 compute-1 nova_compute[226101]: 2025-12-06 08:11:33.793 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:34.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:35 compute-1 ceph-mon[81689]: pgmap v3568: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 65 KiB/s wr, 57 op/s
Dec 06 08:11:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:35.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:36 compute-1 nova_compute[226101]: 2025-12-06 08:11:36.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:36.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:37 compute-1 nova_compute[226101]: 2025-12-06 08:11:37.032 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:37 compute-1 ceph-mon[81689]: pgmap v3569: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 66 KiB/s wr, 57 op/s
Dec 06 08:11:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:37.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:38 compute-1 nova_compute[226101]: 2025-12-06 08:11:38.314 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:38 compute-1 nova_compute[226101]: 2025-12-06 08:11:38.315 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:38 compute-1 nova_compute[226101]: 2025-12-06 08:11:38.315 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:38 compute-1 nova_compute[226101]: 2025-12-06 08:11:38.315 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:11:38 compute-1 nova_compute[226101]: 2025-12-06 08:11:38.315 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:11:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:38.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:11:38 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2058364697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:38 compute-1 nova_compute[226101]: 2025-12-06 08:11:38.735 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:11:38 compute-1 nova_compute[226101]: 2025-12-06 08:11:38.794 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:38 compute-1 nova_compute[226101]: 2025-12-06 08:11:38.911 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:11:38 compute-1 nova_compute[226101]: 2025-12-06 08:11:38.912 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4268MB free_disk=20.942691802978516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:11:38 compute-1 nova_compute[226101]: 2025-12-06 08:11:38.912 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:38 compute-1 nova_compute[226101]: 2025-12-06 08:11:38.912 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:39 compute-1 nova_compute[226101]: 2025-12-06 08:11:39.086 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:11:39 compute-1 nova_compute[226101]: 2025-12-06 08:11:39.086 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:11:39 compute-1 nova_compute[226101]: 2025-12-06 08:11:39.112 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:11:39 compute-1 ceph-mon[81689]: pgmap v3570: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 18 KiB/s wr, 15 op/s
Dec 06 08:11:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2058364697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:39.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:11:39 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3957424338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:39 compute-1 nova_compute[226101]: 2025-12-06 08:11:39.552 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:11:39 compute-1 nova_compute[226101]: 2025-12-06 08:11:39.557 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:11:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3957424338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:40.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:41 compute-1 ceph-mon[81689]: pgmap v3571: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 6.7 KiB/s wr, 1 op/s
Dec 06 08:11:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:41.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:41 compute-1 nova_compute[226101]: 2025-12-06 08:11:41.849 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:11:42 compute-1 nova_compute[226101]: 2025-12-06 08:11:42.034 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:42 compute-1 nova_compute[226101]: 2025-12-06 08:11:42.089 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:11:42 compute-1 nova_compute[226101]: 2025-12-06 08:11:42.090 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:42 compute-1 sshd-session[307816]: Received disconnect from 186.96.151.198 port 36958:11: Bye Bye [preauth]
Dec 06 08:11:42 compute-1 sshd-session[307816]: Disconnected from authenticating user root 186.96.151.198 port 36958 [preauth]
Dec 06 08:11:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:42.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:43.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:43 compute-1 ceph-mon[81689]: pgmap v3572: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 7.8 KiB/s wr, 28 op/s
Dec 06 08:11:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3857437547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:43 compute-1 nova_compute[226101]: 2025-12-06 08:11:43.797 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:44.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:44 compute-1 ceph-mon[81689]: pgmap v3573: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Dec 06 08:11:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:46 compute-1 nova_compute[226101]: 2025-12-06 08:11:46.090 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:46 compute-1 nova_compute[226101]: 2025-12-06 08:11:46.091 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:11:46 compute-1 nova_compute[226101]: 2025-12-06 08:11:46.091 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:11:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:46.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:46 compute-1 ceph-mon[81689]: pgmap v3574: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Dec 06 08:11:46 compute-1 nova_compute[226101]: 2025-12-06 08:11:46.985 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:11:47 compute-1 nova_compute[226101]: 2025-12-06 08:11:47.036 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:47.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/640659210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:48.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:48 compute-1 nova_compute[226101]: 2025-12-06 08:11:48.799 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:11:48.994 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:11:48 compute-1 nova_compute[226101]: 2025-12-06 08:11:48.996 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:11:48.997 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:11:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:11:48.998 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:48 compute-1 ceph-mon[81689]: pgmap v3575: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Dec 06 08:11:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1897870452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:49.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:49 compute-1 nova_compute[226101]: 2025-12-06 08:11:49.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:49 compute-1 nova_compute[226101]: 2025-12-06 08:11:49.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:11:50 compute-1 nova_compute[226101]: 2025-12-06 08:11:50.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:50.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:51 compute-1 ceph-mon[81689]: pgmap v3576: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:11:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:51.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:52 compute-1 nova_compute[226101]: 2025-12-06 08:11:52.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:52.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:52 compute-1 sshd-session[307818]: Received disconnect from 101.100.194.199 port 38506:11: Bye Bye [preauth]
Dec 06 08:11:52 compute-1 sshd-session[307818]: Disconnected from authenticating user root 101.100.194.199 port 38506 [preauth]
Dec 06 08:11:53 compute-1 podman[307820]: 2025-12-06 08:11:53.116398436 +0000 UTC m=+0.087167267 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 08:11:53 compute-1 podman[307821]: 2025-12-06 08:11:53.125172041 +0000 UTC m=+0.091344869 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 08:11:53 compute-1 podman[307822]: 2025-12-06 08:11:53.15233441 +0000 UTC m=+0.121062647 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:11:53 compute-1 ceph-mon[81689]: pgmap v3577: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:11:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3257912044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:53 compute-1 nova_compute[226101]: 2025-12-06 08:11:53.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:53 compute-1 nova_compute[226101]: 2025-12-06 08:11:53.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:53 compute-1 nova_compute[226101]: 2025-12-06 08:11:53.801 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:54.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:55 compute-1 ceph-mon[81689]: pgmap v3578: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:11:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:55.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3596922978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:56.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:57 compute-1 nova_compute[226101]: 2025-12-06 08:11:57.040 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:11:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:57.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:11:57 compute-1 ceph-mon[81689]: pgmap v3579: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:11:58 compute-1 ceph-mon[81689]: pgmap v3580: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:11:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:58.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:58 compute-1 nova_compute[226101]: 2025-12-06 08:11:58.803 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:11:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:59.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:59 compute-1 nova_compute[226101]: 2025-12-06 08:11:59.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:00 compute-1 nova_compute[226101]: 2025-12-06 08:12:00.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:00.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:00 compute-1 ceph-mon[81689]: pgmap v3581: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:12:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:01.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:12:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:01 compute-1 nova_compute[226101]: 2025-12-06 08:12:01.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:12:01.690 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:12:01.691 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:12:01.691 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:02 compute-1 nova_compute[226101]: 2025-12-06 08:12:02.043 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:02.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:03 compute-1 ceph-mon[81689]: pgmap v3582: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:03.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:03 compute-1 nova_compute[226101]: 2025-12-06 08:12:03.805 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:04.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:05 compute-1 ceph-mon[81689]: pgmap v3583: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:05.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:06.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:07 compute-1 sudo[307885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:07 compute-1 sudo[307885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:07 compute-1 sudo[307885]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:07 compute-1 nova_compute[226101]: 2025-12-06 08:12:07.043 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:07 compute-1 sudo[307910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:12:07 compute-1 sudo[307910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:07 compute-1 sudo[307910]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:07 compute-1 sudo[307935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:07 compute-1 sudo[307935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:07 compute-1 sudo[307935]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:07 compute-1 ceph-mon[81689]: pgmap v3584: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:07 compute-1 sudo[307960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:12:07 compute-1 sudo[307960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:07.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:07 compute-1 sudo[307960]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:08.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2454514231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:12:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:12:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:12:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:12:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:12:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:12:08 compute-1 nova_compute[226101]: 2025-12-06 08:12:08.806 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:12:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:09.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:12:09 compute-1 ceph-mon[81689]: pgmap v3585: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3619985939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:12:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3619985939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:12:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:10.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:10 compute-1 ceph-mon[81689]: pgmap v3586: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:11 compute-1 sshd-session[308017]: Received disconnect from 124.18.141.70 port 56714:11: Bye Bye [preauth]
Dec 06 08:12:11 compute-1 sshd-session[308017]: Disconnected from authenticating user root 124.18.141.70 port 56714 [preauth]
Dec 06 08:12:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:11.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:12 compute-1 nova_compute[226101]: 2025-12-06 08:12:12.044 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:12.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:12 compute-1 ceph-mon[81689]: pgmap v3587: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:13.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:13 compute-1 sudo[308019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:13 compute-1 sudo[308019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:13 compute-1 sudo[308019]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:13 compute-1 nova_compute[226101]: 2025-12-06 08:12:13.808 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:13 compute-1 sudo[308044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:12:13 compute-1 sudo[308044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:13 compute-1 sudo[308044]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:12:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:12:14 compute-1 ceph-mon[81689]: pgmap v3588: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:12:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:14.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:12:15 compute-1 sshd-session[308069]: Received disconnect from 91.144.158.231 port 25933:11: Bye Bye [preauth]
Dec 06 08:12:15 compute-1 sshd-session[308069]: Disconnected from authenticating user root 91.144.158.231 port 25933 [preauth]
Dec 06 08:12:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:15.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:16.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:16 compute-1 ceph-mon[81689]: pgmap v3589: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:17 compute-1 nova_compute[226101]: 2025-12-06 08:12:17.047 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:17 compute-1 sshd-session[308071]: Received disconnect from 154.219.116.39 port 35874:11: Bye Bye [preauth]
Dec 06 08:12:17 compute-1 sshd-session[308071]: Disconnected from authenticating user root 154.219.116.39 port 35874 [preauth]
Dec 06 08:12:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:17.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:12:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:18.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:12:18 compute-1 nova_compute[226101]: 2025-12-06 08:12:18.810 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:19 compute-1 ceph-mon[81689]: pgmap v3590: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:19.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/308335499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3595195691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:12:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:20.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:12:21 compute-1 ceph-mon[81689]: pgmap v3591: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:21.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:22 compute-1 nova_compute[226101]: 2025-12-06 08:12:22.049 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:22.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:23 compute-1 ceph-mon[81689]: pgmap v3592: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:23.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:23 compute-1 nova_compute[226101]: 2025-12-06 08:12:23.811 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:24 compute-1 podman[308074]: 2025-12-06 08:12:24.081185959 +0000 UTC m=+0.060011938 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 06 08:12:24 compute-1 podman[308075]: 2025-12-06 08:12:24.101294429 +0000 UTC m=+0.080845368 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:12:24 compute-1 podman[308076]: 2025-12-06 08:12:24.15018449 +0000 UTC m=+0.128266100 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 08:12:24 compute-1 nova_compute[226101]: 2025-12-06 08:12:24.584 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:24.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:25 compute-1 ceph-mon[81689]: pgmap v3593: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:25.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:26.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:26 compute-1 nova_compute[226101]: 2025-12-06 08:12:26.982 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:12:26.982 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:12:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:12:26.983 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:12:27 compute-1 nova_compute[226101]: 2025-12-06 08:12:27.050 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:27.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:27 compute-1 ceph-mon[81689]: pgmap v3594: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 821 KiB/s rd, 12 KiB/s wr, 37 op/s
Dec 06 08:12:28 compute-1 sshd-session[308141]: Received disconnect from 154.209.4.183 port 51486:11: Bye Bye [preauth]
Dec 06 08:12:28 compute-1 sshd-session[308141]: Disconnected from authenticating user root 154.209.4.183 port 51486 [preauth]
Dec 06 08:12:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:28.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:28 compute-1 nova_compute[226101]: 2025-12-06 08:12:28.812 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:28 compute-1 ceph-mon[81689]: pgmap v3595: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:29.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:12:29.984 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:12:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:30 compute-1 ceph-mon[81689]: pgmap v3596: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:31.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:32 compute-1 nova_compute[226101]: 2025-12-06 08:12:32.052 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:32.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:32 compute-1 ceph-mon[81689]: pgmap v3597: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:33.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:33 compute-1 nova_compute[226101]: 2025-12-06 08:12:33.815 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:33 compute-1 ovn_controller[130279]: 2025-12-06T08:12:33Z|00819|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 06 08:12:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:34.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:35.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:35 compute-1 ceph-mon[81689]: pgmap v3598: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:36 compute-1 sshd-session[308145]: Received disconnect from 106.51.92.114 port 58318:11: Bye Bye [preauth]
Dec 06 08:12:36 compute-1 sshd-session[308145]: Disconnected from authenticating user root 106.51.92.114 port 58318 [preauth]
Dec 06 08:12:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:36.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:37 compute-1 nova_compute[226101]: 2025-12-06 08:12:37.053 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:37 compute-1 ceph-mon[81689]: pgmap v3599: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 77 op/s
Dec 06 08:12:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:37.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:37 compute-1 nova_compute[226101]: 2025-12-06 08:12:37.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:38 compute-1 nova_compute[226101]: 2025-12-06 08:12:38.672 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:38 compute-1 nova_compute[226101]: 2025-12-06 08:12:38.673 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:38 compute-1 nova_compute[226101]: 2025-12-06 08:12:38.673 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:38 compute-1 nova_compute[226101]: 2025-12-06 08:12:38.673 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:12:38 compute-1 nova_compute[226101]: 2025-12-06 08:12:38.674 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:12:38 compute-1 nova_compute[226101]: 2025-12-06 08:12:38.816 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:12:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:38.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:12:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:12:39 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2996067469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:39 compute-1 nova_compute[226101]: 2025-12-06 08:12:39.140 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:12:39 compute-1 nova_compute[226101]: 2025-12-06 08:12:39.304 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:12:39 compute-1 nova_compute[226101]: 2025-12-06 08:12:39.306 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4288MB free_disk=20.957351684570312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:12:39 compute-1 nova_compute[226101]: 2025-12-06 08:12:39.306 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:39 compute-1 nova_compute[226101]: 2025-12-06 08:12:39.306 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:39 compute-1 ceph-mon[81689]: pgmap v3600: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 876 KiB/s wr, 47 op/s
Dec 06 08:12:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2996067469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:39.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:39 compute-1 nova_compute[226101]: 2025-12-06 08:12:39.650 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:12:39 compute-1 nova_compute[226101]: 2025-12-06 08:12:39.650 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:12:39 compute-1 nova_compute[226101]: 2025-12-06 08:12:39.826 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:12:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:12:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4002310260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:40 compute-1 nova_compute[226101]: 2025-12-06 08:12:40.260 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:12:40 compute-1 nova_compute[226101]: 2025-12-06 08:12:40.266 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:12:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4002310260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:40 compute-1 nova_compute[226101]: 2025-12-06 08:12:40.437 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:12:40 compute-1 nova_compute[226101]: 2025-12-06 08:12:40.438 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:12:40 compute-1 nova_compute[226101]: 2025-12-06 08:12:40.439 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:40.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:41 compute-1 ceph-mon[81689]: pgmap v3601: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 876 KiB/s wr, 11 op/s
Dec 06 08:12:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:41.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:42 compute-1 nova_compute[226101]: 2025-12-06 08:12:42.054 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:42.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:43 compute-1 nova_compute[226101]: 2025-12-06 08:12:43.438 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:43 compute-1 nova_compute[226101]: 2025-12-06 08:12:43.439 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:12:43 compute-1 nova_compute[226101]: 2025-12-06 08:12:43.439 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:12:43 compute-1 nova_compute[226101]: 2025-12-06 08:12:43.461 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:12:43 compute-1 nova_compute[226101]: 2025-12-06 08:12:43.819 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:44.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:44.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:44 compute-1 ceph-mon[81689]: pgmap v3602: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 417 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:12:46 compute-1 sshd-session[308143]: Connection closed by 165.154.55.146 port 38956 [preauth]
Dec 06 08:12:46 compute-1 ceph-mon[81689]: pgmap v3603: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 417 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:12:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1750020864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:46 compute-1 sshd-session[308147]: ssh_dispatch_run_fatal: Connection from 14.103.118.136 port 36290: Connection timed out [preauth]
Dec 06 08:12:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:46.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:46.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:47 compute-1 nova_compute[226101]: 2025-12-06 08:12:47.088 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:48 compute-1 ceph-mon[81689]: pgmap v3604: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 418 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Dec 06 08:12:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3118951938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2206427747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:12:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:48.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:12:48 compute-1 nova_compute[226101]: 2025-12-06 08:12:48.821 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:12:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:48.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:12:49 compute-1 sshd-session[308193]: Received disconnect from 186.87.166.141 port 59206:11: Bye Bye [preauth]
Dec 06 08:12:49 compute-1 sshd-session[308193]: Disconnected from authenticating user root 186.87.166.141 port 59206 [preauth]
Dec 06 08:12:49 compute-1 ceph-mon[81689]: pgmap v3605: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 403 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Dec 06 08:12:49 compute-1 sshd-session[308195]: Received disconnect from 186.96.151.198 port 40600:11: Bye Bye [preauth]
Dec 06 08:12:49 compute-1 sshd-session[308195]: Disconnected from authenticating user root 186.96.151.198 port 40600 [preauth]
Dec 06 08:12:50 compute-1 nova_compute[226101]: 2025-12-06 08:12:50.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:50 compute-1 nova_compute[226101]: 2025-12-06 08:12:50.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:12:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:50.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:51 compute-1 nova_compute[226101]: 2025-12-06 08:12:51.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:52 compute-1 ceph-mon[81689]: pgmap v3606: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 384 KiB/s rd, 1.3 MiB/s wr, 59 op/s
Dec 06 08:12:52 compute-1 nova_compute[226101]: 2025-12-06 08:12:52.092 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:52.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:52.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:53 compute-1 ceph-mon[81689]: pgmap v3607: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 401 KiB/s rd, 3.1 MiB/s wr, 86 op/s
Dec 06 08:12:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1156600947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2306060753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:53 compute-1 nova_compute[226101]: 2025-12-06 08:12:53.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:53 compute-1 nova_compute[226101]: 2025-12-06 08:12:53.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:53 compute-1 nova_compute[226101]: 2025-12-06 08:12:53.823 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:54.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/393233924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:55 compute-1 podman[308198]: 2025-12-06 08:12:55.067810085 +0000 UTC m=+0.054119841 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 08:12:55 compute-1 podman[308197]: 2025-12-06 08:12:55.073517269 +0000 UTC m=+0.061506830 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:12:55 compute-1 podman[308199]: 2025-12-06 08:12:55.129068387 +0000 UTC m=+0.109451965 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:12:55 compute-1 ceph-mon[81689]: pgmap v3608: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 08:12:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3952924626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3228740001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1362112174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:56.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:56.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:57 compute-1 nova_compute[226101]: 2025-12-06 08:12:57.094 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:57 compute-1 ceph-mon[81689]: pgmap v3609: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:12:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:58 compute-1 ceph-mon[81689]: pgmap v3610: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 08:12:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:58.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:58 compute-1 nova_compute[226101]: 2025-12-06 08:12:58.824 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:12:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:58.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:59 compute-1 sshd-session[308262]: Received disconnect from 45.120.216.232 port 53570:11: Bye Bye [preauth]
Dec 06 08:12:59 compute-1 sshd-session[308262]: Disconnected from authenticating user root 45.120.216.232 port 53570 [preauth]
Dec 06 08:13:00 compute-1 nova_compute[226101]: 2025-12-06 08:13:00.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 08:13:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:00.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 08:13:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:00.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:01 compute-1 sshd-session[308264]: Received disconnect from 101.100.194.199 port 60682:11: Bye Bye [preauth]
Dec 06 08:13:01 compute-1 sshd-session[308264]: Disconnected from authenticating user root 101.100.194.199 port 60682 [preauth]
Dec 06 08:13:01 compute-1 ceph-mon[81689]: pgmap v3611: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 06 08:13:01 compute-1 nova_compute[226101]: 2025-12-06 08:13:01.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:01.692 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:01.692 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:01.692 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:02 compute-1 nova_compute[226101]: 2025-12-06 08:13:02.095 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:02 compute-1 nova_compute[226101]: 2025-12-06 08:13:02.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:02.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:02.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:02 compute-1 ceph-mon[81689]: pgmap v3612: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Dec 06 08:13:03 compute-1 nova_compute[226101]: 2025-12-06 08:13:03.826 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:04.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:04.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:05 compute-1 ceph-mon[81689]: pgmap v3613: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 144 op/s
Dec 06 08:13:06 compute-1 ceph-mon[81689]: pgmap v3614: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 144 op/s
Dec 06 08:13:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:06.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:06.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:07 compute-1 nova_compute[226101]: 2025-12-06 08:13:07.096 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:08.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:08 compute-1 nova_compute[226101]: 2025-12-06 08:13:08.827 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:08.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:09 compute-1 ceph-mon[81689]: pgmap v3615: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 142 op/s
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.591 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.591 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.592 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.592 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.592 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.592 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.630 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.630 226109 WARNING nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Unknown base file: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.631 226109 WARNING nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.631 226109 WARNING nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Unknown base file: /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.631 226109 INFO nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Removable base files: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.631 226109 INFO nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.632 226109 INFO nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.632 226109 INFO nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/057b78433884b2f3b54db7a8a6319a4281d6a489
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.632 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.632 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec 06 08:13:09 compute-1 nova_compute[226101]: 2025-12-06 08:13:09.633 226109 DEBUG nova.virt.libvirt.imagecache [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec 06 08:13:10 compute-1 nova_compute[226101]: 2025-12-06 08:13:10.526 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "9f03ad4a-9649-4be1-8f8c-152f8374111c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:10 compute-1 nova_compute[226101]: 2025-12-06 08:13:10.527 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:10 compute-1 nova_compute[226101]: 2025-12-06 08:13:10.637 226109 DEBUG nova.compute.manager [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:13:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:10.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:10.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:11 compute-1 nova_compute[226101]: 2025-12-06 08:13:11.225 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:11 compute-1 nova_compute[226101]: 2025-12-06 08:13:11.226 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:11 compute-1 nova_compute[226101]: 2025-12-06 08:13:11.234 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:13:11 compute-1 nova_compute[226101]: 2025-12-06 08:13:11.234 226109 INFO nova.compute.claims [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Claim successful on node compute-1.ctlplane.example.com
Dec 06 08:13:11 compute-1 nova_compute[226101]: 2025-12-06 08:13:11.617 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:13:12 compute-1 nova_compute[226101]: 2025-12-06 08:13:12.097 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1088677073' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:13:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1088677073' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:13:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:12.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:12.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:13:13 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3653969976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.047 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.053 226109 DEBUG nova.compute.provider_tree [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.171 226109 DEBUG nova.scheduler.client.report [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.221 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.222 226109 DEBUG nova.compute.manager [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.300 226109 DEBUG nova.compute.manager [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.301 226109 DEBUG nova.network.neutron [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.417 226109 INFO nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.537 226109 DEBUG nova.compute.manager [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:13:13 compute-1 ceph-mon[81689]: pgmap v3616: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 141 op/s
Dec 06 08:13:13 compute-1 ceph-mon[81689]: pgmap v3617: 305 pgs: 305 active+clean; 247 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 376 KiB/s wr, 162 op/s
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.737 226109 DEBUG nova.compute.manager [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.738 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.739 226109 INFO nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Creating image(s)
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.875 226109 DEBUG nova.storage.rbd_utils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.915 226109 DEBUG nova.storage.rbd_utils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.939 226109 DEBUG nova.storage.rbd_utils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.942 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.970 226109 DEBUG nova.policy [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:13:13 compute-1 nova_compute[226101]: 2025-12-06 08:13:13.972 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:14 compute-1 nova_compute[226101]: 2025-12-06 08:13:14.010 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:13:14 compute-1 nova_compute[226101]: 2025-12-06 08:13:14.011 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:14 compute-1 nova_compute[226101]: 2025-12-06 08:13:14.012 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:14 compute-1 nova_compute[226101]: 2025-12-06 08:13:14.012 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:14 compute-1 sudo[308343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:14 compute-1 sudo[308343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:14 compute-1 sudo[308343]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:14 compute-1 nova_compute[226101]: 2025-12-06 08:13:14.039 226109 DEBUG nova.storage.rbd_utils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:13:14 compute-1 nova_compute[226101]: 2025-12-06 08:13:14.043 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:13:14 compute-1 sudo[308385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:13:14 compute-1 sudo[308385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:14 compute-1 sudo[308385]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:14 compute-1 sudo[308414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:14 compute-1 sudo[308414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:14 compute-1 sudo[308414]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:14 compute-1 sudo[308454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 08:13:14 compute-1 sudo[308454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:14 compute-1 sudo[308454]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:14.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:14 compute-1 nova_compute[226101]: 2025-12-06 08:13:14.856 226109 DEBUG nova.network.neutron [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Successfully created port: 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:13:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:14.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3653969976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:15 compute-1 sshd-session[308502]: Received disconnect from 136.112.8.45 port 44518:11: Bye Bye [preauth]
Dec 06 08:13:15 compute-1 sshd-session[308502]: Disconnected from authenticating user root 136.112.8.45 port 44518 [preauth]
Dec 06 08:13:15 compute-1 sudo[308504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:15 compute-1 sudo[308504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:15 compute-1 sudo[308504]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:16 compute-1 sudo[308529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:13:16 compute-1 sudo[308529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:16 compute-1 sudo[308529]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:16 compute-1 sudo[308554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:16 compute-1 sudo[308554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:16 compute-1 sudo[308554]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:16 compute-1 sudo[308579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:13:16 compute-1 sudo[308579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:16 compute-1 nova_compute[226101]: 2025-12-06 08:13:16.246 226109 DEBUG nova.network.neutron [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Successfully updated port: 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:13:16 compute-1 nova_compute[226101]: 2025-12-06 08:13:16.273 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:13:16 compute-1 nova_compute[226101]: 2025-12-06 08:13:16.274 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:13:16 compute-1 nova_compute[226101]: 2025-12-06 08:13:16.274 226109 DEBUG nova.network.neutron [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:13:16 compute-1 nova_compute[226101]: 2025-12-06 08:13:16.433 226109 DEBUG nova.compute.manager [req-95b83eea-8ac7-43cb-9f59-49d0a1e323c0 req-52034267-473d-4575-be0c-03b7613c68ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Received event network-changed-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:16 compute-1 nova_compute[226101]: 2025-12-06 08:13:16.434 226109 DEBUG nova.compute.manager [req-95b83eea-8ac7-43cb-9f59-49d0a1e323c0 req-52034267-473d-4575-be0c-03b7613c68ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Refreshing instance network info cache due to event network-changed-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:13:16 compute-1 nova_compute[226101]: 2025-12-06 08:13:16.434 226109 DEBUG oslo_concurrency.lockutils [req-95b83eea-8ac7-43cb-9f59-49d0a1e323c0 req-52034267-473d-4575-be0c-03b7613c68ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:13:16 compute-1 sudo[308579]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:16.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:16.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:16 compute-1 nova_compute[226101]: 2025-12-06 08:13:16.940 226109 DEBUG nova.network.neutron [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:13:16 compute-1 ceph-mon[81689]: pgmap v3618: 305 pgs: 305 active+clean; 247 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 364 KiB/s wr, 20 op/s
Dec 06 08:13:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:13:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:13:17 compute-1 nova_compute[226101]: 2025-12-06 08:13:17.099 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:17 compute-1 ceph-mon[81689]: pgmap v3619: 305 pgs: 305 active+clean; 260 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 376 KiB/s rd, 1.5 MiB/s wr, 59 op/s
Dec 06 08:13:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:13:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:13:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:13:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:13:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:13:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:13:18 compute-1 nova_compute[226101]: 2025-12-06 08:13:18.131 226109 DEBUG nova.network.neutron [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Updating instance_info_cache with network_info: [{"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:13:18 compute-1 nova_compute[226101]: 2025-12-06 08:13:18.199 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:13:18 compute-1 nova_compute[226101]: 2025-12-06 08:13:18.199 226109 DEBUG nova.compute.manager [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Instance network_info: |[{"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:13:18 compute-1 nova_compute[226101]: 2025-12-06 08:13:18.199 226109 DEBUG oslo_concurrency.lockutils [req-95b83eea-8ac7-43cb-9f59-49d0a1e323c0 req-52034267-473d-4575-be0c-03b7613c68ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:13:18 compute-1 nova_compute[226101]: 2025-12-06 08:13:18.200 226109 DEBUG nova.network.neutron [req-95b83eea-8ac7-43cb-9f59-49d0a1e323c0 req-52034267-473d-4575-be0c-03b7613c68ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Refreshing network info cache for port 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:13:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:18 compute-1 nova_compute[226101]: 2025-12-06 08:13:18.517 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:13:18 compute-1 nova_compute[226101]: 2025-12-06 08:13:18.580 226109 DEBUG nova.storage.rbd_utils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:13:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:18.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:18.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:18 compute-1 nova_compute[226101]: 2025-12-06 08:13:18.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:19 compute-1 ceph-mon[81689]: pgmap v3620: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.701 226109 DEBUG nova.objects.instance [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 9f03ad4a-9649-4be1-8f8c-152f8374111c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.723 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.723 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Ensure instance console log exists: /var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.724 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.724 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.724 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.726 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Start _get_guest_xml network_info=[{"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.730 226109 WARNING nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.735 226109 DEBUG nova.virt.libvirt.host [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.736 226109 DEBUG nova.virt.libvirt.host [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.739 226109 DEBUG nova.virt.libvirt.host [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.740 226109 DEBUG nova.virt.libvirt.host [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.741 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.741 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.741 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.742 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.742 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.742 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.742 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.742 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.743 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.743 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.743 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.743 226109 DEBUG nova.virt.hardware [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.745 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.970 226109 DEBUG nova.network.neutron [req-95b83eea-8ac7-43cb-9f59-49d0a1e323c0 req-52034267-473d-4575-be0c-03b7613c68ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Updated VIF entry in instance network info cache for port 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.971 226109 DEBUG nova.network.neutron [req-95b83eea-8ac7-43cb-9f59-49d0a1e323c0 req-52034267-473d-4575-be0c-03b7613c68ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Updating instance_info_cache with network_info: [{"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:13:19 compute-1 nova_compute[226101]: 2025-12-06 08:13:19.993 226109 DEBUG oslo_concurrency.lockutils [req-95b83eea-8ac7-43cb-9f59-49d0a1e323c0 req-52034267-473d-4575-be0c-03b7613c68ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:13:20 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:13:20 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1773110591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:13:20 compute-1 nova_compute[226101]: 2025-12-06 08:13:20.153 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:13:20 compute-1 nova_compute[226101]: 2025-12-06 08:13:20.191 226109 DEBUG nova.storage.rbd_utils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:13:20 compute-1 nova_compute[226101]: 2025-12-06 08:13:20.196 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:13:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:20.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:20.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:13:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1710177433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:13:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1773110591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.619 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.621 226109 DEBUG nova.virt.libvirt.vif [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:13:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-566468355',display_name='tempest-TestNetworkBasicOps-server-566468355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-566468355',id=201,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8eLwuyssfzJT4ThZFVtdP/zO6VJa7ZQsEEc42HZazmyuqIPh0yO+tAD/ScyUHxO85nwLHXe57Eu+8dZCRnJgpRQLArfNfUR6FZjuSX2LxTa/bT7wDV93tmmNkzvX9xbA==',key_name='tempest-TestNetworkBasicOps-1994829414',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-hlevtqdz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:13:13Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=9f03ad4a-9649-4be1-8f8c-152f8374111c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.621 226109 DEBUG nova.network.os_vif_util [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.622 226109 DEBUG nova.network.os_vif_util [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:49:a9,bridge_name='br-int',has_traffic_filtering=True,id=4caeb5ec-155f-4b7a-8ac3-9a7c936677a6,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4caeb5ec-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.624 226109 DEBUG nova.objects.instance [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f03ad4a-9649-4be1-8f8c-152f8374111c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.662 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:13:21 compute-1 nova_compute[226101]:   <uuid>9f03ad4a-9649-4be1-8f8c-152f8374111c</uuid>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   <name>instance-000000c9</name>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <nova:name>tempest-TestNetworkBasicOps-server-566468355</nova:name>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:13:19</nova:creationTime>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <nova:port uuid="4caeb5ec-155f-4b7a-8ac3-9a7c936677a6">
Dec 06 08:13:21 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <system>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <entry name="serial">9f03ad4a-9649-4be1-8f8c-152f8374111c</entry>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <entry name="uuid">9f03ad4a-9649-4be1-8f8c-152f8374111c</entry>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     </system>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   <os>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   </os>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   <features>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   </features>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/9f03ad4a-9649-4be1-8f8c-152f8374111c_disk">
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       </source>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/9f03ad4a-9649-4be1-8f8c-152f8374111c_disk.config">
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       </source>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:13:21 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:19:49:a9"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <target dev="tap4caeb5ec-15"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c/console.log" append="off"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <video>
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     </video>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:13:21 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:13:21 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:13:21 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:13:21 compute-1 nova_compute[226101]: </domain>
Dec 06 08:13:21 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.664 226109 DEBUG nova.compute.manager [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Preparing to wait for external event network-vif-plugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.665 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.665 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.666 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.667 226109 DEBUG nova.virt.libvirt.vif [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:13:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-566468355',display_name='tempest-TestNetworkBasicOps-server-566468355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-566468355',id=201,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8eLwuyssfzJT4ThZFVtdP/zO6VJa7ZQsEEc42HZazmyuqIPh0yO+tAD/ScyUHxO85nwLHXe57Eu+8dZCRnJgpRQLArfNfUR6FZjuSX2LxTa/bT7wDV93tmmNkzvX9xbA==',key_name='tempest-TestNetworkBasicOps-1994829414',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-hlevtqdz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:13:13Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=9f03ad4a-9649-4be1-8f8c-152f8374111c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.668 226109 DEBUG nova.network.os_vif_util [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.669 226109 DEBUG nova.network.os_vif_util [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:49:a9,bridge_name='br-int',has_traffic_filtering=True,id=4caeb5ec-155f-4b7a-8ac3-9a7c936677a6,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4caeb5ec-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.670 226109 DEBUG os_vif [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:49:a9,bridge_name='br-int',has_traffic_filtering=True,id=4caeb5ec-155f-4b7a-8ac3-9a7c936677a6,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4caeb5ec-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.671 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.673 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.674 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.679 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.679 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4caeb5ec-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.680 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4caeb5ec-15, col_values=(('external_ids', {'iface-id': '4caeb5ec-155f-4b7a-8ac3-9a7c936677a6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:49:a9', 'vm-uuid': '9f03ad4a-9649-4be1-8f8c-152f8374111c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
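The AddPortCommand (idx=0) and DbSetCommand (idx=1) above run as one OVSDB transaction: os-vif adds the tap port to br-int and stamps the Interface row with the Neutron port ID so ovn-controller can later claim it (see the binding messages at 08:13:24). A minimal Python sketch of the equivalent one-shot ovs-vsctl transaction, with every name taken from the log lines above; the subprocess call is an editor's illustration, not Nova's actual code path:

    import subprocess

    TAP = "tap4caeb5ec-15"                             # devname from the VIF
    IFACE_ID = "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6"  # Neutron port UUID
    MAC = "fa:16:3e:19:49:a9"
    VM_UUID = "9f03ad4a-9649-4be1-8f8c-152f8374111c"

    # "--" chains both sub-commands into a single OVSDB transaction,
    # matching the idx=0 / idx=1 commands logged above.
    subprocess.run(
        ["ovs-vsctl",
         "--", "--may-exist", "add-port", "br-int", TAP,
         "--", "set", "Interface", TAP,
         "external_ids:iface-id=" + IFACE_ID,
         "external_ids:iface-status=active",
         "external_ids:attached-mac=" + MAC,
         "external_ids:vm-uuid=" + VM_UUID],
        check=True,
    )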
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.683 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:21 compute-1 NetworkManager[49031]: <info>  [1765008801.6856] manager: (tap4caeb5ec-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.686 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.695 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:21 compute-1 nova_compute[226101]: 2025-12-06 08:13:21.695 226109 INFO os_vif [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:49:a9,bridge_name='br-int',has_traffic_filtering=True,id=4caeb5ec-155f-4b7a-8ac3-9a7c936677a6,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4caeb5ec-15')
Dec 06 08:13:22 compute-1 nova_compute[226101]: 2025-12-06 08:13:22.043 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:13:22 compute-1 nova_compute[226101]: 2025-12-06 08:13:22.044 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:13:22 compute-1 nova_compute[226101]: 2025-12-06 08:13:22.044 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:19:49:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:13:22 compute-1 nova_compute[226101]: 2025-12-06 08:13:22.044 226109 INFO nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Using config drive
Dec 06 08:13:22 compute-1 nova_compute[226101]: 2025-12-06 08:13:22.072 226109 DEBUG nova.storage.rbd_utils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:13:22 compute-1 nova_compute[226101]: 2025-12-06 08:13:22.102 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:22.252 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=92, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=91) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:13:22 compute-1 nova_compute[226101]: 2025-12-06 08:13:22.252 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:22.253 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:13:22 compute-1 nova_compute[226101]: 2025-12-06 08:13:22.354 226109 INFO nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Creating config drive at /var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c/disk.config
Dec 06 08:13:22 compute-1 nova_compute[226101]: 2025-12-06 08:13:22.358 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpza50mf08 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:13:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:22.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:22.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:23 compute-1 nova_compute[226101]: 2025-12-06 08:13:23.401 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpza50mf08" returned: 0 in 1.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
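The config drive is built by shelling out through oslo.concurrency's processutils, with the exact argv logged above: a Joliet/Rock Ridge ISO labelled config-2, produced from the staging directory Nova populated. A minimal sketch of that call, assuming only what the logged command shows (processutils.execute raises on a non-zero exit, mirroring the "returned: 0" record):

    from oslo_concurrency import processutils

    iso = "/var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c/disk.config"
    staging = "/tmp/tmpza50mf08"   # temp dir Nova filled with the metadata files
    publisher = "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9"

    # -J (Joliet) and -r (Rock Ridge) plus volume label config-2, as logged.
    stdout, stderr = processutils.execute(
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher, "-quiet", "-J", "-r",
        "-V", "config-2", staging)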
Dec 06 08:13:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:23 compute-1 nova_compute[226101]: 2025-12-06 08:13:23.429 226109 DEBUG nova.storage.rbd_utils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:13:23 compute-1 nova_compute[226101]: 2025-12-06 08:13:23.434 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c/disk.config 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:13:23 compute-1 ceph-mon[81689]: pgmap v3621: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Dec 06 08:13:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1710177433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.566 226109 DEBUG oslo_concurrency.processutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c/disk.config 9f03ad4a-9649-4be1-8f8c-152f8374111c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.567 226109 INFO nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Deleting local config drive /var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c/disk.config because it was imported into RBD.
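Because this deployment keeps instance disks in Ceph (the rbd_utils probes above), the freshly built ISO is imported into the vms pool under the <uuid>_disk.config name and the local copy removed. The two steps, sketched with the argv from the log (the subprocess wrapper is illustrative):

    import os
    import subprocess

    local = "/var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c/disk.config"
    image = "9f03ad4a-9649-4be1-8f8c-152f8374111c_disk.config"

    subprocess.run(
        ["rbd", "import", "--pool", "vms", local, image,
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(local)  # "Deleting local config drive ... imported into RBD"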
Dec 06 08:13:24 compute-1 kernel: tap4caeb5ec-15: entered promiscuous mode
Dec 06 08:13:24 compute-1 NetworkManager[49031]: <info>  [1765008804.6249] manager: (tap4caeb5ec-15): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Dec 06 08:13:24 compute-1 ovn_controller[130279]: 2025-12-06T08:13:24Z|00820|binding|INFO|Claiming lport 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 for this chassis.
Dec 06 08:13:24 compute-1 ovn_controller[130279]: 2025-12-06T08:13:24Z|00821|binding|INFO|4caeb5ec-155f-4b7a-8ac3-9a7c936677a6: Claiming fa:16:3e:19:49:a9 10.100.0.7
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.627 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.633 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.640 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-1 NetworkManager[49031]: <info>  [1765008804.6420] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Dec 06 08:13:24 compute-1 NetworkManager[49031]: <info>  [1765008804.6424] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.648 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:49:a9 10.100.0.7'], port_security=['fa:16:3e:19:49:a9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9f03ad4a-9649-4be1-8f8c-152f8374111c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9489f490-c39b-4337-88af-2235c40dec3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c607db35-6f3d-4821-9124-a70a0a233535, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=4caeb5ec-155f-4b7a-8ac3-9a7c936677a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.649 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 in datapath 1bf97b73-354e-4df7-9a72-727cdc64dc43 bound to our chassis
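The "Matched UPDATE" record above is the metadata agent's ovsdbapp event machinery at work: a Port_Binding row whose old chassis was empty now points at this chassis, so the agent provisions metadata for the datapath. A rough sketch of such an event class, assuming the RowEvent interface implied by the logged fields (events tuple, table name, a run() callback); neutron's real class is the PortBindingUpdatedEvent named in the log:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fires when a Port_Binding row gains a chassis (port bound here)."""

        def __init__(self, our_chassis):
            self.our_chassis = our_chassis
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly the fields shown in the Matched UPDATE record.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # old carries only the changed columns: chassis=[] before binding.
            if row.chassis and not getattr(old, "chassis", None):
                print(f"Port {row.logical_port} bound to {self.our_chassis}")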
Dec 06 08:13:24 compute-1 systemd-udevd[308841]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.651 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1bf97b73-354e-4df7-9a72-727cdc64dc43
Dec 06 08:13:24 compute-1 systemd-machined[190302]: New machine qemu-94-instance-000000c9.
Dec 06 08:13:24 compute-1 NetworkManager[49031]: <info>  [1765008804.6616] device (tap4caeb5ec-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:13:24 compute-1 NetworkManager[49031]: <info>  [1765008804.6626] device (tap4caeb5ec-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.663 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fd4c77-6808-4a42-a223-d14d113d58c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.664 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1bf97b73-31 in ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
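Provisioning the datapath builds a veth pair: the -31 end is placed inside the ovnmeta- namespace (its MAC fa:16:3e:55:e2:44 shows up in the netlink dumps below), while the -30 end stays in the root namespace and is plugged into br-int a moment later (AddPortCommand at 08:13:24.933). In bare iproute2 terms the privsep calls amount to roughly the following, an illustrative sketch rather than neutron's actual ip_lib code:

    import subprocess

    ns = "ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43"
    outer, inner = "tap1bf97b73-30", "tap1bf97b73-31"

    subprocess.run(["ip", "netns", "add", ns], check=True)
    subprocess.run(["ip", "link", "add", outer, "type", "veth",
                    "peer", "name", inner], check=True)
    subprocess.run(["ip", "link", "set", inner, "netns", ns], check=True)
    subprocess.run(["ip", "-n", ns, "link", "set", inner, "up"], check=True)
    subprocess.run(["ip", "link", "set", outer, "up"], check=True)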
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.667 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1bf97b73-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.667 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f275f2-2afa-4132-8246-299531c7d6ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.667 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c76f4ecc-c347-46f9-8993-44ef3f9cbb33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 systemd[1]: Started Virtual Machine qemu-94-instance-000000c9.
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.681 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[77d53c74-0f7c-47b6-b6b9-5aeeeb12a0d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.705 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[14eae617-9ce7-4b70-80f2-6393ebd7bb56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.734 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[803ba7c2-5557-401b-a55d-9fb6e85c4e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.739 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0b231a3b-6e06-4599-abbc-6b252f0f477d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 systemd-udevd[308845]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:13:24 compute-1 NetworkManager[49031]: <info>  [1765008804.7490] manager: (tap1bf97b73-30): new Veth device (/org/freedesktop/NetworkManager/Devices/380)
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.751 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.754 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.769 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.774 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[4477a559-7530-4a5a-887d-61bdb4c61f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_controller[130279]: 2025-12-06T08:13:24Z|00822|binding|INFO|Setting lport 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 ovn-installed in OVS
Dec 06 08:13:24 compute-1 ovn_controller[130279]: 2025-12-06T08:13:24Z|00823|binding|INFO|Setting lport 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 up in Southbound
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.777 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e1070439-4f2a-4a88-b67a-9404a6c63322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.778 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:24.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:24 compute-1 NetworkManager[49031]: <info>  [1765008804.7969] device (tap1bf97b73-30): carrier: link connected
Dec 06 08:13:24 compute-1 ceph-mon[81689]: pgmap v3622: 305 pgs: 305 active+clean; 320 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 908 KiB/s rd, 3.9 MiB/s wr, 130 op/s
Dec 06 08:13:24 compute-1 ceph-mon[81689]: pgmap v3623: 305 pgs: 305 active+clean; 320 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 803 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.801 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[36f5627c-dec0-4bbe-a20e-08ff140d48a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.816 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[69db8e61-8869-415c-bf3e-bb5152327b68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1bf97b73-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:e2:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 908430, 'reachable_time': 40083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308876, 'error': None, 'target': 'ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.830 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[91ad8065-95e2-404d-81f9-e516d2a69c0e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:e244'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 908430, 'tstamp': 908430}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308877, 'error': None, 'target': 'ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.847 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[40423f86-f238-4947-a5ad-16e1380d1941]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1bf97b73-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:e2:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 908430, 'reachable_time': 40083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308878, 'error': None, 'target': 'ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.877 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[040fcb62-d712-42e0-96e3-7d618d7befd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.931 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b1d121-fdd5-424b-99c8-fd0af78fb36b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.932 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bf97b73-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.932 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.933 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bf97b73-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.934 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-1 kernel: tap1bf97b73-30: entered promiscuous mode
Dec 06 08:13:24 compute-1 NetworkManager[49031]: <info>  [1765008804.9354] manager: (tap1bf97b73-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.937 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.938 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1bf97b73-30, col_values=(('external_ids', {'iface-id': 'df0319da-86fa-419b-bb2d-0ca654179487'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.938 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-1 ovn_controller[130279]: 2025-12-06T08:13:24Z|00824|binding|INFO|Releasing lport df0319da-86fa-419b-bb2d-0ca654179487 from this chassis (sb_readonly=0)
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.950 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.951 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1bf97b73-354e-4df7-9a72-727cdc64dc43.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1bf97b73-354e-4df7-9a72-727cdc64dc43.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.951 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ef767b22-62ce-4c94-8f09-9e8f6030b1a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.952 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-1bf97b73-354e-4df7-9a72-727cdc64dc43
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/1bf97b73-354e-4df7-9a72-727cdc64dc43.pid.haproxy
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 1bf97b73-354e-4df7-9a72-727cdc64dc43
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:13:24 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:24.953 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'env', 'PROCESS_TAG=haproxy-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1bf97b73-354e-4df7-9a72-727cdc64dc43.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
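The rendered config binds the proxy to 169.254.169.254:80 inside the namespace only, forwards to the agent's Unix socket at /var/lib/neutron/metadata_proxy (haproxy treats a server address starting with "/" as a Unix socket), and adds the X-OVN-Network-ID header so the agent can tell which network a request came from. Stripped of rootwrap, the launch command logged above reduces to roughly the following illustrative sketch:

    import subprocess

    ns = "ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43"
    cfg = "/var/lib/neutron/ovn-metadata-proxy/1bf97b73-354e-4df7-9a72-727cdc64dc43.conf"

    # haproxy self-daemonizes ("daemon" in its global section) and writes the
    # pidfile the agent probed for just before this.
    subprocess.run(["ip", "netns", "exec", ns, "haproxy", "-f", cfg], check=True)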
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.975 226109 DEBUG nova.compute.manager [req-aa3f9ae5-8c95-49e7-9da8-57ac72b3fed7 req-948324a6-20b5-41db-873a-612dc5e4b696 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Received event network-vif-plugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.976 226109 DEBUG oslo_concurrency.lockutils [req-aa3f9ae5-8c95-49e7-9da8-57ac72b3fed7 req-948324a6-20b5-41db-873a-612dc5e4b696 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.976 226109 DEBUG oslo_concurrency.lockutils [req-aa3f9ae5-8c95-49e7-9da8-57ac72b3fed7 req-948324a6-20b5-41db-873a-612dc5e4b696 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.977 226109 DEBUG oslo_concurrency.lockutils [req-aa3f9ae5-8c95-49e7-9da8-57ac72b3fed7 req-948324a6-20b5-41db-873a-612dc5e4b696 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:24 compute-1 nova_compute[226101]: 2025-12-06 08:13:24.977 226109 DEBUG nova.compute.manager [req-aa3f9ae5-8c95-49e7-9da8-57ac72b3fed7 req-948324a6-20b5-41db-873a-612dc5e4b696 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Processing event network-vif-plugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.148 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008805.1478343, 9f03ad4a-9649-4be1-8f8c-152f8374111c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.149 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] VM Started (Lifecycle Event)
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.151 226109 DEBUG nova.compute.manager [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.159 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.163 226109 INFO nova.virt.libvirt.driver [-] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Instance spawned successfully.
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.163 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.183 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.188 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.190 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.191 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.191 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.191 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.192 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.192 226109 DEBUG nova.virt.libvirt.driver [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.220 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.220 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008805.1481054, 9f03ad4a-9649-4be1-8f8c-152f8374111c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.220 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] VM Paused (Lifecycle Event)
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.268 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.271 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765008805.1594326, 9f03ad4a-9649-4be1-8f8c-152f8374111c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.271 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] VM Resumed (Lifecycle Event)
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.283 226109 INFO nova.compute.manager [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Took 11.55 seconds to spawn the instance on the hypervisor.
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.283 226109 DEBUG nova.compute.manager [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.291 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.293 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.327 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.359 226109 INFO nova.compute.manager [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Took 14.20 seconds to build instance.
Dec 06 08:13:25 compute-1 nova_compute[226101]: 2025-12-06 08:13:25.378 226109 DEBUG oslo_concurrency.lockutils [None req-b71701cf-c063-45f9-88d5-92e031a9aeac d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:25 compute-1 podman[308952]: 2025-12-06 08:13:25.330370782 +0000 UTC m=+0.038990666 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:13:26 compute-1 nova_compute[226101]: 2025-12-06 08:13:26.684 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:26.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:26.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:27 compute-1 nova_compute[226101]: 2025-12-06 08:13:27.088 226109 DEBUG nova.compute.manager [req-f4a695c7-17bf-4420-99b7-c131910dee2e req-43d47237-ca1d-49df-ba8d-0ea48a467db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Received event network-vif-plugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:27 compute-1 nova_compute[226101]: 2025-12-06 08:13:27.088 226109 DEBUG oslo_concurrency.lockutils [req-f4a695c7-17bf-4420-99b7-c131910dee2e req-43d47237-ca1d-49df-ba8d-0ea48a467db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:27 compute-1 nova_compute[226101]: 2025-12-06 08:13:27.089 226109 DEBUG oslo_concurrency.lockutils [req-f4a695c7-17bf-4420-99b7-c131910dee2e req-43d47237-ca1d-49df-ba8d-0ea48a467db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:27 compute-1 nova_compute[226101]: 2025-12-06 08:13:27.089 226109 DEBUG oslo_concurrency.lockutils [req-f4a695c7-17bf-4420-99b7-c131910dee2e req-43d47237-ca1d-49df-ba8d-0ea48a467db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:27 compute-1 nova_compute[226101]: 2025-12-06 08:13:27.089 226109 DEBUG nova.compute.manager [req-f4a695c7-17bf-4420-99b7-c131910dee2e req-43d47237-ca1d-49df-ba8d-0ea48a467db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] No waiting events found dispatching network-vif-plugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:13:27 compute-1 nova_compute[226101]: 2025-12-06 08:13:27.089 226109 WARNING nova.compute.manager [req-f4a695c7-17bf-4420-99b7-c131910dee2e req-43d47237-ca1d-49df-ba8d-0ea48a467db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Received unexpected event network-vif-plugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 for instance with vm_state active and task_state None.
Dec 06 08:13:27 compute-1 nova_compute[226101]: 2025-12-06 08:13:27.103 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:27 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:27.255 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '92'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:13:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:28.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:28.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:30.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:30.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:31 compute-1 sshd-session[308994]: Received disconnect from 154.219.116.39 port 53704:11: Bye Bye [preauth]
Dec 06 08:13:31 compute-1 sshd-session[308994]: Disconnected from authenticating user root 154.219.116.39 port 53704 [preauth]
Dec 06 08:13:31 compute-1 nova_compute[226101]: 2025-12-06 08:13:31.686 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:32 compute-1 nova_compute[226101]: 2025-12-06 08:13:32.106 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:32.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:32.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:33 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 08:13:33 compute-1 nova_compute[226101]: 2025-12-06 08:13:33.625 226109 DEBUG nova.compute.manager [req-c981faf2-0573-4a08-a2be-c158cf27ad4e req-2de7b000-10b6-4ca6-b387-79bf7d196a83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Received event network-changed-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:33 compute-1 nova_compute[226101]: 2025-12-06 08:13:33.626 226109 DEBUG nova.compute.manager [req-c981faf2-0573-4a08-a2be-c158cf27ad4e req-2de7b000-10b6-4ca6-b387-79bf7d196a83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Refreshing instance network info cache due to event network-changed-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:13:33 compute-1 nova_compute[226101]: 2025-12-06 08:13:33.626 226109 DEBUG oslo_concurrency.lockutils [req-c981faf2-0573-4a08-a2be-c158cf27ad4e req-2de7b000-10b6-4ca6-b387-79bf7d196a83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:13:33 compute-1 nova_compute[226101]: 2025-12-06 08:13:33.626 226109 DEBUG oslo_concurrency.lockutils [req-c981faf2-0573-4a08-a2be-c158cf27ad4e req-2de7b000-10b6-4ca6-b387-79bf7d196a83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:13:33 compute-1 nova_compute[226101]: 2025-12-06 08:13:33.626 226109 DEBUG nova.network.neutron [req-c981faf2-0573-4a08-a2be-c158cf27ad4e req-2de7b000-10b6-4ca6-b387-79bf7d196a83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Refreshing network info cache for port 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:13:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:34.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:34.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:35 compute-1 nova_compute[226101]: 2025-12-06 08:13:35.403 226109 DEBUG nova.network.neutron [req-c981faf2-0573-4a08-a2be-c158cf27ad4e req-2de7b000-10b6-4ca6-b387-79bf7d196a83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Updated VIF entry in instance network info cache for port 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:13:35 compute-1 nova_compute[226101]: 2025-12-06 08:13:35.404 226109 DEBUG nova.network.neutron [req-c981faf2-0573-4a08-a2be-c158cf27ad4e req-2de7b000-10b6-4ca6-b387-79bf7d196a83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Updating instance_info_cache with network_info: [{"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:13:35 compute-1 nova_compute[226101]: 2025-12-06 08:13:35.629 226109 DEBUG oslo_concurrency.lockutils [req-c981faf2-0573-4a08-a2be-c158cf27ad4e req-2de7b000-10b6-4ca6-b387-79bf7d196a83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:13:36 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.120208740s, txc = 0x55b55279c000
Dec 06 08:13:36 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 10.120072365s
Dec 06 08:13:36 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 10.120072365s
Dec 06 08:13:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).paxos(paxos updating c 6778..7443) lease_timeout -- calling new election
Dec 06 08:13:36 compute-1 ceph-mon[81689]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Dec 06 08:13:36 compute-1 ceph-mon[81689]: paxos.2).electionLogic(62) init, last seen epoch 62
Dec 06 08:13:36 compute-1 podman[308965]: 2025-12-06 08:13:36.471409453 +0000 UTC m=+10.442442905 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:13:36 compute-1 podman[308964]: 2025-12-06 08:13:36.483919918 +0000 UTC m=+10.463197231 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 06 08:13:36 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 08:13:36 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.509048462s, txc = 0x55b554619500
Dec 06 08:13:36 compute-1 podman[308952]: 2025-12-06 08:13:36.655404596 +0000 UTC m=+11.364024470 container create 87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 08:13:36 compute-1 nova_compute[226101]: 2025-12-06 08:13:36.689 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:36.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:36 compute-1 podman[308966]: 2025-12-06 08:13:36.863392451 +0000 UTC m=+10.831043002 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true)
Dec 06 08:13:36 compute-1 systemd[1]: Started libpod-conmon-87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af.scope.
Dec 06 08:13:36 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:13:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:36.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512765d0ba2c371049a760b3b8853e9f53b26d4c60215a47814040d4886028bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:37 compute-1 nova_compute[226101]: 2025-12-06 08:13:37.159 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:37 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 08:13:38 compute-1 podman[308952]: 2025-12-06 08:13:38.047080512 +0000 UTC m=+12.755700416 container init 87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 08:13:38 compute-1 podman[308952]: 2025-12-06 08:13:38.052350924 +0000 UTC m=+12.760970798 container start 87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 08:13:38 compute-1 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[309032]: [NOTICE]   (309036) : New worker (309038) forked
Dec 06 08:13:38 compute-1 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[309032]: [NOTICE]   (309036) : Loading success.
Dec 06 08:13:38 compute-1 nova_compute[226101]: 2025-12-06 08:13:38.633 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:38 compute-1 nova_compute[226101]: 2025-12-06 08:13:38.656 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:38 compute-1 nova_compute[226101]: 2025-12-06 08:13:38.656 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:38 compute-1 nova_compute[226101]: 2025-12-06 08:13:38.656 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:38 compute-1 nova_compute[226101]: 2025-12-06 08:13:38.656 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:13:38 compute-1 nova_compute[226101]: 2025-12-06 08:13:38.657 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:13:38 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:38.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:38.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:39 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:39 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:40 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:40 compute-1 sshd-session[309058]: Received disconnect from 124.18.141.70 port 58908:11: Bye Bye [preauth]
Dec 06 08:13:40 compute-1 sshd-session[309058]: Disconnected from authenticating user root 124.18.141.70 port 58908 [preauth]
Dec 06 08:13:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:40.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:40.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:41 compute-1 ceph-mds[83729]: mds.beacon.cephfs.compute-1.vsxbzt missed beacon ack from the monitors
Dec 06 08:13:41 compute-1 sudo[309060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:41 compute-1 sudo[309060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:41 compute-1 sudo[309060]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:41 compute-1 ceph-mon[81689]: paxos.2).electionLogic(63) init, last seen epoch 63, mid-election, bumping
Dec 06 08:13:41 compute-1 sudo[309085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:13:41 compute-1 sudo[309085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:41 compute-1 sudo[309085]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:41 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 08:13:41 compute-1 nova_compute[226101]: 2025-12-06 08:13:41.723 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:41 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 08:13:41 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:42 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:42 compute-1 nova_compute[226101]: 2025-12-06 08:13:42.160 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:42 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:42 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 08:13:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:42.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:42.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:43 compute-1 ovn_controller[130279]: 2025-12-06T08:13:43Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:49:a9 10.100.0.7
Dec 06 08:13:43 compute-1 ovn_controller[130279]: 2025-12-06T08:13:43Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:49:a9 10.100.0.7
Dec 06 08:13:43 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:43 compute-1 ceph-mon[81689]: mon.compute-1@2(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:43 compute-1 nova_compute[226101]: 2025-12-06 08:13:43.530 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.873s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:13:43 compute-1 nova_compute[226101]: 2025-12-06 08:13:43.610 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:13:43 compute-1 nova_compute[226101]: 2025-12-06 08:13:43.610 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:13:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 08:13:43 compute-1 ceph-mon[81689]: mon.compute-1 calling monitor election
Dec 06 08:13:43 compute-1 ceph-mon[81689]: mon.compute-0 calling monitor election
Dec 06 08:13:43 compute-1 ceph-mon[81689]: mon.compute-2 calling monitor election
Dec 06 08:13:43 compute-1 ceph-mon[81689]: pgmap v3632: 305 pgs: 305 active+clean; 268 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 91 op/s
Dec 06 08:13:43 compute-1 ceph-mon[81689]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 08:13:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/138621338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:43 compute-1 ceph-mon[81689]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 08:13:43 compute-1 ceph-mon[81689]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 08:13:43 compute-1 ceph-mon[81689]: osdmap e416: 3 total, 3 up, 3 in
Dec 06 08:13:43 compute-1 ceph-mon[81689]: mgrmap e11: compute-0.sfzyix(active, since 107m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 08:13:43 compute-1 ceph-mon[81689]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 08:13:43 compute-1 ceph-mon[81689]: Cluster is now healthy
Dec 06 08:13:43 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 08:13:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3746927814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:43 compute-1 nova_compute[226101]: 2025-12-06 08:13:43.789 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:13:43 compute-1 nova_compute[226101]: 2025-12-06 08:13:43.793 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4062MB free_disk=20.89928436279297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:13:43 compute-1 nova_compute[226101]: 2025-12-06 08:13:43.794 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:43 compute-1 nova_compute[226101]: 2025-12-06 08:13:43.794 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:43 compute-1 nova_compute[226101]: 2025-12-06 08:13:43.939 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 9f03ad4a-9649-4be1-8f8c-152f8374111c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:13:43 compute-1 nova_compute[226101]: 2025-12-06 08:13:43.940 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:13:43 compute-1 nova_compute[226101]: 2025-12-06 08:13:43.940 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:13:43 compute-1 nova_compute[226101]: 2025-12-06 08:13:43.979 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:13:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:13:44 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3413032714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:44 compute-1 nova_compute[226101]: 2025-12-06 08:13:44.407 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:13:44 compute-1 nova_compute[226101]: 2025-12-06 08:13:44.415 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:13:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:44.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:44.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:45 compute-1 nova_compute[226101]: 2025-12-06 08:13:45.109 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:13:45 compute-1 nova_compute[226101]: 2025-12-06 08:13:45.381 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:13:45 compute-1 nova_compute[226101]: 2025-12-06 08:13:45.382 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:45 compute-1 ceph-mon[81689]: pgmap v3633: 305 pgs: 305 active+clean; 268 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 2.0 MiB/s wr, 28 op/s
Dec 06 08:13:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3413032714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:46 compute-1 nova_compute[226101]: 2025-12-06 08:13:46.725 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:46.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:46.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:47 compute-1 nova_compute[226101]: 2025-12-06 08:13:47.220 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:47 compute-1 nova_compute[226101]: 2025-12-06 08:13:47.339 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:47 compute-1 nova_compute[226101]: 2025-12-06 08:13:47.339 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:13:47 compute-1 nova_compute[226101]: 2025-12-06 08:13:47.339 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:13:48 compute-1 ceph-mon[81689]: pgmap v3634: 305 pgs: 305 active+clean; 272 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 194 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Dec 06 08:13:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3656757550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:48 compute-1 sshd-session[309144]: Received disconnect from 91.144.158.231 port 37014:11: Bye Bye [preauth]
Dec 06 08:13:48 compute-1 sshd-session[309144]: Disconnected from authenticating user root 91.144.158.231 port 37014 [preauth]
Dec 06 08:13:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:48.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:48.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:48 compute-1 ceph-mon[81689]: pgmap v3635: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 241 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Dec 06 08:13:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3586938534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:48 compute-1 nova_compute[226101]: 2025-12-06 08:13:48.959 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:13:48 compute-1 nova_compute[226101]: 2025-12-06 08:13:48.959 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:13:48 compute-1 nova_compute[226101]: 2025-12-06 08:13:48.959 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:13:48 compute-1 nova_compute[226101]: 2025-12-06 08:13:48.959 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9f03ad4a-9649-4be1-8f8c-152f8374111c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:13:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:50.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:50.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:51 compute-1 ceph-mon[81689]: pgmap v3636: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 241 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Dec 06 08:13:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:51 compute-1 nova_compute[226101]: 2025-12-06 08:13:51.728 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:52 compute-1 nova_compute[226101]: 2025-12-06 08:13:52.223 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:52.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:52.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:53 compute-1 ceph-mon[81689]: pgmap v3637: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 06 08:13:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2634793710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:53 compute-1 nova_compute[226101]: 2025-12-06 08:13:53.986 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Updating instance_info_cache with network_info: [{"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:13:53 compute-1 sshd-session[309148]: Received disconnect from 186.96.151.198 port 47766:11: Bye Bye [preauth]
Dec 06 08:13:53 compute-1 sshd-session[309148]: Disconnected from authenticating user root 186.96.151.198 port 47766 [preauth]
Dec 06 08:13:54 compute-1 nova_compute[226101]: 2025-12-06 08:13:54.016 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:13:54 compute-1 nova_compute[226101]: 2025-12-06 08:13:54.017 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:13:54 compute-1 nova_compute[226101]: 2025-12-06 08:13:54.017 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:54 compute-1 nova_compute[226101]: 2025-12-06 08:13:54.017 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:54 compute-1 nova_compute[226101]: 2025-12-06 08:13:54.018 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:13:54 compute-1 nova_compute[226101]: 2025-12-06 08:13:54.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:54 compute-1 ovn_controller[130279]: 2025-12-06T08:13:54Z|00825|binding|INFO|Releasing lport df0319da-86fa-419b-bb2d-0ca654179487 from this chassis (sb_readonly=0)
Dec 06 08:13:54 compute-1 ceph-mon[81689]: pgmap v3638: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 262 KiB/s rd, 97 KiB/s wr, 39 op/s
Dec 06 08:13:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2605282351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:54 compute-1 nova_compute[226101]: 2025-12-06 08:13:54.678 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:54.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:54.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:55 compute-1 sshd-session[309146]: Connection closed by 107.150.106.178 port 58686 [preauth]
Dec 06 08:13:55 compute-1 nova_compute[226101]: 2025-12-06 08:13:55.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.451 226109 DEBUG nova.compute.manager [req-f9ac1e29-7cca-4844-a128-404b9a59465d req-2b6ec1fd-a5e6-42c7-8ba2-bcd08e2fff34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Received event network-changed-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.452 226109 DEBUG nova.compute.manager [req-f9ac1e29-7cca-4844-a128-404b9a59465d req-2b6ec1fd-a5e6-42c7-8ba2-bcd08e2fff34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Refreshing instance network info cache due to event network-changed-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.452 226109 DEBUG oslo_concurrency.lockutils [req-f9ac1e29-7cca-4844-a128-404b9a59465d req-2b6ec1fd-a5e6-42c7-8ba2-bcd08e2fff34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.452 226109 DEBUG oslo_concurrency.lockutils [req-f9ac1e29-7cca-4844-a128-404b9a59465d req-2b6ec1fd-a5e6-42c7-8ba2-bcd08e2fff34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.453 226109 DEBUG nova.network.neutron [req-f9ac1e29-7cca-4844-a128-404b9a59465d req-2b6ec1fd-a5e6-42c7-8ba2-bcd08e2fff34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Refreshing network info cache for port 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:13:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.526 226109 DEBUG oslo_concurrency.lockutils [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "9f03ad4a-9649-4be1-8f8c-152f8374111c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.527 226109 DEBUG oslo_concurrency.lockutils [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.527 226109 DEBUG oslo_concurrency.lockutils [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.527 226109 DEBUG oslo_concurrency.lockutils [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.528 226109 DEBUG oslo_concurrency.lockutils [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
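The "Acquiring lock ... / Lock ... acquired ... / Lock ... released" triplets above are oslo.concurrency's lockutils serializing work per instance UUID. A minimal sketch of the same pattern, assuming default in-process locks rather than Nova's actual wiring:

```python
# Sketch of the oslo.concurrency pattern behind the DEBUG lines above
# (assumed illustration, not Nova's code).
from oslo_concurrency import lockutils

INSTANCE_UUID = "9f03ad4a-9649-4be1-8f8c-152f8374111c"  # from the log

@lockutils.synchronized(INSTANCE_UUID)
def do_terminate_instance():
    # Runs under the per-instance lock, so concurrent operations on the
    # same instance are serialized; lockutils logs acquire/release at DEBUG.
    pass

# Context-manager form, as used for the "<uuid>-events" lock:
with lockutils.lock(INSTANCE_UUID + "-events"):
    pass  # clear-events critical section
```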
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.529 226109 INFO nova.compute.manager [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Terminating instance
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.530 226109 DEBUG nova.compute.manager [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:13:56 compute-1 kernel: tap4caeb5ec-15 (unregistering): left promiscuous mode
Dec 06 08:13:56 compute-1 NetworkManager[49031]: <info>  [1765008836.5883] device (tap4caeb5ec-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:13:56 compute-1 ovn_controller[130279]: 2025-12-06T08:13:56Z|00826|binding|INFO|Releasing lport 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 from this chassis (sb_readonly=0)
Dec 06 08:13:56 compute-1 ovn_controller[130279]: 2025-12-06T08:13:56Z|00827|binding|INFO|Setting lport 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 down in Southbound
Dec 06 08:13:56 compute-1 ovn_controller[130279]: 2025-12-06T08:13:56Z|00828|binding|INFO|Removing iface tap4caeb5ec-15 ovn-installed in OVS
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.651 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.661 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:49:a9 10.100.0.7'], port_security=['fa:16:3e:19:49:a9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9f03ad4a-9649-4be1-8f8c-152f8374111c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9489f490-c39b-4337-88af-2235c40dec3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c607db35-6f3d-4821-9124-a70a0a233535, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=4caeb5ec-155f-4b7a-8ac3-9a7c936677a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
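The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's event dispatcher matching a Port_Binding row update against a registered RowEvent. A sketch of such a handler, assuming a simplified subclass rather than neutron's real one:

```python
# Assumed sketch of an ovsdbapp row-event handler like the matched
# PortBindingUpdatedEvent above (neutron's real class carries more logic).
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # (events, table, conditions) -- mirrors the matched event's repr
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
        self.event_name = "PortBindingUpdatedEvent"

    def run(self, event, row, old):
        # Invoked when the IDL sees chassis/up change on a Port_Binding
        # row; the agent then (un)provisions the metadata namespace for
        # the row's datapath.
        print("port", row.logical_port, "updated")
```

Handlers like this are registered on the IDL's notify handler; the "Matched UPDATE" DEBUG line is the dispatcher confirming the match before calling run().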
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.663 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.664 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 in datapath 1bf97b73-354e-4df7-9a72-727cdc64dc43 unbound from our chassis
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.666 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1bf97b73-354e-4df7-9a72-727cdc64dc43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.667 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2da51115-631c-4185-ab76-920afb11a94c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.668 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43 namespace which is not needed anymore
Dec 06 08:13:56 compute-1 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000c9.scope: Deactivated successfully.
Dec 06 08:13:56 compute-1 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000c9.scope: Consumed 13.969s CPU time.
Dec 06 08:13:56 compute-1 systemd-machined[190302]: Machine qemu-94-instance-000000c9 terminated.
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.730 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.752 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.756 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.764 226109 INFO nova.virt.libvirt.driver [-] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Instance destroyed successfully.
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.764 226109 DEBUG nova.objects.instance [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 9f03ad4a-9649-4be1-8f8c-152f8374111c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.779 226109 DEBUG nova.virt.libvirt.vif [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:13:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-566468355',display_name='tempest-TestNetworkBasicOps-server-566468355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-566468355',id=201,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8eLwuyssfzJT4ThZFVtdP/zO6VJa7ZQsEEc42HZazmyuqIPh0yO+tAD/ScyUHxO85nwLHXe57Eu+8dZCRnJgpRQLArfNfUR6FZjuSX2LxTa/bT7wDV93tmmNkzvX9xbA==',key_name='tempest-TestNetworkBasicOps-1994829414',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:13:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-hlevtqdz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:13:25Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=9f03ad4a-9649-4be1-8f8c-152f8374111c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.779 226109 DEBUG nova.network.os_vif_util [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.780 226109 DEBUG nova.network.os_vif_util [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:49:a9,bridge_name='br-int',has_traffic_filtering=True,id=4caeb5ec-155f-4b7a-8ac3-9a7c936677a6,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4caeb5ec-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.780 226109 DEBUG os_vif [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:49:a9,bridge_name='br-int',has_traffic_filtering=True,id=4caeb5ec-155f-4b7a-8ac3-9a7c936677a6,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4caeb5ec-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.782 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4caeb5ec-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.785 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.789 226109 INFO os_vif [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:49:a9,bridge_name='br-int',has_traffic_filtering=True,id=4caeb5ec-155f-4b7a-8ac3-9a7c936677a6,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4caeb5ec-15')
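The DelPortCommand transaction above removes the instance's tap device from br-int. A minimal standalone equivalent (assumed) via ovsdbapp's public Open_vSwitch API; the socket path is the usual default and an assumption here:

```python
# Assumed standalone equivalent of the logged DelPortCommand transaction.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVS_ENDPOINT = "unix:/run/openvswitch/db.sock"  # assumption

idl = connection.OvsdbIdl.from_server(OVS_ENDPOINT, "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Same semantics as the logged command: a no-op if the port is gone.
api.del_port("tap4caeb5ec-15", bridge="br-int", if_exists=True).execute(
    check_error=True)
```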
Dec 06 08:13:56 compute-1 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[309032]: [NOTICE]   (309036) : haproxy version is 2.8.14-c23fe91
Dec 06 08:13:56 compute-1 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[309032]: [NOTICE]   (309036) : path to executable is /usr/sbin/haproxy
Dec 06 08:13:56 compute-1 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[309032]: [WARNING]  (309036) : Exiting Master process...
Dec 06 08:13:56 compute-1 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[309032]: [ALERT]    (309036) : Current worker (309038) exited with code 143 (Terminated)
Dec 06 08:13:56 compute-1 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[309032]: [WARNING]  (309036) : All workers exited. Exiting... (0)
Dec 06 08:13:56 compute-1 systemd[1]: libpod-87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af.scope: Deactivated successfully.
Dec 06 08:13:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:56 compute-1 podman[309177]: 2025-12-06 08:13:56.828678966 +0000 UTC m=+0.059054814 container died 87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:13:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:56.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-512765d0ba2c371049a760b3b8853e9f53b26d4c60215a47814040d4886028bf-merged.mount: Deactivated successfully.
Dec 06 08:13:56 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af-userdata-shm.mount: Deactivated successfully.
Dec 06 08:13:56 compute-1 podman[309177]: 2025-12-06 08:13:56.873900138 +0000 UTC m=+0.104275986 container cleanup 87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 08:13:56 compute-1 systemd[1]: libpod-conmon-87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af.scope: Deactivated successfully.
Dec 06 08:13:56 compute-1 podman[309230]: 2025-12-06 08:13:56.939771694 +0000 UTC m=+0.044512415 container remove 87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.946 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[26e6befb-3b5f-469a-9699-4ac905126cc9]: (4, ('Sat Dec  6 08:13:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43 (87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af)\n87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af\nSat Dec  6 08:13:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43 (87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af)\n87959a2259c90db4e958b7ec70d85d7f3864f59af9c1bffdf88012a69bec68af\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
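The privsep reply above captures the stdout of neutron's kill script stopping and deleting the per-network haproxy container. A sketch of the equivalent podman calls (container name taken from the log; doing it via subprocess is an assumption):

```python
# Assumed sketch of the teardown captured in the privsep reply above:
# stop the per-network haproxy container (SIGTERM => haproxy master exits,
# worker dies with 143 = 128 + SIGTERM, as logged), then delete it.
import subprocess

NAME = "neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43"

subprocess.run(["podman", "stop", NAME], check=True)
subprocess.run(["podman", "rm", NAME], check=True)
```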
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.947 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[787617a2-0d61-47de-ba27-92f2f5cec8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.948 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bf97b73-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.950 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:56 compute-1 kernel: tap1bf97b73-30: left promiscuous mode
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.955 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4506fd5c-8b0b-45a6-8869-791526fce295]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:56 compute-1 nova_compute[226101]: 2025-12-06 08:13:56.965 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:56.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:56 compute-1 ceph-mon[81689]: pgmap v3639: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 267 KiB/s rd, 100 KiB/s wr, 40 op/s
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.969 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4494c70b-5f1f-4c31-a0e3-c6413ef99fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.971 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[36712644-7e6f-4ba6-b3c5-a157ab1807c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.992 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9d08d97d-08fb-4bb1-8b86-bc7d275979a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 908423, 'reachable_time': 42302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309245, 'error': None, 'target': 'ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.995 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:13:56 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:13:56.995 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[80370752-5cd6-46ed-aac2-eeacf2a78a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:13:56 compute-1 systemd[1]: run-netns-ovnmeta\x2d1bf97b73\x2d354e\x2d4df7\x2d9a72\x2d727cdc64dc43.mount: Deactivated successfully.
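The remove_netns call above deletes the now-empty ovnmeta namespace; neutron routes it through oslo.privsep. A minimal standalone equivalent, assuming direct root privileges and pyroute2:

```python
# Minimal standalone equivalent (assumed) of the logged remove_netns call;
# neutron runs this under oslo.privsep rather than directly as root.
import errno
from pyroute2 import netns

NS = "ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43"  # from the log

try:
    netns.remove(NS)  # unlinks /run/netns/<NS>; requires CAP_SYS_ADMIN
except OSError as exc:
    if exc.errno != errno.ENOENT:
        raise  # an already-missing namespace is fine, mirroring the agent
```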
Dec 06 08:13:57 compute-1 nova_compute[226101]: 2025-12-06 08:13:57.225 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:57 compute-1 nova_compute[226101]: 2025-12-06 08:13:57.405 226109 INFO nova.virt.libvirt.driver [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Deleting instance files /var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c_del
Dec 06 08:13:57 compute-1 nova_compute[226101]: 2025-12-06 08:13:57.406 226109 INFO nova.virt.libvirt.driver [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Deletion of /var/lib/nova/instances/9f03ad4a-9649-4be1-8f8c-152f8374111c_del complete
Dec 06 08:13:57 compute-1 nova_compute[226101]: 2025-12-06 08:13:57.464 226109 INFO nova.compute.manager [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Took 0.93 seconds to destroy the instance on the hypervisor.
Dec 06 08:13:57 compute-1 nova_compute[226101]: 2025-12-06 08:13:57.465 226109 DEBUG oslo.service.loopingcall [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:13:57 compute-1 nova_compute[226101]: 2025-12-06 08:13:57.465 226109 DEBUG nova.compute.manager [-] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:13:57 compute-1 nova_compute[226101]: 2025-12-06 08:13:57.466 226109 DEBUG nova.network.neutron [-] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:13:58 compute-1 sshd-session[309247]: Received disconnect from 106.51.92.114 port 44873:11: Bye Bye [preauth]
Dec 06 08:13:58 compute-1 sshd-session[309247]: Disconnected from authenticating user root 106.51.92.114 port 44873 [preauth]
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.547 226109 DEBUG nova.compute.manager [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Received event network-vif-unplugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.548 226109 DEBUG oslo_concurrency.lockutils [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.549 226109 DEBUG oslo_concurrency.lockutils [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.549 226109 DEBUG oslo_concurrency.lockutils [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.549 226109 DEBUG nova.compute.manager [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] No waiting events found dispatching network-vif-unplugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.549 226109 DEBUG nova.compute.manager [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Received event network-vif-unplugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.550 226109 DEBUG nova.compute.manager [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Received event network-vif-plugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.550 226109 DEBUG oslo_concurrency.lockutils [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.550 226109 DEBUG oslo_concurrency.lockutils [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.550 226109 DEBUG oslo_concurrency.lockutils [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.551 226109 DEBUG nova.compute.manager [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] No waiting events found dispatching network-vif-plugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:13:58 compute-1 nova_compute[226101]: 2025-12-06 08:13:58.551 226109 WARNING nova.compute.manager [req-9c31bdd7-cc0e-49c3-ab9b-de8d19ff2b6a req-cdcf3199-274a-4ffa-be90-1f0de144a0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Received unexpected event network-vif-plugged-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 for instance with vm_state active and task_state deleting.
Dec 06 08:13:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:58.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:13:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:58.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:58 compute-1 ceph-mon[81689]: pgmap v3640: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 79 KiB/s wr, 24 op/s
Dec 06 08:13:59 compute-1 nova_compute[226101]: 2025-12-06 08:13:59.047 226109 DEBUG nova.network.neutron [req-f9ac1e29-7cca-4844-a128-404b9a59465d req-2b6ec1fd-a5e6-42c7-8ba2-bcd08e2fff34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Updated VIF entry in instance network info cache for port 4caeb5ec-155f-4b7a-8ac3-9a7c936677a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:13:59 compute-1 nova_compute[226101]: 2025-12-06 08:13:59.047 226109 DEBUG nova.network.neutron [req-f9ac1e29-7cca-4844-a128-404b9a59465d req-2b6ec1fd-a5e6-42c7-8ba2-bcd08e2fff34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Updating instance_info_cache with network_info: [{"id": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "address": "fa:16:3e:19:49:a9", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4caeb5ec-15", "ovs_interfaceid": "4caeb5ec-155f-4b7a-8ac3-9a7c936677a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:13:59 compute-1 nova_compute[226101]: 2025-12-06 08:13:59.070 226109 DEBUG oslo_concurrency.lockutils [req-f9ac1e29-7cca-4844-a128-404b9a59465d req-2b6ec1fd-a5e6-42c7-8ba2-bcd08e2fff34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9f03ad4a-9649-4be1-8f8c-152f8374111c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:13:59 compute-1 nova_compute[226101]: 2025-12-06 08:13:59.492 226109 DEBUG nova.network.neutron [-] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:13:59 compute-1 nova_compute[226101]: 2025-12-06 08:13:59.519 226109 INFO nova.compute.manager [-] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Took 2.05 seconds to deallocate network for instance.
Dec 06 08:13:59 compute-1 nova_compute[226101]: 2025-12-06 08:13:59.568 226109 DEBUG oslo_concurrency.lockutils [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:59 compute-1 nova_compute[226101]: 2025-12-06 08:13:59.569 226109 DEBUG oslo_concurrency.lockutils [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:59 compute-1 nova_compute[226101]: 2025-12-06 08:13:59.587 226109 DEBUG nova.compute.manager [req-3eeffa4f-6c00-42c6-b5a9-8dd1406284ba req-b15a5324-2770-4c02-a2ec-b7d8adbfbae8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Received event network-vif-deleted-4caeb5ec-155f-4b7a-8ac3-9a7c936677a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:59 compute-1 nova_compute[226101]: 2025-12-06 08:13:59.622 226109 DEBUG oslo_concurrency.processutils [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:14:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/980405708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:00 compute-1 nova_compute[226101]: 2025-12-06 08:14:00.069 226109 DEBUG oslo_concurrency.processutils [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
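The resource tracker shells out to `ceph df` (the Running cmd/CMD returned pair above) to size the shared RBD pool before reporting inventory. A sketch of that probe, assuming the same command line as logged and standard `ceph df --format=json` output keys:

```python
# Sketch of the logged "ceph df" probe via oslo.concurrency, as the
# DEBUG lines above show; JSON keys are standard ceph output (assumption).
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
stats = json.loads(out)
total_gb = stats["stats"]["total_bytes"] / 1024 ** 3
avail_gb = stats["stats"]["total_avail_bytes"] / 1024 ** 3
print(f"cluster: {avail_gb:.1f} GiB free of {total_gb:.1f} GiB")
```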
Dec 06 08:14:00 compute-1 nova_compute[226101]: 2025-12-06 08:14:00.075 226109 DEBUG nova.compute.provider_tree [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:14:00 compute-1 nova_compute[226101]: 2025-12-06 08:14:00.091 226109 DEBUG nova.scheduler.client.report [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
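The inventory dict above implies the schedulable capacity placement will use, via the standard formula capacity = (total - reserved) * allocation_ratio per resource class. Worked out:

```python
# Worked example: capacity implied by the inventory line above, using
# placement's standard (total - reserved) * allocation_ratio formula.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
```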
Dec 06 08:14:00 compute-1 nova_compute[226101]: 2025-12-06 08:14:00.117 226109 DEBUG oslo_concurrency.lockutils [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:00 compute-1 nova_compute[226101]: 2025-12-06 08:14:00.149 226109 INFO nova.scheduler.client.report [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 9f03ad4a-9649-4be1-8f8c-152f8374111c
Dec 06 08:14:00 compute-1 nova_compute[226101]: 2025-12-06 08:14:00.215 226109 DEBUG oslo_concurrency.lockutils [None req-809a0adb-b355-4a98-a67f-37c1fa5eb7f3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f03ad4a-9649-4be1-8f8c-152f8374111c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:00 compute-1 sshd-session[309249]: Received disconnect from 154.209.4.183 port 44618:11: Bye Bye [preauth]
Dec 06 08:14:00 compute-1 sshd-session[309249]: Disconnected from authenticating user root 154.209.4.183 port 44618 [preauth]
Dec 06 08:14:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:00.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:00.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:01 compute-1 ceph-mon[81689]: pgmap v3641: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 33 KiB/s wr, 12 op/s
Dec 06 08:14:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/980405708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:01 compute-1 nova_compute[226101]: 2025-12-06 08:14:01.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:01 compute-1 nova_compute[226101]: 2025-12-06 08:14:01.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
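The _check_instance_build_time and _poll_* entries above are oslo.service periodic tasks; run_periodic_tasks logs each invocation at DEBUG. A minimal sketch of the decorator pattern (the 60-second spacing is an assumption):

```python
# Assumed sketch of the oslo.service pattern behind the "Running periodic
# task ComputeManager._*" lines; spacing value is an assumption.
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)
    def _check_instance_build_time(self, context):
        pass  # each invocation is logged at DEBUG by run_periodic_tasks

# A service timer calls Manager().run_periodic_tasks(context) each cycle,
# producing one DEBUG line per due task, as seen above.
```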
Dec 06 08:14:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:14:01.692 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:14:01.693 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:14:01.693 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:01 compute-1 nova_compute[226101]: 2025-12-06 08:14:01.785 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:02 compute-1 nova_compute[226101]: 2025-12-06 08:14:02.251 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:02 compute-1 nova_compute[226101]: 2025-12-06 08:14:02.419 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:14:02.420 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=93, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=92) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:14:02 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:14:02.421 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:14:02 compute-1 nova_compute[226101]: 2025-12-06 08:14:02.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:02.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:02.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:03 compute-1 nova_compute[226101]: 2025-12-06 08:14:03.084 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:03 compute-1 ceph-mon[81689]: pgmap v3642: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 78 KiB/s rd, 35 KiB/s wr, 40 op/s
Dec 06 08:14:04 compute-1 ceph-mon[81689]: pgmap v3643: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 18 KiB/s wr, 29 op/s
Dec 06 08:14:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:04.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:04.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:06 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:14:06.423 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '93'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
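The DbSetCommand above is the metadata agent acknowledging nb_cfg 93 (from the SB_Global update at 08:14:02) by writing it into its Chassis_Private row after the logged 4-second delay. A sketch with ovsdbapp's southbound API; the endpoint is an assumption, and the real agent reuses its existing connection:

```python
# Assumed sketch of the logged Chassis_Private update; endpoint is a guess.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.ovn_southbound import impl_idl

SB_ENDPOINT = "tcp:192.168.122.100:6642"  # assumption
CHASSIS_PRIVATE = "03fe054d-d727-4af3-9c5e-92e57505f242"  # record from log

idl = connection.OvsdbIdl.from_server(SB_ENDPOINT, "OVN_Southbound")
api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))
api.db_set(
    "Chassis_Private", CHASSIS_PRIVATE,
    ("external_ids", {"neutron:ovn-metadata-sb-cfg": "93"}),
).execute(check_error=True)
```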
Dec 06 08:14:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:06 compute-1 nova_compute[226101]: 2025-12-06 08:14:06.826 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:06.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:06.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:07 compute-1 podman[309275]: 2025-12-06 08:14:07.087824044 +0000 UTC m=+0.066849704 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:14:07 compute-1 podman[309276]: 2025-12-06 08:14:07.102280121 +0000 UTC m=+0.072520965 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:14:07 compute-1 ceph-mon[81689]: pgmap v3644: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 19 KiB/s wr, 45 op/s
Dec 06 08:14:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1991368399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:07 compute-1 podman[309277]: 2025-12-06 08:14:07.133260101 +0000 UTC m=+0.102571460 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 08:14:07 compute-1 nova_compute[226101]: 2025-12-06 08:14:07.254 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:07 compute-1 nova_compute[226101]: 2025-12-06 08:14:07.653 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:08 compute-1 sshd-session[309273]: Received disconnect from 101.100.194.199 port 42674:11: Bye Bye [preauth]
Dec 06 08:14:08 compute-1 sshd-session[309273]: Disconnected from authenticating user root 101.100.194.199 port 42674 [preauth]
Dec 06 08:14:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:08.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:08.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:09 compute-1 ceph-mon[81689]: pgmap v3645: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 17 KiB/s wr, 55 op/s
Dec 06 08:14:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3462730774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:14:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3462730774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:14:10 compute-1 nova_compute[226101]: 2025-12-06 08:14:10.182 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:10 compute-1 nova_compute[226101]: 2025-12-06 08:14:10.271 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:10.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:10 compute-1 ceph-mon[81689]: pgmap v3646: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Dec 06 08:14:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:10.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:11 compute-1 nova_compute[226101]: 2025-12-06 08:14:11.762 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008836.7621784, 9f03ad4a-9649-4be1-8f8c-152f8374111c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:14:11 compute-1 nova_compute[226101]: 2025-12-06 08:14:11.763 226109 INFO nova.compute.manager [-] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] VM Stopped (Lifecycle Event)
Dec 06 08:14:11 compute-1 nova_compute[226101]: 2025-12-06 08:14:11.790 226109 DEBUG nova.compute.manager [None req-b1701a70-f4ad-4a68-b01f-9735c89ed3d5 - - - - - -] [instance: 9f03ad4a-9649-4be1-8f8c-152f8374111c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:14:11 compute-1 nova_compute[226101]: 2025-12-06 08:14:11.828 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/607142524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:12 compute-1 nova_compute[226101]: 2025-12-06 08:14:12.290 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:12.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:12.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:13 compute-1 ceph-mon[81689]: pgmap v3647: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Dec 06 08:14:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:14.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:14 compute-1 ceph-mon[81689]: pgmap v3648: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:14:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:14.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:15 compute-1 sshd-session[309339]: Received disconnect from 186.87.166.141 port 60318:11: Bye Bye [preauth]
Dec 06 08:14:15 compute-1 sshd-session[309339]: Disconnected from authenticating user root 186.87.166.141 port 60318 [preauth]
Dec 06 08:14:15 compute-1 sshd-session[309337]: Received disconnect from 45.120.216.232 port 53906:11: Bye Bye [preauth]
Dec 06 08:14:15 compute-1 sshd-session[309337]: Disconnected from authenticating user root 45.120.216.232 port 53906 [preauth]
Dec 06 08:14:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:16 compute-1 nova_compute[226101]: 2025-12-06 08:14:16.829 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:16.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:16.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:17 compute-1 nova_compute[226101]: 2025-12-06 08:14:17.291 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:17 compute-1 ceph-mon[81689]: pgmap v3649: 305 pgs: 305 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Dec 06 08:14:18 compute-1 nova_compute[226101]: 2025-12-06 08:14:18.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:18 compute-1 nova_compute[226101]: 2025-12-06 08:14:18.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:14:18 compute-1 nova_compute[226101]: 2025-12-06 08:14:18.605 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:14:18 compute-1 nova_compute[226101]: 2025-12-06 08:14:18.605 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:18.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3375879178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:14:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2525064073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:14:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:18.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:20.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:20 compute-1 ceph-mon[81689]: pgmap v3650: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 08:14:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:20.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:21 compute-1 nova_compute[226101]: 2025-12-06 08:14:21.833 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:22 compute-1 ceph-mon[81689]: pgmap v3651: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:14:22 compute-1 nova_compute[226101]: 2025-12-06 08:14:22.293 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:22.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:23.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:23 compute-1 ceph-mon[81689]: pgmap v3652: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:14:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:24.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:25.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:25 compute-1 sshd-session[309342]: Received disconnect from 14.103.75.9 port 22664:11: Bye Bye [preauth]
Dec 06 08:14:25 compute-1 sshd-session[309342]: Disconnected from authenticating user root 14.103.75.9 port 22664 [preauth]
Dec 06 08:14:25 compute-1 ceph-mon[81689]: pgmap v3653: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 06 08:14:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:26 compute-1 ceph-mon[81689]: pgmap v3654: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Dec 06 08:14:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:26.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:26 compute-1 nova_compute[226101]: 2025-12-06 08:14:26.884 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:27.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:27 compute-1 nova_compute[226101]: 2025-12-06 08:14:27.295 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:28.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:29.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:29 compute-1 nova_compute[226101]: 2025-12-06 08:14:29.609 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:29 compute-1 ceph-mon[81689]: pgmap v3655: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 355 KiB/s wr, 74 op/s
Dec 06 08:14:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:30.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:31.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:31 compute-1 nova_compute[226101]: 2025-12-06 08:14:31.886 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:31 compute-1 ceph-mon[81689]: pgmap v3656: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:14:32 compute-1 nova_compute[226101]: 2025-12-06 08:14:32.298 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:32 compute-1 ceph-mon[81689]: pgmap v3657: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:14:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:32.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:33.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:34.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:34 compute-1 ceph-mon[81689]: pgmap v3658: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 68 op/s
Dec 06 08:14:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:35.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3565198130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:36.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:36 compute-1 nova_compute[226101]: 2025-12-06 08:14:36.888 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:36 compute-1 ceph-mon[81689]: pgmap v3659: 305 pgs: 305 active+clean; 192 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 132 op/s
Dec 06 08:14:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:37.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:37 compute-1 nova_compute[226101]: 2025-12-06 08:14:37.300 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:38 compute-1 podman[309348]: 2025-12-06 08:14:38.069896557 +0000 UTC m=+0.052802627 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 08:14:38 compute-1 podman[309347]: 2025-12-06 08:14:38.079691829 +0000 UTC m=+0.064373686 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd)
Dec 06 08:14:38 compute-1 podman[309349]: 2025-12-06 08:14:38.162214632 +0000 UTC m=+0.142055200 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:14:38 compute-1 nova_compute[226101]: 2025-12-06 08:14:38.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:38 compute-1 nova_compute[226101]: 2025-12-06 08:14:38.620 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:38 compute-1 nova_compute[226101]: 2025-12-06 08:14:38.621 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:38 compute-1 nova_compute[226101]: 2025-12-06 08:14:38.621 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:38 compute-1 nova_compute[226101]: 2025-12-06 08:14:38.621 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:14:38 compute-1 nova_compute[226101]: 2025-12-06 08:14:38.622 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:38.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 08:14:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:39.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 08:14:39 compute-1 ceph-mon[81689]: pgmap v3660: 305 pgs: 305 active+clean; 211 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 684 KiB/s rd, 2.5 MiB/s wr, 83 op/s
Dec 06 08:14:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2016091100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:14:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/91966497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:14:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:14:39 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4210566326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:39 compute-1 nova_compute[226101]: 2025-12-06 08:14:39.107 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:39 compute-1 nova_compute[226101]: 2025-12-06 08:14:39.257 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:14:39 compute-1 nova_compute[226101]: 2025-12-06 08:14:39.258 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4288MB free_disk=20.938648223876953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:14:39 compute-1 nova_compute[226101]: 2025-12-06 08:14:39.258 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:39 compute-1 nova_compute[226101]: 2025-12-06 08:14:39.259 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:39 compute-1 nova_compute[226101]: 2025-12-06 08:14:39.527 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:14:39 compute-1 nova_compute[226101]: 2025-12-06 08:14:39.528 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:14:39 compute-1 nova_compute[226101]: 2025-12-06 08:14:39.558 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:39 compute-1 sshd-session[309344]: Connection closed by 165.154.55.146 port 33714 [preauth]
Dec 06 08:14:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:14:39 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4074947016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:39 compute-1 nova_compute[226101]: 2025-12-06 08:14:39.983 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:39 compute-1 nova_compute[226101]: 2025-12-06 08:14:39.988 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:14:40 compute-1 nova_compute[226101]: 2025-12-06 08:14:40.050 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:14:40 compute-1 nova_compute[226101]: 2025-12-06 08:14:40.108 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:14:40 compute-1 nova_compute[226101]: 2025-12-06 08:14:40.109 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4210566326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4074947016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:40 compute-1 nova_compute[226101]: 2025-12-06 08:14:40.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:40 compute-1 nova_compute[226101]: 2025-12-06 08:14:40.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:14:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:40.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:41.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:41 compute-1 ceph-mon[81689]: pgmap v3661: 305 pgs: 305 active+clean; 211 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Dec 06 08:14:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:41 compute-1 sudo[309455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:41 compute-1 sudo[309455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:41 compute-1 sudo[309455]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:41 compute-1 sudo[309480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:14:41 compute-1 sudo[309480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:41 compute-1 sudo[309480]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:41 compute-1 sudo[309505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:41 compute-1 sudo[309505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:41 compute-1 sudo[309505]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:41 compute-1 sudo[309530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:14:41 compute-1 sudo[309530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:41 compute-1 nova_compute[226101]: 2025-12-06 08:14:41.891 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:42 compute-1 sudo[309530]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:42 compute-1 nova_compute[226101]: 2025-12-06 08:14:42.301 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:42 compute-1 nova_compute[226101]: 2025-12-06 08:14:42.740 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:42 compute-1 nova_compute[226101]: 2025-12-06 08:14:42.740 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:14:42 compute-1 nova_compute[226101]: 2025-12-06 08:14:42.740 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:14:42 compute-1 nova_compute[226101]: 2025-12-06 08:14:42.756 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:14:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:42.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:43.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:43 compute-1 ceph-mon[81689]: pgmap v3662: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 06 08:14:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:14:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:14:43 compute-1 nova_compute[226101]: 2025-12-06 08:14:43.264 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:14:43.264 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=94, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=93) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:14:43 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:14:43.266 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:14:44 compute-1 sshd-session[309586]: Received disconnect from 154.219.116.39 port 59052:11: Bye Bye [preauth]
Dec 06 08:14:44 compute-1 sshd-session[309586]: Disconnected from authenticating user root 154.219.116.39 port 59052 [preauth]
Dec 06 08:14:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:44.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:44 compute-1 nova_compute[226101]: 2025-12-06 08:14:44.925 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:45.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:45 compute-1 ceph-mon[81689]: pgmap v3663: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 06 08:14:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:14:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:14:46 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:14:46.267 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '94'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:14:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:14:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:14:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:14:46 compute-1 ceph-mon[81689]: pgmap v3664: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Dec 06 08:14:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:46.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:46 compute-1 nova_compute[226101]: 2025-12-06 08:14:46.892 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:47.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:47 compute-1 nova_compute[226101]: 2025-12-06 08:14:47.303 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:48.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:49.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:49 compute-1 ceph-mon[81689]: pgmap v3665: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 100 op/s
Dec 06 08:14:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1832332214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1311401847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:50.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:51.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.212076) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891212110, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2397, "num_deletes": 252, "total_data_size": 5918978, "memory_usage": 5996784, "flush_reason": "Manual Compaction"}
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891243288, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 3875037, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77806, "largest_seqno": 80198, "table_properties": {"data_size": 3865185, "index_size": 6281, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20466, "raw_average_key_size": 20, "raw_value_size": 3845433, "raw_average_value_size": 3880, "num_data_blocks": 274, "num_entries": 991, "num_filter_entries": 991, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008661, "oldest_key_time": 1765008661, "file_creation_time": 1765008891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 31303 microseconds, and 7298 cpu microseconds.
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.243371) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 3875037 bytes OK
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.243399) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.245662) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.245692) EVENT_LOG_v1 {"time_micros": 1765008891245679, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.245720) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 5908427, prev total WAL file size 5908427, number of live WAL files 2.
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.248495) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(3784KB)], [159(11MB)]
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891248576, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 15871674, "oldest_snapshot_seqno": -1}
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 10878 keys, 13927170 bytes, temperature: kUnknown
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891394090, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 13927170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13857234, "index_size": 41663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27205, "raw_key_size": 287479, "raw_average_key_size": 26, "raw_value_size": 13667027, "raw_average_value_size": 1256, "num_data_blocks": 1582, "num_entries": 10878, "num_filter_entries": 10878, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765008891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:14:51 compute-1 sudo[309588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.394367) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 13927170 bytes
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.398609) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.0 rd, 95.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 11.4 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 11411, records dropped: 533 output_compression: NoCompression
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.398635) EVENT_LOG_v1 {"time_micros": 1765008891398623, "job": 102, "event": "compaction_finished", "compaction_time_micros": 145627, "compaction_time_cpu_micros": 44172, "output_level": 6, "num_output_files": 1, "total_output_size": 13927170, "num_input_records": 11411, "num_output_records": 10878, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891399791, "job": 102, "event": "table_file_deletion", "file_number": 161}
Dec 06 08:14:51 compute-1 ceph-mon[81689]: pgmap v3666: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 92 op/s
Dec 06 08:14:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:51 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:51 compute-1 sudo[309588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891402634, "job": 102, "event": "table_file_deletion", "file_number": 159}
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.248379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.402757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.402762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.402763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.402764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:14:51.402766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-1 sudo[309588]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:51 compute-1 sudo[309613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:14:51 compute-1 sudo[309613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:51 compute-1 sudo[309613]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:51 compute-1 nova_compute[226101]: 2025-12-06 08:14:51.927 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:52 compute-1 nova_compute[226101]: 2025-12-06 08:14:52.305 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:52 compute-1 nova_compute[226101]: 2025-12-06 08:14:52.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:52 compute-1 nova_compute[226101]: 2025-12-06 08:14:52.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:14:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:52.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:53 compute-1 nova_compute[226101]: 2025-12-06 08:14:53.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:53 compute-1 ceph-mon[81689]: pgmap v3667: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 97 op/s
Dec 06 08:14:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:54.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:55.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:55 compute-1 ceph-mon[81689]: pgmap v3668: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 79 op/s
Dec 06 08:14:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1644300321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:55 compute-1 nova_compute[226101]: 2025-12-06 08:14:55.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:56 compute-1 ovn_controller[130279]: 2025-12-06T08:14:56Z|00829|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 06 08:14:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1336330205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:56 compute-1 nova_compute[226101]: 2025-12-06 08:14:56.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:56.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:56 compute-1 nova_compute[226101]: 2025-12-06 08:14:56.930 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:57.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:57 compute-1 nova_compute[226101]: 2025-12-06 08:14:57.307 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:57 compute-1 ceph-mon[81689]: pgmap v3669: 305 pgs: 305 active+clean; 203 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 972 KiB/s wr, 131 op/s
Dec 06 08:14:58 compute-1 sshd-session[309638]: Received disconnect from 186.96.151.198 port 46398:11: Bye Bye [preauth]
Dec 06 08:14:58 compute-1 sshd-session[309638]: Disconnected from authenticating user root 186.96.151.198 port 46398 [preauth]
Dec 06 08:14:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1889324331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:58.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:14:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:59.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:59 compute-1 ceph-mon[81689]: pgmap v3670: 305 pgs: 305 active+clean; 181 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 1.2 MiB/s wr, 70 op/s
Dec 06 08:14:59 compute-1 sshd-session[309640]: Received disconnect from 124.18.141.70 port 34388:11: Bye Bye [preauth]
Dec 06 08:14:59 compute-1 sshd-session[309640]: Disconnected from authenticating user root 124.18.141.70 port 34388 [preauth]
Dec 06 08:15:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:00.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:01 compute-1 ceph-mon[81689]: pgmap v3671: 305 pgs: 305 active+clean; 181 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 95 KiB/s rd, 1.2 MiB/s wr, 60 op/s
Dec 06 08:15:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:01.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:01 compute-1 nova_compute[226101]: 2025-12-06 08:15:01.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:01 compute-1 nova_compute[226101]: 2025-12-06 08:15:01.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:15:01.694 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:15:01.695 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:15:01.695 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:01 compute-1 nova_compute[226101]: 2025-12-06 08:15:01.933 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:01 compute-1 sshd-session[309642]: Received disconnect from 136.112.8.45 port 57450:11: Bye Bye [preauth]
Dec 06 08:15:01 compute-1 sshd-session[309642]: Disconnected from authenticating user root 136.112.8.45 port 57450 [preauth]
Dec 06 08:15:02 compute-1 nova_compute[226101]: 2025-12-06 08:15:02.309 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:02 compute-1 nova_compute[226101]: 2025-12-06 08:15:02.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:02.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:03.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:03 compute-1 ceph-mon[81689]: pgmap v3672: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:15:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:04.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:05.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:05 compute-1 ceph-mon[81689]: pgmap v3673: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec 06 08:15:05 compute-1 ceph-mgr[82049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 08:15:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:06.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:06 compute-1 nova_compute[226101]: 2025-12-06 08:15:06.935 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:07.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:07 compute-1 nova_compute[226101]: 2025-12-06 08:15:07.364 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:07 compute-1 ceph-mon[81689]: pgmap v3674: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec 06 08:15:08 compute-1 ceph-mon[81689]: pgmap v3675: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 238 KiB/s rd, 1.2 MiB/s wr, 37 op/s
Dec 06 08:15:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:08.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:09.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:09 compute-1 podman[309645]: 2025-12-06 08:15:09.073152239 +0000 UTC m=+0.056048663 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:15:09 compute-1 podman[309644]: 2025-12-06 08:15:09.083292061 +0000 UTC m=+0.065612380 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 08:15:09 compute-1 podman[309646]: 2025-12-06 08:15:09.104473429 +0000 UTC m=+0.084165008 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:15:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1900472723' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:15:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1900472723' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:15:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:10.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:10 compute-1 ceph-mon[81689]: pgmap v3676: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 993 KiB/s wr, 34 op/s
Dec 06 08:15:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:11.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:11 compute-1 sshd-session[309707]: Received disconnect from 91.144.158.231 port 34922:11: Bye Bye [preauth]
Dec 06 08:15:11 compute-1 sshd-session[309707]: Disconnected from authenticating user root 91.144.158.231 port 34922 [preauth]
Dec 06 08:15:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:11 compute-1 nova_compute[226101]: 2025-12-06 08:15:11.937 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:12 compute-1 nova_compute[226101]: 2025-12-06 08:15:12.365 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:12.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:12 compute-1 ceph-mon[81689]: pgmap v3677: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 994 KiB/s wr, 61 op/s
Dec 06 08:15:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:13.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:14.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:15.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:15 compute-1 ceph-mon[81689]: pgmap v3678: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:15:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/349789118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:16 compute-1 sshd-session[309709]: Received disconnect from 101.100.194.199 port 35428:11: Bye Bye [preauth]
Dec 06 08:15:16 compute-1 sshd-session[309709]: Disconnected from authenticating user root 101.100.194.199 port 35428 [preauth]
Dec 06 08:15:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:16.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:16 compute-1 nova_compute[226101]: 2025-12-06 08:15:16.939 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:17.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:17 compute-1 nova_compute[226101]: 2025-12-06 08:15:17.367 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:17 compute-1 ceph-mon[81689]: pgmap v3679: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:15:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:18.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:19.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:19 compute-1 sshd-session[309712]: Received disconnect from 106.51.92.114 port 59659:11: Bye Bye [preauth]
Dec 06 08:15:19 compute-1 sshd-session[309712]: Disconnected from authenticating user root 106.51.92.114 port 59659 [preauth]
Dec 06 08:15:19 compute-1 ceph-mon[81689]: pgmap v3680: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:15:20 compute-1 ceph-mon[81689]: pgmap v3681: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:15:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:20.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:21.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:21 compute-1 nova_compute[226101]: 2025-12-06 08:15:21.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:15:22.219 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=95, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=94) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:15:22 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:15:22.220 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:15:22 compute-1 nova_compute[226101]: 2025-12-06 08:15:22.220 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:22 compute-1 nova_compute[226101]: 2025-12-06 08:15:22.370 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:15:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:22.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:15:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:23.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:23 compute-1 ceph-mon[81689]: pgmap v3682: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:15:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:24.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:25.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:25 compute-1 ceph-mon[81689]: pgmap v3683: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:26.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:27 compute-1 nova_compute[226101]: 2025-12-06 08:15:27.072 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:27.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:27 compute-1 ceph-mon[81689]: pgmap v3684: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:27 compute-1 nova_compute[226101]: 2025-12-06 08:15:27.371 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:28.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:29.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:29 compute-1 ceph-mon[81689]: pgmap v3685: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:30 compute-1 sshd-session[309714]: Received disconnect from 154.209.4.183 port 36472:11: Bye Bye [preauth]
Dec 06 08:15:30 compute-1 sshd-session[309714]: Disconnected from authenticating user root 154.209.4.183 port 36472 [preauth]
Dec 06 08:15:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:30.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:31.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:15:31.223 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '95'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:15:31 compute-1 ceph-mon[81689]: pgmap v3686: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:32 compute-1 nova_compute[226101]: 2025-12-06 08:15:32.074 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:32 compute-1 nova_compute[226101]: 2025-12-06 08:15:32.373 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:32 compute-1 ceph-mon[81689]: pgmap v3687: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:32.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:33.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:33 compute-1 sshd-session[309711]: error: kex_exchange_identification: read: Connection timed out
Dec 06 08:15:33 compute-1 sshd-session[309711]: banner exchange: Connection from 14.103.75.9 port 13980: Connection timed out
Dec 06 08:15:33 compute-1 sshd-session[309716]: Received disconnect from 45.120.216.232 port 54238:11: Bye Bye [preauth]
Dec 06 08:15:33 compute-1 sshd-session[309716]: Disconnected from authenticating user root 45.120.216.232 port 54238 [preauth]
Dec 06 08:15:34 compute-1 ovn_controller[130279]: 2025-12-06T08:15:34Z|00830|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 08:15:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:34.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:35.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:35 compute-1 ceph-mon[81689]: pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:36.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:37 compute-1 nova_compute[226101]: 2025-12-06 08:15:37.076 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:37.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:37 compute-1 ceph-mon[81689]: pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:37 compute-1 nova_compute[226101]: 2025-12-06 08:15:37.376 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:38 compute-1 nova_compute[226101]: 2025-12-06 08:15:38.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:38 compute-1 nova_compute[226101]: 2025-12-06 08:15:38.619 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:38 compute-1 nova_compute[226101]: 2025-12-06 08:15:38.619 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:38 compute-1 nova_compute[226101]: 2025-12-06 08:15:38.620 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:38 compute-1 nova_compute[226101]: 2025-12-06 08:15:38.620 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:15:38 compute-1 nova_compute[226101]: 2025-12-06 08:15:38.620 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:38 compute-1 ceph-mon[81689]: pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:38.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:15:39 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1889620069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.045 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:39.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.192 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.193 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4306MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.193 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.194 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.303 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.303 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.335 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:15:39 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3724633399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1889620069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:39 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3724633399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.747 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.753 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.770 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.772 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:15:39 compute-1 nova_compute[226101]: 2025-12-06 08:15:39.772 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:40 compute-1 podman[309763]: 2025-12-06 08:15:40.065085637 +0000 UTC m=+0.051480242 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 08:15:40 compute-1 podman[309762]: 2025-12-06 08:15:40.072259269 +0000 UTC m=+0.058865559 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 08:15:40 compute-1 podman[309764]: 2025-12-06 08:15:40.09728689 +0000 UTC m=+0.079584835 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:15:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:41.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:41 compute-1 ceph-mon[81689]: pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:42 compute-1 nova_compute[226101]: 2025-12-06 08:15:42.077 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:42 compute-1 nova_compute[226101]: 2025-12-06 08:15:42.379 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:42.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:43.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:43 compute-1 ceph-mon[81689]: pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:44 compute-1 nova_compute[226101]: 2025-12-06 08:15:44.772 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:44 compute-1 nova_compute[226101]: 2025-12-06 08:15:44.773 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:15:44 compute-1 nova_compute[226101]: 2025-12-06 08:15:44.773 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:15:44 compute-1 nova_compute[226101]: 2025-12-06 08:15:44.790 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:15:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:44.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:45.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:45 compute-1 ceph-mon[81689]: pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:46.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:47 compute-1 nova_compute[226101]: 2025-12-06 08:15:47.079 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:47.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:47 compute-1 ceph-mon[81689]: pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 08:15:47 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2651028505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:47 compute-1 nova_compute[226101]: 2025-12-06 08:15:47.381 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:48.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:49.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:49 compute-1 ceph-mon[81689]: pgmap v3695: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 06 08:15:49 compute-1 sshd-session[309825]: Received disconnect from 186.87.166.141 port 33190:11: Bye Bye [preauth]
Dec 06 08:15:49 compute-1 sshd-session[309825]: Disconnected from authenticating user root 186.87.166.141 port 33190 [preauth]
Dec 06 08:15:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:15:50 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2115213910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:15:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1497060737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2115213910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:15:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:50.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:51.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:51 compute-1 ceph-mon[81689]: pgmap v3696: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 06 08:15:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/911365166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:51 compute-1 sudo[309827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:51 compute-1 sudo[309827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:51 compute-1 sudo[309827]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:51 compute-1 sudo[309852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:15:51 compute-1 sudo[309852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:51 compute-1 sudo[309852]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:51 compute-1 sudo[309877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:51 compute-1 sudo[309877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:51 compute-1 sudo[309877]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:51 compute-1 sudo[309902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:15:51 compute-1 sudo[309902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:52 compute-1 nova_compute[226101]: 2025-12-06 08:15:52.081 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:52 compute-1 sudo[309902]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:52 compute-1 nova_compute[226101]: 2025-12-06 08:15:52.382 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/961934422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:52 compute-1 ceph-mon[81689]: pgmap v3697: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 06 08:15:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1481869598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:15:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:15:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:15:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:15:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:15:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:15:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:15:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3720579820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:15:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:52.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:53.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:53 compute-1 nova_compute[226101]: 2025-12-06 08:15:53.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:53 compute-1 sshd-session[309823]: Connection closed by 107.150.106.178 port 56174 [preauth]
Dec 06 08:15:54 compute-1 nova_compute[226101]: 2025-12-06 08:15:54.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:54 compute-1 nova_compute[226101]: 2025-12-06 08:15:54.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:15:54 compute-1 ceph-mon[81689]: pgmap v3698: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 06 08:15:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2826303976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:15:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:55.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:55 compute-1 nova_compute[226101]: 2025-12-06 08:15:55.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:57.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:57 compute-1 nova_compute[226101]: 2025-12-06 08:15:57.082 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:57.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:57 compute-1 ceph-mon[81689]: pgmap v3699: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 08:15:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1161032478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:57 compute-1 nova_compute[226101]: 2025-12-06 08:15:57.417 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:57 compute-1 nova_compute[226101]: 2025-12-06 08:15:57.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2520721463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:58 compute-1 sudo[309961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:58 compute-1 sudo[309961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:58 compute-1 sudo[309961]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:58 compute-1 sudo[309986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:15:58 compute-1 sudo[309986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:58 compute-1 sudo[309986]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:15:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:59.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:59 compute-1 sshd-session[309959]: Received disconnect from 154.219.116.39 port 45022:11: Bye Bye [preauth]
Dec 06 08:15:59 compute-1 sshd-session[309959]: Disconnected from authenticating user root 154.219.116.39 port 45022 [preauth]
Dec 06 08:15:59 compute-1 ceph-mon[81689]: pgmap v3700: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:15:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:15:59 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:15:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:15:59.748 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=96, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=95) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:15:59 compute-1 nova_compute[226101]: 2025-12-06 08:15:59.748 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:15:59.750 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:15:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:15:59.751 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '96'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:16:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:01.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:01.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:01 compute-1 ceph-mon[81689]: pgmap v3701: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 08:16:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:16:01.694 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:16:01.695 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:16:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:16:01.695 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:16:02 compute-1 nova_compute[226101]: 2025-12-06 08:16:02.084 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:02 compute-1 nova_compute[226101]: 2025-12-06 08:16:02.458 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:03 compute-1 ceph-mon[81689]: pgmap v3702: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Dec 06 08:16:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:03.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:03 compute-1 nova_compute[226101]: 2025-12-06 08:16:03.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:03 compute-1 nova_compute[226101]: 2025-12-06 08:16:03.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:03 compute-1 nova_compute[226101]: 2025-12-06 08:16:03.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:04 compute-1 ceph-mon[81689]: pgmap v3703: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 83 op/s
Dec 06 08:16:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:05.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:05.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:07.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:07 compute-1 ceph-mon[81689]: pgmap v3704: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 93 op/s
Dec 06 08:16:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1624445191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:07 compute-1 nova_compute[226101]: 2025-12-06 08:16:07.087 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:07.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:07 compute-1 nova_compute[226101]: 2025-12-06 08:16:07.492 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:09.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:09 compute-1 ceph-mon[81689]: pgmap v3705: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 91 op/s
Dec 06 08:16:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:09.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:16:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2380515998' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:16:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2380515998' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2667934728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2667934728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2380515998' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2380515998' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:10 compute-1 sshd-session[310012]: Received disconnect from 186.96.151.198 port 56880:11: Bye Bye [preauth]
Dec 06 08:16:10 compute-1 sshd-session[310012]: Disconnected from authenticating user root 186.96.151.198 port 56880 [preauth]
Dec 06 08:16:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:11.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:11 compute-1 podman[310016]: 2025-12-06 08:16:11.086605104 +0000 UTC m=+0.060900603 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 08:16:11 compute-1 podman[310015]: 2025-12-06 08:16:11.089682927 +0000 UTC m=+0.064048418 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 08:16:11 compute-1 podman[310017]: 2025-12-06 08:16:11.119276981 +0000 UTC m=+0.084418834 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:16:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:11 compute-1 ceph-mon[81689]: pgmap v3706: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Dec 06 08:16:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:12 compute-1 nova_compute[226101]: 2025-12-06 08:16:12.089 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:12 compute-1 nova_compute[226101]: 2025-12-06 08:16:12.493 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:13.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:13 compute-1 ceph-mon[81689]: pgmap v3707: 305 pgs: 305 active+clean; 197 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 157 op/s
Dec 06 08:16:14 compute-1 ceph-mon[81689]: pgmap v3708: 305 pgs: 305 active+clean; 197 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Dec 06 08:16:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:15.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 08:16:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:15.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 08:16:15 compute-1 sshd-session[310011]: error: kex_exchange_identification: read: Connection timed out
Dec 06 08:16:15 compute-1 sshd-session[310011]: banner exchange: Connection from 14.103.75.9 port 23068: Connection timed out
Dec 06 08:16:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:16 compute-1 ceph-mon[81689]: pgmap v3709: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Dec 06 08:16:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:17.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:17 compute-1 nova_compute[226101]: 2025-12-06 08:16:17.092 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:17 compute-1 nova_compute[226101]: 2025-12-06 08:16:17.496 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:18 compute-1 ceph-mon[81689]: pgmap v3710: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Dec 06 08:16:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:19.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:19.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:21.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:21.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e417 e417: 3 total, 3 up, 3 in
Dec 06 08:16:22 compute-1 nova_compute[226101]: 2025-12-06 08:16:22.094 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:22 compute-1 ceph-mon[81689]: pgmap v3711: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Dec 06 08:16:22 compute-1 nova_compute[226101]: 2025-12-06 08:16:22.497 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:23.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:23.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:23 compute-1 ceph-mon[81689]: pgmap v3712: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 123 op/s
Dec 06 08:16:23 compute-1 ceph-mon[81689]: osdmap e417: 3 total, 3 up, 3 in
Dec 06 08:16:23 compute-1 sshd-session[310081]: Received disconnect from 14.225.3.79 port 42024:11: Bye Bye [preauth]
Dec 06 08:16:23 compute-1 sshd-session[310081]: Disconnected from authenticating user root 14.225.3.79 port 42024 [preauth]
Dec 06 08:16:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:25.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:25.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:25 compute-1 ceph-mon[81689]: pgmap v3714: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 62 op/s
Dec 06 08:16:25 compute-1 sshd-session[310083]: Received disconnect from 101.100.194.199 port 47294:11: Bye Bye [preauth]
Dec 06 08:16:25 compute-1 sshd-session[310083]: Disconnected from authenticating user root 101.100.194.199 port 47294 [preauth]
Dec 06 08:16:25 compute-1 sshd-session[310085]: Received disconnect from 124.18.141.70 port 41660:11: Bye Bye [preauth]
Dec 06 08:16:25 compute-1 sshd-session[310085]: Disconnected from authenticating user root 124.18.141.70 port 41660 [preauth]
Dec 06 08:16:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:27 compute-1 nova_compute[226101]: 2025-12-06 08:16:27.095 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:27.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:27 compute-1 nova_compute[226101]: 2025-12-06 08:16:27.585 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:27 compute-1 ceph-mon[81689]: pgmap v3715: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Dec 06 08:16:28 compute-1 sshd-session[310087]: Received disconnect from 91.144.158.231 port 11882:11: Bye Bye [preauth]
Dec 06 08:16:28 compute-1 sshd-session[310087]: Disconnected from authenticating user root 91.144.158.231 port 11882 [preauth]
Dec 06 08:16:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:29.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:29.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:30 compute-1 ceph-mon[81689]: pgmap v3716: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 06 08:16:30 compute-1 nova_compute[226101]: 2025-12-06 08:16:30.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:31 compute-1 ceph-mon[81689]: pgmap v3717: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 06 08:16:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1315743655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3150436561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:31.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:31.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:32 compute-1 nova_compute[226101]: 2025-12-06 08:16:32.097 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:32 compute-1 nova_compute[226101]: 2025-12-06 08:16:32.588 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:33.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:33.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:34 compute-1 ceph-mon[81689]: pgmap v3718: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 KiB/s rd, 4.1 KiB/s wr, 5 op/s
Dec 06 08:16:34 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Dec 06 08:16:34 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:34.989235) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:16:34 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Dec 06 08:16:34 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008994989330, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 251, "total_data_size": 2701414, "memory_usage": 2735336, "flush_reason": "Manual Compaction"}
Dec 06 08:16:34 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995003509, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 1075464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80203, "largest_seqno": 81430, "table_properties": {"data_size": 1071257, "index_size": 1730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11354, "raw_average_key_size": 20, "raw_value_size": 1062089, "raw_average_value_size": 1955, "num_data_blocks": 77, "num_entries": 543, "num_filter_entries": 543, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008891, "oldest_key_time": 1765008891, "file_creation_time": 1765008994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 14318 microseconds, and 5195 cpu microseconds.
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.003557) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 1075464 bytes OK
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.003579) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.005387) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.005400) EVENT_LOG_v1 {"time_micros": 1765008995005396, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.005433) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 2695587, prev total WAL file size 2695587, number of live WAL files 2.
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.006305) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373737' seq:72057594037927935, type:22 .. '6D6772737461740033303239' seq:0, type:0; will stop at (end)
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(1050KB)], [162(13MB)]
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995007156, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 15002634, "oldest_snapshot_seqno": -1}
Dec 06 08:16:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:35.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 10948 keys, 11853242 bytes, temperature: kUnknown
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995093927, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 11853242, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11786196, "index_size": 38581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27397, "raw_key_size": 289177, "raw_average_key_size": 26, "raw_value_size": 11598148, "raw_average_value_size": 1059, "num_data_blocks": 1455, "num_entries": 10948, "num_filter_entries": 10948, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765008995, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.094780) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11853242 bytes
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.096669) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.2 rd, 136.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.3 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(25.0) write-amplify(11.0) OK, records in: 11421, records dropped: 473 output_compression: NoCompression
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.096704) EVENT_LOG_v1 {"time_micros": 1765008995096688, "job": 104, "event": "compaction_finished", "compaction_time_micros": 87113, "compaction_time_cpu_micros": 31058, "output_level": 6, "num_output_files": 1, "total_output_size": 11853242, "num_input_records": 11421, "num_output_records": 10948, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995097245, "job": 104, "event": "table_file_deletion", "file_number": 164}
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995102404, "job": 104, "event": "table_file_deletion", "file_number": 162}
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.006133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.102579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.102586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.102587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.102589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:16:35.102590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:35.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:35 compute-1 ceph-mon[81689]: pgmap v3719: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 3.5 KiB/s wr, 4 op/s
Dec 06 08:16:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3745854789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:16:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1508888652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:16:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2322241667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:16:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:37.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:37 compute-1 nova_compute[226101]: 2025-12-06 08:16:37.125 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:37.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:37 compute-1 nova_compute[226101]: 2025-12-06 08:16:37.591 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:37 compute-1 sshd-session[310089]: Received disconnect from 165.154.55.146 port 53514:11: Bye Bye [preauth]
Dec 06 08:16:37 compute-1 sshd-session[310089]: Disconnected from authenticating user root 165.154.55.146 port 53514 [preauth]
Dec 06 08:16:38 compute-1 ceph-mon[81689]: pgmap v3720: 305 pgs: 305 active+clean; 287 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 21 op/s
Dec 06 08:16:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3235342965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:16:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:39.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:39 compute-1 ceph-mon[81689]: pgmap v3721: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:16:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:39.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:39 compute-1 nova_compute[226101]: 2025-12-06 08:16:39.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:39 compute-1 nova_compute[226101]: 2025-12-06 08:16:39.614 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:16:39 compute-1 nova_compute[226101]: 2025-12-06 08:16:39.615 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:16:39 compute-1 nova_compute[226101]: 2025-12-06 08:16:39.615 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:16:39 compute-1 nova_compute[226101]: 2025-12-06 08:16:39.615 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:16:39 compute-1 nova_compute[226101]: 2025-12-06 08:16:39.615 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:16:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:16:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2198310770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.047 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:16:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2198310770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.207 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.209 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4296MB free_disk=20.921966552734375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.209 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.210 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.313 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.313 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.330 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.366 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.366 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.382 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.419 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.445 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:16:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:16:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2035496392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.883 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.890 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.905 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.929 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:16:40 compute-1 nova_compute[226101]: 2025-12-06 08:16:40.930 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:16:40 compute-1 sshd-session[310091]: Received disconnect from 136.112.8.45 port 40590:11: Bye Bye [preauth]
Dec 06 08:16:40 compute-1 sshd-session[310091]: Disconnected from authenticating user root 136.112.8.45 port 40590 [preauth]
Dec 06 08:16:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:41.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:41.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:41 compute-1 sshd-session[310113]: Received disconnect from 106.51.92.114 port 46214:11: Bye Bye [preauth]
Dec 06 08:16:41 compute-1 sshd-session[310113]: Disconnected from authenticating user root 106.51.92.114 port 46214 [preauth]
Dec 06 08:16:41 compute-1 ceph-mon[81689]: pgmap v3722: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 08:16:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2035496392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:42 compute-1 podman[310140]: 2025-12-06 08:16:42.091023059 +0000 UTC m=+0.071721444 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:16:42 compute-1 podman[310139]: 2025-12-06 08:16:42.094384479 +0000 UTC m=+0.077587011 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:16:42 compute-1 nova_compute[226101]: 2025-12-06 08:16:42.127 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:42 compute-1 podman[310141]: 2025-12-06 08:16:42.132662965 +0000 UTC m=+0.104907643 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:16:42 compute-1 nova_compute[226101]: 2025-12-06 08:16:42.614 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:43.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:43.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:43 compute-1 ceph-mon[81689]: pgmap v3723: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Dec 06 08:16:44 compute-1 nova_compute[226101]: 2025-12-06 08:16:44.930 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:44 compute-1 nova_compute[226101]: 2025-12-06 08:16:44.930 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:16:44 compute-1 nova_compute[226101]: 2025-12-06 08:16:44.931 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:16:44 compute-1 nova_compute[226101]: 2025-12-06 08:16:44.947 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:16:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:45.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:45.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:46 compute-1 ceph-mon[81689]: pgmap v3724: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Dec 06 08:16:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:47.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:47 compute-1 nova_compute[226101]: 2025-12-06 08:16:47.129 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:47.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:47 compute-1 nova_compute[226101]: 2025-12-06 08:16:47.615 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:48 compute-1 ceph-mon[81689]: pgmap v3725: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Dec 06 08:16:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:49.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:49 compute-1 ceph-mon[81689]: pgmap v3726: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 132 KiB/s wr, 178 op/s
Dec 06 08:16:51 compute-1 ceph-mon[81689]: pgmap v3727: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 157 op/s
Dec 06 08:16:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/950012681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:51.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:52 compute-1 nova_compute[226101]: 2025-12-06 08:16:52.130 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:52 compute-1 nova_compute[226101]: 2025-12-06 08:16:52.618 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:53.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:53 compute-1 sshd-session[310198]: Received disconnect from 45.120.216.232 port 54574:11: Bye Bye [preauth]
Dec 06 08:16:53 compute-1 sshd-session[310198]: Disconnected from authenticating user root 45.120.216.232 port 54574 [preauth]
Dec 06 08:16:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:53.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e418 e418: 3 total, 3 up, 3 in
Dec 06 08:16:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1802524803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1802524803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:16:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3489347069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:16:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3489347069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:54 compute-1 nova_compute[226101]: 2025-12-06 08:16:54.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:54 compute-1 ceph-mon[81689]: pgmap v3728: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 28 KiB/s wr, 178 op/s
Dec 06 08:16:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1924030309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:54 compute-1 ceph-mon[81689]: osdmap e418: 3 total, 3 up, 3 in
Dec 06 08:16:54 compute-1 ceph-mon[81689]: pgmap v3730: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.3 KiB/s wr, 116 op/s
Dec 06 08:16:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1434463726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3489347069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3489347069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:55.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:55.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:55 compute-1 nova_compute[226101]: 2025-12-06 08:16:55.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:55 compute-1 nova_compute[226101]: 2025-12-06 08:16:55.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:55 compute-1 nova_compute[226101]: 2025-12-06 08:16:55.591 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:16:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:57.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:57 compute-1 nova_compute[226101]: 2025-12-06 08:16:57.130 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:57 compute-1 ceph-mon[81689]: pgmap v3731: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 476 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Dec 06 08:16:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1484566160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:57.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:57 compute-1 nova_compute[226101]: 2025-12-06 08:16:57.665 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1500307962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:58 compute-1 nova_compute[226101]: 2025-12-06 08:16:58.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:58 compute-1 sudo[310200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:16:58 compute-1 sudo[310200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:58 compute-1 sudo[310200]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:59 compute-1 sudo[310225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:16:59 compute-1 sudo[310225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:59 compute-1 sudo[310225]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:59.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:59 compute-1 sudo[310250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:16:59 compute-1 sudo[310250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:59 compute-1 sudo[310250]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:59 compute-1 sudo[310275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:16:59 compute-1 sudo[310275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:16:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:59.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:59 compute-1 ceph-mon[81689]: pgmap v3732: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 434 KiB/s rd, 2.6 MiB/s wr, 134 op/s
Dec 06 08:16:59 compute-1 sudo[310275]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e419 e419: 3 total, 3 up, 3 in
Dec 06 08:17:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:17:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:17:00 compute-1 ceph-mon[81689]: osdmap e419: 3 total, 3 up, 3 in
Dec 06 08:17:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:17:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:17:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:17:00 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:17:00 compute-1 sshd-session[310300]: Received disconnect from 154.209.4.183 port 44308:11: Bye Bye [preauth]
Dec 06 08:17:00 compute-1 sshd-session[310300]: Disconnected from authenticating user root 154.209.4.183 port 44308 [preauth]
Dec 06 08:17:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:01.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:01.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:01 compute-1 ceph-mon[81689]: pgmap v3733: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 434 KiB/s rd, 2.6 MiB/s wr, 134 op/s
Dec 06 08:17:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:17:01.695 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:17:01.696 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:17:01.696 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:02 compute-1 nova_compute[226101]: 2025-12-06 08:17:02.132 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:02 compute-1 nova_compute[226101]: 2025-12-06 08:17:02.669 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:03.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:03 compute-1 ceph-mon[81689]: pgmap v3735: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 149 op/s
Dec 06 08:17:04 compute-1 nova_compute[226101]: 2025-12-06 08:17:04.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:04 compute-1 ceph-mon[81689]: pgmap v3736: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 120 op/s
Dec 06 08:17:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:05.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:05.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:17:05.377 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=97, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=96) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:17:05 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:17:05.378 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:17:05 compute-1 nova_compute[226101]: 2025-12-06 08:17:05.378 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:05 compute-1 nova_compute[226101]: 2025-12-06 08:17:05.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:05 compute-1 nova_compute[226101]: 2025-12-06 08:17:05.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:05 compute-1 sudo[310334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:05 compute-1 sudo[310334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:05 compute-1 sudo[310334]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:06 compute-1 sudo[310359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:17:06 compute-1 sudo[310359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:06 compute-1 sudo[310359]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:06 compute-1 sshd-session[310384]: Connection closed by 149.100.11.243 port 55232 [preauth]
Dec 06 08:17:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:17:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:17:06 compute-1 ceph-mon[81689]: pgmap v3737: 305 pgs: 305 active+clean; 239 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 94 op/s
Dec 06 08:17:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:07.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:07 compute-1 nova_compute[226101]: 2025-12-06 08:17:07.133 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:07.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:07 compute-1 nova_compute[226101]: 2025-12-06 08:17:07.708 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/713378447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:08 compute-1 ceph-mon[81689]: pgmap v3738: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 08:17:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:09 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:09.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:09.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1028973287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:17:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1028973287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:17:10 compute-1 ceph-mon[81689]: pgmap v3739: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 08:17:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/28693135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:11.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:11 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:11.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:17:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/109254007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:17:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/109254007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:17:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:12 compute-1 nova_compute[226101]: 2025-12-06 08:17:12.135 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:12 compute-1 nova_compute[226101]: 2025-12-06 08:17:12.709 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:13 compute-1 ceph-mon[81689]: pgmap v3740: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Dec 06 08:17:13 compute-1 podman[310386]: 2025-12-06 08:17:13.086268926 +0000 UTC m=+0.069819012 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 06 08:17:13 compute-1 podman[310387]: 2025-12-06 08:17:13.089934985 +0000 UTC m=+0.063366260 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:17:13 compute-1 podman[310388]: 2025-12-06 08:17:13.099787169 +0000 UTC m=+0.079210265 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:17:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:13.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:13 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:13.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/93613626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:15 compute-1 ceph-mon[81689]: pgmap v3741: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 06 08:17:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/686669092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:17:15 compute-1 sshd-session[310446]: Received disconnect from 154.219.116.39 port 43456:11: Bye Bye [preauth]
Dec 06 08:17:15 compute-1 sshd-session[310446]: Disconnected from authenticating user root 154.219.116.39 port 43456 [preauth]
Dec 06 08:17:15 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:17:15.379 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '97'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:17:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:15.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:15.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:16 compute-1 ceph-mon[81689]: pgmap v3742: 305 pgs: 305 active+clean; 184 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 06 08:17:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:17 compute-1 nova_compute[226101]: 2025-12-06 08:17:17.138 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:17.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:17 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:17.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:17 compute-1 nova_compute[226101]: 2025-12-06 08:17:17.750 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:18 compute-1 ceph-mon[81689]: pgmap v3743: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1018 KiB/s wr, 62 op/s
Dec 06 08:17:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.003000078s ======
Dec 06 08:17:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:19.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Dec 06 08:17:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 08:17:19 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:19.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 08:17:20 compute-1 sshd-session[310448]: Received disconnect from 186.96.151.198 port 47450:11: Bye Bye [preauth]
Dec 06 08:17:20 compute-1 sshd-session[310448]: Disconnected from authenticating user root 186.96.151.198 port 47450 [preauth]
Dec 06 08:17:20 compute-1 ceph-mon[81689]: pgmap v3744: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 3.6 KiB/s wr, 36 op/s
Dec 06 08:17:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:21.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:21 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:21.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:22 compute-1 nova_compute[226101]: 2025-12-06 08:17:22.140 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:22 compute-1 nova_compute[226101]: 2025-12-06 08:17:22.752 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:23 compute-1 ceph-mon[81689]: pgmap v3745: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 18 KiB/s wr, 105 op/s
Dec 06 08:17:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:23.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:23 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:23.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:25 compute-1 ceph-mon[81689]: pgmap v3746: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 16 KiB/s wr, 96 op/s
Dec 06 08:17:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:25.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:25.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:25 compute-1 sshd-session[310450]: Received disconnect from 186.87.166.141 port 34310:11: Bye Bye [preauth]
Dec 06 08:17:25 compute-1 sshd-session[310450]: Disconnected from authenticating user root 186.87.166.141 port 34310 [preauth]
Dec 06 08:17:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:27 compute-1 ceph-mon[81689]: pgmap v3747: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 101 op/s
Dec 06 08:17:27 compute-1 nova_compute[226101]: 2025-12-06 08:17:27.190 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:27.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:27.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:27 compute-1 nova_compute[226101]: 2025-12-06 08:17:27.755 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:29 compute-1 ceph-mon[81689]: pgmap v3748: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec 06 08:17:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:29.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:29 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:29.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:31.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:31 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:31.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:31 compute-1 ceph-mon[81689]: pgmap v3749: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 06 08:17:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:32 compute-1 nova_compute[226101]: 2025-12-06 08:17:32.192 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:32 compute-1 nova_compute[226101]: 2025-12-06 08:17:32.757 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:33 compute-1 ceph-mon[81689]: pgmap v3750: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 06 08:17:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:33.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:33 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:33.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:35.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:35 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:35.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:35 compute-1 ceph-mon[81689]: pgmap v3751: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 KiB/s rd, 6 op/s
Dec 06 08:17:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:37 compute-1 sshd-session[310452]: Received disconnect from 101.100.194.199 port 45628:11: Bye Bye [preauth]
Dec 06 08:17:37 compute-1 sshd-session[310452]: Disconnected from authenticating user root 101.100.194.199 port 45628 [preauth]
Dec 06 08:17:37 compute-1 ceph-mon[81689]: pgmap v3752: 305 pgs: 305 active+clean; 191 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 1.5 MiB/s wr, 43 op/s
Dec 06 08:17:37 compute-1 nova_compute[226101]: 2025-12-06 08:17:37.194 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:37.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:37 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:37.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:37 compute-1 nova_compute[226101]: 2025-12-06 08:17:37.759 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:39 compute-1 ceph-mon[81689]: pgmap v3753: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:17:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:39.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:39 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:39.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:39 compute-1 nova_compute[226101]: 2025-12-06 08:17:39.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:39 compute-1 nova_compute[226101]: 2025-12-06 08:17:39.658 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:39 compute-1 nova_compute[226101]: 2025-12-06 08:17:39.659 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:39 compute-1 nova_compute[226101]: 2025-12-06 08:17:39.659 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:39 compute-1 nova_compute[226101]: 2025-12-06 08:17:39.659 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:17:39 compute-1 nova_compute[226101]: 2025-12-06 08:17:39.660 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:17:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1944053455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:40 compute-1 nova_compute[226101]: 2025-12-06 08:17:40.121 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1944053455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:40 compute-1 nova_compute[226101]: 2025-12-06 08:17:40.324 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:17:40 compute-1 nova_compute[226101]: 2025-12-06 08:17:40.325 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4306MB free_disk=20.988109588623047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:17:40 compute-1 nova_compute[226101]: 2025-12-06 08:17:40.325 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:40 compute-1 nova_compute[226101]: 2025-12-06 08:17:40.326 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:40 compute-1 nova_compute[226101]: 2025-12-06 08:17:40.458 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:17:40 compute-1 nova_compute[226101]: 2025-12-06 08:17:40.458 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:17:40 compute-1 nova_compute[226101]: 2025-12-06 08:17:40.481 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:17:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1459922500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:40 compute-1 nova_compute[226101]: 2025-12-06 08:17:40.967 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
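The two lines above show how nova sizes its DISK_GB inventory on an RBD-backed node: it shells out to `ceph df --format=json` (the mon audit line confirms the dispatch) and parses the JSON, here in 0.487s. A minimal sketch of the same probe; the `stats.total_bytes` / `stats.total_avail_bytes` field names are an assumption about the JSON schema of recent Ceph releases, not taken from this log:

```python
import json
import subprocess

def ceph_capacity_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
    # Same command nova_compute runs via oslo_concurrency.processutils.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
        capture_output=True, check=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]  # field names assumed from recent Ceph
    gib = 1024 ** 3
    return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

total, avail = ceph_capacity_gib()
print(f"{avail:.1f} GiB free of {total:.1f} GiB")
```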
Dec 06 08:17:40 compute-1 nova_compute[226101]: 2025-12-06 08:17:40.975 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:17:40 compute-1 nova_compute[226101]: 2025-12-06 08:17:40.992 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
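The inventory line above is enough to reproduce what placement will schedule against this node: for each resource class, capacity is (total - reserved) * allocation_ratio, the formula placement uses for usage checks. A worked check against the logged numbers:

```python
inventory = {  # copied from the log line above
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
```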
Dec 06 08:17:41 compute-1 nova_compute[226101]: 2025-12-06 08:17:41.017 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:17:41 compute-1 nova_compute[226101]: 2025-12-06 08:17:41.017 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:41 compute-1 ceph-mon[81689]: pgmap v3754: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:17:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1459922500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2842103239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:41.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:41 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:41.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
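The beast lines here (and repeated every ~2 seconds below) are load-balancer health probes: anonymous `HEAD / HTTP/1.0` from the two controller-side addresses, each answered 200 in about a millisecond. A sketch of an equivalent probe; the endpoint port is an assumption (7480 is the beast default), as the log does not record it:

```python
import http.client

def rgw_alive(host="192.168.122.101", port=7480, timeout=2.0):
    # Anonymous HEAD /, like the haproxy-style checks in the log.
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().status == 200
    finally:
        conn.close()

print(rgw_alive())
```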
Dec 06 08:17:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:42 compute-1 nova_compute[226101]: 2025-12-06 08:17:42.236 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:42 compute-1 nova_compute[226101]: 2025-12-06 08:17:42.761 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:43 compute-1 ceph-mon[81689]: pgmap v3755: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:17:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:43.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:43 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:43.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:44 compute-1 podman[310498]: 2025-12-06 08:17:44.121854144 +0000 UTC m=+0.081957107 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:17:44 compute-1 podman[310499]: 2025-12-06 08:17:44.130149837 +0000 UTC m=+0.087062645 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 08:17:44 compute-1 podman[310500]: 2025-12-06 08:17:44.153051431 +0000 UTC m=+0.097951216 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:17:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:45 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:45.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:45.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:45 compute-1 ceph-mon[81689]: pgmap v3756: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 08:17:46 compute-1 nova_compute[226101]: 2025-12-06 08:17:46.018 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:46 compute-1 nova_compute[226101]: 2025-12-06 08:17:46.019 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:17:46 compute-1 nova_compute[226101]: 2025-12-06 08:17:46.019 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:17:46 compute-1 nova_compute[226101]: 2025-12-06 08:17:46.087 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
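`_heal_instance_info_cache` is one of several oslo.service periodic tasks firing in this window (`_poll_rescued_instances`, `_instance_usage_audit`, `_reclaim_queued_deletes`, and `_poll_volume_usage` all appear below); with no instances on the node yet, it exits immediately. The registration pattern, sketched with the real `oslo_service.periodic_task` decorator (the spacing value is illustrative; nova derives it from configuration):

```python
from oslo_service import periodic_task

class ComputeManager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)  # seconds between runs
    def _heal_instance_info_cache(self, context):
        ...  # refresh one instance's network info cache per pass
```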
Dec 06 08:17:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e420 e420: 3 total, 3 up, 3 in
Dec 06 08:17:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:47 compute-1 nova_compute[226101]: 2025-12-06 08:17:47.238 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:47.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:47 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:47.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:47 compute-1 ceph-mon[81689]: pgmap v3757: 305 pgs: 305 active+clean; 233 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 380 KiB/s rd, 3.8 MiB/s wr, 93 op/s
Dec 06 08:17:47 compute-1 ceph-mon[81689]: osdmap e420: 3 total, 3 up, 3 in
Dec 06 08:17:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 e421: 3 total, 3 up, 3 in
Dec 06 08:17:47 compute-1 nova_compute[226101]: 2025-12-06 08:17:47.765 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:48 compute-1 ceph-mon[81689]: osdmap e421: 3 total, 3 up, 3 in
Dec 06 08:17:48 compute-1 ceph-mon[81689]: pgmap v3760: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 2.7 MiB/s wr, 44 op/s
Dec 06 08:17:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2865137747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:17:48 compute-1 sshd-session[310562]: Received disconnect from 91.144.158.231 port 23277:11: Bye Bye [preauth]
Dec 06 08:17:48 compute-1 sshd-session[310562]: Disconnected from authenticating user root 91.144.158.231 port 23277 [preauth]
Dec 06 08:17:48 compute-1 sshd-session[310564]: Received disconnect from 107.150.106.178 port 53664:11: Bye Bye [preauth]
Dec 06 08:17:48 compute-1 sshd-session[310564]: Disconnected from authenticating user root 107.150.106.178 port 53664 [preauth]
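Interleaved with the OpenStack traffic, sshd is rejecting root password probes from unrelated public addresses (three distinct sources in this window alone): background scanning, not part of the deployment. A quick way to tally them from a saved journal, as a sketch (the input file name is assumed):

```python
import re
from collections import Counter

pat = re.compile(r"Disconnected from authenticating user (\S+) (\S+) port")
hits = Counter()
with open("journal.txt") as f:  # journalctl output saved to a file
    for line in f:
        m = pat.search(line)
        if m:
            hits[(m.group(1), m.group(2))] += 1

for (user, ip), n in hits.most_common():
    print(f"{n:4d}  {user}@{ip}")
```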
Dec 06 08:17:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:49.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 08:17:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:49.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 08:17:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/406431765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:17:50 compute-1 ceph-mon[81689]: pgmap v3761: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 2.7 MiB/s wr, 43 op/s
Dec 06 08:17:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:51.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:51.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:52 compute-1 nova_compute[226101]: 2025-12-06 08:17:52.241 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:52 compute-1 nova_compute[226101]: 2025-12-06 08:17:52.767 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:52 compute-1 ceph-mon[81689]: pgmap v3762: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 139 KiB/s rd, 2.7 MiB/s wr, 79 op/s
Dec 06 08:17:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:53.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:53 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:53.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:53 compute-1 sshd-session[310566]: Received disconnect from 124.18.141.70 port 39198:11: Bye Bye [preauth]
Dec 06 08:17:53 compute-1 sshd-session[310566]: Disconnected from authenticating user root 124.18.141.70 port 39198 [preauth]
Dec 06 08:17:54 compute-1 ceph-mon[81689]: pgmap v3763: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 111 KiB/s rd, 173 KiB/s wr, 36 op/s
Dec 06 08:17:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1622176443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:55 compute-1 nova_compute[226101]: 2025-12-06 08:17:55.081 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "fd33cb27-7fd5-4a29-9653-903deb886052" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:55 compute-1 nova_compute[226101]: 2025-12-06 08:17:55.082 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:55 compute-1 nova_compute[226101]: 2025-12-06 08:17:55.101 226109 DEBUG nova.compute.manager [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:17:55 compute-1 nova_compute[226101]: 2025-12-06 08:17:55.234 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:55 compute-1 nova_compute[226101]: 2025-12-06 08:17:55.235 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:55 compute-1 nova_compute[226101]: 2025-12-06 08:17:55.243 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:17:55 compute-1 nova_compute[226101]: 2025-12-06 08:17:55.244 226109 INFO nova.compute.claims [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Claim successful on node compute-1.ctlplane.example.com
Dec 06 08:17:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:55.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:55 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:55.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:55 compute-1 nova_compute[226101]: 2025-12-06 08:17:55.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:55 compute-1 nova_compute[226101]: 2025-12-06 08:17:55.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:55 compute-1 nova_compute[226101]: 2025-12-06 08:17:55.617 226109 DEBUG oslo_concurrency.processutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3347139014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:17:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3542385760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:56 compute-1 nova_compute[226101]: 2025-12-06 08:17:56.122 226109 DEBUG oslo_concurrency.processutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:56 compute-1 nova_compute[226101]: 2025-12-06 08:17:56.128 226109 DEBUG nova.compute.provider_tree [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:17:56 compute-1 nova_compute[226101]: 2025-12-06 08:17:56.147 226109 DEBUG nova.scheduler.client.report [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:17:56 compute-1 nova_compute[226101]: 2025-12-06 08:17:56.174 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:56 compute-1 nova_compute[226101]: 2025-12-06 08:17:56.176 226109 DEBUG nova.compute.manager [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:17:56 compute-1 nova_compute[226101]: 2025-12-06 08:17:56.224 226109 INFO nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:17:56 compute-1 nova_compute[226101]: 2025-12-06 08:17:56.228 226109 DEBUG nova.compute.manager [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:17:56 compute-1 nova_compute[226101]: 2025-12-06 08:17:56.228 226109 DEBUG nova.network.neutron [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:17:56 compute-1 nova_compute[226101]: 2025-12-06 08:17:56.266 226109 DEBUG nova.compute.manager [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:17:56 compute-1 nova_compute[226101]: 2025-12-06 08:17:56.332 226109 INFO nova.virt.block_device [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Booting with volume snapshot 8c532b5e-c264-4201-a213-9aa271d92ee9 at /dev/vda
Dec 06 08:17:56 compute-1 nova_compute[226101]: 2025-12-06 08:17:56.875 226109 DEBUG nova.policy [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e8feb4540af4e2caa45a88a9202dbe2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
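The policy failure above is benign: this tempest user only holds `reader`/`member`, so the admin-oriented `network:attach_external_network` rule denies and nova proceeds without external-network rights. The mechanism underneath is oslo.policy; a minimal sketch of the same kind of check (the `role:admin` default and the enforcer wiring are illustrative assumptions, not read from this log):

```python
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(
    policy.RuleDefault("network:attach_external_network", "role:admin"))

creds = {"roles": ["reader", "member"],
         "project_id": "4b2dc4b8729f446a9c7ac69ca446f71d"}
# Denied for a reader/member-only user, matching the log.
print(enforcer.enforce("network:attach_external_network", {}, creds))  # False
```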
Dec 06 08:17:57 compute-1 ceph-mon[81689]: pgmap v3764: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 20 KiB/s wr, 103 op/s
Dec 06 08:17:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3542385760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:57 compute-1 nova_compute[226101]: 2025-12-06 08:17:57.243 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:57.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:57 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:57.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:57 compute-1 nova_compute[226101]: 2025-12-06 08:17:57.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:57 compute-1 nova_compute[226101]: 2025-12-06 08:17:57.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:17:57 compute-1 nova_compute[226101]: 2025-12-06 08:17:57.769 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:57 compute-1 nova_compute[226101]: 2025-12-06 08:17:57.808 226109 DEBUG nova.network.neutron [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Successfully created port: 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:17:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/895991980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:59 compute-1 ceph-mon[81689]: pgmap v3765: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 18 KiB/s wr, 102 op/s
Dec 06 08:17:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2688482981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:17:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:17:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:59.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:59 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:59.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.205 226109 DEBUG nova.network.neutron [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Successfully updated port: 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.229 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "refresh_cache-fd33cb27-7fd5-4a29-9653-903deb886052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.230 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquired lock "refresh_cache-fd33cb27-7fd5-4a29-9653-903deb886052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.230 226109 DEBUG nova.network.neutron [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.336 226109 DEBUG nova.compute.manager [req-97bc3ed4-1545-454f-a78f-6e87050203f3 req-23910cb7-007d-4c0a-83fe-32c2a9acad25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Received event network-changed-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.336 226109 DEBUG nova.compute.manager [req-97bc3ed4-1545-454f-a78f-6e87050203f3 req-23910cb7-007d-4c0a-83fe-32c2a9acad25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Refreshing instance network info cache due to event network-changed-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.337 226109 DEBUG oslo_concurrency.lockutils [req-97bc3ed4-1545-454f-a78f-6e87050203f3 req-23910cb7-007d-4c0a-83fe-32c2a9acad25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-fd33cb27-7fd5-4a29-9653-903deb886052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.530 226109 DEBUG nova.network.neutron [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.688 226109 DEBUG os_brick.utils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.689 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.700 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.701 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[cea47912-ffaa-4bef-8543-387d76d30b11]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.702 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.710 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.710 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[5907ad86-4f24-406b-9080-60f69999b67c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.711 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.720 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.720 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[4259f96f-4f31-4ddf-b3ef-0d3bdca102cd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.721 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc14b14-561c-4fe5-9ca6-af9727d25e04]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.722 226109 DEBUG oslo_concurrency.processutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.751 226109 DEBUG oslo_concurrency.processutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.754 226109 DEBUG os_brick.initiator.connectors.lightos [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.754 226109 DEBUG os_brick.initiator.connectors.lightos [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.755 226109 DEBUG os_brick.initiator.connectors.lightos [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.755 226109 DEBUG os_brick.utils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
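The `==> get_connector_properties` / `<== ... return (66ms)` trace pair brackets the privsep-executed probes in between (`multipathd show status`, the iSCSI initiator name, `findmnt`, `nvme version`): that is how os-brick assembles the connector dict nova hands to Cinder for the volume attach. The call itself, sketched with the real os-brick entry point and the arguments visible in the trace:

```python
from os_brick.initiator import connector

props = connector.get_connector_properties(
    root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
    my_ip="192.168.122.101",
    multipath=True,
    enforce_multipath=True,
    host="compute-1.ctlplane.example.com",
)
print(props["initiator"], props["nqn"])  # matches the values logged above
```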
Dec 06 08:18:00 compute-1 nova_compute[226101]: 2025-12-06 08:18:00.755 226109 DEBUG nova.virt.block_device [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Updating existing volume attachment record: f9616046-1843-4502-87dc-b5e3433ae6e1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 08:18:01 compute-1 ceph-mon[81689]: pgmap v3766: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.325 226109 DEBUG nova.network.neutron [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Updating instance_info_cache with network_info: [{"id": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "address": "fa:16:3e:40:69:90", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9736c5e1-fb", "ovs_interfaceid": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.419 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Releasing lock "refresh_cache-fd33cb27-7fd5-4a29-9653-903deb886052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.420 226109 DEBUG nova.compute.manager [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Instance network_info: |[{"id": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "address": "fa:16:3e:40:69:90", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9736c5e1-fb", "ovs_interfaceid": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.420 226109 DEBUG oslo_concurrency.lockutils [req-97bc3ed4-1545-454f-a78f-6e87050203f3 req-23910cb7-007d-4c0a-83fe-32c2a9acad25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-fd33cb27-7fd5-4a29-9653-903deb886052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.421 226109 DEBUG nova.network.neutron [req-97bc3ed4-1545-454f-a78f-6e87050203f3 req-23910cb7-007d-4c0a-83fe-32c2a9acad25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Refreshing network info cache for port 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
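The `network_info` payload cached at 08:18:01.325 is plain JSON once the `|[...]|` delimiters are stripped; everything the guest wiring needs (MAC, fixed IP, MTU, OVS bridge, tap name) is in the first element. A sketch that pulls those fields, assuming the logged list has been saved verbatim to a file:

```python
import json

# network_info as logged above, saved to a file (name assumed)
with open("network_info.json") as f:
    vif = json.load(f)[0]

print(vif["address"])                                     # fa:16:3e:40:69:90
print(vif["network"]["subnets"][0]["ips"][0]["address"])  # 10.100.0.9
print(vif["network"]["meta"]["mtu"])                      # 1442
print(vif["details"]["bridge_name"], vif["devname"])      # br-int tap9736c5e1-fb
```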
Dec 06 08:18:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:18:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/320196248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:18:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:01.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:01 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:01.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:01.696 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:01.696 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:01.696 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.844 226109 DEBUG nova.compute.manager [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.847 226109 DEBUG nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.847 226109 INFO nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Creating image(s)
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.848 226109 DEBUG nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.849 226109 DEBUG nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Ensure instance console log exists: /var/lib/nova/instances/fd33cb27-7fd5-4a29-9653-903deb886052/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.849 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.849 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.850 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.853 226109 DEBUG nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Start _get_guest_xml network_info=[{"id": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "address": "fa:16:3e:40:69:90", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9736c5e1-fb", "ovs_interfaceid": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-12-06T08:17:46Z,direct_url=<?>,disk_format='qcow2',id=c5cbac11-d6e7-4196-b42c-615f51a1fae2,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-107678206',owner='4b2dc4b8729f446a9c7ac69ca446f71d',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-12-06T08:17:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-43e97266-6e92-4724-9224-f4b44696e11a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '43e97266-6e92-4724-9224-f4b44696e11a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'fd33cb27-7fd5-4a29-9653-903deb886052', 'attached_at': '', 'detached_at': '', 'volume_id': '43e97266-6e92-4724-9224-f4b44696e11a', 'serial': '43e97266-6e92-4724-9224-f4b44696e11a'}, 'mount_device': '/dev/vda', 'boot_index': 0, 'attachment_id': 'f9616046-1843-4502-87dc-b5e3433ae6e1', 'guest_format': None, 'delete_on_termination': True, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.857 226109 WARNING nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.862 226109 DEBUG nova.virt.libvirt.host [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.862 226109 DEBUG nova.virt.libvirt.host [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.867 226109 DEBUG nova.virt.libvirt.host [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.868 226109 DEBUG nova.virt.libvirt.host [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
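The two probes above are how nova decides which cgroup hierarchy can enforce CPU quotas on this host: the v1 controller is absent, the v2 one is present. Outside of nova, the v2 check amounts to reading the unified hierarchy's controller list; a minimal sketch in Python, assuming a default cgroup-v2 mount at /sys/fs/cgroup (an illustration, not nova's actual code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # cgroup v2 advertises its available controllers in a single
        # space-separated file at the hierarchy root.
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False  # no unified (v2) hierarchy mounted here
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log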
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.870 226109 DEBUG nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.870 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-12-06T08:17:46Z,direct_url=<?>,disk_format='qcow2',id=c5cbac11-d6e7-4196-b42c-615f51a1fae2,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-107678206',owner='4b2dc4b8729f446a9c7ac69ca446f71d',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-12-06T08:17:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.871 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.872 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.872 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.873 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.873 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.874 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.874 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.874 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.875 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.875 226109 DEBUG nova.virt.hardware [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
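The negotiation traced above starts from empty limits and preferences (0:0:0), caps each dimension at 65536, and ends with the only factorization of one vCPU: 1 socket, 1 core, 1 thread. Conceptually it enumerates (sockets, cores, threads) triples whose product equals the vCPU count; a simplified sketch of that idea (not nova's exact algorithm, which also orders candidates by preference):

    import itertools

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Yield every (sockets, cores, threads) factorization of the
        # vCPU count that stays within the per-dimension maxima.
        for s, c, t in itertools.product(
                range(1, min(vcpus, max_sockets) + 1),
                range(1, min(vcpus, max_cores) + 1),
                range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]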
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.914 226109 DEBUG nova.storage.rbd_utils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image fd33cb27-7fd5-4a29-9653-903deb886052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:18:01 compute-1 nova_compute[226101]: 2025-12-06 08:18:01.919 226109 DEBUG oslo_concurrency.processutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/320196248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:18:02 compute-1 sshd-session[310591]: Received disconnect from 106.51.92.114 port 32771:11: Bye Bye [preauth]
Dec 06 08:18:02 compute-1 sshd-session[310591]: Disconnected from authenticating user root 106.51.92.114 port 32771 [preauth]
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.244 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:18:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/800670191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.376 226109 DEBUG oslo_concurrency.processutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
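This `ceph mon dump` call is where nova learns the monitor endpoints that reappear as <host> elements in the disk XML further down. A hedged sketch of extracting them from the same command (assuming the 'mons'/'public_addr' JSON layout current Ceph releases emit, with addresses formatted as "ip:port/nonce"):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mons = json.loads(out)["mons"]
    # Strip the trailing "/nonce" to get plain ip:port endpoints.
    print([m["public_addr"].rsplit("/", 1)[0] for m in mons])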
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.470 226109 DEBUG nova.virt.libvirt.vif [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:17:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-492005604',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-492005604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-492005604',id=210,image_ref='c5cbac11-d6e7-4196-b42c-615f51a1fae2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkaOSMRiE9x9L/H0T0J1x895cgNVGINGUxjAGosCuIMsXIXt7lpP2doMYDuxla2yuoKptfNo3BL3gjaNczVUkm0D0R4c6kqODJGtsXBruxfwZsXmNdjHABOzVtdB+6Pvg==',key_name='tempest-keypair-1185520304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-heg1sts2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-97496240',image_owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:17:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=fd33cb27-7fd5-4a29-9653-903deb886052,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "address": "fa:16:3e:40:69:90", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9736c5e1-fb", "ovs_interfaceid": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.471 226109 DEBUG nova.network.os_vif_util [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "address": "fa:16:3e:40:69:90", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9736c5e1-fb", "ovs_interfaceid": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.471 226109 DEBUG nova.network.os_vif_util [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:69:90,bridge_name='br-int',has_traffic_filtering=True,id=9736c5e1-fb9a-4cb8-a7cb-c2f972a26552,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9736c5e1-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.472 226109 DEBUG nova.objects.instance [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'pci_devices' on Instance uuid fd33cb27-7fd5-4a29-9653-903deb886052 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.539 226109 DEBUG nova.network.neutron [req-97bc3ed4-1545-454f-a78f-6e87050203f3 req-23910cb7-007d-4c0a-83fe-32c2a9acad25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Updated VIF entry in instance network info cache for port 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.540 226109 DEBUG nova.network.neutron [req-97bc3ed4-1545-454f-a78f-6e87050203f3 req-23910cb7-007d-4c0a-83fe-32c2a9acad25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Updating instance_info_cache with network_info: [{"id": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "address": "fa:16:3e:40:69:90", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9736c5e1-fb", "ovs_interfaceid": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.546 226109 DEBUG nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:18:02 compute-1 nova_compute[226101]:   <uuid>fd33cb27-7fd5-4a29-9653-903deb886052</uuid>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   <name>instance-000000d2</name>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-492005604</nova:name>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:18:01</nova:creationTime>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <nova:user uuid="8e8feb4540af4e2caa45a88a9202dbe2">tempest-TestVolumeBootPattern-97496240-project-member</nova:user>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <nova:project uuid="4b2dc4b8729f446a9c7ac69ca446f71d">tempest-TestVolumeBootPattern-97496240</nova:project>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="c5cbac11-d6e7-4196-b42c-615f51a1fae2"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <nova:port uuid="9736c5e1-fb9a-4cb8-a7cb-c2f972a26552">
Dec 06 08:18:02 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <system>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <entry name="serial">fd33cb27-7fd5-4a29-9653-903deb886052</entry>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <entry name="uuid">fd33cb27-7fd5-4a29-9653-903deb886052</entry>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     </system>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   <os>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   </os>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   <features>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   </features>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/fd33cb27-7fd5-4a29-9653-903deb886052_disk.config">
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       </source>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <source protocol="rbd" name="volumes/volume-43e97266-6e92-4724-9224-f4b44696e11a">
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       </source>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:18:02 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <serial>43e97266-6e92-4724-9224-f4b44696e11a</serial>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:40:69:90"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <target dev="tap9736c5e1-fb"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/fd33cb27-7fd5-4a29-9653-903deb886052/console.log" append="off"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <video>
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     </video>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <input type="keyboard" bus="usb"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:18:02 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:18:02 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:18:02 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:18:02 compute-1 nova_compute[226101]: </domain>
Dec 06 08:18:02 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
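A domain dump like the one above can be checked mechanically rather than by eye; for example, listing each RBD-backed disk and its monitor endpoints with the standard library (element and attribute names taken from the XML just logged; the input file is hypothetical, e.g. saved from `virsh dumpxml instance-000000d2`):

    import xml.etree.ElementTree as ET

    dom = ET.parse("instance-000000d2.xml").getroot()
    for disk in dom.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            # Each RBD disk carries its own list of Ceph monitors.
            mons = ["%s:%s" % (h.get("name"), h.get("port"))
                    for h in src.findall("host")]
            print(src.get("name"), "via", ", ".join(mons))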
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.548 226109 DEBUG nova.compute.manager [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Preparing to wait for external event network-vif-plugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.548 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.548 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.549 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
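The acquire/release pairs here are oslo.concurrency named locks, the same primitive that guarded "vgpu_resources" earlier in this trace. For reference, the application-side pattern looks like this (a sketch; lock names copied from the log):

    from oslo_concurrency import lockutils

    # Context-manager form, as used around the instance-event registry.
    with lockutils.lock("fd33cb27-7fd5-4a29-9653-903deb886052-events"):
        pass  # critical section: create or fetch the pending event

    # Decorator form, serializing every call to the wrapped function.
    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        pass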
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.549 226109 DEBUG nova.virt.libvirt.vif [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:17:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-492005604',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-492005604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-492005604',id=210,image_ref='c5cbac11-d6e7-4196-b42c-615f51a1fae2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkaOSMRiE9x9L/H0T0J1x895cgNVGINGUxjAGosCuIMsXIXt7lpP2doMYDuxla2yuoKptfNo3BL3gjaNczVUkm0D0R4c6kqODJGtsXBruxfwZsXmNdjHABOzVtdB+6Pvg==',key_name='tempest-keypair-1185520304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-heg1sts2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-97496240',image_owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:17:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=fd33cb27-7fd5-4a29-9653-903deb886052,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "address": "fa:16:3e:40:69:90", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9736c5e1-fb", "ovs_interfaceid": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.550 226109 DEBUG nova.network.os_vif_util [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "address": "fa:16:3e:40:69:90", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9736c5e1-fb", "ovs_interfaceid": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.550 226109 DEBUG nova.network.os_vif_util [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:69:90,bridge_name='br-int',has_traffic_filtering=True,id=9736c5e1-fb9a-4cb8-a7cb-c2f972a26552,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9736c5e1-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.550 226109 DEBUG os_vif [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:69:90,bridge_name='br-int',has_traffic_filtering=True,id=9736c5e1-fb9a-4cb8-a7cb-c2f972a26552,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9736c5e1-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.551 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.551 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.552 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.555 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.555 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9736c5e1-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.556 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9736c5e1-fb, col_values=(('external_ids', {'iface-id': '9736c5e1-fb9a-4cb8-a7cb-c2f972a26552', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:69:90', 'vm-uuid': 'fd33cb27-7fd5-4a29-9653-903deb886052'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:18:02 compute-1 NetworkManager[49031]: <info>  [1765009082.5912] manager: (tap9736c5e1-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.590 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.594 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.597 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.599 226109 INFO os_vif [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:69:90,bridge_name='br-int',has_traffic_filtering=True,id=9736c5e1-fb9a-4cb8-a7cb-c2f972a26552,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9736c5e1-fb')
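The plug that just succeeded was driven through ovsdbapp as two commands in one transaction: AddPortCommand on br-int, then DbSetCommand writing the Interface external_ids. The same effect can be had with a single ovs-vsctl invocation; an illustrative sketch from Python, with all values copied from the log:

    import subprocess

    subprocess.check_call([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tap9736c5e1-fb", "--",
        "set", "Interface", "tap9736c5e1-fb",
        "external_ids:iface-id=9736c5e1-fb9a-4cb8-a7cb-c2f972a26552",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:40:69:90",
        "external_ids:vm-uuid=fd33cb27-7fd5-4a29-9653-903deb886052",
    ])

The external_ids are what lets ovn-controller match the OVS interface to the Neutron port and claim it, which is exactly what happens further down in this trace.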
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.726 226109 DEBUG oslo_concurrency.lockutils [req-97bc3ed4-1545-454f-a78f-6e87050203f3 req-23910cb7-007d-4c0a-83fe-32c2a9acad25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-fd33cb27-7fd5-4a29-9653-903deb886052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.760 226109 DEBUG nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.761 226109 DEBUG nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.761 226109 DEBUG nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No VIF found with MAC fa:16:3e:40:69:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.762 226109 INFO nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Using config drive
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.802 226109 DEBUG nova.storage.rbd_utils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image fd33cb27-7fd5-4a29-9653-903deb886052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:18:02 compute-1 nova_compute[226101]: 2025-12-06 08:18:02.808 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:03 compute-1 ceph-mon[81689]: pgmap v3767: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 98 op/s
Dec 06 08:18:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/800670191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.194 226109 INFO nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Creating config drive at /var/lib/nova/instances/fd33cb27-7fd5-4a29-9653-903deb886052/disk.config
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.201 226109 DEBUG oslo_concurrency.processutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd33cb27-7fd5-4a29-9653-903deb886052/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpno902pyj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.335 226109 DEBUG oslo_concurrency.processutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd33cb27-7fd5-4a29-9653-903deb886052/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpno902pyj" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.365 226109 DEBUG nova.storage.rbd_utils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image fd33cb27-7fd5-4a29-9653-903deb886052_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.368 226109 DEBUG oslo_concurrency.processutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fd33cb27-7fd5-4a29-9653-903deb886052/disk.config fd33cb27-7fd5-4a29-9653-903deb886052_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:03.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:03.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.536 226109 DEBUG oslo_concurrency.processutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fd33cb27-7fd5-4a29-9653-903deb886052/disk.config fd33cb27-7fd5-4a29-9653-903deb886052_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.537 226109 INFO nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Deleting local config drive /var/lib/nova/instances/fd33cb27-7fd5-4a29-9653-903deb886052/disk.config because it was imported into RBD.
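The config-drive sequence logged between 08:18:03.194 and 08:18:03.537 is self-contained: build an ISO9660 image locally, import it into the Ceph 'vms' pool as <uuid>_disk.config, then delete the local file. A condensed sketch of the same three steps (commands and flags copied from the log; /tmp/tmpno902pyj is the staged metadata directory, and error handling is omitted):

    import os
    import subprocess

    inst = "fd33cb27-7fd5-4a29-9653-903deb886052"
    iso = "/var/lib/nova/instances/%s/disk.config" % inst

    # 1. Build the config-drive ISO with the same mkisofs flags nova used.
    subprocess.check_call(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpno902pyj"])

    # 2. Import it into RBD so the <disk type="network" device="cdrom">
    #    source in the domain XML resolves.
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", iso,
         "%s_disk.config" % inst, "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])

    # 3. Drop the local copy, as the driver does once the import succeeds.
    os.unlink(iso)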
Dec 06 08:18:03 compute-1 kernel: tap9736c5e1-fb: entered promiscuous mode
Dec 06 08:18:03 compute-1 NetworkManager[49031]: <info>  [1765009083.6119] manager: (tap9736c5e1-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Dec 06 08:18:03 compute-1 ovn_controller[130279]: 2025-12-06T08:18:03Z|00831|binding|INFO|Claiming lport 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 for this chassis.
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.613 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:03 compute-1 ovn_controller[130279]: 2025-12-06T08:18:03Z|00832|binding|INFO|9736c5e1-fb9a-4cb8-a7cb-c2f972a26552: Claiming fa:16:3e:40:69:90 10.100.0.9
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.638 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:69:90 10.100.0.9'], port_security=['fa:16:3e:40:69:90 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fd33cb27-7fd5-4a29-9653-903deb886052', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1fac348-226c-4298-a6da-922652cc16d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9736c5e1-fb9a-4cb8-a7cb-c2f972a26552) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.639 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a bound to our chassis
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.641 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.660 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[63deed4f-1bd6-4902-ba40-3f87dc6bf44b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.661 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4ef1374-91 in ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
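Provisioning metadata for the datapath comes down to a veth pair with one end inside the ovnmeta namespace, where the metadata proxy will listen. An illustrative sketch using the iproute2 CLI from Python (interface and namespace names taken from the log; neutron itself does this through pyroute2 rather than shelling out, and skips steps that already exist):

    import subprocess

    ns = "ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a"
    outer, inner = "tapb4ef1374-90", "tapb4ef1374-91"

    def ip(*args):
        subprocess.check_call(["ip", *args])

    ip("netns", "add", ns)                    # namespace for this datapath
    ip("link", "add", outer, "type", "veth",  # outer end stays in the root
       "peer", "name", inner)                 # namespace, plugged into OVS
    ip("link", "set", inner, "netns", ns)     # inner end serves metadata
    ip("link", "set", outer, "up")
    ip("netns", "exec", ns, "ip", "link", "set", inner, "up")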
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.664 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.664 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4ef1374-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.664 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6a673d86-12a8-4ee8-8b67-2f25e992d5b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 NetworkManager[49031]: <info>  [1765009083.6658] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Dec 06 08:18:03 compute-1 NetworkManager[49031]: <info>  [1765009083.6668] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.667 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[dce5e559-0fc3-46cb-ba7a-b9c2bacfab41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.686 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[901d9ac7-f7fb-4225-903c-49119a6aa1f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 systemd-machined[190302]: New machine qemu-95-instance-000000d2.
Dec 06 08:18:03 compute-1 systemd-udevd[310715]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:18:03 compute-1 systemd[1]: Started Virtual Machine qemu-95-instance-000000d2.
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.722 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a36641d9-c8a4-4caa-8798-5fbba320d15c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 NetworkManager[49031]: <info>  [1765009083.7342] device (tap9736c5e1-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:18:03 compute-1 NetworkManager[49031]: <info>  [1765009083.7351] device (tap9736c5e1-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.766 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[50443a5c-7870-4d23-aed2-499abc574965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 NetworkManager[49031]: <info>  [1765009083.7790] manager: (tapb4ef1374-90): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.778 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9241df61-e538-4d56-bd78-29920107267e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.802 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.804 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.827 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[311fc198-e363-4a91-8b6e-c6fcdda7ecb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.828 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.831 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[8e13db18-23e5-4687-8ec9-4937e31c182d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 ovn_controller[130279]: 2025-12-06T08:18:03Z|00833|binding|INFO|Setting lport 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 ovn-installed in OVS
Dec 06 08:18:03 compute-1 ovn_controller[130279]: 2025-12-06T08:18:03Z|00834|binding|INFO|Setting lport 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 up in Southbound
Dec 06 08:18:03 compute-1 nova_compute[226101]: 2025-12-06 08:18:03.838 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:03 compute-1 NetworkManager[49031]: <info>  [1765009083.8575] device (tapb4ef1374-90): carrier: link connected
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.864 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[80906d66-347e-4f82-8720-447b24955759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.887 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1408cbd9-24a4-4fce-a35b-bd35b0d1130c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936336, 'reachable_time': 43938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310745, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.905 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ad01fca6-28c6-4088-a0b5-7e43cc1bd854]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:d4b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936336, 'tstamp': 936336}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310746, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.924 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8990ef-a6a6-4216-9f1d-d1830b1b00c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936336, 'reachable_time': 43938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310747, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:03 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:03.956 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c90b78c2-354b-45da-be77-222926a53d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:04.019 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[97182703-9e6a-40d5-851e-be80306f69cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:04.021 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:04.021 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:04.022 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4ef1374-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.024 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:04 compute-1 NetworkManager[49031]: <info>  [1765009084.0249] manager: (tapb4ef1374-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Dec 06 08:18:04 compute-1 kernel: tapb4ef1374-90: entered promiscuous mode
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.026 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:04.028 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4ef1374-90, col_values=(('external_ids', {'iface-id': '32c82c25-6496-4edd-ba74-1791824b99ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:18:04 compute-1 ovn_controller[130279]: 2025-12-06T08:18:04Z|00835|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.030 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.046 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:04.063 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:04.064 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5d6133-41cb-4b02-9c4c-4e8e687392e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:04.065 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:18:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:04.065 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'env', 'PROCESS_TAG=haproxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4ef1374-9c77-45a7-8776-50aa60c7d84a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:18:04 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.214 226109 DEBUG nova.compute.manager [req-2f73e788-26db-4927-ba64-f82c0557f3f4 req-5e49a71a-0aab-4eb2-9f7b-f28b9510e774 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Received event network-vif-plugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.214 226109 DEBUG oslo_concurrency.lockutils [req-2f73e788-26db-4927-ba64-f82c0557f3f4 req-5e49a71a-0aab-4eb2-9f7b-f28b9510e774 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.214 226109 DEBUG oslo_concurrency.lockutils [req-2f73e788-26db-4927-ba64-f82c0557f3f4 req-5e49a71a-0aab-4eb2-9f7b-f28b9510e774 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.215 226109 DEBUG oslo_concurrency.lockutils [req-2f73e788-26db-4927-ba64-f82c0557f3f4 req-5e49a71a-0aab-4eb2-9f7b-f28b9510e774 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.215 226109 DEBUG nova.compute.manager [req-2f73e788-26db-4927-ba64-f82c0557f3f4 req-5e49a71a-0aab-4eb2-9f7b-f28b9510e774 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Processing event network-vif-plugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:18:04 compute-1 podman[310786]: 2025-12-06 08:18:04.43296641 +0000 UTC m=+0.060458782 container create c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 08:18:04 compute-1 systemd[1]: Started libpod-conmon-c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc.scope.
Dec 06 08:18:04 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:18:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34ea82f41df9148c618a775805d4238f16c53a528fdaebcf1584bb756987156/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:04 compute-1 podman[310786]: 2025-12-06 08:18:04.403654014 +0000 UTC m=+0.031146406 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:18:04 compute-1 podman[310786]: 2025-12-06 08:18:04.511054073 +0000 UTC m=+0.138546475 container init c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:18:04 compute-1 podman[310786]: 2025-12-06 08:18:04.516269732 +0000 UTC m=+0.143762104 container start c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:18:04 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[310833]: [NOTICE]   (310840) : New worker (310842) forked
Dec 06 08:18:04 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[310833]: [NOTICE]   (310840) : Loading success.
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.586 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009084.5859828, fd33cb27-7fd5-4a29-9653-903deb886052 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.586 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] VM Started (Lifecycle Event)
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.589 226109 DEBUG nova.compute.manager [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.594 226109 DEBUG nova.virt.libvirt.driver [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.598 226109 INFO nova.virt.libvirt.driver [-] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Instance spawned successfully.
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.598 226109 INFO nova.compute.manager [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Took 2.75 seconds to spawn the instance on the hypervisor.
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.599 226109 DEBUG nova.compute.manager [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.635 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.639 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.724 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.725 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009084.5886474, fd33cb27-7fd5-4a29-9653-903deb886052 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.725 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] VM Paused (Lifecycle Event)
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.743 226109 INFO nova.compute.manager [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Took 9.59 seconds to build instance.
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.751 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.756 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009084.5938354, fd33cb27-7fd5-4a29-9653-903deb886052 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.757 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] VM Resumed (Lifecycle Event)
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.771 226109 DEBUG oslo_concurrency.lockutils [None req-4150709c-ee1c-488e-8e68-8639a77e620f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.778 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:18:04 compute-1 nova_compute[226101]: 2025-12-06 08:18:04.783 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:18:05 compute-1 ceph-mon[81689]: pgmap v3768: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Dec 06 08:18:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:05.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:05.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:05 compute-1 nova_compute[226101]: 2025-12-06 08:18:05.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:06 compute-1 sudo[310852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:06 compute-1 sudo[310852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:06 compute-1 sudo[310852]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:06 compute-1 nova_compute[226101]: 2025-12-06 08:18:06.335 226109 DEBUG nova.compute.manager [req-1210a674-48c3-48fb-8754-1080a383192b req-b9611a79-6be4-4a32-af1e-0506adb0d197 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Received event network-vif-plugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:18:06 compute-1 nova_compute[226101]: 2025-12-06 08:18:06.337 226109 DEBUG oslo_concurrency.lockutils [req-1210a674-48c3-48fb-8754-1080a383192b req-b9611a79-6be4-4a32-af1e-0506adb0d197 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:06 compute-1 nova_compute[226101]: 2025-12-06 08:18:06.338 226109 DEBUG oslo_concurrency.lockutils [req-1210a674-48c3-48fb-8754-1080a383192b req-b9611a79-6be4-4a32-af1e-0506adb0d197 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:06 compute-1 nova_compute[226101]: 2025-12-06 08:18:06.338 226109 DEBUG oslo_concurrency.lockutils [req-1210a674-48c3-48fb-8754-1080a383192b req-b9611a79-6be4-4a32-af1e-0506adb0d197 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:06 compute-1 nova_compute[226101]: 2025-12-06 08:18:06.338 226109 DEBUG nova.compute.manager [req-1210a674-48c3-48fb-8754-1080a383192b req-b9611a79-6be4-4a32-af1e-0506adb0d197 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] No waiting events found dispatching network-vif-plugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:18:06 compute-1 nova_compute[226101]: 2025-12-06 08:18:06.338 226109 WARNING nova.compute.manager [req-1210a674-48c3-48fb-8754-1080a383192b req-b9611a79-6be4-4a32-af1e-0506adb0d197 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Received unexpected event network-vif-plugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 for instance with vm_state active and task_state None.
Dec 06 08:18:06 compute-1 sudo[310877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:18:06 compute-1 sudo[310877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:06 compute-1 sudo[310877]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:06 compute-1 sudo[310902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:06 compute-1 sudo[310902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:06 compute-1 sudo[310902]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:06 compute-1 sudo[310927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:18:06 compute-1 sudo[310927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:06 compute-1 sudo[310927]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:07 compute-1 ceph-mon[81689]: pgmap v3769: 305 pgs: 305 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Dec 06 08:18:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:18:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:18:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:18:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:18:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:18:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:18:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:18:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:07.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:07.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:07 compute-1 nova_compute[226101]: 2025-12-06 08:18:07.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:07 compute-1 nova_compute[226101]: 2025-12-06 08:18:07.591 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:07 compute-1 nova_compute[226101]: 2025-12-06 08:18:07.773 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:08 compute-1 nova_compute[226101]: 2025-12-06 08:18:08.501 226109 DEBUG nova.compute.manager [req-db0e11d3-d002-4d96-915c-35057abf4a37 req-23af67fb-7ae8-42d9-8369-9de52e6145cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Received event network-changed-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:18:08 compute-1 nova_compute[226101]: 2025-12-06 08:18:08.502 226109 DEBUG nova.compute.manager [req-db0e11d3-d002-4d96-915c-35057abf4a37 req-23af67fb-7ae8-42d9-8369-9de52e6145cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Refreshing instance network info cache due to event network-changed-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:18:08 compute-1 nova_compute[226101]: 2025-12-06 08:18:08.503 226109 DEBUG oslo_concurrency.lockutils [req-db0e11d3-d002-4d96-915c-35057abf4a37 req-23af67fb-7ae8-42d9-8369-9de52e6145cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-fd33cb27-7fd5-4a29-9653-903deb886052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:18:08 compute-1 nova_compute[226101]: 2025-12-06 08:18:08.503 226109 DEBUG oslo_concurrency.lockutils [req-db0e11d3-d002-4d96-915c-35057abf4a37 req-23af67fb-7ae8-42d9-8369-9de52e6145cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-fd33cb27-7fd5-4a29-9653-903deb886052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:18:08 compute-1 nova_compute[226101]: 2025-12-06 08:18:08.504 226109 DEBUG nova.network.neutron [req-db0e11d3-d002-4d96-915c-35057abf4a37 req-23af67fb-7ae8-42d9-8369-9de52e6145cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Refreshing network info cache for port 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:18:09 compute-1 ceph-mon[81689]: pgmap v3770: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Dec 06 08:18:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3375858047' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:18:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3375858047' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:18:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:09.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:09.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:10 compute-1 nova_compute[226101]: 2025-12-06 08:18:10.099 226109 DEBUG nova.network.neutron [req-db0e11d3-d002-4d96-915c-35057abf4a37 req-23af67fb-7ae8-42d9-8369-9de52e6145cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Updated VIF entry in instance network info cache for port 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:18:10 compute-1 nova_compute[226101]: 2025-12-06 08:18:10.099 226109 DEBUG nova.network.neutron [req-db0e11d3-d002-4d96-915c-35057abf4a37 req-23af67fb-7ae8-42d9-8369-9de52e6145cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Updating instance_info_cache with network_info: [{"id": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "address": "fa:16:3e:40:69:90", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9736c5e1-fb", "ovs_interfaceid": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:18:10 compute-1 nova_compute[226101]: 2025-12-06 08:18:10.118 226109 DEBUG oslo_concurrency.lockutils [req-db0e11d3-d002-4d96-915c-35057abf4a37 req-23af67fb-7ae8-42d9-8369-9de52e6145cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-fd33cb27-7fd5-4a29-9653-903deb886052" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:18:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:11.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:11 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:11.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:11 compute-1 ceph-mon[81689]: pgmap v3771: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Dec 06 08:18:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:12 compute-1 nova_compute[226101]: 2025-12-06 08:18:12.592 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:12 compute-1 nova_compute[226101]: 2025-12-06 08:18:12.775 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:13 compute-1 ceph-mon[81689]: pgmap v3772: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Dec 06 08:18:13 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:18:13 compute-1 sudo[310985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:13 compute-1 sudo[310985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:13 compute-1 sudo[310985]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:13 compute-1 sudo[311010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:18:13 compute-1 sudo[311010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:13 compute-1 sudo[311010]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:13 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:13.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:13.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:18:14 compute-1 sshd-session[310983]: Received disconnect from 45.120.216.232 port 54910:11: Bye Bye [preauth]
Dec 06 08:18:14 compute-1 sshd-session[310983]: Disconnected from authenticating user root 45.120.216.232 port 54910 [preauth]
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.680265) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094680330, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1322, "num_deletes": 256, "total_data_size": 2745897, "memory_usage": 2786448, "flush_reason": "Manual Compaction"}
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094694238, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1799082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81435, "largest_seqno": 82752, "table_properties": {"data_size": 1793451, "index_size": 2961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12596, "raw_average_key_size": 19, "raw_value_size": 1781830, "raw_average_value_size": 2810, "num_data_blocks": 131, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008995, "oldest_key_time": 1765008995, "file_creation_time": 1765009094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 14014 microseconds, and 6214 cpu microseconds.
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.694287) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1799082 bytes OK
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.694307) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.696299) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.696352) EVENT_LOG_v1 {"time_micros": 1765009094696341, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.696380) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2739615, prev total WAL file size 2739615, number of live WAL files 2.
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.697467) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303330' seq:72057594037927935, type:22 .. '6C6F676D0033323831' seq:0, type:0; will stop at (end)
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1756KB)], [165(11MB)]
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094697530, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 13652324, "oldest_snapshot_seqno": -1}
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 11053 keys, 13516850 bytes, temperature: kUnknown
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094772396, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 13516850, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13447143, "index_size": 40986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27653, "raw_key_size": 292426, "raw_average_key_size": 26, "raw_value_size": 13255389, "raw_average_value_size": 1199, "num_data_blocks": 1555, "num_entries": 11053, "num_filter_entries": 11053, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765009094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.772640) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13516850 bytes
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.773809) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.2 rd, 180.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 11.3 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(15.1) write-amplify(7.5) OK, records in: 11582, records dropped: 529 output_compression: NoCompression
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.773823) EVENT_LOG_v1 {"time_micros": 1765009094773816, "job": 106, "event": "compaction_finished", "compaction_time_micros": 74940, "compaction_time_cpu_micros": 31323, "output_level": 6, "num_output_files": 1, "total_output_size": 13516850, "num_input_records": 11582, "num_output_records": 11053, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094774210, "job": 106, "event": "table_file_deletion", "file_number": 167}
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094776030, "job": 106, "event": "table_file_deletion", "file_number": 165}
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.697272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.776099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.776107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.776111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.776114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:14 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:14.776117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:15 compute-1 podman[311037]: 2025-12-06 08:18:15.068943761 +0000 UTC m=+0.055354424 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Dec 06 08:18:15 compute-1 podman[311039]: 2025-12-06 08:18:15.078264171 +0000 UTC m=+0.061333465 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 08:18:15 compute-1 podman[311040]: 2025-12-06 08:18:15.116983369 +0000 UTC m=+0.099639662 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 06 08:18:15 compute-1 ceph-mon[81689]: pgmap v3773: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Dec 06 08:18:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:15.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:15 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:15.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:15 compute-1 sshd-session[311036]: Received disconnect from 136.112.8.45 port 46502:11: Bye Bye [preauth]
Dec 06 08:18:15 compute-1 sshd-session[311036]: Disconnected from authenticating user root 136.112.8.45 port 46502 [preauth]
Dec 06 08:18:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:17 compute-1 ceph-mon[81689]: pgmap v3774: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Dec 06 08:18:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4049142387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:17 compute-1 nova_compute[226101]: 2025-12-06 08:18:17.629 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:17 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:17.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:17.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:17 compute-1 nova_compute[226101]: 2025-12-06 08:18:17.776 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:18 compute-1 sshd-session[311102]: Received disconnect from 14.225.3.79 port 42670:11: Bye Bye [preauth]
Dec 06 08:18:18 compute-1 sshd-session[311102]: Disconnected from authenticating user root 14.225.3.79 port 42670 [preauth]
Dec 06 08:18:18 compute-1 ceph-mon[81689]: pgmap v3775: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1009 KiB/s wr, 81 op/s
Dec 06 08:18:19 compute-1 ovn_controller[130279]: 2025-12-06T08:18:19Z|00109|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.9
Dec 06 08:18:19 compute-1 ovn_controller[130279]: 2025-12-06T08:18:19Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:40:69:90 10.100.0.9
Dec 06 08:18:19 compute-1 nova_compute[226101]: 2025-12-06 08:18:19.439 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:19.441 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=98, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=97) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:18:19 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:19.442 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:18:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:19.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:19.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:21 compute-1 ceph-mon[81689]: pgmap v3776: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 944 KiB/s rd, 14 KiB/s wr, 30 op/s
Dec 06 08:18:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:21.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:21 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:21.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:18:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6605.4 total, 600.0 interval
                                           Cumulative writes: 64K writes, 252K keys, 64K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
                                           Cumulative WAL: 64K writes, 23K syncs, 2.77 writes per sync, written: 0.24 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4108 writes, 15K keys, 4108 commit groups, 1.0 writes per commit group, ingest: 16.42 MB, 0.03 MB/s
                                           Interval WAL: 4108 writes, 1697 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:18:22 compute-1 nova_compute[226101]: 2025-12-06 08:18:22.672 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:22 compute-1 nova_compute[226101]: 2025-12-06 08:18:22.779 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:23 compute-1 ceph-mon[81689]: pgmap v3777: 305 pgs: 305 active+clean; 313 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 100 op/s
Dec 06 08:18:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:23.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:23 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:23.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:23 compute-1 ovn_controller[130279]: 2025-12-06T08:18:23Z|00111|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.9
Dec 06 08:18:23 compute-1 ovn_controller[130279]: 2025-12-06T08:18:23Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:40:69:90 10.100.0.9
Dec 06 08:18:24 compute-1 ovn_controller[130279]: 2025-12-06T08:18:24Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:69:90 10.100.0.9
Dec 06 08:18:24 compute-1 ovn_controller[130279]: 2025-12-06T08:18:24Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:69:90 10.100.0.9
Dec 06 08:18:25 compute-1 ceph-mon[81689]: pgmap v3778: 305 pgs: 305 active+clean; 313 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 MiB/s wr, 69 op/s
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.275055) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105275114, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 348, "num_deletes": 251, "total_data_size": 266556, "memory_usage": 273928, "flush_reason": "Manual Compaction"}
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105279907, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 175405, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82757, "largest_seqno": 83100, "table_properties": {"data_size": 173268, "index_size": 300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5355, "raw_average_key_size": 18, "raw_value_size": 169094, "raw_average_value_size": 585, "num_data_blocks": 14, "num_entries": 289, "num_filter_entries": 289, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009095, "oldest_key_time": 1765009095, "file_creation_time": 1765009105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 4904 microseconds, and 1776 cpu microseconds.
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.279960) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 175405 bytes OK
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.279980) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.281884) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.281927) EVENT_LOG_v1 {"time_micros": 1765009105281915, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.281955) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 264181, prev total WAL file size 264181, number of live WAL files 2.
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.282605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(171KB)], [168(12MB)]
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105283241, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 13692255, "oldest_snapshot_seqno": -1}
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 10832 keys, 11756130 bytes, temperature: kUnknown
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105397015, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 11756130, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11689312, "index_size": 38609, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 288527, "raw_average_key_size": 26, "raw_value_size": 11502781, "raw_average_value_size": 1061, "num_data_blocks": 1447, "num_entries": 10832, "num_filter_entries": 10832, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765009105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.397310) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 11756130 bytes
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.399314) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.3 rd, 103.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.9 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(145.1) write-amplify(67.0) OK, records in: 11342, records dropped: 510 output_compression: NoCompression
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.399336) EVENT_LOG_v1 {"time_micros": 1765009105399326, "job": 108, "event": "compaction_finished", "compaction_time_micros": 113849, "compaction_time_cpu_micros": 52288, "output_level": 6, "num_output_files": 1, "total_output_size": 11756130, "num_input_records": 11342, "num_output_records": 10832, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105399513, "job": 108, "event": "table_file_deletion", "file_number": 170}
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105402530, "job": 108, "event": "table_file_deletion", "file_number": 168}
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.282474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.402616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.402623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.402626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.402629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:18:25.402632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:25.443 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '98'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:18:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:25 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:25.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:25.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:27 compute-1 ceph-mon[81689]: pgmap v3779: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:27 compute-1 nova_compute[226101]: 2025-12-06 08:18:27.674 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:27.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:27 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:27.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:27 compute-1 nova_compute[226101]: 2025-12-06 08:18:27.781 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:29 compute-1 ceph-mon[81689]: pgmap v3780: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:29 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:29.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:29.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:30 compute-1 sshd-session[311106]: Received disconnect from 154.209.4.183 port 38360:11: Bye Bye [preauth]
Dec 06 08:18:30 compute-1 sshd-session[311106]: Disconnected from authenticating user root 154.209.4.183 port 38360 [preauth]
Dec 06 08:18:30 compute-1 sshd-session[311108]: Received disconnect from 154.219.116.39 port 44150:11: Bye Bye [preauth]
Dec 06 08:18:30 compute-1 sshd-session[311108]: Disconnected from authenticating user root 154.219.116.39 port 44150 [preauth]
Dec 06 08:18:31 compute-1 ceph-mon[81689]: pgmap v3781: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2296411019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:18:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2222883412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:18:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:31.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:31 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:31.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:32 compute-1 nova_compute[226101]: 2025-12-06 08:18:32.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:32 compute-1 nova_compute[226101]: 2025-12-06 08:18:32.677 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:32 compute-1 nova_compute[226101]: 2025-12-06 08:18:32.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:33 compute-1 ceph-mon[81689]: pgmap v3782: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:33 compute-1 sshd-session[311110]: Received disconnect from 186.96.151.198 port 57522:11: Bye Bye [preauth]
Dec 06 08:18:33 compute-1 sshd-session[311110]: Disconnected from authenticating user root 186.96.151.198 port 57522 [preauth]
Dec 06 08:18:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:33.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:33 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:33.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:34 compute-1 ceph-mon[81689]: pgmap v3783: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 1.2 MiB/s wr, 16 op/s
Dec 06 08:18:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:35.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:35 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:35.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:37 compute-1 ceph-mon[81689]: pgmap v3784: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 58 op/s
Dec 06 08:18:37 compute-1 nova_compute[226101]: 2025-12-06 08:18:37.681 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:37.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:37 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:37.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:37 compute-1 nova_compute[226101]: 2025-12-06 08:18:37.787 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:39 compute-1 nova_compute[226101]: 2025-12-06 08:18:39.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:39 compute-1 nova_compute[226101]: 2025-12-06 08:18:39.627 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:39 compute-1 nova_compute[226101]: 2025-12-06 08:18:39.628 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:39 compute-1 nova_compute[226101]: 2025-12-06 08:18:39.629 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:39 compute-1 nova_compute[226101]: 2025-12-06 08:18:39.629 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:18:39 compute-1 nova_compute[226101]: 2025-12-06 08:18:39.630 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:39.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:39 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:39.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:39 compute-1 ceph-mon[81689]: pgmap v3785: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 75 op/s
Dec 06 08:18:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:18:40 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/541583109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:40 compute-1 nova_compute[226101]: 2025-12-06 08:18:40.141 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:18:40 compute-1 nova_compute[226101]: 2025-12-06 08:18:40.222 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000d2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:18:40 compute-1 nova_compute[226101]: 2025-12-06 08:18:40.222 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000d2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:18:40 compute-1 nova_compute[226101]: 2025-12-06 08:18:40.391 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:18:40 compute-1 nova_compute[226101]: 2025-12-06 08:18:40.392 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4071MB free_disk=20.92148208618164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:18:40 compute-1 nova_compute[226101]: 2025-12-06 08:18:40.392 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:40 compute-1 nova_compute[226101]: 2025-12-06 08:18:40.392 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:40 compute-1 nova_compute[226101]: 2025-12-06 08:18:40.485 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance fd33cb27-7fd5-4a29-9653-903deb886052 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:18:40 compute-1 nova_compute[226101]: 2025-12-06 08:18:40.486 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:18:40 compute-1 nova_compute[226101]: 2025-12-06 08:18:40.486 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:18:40 compute-1 nova_compute[226101]: 2025-12-06 08:18:40.598 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:18:41 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1695264830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.046 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.055 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.076 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:18:41 compute-1 ceph-mon[81689]: pgmap v3786: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 74 op/s
Dec 06 08:18:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/541583109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.105 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.106 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.536 226109 DEBUG oslo_concurrency.lockutils [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "fd33cb27-7fd5-4a29-9653-903deb886052" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.536 226109 DEBUG oslo_concurrency.lockutils [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.537 226109 DEBUG oslo_concurrency.lockutils [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.537 226109 DEBUG oslo_concurrency.lockutils [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.538 226109 DEBUG oslo_concurrency.lockutils [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.540 226109 INFO nova.compute.manager [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Terminating instance
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.542 226109 DEBUG nova.compute.manager [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:18:41 compute-1 kernel: tap9736c5e1-fb (unregistering): left promiscuous mode
Dec 06 08:18:41 compute-1 NetworkManager[49031]: <info>  [1765009121.7238] device (tap9736c5e1-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.735 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:41 compute-1 ovn_controller[130279]: 2025-12-06T08:18:41Z|00836|binding|INFO|Releasing lport 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 from this chassis (sb_readonly=0)
Dec 06 08:18:41 compute-1 ovn_controller[130279]: 2025-12-06T08:18:41Z|00837|binding|INFO|Setting lport 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 down in Southbound
Dec 06 08:18:41 compute-1 ovn_controller[130279]: 2025-12-06T08:18:41Z|00838|binding|INFO|Removing iface tap9736c5e1-fb ovn-installed in OVS
Dec 06 08:18:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.740 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:41.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:41.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:41.754 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:69:90 10.100.0.9'], port_security=['fa:16:3e:40:69:90 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fd33cb27-7fd5-4a29-9653-903deb886052', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1fac348-226c-4298-a6da-922652cc16d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=9736c5e1-fb9a-4cb8-a7cb-c2f972a26552) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:18:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:41.757 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a unbound from our chassis
Dec 06 08:18:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:41.761 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4ef1374-9c77-45a7-8776-50aa60c7d84a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:18:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:41.762 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[04854d06-f473-45e0-a54c-0ed36c5ba186]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:41.764 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace which is not needed anymore
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.772 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:41 compute-1 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000d2.scope: Deactivated successfully.
Dec 06 08:18:41 compute-1 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000d2.scope: Consumed 16.169s CPU time.
Dec 06 08:18:41 compute-1 systemd-machined[190302]: Machine qemu-95-instance-000000d2 terminated.
Dec 06 08:18:41 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[310833]: [NOTICE]   (310840) : haproxy version is 2.8.14-c23fe91
Dec 06 08:18:41 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[310833]: [NOTICE]   (310840) : path to executable is /usr/sbin/haproxy
Dec 06 08:18:41 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[310833]: [WARNING]  (310840) : Exiting Master process...
Dec 06 08:18:41 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[310833]: [WARNING]  (310840) : Exiting Master process...
Dec 06 08:18:41 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[310833]: [ALERT]    (310840) : Current worker (310842) exited with code 143 (Terminated)
Dec 06 08:18:41 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[310833]: [WARNING]  (310840) : All workers exited. Exiting... (0)
Dec 06 08:18:41 compute-1 systemd[1]: libpod-c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc.scope: Deactivated successfully.
Dec 06 08:18:41 compute-1 podman[311184]: 2025-12-06 08:18:41.911340002 +0000 UTC m=+0.044577726 container died c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:18:41 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc-userdata-shm.mount: Deactivated successfully.
Dec 06 08:18:41 compute-1 systemd[1]: var-lib-containers-storage-overlay-c34ea82f41df9148c618a775805d4238f16c53a528fdaebcf1584bb756987156-merged.mount: Deactivated successfully.
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.965 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:41 compute-1 podman[311184]: 2025-12-06 08:18:41.968386452 +0000 UTC m=+0.101624176 container cleanup c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.971 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:41 compute-1 systemd[1]: libpod-conmon-c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc.scope: Deactivated successfully.
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.980 226109 INFO nova.virt.libvirt.driver [-] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Instance destroyed successfully.
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.980 226109 DEBUG nova.objects.instance [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'resources' on Instance uuid fd33cb27-7fd5-4a29-9653-903deb886052 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.997 226109 DEBUG nova.virt.libvirt.vif [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:17:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-492005604',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-492005604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-492005604',id=210,image_ref='c5cbac11-d6e7-4196-b42c-615f51a1fae2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkaOSMRiE9x9L/H0T0J1x895cgNVGINGUxjAGosCuIMsXIXt7lpP2doMYDuxla2yuoKptfNo3BL3gjaNczVUkm0D0R4c6kqODJGtsXBruxfwZsXmNdjHABOzVtdB+6Pvg==',key_name='tempest-keypair-1185520304',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:18:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-heg1sts2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-97496240',image_owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:18:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=fd33cb27-7fd5-4a29-9653-903deb886052,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "address": "fa:16:3e:40:69:90", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9736c5e1-fb", "ovs_interfaceid": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.998 226109 DEBUG nova.network.os_vif_util [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "address": "fa:16:3e:40:69:90", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9736c5e1-fb", "ovs_interfaceid": "9736c5e1-fb9a-4cb8-a7cb-c2f972a26552", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.998 226109 DEBUG nova.network.os_vif_util [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:69:90,bridge_name='br-int',has_traffic_filtering=True,id=9736c5e1-fb9a-4cb8-a7cb-c2f972a26552,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9736c5e1-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:18:41 compute-1 nova_compute[226101]: 2025-12-06 08:18:41.999 226109 DEBUG os_vif [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:69:90,bridge_name='br-int',has_traffic_filtering=True,id=9736c5e1-fb9a-4cb8-a7cb-c2f972a26552,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9736c5e1-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.001 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.001 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9736c5e1-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.002 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.004 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.006 226109 INFO os_vif [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:69:90,bridge_name='br-int',has_traffic_filtering=True,id=9736c5e1-fb9a-4cb8-a7cb-c2f972a26552,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9736c5e1-fb')
Dec 06 08:18:42 compute-1 podman[311223]: 2025-12-06 08:18:42.034785852 +0000 UTC m=+0.045269015 container remove c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:18:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:42.040 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac6edfd-4209-4a9b-8806-3f2fdb9e59fe]: (4, ('Sat Dec  6 08:18:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc)\nc2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc\nSat Dec  6 08:18:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (c2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc)\nc2383b83c0767e4d19fc7c2ed9d58c0e8aae8020872fccdb2bacf8a5c5f080bc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:42.042 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9379f562-e846-465f-94fb-b2aa7e18bb85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:42.043 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.044 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:42 compute-1 kernel: tapb4ef1374-90: left promiscuous mode
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.056 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.058 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:42.060 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0826c7-f39f-41fb-82e6-a8166d04be46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:42.074 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c3aade15-a311-47a5-8647-9eab0bd4b1b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:42.075 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[37ffc677-358b-488f-a8e2-e9ac8c545b06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:42.089 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d2e574-a60b-4281-be87-fc3ef6dd7019]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936326, 'reachable_time': 26578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311258, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:42 compute-1 systemd[1]: run-netns-ovnmeta\x2db4ef1374\x2d9c77\x2d45a7\x2d8776\x2d50aa60c7d84a.mount: Deactivated successfully.
Dec 06 08:18:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:42.094 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:18:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:42.095 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae5430c-0a92-4c75-8b6e-76086d48886f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:18:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1695264830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.203 226109 DEBUG nova.compute.manager [req-6894cb0b-c7fd-45c4-a3da-f1f3ed5ea546 req-538532dc-4d38-4dc8-8c1b-7b62c11beac6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Received event network-vif-unplugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.203 226109 DEBUG oslo_concurrency.lockutils [req-6894cb0b-c7fd-45c4-a3da-f1f3ed5ea546 req-538532dc-4d38-4dc8-8c1b-7b62c11beac6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.204 226109 DEBUG oslo_concurrency.lockutils [req-6894cb0b-c7fd-45c4-a3da-f1f3ed5ea546 req-538532dc-4d38-4dc8-8c1b-7b62c11beac6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.204 226109 DEBUG oslo_concurrency.lockutils [req-6894cb0b-c7fd-45c4-a3da-f1f3ed5ea546 req-538532dc-4d38-4dc8-8c1b-7b62c11beac6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.204 226109 DEBUG nova.compute.manager [req-6894cb0b-c7fd-45c4-a3da-f1f3ed5ea546 req-538532dc-4d38-4dc8-8c1b-7b62c11beac6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] No waiting events found dispatching network-vif-unplugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.205 226109 DEBUG nova.compute.manager [req-6894cb0b-c7fd-45c4-a3da-f1f3ed5ea546 req-538532dc-4d38-4dc8-8c1b-7b62c11beac6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Received event network-vif-unplugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.211 226109 INFO nova.virt.libvirt.driver [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Deleting instance files /var/lib/nova/instances/fd33cb27-7fd5-4a29-9653-903deb886052_del
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.212 226109 INFO nova.virt.libvirt.driver [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Deletion of /var/lib/nova/instances/fd33cb27-7fd5-4a29-9653-903deb886052_del complete
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.262 226109 INFO nova.compute.manager [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Took 0.72 seconds to destroy the instance on the hypervisor.
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.263 226109 DEBUG oslo.service.loopingcall [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.263 226109 DEBUG nova.compute.manager [-] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.263 226109 DEBUG nova.network.neutron [-] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:18:42 compute-1 nova_compute[226101]: 2025-12-06 08:18:42.788 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:43 compute-1 ceph-mon[81689]: pgmap v3787: 305 pgs: 305 active+clean; 347 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 219 KiB/s wr, 80 op/s
Dec 06 08:18:43 compute-1 nova_compute[226101]: 2025-12-06 08:18:43.151 226109 DEBUG nova.network.neutron [-] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:18:43 compute-1 nova_compute[226101]: 2025-12-06 08:18:43.167 226109 INFO nova.compute.manager [-] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Took 0.90 seconds to deallocate network for instance.
Dec 06 08:18:43 compute-1 nova_compute[226101]: 2025-12-06 08:18:43.366 226109 INFO nova.compute.manager [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Took 0.20 seconds to detach 1 volumes for instance.
Dec 06 08:18:43 compute-1 nova_compute[226101]: 2025-12-06 08:18:43.368 226109 DEBUG nova.compute.manager [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Deleting volume: 43e97266-6e92-4724-9224-f4b44696e11a _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Dec 06 08:18:43 compute-1 sshd-session[311260]: Connection closed by 43.225.159.111 port 42650
Dec 06 08:18:43 compute-1 nova_compute[226101]: 2025-12-06 08:18:43.562 226109 DEBUG oslo_concurrency.lockutils [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:43 compute-1 nova_compute[226101]: 2025-12-06 08:18:43.562 226109 DEBUG oslo_concurrency.lockutils [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:43 compute-1 nova_compute[226101]: 2025-12-06 08:18:43.605 226109 DEBUG oslo_concurrency.processutils [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:43.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:43 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:43.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:18:44 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4041479394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.070 226109 DEBUG oslo_concurrency.processutils [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.079 226109 DEBUG nova.compute.provider_tree [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.098 226109 DEBUG nova.scheduler.client.report [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.134 226109 DEBUG oslo_concurrency.lockutils [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4041479394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.173 226109 INFO nova.scheduler.client.report [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Deleted allocations for instance fd33cb27-7fd5-4a29-9653-903deb886052
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.255 226109 DEBUG oslo_concurrency.lockutils [None req-d90d903e-f672-44a5-991a-a1fce24d9759 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.286 226109 DEBUG nova.compute.manager [req-d5f6ac34-172c-43c6-93e0-e533dc8e52db req-72cd75db-7e3a-458a-8bf6-ac7d2e6f6560 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Received event network-vif-plugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.287 226109 DEBUG oslo_concurrency.lockutils [req-d5f6ac34-172c-43c6-93e0-e533dc8e52db req-72cd75db-7e3a-458a-8bf6-ac7d2e6f6560 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.288 226109 DEBUG oslo_concurrency.lockutils [req-d5f6ac34-172c-43c6-93e0-e533dc8e52db req-72cd75db-7e3a-458a-8bf6-ac7d2e6f6560 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.288 226109 DEBUG oslo_concurrency.lockutils [req-d5f6ac34-172c-43c6-93e0-e533dc8e52db req-72cd75db-7e3a-458a-8bf6-ac7d2e6f6560 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fd33cb27-7fd5-4a29-9653-903deb886052-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.289 226109 DEBUG nova.compute.manager [req-d5f6ac34-172c-43c6-93e0-e533dc8e52db req-72cd75db-7e3a-458a-8bf6-ac7d2e6f6560 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] No waiting events found dispatching network-vif-plugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.289 226109 WARNING nova.compute.manager [req-d5f6ac34-172c-43c6-93e0-e533dc8e52db req-72cd75db-7e3a-458a-8bf6-ac7d2e6f6560 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Received unexpected event network-vif-plugged-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 for instance with vm_state deleted and task_state None.
Dec 06 08:18:44 compute-1 nova_compute[226101]: 2025-12-06 08:18:44.290 226109 DEBUG nova.compute.manager [req-d5f6ac34-172c-43c6-93e0-e533dc8e52db req-72cd75db-7e3a-458a-8bf6-ac7d2e6f6560 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Received event network-vif-deleted-9736c5e1-fb9a-4cb8-a7cb-c2f972a26552 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:18:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e422 e422: 3 total, 3 up, 3 in
Dec 06 08:18:45 compute-1 ceph-mon[81689]: pgmap v3788: 305 pgs: 305 active+clean; 347 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 215 KiB/s wr, 79 op/s
Dec 06 08:18:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2974362627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:18:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2974362627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:18:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:45.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:45.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:46 compute-1 podman[311283]: 2025-12-06 08:18:46.091559572 +0000 UTC m=+0.071811176 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:18:46 compute-1 nova_compute[226101]: 2025-12-06 08:18:46.107 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:46 compute-1 nova_compute[226101]: 2025-12-06 08:18:46.107 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:18:46 compute-1 nova_compute[226101]: 2025-12-06 08:18:46.107 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:18:46 compute-1 nova_compute[226101]: 2025-12-06 08:18:46.123 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:18:46 compute-1 podman[311284]: 2025-12-06 08:18:46.124489485 +0000 UTC m=+0.090531308 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:18:46 compute-1 ceph-mon[81689]: osdmap e422: 3 total, 3 up, 3 in
Dec 06 08:18:46 compute-1 podman[311285]: 2025-12-06 08:18:46.20036695 +0000 UTC m=+0.160688189 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 08:18:47 compute-1 nova_compute[226101]: 2025-12-06 08:18:47.005 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:47 compute-1 ceph-mon[81689]: pgmap v3790: 305 pgs: 305 active+clean; 334 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 616 KiB/s wr, 92 op/s
Dec 06 08:18:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:47 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:47.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:47.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:47 compute-1 nova_compute[226101]: 2025-12-06 08:18:47.791 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:48 compute-1 sshd-session[311347]: Received disconnect from 101.100.194.199 port 41878:11: Bye Bye [preauth]
Dec 06 08:18:48 compute-1 sshd-session[311347]: Disconnected from authenticating user root 101.100.194.199 port 41878 [preauth]
Dec 06 08:18:49 compute-1 ceph-mon[81689]: pgmap v3791: 305 pgs: 305 active+clean; 330 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 1.0 MiB/s wr, 72 op/s
Dec 06 08:18:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:18:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1304841708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:18:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:18:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1304841708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:18:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:49.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:49 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:49.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1304841708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:18:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1304841708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:18:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/274917318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:50 compute-1 sshd-session[311112]: Connection closed by 165.154.55.146 port 48904 [preauth]
Dec 06 08:18:51 compute-1 ceph-mon[81689]: pgmap v3792: 305 pgs: 305 active+clean; 330 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 1.0 MiB/s wr, 72 op/s
Dec 06 08:18:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:51 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:51.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:51.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:52 compute-1 nova_compute[226101]: 2025-12-06 08:18:52.008 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:52 compute-1 nova_compute[226101]: 2025-12-06 08:18:52.794 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:53 compute-1 ceph-mon[81689]: pgmap v3793: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 457 KiB/s rd, 2.6 MiB/s wr, 166 op/s
Dec 06 08:18:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:53.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:53 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:53.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e423 e423: 3 total, 3 up, 3 in
Dec 06 08:18:55 compute-1 ceph-mon[81689]: pgmap v3794: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 457 KiB/s rd, 2.6 MiB/s wr, 166 op/s
Dec 06 08:18:55 compute-1 ceph-mon[81689]: osdmap e423: 3 total, 3 up, 3 in
Dec 06 08:18:55 compute-1 nova_compute[226101]: 2025-12-06 08:18:55.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:55 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:55.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:55.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4170588744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:56 compute-1 nova_compute[226101]: 2025-12-06 08:18:56.979 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009121.9776146, fd33cb27-7fd5-4a29-9653-903deb886052 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:18:56 compute-1 nova_compute[226101]: 2025-12-06 08:18:56.980 226109 INFO nova.compute.manager [-] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] VM Stopped (Lifecycle Event)
Dec 06 08:18:57 compute-1 nova_compute[226101]: 2025-12-06 08:18:57.010 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:57 compute-1 nova_compute[226101]: 2025-12-06 08:18:57.108 226109 DEBUG nova.compute.manager [None req-f44e8f5c-b1c2-4f10-960b-809c87ae9d5c - - - - - -] [instance: fd33cb27-7fd5-4a29-9653-903deb886052] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:18:57 compute-1 ceph-mon[81689]: pgmap v3796: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 411 KiB/s rd, 2.2 MiB/s wr, 123 op/s
Dec 06 08:18:57 compute-1 nova_compute[226101]: 2025-12-06 08:18:57.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:57 compute-1 nova_compute[226101]: 2025-12-06 08:18:57.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:57 compute-1 nova_compute[226101]: 2025-12-06 08:18:57.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:18:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:18:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:57 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:57.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:57.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:57 compute-1 nova_compute[226101]: 2025-12-06 08:18:57.797 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4257723763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:59.063 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=99, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=98) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:18:59 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:18:59.064 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:18:59 compute-1 nova_compute[226101]: 2025-12-06 08:18:59.094 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:59 compute-1 ceph-mon[81689]: pgmap v3797: 305 pgs: 305 active+clean; 226 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 389 KiB/s rd, 1.8 MiB/s wr, 118 op/s
Dec 06 08:18:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e424 e424: 3 total, 3 up, 3 in
Dec 06 08:18:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:59.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:18:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:59.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:00 compute-1 ceph-mon[81689]: osdmap e424: 3 total, 3 up, 3 in
Dec 06 08:19:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/315860512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3436298347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/870741633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:00 compute-1 nova_compute[226101]: 2025-12-06 08:19:00.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:01 compute-1 ceph-mon[81689]: pgmap v3799: 305 pgs: 305 active+clean; 226 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 21 KiB/s wr, 22 op/s
Dec 06 08:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:01.696 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:01.697 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:01.697 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:01.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:01.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:02 compute-1 nova_compute[226101]: 2025-12-06 08:19:02.012 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:02 compute-1 sshd-session[311349]: Received disconnect from 186.87.166.141 port 35432:11: Bye Bye [preauth]
Dec 06 08:19:02 compute-1 sshd-session[311349]: Disconnected from authenticating user root 186.87.166.141 port 35432 [preauth]
Dec 06 08:19:02 compute-1 nova_compute[226101]: 2025-12-06 08:19:02.798 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:03 compute-1 ceph-mon[81689]: pgmap v3800: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 50 KiB/s rd, 24 KiB/s wr, 71 op/s
Dec 06 08:19:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:03.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:03.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:04.066 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '99'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:19:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 e425: 3 total, 3 up, 3 in
Dec 06 08:19:05 compute-1 ceph-mon[81689]: pgmap v3801: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 21 KiB/s wr, 61 op/s
Dec 06 08:19:05 compute-1 ceph-mon[81689]: osdmap e425: 3 total, 3 up, 3 in
Dec 06 08:19:05 compute-1 nova_compute[226101]: 2025-12-06 08:19:05.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:05 compute-1 nova_compute[226101]: 2025-12-06 08:19:05.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:05.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:05.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1645654584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:07 compute-1 nova_compute[226101]: 2025-12-06 08:19:07.015 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:07 compute-1 ceph-mon[81689]: pgmap v3803: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 4.9 KiB/s wr, 77 op/s
Dec 06 08:19:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:07 compute-1 sshd-session[311351]: Received disconnect from 91.144.158.231 port 5052:11: Bye Bye [preauth]
Dec 06 08:19:07 compute-1 sshd-session[311351]: Disconnected from authenticating user root 91.144.158.231 port 5052 [preauth]
Dec 06 08:19:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:19:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:07.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:07 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:07.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:07 compute-1 nova_compute[226101]: 2025-12-06 08:19:07.975 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:09 compute-1 ceph-mon[81689]: pgmap v3804: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 4.7 KiB/s wr, 86 op/s
Dec 06 08:19:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:19:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/570818404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:19:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:19:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/570818404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:19:09 compute-1 nova_compute[226101]: 2025-12-06 08:19:09.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:19:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:09 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/570818404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:19:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/570818404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:19:11 compute-1 nova_compute[226101]: 2025-12-06 08:19:11.064 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:11 compute-1 ceph-mon[81689]: pgmap v3805: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 3.9 KiB/s wr, 72 op/s
Dec 06 08:19:11 compute-1 nova_compute[226101]: 2025-12-06 08:19:11.246 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:11.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:19:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:11 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:12 compute-1 nova_compute[226101]: 2025-12-06 08:19:12.016 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:12 compute-1 nova_compute[226101]: 2025-12-06 08:19:12.976 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:13 compute-1 ceph-mon[81689]: pgmap v3806: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Dec 06 08:19:13 compute-1 sudo[311354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:13 compute-1 sudo[311354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:13 compute-1 sudo[311354]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:13 compute-1 sudo[311379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:19:13 compute-1 sudo[311379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:13 compute-1 sudo[311379]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:13 compute-1 sudo[311404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:13 compute-1 sudo[311404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:13 compute-1 sudo[311404]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:13 compute-1 sudo[311429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:19:13 compute-1 sudo[311429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:19:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:13 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:14 compute-1 sudo[311429]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:19:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:19:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:19:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:19:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:19:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:19:15 compute-1 ceph-mon[81689]: pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Dec 06 08:19:15 compute-1 sshd-session[311487]: Received disconnect from 124.18.141.70 port 46170:11: Bye Bye [preauth]
Dec 06 08:19:15 compute-1 sshd-session[311487]: Disconnected from authenticating user root 124.18.141.70 port 46170 [preauth]
Dec 06 08:19:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:19:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:15 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:16 compute-1 ceph-mon[81689]: pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 KiB/s wr, 37 op/s
Dec 06 08:19:17 compute-1 nova_compute[226101]: 2025-12-06 08:19:17.018 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:17 compute-1 podman[311490]: 2025-12-06 08:19:17.096215283 +0000 UTC m=+0.073837740 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 08:19:17 compute-1 podman[311489]: 2025-12-06 08:19:17.104811534 +0000 UTC m=+0.086791008 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:19:17 compute-1 podman[311491]: 2025-12-06 08:19:17.16324034 +0000 UTC m=+0.118956460 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 08:19:17 compute-1 nova_compute[226101]: 2025-12-06 08:19:17.979 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:19:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:17 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:18 compute-1 nova_compute[226101]: 2025-12-06 08:19:18.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:18 compute-1 ceph-mon[81689]: pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 15 op/s
Dec 06 08:19:19 compute-1 sshd-session[311551]: Received disconnect from 106.51.92.114 port 47557:11: Bye Bye [preauth]
Dec 06 08:19:19 compute-1 sshd-session[311551]: Disconnected from authenticating user root 106.51.92.114 port 47557 [preauth]
Dec 06 08:19:19 compute-1 sudo[311553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:19 compute-1 sudo[311553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:19 compute-1 sudo[311553]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:19.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:20 compute-1 sudo[311578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:19:20 compute-1 sudo[311578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:20 compute-1 sudo[311578]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:19:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:19:20 compute-1 ceph-mon[81689]: pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Dec 06 08:19:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:21.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:21.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:22 compute-1 nova_compute[226101]: 2025-12-06 08:19:22.021 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:22 compute-1 nova_compute[226101]: 2025-12-06 08:19:22.981 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:23 compute-1 ceph-mon[81689]: pgmap v3811: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:19:23 compute-1 nova_compute[226101]: 2025-12-06 08:19:23.607 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:23 compute-1 nova_compute[226101]: 2025-12-06 08:19:23.608 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:19:23 compute-1 nova_compute[226101]: 2025-12-06 08:19:23.858 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:19:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:23.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:23.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:25 compute-1 ceph-mon[81689]: pgmap v3812: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:19:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:25.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:26.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:27 compute-1 nova_compute[226101]: 2025-12-06 08:19:27.023 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:27 compute-1 ceph-mon[81689]: pgmap v3813: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:19:27 compute-1 nova_compute[226101]: 2025-12-06 08:19:27.983 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:28.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:28.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:28 compute-1 nova_compute[226101]: 2025-12-06 08:19:28.705 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "f1b83445-6063-4baa-b214-de9700ef24ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:28 compute-1 nova_compute[226101]: 2025-12-06 08:19:28.706 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:28 compute-1 nova_compute[226101]: 2025-12-06 08:19:28.781 226109 DEBUG nova.compute.manager [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:19:28 compute-1 nova_compute[226101]: 2025-12-06 08:19:28.913 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:28 compute-1 nova_compute[226101]: 2025-12-06 08:19:28.914 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:28 compute-1 nova_compute[226101]: 2025-12-06 08:19:28.931 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:19:28 compute-1 nova_compute[226101]: 2025-12-06 08:19:28.932 226109 INFO nova.compute.claims [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Claim successful on node compute-1.ctlplane.example.com
Dec 06 08:19:29 compute-1 nova_compute[226101]: 2025-12-06 08:19:29.092 226109 DEBUG oslo_concurrency.processutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:29 compute-1 ceph-mon[81689]: pgmap v3814: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 08:19:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:19:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2426268881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:29 compute-1 nova_compute[226101]: 2025-12-06 08:19:29.555 226109 DEBUG oslo_concurrency.processutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:29 compute-1 nova_compute[226101]: 2025-12-06 08:19:29.562 226109 DEBUG nova.compute.provider_tree [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:19:29 compute-1 nova_compute[226101]: 2025-12-06 08:19:29.616 226109 DEBUG nova.scheduler.client.report [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:19:29 compute-1 nova_compute[226101]: 2025-12-06 08:19:29.673 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:29 compute-1 nova_compute[226101]: 2025-12-06 08:19:29.674 226109 DEBUG nova.compute.manager [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:19:29 compute-1 nova_compute[226101]: 2025-12-06 08:19:29.740 226109 DEBUG nova.compute.manager [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:19:29 compute-1 nova_compute[226101]: 2025-12-06 08:19:29.740 226109 DEBUG nova.network.neutron [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:19:29 compute-1 nova_compute[226101]: 2025-12-06 08:19:29.778 226109 INFO nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:19:29 compute-1 nova_compute[226101]: 2025-12-06 08:19:29.833 226109 DEBUG nova.compute.manager [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:19:29 compute-1 nova_compute[226101]: 2025-12-06 08:19:29.952 226109 INFO nova.virt.block_device [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Booting with volume 51b925ec-146c-45cc-ba59-9db845a18e81 at /dev/vda
Dec 06 08:19:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:30.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:30.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.183 226109 DEBUG os_brick.utils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.185 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.195 226109 DEBUG nova.policy [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e8feb4540af4e2caa45a88a9202dbe2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.203 236517 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.204 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[e2da0a85-18d8-4d28-a94a-8c9475eae483]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.205 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.217 236517 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.218 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[cd01749e-86e1-4c4a-8002-d33998e9c461]: (4, ('InitiatorName=iqn.1994-05.com.redhat:7842346547e0', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.219 236517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.228 236517 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.228 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[786163c2-f62c-4a5d-8567-a5ccd8c76b51]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.230 236517 DEBUG oslo.privsep.daemon [-] privsep: reply[41a05be7-8df1-4f27-ad7f-112cabe11c4e]: (4, 'effe0b74-d2bb-436f-b621-5e7c5f665fb5') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.230 226109 DEBUG oslo_concurrency.processutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.259 226109 DEBUG oslo_concurrency.processutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.262 226109 DEBUG os_brick.initiator.connectors.lightos [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.262 226109 DEBUG os_brick.initiator.connectors.lightos [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.263 226109 DEBUG os_brick.initiator.connectors.lightos [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.263 226109 DEBUG os_brick.utils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:7842346547e0', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'effe0b74-d2bb-436f-b621-5e7c5f665fb5', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 08:19:30 compute-1 nova_compute[226101]: 2025-12-06 08:19:30.264 226109 DEBUG nova.virt.block_device [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Updating existing volume attachment record: f15f1579-5bca-46a9-99e4-f9b965870bfa _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 08:19:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:19:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.5 total, 600.0 interval
                                           Cumulative writes: 16K writes, 83K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1504 writes, 7459 keys, 1504 commit groups, 1.0 writes per commit group, ingest: 15.25 MB, 0.03 MB/s
                                           Interval WAL: 1504 writes, 1504 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     17.6      5.92              0.31        54    0.110       0      0       0.0       0.0
                                             L6      1/0   11.21 MB   0.0      0.6     0.1      0.5       0.6      0.0       0.0   5.4     46.6     40.0     14.13              1.59        53    0.267    435K    28K       0.0       0.0
                                            Sum      1/0   11.21 MB   0.0      0.6     0.1      0.5       0.7      0.1       0.0   6.4     32.8     33.4     20.05              1.89       107    0.187    435K    28K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.6    117.4    118.6      0.70              0.26        12    0.058     67K   3094       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.6      0.0       0.0   0.0     46.6     40.0     14.13              1.59        53    0.267    435K    28K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     17.7      5.87              0.31        53    0.111       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.102, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.65 GB write, 0.10 MB/s write, 0.64 GB read, 0.10 MB/s read, 20.0 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 71.22 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000534 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4053,68.16 MB,22.4202%) FilterBlock(107,1.17 MB,0.384175%) IndexBlock(107,1.90 MB,0.624361%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 08:19:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2426268881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:31 compute-1 ceph-mon[81689]: pgmap v3815: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:19:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4163239937' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:19:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:32.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:32.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:32 compute-1 nova_compute[226101]: 2025-12-06 08:19:32.025 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:32 compute-1 nova_compute[226101]: 2025-12-06 08:19:32.089 226109 DEBUG nova.compute.manager [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:19:32 compute-1 nova_compute[226101]: 2025-12-06 08:19:32.091 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:19:32 compute-1 nova_compute[226101]: 2025-12-06 08:19:32.091 226109 INFO nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Creating image(s)
Dec 06 08:19:32 compute-1 nova_compute[226101]: 2025-12-06 08:19:32.092 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 08:19:32 compute-1 nova_compute[226101]: 2025-12-06 08:19:32.092 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Ensure instance console log exists: /var/lib/nova/instances/f1b83445-6063-4baa-b214-de9700ef24ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:19:32 compute-1 nova_compute[226101]: 2025-12-06 08:19:32.093 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:32 compute-1 nova_compute[226101]: 2025-12-06 08:19:32.093 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:32 compute-1 nova_compute[226101]: 2025-12-06 08:19:32.094 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:32 compute-1 nova_compute[226101]: 2025-12-06 08:19:32.175 226109 DEBUG nova.network.neutron [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Successfully created port: 0552fe40-217a-4175-8342-6ab3a50ce4d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:19:32 compute-1 ceph-mon[81689]: pgmap v3816: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:19:32 compute-1 nova_compute[226101]: 2025-12-06 08:19:32.984 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:33 compute-1 nova_compute[226101]: 2025-12-06 08:19:33.783 226109 DEBUG nova.network.neutron [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Successfully updated port: 0552fe40-217a-4175-8342-6ab3a50ce4d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:19:33 compute-1 nova_compute[226101]: 2025-12-06 08:19:33.807 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:19:33 compute-1 nova_compute[226101]: 2025-12-06 08:19:33.808 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquired lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:19:33 compute-1 nova_compute[226101]: 2025-12-06 08:19:33.808 226109 DEBUG nova.network.neutron [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:19:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:34.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:34 compute-1 nova_compute[226101]: 2025-12-06 08:19:34.015 226109 DEBUG nova.compute.manager [req-5422eb2a-73d7-418d-b17a-4745cff80d56 req-41bfe751-218e-48f4-85aa-f83707f34e35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Received event network-changed-0552fe40-217a-4175-8342-6ab3a50ce4d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:19:34 compute-1 nova_compute[226101]: 2025-12-06 08:19:34.015 226109 DEBUG nova.compute.manager [req-5422eb2a-73d7-418d-b17a-4745cff80d56 req-41bfe751-218e-48f4-85aa-f83707f34e35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Refreshing instance network info cache due to event network-changed-0552fe40-217a-4175-8342-6ab3a50ce4d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:19:34 compute-1 nova_compute[226101]: 2025-12-06 08:19:34.015 226109 DEBUG oslo_concurrency.lockutils [req-5422eb2a-73d7-418d-b17a-4745cff80d56 req-41bfe751-218e-48f4-85aa-f83707f34e35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:19:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:34.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:34 compute-1 nova_compute[226101]: 2025-12-06 08:19:34.176 226109 DEBUG nova.network.neutron [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:19:35 compute-1 ceph-mon[81689]: pgmap v3817: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:35 compute-1 sshd-session[311632]: Received disconnect from 45.120.216.232 port 55244:11: Bye Bye [preauth]
Dec 06 08:19:35 compute-1 sshd-session[311632]: Disconnected from authenticating user root 45.120.216.232 port 55244 [preauth]
Dec 06 08:19:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:36.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:36.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:37 compute-1 ceph-mon[81689]: pgmap v3818: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.028 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.209 226109 DEBUG nova.network.neutron [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Updating instance_info_cache with network_info: [{"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.274 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Releasing lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.275 226109 DEBUG nova.compute.manager [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Instance network_info: |[{"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.276 226109 DEBUG oslo_concurrency.lockutils [req-5422eb2a-73d7-418d-b17a-4745cff80d56 req-41bfe751-218e-48f4-85aa-f83707f34e35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.277 226109 DEBUG nova.network.neutron [req-5422eb2a-73d7-418d-b17a-4745cff80d56 req-41bfe751-218e-48f4-85aa-f83707f34e35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Refreshing network info cache for port 0552fe40-217a-4175-8342-6ab3a50ce4d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.280 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Start _get_guest_xml network_info=[{"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-51b925ec-146c-45cc-ba59-9db845a18e81', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '51b925ec-146c-45cc-ba59-9db845a18e81', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f1b83445-6063-4baa-b214-de9700ef24ac', 'attached_at': '', 'detached_at': '', 'volume_id': '51b925ec-146c-45cc-ba59-9db845a18e81', 'serial': '51b925ec-146c-45cc-ba59-9db845a18e81'}, 'mount_device': '/dev/vda', 'boot_index': 0, 'attachment_id': 'f15f1579-5bca-46a9-99e4-f9b965870bfa', 'guest_format': None, 'delete_on_termination': False, 'device_type': 'disk', 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.285 226109 WARNING nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.291 226109 DEBUG nova.virt.libvirt.host [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.292 226109 DEBUG nova.virt.libvirt.host [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.295 226109 DEBUG nova.virt.libvirt.host [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.296 226109 DEBUG nova.virt.libvirt.host [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.297 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.297 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.298 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.298 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.298 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.298 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.299 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.299 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.299 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.299 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.299 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.300 226109 DEBUG nova.virt.hardware [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.336 226109 DEBUG nova.storage.rbd_utils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image f1b83445-6063-4baa-b214-de9700ef24ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.340 226109 DEBUG oslo_concurrency.processutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:19:37 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/374432018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.781 226109 DEBUG oslo_concurrency.processutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.817 226109 DEBUG nova.virt.libvirt.vif [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:19:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1416313409',display_name='tempest-TestVolumeBootPattern-server-1416313409',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1416313409',id=212,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvUl3Ab4ESWezLZ9mehuTavMygXDhT0chVOH5OGNfzBJ6GphwodjSkpQcbaa1ADoOOfJ6+3BcKIVxorR3UxI6tyiW7Q3SFHkhHBjCjD54foFQ6i6sfCU/p7OcBbQ12cuw==',key_name='tempest-TestVolumeBootPattern-2075529576',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-qtx2qm50',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:19:29Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=f1b83445-6063-4baa-b214-de9700ef24ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.817 226109 DEBUG nova.network.os_vif_util [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.818 226109 DEBUG nova.network.os_vif_util [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:11:92,bridge_name='br-int',has_traffic_filtering=True,id=0552fe40-217a-4175-8342-6ab3a50ce4d2,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0552fe40-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.819 226109 DEBUG nova.objects.instance [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'pci_devices' on Instance uuid f1b83445-6063-4baa-b214-de9700ef24ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.854 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:19:37 compute-1 nova_compute[226101]:   <uuid>f1b83445-6063-4baa-b214-de9700ef24ac</uuid>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   <name>instance-000000d4</name>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <nova:name>tempest-TestVolumeBootPattern-server-1416313409</nova:name>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:19:37</nova:creationTime>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <nova:user uuid="8e8feb4540af4e2caa45a88a9202dbe2">tempest-TestVolumeBootPattern-97496240-project-member</nova:user>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <nova:project uuid="4b2dc4b8729f446a9c7ac69ca446f71d">tempest-TestVolumeBootPattern-97496240</nova:project>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <nova:port uuid="0552fe40-217a-4175-8342-6ab3a50ce4d2">
Dec 06 08:19:37 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <system>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <entry name="serial">f1b83445-6063-4baa-b214-de9700ef24ac</entry>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <entry name="uuid">f1b83445-6063-4baa-b214-de9700ef24ac</entry>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     </system>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   <os>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   </os>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   <features>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   </features>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/f1b83445-6063-4baa-b214-de9700ef24ac_disk.config">
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       </source>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <source protocol="rbd" name="volumes/volume-51b925ec-146c-45cc-ba59-9db845a18e81">
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       </source>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:19:37 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <serial>51b925ec-146c-45cc-ba59-9db845a18e81</serial>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:cb:11:92"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <target dev="tap0552fe40-21"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/f1b83445-6063-4baa-b214-de9700ef24ac/console.log" append="off"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <video>
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     </video>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:19:37 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:19:37 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:19:37 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:19:37 compute-1 nova_compute[226101]: </domain>
Dec 06 08:19:37 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.855 226109 DEBUG nova.compute.manager [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Preparing to wait for external event network-vif-plugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.857 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.857 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.857 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.858 226109 DEBUG nova.virt.libvirt.vif [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:19:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1416313409',display_name='tempest-TestVolumeBootPattern-server-1416313409',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1416313409',id=212,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvUl3Ab4ESWezLZ9mehuTavMygXDhT0chVOH5OGNfzBJ6GphwodjSkpQcbaa1ADoOOfJ6+3BcKIVxorR3UxI6tyiW7Q3SFHkhHBjCjD54foFQ6i6sfCU/p7OcBbQ12cuw==',key_name='tempest-TestVolumeBootPattern-2075529576',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-qtx2qm50',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:19:29Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=f1b83445-6063-4baa-b214-de9700ef24ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.858 226109 DEBUG nova.network.os_vif_util [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.858 226109 DEBUG nova.network.os_vif_util [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:11:92,bridge_name='br-int',has_traffic_filtering=True,id=0552fe40-217a-4175-8342-6ab3a50ce4d2,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0552fe40-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.859 226109 DEBUG os_vif [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:11:92,bridge_name='br-int',has_traffic_filtering=True,id=0552fe40-217a-4175-8342-6ab3a50ce4d2,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0552fe40-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
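The four DEBUG entries above are the standard os-vif plug path: nova's own VIF model is converted to an os-vif VIFOpenVSwitch object and handed to the library. A minimal sketch of making the same call directly, assuming the os-vif ovs plugin is installed; the field values are copied from the log and the InstanceInfo is trimmed to the essentials:

    # Sketch only, not nova's code. Values come from the log entries above.
    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # loads the registered plugins (ovs, noop, ...)

    vif = VIFOpenVSwitch(
        id='0552fe40-217a-4175-8342-6ab3a50ce4d2',
        address='fa:16:3e:cb:11:92',
        bridge_name='br-int',
        vif_name='tap0552fe40-21',
        network=Network(id='b4ef1374-9c77-45a7-8776-50aa60c7d84a',
                        bridge='br-int'))
    instance = InstanceInfo(uuid='f1b83445-6063-4baa-b214-de9700ef24ac',
                            name='tempest-TestVolumeBootPattern-server-1416313409')

    os_vif.plug(vif, instance)  # emits the OVSDB transactions logged below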
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.859 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.860 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.860 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.865 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.865 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0552fe40-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.866 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0552fe40-21, col_values=(('external_ids', {'iface-id': '0552fe40-217a-4175-8342-6ab3a50ce4d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:11:92', 'vm-uuid': 'f1b83445-6063-4baa-b214-de9700ef24ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
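The AddBridgeCommand/AddPortCommand/DbSetCommand entries are ovsdbapp transactions against the local Open vSwitch database (the CLI equivalent is roughly: ovs-vsctl --may-exist add-port br-int tap0552fe40-21 -- set Interface tap0552fe40-21 external_ids:iface-id=...). A hedged sketch of replaying the same transaction with ovsdbapp; the unix socket endpoint is an assumption:

    # Sketch: connect to the local ovsdb-server and run the logged commands
    # in one transaction, much as os-vif's ovs plugin does.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap0552fe40-21', may_exist=True))
        txn.add(api.db_set('Interface', 'tap0552fe40-21',
                           ('external_ids', {
                               'iface-id': '0552fe40-217a-4175-8342-6ab3a50ce4d2',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:cb:11:92',
                               'vm-uuid': 'f1b83445-6063-4baa-b214-de9700ef24ac'})))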
Dec 06 08:19:37 compute-1 NetworkManager[49031]: <info>  [1765009177.9040] manager: (tap0552fe40-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.903 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.907 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.909 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.910 226109 INFO os_vif [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:11:92,bridge_name='br-int',has_traffic_filtering=True,id=0552fe40-217a-4175-8342-6ab3a50ce4d2,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0552fe40-21')
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.986 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.997 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.997 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.997 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No VIF found with MAC fa:16:3e:cb:11:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:19:37 compute-1 nova_compute[226101]: 2025-12-06 08:19:37.998 226109 INFO nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Using config drive
Dec 06 08:19:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:38.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:38.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:38 compute-1 nova_compute[226101]: 2025-12-06 08:19:38.028 226109 DEBUG nova.storage.rbd_utils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image f1b83445-6063-4baa-b214-de9700ef24ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:19:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/374432018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:19:39 compute-1 ceph-mon[81689]: pgmap v3819: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.184 226109 INFO nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Creating config drive at /var/lib/nova/instances/f1b83445-6063-4baa-b214-de9700ef24ac/disk.config
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.188 226109 DEBUG oslo_concurrency.processutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f1b83445-6063-4baa-b214-de9700ef24ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3ag5esv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.327 226109 DEBUG oslo_concurrency.processutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f1b83445-6063-4baa-b214-de9700ef24ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3ag5esv" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.368 226109 DEBUG nova.storage.rbd_utils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image f1b83445-6063-4baa-b214-de9700ef24ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.372 226109 DEBUG oslo_concurrency.processutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f1b83445-6063-4baa-b214-de9700ef24ac/disk.config f1b83445-6063-4baa-b214-de9700ef24ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.574 226109 DEBUG oslo_concurrency.processutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f1b83445-6063-4baa-b214-de9700ef24ac/disk.config f1b83445-6063-4baa-b214-de9700ef24ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.575 226109 INFO nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Deleting local config drive /var/lib/nova/instances/f1b83445-6063-4baa-b214-de9700ef24ac/disk.config because it was imported into RBD.
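The config-drive sequence above is three steps: build an ISO9660 image with mkisofs from a temporary metadata tree, import it into the Ceph vms pool as <uuid>_disk.config, then delete the local file. A bare sketch of the two logged commands (the real code runs them through oslo.concurrency's processutils, and the publisher string carries the full package version):

    import os
    import subprocess

    inst = 'f1b83445-6063-4baa-b214-de9700ef24ac'
    iso = '/var/lib/nova/instances/%s/disk.config' % inst

    # Build the config drive; 'config-2' is the volume label cloud-init looks for.
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-publisher', 'OpenStack Compute',
                    '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpc3ag5esv'],
                   check=True)
    # Import into RBD, then drop the local copy (as the INFO line above notes).
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    inst + '_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.unlink(iso)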
Dec 06 08:19:39 compute-1 kernel: tap0552fe40-21: entered promiscuous mode
Dec 06 08:19:39 compute-1 NetworkManager[49031]: <info>  [1765009179.6345] manager: (tap0552fe40-21): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Dec 06 08:19:39 compute-1 ovn_controller[130279]: 2025-12-06T08:19:39Z|00839|binding|INFO|Claiming lport 0552fe40-217a-4175-8342-6ab3a50ce4d2 for this chassis.
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.632 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:39 compute-1 ovn_controller[130279]: 2025-12-06T08:19:39Z|00840|binding|INFO|0552fe40-217a-4175-8342-6ab3a50ce4d2: Claiming fa:16:3e:cb:11:92 10.100.0.3
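ovn-controller's two claim messages mean the Port_Binding row for this logical port now carries this chassis. A quick way to inspect that state from the southbound database (sketch; assumes ovn-sbctl on this host can reach the SB DB):

    import subprocess

    lport = '0552fe40-217a-4175-8342-6ab3a50ce4d2'
    # Shows the chassis reference and the 'up' flag that is set a few lines below.
    print(subprocess.check_output(
        ['ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=%s' % lport], text=True))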
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.637 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.640 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.659 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:11:92 10.100.0.3'], port_security=['fa:16:3e:cb:11:92 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f1b83445-6063-4baa-b214-de9700ef24ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cd07b30-a335-4570-957e-3674d9a06120', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=0552fe40-217a-4175-8342-6ab3a50ce4d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.660 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 0552fe40-217a-4175-8342-6ab3a50ce4d2 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a bound to our chassis
Dec 06 08:19:39 compute-1 systemd-udevd[311747]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.662 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec 06 08:19:39 compute-1 systemd-machined[190302]: New machine qemu-96-instance-000000d4.
Dec 06 08:19:39 compute-1 NetworkManager[49031]: <info>  [1765009179.6771] device (tap0552fe40-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:19:39 compute-1 NetworkManager[49031]: <info>  [1765009179.6781] device (tap0552fe40-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.681 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[25193ccd-4d87-4072-ae41-52996e94743f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.682 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4ef1374-91 in ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
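Provisioning metadata for a datapath builds an ovnmeta- namespace holding one end of a veth pair: tapb4ef1374-91 lives inside the namespace, while tapb4ef1374-90 stays in the root namespace and is plugged into br-int a few lines below. A rough pyroute2 sketch of the veth step (the agent does this through privsep helpers; MAC, MTU, and addressing are omitted):

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a'
    netns.create(ns)  # the agent reuses the namespace if it already exists
    ip = IPRoute()
    ip.link('add', ifname='tapb4ef1374-90', kind='veth', peer='tapb4ef1374-91')
    inner = ip.link_lookup(ifname='tapb4ef1374-91')[0]
    ip.link('set', index=inner, net_ns_fd=ns)  # push the inner end into the namespace
    ip.link('set', index=ip.link_lookup(ifname='tapb4ef1374-90')[0], state='up')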
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.684 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4ef1374-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.684 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[685552dc-e988-43aa-969c-e00cb7c3705c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.685 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f06c66-ac97-4919-afde-67b872d6ac72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.696 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[2b42d35e-3948-4d2d-b81e-826aa5c195f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.698 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:39 compute-1 systemd[1]: Started Virtual Machine qemu-96-instance-000000d4.
Dec 06 08:19:39 compute-1 ovn_controller[130279]: 2025-12-06T08:19:39Z|00841|binding|INFO|Setting lport 0552fe40-217a-4175-8342-6ab3a50ce4d2 ovn-installed in OVS
Dec 06 08:19:39 compute-1 ovn_controller[130279]: 2025-12-06T08:19:39Z|00842|binding|INFO|Setting lport 0552fe40-217a-4175-8342-6ab3a50ce4d2 up in Southbound
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.703 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.710 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a8103c-50e9-4d04-b76d-4a0ad0cc9e91]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.735 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[b90a02f2-336c-4da5-9ae6-f6bb31826fa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.741 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[802839cd-933e-46f7-8f03-4c6f4f7713cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 NetworkManager[49031]: <info>  [1765009179.7433] manager: (tapb4ef1374-90): new Veth device (/org/freedesktop/NetworkManager/Devices/390)
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.773 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1c4978-d527-4c98-8315-db5c93ab46c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.775 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[93f80da4-672f-495e-aecc-a50a29b005c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 NetworkManager[49031]: <info>  [1765009179.7979] device (tapb4ef1374-90): carrier: link connected
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.803 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[19d27bbf-1846-4e2f-ae2c-274f6ec93d34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.818 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[12ded877-1276-4f98-ba6d-0502fcd8782e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 945930, 'reachable_time': 22385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311781, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.832 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7b9590-b586-44f7-8bd0-795de9ef7016]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:d4b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 945930, 'tstamp': 945930}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311782, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.846 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[abc03a5d-886a-4616-b0c7-9bef904eb619]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 945930, 'reachable_time': 22385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311783, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
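The two large privsep replies above are serialized netlink RTM_NEWLINK messages describing tapb4ef1374-91 inside the new namespace; most of the payload is interface statistics and per-family sysctl state. The identifying fields can be reduced like this:

    # Reduce one of the logged RTM_NEWLINK dicts to its identifying fields.
    def link_summary(msg):
        attrs = {k: v for k, v in msg['attrs'] if k != 'UNKNOWN'}
        return {
            'ifname': attrs.get('IFLA_IFNAME'),    # 'tapb4ef1374-91'
            'mac': attrs.get('IFLA_ADDRESS'),      # 'fa:16:3e:f9:d4:b8'
            'mtu': attrs.get('IFLA_MTU'),          # 1500
            'netns': msg['header'].get('target'),  # the ovnmeta- namespace
            'state': msg.get('state'),             # 'up'
        }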
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.879 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[b2faf4dc-82dd-4e7f-b554-93e5f8a3a9c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.941 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d317fa13-31ce-4764-96d6-55fdca649b27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.943 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.944 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.945 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4ef1374-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:19:39 compute-1 NetworkManager[49031]: <info>  [1765009179.9484] manager: (tapb4ef1374-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.948 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:39 compute-1 kernel: tapb4ef1374-90: entered promiscuous mode
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.951 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4ef1374-90, col_values=(('external_ids', {'iface-id': '32c82c25-6496-4edd-ba74-1791824b99ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.951 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.954 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:39 compute-1 ovn_controller[130279]: 2025-12-06T08:19:39Z|00843|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec 06 08:19:39 compute-1 nova_compute[226101]: 2025-12-06 08:19:39.969 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.971 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.972 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[522e5180-18c7-4b7d-9f2e-05ad78b849f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.973 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:19:39 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:39.973 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'env', 'PROCESS_TAG=haproxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4ef1374-9c77-45a7-8776-50aa60c7d84a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
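The rendered haproxy configuration binds the metadata address 169.254.169.254:80 inside the namespace, forwards requests to the agent's socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the agent can resolve the caller's network. The spawn step, stripped of the rootwrap indirection (sketch; assumes root):

    import subprocess

    net = 'b4ef1374-9c77-45a7-8776-50aa60c7d84a'
    subprocess.run(['ip', 'netns', 'exec', 'ovnmeta-%s' % net,
                    'haproxy', '-f',
                    '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % net],
                   check=True)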
Dec 06 08:19:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:40.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:40.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.085 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009180.0853376, f1b83445-6063-4baa-b214-de9700ef24ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.086 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] VM Started (Lifecycle Event)
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.130 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.133 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009180.0855992, f1b83445-6063-4baa-b214-de9700ef24ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.134 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] VM Paused (Lifecycle Event)
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.161 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.163 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.203 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] During sync_power_state the instance has a pending task (spawning). Skip.
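The Started/Paused/Resumed sequence reflects how the guest is booted: libvirt creates the domain paused, nova finishes its plumbing, then the guest is resumed; each lifecycle callback is compared against the DB power_state, and events arriving during a pending task (spawning here) are skipped. A self-contained sketch of consuming the same events with libvirt-python:

    # Subscribe to the lifecycle events nova wraps into its LifecycleEvent objects.
    import libvirt

    def on_lifecycle(conn, dom, event, detail, _opaque):
        # event is a libvirt.VIR_DOMAIN_EVENT_* constant
        # (DEFINED, STARTED, SUSPENDED, RESUMED, STOPPED, ...).
        print(dom.UUIDString(), dom.name(), event, detail)

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                on_lifecycle, None)
    while True:
        libvirt.virEventRunDefaultImpl()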
Dec 06 08:19:40 compute-1 podman[311857]: 2025-12-06 08:19:40.285052464 +0000 UTC m=+0.021499278 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:19:40 compute-1 podman[311857]: 2025-12-06 08:19:40.381967631 +0000 UTC m=+0.118414415 container create 85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 08:19:40 compute-1 systemd[1]: Started libpod-conmon-85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0.scope.
Dec 06 08:19:40 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:19:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97f17dde89acd0eb7b7156141c32a5cd97f489b757831bca3077aa5b230c87f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:40 compute-1 podman[311857]: 2025-12-06 08:19:40.473365752 +0000 UTC m=+0.209812576 container init 85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:19:40 compute-1 podman[311857]: 2025-12-06 08:19:40.478565101 +0000 UTC m=+0.215011895 container start 85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:19:40 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[311872]: [NOTICE]   (311876) : New worker (311878) forked
Dec 06 08:19:40 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[311872]: [NOTICE]   (311876) : Loading success.
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.562 226109 DEBUG nova.compute.manager [req-9c1da117-9330-4f57-ae8f-e569faf6cfb0 req-7217c593-9d6c-4470-b976-9cc37b4b1074 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Received event network-vif-plugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.562 226109 DEBUG oslo_concurrency.lockutils [req-9c1da117-9330-4f57-ae8f-e569faf6cfb0 req-7217c593-9d6c-4470-b976-9cc37b4b1074 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.563 226109 DEBUG oslo_concurrency.lockutils [req-9c1da117-9330-4f57-ae8f-e569faf6cfb0 req-7217c593-9d6c-4470-b976-9cc37b4b1074 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.563 226109 DEBUG oslo_concurrency.lockutils [req-9c1da117-9330-4f57-ae8f-e569faf6cfb0 req-7217c593-9d6c-4470-b976-9cc37b4b1074 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.563 226109 DEBUG nova.compute.manager [req-9c1da117-9330-4f57-ae8f-e569faf6cfb0 req-7217c593-9d6c-4470-b976-9cc37b4b1074 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Processing event network-vif-plugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.564 226109 DEBUG nova.compute.manager [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
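The req-9c1da117 entries show neutron's network-vif-plugged notification being matched against a waiter the spawn path registered earlier; it completed in 0 seconds because OVN had already bound the port. The pattern reduces to an event registry keyed by (instance, event name); a stand-in sketch with plain threads (nova itself uses eventlet plus the per-instance lock seen above, and its default vif_plugging_timeout is 300 seconds):

    import threading

    _pending = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(instance_uuid, name):
        ev = _pending[(instance_uuid, name)] = threading.Event()
        return ev

    def pop(instance_uuid, name):
        # Called when the API relays the external event to the compute node.
        waiter = _pending.pop((instance_uuid, name), None)
        if waiter:
            waiter.set()

    ev = prepare('f1b83445-6063-4baa-b214-de9700ef24ac',
                 'network-vif-plugged-0552fe40-217a-4175-8342-6ab3a50ce4d2')
    # ... plug the VIF, define and start the guest ...
    ev.wait(timeout=300)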
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.568 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009180.5682788, f1b83445-6063-4baa-b214-de9700ef24ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.569 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] VM Resumed (Lifecycle Event)
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.571 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.575 226109 INFO nova.virt.libvirt.driver [-] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Instance spawned successfully.
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.575 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.599 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.609 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.613 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.614 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.614 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.615 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.615 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.616 226109 DEBUG nova.virt.libvirt.driver [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.648 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.687 226109 INFO nova.compute.manager [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Took 8.60 seconds to spawn the instance on the hypervisor.
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.688 226109 DEBUG nova.compute.manager [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.840 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.898 226109 INFO nova.compute.manager [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Took 12.03 seconds to build instance.
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.903 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.903 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.903 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.904 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:19:40 compute-1 nova_compute[226101]: 2025-12-06 08:19:40.904 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:41 compute-1 nova_compute[226101]: 2025-12-06 08:19:41.019 226109 DEBUG oslo_concurrency.lockutils [None req-ab832643-b622-444a-b517-d7db56d5ea67 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:41 compute-1 ceph-mon[81689]: pgmap v3820: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:19:41 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3771830527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:41 compute-1 nova_compute[226101]: 2025-12-06 08:19:41.342 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:42.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:42.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:42 compute-1 nova_compute[226101]: 2025-12-06 08:19:42.036 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000d4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:19:42 compute-1 nova_compute[226101]: 2025-12-06 08:19:42.037 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000d4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:19:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:42 compute-1 nova_compute[226101]: 2025-12-06 08:19:42.207 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:19:42 compute-1 nova_compute[226101]: 2025-12-06 08:19:42.208 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4130MB free_disk=20.988277435302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:19:42 compute-1 nova_compute[226101]: 2025-12-06 08:19:42.209 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:42 compute-1 nova_compute[226101]: 2025-12-06 08:19:42.209 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3771830527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:42 compute-1 nova_compute[226101]: 2025-12-06 08:19:42.937 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:42 compute-1 nova_compute[226101]: 2025-12-06 08:19:42.989 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.313 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance f1b83445-6063-4baa-b214-de9700ef24ac actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.313 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.313 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.344 226109 DEBUG nova.compute.manager [req-c74f53c3-70c1-40dd-93d6-deb9704d2f54 req-c3fb8613-040c-4526-88b7-dd470ecac32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Received event network-vif-plugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.345 226109 DEBUG oslo_concurrency.lockutils [req-c74f53c3-70c1-40dd-93d6-deb9704d2f54 req-c3fb8613-040c-4526-88b7-dd470ecac32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.345 226109 DEBUG oslo_concurrency.lockutils [req-c74f53c3-70c1-40dd-93d6-deb9704d2f54 req-c3fb8613-040c-4526-88b7-dd470ecac32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.345 226109 DEBUG oslo_concurrency.lockutils [req-c74f53c3-70c1-40dd-93d6-deb9704d2f54 req-c3fb8613-040c-4526-88b7-dd470ecac32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.346 226109 DEBUG nova.compute.manager [req-c74f53c3-70c1-40dd-93d6-deb9704d2f54 req-c3fb8613-040c-4526-88b7-dd470ecac32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] No waiting events found dispatching network-vif-plugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.346 226109 WARNING nova.compute.manager [req-c74f53c3-70c1-40dd-93d6-deb9704d2f54 req-c3fb8613-040c-4526-88b7-dd470ecac32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Received unexpected event network-vif-plugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 for instance with vm_state active and task_state None.
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.363 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.469 226109 DEBUG nova.network.neutron [req-5422eb2a-73d7-418d-b17a-4745cff80d56 req-41bfe751-218e-48f4-85aa-f83707f34e35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Updated VIF entry in instance network info cache for port 0552fe40-217a-4175-8342-6ab3a50ce4d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.470 226109 DEBUG nova.network.neutron [req-5422eb2a-73d7-418d-b17a-4745cff80d56 req-41bfe751-218e-48f4-85aa-f83707f34e35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Updating instance_info_cache with network_info: [{"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.496 226109 DEBUG oslo_concurrency.lockutils [req-5422eb2a-73d7-418d-b17a-4745cff80d56 req-41bfe751-218e-48f4-85aa-f83707f34e35 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:19:43 compute-1 sshd-session[311910]: Received disconnect from 154.219.116.39 port 58342:11: Bye Bye [preauth]
Dec 06 08:19:43 compute-1 sshd-session[311910]: Disconnected from authenticating user root 154.219.116.39 port 58342 [preauth]
Dec 06 08:19:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:19:43 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2898252828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.789 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.795 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.816 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.878 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:19:43 compute-1 nova_compute[226101]: 2025-12-06 08:19:43.879 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:44.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:44 compute-1 ceph-mon[81689]: pgmap v3821: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 11 op/s
Dec 06 08:19:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:44.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:44 compute-1 nova_compute[226101]: 2025-12-06 08:19:44.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:44 compute-1 nova_compute[226101]: 2025-12-06 08:19:44.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:19:45 compute-1 sshd-session[311934]: Received disconnect from 186.96.151.198 port 44518:11: Bye Bye [preauth]
Dec 06 08:19:45 compute-1 sshd-session[311934]: Disconnected from authenticating user root 186.96.151.198 port 44518 [preauth]
Dec 06 08:19:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2898252828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:45 compute-1 ceph-mon[81689]: pgmap v3822: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 11 op/s
Dec 06 08:19:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3533005233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:46.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:46.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:46 compute-1 nova_compute[226101]: 2025-12-06 08:19:46.615 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:46 compute-1 nova_compute[226101]: 2025-12-06 08:19:46.616 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:19:46 compute-1 nova_compute[226101]: 2025-12-06 08:19:46.616 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:19:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:47 compute-1 nova_compute[226101]: 2025-12-06 08:19:47.185 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:19:47 compute-1 nova_compute[226101]: 2025-12-06 08:19:47.186 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:19:47 compute-1 nova_compute[226101]: 2025-12-06 08:19:47.186 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:19:47 compute-1 nova_compute[226101]: 2025-12-06 08:19:47.186 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid f1b83445-6063-4baa-b214-de9700ef24ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:19:47 compute-1 ceph-mon[81689]: pgmap v3823: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 43 op/s
Dec 06 08:19:47 compute-1 nova_compute[226101]: 2025-12-06 08:19:47.982 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:47 compute-1 nova_compute[226101]: 2025-12-06 08:19:47.993 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:48.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:48.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:48 compute-1 podman[311939]: 2025-12-06 08:19:48.075816583 +0000 UTC m=+0.060703299 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 08:19:48 compute-1 podman[311938]: 2025-12-06 08:19:48.083246821 +0000 UTC m=+0.068094706 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:19:48 compute-1 podman[311940]: 2025-12-06 08:19:48.136780946 +0000 UTC m=+0.118507927 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:19:48 compute-1 ceph-mon[81689]: pgmap v3824: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:19:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:50.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:50.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:50 compute-1 NetworkManager[49031]: <info>  [1765009190.5119] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Dec 06 08:19:50 compute-1 NetworkManager[49031]: <info>  [1765009190.5127] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Dec 06 08:19:50 compute-1 nova_compute[226101]: 2025-12-06 08:19:50.512 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:50 compute-1 nova_compute[226101]: 2025-12-06 08:19:50.617 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:50 compute-1 ovn_controller[130279]: 2025-12-06T08:19:50Z|00844|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec 06 08:19:50 compute-1 nova_compute[226101]: 2025-12-06 08:19:50.630 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:51 compute-1 ceph-mon[81689]: pgmap v3825: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:19:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:51.035 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=100, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=99) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:19:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:19:51.037 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:19:51 compute-1 nova_compute[226101]: 2025-12-06 08:19:51.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:51 compute-1 nova_compute[226101]: 2025-12-06 08:19:51.221 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Updating instance_info_cache with network_info: [{"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:19:51 compute-1 nova_compute[226101]: 2025-12-06 08:19:51.430 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:19:51 compute-1 nova_compute[226101]: 2025-12-06 08:19:51.431 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:19:51 compute-1 nova_compute[226101]: 2025-12-06 08:19:51.753 226109 DEBUG nova.compute.manager [req-7b52d712-f40d-44f8-85f5-73eb511047e6 req-f3530702-0d38-4501-aa16-3039393ebc1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Received event network-changed-0552fe40-217a-4175-8342-6ab3a50ce4d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:19:51 compute-1 nova_compute[226101]: 2025-12-06 08:19:51.754 226109 DEBUG nova.compute.manager [req-7b52d712-f40d-44f8-85f5-73eb511047e6 req-f3530702-0d38-4501-aa16-3039393ebc1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Refreshing instance network info cache due to event network-changed-0552fe40-217a-4175-8342-6ab3a50ce4d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:19:51 compute-1 nova_compute[226101]: 2025-12-06 08:19:51.754 226109 DEBUG oslo_concurrency.lockutils [req-7b52d712-f40d-44f8-85f5-73eb511047e6 req-f3530702-0d38-4501-aa16-3039393ebc1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:19:51 compute-1 nova_compute[226101]: 2025-12-06 08:19:51.754 226109 DEBUG oslo_concurrency.lockutils [req-7b52d712-f40d-44f8-85f5-73eb511047e6 req-f3530702-0d38-4501-aa16-3039393ebc1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:19:51 compute-1 nova_compute[226101]: 2025-12-06 08:19:51.755 226109 DEBUG nova.network.neutron [req-7b52d712-f40d-44f8-85f5-73eb511047e6 req-f3530702-0d38-4501-aa16-3039393ebc1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Refreshing network info cache for port 0552fe40-217a-4175-8342-6ab3a50ce4d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:19:51 compute-1 sshd-session[311936]: Connection closed by 107.150.106.178 port 51146 [preauth]
Dec 06 08:19:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:52.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:52.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:52 compute-1 sshd-session[311997]: Received disconnect from 136.112.8.45 port 39408:11: Bye Bye [preauth]
Dec 06 08:19:52 compute-1 sshd-session[311997]: Disconnected from authenticating user root 136.112.8.45 port 39408 [preauth]
Dec 06 08:19:53 compute-1 nova_compute[226101]: 2025-12-06 08:19:53.016 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:53 compute-1 nova_compute[226101]: 2025-12-06 08:19:53.019 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:19:53 compute-1 ceph-mon[81689]: pgmap v3826: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 08:19:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:54.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:54.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:54 compute-1 nova_compute[226101]: 2025-12-06 08:19:54.487 226109 DEBUG nova.network.neutron [req-7b52d712-f40d-44f8-85f5-73eb511047e6 req-f3530702-0d38-4501-aa16-3039393ebc1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Updated VIF entry in instance network info cache for port 0552fe40-217a-4175-8342-6ab3a50ce4d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:19:54 compute-1 nova_compute[226101]: 2025-12-06 08:19:54.487 226109 DEBUG nova.network.neutron [req-7b52d712-f40d-44f8-85f5-73eb511047e6 req-f3530702-0d38-4501-aa16-3039393ebc1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Updating instance_info_cache with network_info: [{"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:19:54 compute-1 nova_compute[226101]: 2025-12-06 08:19:54.517 226109 DEBUG oslo_concurrency.lockutils [req-7b52d712-f40d-44f8-85f5-73eb511047e6 req-f3530702-0d38-4501-aa16-3039393ebc1c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f1b83445-6063-4baa-b214-de9700ef24ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:19:54 compute-1 ovn_controller[130279]: 2025-12-06T08:19:54Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:11:92 10.100.0.3
Dec 06 08:19:54 compute-1 ovn_controller[130279]: 2025-12-06T08:19:54Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:11:92 10.100.0.3
Dec 06 08:19:55 compute-1 nova_compute[226101]: 2025-12-06 08:19:55.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:56.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:56.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:56 compute-1 ceph-mon[81689]: pgmap v3827: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Dec 06 08:19:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1638807678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:19:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3758252918' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:19:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:19:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2854615589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:57 compute-1 ceph-mon[81689]: pgmap v3828: 305 pgs: 305 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 116 op/s
Dec 06 08:19:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2854615589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:58 compute-1 nova_compute[226101]: 2025-12-06 08:19:58.019 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:19:58 compute-1 nova_compute[226101]: 2025-12-06 08:19:58.021 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:58 compute-1 nova_compute[226101]: 2025-12-06 08:19:58.021 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 08:19:58 compute-1 nova_compute[226101]: 2025-12-06 08:19:58.021 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 08:19:58 compute-1 nova_compute[226101]: 2025-12-06 08:19:58.022 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 08:19:58 compute-1 nova_compute[226101]: 2025-12-06 08:19:58.024 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:58.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:19:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:58.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3762066553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:58 compute-1 nova_compute[226101]: 2025-12-06 08:19:58.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:59 compute-1 nova_compute[226101]: 2025-12-06 08:19:59.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:59 compute-1 nova_compute[226101]: 2025-12-06 08:19:59.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:19:59 compute-1 ceph-mon[81689]: pgmap v3829: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Dec 06 08:20:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:00.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:00.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:00 compute-1 sshd-session[312001]: Received disconnect from 154.209.4.183 port 33828:11: Bye Bye [preauth]
Dec 06 08:20:00 compute-1 sshd-session[312001]: Disconnected from authenticating user root 154.209.4.183 port 33828 [preauth]
Dec 06 08:20:00 compute-1 sshd-session[311999]: Received disconnect from 101.100.194.199 port 33864:11: Bye Bye [preauth]
Dec 06 08:20:00 compute-1 sshd-session[311999]: Disconnected from authenticating user root 101.100.194.199 port 33864 [preauth]
Dec 06 08:20:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:01.040 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '100'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:20:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/656197683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:01 compute-1 ceph-mon[81689]: pgmap v3830: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 433 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 06 08:20:01 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 08:20:01 compute-1 nova_compute[226101]: 2025-12-06 08:20:01.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:01.697 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:01.697 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:01.698 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:02.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:02.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2368192170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:03 compute-1 nova_compute[226101]: 2025-12-06 08:20:03.026 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:20:03 compute-1 nova_compute[226101]: 2025-12-06 08:20:03.028 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:20:03 compute-1 nova_compute[226101]: 2025-12-06 08:20:03.028 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 08:20:03 compute-1 nova_compute[226101]: 2025-12-06 08:20:03.028 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 08:20:03 compute-1 nova_compute[226101]: 2025-12-06 08:20:03.035 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:03 compute-1 nova_compute[226101]: 2025-12-06 08:20:03.036 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 08:20:03 compute-1 ceph-mon[81689]: pgmap v3831: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Dec 06 08:20:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:04.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:05 compute-1 ceph-mon[81689]: pgmap v3832: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Dec 06 08:20:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:06.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:06.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:06 compute-1 nova_compute[226101]: 2025-12-06 08:20:06.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:07 compute-1 nova_compute[226101]: 2025-12-06 08:20:07.584 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:07 compute-1 ceph-mon[81689]: pgmap v3833: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Dec 06 08:20:07 compute-1 nova_compute[226101]: 2025-12-06 08:20:07.880 226109 DEBUG oslo_concurrency.lockutils [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "f1b83445-6063-4baa-b214-de9700ef24ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:07 compute-1 nova_compute[226101]: 2025-12-06 08:20:07.881 226109 DEBUG oslo_concurrency.lockutils [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:07 compute-1 nova_compute[226101]: 2025-12-06 08:20:07.881 226109 DEBUG oslo_concurrency.lockutils [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:07 compute-1 nova_compute[226101]: 2025-12-06 08:20:07.881 226109 DEBUG oslo_concurrency.lockutils [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:07 compute-1 nova_compute[226101]: 2025-12-06 08:20:07.882 226109 DEBUG oslo_concurrency.lockutils [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
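The lock messages above show the serialization nova applies while tearing an instance down: one coarse lock named after the instance UUID wraps the whole do_terminate_instance call, and a narrower "<uuid>-events" lock guards the per-instance event queue while it is cleared. A minimal sketch of the same pattern with oslo.concurrency follows; the UUID is copied from the log and the function bodies are placeholders, not nova's actual code.

    # Sketch of the per-instance locking seen in the lockutils messages
    # above. lockutils.lock() is oslo.concurrency's named-lock context
    # manager; everything inside the functions is illustrative.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = "f1b83445-6063-4baa-b214-de9700ef24ac"

    def do_terminate_instance():
        with lockutils.lock(INSTANCE_UUID):              # coarse instance lock
            clear_events_for_instance()
            # ... destroy guest, unplug VIFs, detach volumes ...

    def clear_events_for_instance():
        with lockutils.lock(INSTANCE_UUID + "-events"):  # narrower events lock
            pending = {}   # stand-in for the real per-instance event dict
            pending.clear()

    do_terminate_instance()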
Dec 06 08:20:07 compute-1 nova_compute[226101]: 2025-12-06 08:20:07.884 226109 INFO nova.compute.manager [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Terminating instance
Dec 06 08:20:07 compute-1 nova_compute[226101]: 2025-12-06 08:20:07.885 226109 DEBUG nova.compute.manager [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.036 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.039 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 08:20:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:08.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:08.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
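Each radosgw "beast" line is an access-log record: frontend connection handle, client IP, user, timestamp, request line, HTTP status, body bytes, and latency; the recurring HEAD / probes from 192.168.122.100 and .102 are load-balancer health checks. A small parser sketch, with the regex fitted to the samples in this log rather than to any documented format:

    import re

    # Field layout inferred from the beast lines above; this is an
    # assumption fitted to these samples, not an official schema.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<nbytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous '
            '[06/Dec/2025:08:20:08.071 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000026s')
    m = BEAST_RE.search(line)
    if m:
        print(m['client'], m['status'], float(m['latency']))
        # -> 192.168.122.102 200 0.001000026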
Dec 06 08:20:08 compute-1 kernel: tap0552fe40-21 (unregistering): left promiscuous mode
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.093 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.094 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 08:20:08 compute-1 NetworkManager[49031]: <info>  [1765009208.0985] device (tap0552fe40-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.106 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:08 compute-1 ovn_controller[130279]: 2025-12-06T08:20:08Z|00845|binding|INFO|Releasing lport 0552fe40-217a-4175-8342-6ab3a50ce4d2 from this chassis (sb_readonly=0)
Dec 06 08:20:08 compute-1 ovn_controller[130279]: 2025-12-06T08:20:08Z|00846|binding|INFO|Setting lport 0552fe40-217a-4175-8342-6ab3a50ce4d2 down in Southbound
Dec 06 08:20:08 compute-1 ovn_controller[130279]: 2025-12-06T08:20:08Z|00847|binding|INFO|Removing iface tap0552fe40-21 ovn-installed in OVS
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.108 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.117 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:11:92 10.100.0.3'], port_security=['fa:16:3e:cb:11:92 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f1b83445-6063-4baa-b214-de9700ef24ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cd07b30-a335-4570-957e-3674d9a06120', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=0552fe40-217a-4175-8342-6ab3a50ce4d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.118 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 0552fe40-217a-4175-8342-6ab3a50ce4d2 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a unbound from our chassis
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.119 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4ef1374-9c77-45a7-8776-50aa60c7d84a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.120 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ac997d8d-ba27-4246-a8e2-6d0f5e701219]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.120 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace which is not needed anymore
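Once the last VIF on the network is unbound, the metadata agent tears down the per-network ovnmeta- namespace (the remove_netns debug line further below confirms the deletion). A rough standalone equivalent using pyroute2, the same netlink library neutron's ip_lib wraps behind privsep; it needs root, and the namespace name is copied from the log:

    import errno
    from pyroute2 import netns

    NS = "ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a"
    try:
        netns.remove(NS)               # same effect as `ip netns delete NS`
    except OSError as exc:
        if exc.errno != errno.ENOENT:  # already-gone is fine during cleanup
            raise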
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.125 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:08 compute-1 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d4.scope: Deactivated successfully.
Dec 06 08:20:08 compute-1 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d4.scope: Consumed 14.164s CPU time.
Dec 06 08:20:08 compute-1 systemd-machined[190302]: Machine qemu-96-instance-000000d4 terminated.
Dec 06 08:20:08 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[311872]: [NOTICE]   (311876) : haproxy version is 2.8.14-c23fe91
Dec 06 08:20:08 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[311872]: [NOTICE]   (311876) : path to executable is /usr/sbin/haproxy
Dec 06 08:20:08 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[311872]: [WARNING]  (311876) : Exiting Master process...
Dec 06 08:20:08 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[311872]: [ALERT]    (311876) : Current worker (311878) exited with code 143 (Terminated)
Dec 06 08:20:08 compute-1 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[311872]: [WARNING]  (311876) : All workers exited. Exiting... (0)
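The ALERT about the worker leaving with code 143 is not a crash: 143 is the conventional 128 + signal-number encoding for a process killed by SIGTERM, which is exactly what stopping the haproxy container sends. A one-line check:

    import signal
    # Shell-style exit status for a signal death is 128 + signum;
    # SIGTERM is 15, hence haproxy's "exited with code 143".
    assert 128 + signal.SIGTERM == 143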
Dec 06 08:20:08 compute-1 systemd[1]: libpod-85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0.scope: Deactivated successfully.
Dec 06 08:20:08 compute-1 podman[312027]: 2025-12-06 08:20:08.281772568 +0000 UTC m=+0.082659457 container died 85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.323 226109 INFO nova.virt.libvirt.driver [-] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Instance destroyed successfully.
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.323 226109 DEBUG nova.objects.instance [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'resources' on Instance uuid f1b83445-6063-4baa-b214-de9700ef24ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:20:08 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0-userdata-shm.mount: Deactivated successfully.
Dec 06 08:20:08 compute-1 systemd[1]: var-lib-containers-storage-overlay-e97f17dde89acd0eb7b7156141c32a5cd97f489b757831bca3077aa5b230c87f-merged.mount: Deactivated successfully.
Dec 06 08:20:08 compute-1 podman[312027]: 2025-12-06 08:20:08.345126346 +0000 UTC m=+0.146013225 container cleanup 85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.346 226109 DEBUG nova.virt.libvirt.vif [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:19:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1416313409',display_name='tempest-TestVolumeBootPattern-server-1416313409',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1416313409',id=212,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvUl3Ab4ESWezLZ9mehuTavMygXDhT0chVOH5OGNfzBJ6GphwodjSkpQcbaa1ADoOOfJ6+3BcKIVxorR3UxI6tyiW7Q3SFHkhHBjCjD54foFQ6i6sfCU/p7OcBbQ12cuw==',key_name='tempest-TestVolumeBootPattern-2075529576',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:19:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-qtx2qm50',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:19:40Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=f1b83445-6063-4baa-b214-de9700ef24ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.346 226109 DEBUG nova.network.os_vif_util [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "address": "fa:16:3e:cb:11:92", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0552fe40-21", "ovs_interfaceid": "0552fe40-217a-4175-8342-6ab3a50ce4d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.347 226109 DEBUG nova.network.os_vif_util [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:11:92,bridge_name='br-int',has_traffic_filtering=True,id=0552fe40-217a-4175-8342-6ab3a50ce4d2,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0552fe40-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.348 226109 DEBUG os_vif [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:11:92,bridge_name='br-int',has_traffic_filtering=True,id=0552fe40-217a-4175-8342-6ab3a50ce4d2,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0552fe40-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.350 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.350 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0552fe40-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.352 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:08 compute-1 systemd[1]: libpod-conmon-85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0.scope: Deactivated successfully.
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.354 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.356 226109 INFO os_vif [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:11:92,bridge_name='br-int',has_traffic_filtering=True,id=0552fe40-217a-4175-8342-6ab3a50ce4d2,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0552fe40-21')
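The unplug path above boils down to the single OVSDB transaction logged at 08:20:08.350: DelPortCommand removing tap0552fe40-21 from br-int, with if_exists so a repeat is harmless. Outside nova the same operation is one ovs-vsctl call; a sketch that shells out instead of opening an IDL connection, with the port name taken from the log:

    import subprocess

    # CLI equivalent of DelPortCommand(port=tap0552fe40-21,
    # bridge=br-int, if_exists=True) from the transaction above.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap0552fe40-21"],
        check=True,
    )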
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.502 226109 DEBUG nova.compute.manager [req-162e7491-1317-4c23-b571-7987f98ffe92 req-872106a6-70d3-4da5-bb69-dae01e4af8ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Received event network-vif-unplugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.503 226109 DEBUG oslo_concurrency.lockutils [req-162e7491-1317-4c23-b571-7987f98ffe92 req-872106a6-70d3-4da5-bb69-dae01e4af8ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.503 226109 DEBUG oslo_concurrency.lockutils [req-162e7491-1317-4c23-b571-7987f98ffe92 req-872106a6-70d3-4da5-bb69-dae01e4af8ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.504 226109 DEBUG oslo_concurrency.lockutils [req-162e7491-1317-4c23-b571-7987f98ffe92 req-872106a6-70d3-4da5-bb69-dae01e4af8ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.504 226109 DEBUG nova.compute.manager [req-162e7491-1317-4c23-b571-7987f98ffe92 req-872106a6-70d3-4da5-bb69-dae01e4af8ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] No waiting events found dispatching network-vif-unplugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.504 226109 DEBUG nova.compute.manager [req-162e7491-1317-4c23-b571-7987f98ffe92 req-872106a6-70d3-4da5-bb69-dae01e4af8ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Received event network-vif-unplugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
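The "No waiting events found" message documents nova's external-event rendezvous: callers that expect a Neutron notification register a named waiter, and incoming events pop it; when nothing is waiting, as during this delete, the event is just logged and dropped. A deliberately simplified, hypothetical reduction of that pattern with threading primitives (not nova's actual implementation):

    import threading

    _lock = threading.Lock()
    _waiters = {}   # event name -> threading.Event

    def prepare_for_event(name):
        # Called by code that will wait for an external notification.
        with _lock:
            return _waiters.setdefault(name, threading.Event())

    def pop_instance_event(name):
        # Called when the notification arrives; wakes the waiter if any.
        with _lock:
            ev = _waiters.pop(name, None)
        if ev is None:
            print(f"No waiting events found dispatching {name}")
        else:
            ev.set()

    pop_instance_event(
        "network-vif-unplugged-0552fe40-217a-4175-8342-6ab3a50ce4d2")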
Dec 06 08:20:08 compute-1 podman[312069]: 2025-12-06 08:20:08.593168006 +0000 UTC m=+0.217928593 container remove 85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.599 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfc7bc4-1c35-4538-8199-505c86acc36b]: (4, ('Sat Dec  6 08:20:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0)\n85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0\nSat Dec  6 08:20:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0)\n85987d468b97bf351c8b3d3cec937a6c1c4388d5986c7794b14bafa902418df0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.601 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccf5899-ca40-4af4-8d28-54e97d2ba4cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.602 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.603 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:08 compute-1 kernel: tapb4ef1374-90: left promiscuous mode
Dec 06 08:20:08 compute-1 nova_compute[226101]: 2025-12-06 08:20:08.617 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.620 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f90376e2-c535-4194-abe1-4598e5e1599e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.642 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fed51020-583c-4fc0-893d-f7591f457292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.643 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6d740f99-3b15-425f-8a62-e299c3c71e5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.658 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc8f48a-3c81-4275-9cb1-70fa481082af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 945923, 'reachable_time': 27847, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312104, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:08 compute-1 systemd[1]: run-netns-ovnmeta\x2db4ef1374\x2d9c77\x2d45a7\x2d8776\x2d50aa60c7d84a.mount: Deactivated successfully.
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.662 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:20:08 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:08.662 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[79f748f3-67ce-4902-acb7-4a79d967deea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:08 compute-1 ceph-mon[81689]: pgmap v3834: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Dec 06 08:20:09 compute-1 nova_compute[226101]: 2025-12-06 08:20:09.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:09 compute-1 nova_compute[226101]: 2025-12-06 08:20:09.912 226109 INFO nova.virt.libvirt.driver [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Deleting instance files /var/lib/nova/instances/f1b83445-6063-4baa-b214-de9700ef24ac_del
Dec 06 08:20:09 compute-1 nova_compute[226101]: 2025-12-06 08:20:09.912 226109 INFO nova.virt.libvirt.driver [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Deletion of /var/lib/nova/instances/f1b83445-6063-4baa-b214-de9700ef24ac_del complete
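The `_del` suffix in those two lines reflects nova's rename-before-delete strategy: the instance directory is first moved aside to "<uuid>_del" and only then removed, so an interrupted cleanup leaves an obviously dead directory rather than a half-deleted live one. A minimal sketch, assuming a plain local filesystem:

    import os
    import shutil

    base = "/var/lib/nova/instances"
    inst = os.path.join(base, "f1b83445-6063-4baa-b214-de9700ef24ac")
    target = inst + "_del"

    if os.path.exists(inst):
        os.rename(inst, target)            # atomic within one filesystem
    shutil.rmtree(target, ignore_errors=True)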
Dec 06 08:20:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:10.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:10 compute-1 sshd-session[312100]: Received disconnect from 14.225.3.79 port 43314:11: Bye Bye [preauth]
Dec 06 08:20:10 compute-1 sshd-session[312100]: Disconnected from authenticating user root 14.225.3.79 port 43314 [preauth]
Dec 06 08:20:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:10.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:10 compute-1 nova_compute[226101]: 2025-12-06 08:20:10.399 226109 INFO nova.compute.manager [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Took 2.51 seconds to destroy the instance on the hypervisor.
Dec 06 08:20:10 compute-1 nova_compute[226101]: 2025-12-06 08:20:10.400 226109 DEBUG oslo.service.loopingcall [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:20:10 compute-1 nova_compute[226101]: 2025-12-06 08:20:10.400 226109 DEBUG nova.compute.manager [-] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:20:10 compute-1 nova_compute[226101]: 2025-12-06 08:20:10.400 226109 DEBUG nova.network.neutron [-] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:20:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/922478015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:20:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/922478015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:20:10 compute-1 nova_compute[226101]: 2025-12-06 08:20:10.657 226109 DEBUG nova.compute.manager [req-4f50de80-8824-4457-8125-b5fdcbab3b0d req-c57887bc-fad1-4d80-b591-d9117a5bd866 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Received event network-vif-plugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:20:10 compute-1 nova_compute[226101]: 2025-12-06 08:20:10.658 226109 DEBUG oslo_concurrency.lockutils [req-4f50de80-8824-4457-8125-b5fdcbab3b0d req-c57887bc-fad1-4d80-b591-d9117a5bd866 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:10 compute-1 nova_compute[226101]: 2025-12-06 08:20:10.658 226109 DEBUG oslo_concurrency.lockutils [req-4f50de80-8824-4457-8125-b5fdcbab3b0d req-c57887bc-fad1-4d80-b591-d9117a5bd866 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:10 compute-1 nova_compute[226101]: 2025-12-06 08:20:10.658 226109 DEBUG oslo_concurrency.lockutils [req-4f50de80-8824-4457-8125-b5fdcbab3b0d req-c57887bc-fad1-4d80-b591-d9117a5bd866 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:10 compute-1 nova_compute[226101]: 2025-12-06 08:20:10.658 226109 DEBUG nova.compute.manager [req-4f50de80-8824-4457-8125-b5fdcbab3b0d req-c57887bc-fad1-4d80-b591-d9117a5bd866 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] No waiting events found dispatching network-vif-plugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:20:10 compute-1 nova_compute[226101]: 2025-12-06 08:20:10.659 226109 WARNING nova.compute.manager [req-4f50de80-8824-4457-8125-b5fdcbab3b0d req-c57887bc-fad1-4d80-b591-d9117a5bd866 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Received unexpected event network-vif-plugged-0552fe40-217a-4175-8342-6ab3a50ce4d2 for instance with vm_state active and task_state deleting.
Dec 06 08:20:11 compute-1 ceph-mon[81689]: pgmap v3835: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 70 op/s
Dec 06 08:20:11 compute-1 nova_compute[226101]: 2025-12-06 08:20:11.564 226109 DEBUG nova.network.neutron [-] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:20:11 compute-1 nova_compute[226101]: 2025-12-06 08:20:11.651 226109 INFO nova.compute.manager [-] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Took 1.25 seconds to deallocate network for instance.
Dec 06 08:20:11 compute-1 nova_compute[226101]: 2025-12-06 08:20:11.739 226109 DEBUG nova.compute.manager [req-31621b8d-c3f7-48d4-b461-08c5f8c17e97 req-3338b471-ecc5-4abd-82e0-7c846a6b4e1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Received event network-vif-deleted-0552fe40-217a-4175-8342-6ab3a50ce4d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:20:11 compute-1 nova_compute[226101]: 2025-12-06 08:20:11.994 226109 INFO nova.compute.manager [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Took 0.34 seconds to detach 1 volumes for instance.
Dec 06 08:20:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:12.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:12.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:12 compute-1 nova_compute[226101]: 2025-12-06 08:20:12.089 226109 DEBUG oslo_concurrency.lockutils [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:12 compute-1 nova_compute[226101]: 2025-12-06 08:20:12.089 226109 DEBUG oslo_concurrency.lockutils [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:12 compute-1 nova_compute[226101]: 2025-12-06 08:20:12.169 226109 DEBUG oslo_concurrency.processutils [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:20:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3180353347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:12 compute-1 nova_compute[226101]: 2025-12-06 08:20:12.827 226109 DEBUG oslo_concurrency.processutils [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
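The resource tracker sizes its disk inventory from the Ceph cluster, and the exact command is in the log: `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, which here returned in 0.659s. A standalone version of the same call:

    import json
    import subprocess

    # Same command nova runs above; needs a reachable cluster and a
    # keyring for client.openstack.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])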
Dec 06 08:20:12 compute-1 nova_compute[226101]: 2025-12-06 08:20:12.833 226109 DEBUG nova.compute.provider_tree [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:20:12 compute-1 nova_compute[226101]: 2025-12-06 08:20:12.878 226109 DEBUG nova.scheduler.client.report [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
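Placement derives usable capacity from that inventory as (total - reserved) * allocation_ratio, so the dict above yields 32 schedulable VCPUs, 7168 MB of RAM, and about 17.1 GB of disk. A quick check against the logged values:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1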
Dec 06 08:20:12 compute-1 nova_compute[226101]: 2025-12-06 08:20:12.952 226109 DEBUG oslo_concurrency.lockutils [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:13 compute-1 nova_compute[226101]: 2025-12-06 08:20:13.039 226109 INFO nova.scheduler.client.report [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Deleted allocations for instance f1b83445-6063-4baa-b214-de9700ef24ac
Dec 06 08:20:13 compute-1 nova_compute[226101]: 2025-12-06 08:20:13.107 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:13 compute-1 nova_compute[226101]: 2025-12-06 08:20:13.267 226109 DEBUG oslo_concurrency.lockutils [None req-7582bdf2-7831-48e8-9ae2-90732e034166 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f1b83445-6063-4baa-b214-de9700ef24ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:13 compute-1 nova_compute[226101]: 2025-12-06 08:20:13.353 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:13 compute-1 ceph-mon[81689]: pgmap v3836: 305 pgs: 305 active+clean; 267 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 109 op/s
Dec 06 08:20:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3180353347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:14.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:14.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:14 compute-1 ceph-mon[81689]: pgmap v3837: 305 pgs: 305 active+clean; 267 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 1.9 MiB/s wr, 39 op/s
Dec 06 08:20:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:16.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:16.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:17 compute-1 ceph-mon[81689]: pgmap v3838: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 08:20:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:18.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:18 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:18.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:18 compute-1 nova_compute[226101]: 2025-12-06 08:20:18.137 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:18 compute-1 nova_compute[226101]: 2025-12-06 08:20:18.356 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:19 compute-1 podman[312131]: 2025-12-06 08:20:19.089695391 +0000 UTC m=+0.060088552 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 08:20:19 compute-1 podman[312130]: 2025-12-06 08:20:19.090189655 +0000 UTC m=+0.069207727 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:20:19 compute-1 podman[312132]: 2025-12-06 08:20:19.148323003 +0000 UTC m=+0.116198156 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
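The three health_status=healthy records are podman's periodic healthcheck events for the agent containers; each runs the mounted /openstack/healthcheck script inside the container and records the verdict. The latest status can be read back with inspect; the Go-template path below is an assumption that holds for podman 4.x (older releases exposed it as .State.Healthcheck.Status):

    import subprocess

    name = "ovn_controller"
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(name, status)    # e.g. "ovn_controller healthy"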
Dec 06 08:20:19 compute-1 ceph-mon[81689]: pgmap v3839: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 08:20:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:20.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:20.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:20 compute-1 sudo[312192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:20 compute-1 sudo[312192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:20 compute-1 sudo[312192]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:20 compute-1 sudo[312217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:20:20 compute-1 sudo[312217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:20 compute-1 sudo[312217]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:20 compute-1 sudo[312242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:20 compute-1 sudo[312242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:20 compute-1 sudo[312242]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:20 compute-1 sudo[312267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:20:20 compute-1 sudo[312267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:20 compute-1 sudo[312267]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:21 compute-1 ceph-mon[81689]: pgmap v3840: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 08:20:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:22.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:22 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:22.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:22 compute-1 ceph-mon[81689]: pgmap v3841: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 395 KiB/s rd, 2.2 MiB/s wr, 81 op/s
Dec 06 08:20:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:20:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:20:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:20:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:20:22 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:20:23 compute-1 nova_compute[226101]: 2025-12-06 08:20:23.143 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:23 compute-1 nova_compute[226101]: 2025-12-06 08:20:23.320 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009208.319071, f1b83445-6063-4baa-b214-de9700ef24ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:20:23 compute-1 nova_compute[226101]: 2025-12-06 08:20:23.321 226109 INFO nova.compute.manager [-] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] VM Stopped (Lifecycle Event)
Dec 06 08:20:23 compute-1 nova_compute[226101]: 2025-12-06 08:20:23.357 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:23 compute-1 sshd-session[312322]: Received disconnect from 91.144.158.231 port 19501:11: Bye Bye [preauth]
Dec 06 08:20:23 compute-1 sshd-session[312322]: Disconnected from authenticating user root 91.144.158.231 port 19501 [preauth]
Dec 06 08:20:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:24.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:24 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:24.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:24 compute-1 nova_compute[226101]: 2025-12-06 08:20:24.206 226109 DEBUG nova.compute.manager [None req-cbf8fb06-6949-4b40-8555-3c5ad5457402 - - - - - -] [instance: f1b83445-6063-4baa-b214-de9700ef24ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:20:25 compute-1 ceph-mon[81689]: pgmap v3842: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 218 KiB/s wr, 42 op/s
Dec 06 08:20:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:26 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:26.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:26.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:27 compute-1 ceph-mon[81689]: pgmap v3843: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 218 KiB/s wr, 42 op/s
Dec 06 08:20:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:28 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:28.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:28.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:28 compute-1 nova_compute[226101]: 2025-12-06 08:20:28.145 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:28 compute-1 sudo[312324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:28 compute-1 sudo[312324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:28 compute-1 sudo[312324]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:28 compute-1 nova_compute[226101]: 2025-12-06 08:20:28.393 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:28 compute-1 sudo[312349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:20:28 compute-1 sudo[312349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:28 compute-1 sudo[312349]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:28 compute-1 sshd[164848]: Timeout before authentication for connection from 14.103.118.136 to 38.102.83.204, pid = 311104
Dec 06 08:20:29 compute-1 ceph-mon[81689]: pgmap v3844: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 16 KiB/s wr, 7 op/s
Dec 06 08:20:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:29 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3079866157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:30.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:30 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:30.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:31 compute-1 ceph-mon[81689]: pgmap v3845: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Dec 06 08:20:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2333218000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:20:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:32.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:32 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:32.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:33 compute-1 nova_compute[226101]: 2025-12-06 08:20:33.147 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:33 compute-1 ceph-mon[81689]: pgmap v3846: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 14 KiB/s wr, 0 op/s
Dec 06 08:20:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:33.315 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=101, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=100) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:20:33 compute-1 nova_compute[226101]: 2025-12-06 08:20:33.316 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:33.316 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:20:33 compute-1 nova_compute[226101]: 2025-12-06 08:20:33.394 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:34.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:34 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:34.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:34 compute-1 nova_compute[226101]: 2025-12-06 08:20:34.584 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3058139754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:36.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:36.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:36 compute-1 sshd-session[312374]: Received disconnect from 106.51.92.114 port 34109:11: Bye Bye [preauth]
Dec 06 08:20:36 compute-1 sshd-session[312374]: Disconnected from authenticating user root 106.51.92.114 port 34109 [preauth]
Dec 06 08:20:36 compute-1 ceph-mon[81689]: pgmap v3847: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 2.0 KiB/s wr, 0 op/s
Dec 06 08:20:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:37 compute-1 ceph-mon[81689]: pgmap v3848: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.9 KiB/s rd, 2.5 KiB/s wr, 13 op/s
Dec 06 08:20:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:38.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:38 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:38.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:38 compute-1 nova_compute[226101]: 2025-12-06 08:20:38.184 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:38 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:20:38.318 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '101'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:20:38 compute-1 nova_compute[226101]: 2025-12-06 08:20:38.396 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:38 compute-1 ceph-mon[81689]: pgmap v3849: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 06 08:20:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:40 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:40.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:40.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:40 compute-1 nova_compute[226101]: 2025-12-06 08:20:40.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:40 compute-1 nova_compute[226101]: 2025-12-06 08:20:40.655 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:40 compute-1 nova_compute[226101]: 2025-12-06 08:20:40.656 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:40 compute-1 nova_compute[226101]: 2025-12-06 08:20:40.657 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:40 compute-1 nova_compute[226101]: 2025-12-06 08:20:40.657 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:20:40 compute-1 nova_compute[226101]: 2025-12-06 08:20:40.658 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:40 compute-1 sshd-session[312376]: Received disconnect from 124.18.141.70 port 50620:11: Bye Bye [preauth]
Dec 06 08:20:40 compute-1 sshd-session[312376]: Disconnected from authenticating user root 124.18.141.70 port 50620 [preauth]
Dec 06 08:20:40 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3733810263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:20:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:20:41 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/967755823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:41 compute-1 nova_compute[226101]: 2025-12-06 08:20:41.272 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:41 compute-1 sshd-session[312380]: Received disconnect from 186.87.166.141 port 36546:11: Bye Bye [preauth]
Dec 06 08:20:41 compute-1 sshd-session[312380]: Disconnected from authenticating user root 186.87.166.141 port 36546 [preauth]
Dec 06 08:20:41 compute-1 nova_compute[226101]: 2025-12-06 08:20:41.443 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:20:41 compute-1 nova_compute[226101]: 2025-12-06 08:20:41.445 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4253MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:20:41 compute-1 nova_compute[226101]: 2025-12-06 08:20:41.445 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:41 compute-1 nova_compute[226101]: 2025-12-06 08:20:41.445 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:41 compute-1 ceph-mon[81689]: pgmap v3850: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 06 08:20:41 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/967755823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:41 compute-1 nova_compute[226101]: 2025-12-06 08:20:41.986 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:20:41 compute-1 nova_compute[226101]: 2025-12-06 08:20:41.986 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:20:42 compute-1 nova_compute[226101]: 2025-12-06 08:20:42.021 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:42 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:42.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:42.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:42 compute-1 nova_compute[226101]: 2025-12-06 08:20:42.416 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:20:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2172302351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:42 compute-1 nova_compute[226101]: 2025-12-06 08:20:42.545 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:42 compute-1 nova_compute[226101]: 2025-12-06 08:20:42.554 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:20:42 compute-1 nova_compute[226101]: 2025-12-06 08:20:42.569 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:42 compute-1 nova_compute[226101]: 2025-12-06 08:20:42.607 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:20:42 compute-1 ceph-mon[81689]: pgmap v3851: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 06 08:20:42 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2172302351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:43 compute-1 nova_compute[226101]: 2025-12-06 08:20:43.146 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:20:43 compute-1 nova_compute[226101]: 2025-12-06 08:20:43.147 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:43 compute-1 nova_compute[226101]: 2025-12-06 08:20:43.186 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:43 compute-1 nova_compute[226101]: 2025-12-06 08:20:43.398 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:44.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:44 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:44.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:45 compute-1 sshd-session[312378]: Connection closed by 165.154.55.146 port 40292 [preauth]
Dec 06 08:20:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:46.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:46.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:46 compute-1 ceph-mon[81689]: pgmap v3852: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:20:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:47 compute-1 ceph-mon[81689]: pgmap v3853: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 14 KiB/s wr, 70 op/s
Dec 06 08:20:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:48.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:48.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:48 compute-1 nova_compute[226101]: 2025-12-06 08:20:48.188 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:48 compute-1 nova_compute[226101]: 2025-12-06 08:20:48.399 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:49 compute-1 ceph-mon[81689]: pgmap v3854: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Dec 06 08:20:50 compute-1 podman[312429]: 2025-12-06 08:20:50.098051541 +0000 UTC m=+0.072424073 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 08:20:50 compute-1 podman[312428]: 2025-12-06 08:20:50.100150027 +0000 UTC m=+0.074829997 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 08:20:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:50 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:50.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:50.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:50 compute-1 podman[312430]: 2025-12-06 08:20:50.17560981 +0000 UTC m=+0.149335864 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 08:20:51 compute-1 nova_compute[226101]: 2025-12-06 08:20:51.147 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:51 compute-1 nova_compute[226101]: 2025-12-06 08:20:51.147 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:20:51 compute-1 nova_compute[226101]: 2025-12-06 08:20:51.148 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:20:51 compute-1 nova_compute[226101]: 2025-12-06 08:20:51.174 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:20:51 compute-1 ceph-mon[81689]: pgmap v3855: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:20:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:52.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:52.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:53 compute-1 nova_compute[226101]: 2025-12-06 08:20:53.190 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:53 compute-1 nova_compute[226101]: 2025-12-06 08:20:53.400 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:53 compute-1 ceph-mon[81689]: pgmap v3856: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:20:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:54.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:54 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:54.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:55 compute-1 sshd-session[312493]: Received disconnect from 45.120.216.232 port 55578:11: Bye Bye [preauth]
Dec 06 08:20:55 compute-1 sshd-session[312493]: Disconnected from authenticating user root 45.120.216.232 port 55578 [preauth]
Dec 06 08:20:55 compute-1 ceph-mon[81689]: pgmap v3857: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:20:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:56 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:56 compute-1 sshd-session[312495]: Received disconnect from 154.219.116.39 port 47210:11: Bye Bye [preauth]
Dec 06 08:20:56 compute-1 sshd-session[312495]: Disconnected from authenticating user root 154.219.116.39 port 47210 [preauth]
Dec 06 08:20:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:57 compute-1 ceph-mon[81689]: pgmap v3858: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 23 KiB/s wr, 93 op/s
Dec 06 08:20:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2101064357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:57 compute-1 sshd-session[312497]: Received disconnect from 186.96.151.198 port 53330:11: Bye Bye [preauth]
Dec 06 08:20:57 compute-1 sshd-session[312497]: Disconnected from authenticating user root 186.96.151.198 port 53330 [preauth]
Dec 06 08:20:57 compute-1 nova_compute[226101]: 2025-12-06 08:20:57.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:20:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:20:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:58.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:58 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:58.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:58 compute-1 nova_compute[226101]: 2025-12-06 08:20:58.191 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:58 compute-1 nova_compute[226101]: 2025-12-06 08:20:58.402 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1953930024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:58 compute-1 nova_compute[226101]: 2025-12-06 08:20:58.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:59 compute-1 nova_compute[226101]: 2025-12-06 08:20:59.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:59 compute-1 nova_compute[226101]: 2025-12-06 08:20:59.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:20:59 compute-1 ceph-mon[81689]: pgmap v3859: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:21:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:21:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:00 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2460462847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:01 compute-1 ceph-mon[81689]: pgmap v3860: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 501 KiB/s rd, 13 KiB/s wr, 41 op/s
Dec 06 08:21:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1718808240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:01.698 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:01.698 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:01.699 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:21:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:21:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:02.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:02 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:02.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:02 compute-1 nova_compute[226101]: 2025-12-06 08:21:02.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:03 compute-1 nova_compute[226101]: 2025-12-06 08:21:03.194 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:03 compute-1 nova_compute[226101]: 2025-12-06 08:21:03.443 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:03 compute-1 ceph-mon[81689]: pgmap v3861: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 27 KiB/s wr, 44 op/s
Dec 06 08:21:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:21:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:04 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:04 compute-1 ceph-mon[81689]: pgmap v3862: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 27 KiB/s wr, 44 op/s
Dec 06 08:21:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:06.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:06.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:07 compute-1 ceph-mon[81689]: pgmap v3863: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 27 KiB/s wr, 45 op/s
Dec 06 08:21:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:07 compute-1 nova_compute[226101]: 2025-12-06 08:21:07.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:07 compute-1 nova_compute[226101]: 2025-12-06 08:21:07.802 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:07 compute-1 nova_compute[226101]: 2025-12-06 08:21:07.802 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:07 compute-1 nova_compute[226101]: 2025-12-06 08:21:07.835 226109 DEBUG nova.compute.manager [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:21:07 compute-1 nova_compute[226101]: 2025-12-06 08:21:07.959 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:07 compute-1 nova_compute[226101]: 2025-12-06 08:21:07.960 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:07 compute-1 nova_compute[226101]: 2025-12-06 08:21:07.970 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:21:07 compute-1 nova_compute[226101]: 2025-12-06 08:21:07.970 226109 INFO nova.compute.claims [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Claim successful on node compute-1.ctlplane.example.com
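From 08:21:07.802 onward the log shows Nova's build preamble: a per-instance lock keyed on the UUID serializes competing build requests, and inside it the ResourceTracker takes the host-wide "compute_resources" lock to claim CPU, RAM and disk before anything is created (the NUMA line is the fit check short-circuiting because neither side defines a NUMA topology). A rough sketch of that nesting, with hypothetical helper names standing in for the real manager internals:

    from oslo_concurrency import lockutils

    def build_and_run_instance(instance):
        # outer lock: one build at a time per instance UUID
        with lockutils.lock(instance.uuid):
            # inner lock: host resource accounting is single-writer
            with lockutils.lock('compute_resources'):
                claim = resource_tracker_claim(instance)  # hypothetical helper
            spawn_on_hypervisor(instance, claim)          # hypothetical helper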
Dec 06 08:21:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 e426: 3 total, 3 up, 3 in
Dec 06 08:21:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:08.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:08.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:08 compute-1 nova_compute[226101]: 2025-12-06 08:21:08.195 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:08 compute-1 nova_compute[226101]: 2025-12-06 08:21:08.363 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:21:08 compute-1 nova_compute[226101]: 2025-12-06 08:21:08.444 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:21:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1292675319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:08 compute-1 nova_compute[226101]: 2025-12-06 08:21:08.773 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:21:08 compute-1 nova_compute[226101]: 2025-12-06 08:21:08.779 226109 DEBUG nova.compute.provider_tree [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
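The `ceph df --format=json` round trip (issued at 08:21:08.363, visible arriving at the mon as a mon_command, back in 0.410s) is how the RBD image backend samples pool capacity before updating the provider tree. A sketch of the call and of pulling pool stats out of the reply; the exact JSON field names vary across Ceph releases, so treat them as assumptions:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)
    # 'pools'/'stats' layout assumed; recent releases report per-pool
    # usage and headroom under keys like bytes_used and max_avail
    for pool in df.get('pools', []):
        print(pool['name'], pool['stats'].get('bytes_used'),
              pool['stats'].get('max_avail'))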
Dec 06 08:21:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:21:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4290350105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:21:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:21:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4290350105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:21:09 compute-1 ceph-mon[81689]: pgmap v3864: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 474 KiB/s rd, 17 KiB/s wr, 28 op/s
Dec 06 08:21:09 compute-1 ceph-mon[81689]: osdmap e426: 3 total, 3 up, 3 in
Dec 06 08:21:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1292675319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4290350105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:21:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4290350105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:21:09 compute-1 nova_compute[226101]: 2025-12-06 08:21:09.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:09 compute-1 nova_compute[226101]: 2025-12-06 08:21:09.754 226109 DEBUG nova.scheduler.client.report [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:21:09 compute-1 nova_compute[226101]: 2025-12-06 08:21:09.795 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
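The inventory blob at 08:21:09.754 encodes the host's schedulable capacity: placement treats each resource class as (total - reserved) * allocation_ratio. For this host that works out to 32 vCPUs, 7168 MB of RAM and about 17 GB of disk, which is why a 1-vCPU/128 MB claim succeeds instantly. A two-line check of the arithmetic:

    for total, reserved, ratio in [(8, 0, 4.0), (7680, 512, 1.0), (20, 1, 0.9)]:
        print((total - reserved) * ratio)   # -> 32.0, 7168.0, ~17.1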
Dec 06 08:21:09 compute-1 nova_compute[226101]: 2025-12-06 08:21:09.796 226109 DEBUG nova.compute.manager [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:21:09 compute-1 nova_compute[226101]: 2025-12-06 08:21:09.893 226109 DEBUG nova.compute.manager [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:21:09 compute-1 nova_compute[226101]: 2025-12-06 08:21:09.893 226109 DEBUG nova.network.neutron [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:21:09 compute-1 nova_compute[226101]: 2025-12-06 08:21:09.925 226109 INFO nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:21:09 compute-1 nova_compute[226101]: 2025-12-06 08:21:09.959 226109 DEBUG nova.compute.manager [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.128 226109 DEBUG nova.compute.manager [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.129 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.130 226109 INFO nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Creating image(s)
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.156 226109 DEBUG nova.storage.rbd_utils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.185 226109 DEBUG nova.storage.rbd_utils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:21:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:10.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:10.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.211 226109 DEBUG nova.storage.rbd_utils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.214 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.240 226109 DEBUG nova.policy [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0432cb6633e14c1b86fc320e7f3bb880', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.276 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
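The qemu-img probe at 08:21:10.214 is wrapped in oslo.concurrency's prlimit helper, which re-execs the command under resource limits (here a 1 GiB address-space cap and 30 s of CPU) so a malformed image cannot wedge the compute service. The same invocation expressed through the library rather than spelled out by hand:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef',
        '--force-share', '--output=json',
        prlimit=limits)   # produces the "python3 -m oslo_concurrency.prlimit
                          # --as=1073741824 --cpu=30 -- ..." cmdline logged above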
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.276 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.277 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.277 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.302 226109 DEBUG nova.storage.rbd_utils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.305 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.583 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.659 226109 DEBUG nova.storage.rbd_utils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] resizing rbd image 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
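With the Rbd image backend, "Creating image(s)" means importing the cached base image into the vms pool and then growing it to the flavor's root disk; the resize target 1073741824 is simply root_gb=1 (flavor m1.nano, logged further down) expressed in bytes. In outline:

    GiB = 1024 ** 3
    assert 1 * GiB == 1073741824   # flavor m1.nano: root_gb=1
    # CLI steps roughly equivalent to the two log lines above
    # (Nova performs the resize via librbd, not the rbd CLI):
    #   rbd import --pool vms <base> <uuid>_disk --image-format=2
    #   rbd resize --pool vms <uuid>_disk --size 1G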
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.771 226109 DEBUG nova.objects.instance [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 173105e8-7c00-4b0a-86b1-b0acabe5cbeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.847 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.847 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Ensure instance console log exists: /var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.848 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.848 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:10 compute-1 nova_compute[226101]: 2025-12-06 08:21:10.848 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:21:11 compute-1 ceph-mon[81689]: pgmap v3866: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 197 KiB/s rd, 17 KiB/s wr, 7 op/s
Dec 06 08:21:11 compute-1 nova_compute[226101]: 2025-12-06 08:21:11.382 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:11.382 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=102, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=101) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:21:11 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:11.384 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:21:11 compute-1 nova_compute[226101]: 2025-12-06 08:21:11.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:12 compute-1 nova_compute[226101]: 2025-12-06 08:21:12.035 226109 DEBUG nova.network.neutron [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Successfully created port: 1bb1602d-28f5-4171-88dc-24be47ec272f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
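Port creation runs asynchronously with image preparation: the allocation kicked off at 08:21:09.893 reports its Neutron port only here, at 08:21:12.035. A hypothetical equivalent of that request made through openstacksdk rather than Nova's internal Neutron client (the cloud name and argument set are illustrative, not what Nova actually sends):

    import openstack

    conn = openstack.connect(cloud='default')   # assumed clouds.yaml entry
    port = conn.network.create_port(
        network_id='8e3ac9aa-9766-45cd-a8b5-88cc15193af6',
        device_id='173105e8-7c00-4b0a-86b1-b0acabe5cbeb',
        device_owner='compute:nova')
    print(port.id)   # cf. 1bb1602d-28f5-4171-88dc-24be47ec272f in the log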
Dec 06 08:21:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:12.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:12.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:13 compute-1 ceph-mon[81689]: pgmap v3867: 305 pgs: 305 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 280 KiB/s rd, 1.0 MiB/s wr, 54 op/s
Dec 06 08:21:13 compute-1 nova_compute[226101]: 2025-12-06 08:21:13.197 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:13 compute-1 nova_compute[226101]: 2025-12-06 08:21:13.447 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:13 compute-1 sshd-session[312687]: Received disconnect from 101.100.194.199 port 41698:11: Bye Bye [preauth]
Dec 06 08:21:13 compute-1 sshd-session[312687]: Disconnected from authenticating user root 101.100.194.199 port 41698 [preauth]
Dec 06 08:21:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:14.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:14.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:15 compute-1 ceph-mon[81689]: pgmap v3868: 305 pgs: 305 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 280 KiB/s rd, 1.0 MiB/s wr, 54 op/s
Dec 06 08:21:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:16.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:16.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:17.385 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
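This transaction is the deferred write promised at 08:21:11.384 ("Delaying updating chassis table for 6 seconds"): the agent acknowledges the SB_Global bump to nb_cfg=102 by copying it into its Chassis_Private external_ids, which is how OVN tracks metadata-agent liveness, and the randomized delay spreads those writes across many chassis. Roughly, via ovsdbapp (db_set is real ovsdbapp API; the southbound connection setup is elided and the record UUID is taken from the log line above):

    # api: an ovsdbapp backend connected to the OVN southbound DB (setup elided)
    api.db_set(
        'Chassis_Private', '03fe054d-d727-4af3-9c5e-92e57505f242',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'})
    ).execute(check_error=True)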
Dec 06 08:21:17 compute-1 ceph-mon[81689]: pgmap v3869: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 06 08:21:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:18 compute-1 nova_compute[226101]: 2025-12-06 08:21:18.199 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:18.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:18.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:18 compute-1 nova_compute[226101]: 2025-12-06 08:21:18.448 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:19 compute-1 ceph-mon[81689]: pgmap v3870: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 2.1 MiB/s wr, 174 op/s
Dec 06 08:21:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:20.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:20.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:21 compute-1 ceph-mon[81689]: pgmap v3871: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 168 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 08:21:21 compute-1 podman[312690]: 2025-12-06 08:21:21.095805507 +0000 UTC m=+0.074440317 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:21:21 compute-1 podman[312691]: 2025-12-06 08:21:21.120371894 +0000 UTC m=+0.085206484 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 06 08:21:21 compute-1 podman[312696]: 2025-12-06 08:21:21.126474769 +0000 UTC m=+0.093692823 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec 06 08:21:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:22.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:22.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:23 compute-1 nova_compute[226101]: 2025-12-06 08:21:23.201 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:23 compute-1 ceph-mon[81689]: pgmap v3872: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 216 KiB/s rd, 1.8 MiB/s wr, 226 op/s
Dec 06 08:21:23 compute-1 nova_compute[226101]: 2025-12-06 08:21:23.449 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:24.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:24.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:24 compute-1 nova_compute[226101]: 2025-12-06 08:21:24.413 226109 DEBUG nova.network.neutron [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Successfully updated port: 1bb1602d-28f5-4171-88dc-24be47ec272f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:21:24 compute-1 nova_compute[226101]: 2025-12-06 08:21:24.444 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:21:24 compute-1 nova_compute[226101]: 2025-12-06 08:21:24.444 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquired lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:21:24 compute-1 nova_compute[226101]: 2025-12-06 08:21:24.444 226109 DEBUG nova.network.neutron [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:21:24 compute-1 nova_compute[226101]: 2025-12-06 08:21:24.585 226109 DEBUG nova.compute.manager [req-ae56ba28-eed9-4071-a263-73df6902877f req-d6e769c1-3671-4672-bdb8-13c49f24cb4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Received event network-changed-1bb1602d-28f5-4171-88dc-24be47ec272f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:21:24 compute-1 nova_compute[226101]: 2025-12-06 08:21:24.585 226109 DEBUG nova.compute.manager [req-ae56ba28-eed9-4071-a263-73df6902877f req-d6e769c1-3671-4672-bdb8-13c49f24cb4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Refreshing instance network info cache due to event network-changed-1bb1602d-28f5-4171-88dc-24be47ec272f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:21:24 compute-1 nova_compute[226101]: 2025-12-06 08:21:24.586 226109 DEBUG oslo_concurrency.lockutils [req-ae56ba28-eed9-4071-a263-73df6902877f req-d6e769c1-3671-4672-bdb8-13c49f24cb4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:21:24 compute-1 nova_compute[226101]: 2025-12-06 08:21:24.753 226109 DEBUG nova.network.neutron [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:21:25 compute-1 ceph-mon[81689]: pgmap v3873: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 964 KiB/s wr, 185 op/s
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.625 226109 DEBUG nova.network.neutron [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Updating instance_info_cache with network_info: [{"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.698 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Releasing lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.698 226109 DEBUG nova.compute.manager [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Instance network_info: |[{"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
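The network_info blob logged between the | bars is plain JSON, so its interesting fields are easy to pull out: MAC fa:16:3e:c2:53:b8, fixed IP 10.100.0.6 on 10.100.0.0/28, and MTU 1442, which is the typical 1500 minus Geneve encapsulation overhead on ML2/OVN tenant networks. For instance, over a trimmed copy of the payload:

    import json

    raw = '''[{"id": "1bb1602d-28f5-4171-88dc-24be47ec272f",
               "address": "fa:16:3e:c2:53:b8",
               "network": {"subnets": [{"ips": [{"address": "10.100.0.6"}]}],
                           "meta": {"mtu": 1442}}}]'''   # trimmed from the log
    vif = json.loads(raw)[0]
    print(vif['address'],                                # fa:16:3e:c2:53:b8
          vif['network']['subnets'][0]['ips'][0]['address'],  # 10.100.0.6
          vif['network']['meta']['mtu'])                 # 1442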
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.700 226109 DEBUG oslo_concurrency.lockutils [req-ae56ba28-eed9-4071-a263-73df6902877f req-d6e769c1-3671-4672-bdb8-13c49f24cb4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.700 226109 DEBUG nova.network.neutron [req-ae56ba28-eed9-4071-a263-73df6902877f req-d6e769c1-3671-4672-bdb8-13c49f24cb4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Refreshing network info cache for port 1bb1602d-28f5-4171-88dc-24be47ec272f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.706 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Start _get_guest_xml network_info=[{"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.713 226109 WARNING nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.721 226109 DEBUG nova.virt.libvirt.host [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.722 226109 DEBUG nova.virt.libvirt.host [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.725 226109 DEBUG nova.virt.libvirt.host [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.726 226109 DEBUG nova.virt.libvirt.host [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
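The two probes above show the host is cgroup-v2 only: the v1 cpu controller is missing, the v2 one is found. On the v2 side, detection amounts to reading the unified hierarchy's controller list; a stand-in sketch, assuming the standard mount point rather than Nova's exact implementation:

    def has_cgroupv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        # the file lists the controllers available in the unified hierarchy,
        # e.g. "cpuset cpu io memory pids"
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False   # not a cgroup-v2 host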
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.727 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.727 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.728 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.728 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.728 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.728 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.729 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.729 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.729 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.729 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.729 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.730 226109 DEBUG nova.virt.hardware [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
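The nova.virt.hardware lines above trace CPU topology selection for this boot: the m1.nano flavor and the cirros image set no hw:cpu_* constraints, so limits and preferences all read 0 (unset), the maximum defaults to 65536 per dimension, and the only topology whose dimensions multiply out to 1 vCPU is sockets=1, cores=1, threads=1. A minimal sketch of that enumeration, assuming the simple product rule the log implies (an illustration, not nova's actual implementation):

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, capped by the per-dimension maximums shown in the log.
    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                               range(1, min(vcpus, max_cores) + 1),
                               range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching "Got 1 possible topologies"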
Dec 06 08:21:25 compute-1 nova_compute[226101]: 2025-12-06 08:21:25.732 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:21:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:21:26 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/359431386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:21:26 compute-1 nova_compute[226101]: 2025-12-06 08:21:26.173 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
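Before touching any RBD image, nova's storage driver shells out to the ceph CLI to learn the monitor addresses, and oslo.concurrency's processutils logs the exact command and its exit status, as above. A sketch of the same lookup through processutils.execute (the helper named in the log paths), assuming the standard "mons" key in the mon-map JSON:

    import json
    from oslo_concurrency import processutils

    # Same invocation as the logged "Running cmd (subprocess)" line.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    # These monitor addresses are what later appear as <host name=.../>
    # elements in the guest XML's RBD <source> blocks.
    print([m['name'] for m in mon_map.get('mons', [])])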
Dec 06 08:21:26 compute-1 nova_compute[226101]: 2025-12-06 08:21:26.204 226109 DEBUG nova.storage.rbd_utils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:21:26 compute-1 nova_compute[226101]: 2025-12-06 08:21:26.209 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:21:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:26.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:26.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
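The anonymous "HEAD / HTTP/1.0" requests arriving at radosgw from 192.168.122.100 and .102 every couple of seconds have the shape of load-balancer health probes rather than client traffic. A probe of the same form, with the port as an assumption (the log does not record which port the beast frontend is bound to):

    import http.client

    # Hypothetical health probe; the port is an assumption for illustration.
    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 while radosgw is serving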
Dec 06 08:21:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/359431386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:21:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:21:26 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/190565723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.100 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.891s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.102 226109 DEBUG nova.virt.libvirt.vif [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:21:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-200343605',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-200343605',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=215,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbFho9uZYw/C67SrAZAOll1Yn6PYb0ai5sIBHmAL6dW73Ch+qRjce9a6w2oaJggOsc3UVjACoOu/DVsfm+enspt+H1pyOFCemHZp5ou5LgCYmgT/pXXIRPYRaBdn6h/nw==',key_name='tempest-TestSecurityGroupsBasicOps-443327318',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-x1jocr9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:21:10Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=173105e8-7c00-4b0a-86b1-b0acabe5cbeb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, 
"active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.102 226109 DEBUG nova.network.os_vif_util [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.103 226109 DEBUG nova.network.os_vif_util [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:53:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bb1602d-28f5-4171-88dc-24be47ec272f,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb1602d-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
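nova_to_osvif_vif translates the Neutron-supplied VIF dict into an os-vif VIFOpenVSwitch object, and the two lines above show both sides of that translation. The field mapping can be read straight off the logged JSON; the sketch below is a plain re-derivation for illustration (the real conversion lives in nova.network.os_vif_util, as the log paths say):

    # Re-derive the VIFOpenVSwitch fields from the logged VIF dict (abbreviated).
    vif = {
        "id": "1bb1602d-28f5-4171-88dc-24be47ec272f",
        "address": "fa:16:3e:c2:53:b8",
        "type": "ovs",
        "details": {"port_filter": True, "bridge_name": "br-int"},
        "devname": "tap1bb1602d-28",
        "active": False,
        "preserve_on_delete": False,
    }

    osvif_fields = {
        "active": vif["active"],
        "address": vif["address"],
        "bridge_name": vif["details"]["bridge_name"],
        "has_traffic_filtering": vif["details"]["port_filter"],
        "id": vif["id"],
        "preserve_on_delete": vif["preserve_on_delete"],
        "vif_name": vif["devname"],
    }
    print(osvif_fields)  # matches the "Converted object VIFOpenVSwitch(...)" line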
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.104 226109 DEBUG nova.objects.instance [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 173105e8-7c00-4b0a-86b1-b0acabe5cbeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.121 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:21:27 compute-1 nova_compute[226101]:   <uuid>173105e8-7c00-4b0a-86b1-b0acabe5cbeb</uuid>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   <name>instance-000000d7</name>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-200343605</nova:name>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:21:25</nova:creationTime>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <nova:user uuid="0432cb6633e14c1b86fc320e7f3bb880">tempest-TestSecurityGroupsBasicOps-568463891-project-member</nova:user>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <nova:project uuid="5d23d1d6ffc142eaa9bee0ef93fe60e4">tempest-TestSecurityGroupsBasicOps-568463891</nova:project>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <nova:port uuid="1bb1602d-28f5-4171-88dc-24be47ec272f">
Dec 06 08:21:27 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <system>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <entry name="serial">173105e8-7c00-4b0a-86b1-b0acabe5cbeb</entry>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <entry name="uuid">173105e8-7c00-4b0a-86b1-b0acabe5cbeb</entry>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     </system>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   <os>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   </os>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   <features>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   </features>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk">
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       </source>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk.config">
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       </source>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:21:27 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:c2:53:b8"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <target dev="tap1bb1602d-28"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb/console.log" append="off"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <video>
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     </video>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:21:27 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:21:27 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:21:27 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:21:27 compute-1 nova_compute[226101]: </domain>
Dec 06 08:21:27 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
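The domain XML above is the complete guest definition handed to libvirt: 131072 KiB of RAM (libvirt's <memory> element defaults to KiB, so this is the flavor's 128 MiB), one vCPU on a q35 machine, both disks served over RBD from the three monitors, and the tap device created during VIF plugging. It can be sanity-checked offline with nothing but the standard library; the snippet below parses an abbreviated copy of the document (full text above):

    import xml.etree.ElementTree as ET

    # Abbreviated copy of the logged guest XML, enough for a structural check.
    xml_text = """<domain type="kvm">
      <uuid>173105e8-7c00-4b0a-86b1-b0acabe5cbeb</uuid>
      <name>instance-000000d7</name>
      <memory>131072</memory>
      <vcpu>1</vcpu>
    </domain>"""

    dom = ET.fromstring(xml_text)
    assert dom.get("type") == "kvm"
    print(dom.findtext("name"), int(dom.findtext("memory")) // 1024, "MiB")  # 128 MiB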
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.123 226109 DEBUG nova.compute.manager [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Preparing to wait for external event network-vif-plugged-1bb1602d-28f5-4171-88dc-24be47ec272f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.123 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.123 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.124 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
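The acquire/release pair above guards registration of the network-vif-plugged event that nova will wait on after defining the domain; the lock name is simply "<instance uuid>-events". The same serialization pattern through oslo.concurrency, as a minimal sketch (the body is a placeholder, not nova's event bookkeeping):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events')
    def _create_or_get_event():
        # nova stores a threading.Event here, keyed
        # "network-vif-plugged-<port id>", and waits on it until Neutron
        # reports the port as up.
        pass

    _create_or_get_event()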
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.124 226109 DEBUG nova.virt.libvirt.vif [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:21:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-200343605',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-200343605',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=215,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbFho9uZYw/C67SrAZAOll1Yn6PYb0ai5sIBHmAL6dW73Ch+qRjce9a6w2oaJggOsc3UVjACoOu/DVsfm+enspt+H1pyOFCemHZp5ou5LgCYmgT/pXXIRPYRaBdn6h/nw==',key_name='tempest-TestSecurityGroupsBasicOps-443327318',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-x1jocr9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:21:10Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=173105e8-7c00-4b0a-86b1-b0acabe5cbeb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.125 226109 DEBUG nova.network.os_vif_util [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.126 226109 DEBUG nova.network.os_vif_util [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:53:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bb1602d-28f5-4171-88dc-24be47ec272f,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb1602d-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.126 226109 DEBUG os_vif [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:53:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bb1602d-28f5-4171-88dc-24be47ec272f,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb1602d-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.126 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.127 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.127 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.130 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.130 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bb1602d-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.131 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1bb1602d-28, col_values=(('external_ids', {'iface-id': '1bb1602d-28f5-4171-88dc-24be47ec272f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:53:b8', 'vm-uuid': '173105e8-7c00-4b0a-86b1-b0acabe5cbeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
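The two commands in this transaction add tap1bb1602d-28 to br-int and stamp the Interface row with the external_ids that let ovn-controller match the port to its logical port. Expressed as the equivalent ovs-vsctl invocation (illustrative; os-vif actually speaks native OVSDB through ovsdbapp, as logged):

    import subprocess

    port = 'tap1bb1602d-28'
    # One OVSDB transaction: add-port plus the external_ids consumed by OVN.
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port, '--',
         'set', 'Interface', port,
         'external_ids:iface-id=1bb1602d-28f5-4171-88dc-24be47ec272f',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:c2:53:b8',
         'external_ids:vm-uuid=173105e8-7c00-4b0a-86b1-b0acabe5cbeb'],
        check=True)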
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.162 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:27 compute-1 NetworkManager[49031]: <info>  [1765009287.1640] manager: (tap1bb1602d-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.165 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.173 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.173 226109 INFO os_vif [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:53:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bb1602d-28f5-4171-88dc-24be47ec272f,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb1602d-28')
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.261 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.262 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.262 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No VIF found with MAC fa:16:3e:c2:53:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.262 226109 INFO nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Using config drive
Dec 06 08:21:27 compute-1 nova_compute[226101]: 2025-12-06 08:21:27.291 226109 DEBUG nova.storage.rbd_utils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:21:27 compute-1 ceph-mon[81689]: pgmap v3874: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 965 KiB/s wr, 185 op/s
Dec 06 08:21:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/18612945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/190565723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:21:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:28.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:28.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:28 compute-1 nova_compute[226101]: 2025-12-06 08:21:28.250 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:28 compute-1 nova_compute[226101]: 2025-12-06 08:21:28.423 226109 INFO nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Creating config drive at /var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb/disk.config
Dec 06 08:21:28 compute-1 nova_compute[226101]: 2025-12-06 08:21:28.427 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzfbwfgcn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:21:28 compute-1 nova_compute[226101]: 2025-12-06 08:21:28.557 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzfbwfgcn" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
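The config drive is built as an ISO9660 image with Joliet and Rock Ridge extensions and the config-2 volume label that cloud-init looks for. Note that processutils joins argv with spaces when logging, so the multi-word publisher string appears unquoted above; the list form below assumes it was a single argument. /tmp/tmpzfbwfgcn is nova's temporary metadata staging directory:

    import subprocess

    # Same command as logged, reconstructed as an argv list.
    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', '/var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2',
         '/tmp/tmpzfbwfgcn'],
        check=True)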
Dec 06 08:21:28 compute-1 sudo[312846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:28 compute-1 sudo[312846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:28 compute-1 sudo[312846]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:28 compute-1 sudo[312871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:21:28 compute-1 sudo[312871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:28 compute-1 sudo[312871]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:28 compute-1 sudo[312896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:28 compute-1 sudo[312896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:28 compute-1 sudo[312896]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:28 compute-1 sudo[312928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 08:21:28 compute-1 sudo[312928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:29 compute-1 ceph-mon[81689]: pgmap v3875: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 71 KiB/s rd, 3.2 KiB/s wr, 117 op/s
Dec 06 08:21:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2345884700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.133 226109 DEBUG nova.storage.rbd_utils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.137 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb/disk.config 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.332 226109 DEBUG oslo_concurrency.processutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb/disk.config 173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.333 226109 INFO nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Deleting local config drive /var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb/disk.config because it was imported into RBD.
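Because this deployment keeps instance disks in Ceph, the freshly built ISO is imported into the vms pool under the name the cdrom <source> element in the guest XML already references, and the local copy is then removed. The same two steps as a sketch:

    import os
    import subprocess

    base = '/var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb'
    # Import the ISO into RBD (same argv as the logged command) ...
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', base + '/disk.config',
         '173105e8-7c00-4b0a-86b1-b0acabe5cbeb_disk.config',
         '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    # ... then delete the local file, as the INFO line above records.
    os.unlink(base + '/disk.config')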
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.336 226109 DEBUG nova.network.neutron [req-ae56ba28-eed9-4071-a263-73df6902877f req-d6e769c1-3671-4672-bdb8-13c49f24cb4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Updated VIF entry in instance network info cache for port 1bb1602d-28f5-4171-88dc-24be47ec272f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.336 226109 DEBUG nova.network.neutron [req-ae56ba28-eed9-4071-a263-73df6902877f req-d6e769c1-3671-4672-bdb8-13c49f24cb4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Updating instance_info_cache with network_info: [{"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:21:29 compute-1 virtqemud[225710]: End of file while reading data: Input/output error
Dec 06 08:21:29 compute-1 virtqemud[225710]: End of file while reading data: Input/output error
Dec 06 08:21:29 compute-1 podman[313046]: 2025-12-06 08:21:29.355898117 +0000 UTC m=+0.076538623 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.359 226109 DEBUG oslo_concurrency.lockutils [req-ae56ba28-eed9-4071-a263-73df6902877f req-d6e769c1-3671-4672-bdb8-13c49f24cb4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:21:29 compute-1 kernel: tap1bb1602d-28: entered promiscuous mode
Dec 06 08:21:29 compute-1 NetworkManager[49031]: <info>  [1765009289.3925] manager: (tap1bb1602d-28): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Dec 06 08:21:29 compute-1 ovn_controller[130279]: 2025-12-06T08:21:29Z|00848|binding|INFO|Claiming lport 1bb1602d-28f5-4171-88dc-24be47ec272f for this chassis.
Dec 06 08:21:29 compute-1 ovn_controller[130279]: 2025-12-06T08:21:29Z|00849|binding|INFO|1bb1602d-28f5-4171-88dc-24be47ec272f: Claiming fa:16:3e:c2:53:b8 10.100.0.6
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.465 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.473 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:29 compute-1 systemd-udevd[313083]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:21:29 compute-1 systemd-machined[190302]: New machine qemu-97-instance-000000d7.
Dec 06 08:21:29 compute-1 NetworkManager[49031]: <info>  [1765009289.5024] device (tap1bb1602d-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:21:29 compute-1 NetworkManager[49031]: <info>  [1765009289.5037] device (tap1bb1602d-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.502 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:53:b8 10.100.0.6'], port_security=['fa:16:3e:c2:53:b8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '173105e8-7c00-4b0a-86b1-b0acabe5cbeb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '353e35c5-d221-4dce-9275-d54187ac5a75 c1d372ec-4c65-4af5-9c55-76198d157b1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d7af5e0-a504-41e1-b9f6-95b3d5bf94dd, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=1bb1602d-28f5-4171-88dc-24be47ec272f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.503 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 1bb1602d-28f5-4171-88dc-24be47ec272f in datapath 8e3ac9aa-9766-45cd-a8b5-88cc15193af6 bound to our chassis
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.504 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e3ac9aa-9766-45cd-a8b5-88cc15193af6
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.516 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[22a3a70f-de60-4fde-bf30-7489318a6bbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.517 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e3ac9aa-91 in ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
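Provisioning metadata for the network means giving it an isolated namespace, ovnmeta-<network uuid>, reachable through a veth pair: tap8e3ac9aa-90 stays on the host side while tap8e3ac9aa-91 moves inside. The agent drives this through pyroute2 calls proxied by the privsep daemon (the numbered replies below); an illustrative iproute2 equivalent, not what actually ran on the host:

    import subprocess

    ns = 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6'
    # Shell-level equivalent of the agent's namespace + veth provisioning.
    subprocess.run(['ip', 'netns', 'add', ns], check=True)
    subprocess.run(['ip', 'link', 'add', 'tap8e3ac9aa-90',
                    'type', 'veth', 'peer', 'name', 'tap8e3ac9aa-91'], check=True)
    subprocess.run(['ip', 'link', 'set', 'tap8e3ac9aa-91', 'netns', ns], check=True)
    subprocess.run(['ip', '-n', ns, 'link', 'set', 'tap8e3ac9aa-91', 'up'], check=True)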
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.518 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e3ac9aa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.518 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[78838ec8-d542-48c1-8ce3-1714177b58e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.519 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0faf2f30-21df-4265-b748-dcc94e55cc59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 podman[313046]: 2025-12-06 08:21:29.528047031 +0000 UTC m=+0.248687537 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.528 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:29 compute-1 ovn_controller[130279]: 2025-12-06T08:21:29Z|00850|binding|INFO|Setting lport 1bb1602d-28f5-4171-88dc-24be47ec272f ovn-installed in OVS
Dec 06 08:21:29 compute-1 ovn_controller[130279]: 2025-12-06T08:21:29Z|00851|binding|INFO|Setting lport 1bb1602d-28f5-4171-88dc-24be47ec272f up in Southbound
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.530 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[e7701686-cbe9-4259-8212-83cfff2d974c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.530 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:29 compute-1 systemd[1]: Started Virtual Machine qemu-97-instance-000000d7.
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.552 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[122bf83a-87ac-4401-90d9-9ad24765e536]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.580 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[a11d8be0-6eb4-47b0-8d81-c23bb3d065a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.586 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6156c607-c204-467a-bed4-0df94d32c8da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 NetworkManager[49031]: <info>  [1765009289.5870] manager: (tap8e3ac9aa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/396)
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.618 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[fa000da4-3fc3-4f73-b256-37452f3914d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.621 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[e2dbcf14-cc43-4dbb-91cf-10b5fafce48d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 NetworkManager[49031]: <info>  [1765009289.6466] device (tap8e3ac9aa-90): carrier: link connected
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.648 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[06c20981-6d36-4086-8fea-b1dff406f563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.668 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f616f31f-fc76-4217-9d6f-cfa4125717d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e3ac9aa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:38:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956914, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313129, 'error': None, 'target': 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.685 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3e9cf4a4-d9b4-4a2f-bba6-175e175b0efc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:3801'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 956914, 'tstamp': 956914}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313130, 'error': None, 'target': 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.701 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a43ee7fd-9813-4322-b655-6afed220577d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e3ac9aa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:38:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956914, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313131, 'error': None, 'target': 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.732 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0b31dd11-480c-4824-9b61-df8d848687f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.786 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e2f7b7-2ef2-4750-b895-666236ef16e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.788 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e3ac9aa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.788 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.789 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e3ac9aa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.790 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:29 compute-1 NetworkManager[49031]: <info>  [1765009289.7912] manager: (tap8e3ac9aa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Dec 06 08:21:29 compute-1 kernel: tap8e3ac9aa-90: entered promiscuous mode
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.793 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.794 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e3ac9aa-90, col_values=(('external_ids', {'iface-id': 'e79b0cd9-d776-4f44-8b09-bc4e745f59bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.795 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:29 compute-1 ovn_controller[130279]: 2025-12-06T08:21:29Z|00852|binding|INFO|Releasing lport e79b0cd9-d776-4f44-8b09-bc4e745f59bc from this chassis (sb_readonly=0)
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.813 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.814 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e3ac9aa-9766-45cd-a8b5-88cc15193af6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e3ac9aa-9766-45cd-a8b5-88cc15193af6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.815 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a2842af2-a273-48ab-82c6-7d5624494853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.816 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-8e3ac9aa-9766-45cd-a8b5-88cc15193af6
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/8e3ac9aa-9766-45cd-a8b5-88cc15193af6.pid.haproxy
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 8e3ac9aa-9766-45cd-a8b5-88cc15193af6
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:21:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:21:29.817 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'env', 'PROCESS_TAG=haproxy-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e3ac9aa-9766-45cd-a8b5-88cc15193af6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.940 226109 DEBUG nova.compute.manager [req-5d7d5862-7c9f-4cb6-901f-e963313849fe req-065d24db-1504-4445-ac90-c595262a9c9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Received event network-vif-plugged-1bb1602d-28f5-4171-88dc-24be47ec272f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.943 226109 DEBUG oslo_concurrency.lockutils [req-5d7d5862-7c9f-4cb6-901f-e963313849fe req-065d24db-1504-4445-ac90-c595262a9c9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.943 226109 DEBUG oslo_concurrency.lockutils [req-5d7d5862-7c9f-4cb6-901f-e963313849fe req-065d24db-1504-4445-ac90-c595262a9c9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.944 226109 DEBUG oslo_concurrency.lockutils [req-5d7d5862-7c9f-4cb6-901f-e963313849fe req-065d24db-1504-4445-ac90-c595262a9c9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.944 226109 DEBUG nova.compute.manager [req-5d7d5862-7c9f-4cb6-901f-e963313849fe req-065d24db-1504-4445-ac90-c595262a9c9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Processing event network-vif-plugged-1bb1602d-28f5-4171-88dc-24be47ec272f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.970 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009289.9700518, 173105e8-7c00-4b0a-86b1-b0acabe5cbeb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.970 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] VM Started (Lifecycle Event)
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.972 226109 DEBUG nova.compute.manager [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.976 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.980 226109 INFO nova.virt.libvirt.driver [-] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Instance spawned successfully.
Dec 06 08:21:29 compute-1 nova_compute[226101]: 2025-12-06 08:21:29.981 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.000 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.000 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.001 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.001 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.001 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.001 226109 DEBUG nova.virt.libvirt.driver [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.005 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.013 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.046 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.046 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009289.9702306, 173105e8-7c00-4b0a-86b1-b0acabe5cbeb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.046 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] VM Paused (Lifecycle Event)
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.074 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.081 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009289.9764907, 173105e8-7c00-4b0a-86b1-b0acabe5cbeb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.081 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] VM Resumed (Lifecycle Event)
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.098 226109 INFO nova.compute.manager [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Took 19.97 seconds to spawn the instance on the hypervisor.
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.098 226109 DEBUG nova.compute.manager [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:21:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.133 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.136 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.206 226109 INFO nova.compute.manager [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Took 22.29 seconds to build instance.
Dec 06 08:21:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:30.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:30.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:30 compute-1 podman[313269]: 2025-12-06 08:21:30.150081696 +0000 UTC m=+0.025841813 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:21:30 compute-1 nova_compute[226101]: 2025-12-06 08:21:30.433 226109 DEBUG oslo_concurrency.lockutils [None req-b69fb949-3bce-4773-85b0-5d7d7f956a78 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:21:30 compute-1 podman[313269]: 2025-12-06 08:21:30.479164499 +0000 UTC m=+0.354924596 container create 5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 08:21:30 compute-1 sshd-session[312999]: Received disconnect from 154.209.4.183 port 47436:11: Bye Bye [preauth]
Dec 06 08:21:30 compute-1 sshd-session[312999]: Disconnected from authenticating user root 154.209.4.183 port 47436 [preauth]
Dec 06 08:21:30 compute-1 systemd[1]: Started libpod-conmon-5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195.scope.
Dec 06 08:21:30 compute-1 sudo[312928]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:30 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:21:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3413ee81d5605eb0502e1c815cbc4489a92dd3e6561d4921d1195e47ef9eb791/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:30 compute-1 podman[313269]: 2025-12-06 08:21:30.607009596 +0000 UTC m=+0.482769703 container init 5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 08:21:30 compute-1 podman[313269]: 2025-12-06 08:21:30.613465538 +0000 UTC m=+0.489225635 container start 5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 08:21:30 compute-1 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[313307]: [NOTICE]   (313311) : New worker (313316) forked
Dec 06 08:21:30 compute-1 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[313307]: [NOTICE]   (313311) : Loading success.
Dec 06 08:21:30 compute-1 sudo[313313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:30 compute-1 sudo[313313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:30 compute-1 sudo[313313]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:30 compute-1 sudo[313347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:21:30 compute-1 sudo[313347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:30 compute-1 sudo[313347]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:30 compute-1 sudo[313372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:30 compute-1 sudo[313372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:30 compute-1 sudo[313372]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:30 compute-1 sudo[313397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:21:30 compute-1 sudo[313397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:31 compute-1 sudo[313397]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:31 compute-1 ceph-mon[81689]: pgmap v3876: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 84 op/s
Dec 06 08:21:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:21:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:21:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:21:32 compute-1 nova_compute[226101]: 2025-12-06 08:21:32.065 226109 DEBUG nova.compute.manager [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Received event network-vif-plugged-1bb1602d-28f5-4171-88dc-24be47ec272f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:21:32 compute-1 nova_compute[226101]: 2025-12-06 08:21:32.066 226109 DEBUG oslo_concurrency.lockutils [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:32 compute-1 nova_compute[226101]: 2025-12-06 08:21:32.066 226109 DEBUG oslo_concurrency.lockutils [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:32 compute-1 nova_compute[226101]: 2025-12-06 08:21:32.066 226109 DEBUG oslo_concurrency.lockutils [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:21:32 compute-1 nova_compute[226101]: 2025-12-06 08:21:32.066 226109 DEBUG nova.compute.manager [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] No waiting events found dispatching network-vif-plugged-1bb1602d-28f5-4171-88dc-24be47ec272f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:21:32 compute-1 nova_compute[226101]: 2025-12-06 08:21:32.066 226109 WARNING nova.compute.manager [req-882fb4f1-05ae-462d-8f98-70ff0faa31c5 req-acee23e7-c5f6-4b03-900b-14b178556141 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Received unexpected event network-vif-plugged-1bb1602d-28f5-4171-88dc-24be47ec272f for instance with vm_state active and task_state None.
Dec 06 08:21:32 compute-1 nova_compute[226101]: 2025-12-06 08:21:32.163 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:32.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:32.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:21:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:21:33 compute-1 nova_compute[226101]: 2025-12-06 08:21:33.254 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:33 compute-1 sshd-session[313453]: Received disconnect from 136.112.8.45 port 59826:11: Bye Bye [preauth]
Dec 06 08:21:33 compute-1 sshd-session[313453]: Disconnected from authenticating user root 136.112.8.45 port 59826 [preauth]
Dec 06 08:21:33 compute-1 ceph-mon[81689]: pgmap v3877: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 491 KiB/s rd, 16 KiB/s wr, 109 op/s
Dec 06 08:21:33 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3905464386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:21:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:34.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:34.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:34 compute-1 ceph-mon[81689]: pgmap v3878: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 442 KiB/s rd, 16 KiB/s wr, 28 op/s
Dec 06 08:21:35 compute-1 sshd-session[312813]: Connection closed by 14.103.75.9 port 34896 [preauth]
Dec 06 08:21:35 compute-1 nova_compute[226101]: 2025-12-06 08:21:35.948 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:35 compute-1 NetworkManager[49031]: <info>  [1765009295.9632] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Dec 06 08:21:35 compute-1 NetworkManager[49031]: <info>  [1765009295.9643] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Dec 06 08:21:36 compute-1 nova_compute[226101]: 2025-12-06 08:21:36.083 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:36 compute-1 ovn_controller[130279]: 2025-12-06T08:21:36Z|00853|binding|INFO|Releasing lport e79b0cd9-d776-4f44-8b09-bc4e745f59bc from this chassis (sb_readonly=0)
Dec 06 08:21:36 compute-1 nova_compute[226101]: 2025-12-06 08:21:36.107 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:36.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:21:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:36 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:36.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:37 compute-1 ceph-mon[81689]: pgmap v3879: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 32 KiB/s wr, 88 op/s
Dec 06 08:21:37 compute-1 nova_compute[226101]: 2025-12-06 08:21:37.165 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:37 compute-1 sudo[313457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:37 compute-1 sudo[313457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:37 compute-1 sudo[313457]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:37 compute-1 sudo[313482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:21:37 compute-1 sudo[313482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:37 compute-1 sudo[313482]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:21:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:38 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:38.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:38.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:38 compute-1 nova_compute[226101]: 2025-12-06 08:21:38.256 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:38 compute-1 nova_compute[226101]: 2025-12-06 08:21:38.715 226109 DEBUG nova.compute.manager [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Received event network-changed-1bb1602d-28f5-4171-88dc-24be47ec272f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:21:38 compute-1 nova_compute[226101]: 2025-12-06 08:21:38.715 226109 DEBUG nova.compute.manager [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Refreshing instance network info cache due to event network-changed-1bb1602d-28f5-4171-88dc-24be47ec272f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:21:38 compute-1 nova_compute[226101]: 2025-12-06 08:21:38.715 226109 DEBUG oslo_concurrency.lockutils [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:21:38 compute-1 nova_compute[226101]: 2025-12-06 08:21:38.715 226109 DEBUG oslo_concurrency.lockutils [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:21:38 compute-1 nova_compute[226101]: 2025-12-06 08:21:38.716 226109 DEBUG nova.network.neutron [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Refreshing network info cache for port 1bb1602d-28f5-4171-88dc-24be47ec272f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:21:39 compute-1 ceph-mon[81689]: pgmap v3880: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 106 op/s
Dec 06 08:21:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:40.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:40.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:40 compute-1 nova_compute[226101]: 2025-12-06 08:21:40.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:41 compute-1 ceph-mon[81689]: pgmap v3881: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 103 op/s
Dec 06 08:21:42 compute-1 nova_compute[226101]: 2025-12-06 08:21:42.167 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:42.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:42.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:43 compute-1 sshd-session[313507]: Received disconnect from 91.144.158.231 port 37924:11: Bye Bye [preauth]
Dec 06 08:21:43 compute-1 sshd-session[313507]: Disconnected from authenticating user root 91.144.158.231 port 37924 [preauth]
Dec 06 08:21:43 compute-1 nova_compute[226101]: 2025-12-06 08:21:43.258 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:43 compute-1 ceph-mon[81689]: pgmap v3882: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 32 KiB/s wr, 153 op/s
Dec 06 08:21:44 compute-1 nova_compute[226101]: 2025-12-06 08:21:44.211 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:44 compute-1 nova_compute[226101]: 2025-12-06 08:21:44.212 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:44 compute-1 nova_compute[226101]: 2025-12-06 08:21:44.212 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:21:44 compute-1 nova_compute[226101]: 2025-12-06 08:21:44.212 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:21:44 compute-1 nova_compute[226101]: 2025-12-06 08:21:44.214 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:21:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:44.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:44.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:21:44 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/14057990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:44 compute-1 nova_compute[226101]: 2025-12-06 08:21:44.673 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:21:44 compute-1 ovn_controller[130279]: 2025-12-06T08:21:44Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:53:b8 10.100.0.6
Dec 06 08:21:44 compute-1 ovn_controller[130279]: 2025-12-06T08:21:44Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:53:b8 10.100.0.6
Dec 06 08:21:44 compute-1 nova_compute[226101]: 2025-12-06 08:21:44.872 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000d7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:21:44 compute-1 nova_compute[226101]: 2025-12-06 08:21:44.873 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000d7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.086 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.087 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4013MB free_disk=20.967090606689453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.087 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.088 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.263 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 173105e8-7c00-4b0a-86b1-b0acabe5cbeb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.263 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.264 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.295 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.316 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.316 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.339 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.360 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.430 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.469 226109 DEBUG nova.network.neutron [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Updated VIF entry in instance network info cache for port 1bb1602d-28f5-4171-88dc-24be47ec272f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.470 226109 DEBUG nova.network.neutron [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Updating instance_info_cache with network_info: [{"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:21:45 compute-1 sshd-session[313509]: Invalid user admin from 91.202.233.33 port 60714
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.516 226109 DEBUG oslo_concurrency.lockutils [req-b0a86b59-c754-4267-b8e5-a1fa7192d1f2 req-a97868e7-2e43-4522-9fee-c31d942d97a8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:21:45 compute-1 ceph-mon[81689]: pgmap v3883: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 19 KiB/s wr, 128 op/s
Dec 06 08:21:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/14057990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:21:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2852534258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.923 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.930 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:21:45 compute-1 sshd-session[313509]: Connection reset by invalid user admin 91.202.233.33 port 60714 [preauth]
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.965 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:21:45 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.999 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:21:46 compute-1 nova_compute[226101]: 2025-12-06 08:21:45.999 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:21:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:46.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:21:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:46 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:46.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2852534258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:47 compute-1 nova_compute[226101]: 2025-12-06 08:21:47.168 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:47 compute-1 ceph-mon[81689]: pgmap v3884: 305 pgs: 305 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 168 op/s
Dec 06 08:21:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:48 compute-1 sshd-session[313558]: Connection reset by authenticating user root 91.202.233.33 port 60726 [preauth]
Dec 06 08:21:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:21:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:48 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:48.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:48.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:48 compute-1 nova_compute[226101]: 2025-12-06 08:21:48.260 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:49 compute-1 ceph-mon[81689]: pgmap v3885: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Dec 06 08:21:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:50.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:21:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:50 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:50.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:50 compute-1 sshd-session[313560]: Connection reset by authenticating user root 91.202.233.33 port 60732 [preauth]
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.636950) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310636986, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 2389, "num_deletes": 253, "total_data_size": 5674155, "memory_usage": 5750800, "flush_reason": "Manual Compaction"}
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310672361, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 3696882, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83106, "largest_seqno": 85489, "table_properties": {"data_size": 3687334, "index_size": 5977, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19993, "raw_average_key_size": 20, "raw_value_size": 3668179, "raw_average_value_size": 3762, "num_data_blocks": 261, "num_entries": 975, "num_filter_entries": 975, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009105, "oldest_key_time": 1765009105, "file_creation_time": 1765009310, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 35633 microseconds, and 7529 cpu microseconds.
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.672577) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 3696882 bytes OK
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.672665) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.674560) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.674579) EVENT_LOG_v1 {"time_micros": 1765009310674572, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.674604) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 5663755, prev total WAL file size 5663755, number of live WAL files 2.
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.676876) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(3610KB)], [171(11MB)]
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310676985, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 15453012, "oldest_snapshot_seqno": -1}
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 11282 keys, 13444903 bytes, temperature: kUnknown
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310771290, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 13444903, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13373717, "index_size": 41889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 298654, "raw_average_key_size": 26, "raw_value_size": 13178103, "raw_average_value_size": 1168, "num_data_blocks": 1582, "num_entries": 11282, "num_filter_entries": 11282, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765009310, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.771750) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 13444903 bytes
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.773061) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.6 rd, 142.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.2 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 11807, records dropped: 525 output_compression: NoCompression
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.773080) EVENT_LOG_v1 {"time_micros": 1765009310773070, "job": 110, "event": "compaction_finished", "compaction_time_micros": 94433, "compaction_time_cpu_micros": 32891, "output_level": 6, "num_output_files": 1, "total_output_size": 13444903, "num_input_records": 11807, "num_output_records": 11282, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310774228, "job": 110, "event": "table_file_deletion", "file_number": 173}
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310776839, "job": 110, "event": "table_file_deletion", "file_number": 171}
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.676672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.776879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.776884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.776886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.776888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:50 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:21:50.776889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:51 compute-1 ceph-mon[81689]: pgmap v3886: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Dec 06 08:21:52 compute-1 podman[313567]: 2025-12-06 08:21:52.070256786 +0000 UTC m=+0.054184294 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:21:52 compute-1 podman[313566]: 2025-12-06 08:21:52.106466387 +0000 UTC m=+0.090415035 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 08:21:52 compute-1 podman[313568]: 2025-12-06 08:21:52.155562123 +0000 UTC m=+0.135363070 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:21:52 compute-1 nova_compute[226101]: 2025-12-06 08:21:52.170 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:21:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:52.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:52.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:52 compute-1 sshd-session[313563]: Received disconnect from 106.51.92.114 port 48894:11: Bye Bye [preauth]
Dec 06 08:21:52 compute-1 sshd-session[313563]: Disconnected from authenticating user root 106.51.92.114 port 48894 [preauth]
Dec 06 08:21:52 compute-1 ceph-mon[81689]: pgmap v3887: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Dec 06 08:21:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:52 compute-1 sshd-session[313562]: Connection reset by authenticating user root 91.202.233.33 port 60758 [preauth]
Dec 06 08:21:53 compute-1 nova_compute[226101]: 2025-12-06 08:21:53.345 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:21:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:54.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:54 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:54.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:55 compute-1 sshd-session[313633]: Connection reset by authenticating user root 91.202.233.33 port 22748 [preauth]
Dec 06 08:21:55 compute-1 ceph-mon[81689]: pgmap v3888: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 113 op/s
Dec 06 08:21:56 compute-1 nova_compute[226101]: 2025-12-06 08:21:56.000 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:56 compute-1 nova_compute[226101]: 2025-12-06 08:21:56.000 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:21:56 compute-1 nova_compute[226101]: 2025-12-06 08:21:56.001 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:21:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:56.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:56.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:56 compute-1 nova_compute[226101]: 2025-12-06 08:21:56.432 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:21:56 compute-1 nova_compute[226101]: 2025-12-06 08:21:56.432 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:21:56 compute-1 nova_compute[226101]: 2025-12-06 08:21:56.433 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:21:56 compute-1 nova_compute[226101]: 2025-12-06 08:21:56.433 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 173105e8-7c00-4b0a-86b1-b0acabe5cbeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:21:57 compute-1 nova_compute[226101]: 2025-12-06 08:21:57.172 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:57 compute-1 ceph-mon[81689]: pgmap v3889: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 113 op/s
Dec 06 08:21:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:21:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:21:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:58.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:58 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:58.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:58 compute-1 nova_compute[226101]: 2025-12-06 08:21:58.347 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:59 compute-1 ceph-mon[81689]: pgmap v3890: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 851 KiB/s wr, 75 op/s
Dec 06 08:21:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2892796958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:22:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:00.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:00 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:00.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:01 compute-1 nova_compute[226101]: 2025-12-06 08:22:01.523 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Updating instance_info_cache with network_info: [{"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:22:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:01.700 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:01.700 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:01.701 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:01 compute-1 ceph-mon[81689]: pgmap v3891: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 594 KiB/s wr, 53 op/s
Dec 06 08:22:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2736176045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:02 compute-1 nova_compute[226101]: 2025-12-06 08:22:02.173 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:22:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:02 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:02.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:02.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:02 compute-1 sshd-session[313635]: Received disconnect from 14.225.3.79 port 43958:11: Bye Bye [preauth]
Dec 06 08:22:02 compute-1 sshd-session[313635]: Disconnected from authenticating user root 14.225.3.79 port 43958 [preauth]
Dec 06 08:22:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:03 compute-1 nova_compute[226101]: 2025-12-06 08:22:03.191 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:22:03 compute-1 nova_compute[226101]: 2025-12-06 08:22:03.192 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:22:03 compute-1 nova_compute[226101]: 2025-12-06 08:22:03.192 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:03 compute-1 nova_compute[226101]: 2025-12-06 08:22:03.192 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:03 compute-1 nova_compute[226101]: 2025-12-06 08:22:03.193 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:03 compute-1 nova_compute[226101]: 2025-12-06 08:22:03.193 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:22:03 compute-1 nova_compute[226101]: 2025-12-06 08:22:03.349 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3420947166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:03 compute-1 ceph-mon[81689]: pgmap v3892: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 600 KiB/s wr, 56 op/s
Dec 06 08:22:04 compute-1 nova_compute[226101]: 2025-12-06 08:22:04.155 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:04.156 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=103, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=102) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:22:04 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:04.157 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:22:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:04.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:04.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:04 compute-1 nova_compute[226101]: 2025-12-06 08:22:04.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3353675565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:05 compute-1 ceph-mon[81689]: pgmap v3893: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 142 KiB/s rd, 89 KiB/s wr, 6 op/s
Dec 06 08:22:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3274571355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:06.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:06.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:07.159 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '103'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:07 compute-1 nova_compute[226101]: 2025-12-06 08:22:07.175 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:07 compute-1 nova_compute[226101]: 2025-12-06 08:22:07.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:07 compute-1 ceph-mon[81689]: pgmap v3894: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 90 KiB/s wr, 18 op/s
Dec 06 08:22:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:22:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:08 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:08 compute-1 nova_compute[226101]: 2025-12-06 08:22:08.350 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:22:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2209579158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:22:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:22:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2209579158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:22:09 compute-1 nova_compute[226101]: 2025-12-06 08:22:09.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:09 compute-1 sshd-session[313637]: Received disconnect from 124.18.141.70 port 36128:11: Bye Bye [preauth]
Dec 06 08:22:09 compute-1 sshd-session[313637]: Disconnected from authenticating user root 124.18.141.70 port 36128 [preauth]
Dec 06 08:22:10 compute-1 ceph-mon[81689]: pgmap v3895: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 218 KiB/s rd, 90 KiB/s wr, 18 op/s
Dec 06 08:22:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/192332593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2209579158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:22:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2209579158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:22:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:22:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:10.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:10 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:10.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:11 compute-1 ceph-mon[81689]: pgmap v3896: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 217 KiB/s rd, 6.6 KiB/s wr, 15 op/s
Dec 06 08:22:11 compute-1 sshd-session[313639]: Received disconnect from 186.96.151.198 port 60398:11: Bye Bye [preauth]
Dec 06 08:22:11 compute-1 sshd-session[313639]: Disconnected from authenticating user root 186.96.151.198 port 60398 [preauth]
Dec 06 08:22:12 compute-1 nova_compute[226101]: 2025-12-06 08:22:12.176 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:22:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:12.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:12 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:12.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:12 compute-1 nova_compute[226101]: 2025-12-06 08:22:12.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:12 compute-1 sshd-session[313641]: Received disconnect from 154.219.116.39 port 56690:11: Bye Bye [preauth]
Dec 06 08:22:12 compute-1 sshd-session[313641]: Disconnected from authenticating user root 154.219.116.39 port 56690 [preauth]
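Note: the recurring preauth disconnects from rotating source addresses are failed SSH brute-force attempts against root. A quick tally of offending sources from the journal; the unit name "sshd" is an assumption about this host's service naming:

    import re
    import subprocess
    from collections import Counter

    # Count source IPs behind the "[preauth]" disconnect/reset lines.
    out = subprocess.check_output(
        ["journalctl", "-u", "sshd", "--no-pager", "-o", "cat"], text=True)
    ips = re.findall(r"user \S+ (\d{1,3}(?:\.\d{1,3}){3}) port", out)
    print(Counter(ips).most_common(5))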
Dec 06 08:22:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
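Note: _set_new_cache_sizes is the monitor's cache autotuner redistributing memory between the inc, full and kv caches; the budget appears to follow the mon_memory_target option. A read-only query of that target, assuming admin credentials on this host:

    import subprocess

    # Read the mon memory target the autotuner sizes caches against.
    print(subprocess.check_output(
        ["ceph", "config", "get", "mon.compute-1", "mon_memory_target"],
        text=True).strip())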
Dec 06 08:22:13 compute-1 ceph-mon[81689]: pgmap v3897: 305 pgs: 305 active+clean; 346 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 238 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Dec 06 08:22:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/186175830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:22:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/186175830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:22:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2265643239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:22:13 compute-1 nova_compute[226101]: 2025-12-06 08:22:13.353 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:14.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:22:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:14 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:14.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e427 e427: 3 total, 3 up, 3 in
Dec 06 08:22:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3254266222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:22:15 compute-1 ceph-mon[81689]: pgmap v3898: 305 pgs: 305 active+clean; 346 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 163 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 06 08:22:15 compute-1 ceph-mon[81689]: osdmap e427: 3 total, 3 up, 3 in
Dec 06 08:22:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:16.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:16.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:17 compute-1 sshd-session[313646]: Received disconnect from 45.120.216.232 port 55914:11: Bye Bye [preauth]
Dec 06 08:22:17 compute-1 sshd-session[313646]: Disconnected from authenticating user root 45.120.216.232 port 55914 [preauth]
Dec 06 08:22:17 compute-1 nova_compute[226101]: 2025-12-06 08:22:17.177 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:17 compute-1 sshd-session[313644]: Connection reset by authenticating user root 45.140.17.124 port 52154 [preauth]
Dec 06 08:22:17 compute-1 ceph-mon[81689]: pgmap v3900: 305 pgs: 305 active+clean; 340 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Dec 06 08:22:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:18.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:18.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:18 compute-1 nova_compute[226101]: 2025-12-06 08:22:18.354 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:18 compute-1 ceph-mon[81689]: pgmap v3901: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec 06 08:22:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 e428: 3 total, 3 up, 3 in
Dec 06 08:22:19 compute-1 sshd-session[313650]: Received disconnect from 186.87.166.141 port 37662:11: Bye Bye [preauth]
Dec 06 08:22:19 compute-1 sshd-session[313650]: Disconnected from authenticating user root 186.87.166.141 port 37662 [preauth]
Dec 06 08:22:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:20.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:20.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:20 compute-1 sshd-session[313648]: Connection reset by authenticating user root 45.140.17.124 port 52168 [preauth]
Dec 06 08:22:20 compute-1 ceph-mon[81689]: osdmap e428: 3 total, 3 up, 3 in
Dec 06 08:22:20 compute-1 ceph-mon[81689]: pgmap v3903: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 24 KiB/s wr, 51 op/s
Dec 06 08:22:22 compute-1 nova_compute[226101]: 2025-12-06 08:22:22.179 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:22.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:22.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:23 compute-1 ceph-mon[81689]: pgmap v3904: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 175 op/s
Dec 06 08:22:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4093370634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:23 compute-1 podman[313654]: 2025-12-06 08:22:23.07247031 +0000 UTC m=+0.056596788 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 08:22:23 compute-1 podman[313655]: 2025-12-06 08:22:23.08176837 +0000 UTC m=+0.057308128 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 08:22:23 compute-1 podman[313656]: 2025-12-06 08:22:23.116651255 +0000 UTC m=+0.088871374 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
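Note: the three health_status=healthy events above are podman's periodic container healthchecks (the healthcheck test path is visible inside config_data). The same check can be re-run and inspected by hand; the State key name varies across podman versions, hence the fallback below:

    import json
    import subprocess

    # Manually re-run the healthcheck for one container named in the log,
    # then read back its health state.
    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=False)
    state = json.loads(subprocess.check_output(
        ["podman", "inspect", "ovn_controller"]))[0]["State"]
    # Key is "Health" on newer podman, "Healthcheck" on older releases.
    print((state.get("Health") or state.get("Healthcheck", {})).get("Status"))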
Dec 06 08:22:23 compute-1 nova_compute[226101]: 2025-12-06 08:22:23.357 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:23 compute-1 sshd-session[313652]: Connection reset by authenticating user root 45.140.17.124 port 52174 [preauth]
Dec 06 08:22:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:24.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:24.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:25 compute-1 ceph-mon[81689]: pgmap v3905: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 145 op/s
Dec 06 08:22:26 compute-1 sshd-session[313717]: Connection reset by authenticating user root 45.140.17.124 port 26504 [preauth]
Dec 06 08:22:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:26.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:26.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:27 compute-1 ceph-mon[81689]: pgmap v3906: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 KiB/s wr, 103 op/s
Dec 06 08:22:27 compute-1 nova_compute[226101]: 2025-12-06 08:22:27.181 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:27 compute-1 sshd-session[313719]: Received disconnect from 101.100.194.199 port 56230:11: Bye Bye [preauth]
Dec 06 08:22:27 compute-1 sshd-session[313719]: Disconnected from authenticating user root 101.100.194.199 port 56230 [preauth]
Dec 06 08:22:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:22:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433424503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:22:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:22:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433424503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:22:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:28.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:28.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:28 compute-1 sshd-session[313721]: Invalid user ftpuser from 45.140.17.124 port 26524
Dec 06 08:22:28 compute-1 nova_compute[226101]: 2025-12-06 08:22:28.359 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/433424503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:22:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/433424503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:22:28 compute-1 sshd-session[313721]: Connection reset by invalid user ftpuser 45.140.17.124 port 26524 [preauth]
Dec 06 08:22:30 compute-1 ceph-mon[81689]: pgmap v3907: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 KiB/s wr, 100 op/s
Dec 06 08:22:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:30.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:30.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:31 compute-1 ceph-mon[81689]: pgmap v3908: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 KiB/s wr, 98 op/s
Dec 06 08:22:32 compute-1 nova_compute[226101]: 2025-12-06 08:22:32.182 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:32.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:32.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:33 compute-1 ceph-mon[81689]: pgmap v3909: 305 pgs: 305 active+clean; 268 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 136 op/s
Dec 06 08:22:33 compute-1 ovn_controller[130279]: 2025-12-06T08:22:33Z|00854|binding|INFO|Releasing lport e79b0cd9-d776-4f44-8b09-bc4e745f59bc from this chassis (sb_readonly=0)
Dec 06 08:22:33 compute-1 nova_compute[226101]: 2025-12-06 08:22:33.376 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:34.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:34.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:35 compute-1 ceph-mon[81689]: pgmap v3910: 305 pgs: 305 active+clean; 268 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 177 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Dec 06 08:22:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:36.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:36.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:37 compute-1 nova_compute[226101]: 2025-12-06 08:22:37.184 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:37 compute-1 ceph-mon[81689]: pgmap v3911: 305 pgs: 305 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 08:22:37 compute-1 sudo[313723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:37 compute-1 sudo[313723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:37 compute-1 sudo[313723]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:37 compute-1 sudo[313748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:22:37 compute-1 sudo[313748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:37 compute-1 sudo[313748]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:37 compute-1 sudo[313773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:37 compute-1 sudo[313773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:37 compute-1 sudo[313773]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:37 compute-1 sudo[313798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:22:37 compute-1 sudo[313798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:38 compute-1 sudo[313798]: pam_unix(sudo:session): session closed for user root
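Note: this sudo sequence is cephadm's ssh orchestration at work: probe sudo with /bin/true, locate python3, then execute the copied hash-named cephadm binary with gather-facts under a timeout. Running the packaged cephadm directly yields the same JSON fact set; the exact keys can vary by release, so treat the two below as assumptions:

    import json
    import subprocess

    # Same facts the mgr collects through the sudo sequence above.
    facts = json.loads(subprocess.check_output(["cephadm", "gather-facts"]))
    print(facts.get("hostname"), facts.get("memory_total_kb"))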
Dec 06 08:22:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:38.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:38.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:38 compute-1 nova_compute[226101]: 2025-12-06 08:22:38.387 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:22:38 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:22:38 compute-1 nova_compute[226101]: 2025-12-06 08:22:38.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:39 compute-1 ceph-mon[81689]: pgmap v3912: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 08:22:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:22:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:22:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:22:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
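Note: among the mgr-issued mon commands above, "config generate-minimal-conf" emits a stripped ceph.conf (fsid plus mon host list) intended for distribution to client hosts. It can be issued manually with admin credentials:

    import subprocess

    # Produce the same minimal client configuration the mgr requested.
    print(subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True))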
Dec 06 08:22:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:40.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:40.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:40.718 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=104, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=103) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:22:40 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:40.719 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:22:40 compute-1 nova_compute[226101]: 2025-12-06 08:22:40.736 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:40 compute-1 ovn_controller[130279]: 2025-12-06T08:22:40Z|00855|binding|INFO|Releasing lport e79b0cd9-d776-4f44-8b09-bc4e745f59bc from this chassis (sb_readonly=0)
Dec 06 08:22:40 compute-1 nova_compute[226101]: 2025-12-06 08:22:40.833 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:41 compute-1 ceph-mon[81689]: pgmap v3913: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 08:22:41 compute-1 nova_compute[226101]: 2025-12-06 08:22:41.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:41 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:41.721 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '104'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
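Note: this is the OVN metadata agent's liveness handshake: northd bumped SB_Global.nb_cfg to 104, and after the one-second delay logged above the agent acknowledged by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row via DbSetCommand. A read-only check of that counter, with the record UUID taken from the log line (assumes ovn-sbctl can reach the southbound DB from this shell):

    import subprocess

    # Read back the counter the agent just wrote.
    out = subprocess.check_output(
        ["ovn-sbctl", "get", "Chassis_Private",
         "03fe054d-d727-4af3-9c5e-92e57505f242", "external_ids"], text=True)
    print(out)  # expect {... neutron:ovn-metadata-sb-cfg="104" ...}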
Dec 06 08:22:41 compute-1 nova_compute[226101]: 2025-12-06 08:22:41.759 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:41 compute-1 nova_compute[226101]: 2025-12-06 08:22:41.759 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:41 compute-1 nova_compute[226101]: 2025-12-06 08:22:41.760 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:41 compute-1 nova_compute[226101]: 2025-12-06 08:22:41.760 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:22:41 compute-1 nova_compute[226101]: 2025-12-06 08:22:41.761 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:22:42 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/779887381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.184 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.185 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.258 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000d7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.258 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000d7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:22:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:42.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.443 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.444 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4014MB free_disk=20.89727020263672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.444 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.444 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.591 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 173105e8-7c00-4b0a-86b1-b0acabe5cbeb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.592 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.592 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:22:42 compute-1 nova_compute[226101]: 2025-12-06 08:22:42.629 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:22:43 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3962467000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:43 compute-1 nova_compute[226101]: 2025-12-06 08:22:43.033 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:43 compute-1 nova_compute[226101]: 2025-12-06 08:22:43.040 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:22:43 compute-1 nova_compute[226101]: 2025-12-06 08:22:43.056 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:22:43 compute-1 nova_compute[226101]: 2025-12-06 08:22:43.058 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:22:43 compute-1 nova_compute[226101]: 2025-12-06 08:22:43.058 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
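Note: the resource audit closes with unchanged placement inventory, and the figures are internally consistent: used_ram 640 MB is the 512 MB host reservation plus the single instance's 128 MB allocation shown earlier. Placement schedules against capacity = (total - reserved) * allocation_ratio, so with the inventory logged above:

    # Effective schedulable capacity per resource class, derived from
    # the inventory data in the log.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(cap, 2))
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1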
Dec 06 08:22:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/779887381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:43 compute-1 nova_compute[226101]: 2025-12-06 08:22:43.389 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:44 compute-1 ceph-mon[81689]: pgmap v3914: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 372 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Dec 06 08:22:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3962467000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:44.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:44 compute-1 sshd-session[313853]: Received disconnect from 165.154.55.146 port 46708:11: Bye Bye [preauth]
Dec 06 08:22:44 compute-1 sshd-session[313853]: Disconnected from authenticating user root 165.154.55.146 port 46708 [preauth]
Dec 06 08:22:44 compute-1 sudo[313899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:44 compute-1 sudo[313899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:44 compute-1 sudo[313899]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:44 compute-1 sudo[313924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:22:44 compute-1 sudo[313924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:44 compute-1 sudo[313924]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:45 compute-1 ceph-mon[81689]: pgmap v3915: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 110 KiB/s wr, 34 op/s
Dec 06 08:22:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:22:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:22:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:46.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3679459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:47 compute-1 nova_compute[226101]: 2025-12-06 08:22:47.187 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:47 compute-1 ceph-mon[81689]: pgmap v3916: 305 pgs: 305 active+clean; 225 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 215 KiB/s rd, 111 KiB/s wr, 56 op/s
Dec 06 08:22:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:48.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:48.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:48 compute-1 nova_compute[226101]: 2025-12-06 08:22:48.391 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:48 compute-1 ceph-mon[81689]: pgmap v3917: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 22 KiB/s wr, 30 op/s
Dec 06 08:22:49 compute-1 nova_compute[226101]: 2025-12-06 08:22:49.543 226109 DEBUG nova.compute.manager [req-04335e81-7fa0-471e-a529-cfda414baff9 req-5445372c-6b36-449e-bd29-e4b10b69fd1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Received event network-changed-1bb1602d-28f5-4171-88dc-24be47ec272f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:49 compute-1 nova_compute[226101]: 2025-12-06 08:22:49.543 226109 DEBUG nova.compute.manager [req-04335e81-7fa0-471e-a529-cfda414baff9 req-5445372c-6b36-449e-bd29-e4b10b69fd1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Refreshing instance network info cache due to event network-changed-1bb1602d-28f5-4171-88dc-24be47ec272f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:22:49 compute-1 nova_compute[226101]: 2025-12-06 08:22:49.544 226109 DEBUG oslo_concurrency.lockutils [req-04335e81-7fa0-471e-a529-cfda414baff9 req-5445372c-6b36-449e-bd29-e4b10b69fd1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:22:49 compute-1 nova_compute[226101]: 2025-12-06 08:22:49.544 226109 DEBUG oslo_concurrency.lockutils [req-04335e81-7fa0-471e-a529-cfda414baff9 req-5445372c-6b36-449e-bd29-e4b10b69fd1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:22:49 compute-1 nova_compute[226101]: 2025-12-06 08:22:49.544 226109 DEBUG nova.network.neutron [req-04335e81-7fa0-471e-a529-cfda414baff9 req-5445372c-6b36-449e-bd29-e4b10b69fd1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Refreshing network info cache for port 1bb1602d-28f5-4171-88dc-24be47ec272f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:22:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:50.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:50.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:50 compute-1 nova_compute[226101]: 2025-12-06 08:22:50.807 226109 DEBUG oslo_concurrency.lockutils [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:50 compute-1 nova_compute[226101]: 2025-12-06 08:22:50.808 226109 DEBUG oslo_concurrency.lockutils [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:50 compute-1 nova_compute[226101]: 2025-12-06 08:22:50.808 226109 DEBUG oslo_concurrency.lockutils [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:50 compute-1 nova_compute[226101]: 2025-12-06 08:22:50.809 226109 DEBUG oslo_concurrency.lockutils [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:50 compute-1 nova_compute[226101]: 2025-12-06 08:22:50.810 226109 DEBUG oslo_concurrency.lockutils [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
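Note: the Acquiring/acquired/released triples above, including the waited/held timings, are emitted by oslo.concurrency's lockutils, which nova uses to serialize operations per instance UUID. A minimal sketch of the same decorator pattern; the function body is hypothetical:

    from oslo_concurrency import lockutils

    # Serialize work on one instance, keyed by its UUID from the log.
    @lockutils.synchronized('173105e8-7c00-4b0a-86b1-b0acabe5cbeb')
    def do_terminate_instance():
        pass  # critical section: one terminate per instance at a time

    do_terminate_instance()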
Dec 06 08:22:50 compute-1 nova_compute[226101]: 2025-12-06 08:22:50.812 226109 INFO nova.compute.manager [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Terminating instance
Dec 06 08:22:50 compute-1 nova_compute[226101]: 2025-12-06 08:22:50.814 226109 DEBUG nova.compute.manager [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:22:50 compute-1 kernel: tap1bb1602d-28 (unregistering): left promiscuous mode
Dec 06 08:22:50 compute-1 NetworkManager[49031]: <info>  [1765009370.8751] device (tap1bb1602d-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:22:50 compute-1 ovn_controller[130279]: 2025-12-06T08:22:50Z|00856|binding|INFO|Releasing lport 1bb1602d-28f5-4171-88dc-24be47ec272f from this chassis (sb_readonly=0)
Dec 06 08:22:50 compute-1 ovn_controller[130279]: 2025-12-06T08:22:50Z|00857|binding|INFO|Setting lport 1bb1602d-28f5-4171-88dc-24be47ec272f down in Southbound
Dec 06 08:22:50 compute-1 ovn_controller[130279]: 2025-12-06T08:22:50Z|00858|binding|INFO|Removing iface tap1bb1602d-28 ovn-installed in OVS
Dec 06 08:22:50 compute-1 nova_compute[226101]: 2025-12-06 08:22:50.889 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:50.895 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:53:b8 10.100.0.6'], port_security=['fa:16:3e:c2:53:b8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '173105e8-7c00-4b0a-86b1-b0acabe5cbeb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '353e35c5-d221-4dce-9275-d54187ac5a75 c1d372ec-4c65-4af5-9c55-76198d157b1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d7af5e0-a504-41e1-b9f6-95b3d5bf94dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=1bb1602d-28f5-4171-88dc-24be47ec272f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:22:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:50.896 139580 INFO neutron.agent.ovn.metadata.agent [-] Port 1bb1602d-28f5-4171-88dc-24be47ec272f in datapath 8e3ac9aa-9766-45cd-a8b5-88cc15193af6 unbound from our chassis
Dec 06 08:22:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:50.897 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e3ac9aa-9766-45cd-a8b5-88cc15193af6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:22:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:50.899 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3a576d-0c67-465d-9827-df382d7809aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:50 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:50.899 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6 namespace which is not needed anymore
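
Once the last VIF on the network is unbound, the metadata agent tears down the per-network ovnmeta- namespace, as logged above. A hedged sketch of the equivalent cleanup using pyroute2 directly (the library neutron's privileged ip_lib wraps); the namespace name is taken from the log, and this must run as root:

    from pyroute2 import netns

    NS = 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6'

    # remove_netns in neutron's ip_lib boils down to a call like this;
    # guarding on listnetns() keeps the sketch idempotent.
    if NS in netns.listnetns():
        netns.remove(NS)
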
Dec 06 08:22:50 compute-1 nova_compute[226101]: 2025-12-06 08:22:50.912 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:50 compute-1 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d7.scope: Deactivated successfully.
Dec 06 08:22:50 compute-1 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d7.scope: Consumed 16.584s CPU time.
Dec 06 08:22:50 compute-1 systemd-machined[190302]: Machine qemu-97-instance-000000d7 terminated.
Dec 06 08:22:51 compute-1 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[313307]: [NOTICE]   (313311) : haproxy version is 2.8.14-c23fe91
Dec 06 08:22:51 compute-1 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[313307]: [NOTICE]   (313311) : path to executable is /usr/sbin/haproxy
Dec 06 08:22:51 compute-1 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[313307]: [WARNING]  (313311) : Exiting Master process...
Dec 06 08:22:51 compute-1 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[313307]: [ALERT]    (313311) : Current worker (313316) exited with code 143 (Terminated)
Dec 06 08:22:51 compute-1 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[313307]: [WARNING]  (313311) : All workers exited. Exiting... (0)
Dec 06 08:22:51 compute-1 NetworkManager[49031]: <info>  [1765009371.0360] manager: (tap1bb1602d-28): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Dec 06 08:22:51 compute-1 systemd[1]: libpod-5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195.scope: Deactivated successfully.
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.037 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:51 compute-1 conmon[313307]: conmon 5d92a7a1a175e6678682 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195.scope/container/memory.events
Dec 06 08:22:51 compute-1 podman[313974]: 2025-12-06 08:22:51.044856613 +0000 UTC m=+0.050872316 container died 5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.050 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.052 226109 INFO nova.virt.libvirt.driver [-] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Instance destroyed successfully.
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.053 226109 DEBUG nova.objects.instance [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'resources' on Instance uuid 173105e8-7c00-4b0a-86b1-b0acabe5cbeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.070 226109 DEBUG nova.virt.libvirt.vif [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:21:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-200343605',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-200343605',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=215,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbFho9uZYw/C67SrAZAOll1Yn6PYb0ai5sIBHmAL6dW73Ch+qRjce9a6w2oaJggOsc3UVjACoOu/DVsfm+enspt+H1pyOFCemHZp5ou5LgCYmgT/pXXIRPYRaBdn6h/nw==',key_name='tempest-TestSecurityGroupsBasicOps-443327318',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:21:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-x1jocr9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:21:30Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=173105e8-7c00-4b0a-86b1-b0acabe5cbeb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.070 226109 DEBUG nova.network.os_vif_util [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.071 226109 DEBUG nova.network.os_vif_util [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:53:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bb1602d-28f5-4171-88dc-24be47ec272f,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb1602d-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.071 226109 DEBUG os_vif [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:53:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bb1602d-28f5-4171-88dc-24be47ec272f,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb1602d-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.074 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.075 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bb1602d-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.076 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.078 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.079 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:51 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195-userdata-shm.mount: Deactivated successfully.
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.083 226109 INFO os_vif [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:53:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bb1602d-28f5-4171-88dc-24be47ec272f,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb1602d-28')
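
The unplug above is a single ovsdbapp transaction (the DelPortCommand logged a few lines earlier) against the local OVS database. A minimal sketch of the same operation with the public ovsdbapp API; the socket path is an assumption for a stock Open vSwitch install, and the port/bridge arguments mirror the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Delete the tap port from br-int, tolerating a port that is
    # already gone (if_exists=True), exactly as the logged command does.
    ovs.del_port('tap1bb1602d-28', bridge='br-int',
                 if_exists=True).execute(check_error=True)
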
Dec 06 08:22:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-3413ee81d5605eb0502e1c815cbc4489a92dd3e6561d4921d1195e47ef9eb791-merged.mount: Deactivated successfully.
Dec 06 08:22:51 compute-1 podman[313974]: 2025-12-06 08:22:51.096276161 +0000 UTC m=+0.102291854 container cleanup 5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:22:51 compute-1 systemd[1]: libpod-conmon-5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195.scope: Deactivated successfully.
Dec 06 08:22:51 compute-1 podman[314028]: 2025-12-06 08:22:51.164577832 +0000 UTC m=+0.037002143 container remove 5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 08:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:51.171 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d4959811-b93d-47cd-95d0-860112bc8694]: (4, ('Sat Dec  6 08:22:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6 (5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195)\n5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195\nSat Dec  6 08:22:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6 (5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195)\n5d92a7a1a175e6678682882bd13fc25ef813c3e6037bf8241f22e829c80b3195\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
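
The privsep reply above echoes the haproxy wrapper's kill script stopping and then deleting the per-network container. A hedged sketch of the equivalent podman calls (the container name is from the log; the real script also prints the timestamps and container IDs seen in the reply):

    import subprocess

    NAME = 'neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6'

    # Stop, then remove; podman prints the container ID after each
    # step, matching the two IDs echoed in the privsep reply.
    subprocess.run(['podman', 'stop', NAME], check=True)
    subprocess.run(['podman', 'rm', NAME], check=True)
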
Dec 06 08:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:51.172 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ed41a3-52f4-4b6a-a410-6e2402d2ff88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:51.173 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e3ac9aa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.174 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:51 compute-1 kernel: tap8e3ac9aa-90: left promiscuous mode
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.186 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.188 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:51.190 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[80e37bcc-edf9-448e-9aaa-b4d2cff59f76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:51.212 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[55857365-59bc-4cac-81af-40ceda1d384f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:51.214 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ed67d311-7f2d-45a7-bf63-f93bb6dc8de3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:51.230 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[74a0e8ce-71e4-4747-ac16-0e1761d19405]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956907, 'reachable_time': 19852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314046, 'error': None, 'target': 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:51 compute-1 systemd[1]: run-netns-ovnmeta\x2d8e3ac9aa\x2d9766\x2d45cd\x2da8b5\x2d88cc15193af6.mount: Deactivated successfully.
Dec 06 08:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:51.234 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:22:51 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:22:51.234 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1f1cab-1a4b-4575-829a-078dacfdddd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:51 compute-1 ceph-mon[81689]: pgmap v3918: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.445 226109 INFO nova.virt.libvirt.driver [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Deleting instance files /var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb_del
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.446 226109 INFO nova.virt.libvirt.driver [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Deletion of /var/lib/nova/instances/173105e8-7c00-4b0a-86b1-b0acabe5cbeb_del complete
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.497 226109 INFO nova.compute.manager [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Took 0.68 seconds to destroy the instance on the hypervisor.
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.498 226109 DEBUG oslo.service.loopingcall [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.498 226109 DEBUG nova.compute.manager [-] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.498 226109 DEBUG nova.network.neutron [-] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.703 226109 DEBUG nova.compute.manager [req-59490002-545c-4cd3-ba5b-1b178c7f2880 req-3dd82caf-2098-4139-8438-029a1138fbc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Received event network-vif-unplugged-1bb1602d-28f5-4171-88dc-24be47ec272f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.704 226109 DEBUG oslo_concurrency.lockutils [req-59490002-545c-4cd3-ba5b-1b178c7f2880 req-3dd82caf-2098-4139-8438-029a1138fbc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.704 226109 DEBUG oslo_concurrency.lockutils [req-59490002-545c-4cd3-ba5b-1b178c7f2880 req-3dd82caf-2098-4139-8438-029a1138fbc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.704 226109 DEBUG oslo_concurrency.lockutils [req-59490002-545c-4cd3-ba5b-1b178c7f2880 req-3dd82caf-2098-4139-8438-029a1138fbc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.705 226109 DEBUG nova.compute.manager [req-59490002-545c-4cd3-ba5b-1b178c7f2880 req-3dd82caf-2098-4139-8438-029a1138fbc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] No waiting events found dispatching network-vif-unplugged-1bb1602d-28f5-4171-88dc-24be47ec272f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:22:51 compute-1 nova_compute[226101]: 2025-12-06 08:22:51.705 226109 DEBUG nova.compute.manager [req-59490002-545c-4cd3-ba5b-1b178c7f2880 req-3dd82caf-2098-4139-8438-029a1138fbc4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Received event network-vif-unplugged-1bb1602d-28f5-4171-88dc-24be47ec272f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:22:52 compute-1 nova_compute[226101]: 2025-12-06 08:22:52.058 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:52 compute-1 nova_compute[226101]: 2025-12-06 08:22:52.058 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:22:52 compute-1 nova_compute[226101]: 2025-12-06 08:22:52.058 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:22:52 compute-1 nova_compute[226101]: 2025-12-06 08:22:52.076 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 06 08:22:52 compute-1 nova_compute[226101]: 2025-12-06 08:22:52.076 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:22:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:52.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:52.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:53 compute-1 nova_compute[226101]: 2025-12-06 08:22:53.007 226109 DEBUG nova.network.neutron [req-04335e81-7fa0-471e-a529-cfda414baff9 req-5445372c-6b36-449e-bd29-e4b10b69fd1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Updated VIF entry in instance network info cache for port 1bb1602d-28f5-4171-88dc-24be47ec272f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:22:53 compute-1 nova_compute[226101]: 2025-12-06 08:22:53.007 226109 DEBUG nova.network.neutron [req-04335e81-7fa0-471e-a529-cfda414baff9 req-5445372c-6b36-449e-bd29-e4b10b69fd1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Updating instance_info_cache with network_info: [{"id": "1bb1602d-28f5-4171-88dc-24be47ec272f", "address": "fa:16:3e:c2:53:b8", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb1602d-28", "ovs_interfaceid": "1bb1602d-28f5-4171-88dc-24be47ec272f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:22:53 compute-1 nova_compute[226101]: 2025-12-06 08:22:53.028 226109 DEBUG oslo_concurrency.lockutils [req-04335e81-7fa0-471e-a529-cfda414baff9 req-5445372c-6b36-449e-bd29-e4b10b69fd1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-173105e8-7c00-4b0a-86b1-b0acabe5cbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:22:53 compute-1 nova_compute[226101]: 2025-12-06 08:22:53.224 226109 DEBUG nova.network.neutron [-] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:22:53 compute-1 nova_compute[226101]: 2025-12-06 08:22:53.240 226109 INFO nova.compute.manager [-] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Took 1.74 seconds to deallocate network for instance.
Dec 06 08:22:53 compute-1 nova_compute[226101]: 2025-12-06 08:22:53.392 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:53 compute-1 nova_compute[226101]: 2025-12-06 08:22:53.479 226109 DEBUG oslo_concurrency.lockutils [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:53 compute-1 nova_compute[226101]: 2025-12-06 08:22:53.479 226109 DEBUG oslo_concurrency.lockutils [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:53 compute-1 ceph-mon[81689]: pgmap v3919: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 29 op/s
Dec 06 08:22:53 compute-1 nova_compute[226101]: 2025-12-06 08:22:53.531 226109 DEBUG oslo_concurrency.processutils [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:22:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/877748460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:53 compute-1 nova_compute[226101]: 2025-12-06 08:22:53.950 226109 DEBUG oslo_concurrency.processutils [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
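
To size its RBD-backed disk inventory, the resource tracker shells out to ceph as logged above (a 0.419s round trip). A sketch of the same probe; the command line is verbatim from the log, while the JSON keys read here are an assumption about `ceph df --format=json` output:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)
    # e.g. total vs. available bytes across the cluster
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
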
Dec 06 08:22:53 compute-1 nova_compute[226101]: 2025-12-06 08:22:53.957 226109 DEBUG nova.compute.provider_tree [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:22:54 compute-1 podman[314071]: 2025-12-06 08:22:54.069339332 +0000 UTC m=+0.055733095 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:22:54 compute-1 podman[314070]: 2025-12-06 08:22:54.073282537 +0000 UTC m=+0.062536487 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec 06 08:22:54 compute-1 podman[314072]: 2025-12-06 08:22:54.154784992 +0000 UTC m=+0.140442876 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 08:22:54 compute-1 nova_compute[226101]: 2025-12-06 08:22:54.322 226109 DEBUG nova.compute.manager [req-a9732ed9-850e-4b20-8d90-bf810e347ccb req-cf93cb20-8fa2-4f45-81bc-8f14f588cc52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Received event network-vif-plugged-1bb1602d-28f5-4171-88dc-24be47ec272f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:54 compute-1 nova_compute[226101]: 2025-12-06 08:22:54.323 226109 DEBUG oslo_concurrency.lockutils [req-a9732ed9-850e-4b20-8d90-bf810e347ccb req-cf93cb20-8fa2-4f45-81bc-8f14f588cc52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:54 compute-1 nova_compute[226101]: 2025-12-06 08:22:54.324 226109 DEBUG oslo_concurrency.lockutils [req-a9732ed9-850e-4b20-8d90-bf810e347ccb req-cf93cb20-8fa2-4f45-81bc-8f14f588cc52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:54 compute-1 nova_compute[226101]: 2025-12-06 08:22:54.325 226109 DEBUG oslo_concurrency.lockutils [req-a9732ed9-850e-4b20-8d90-bf810e347ccb req-cf93cb20-8fa2-4f45-81bc-8f14f588cc52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:54 compute-1 nova_compute[226101]: 2025-12-06 08:22:54.325 226109 DEBUG nova.compute.manager [req-a9732ed9-850e-4b20-8d90-bf810e347ccb req-cf93cb20-8fa2-4f45-81bc-8f14f588cc52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] No waiting events found dispatching network-vif-plugged-1bb1602d-28f5-4171-88dc-24be47ec272f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:22:54 compute-1 nova_compute[226101]: 2025-12-06 08:22:54.325 226109 WARNING nova.compute.manager [req-a9732ed9-850e-4b20-8d90-bf810e347ccb req-cf93cb20-8fa2-4f45-81bc-8f14f588cc52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Received unexpected event network-vif-plugged-1bb1602d-28f5-4171-88dc-24be47ec272f for instance with vm_state deleted and task_state None.
Dec 06 08:22:54 compute-1 nova_compute[226101]: 2025-12-06 08:22:54.326 226109 DEBUG nova.compute.manager [req-a9732ed9-850e-4b20-8d90-bf810e347ccb req-cf93cb20-8fa2-4f45-81bc-8f14f588cc52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Received event network-vif-deleted-1bb1602d-28f5-4171-88dc-24be47ec272f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:54 compute-1 nova_compute[226101]: 2025-12-06 08:22:54.330 226109 DEBUG nova.scheduler.client.report [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
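
Worked example of what that inventory means for scheduling: placement derives usable capacity per resource class as (total - reserved) * allocation_ratio, so the figures above allow 32 VCPUs, 7168 MB of RAM, and 17.1 GB of disk to be allocated on this node:

    # Inventory copied from the log line above (only the fields the
    # capacity formula needs).
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
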
Dec 06 08:22:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:54.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:54.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:54 compute-1 nova_compute[226101]: 2025-12-06 08:22:54.422 226109 DEBUG oslo_concurrency.lockutils [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/877748460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:54 compute-1 nova_compute[226101]: 2025-12-06 08:22:54.554 226109 INFO nova.scheduler.client.report [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Deleted allocations for instance 173105e8-7c00-4b0a-86b1-b0acabe5cbeb
Dec 06 08:22:54 compute-1 nova_compute[226101]: 2025-12-06 08:22:54.819 226109 DEBUG oslo_concurrency.lockutils [None req-5c7f42e0-f516-4d2d-adba-f3c8bf27e6a0 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "173105e8-7c00-4b0a-86b1-b0acabe5cbeb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:55 compute-1 ceph-mon[81689]: pgmap v3920: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 2.2 KiB/s wr, 23 op/s
Dec 06 08:22:56 compute-1 nova_compute[226101]: 2025-12-06 08:22:56.078 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:56.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:56.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:56 compute-1 ceph-mon[81689]: pgmap v3921: 305 pgs: 305 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 2.7 KiB/s wr, 36 op/s
Dec 06 08:22:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:58.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:22:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:58.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:58 compute-1 nova_compute[226101]: 2025-12-06 08:22:58.410 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:58 compute-1 nova_compute[226101]: 2025-12-06 08:22:58.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:59 compute-1 ceph-mon[81689]: pgmap v3922: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 06 08:22:59 compute-1 nova_compute[226101]: 2025-12-06 08:22:59.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3546864504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3666536907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:00.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:00.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:01 compute-1 nova_compute[226101]: 2025-12-06 08:23:01.081 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:01 compute-1 ceph-mon[81689]: pgmap v3923: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:01 compute-1 nova_compute[226101]: 2025-12-06 08:23:01.637 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:01.701 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:01.701 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:01.701 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:01 compute-1 nova_compute[226101]: 2025-12-06 08:23:01.713 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:02.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:23:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:02 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:02.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:02 compute-1 nova_compute[226101]: 2025-12-06 08:23:02.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:02 compute-1 nova_compute[226101]: 2025-12-06 08:23:02.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:23:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:03 compute-1 nova_compute[226101]: 2025-12-06 08:23:03.412 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:03 compute-1 ceph-mon[81689]: pgmap v3924: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1354063911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4214306056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:04.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:04.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:04 compute-1 sshd-session[314136]: Received disconnect from 91.144.158.231 port 12835:11: Bye Bye [preauth]
Dec 06 08:23:04 compute-1 sshd-session[314136]: Disconnected from authenticating user root 91.144.158.231 port 12835 [preauth]
Dec 06 08:23:04 compute-1 ceph-mon[81689]: pgmap v3925: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:05 compute-1 sshd-session[314138]: Received disconnect from 154.209.4.183 port 48882:11: Bye Bye [preauth]
Dec 06 08:23:05 compute-1 sshd-session[314138]: Disconnected from authenticating user root 154.209.4.183 port 48882 [preauth]
Dec 06 08:23:05 compute-1 nova_compute[226101]: 2025-12-06 08:23:05.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:06 compute-1 nova_compute[226101]: 2025-12-06 08:23:06.049 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009371.0476406, 173105e8-7c00-4b0a-86b1-b0acabe5cbeb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:23:06 compute-1 nova_compute[226101]: 2025-12-06 08:23:06.050 226109 INFO nova.compute.manager [-] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] VM Stopped (Lifecycle Event)
Dec 06 08:23:06 compute-1 nova_compute[226101]: 2025-12-06 08:23:06.085 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:06 compute-1 nova_compute[226101]: 2025-12-06 08:23:06.218 226109 DEBUG nova.compute.manager [None req-1b83b3f8-d638-424a-9d40-15b1c84b6aae - - - - - -] [instance: 173105e8-7c00-4b0a-86b1-b0acabe5cbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:23:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:06.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:07 compute-1 ceph-mon[81689]: pgmap v3926: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:23:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:08 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:08 compute-1 nova_compute[226101]: 2025-12-06 08:23:08.413 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:09 compute-1 ceph-mon[81689]: pgmap v3927: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 682 B/s wr, 13 op/s
Dec 06 08:23:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2801272996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:23:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2801272996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:23:09 compute-1 sshd-session[314140]: Received disconnect from 136.112.8.45 port 57104:11: Bye Bye [preauth]
Dec 06 08:23:09 compute-1 sshd-session[314140]: Disconnected from authenticating user root 136.112.8.45 port 57104 [preauth]
Dec 06 08:23:09 compute-1 nova_compute[226101]: 2025-12-06 08:23:09.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:10 compute-1 sshd-session[314142]: Received disconnect from 106.51.92.114 port 35449:11: Bye Bye [preauth]
Dec 06 08:23:10 compute-1 sshd-session[314142]: Disconnected from authenticating user root 106.51.92.114 port 35449 [preauth]
Dec 06 08:23:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:10.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:10.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:10 compute-1 nova_compute[226101]: 2025-12-06 08:23:10.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:11 compute-1 nova_compute[226101]: 2025-12-06 08:23:11.087 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:11 compute-1 ceph-mon[81689]: pgmap v3928: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:12.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:12.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:12 compute-1 nova_compute[226101]: 2025-12-06 08:23:12.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:12 compute-1 ceph-mon[81689]: pgmap v3929: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:13 compute-1 nova_compute[226101]: 2025-12-06 08:23:13.415 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:23:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:14.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:14 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:14.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:15 compute-1 ceph-mon[81689]: pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:16 compute-1 nova_compute[226101]: 2025-12-06 08:23:16.091 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:16.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:16.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:17 compute-1 ceph-mon[81689]: pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:18.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:18.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:18 compute-1 nova_compute[226101]: 2025-12-06 08:23:18.417 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:19 compute-1 ceph-mon[81689]: pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:20.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:20 compute-1 nova_compute[226101]: 2025-12-06 08:23:20.531 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:20 compute-1 nova_compute[226101]: 2025-12-06 08:23:20.532 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:20 compute-1 nova_compute[226101]: 2025-12-06 08:23:20.552 226109 DEBUG nova.compute.manager [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:23:20 compute-1 nova_compute[226101]: 2025-12-06 08:23:20.643 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:20 compute-1 nova_compute[226101]: 2025-12-06 08:23:20.643 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:20 compute-1 nova_compute[226101]: 2025-12-06 08:23:20.657 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:23:20 compute-1 nova_compute[226101]: 2025-12-06 08:23:20.658 226109 INFO nova.compute.claims [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Claim successful on node compute-1.ctlplane.example.com
Dec 06 08:23:20 compute-1 ceph-mon[81689]: pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:20 compute-1 nova_compute[226101]: 2025-12-06 08:23:20.847 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.094 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:23:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2848336609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.282 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.287 226109 DEBUG nova.compute.provider_tree [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.334 226109 DEBUG nova.scheduler.client.report [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.514 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.514 226109 DEBUG nova.compute.manager [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.662 226109 DEBUG nova.compute.manager [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.663 226109 DEBUG nova.network.neutron [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.694 226109 INFO nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.711 226109 DEBUG nova.compute.manager [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.821 226109 DEBUG nova.compute.manager [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.822 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.822 226109 INFO nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Creating image(s)
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.848 226109 DEBUG nova.storage.rbd_utils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:23:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2848336609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.882 226109 DEBUG nova.storage.rbd_utils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.916 226109 DEBUG nova.storage.rbd_utils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.921 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:21 compute-1 nova_compute[226101]: 2025-12-06 08:23:21.961 226109 DEBUG nova.policy [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0432cb6633e14c1b86fc320e7f3bb880', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.015 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.016 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.016 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.016 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.040 226109 DEBUG nova.storage.rbd_utils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.043 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:22.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:22.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.423 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.490 226109 DEBUG nova.storage.rbd_utils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] resizing rbd image 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.584 226109 DEBUG nova.objects.instance [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 1cd6bfb7-50f5-47c3-ab52-d36bc3401354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.600 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.601 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Ensure instance console log exists: /var/lib/nova/instances/1cd6bfb7-50f5-47c3-ab52-d36bc3401354/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.601 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.601 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:22 compute-1 nova_compute[226101]: 2025-12-06 08:23:22.602 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:22 compute-1 ceph-mon[81689]: pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:23 compute-1 nova_compute[226101]: 2025-12-06 08:23:23.224 226109 DEBUG nova.network.neutron [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Successfully created port: c080e6e6-5887-437f-958e-1e981cc49f14 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:23:23 compute-1 nova_compute[226101]: 2025-12-06 08:23:23.420 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:24 compute-1 nova_compute[226101]: 2025-12-06 08:23:24.123 226109 DEBUG nova.network.neutron [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Successfully updated port: c080e6e6-5887-437f-958e-1e981cc49f14 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:23:24 compute-1 nova_compute[226101]: 2025-12-06 08:23:24.309 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:23:24 compute-1 nova_compute[226101]: 2025-12-06 08:23:24.309 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquired lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:23:24 compute-1 nova_compute[226101]: 2025-12-06 08:23:24.310 226109 DEBUG nova.network.neutron [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:23:24 compute-1 nova_compute[226101]: 2025-12-06 08:23:24.352 226109 DEBUG nova.compute.manager [req-ef3fc261-ccb1-491b-9fac-e5a25034046c req-9af15b23-bfe8-43fa-be4d-a29b85cff20c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Received event network-changed-c080e6e6-5887-437f-958e-1e981cc49f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:23:24 compute-1 nova_compute[226101]: 2025-12-06 08:23:24.353 226109 DEBUG nova.compute.manager [req-ef3fc261-ccb1-491b-9fac-e5a25034046c req-9af15b23-bfe8-43fa-be4d-a29b85cff20c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Refreshing instance network info cache due to event network-changed-c080e6e6-5887-437f-958e-1e981cc49f14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:23:24 compute-1 nova_compute[226101]: 2025-12-06 08:23:24.354 226109 DEBUG oslo_concurrency.lockutils [req-ef3fc261-ccb1-491b-9fac-e5a25034046c req-9af15b23-bfe8-43fa-be4d-a29b85cff20c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:23:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:24.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:24.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:24 compute-1 nova_compute[226101]: 2025-12-06 08:23:24.562 226109 DEBUG nova.network.neutron [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:23:25 compute-1 podman[314334]: 2025-12-06 08:23:25.068562447 +0000 UTC m=+0.052395286 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:23:25 compute-1 podman[314335]: 2025-12-06 08:23:25.069919994 +0000 UTC m=+0.053816674 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 08:23:25 compute-1 ceph-mon[81689]: pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:25 compute-1 podman[314336]: 2025-12-06 08:23:25.093987869 +0000 UTC m=+0.073965064 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.222 226109 DEBUG nova.network.neutron [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Updating instance_info_cache with network_info: [{"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.258 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Releasing lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.258 226109 DEBUG nova.compute.manager [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Instance network_info: |[{"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.258 226109 DEBUG oslo_concurrency.lockutils [req-ef3fc261-ccb1-491b-9fac-e5a25034046c req-9af15b23-bfe8-43fa-be4d-a29b85cff20c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.259 226109 DEBUG nova.network.neutron [req-ef3fc261-ccb1-491b-9fac-e5a25034046c req-9af15b23-bfe8-43fa-be4d-a29b85cff20c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Refreshing network info cache for port c080e6e6-5887-437f-958e-1e981cc49f14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.261 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Start _get_guest_xml network_info=[{"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.265 226109 WARNING nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.270 226109 DEBUG nova.virt.libvirt.host [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.271 226109 DEBUG nova.virt.libvirt.host [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.273 226109 DEBUG nova.virt.libvirt.host [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.274 226109 DEBUG nova.virt.libvirt.host [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.275 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.275 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.275 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.275 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.276 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.276 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.276 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.276 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.276 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.276 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.276 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.277 226109 DEBUG nova.virt.hardware [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.279 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:23:25 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2036688055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.710 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.742 226109 DEBUG nova.storage.rbd_utils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:23:25 compute-1 nova_compute[226101]: 2025-12-06 08:23:25.746 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.097 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2036688055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:23:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:23:26 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3111255366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.211 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.213 226109 DEBUG nova.virt.libvirt.vif [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-287011916',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-287011916',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=218,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC8BznMB0Xu/ykT7z/N8+gel7XJlQSn0npteW8yVXEIGRqJoEANmDfv68DjMaYfGNX4b0z8xP/ctyWvQ7TYfkeAXD05ZkFtdVnjgdZHjlMKaS2ob4Ytz/egC3eVPS5uFkw==',key_name='tempest-TestSecurityGroupsBasicOps-1814186017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-rlepsp0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:23:21Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=1cd6bfb7-50f5-47c3-ab52-d36bc3401354,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.214 226109 DEBUG nova.network.os_vif_util [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.215 226109 DEBUG nova.network.os_vif_util [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=c080e6e6-5887-437f-958e-1e981cc49f14,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc080e6e6-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.217 226109 DEBUG nova.objects.instance [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1cd6bfb7-50f5-47c3-ab52-d36bc3401354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.242 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:23:26 compute-1 nova_compute[226101]:   <uuid>1cd6bfb7-50f5-47c3-ab52-d36bc3401354</uuid>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   <name>instance-000000da</name>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-287011916</nova:name>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:23:25</nova:creationTime>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <nova:user uuid="0432cb6633e14c1b86fc320e7f3bb880">tempest-TestSecurityGroupsBasicOps-568463891-project-member</nova:user>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <nova:project uuid="5d23d1d6ffc142eaa9bee0ef93fe60e4">tempest-TestSecurityGroupsBasicOps-568463891</nova:project>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <nova:port uuid="c080e6e6-5887-437f-958e-1e981cc49f14">
Dec 06 08:23:26 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <system>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <entry name="serial">1cd6bfb7-50f5-47c3-ab52-d36bc3401354</entry>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <entry name="uuid">1cd6bfb7-50f5-47c3-ab52-d36bc3401354</entry>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     </system>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   <os>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   </os>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   <features>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   </features>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk">
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       </source>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk.config">
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       </source>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:23:26 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:68:f5:42"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <target dev="tapc080e6e6-58"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/1cd6bfb7-50f5-47c3-ab52-d36bc3401354/console.log" append="off"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <video>
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     </video>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:23:26 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:23:26 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:23:26 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:23:26 compute-1 nova_compute[226101]: </domain>
Dec 06 08:23:26 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.244 226109 DEBUG nova.compute.manager [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Preparing to wait for external event network-vif-plugged-c080e6e6-5887-437f-958e-1e981cc49f14 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.244 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.244 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.245 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.245 226109 DEBUG nova.virt.libvirt.vif [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-287011916',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-287011916',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=218,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC8BznMB0Xu/ykT7z/N8+gel7XJlQSn0npteW8yVXEIGRqJoEANmDfv68DjMaYfGNX4b0z8xP/ctyWvQ7TYfkeAXD05ZkFtdVnjgdZHjlMKaS2ob4Ytz/egC3eVPS5uFkw==',key_name='tempest-TestSecurityGroupsBasicOps-1814186017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-rlepsp0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:23:21Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=1cd6bfb7-50f5-47c3-ab52-d36bc3401354,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.245 226109 DEBUG nova.network.os_vif_util [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.246 226109 DEBUG nova.network.os_vif_util [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=c080e6e6-5887-437f-958e-1e981cc49f14,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc080e6e6-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.246 226109 DEBUG os_vif [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=c080e6e6-5887-437f-958e-1e981cc49f14,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc080e6e6-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.247 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.247 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.248 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.250 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.250 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc080e6e6-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.251 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc080e6e6-58, col_values=(('external_ids', {'iface-id': 'c080e6e6-5887-437f-958e-1e981cc49f14', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:f5:42', 'vm-uuid': '1cd6bfb7-50f5-47c3-ab52-d36bc3401354'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.253 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:26 compute-1 NetworkManager[49031]: <info>  [1765009406.2546] manager: (tapc080e6e6-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.255 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.259 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.260 226109 INFO os_vif [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=c080e6e6-5887-437f-958e-1e981cc49f14,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc080e6e6-58')
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.329 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.329 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.330 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No VIF found with MAC fa:16:3e:68:f5:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.330 226109 INFO nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Using config drive
Dec 06 08:23:26 compute-1 nova_compute[226101]: 2025-12-06 08:23:26.361 226109 DEBUG nova.storage.rbd_utils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:23:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:26.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:27 compute-1 sshd-session[314480]: Received disconnect from 186.96.151.198 port 41030:11: Bye Bye [preauth]
Dec 06 08:23:27 compute-1 sshd-session[314480]: Disconnected from authenticating user root 186.96.151.198 port 41030 [preauth]
Dec 06 08:23:27 compute-1 ceph-mon[81689]: pgmap v3936: 305 pgs: 305 active+clean; 156 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Dec 06 08:23:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3111255366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:23:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:28.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:28 compute-1 nova_compute[226101]: 2025-12-06 08:23:28.421 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:28 compute-1 nova_compute[226101]: 2025-12-06 08:23:28.568 226109 INFO nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Creating config drive at /var/lib/nova/instances/1cd6bfb7-50f5-47c3-ab52-d36bc3401354/disk.config
Dec 06 08:23:28 compute-1 nova_compute[226101]: 2025-12-06 08:23:28.572 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1cd6bfb7-50f5-47c3-ab52-d36bc3401354/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv24513bx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:28 compute-1 nova_compute[226101]: 2025-12-06 08:23:28.709 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1cd6bfb7-50f5-47c3-ab52-d36bc3401354/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv24513bx" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:23:28 compute-1 nova_compute[226101]: 2025-12-06 08:23:28.741 226109 DEBUG nova.storage.rbd_utils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:23:28 compute-1 nova_compute[226101]: 2025-12-06 08:23:28.749 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1cd6bfb7-50f5-47c3-ab52-d36bc3401354/disk.config 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:28 compute-1 nova_compute[226101]: 2025-12-06 08:23:28.930 226109 DEBUG oslo_concurrency.processutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1cd6bfb7-50f5-47c3-ab52-d36bc3401354/disk.config 1cd6bfb7-50f5-47c3-ab52-d36bc3401354_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:23:28 compute-1 nova_compute[226101]: 2025-12-06 08:23:28.931 226109 INFO nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Deleting local config drive /var/lib/nova/instances/1cd6bfb7-50f5-47c3-ab52-d36bc3401354/disk.config because it was imported into RBD.
Dec 06 08:23:28 compute-1 kernel: tapc080e6e6-58: entered promiscuous mode
Dec 06 08:23:28 compute-1 NetworkManager[49031]: <info>  [1765009408.9764] manager: (tapc080e6e6-58): new Tun device (/org/freedesktop/NetworkManager/Devices/402)
Dec 06 08:23:28 compute-1 ovn_controller[130279]: 2025-12-06T08:23:28Z|00859|binding|INFO|Claiming lport c080e6e6-5887-437f-958e-1e981cc49f14 for this chassis.
Dec 06 08:23:28 compute-1 ovn_controller[130279]: 2025-12-06T08:23:28Z|00860|binding|INFO|c080e6e6-5887-437f-958e-1e981cc49f14: Claiming fa:16:3e:68:f5:42 10.100.0.14
Dec 06 08:23:28 compute-1 nova_compute[226101]: 2025-12-06 08:23:28.977 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:28 compute-1 nova_compute[226101]: 2025-12-06 08:23:28.988 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:28.995 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:f5:42 10.100.0.14'], port_security=['fa:16:3e:68:f5:42 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1cd6bfb7-50f5-47c3-ab52-d36bc3401354', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189eab69-7772-4260-9abd-06a6f9690645', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05ba0383-51c9-4f05-9aff-9ce5d118989a 706ebad5-6fe3-4dd3-9d43-a7533ecec0ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daab64e6-4b17-43eb-9bf1-b22faaf53364, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=c080e6e6-5887-437f-958e-1e981cc49f14) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:23:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:28.996 139580 INFO neutron.agent.ovn.metadata.agent [-] Port c080e6e6-5887-437f-958e-1e981cc49f14 in datapath 189eab69-7772-4260-9abd-06a6f9690645 bound to our chassis
Dec 06 08:23:28 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:28.997 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 189eab69-7772-4260-9abd-06a6f9690645
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.008 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2369d3ec-b852-4dd0-9502-8c75569ae43d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.009 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap189eab69-71 in ovnmeta-189eab69-7772-4260-9abd-06a6f9690645 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.011 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap189eab69-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.011 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9df082-113d-4303-9bbc-80c03e9d69b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.012 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9f30e9f8-5301-45d6-ac6c-81d41680d495]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 systemd-machined[190302]: New machine qemu-98-instance-000000da.
Dec 06 08:23:29 compute-1 systemd-udevd[314537]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.023 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[ebad30f9-ee40-4ead-8608-cdb19f7f7a05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 NetworkManager[49031]: <info>  [1765009409.0349] device (tapc080e6e6-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:23:29 compute-1 NetworkManager[49031]: <info>  [1765009409.0368] device (tapc080e6e6-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:23:29 compute-1 systemd[1]: Started Virtual Machine qemu-98-instance-000000da.
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.051 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[d5008363-e1f6-481b-99d4-4f63585928a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 ovn_controller[130279]: 2025-12-06T08:23:29Z|00861|binding|INFO|Setting lport c080e6e6-5887-437f-958e-1e981cc49f14 ovn-installed in OVS
Dec 06 08:23:29 compute-1 ovn_controller[130279]: 2025-12-06T08:23:29Z|00862|binding|INFO|Setting lport c080e6e6-5887-437f-958e-1e981cc49f14 up in Southbound
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.068 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.079 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f2ce10-2daa-450d-9914-6aedbd48838f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.084 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f22e03be-df36-4b25-ab5a-fff70cca920a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 NetworkManager[49031]: <info>  [1765009409.0850] manager: (tap189eab69-70): new Veth device (/org/freedesktop/NetworkManager/Devices/403)
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.114 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb556cd-debe-4616-9483-7a64f7f00eb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.117 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[4191297b-4551-4e35-97b2-1366e11b39f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 NetworkManager[49031]: <info>  [1765009409.1355] device (tap189eab69-70): carrier: link connected
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.139 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3bfd09-0d23-40d6-ba21-13db35f97b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 ceph-mon[81689]: pgmap v3937: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.155 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[177aca6d-e35b-4d18-94fa-760d20f4b4f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap189eab69-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:c9:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 968863, 'reachable_time': 19641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314568, 'error': None, 'target': 'ovnmeta-189eab69-7772-4260-9abd-06a6f9690645', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.169 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[72e3cbaa-4f82-4c4e-a995-276d7a7b3b2d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:c9d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 968863, 'tstamp': 968863}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314569, 'error': None, 'target': 'ovnmeta-189eab69-7772-4260-9abd-06a6f9690645', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.185 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[ac64b5aa-5c02-4c06-b4e8-9c80f77f66c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap189eab69-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:c9:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 968863, 'reachable_time': 19641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314570, 'error': None, 'target': 'ovnmeta-189eab69-7772-4260-9abd-06a6f9690645', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
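[annotation] The three privsep replies above are decoded netlink messages (RTM_NEWLINK for the link, RTM_NEWADDR for the link-local address) fetched inside the ovnmeta-189eab69-... namespace; note the 'target' field in each header. A minimal sketch, assuming pyroute2 is installed and using the namespace and interface names from the log, that reproduces the same attribute view outside of privsep (run as root so the namespace can be entered):

    # Sketch: read the same link/address attributes the privsep daemon
    # returned above. NetNS exposes the IPRoute API inside a named netns.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-189eab69-7772-4260-9abd-06a6f9690645') as ns:
        idx = ns.link_lookup(ifname='tap189eab69-71')[0]
        (link,) = ns.get_links(idx)
        # get_attr() walks the same [name, value] 'attrs' lists seen in the log
        print(link.get_attr('IFLA_ADDRESS'))     # fa:16:3e:67:c9:d2
        print(link.get_attr('IFLA_OPERSTATE'))   # UP
        for addr in ns.get_addr(index=idx, family=10):  # 10 == AF_INET6
            print(addr.get_attr('IFA_ADDRESS'))  # fe80::f816:3eff:fe67:c9d2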
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.216 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ee7881-a2ef-4b3d-8a50-cf84824b4175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.219 226109 DEBUG nova.network.neutron [req-ef3fc261-ccb1-491b-9fac-e5a25034046c req-9af15b23-bfe8-43fa-be4d-a29b85cff20c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Updated VIF entry in instance network info cache for port c080e6e6-5887-437f-958e-1e981cc49f14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.220 226109 DEBUG nova.network.neutron [req-ef3fc261-ccb1-491b-9fac-e5a25034046c req-9af15b23-bfe8-43fa-be4d-a29b85cff20c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Updating instance_info_cache with network_info: [{"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.254 226109 DEBUG oslo_concurrency.lockutils [req-ef3fc261-ccb1-491b-9fac-e5a25034046c req-9af15b23-bfe8-43fa-be4d-a29b85cff20c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
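[annotation] The instance_info_cache payload logged at 08:23:29.220 is plain JSON; a short sketch pulling out the values Nova just cached ('blob' here is hypothetical and stands for the logged list):

    # Sketch: extract addresses from the network_info blob logged above.
    import json

    port = json.loads(blob)[0]
    print(port['address'])                    # fa:16:3e:68:f5:42
    for subnet in port['network']['subnets']:
        for ip in subnet['ips']:
            print(ip['address'], ip['type'])  # 10.100.0.14 fixed
    print(port['active'])                     # False until the VIF is bound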
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.278 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[67723208-b77d-4c24-b40c-1f82ded06634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.280 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap189eab69-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.280 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.280 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap189eab69-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.282 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:29 compute-1 NetworkManager[49031]: <info>  [1765009409.2833] manager: (tap189eab69-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Dec 06 08:23:29 compute-1 kernel: tap189eab69-70: entered promiscuous mode
Dec 06 08:23:29 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.286 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.286 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap189eab69-70, col_values=(('external_ids', {'iface-id': '46835515-f38d-4eac-91aa-acd0b877297f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
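[annotation] The three ovsdbapp transactions above (08:23:29.280 to .286) move the metadata tap out of br-ex, plug it into br-int, and tag it with the lport it should bind to. A rough equivalent through ovsdbapp's public API, batched into one transaction for brevity ('api' is an assumed, already-connected ovsdbapp idl for the local Open vSwitch database):

    # Sketch of the logged DelPort/AddPort/DbSet commands via ovsdbapp.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap189eab69-70', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap189eab69-70', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap189eab69-70',
            ('external_ids',
             {'iface-id': '46835515-f38d-4eac-91aa-acd0b877297f'})))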
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.287 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:29 compute-1 ovn_controller[130279]: 2025-12-06T08:23:29Z|00863|binding|INFO|Releasing lport 46835515-f38d-4eac-91aa-acd0b877297f from this chassis (sb_readonly=0)
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.300 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.301 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/189eab69-7772-4260-9abd-06a6f9690645.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/189eab69-7772-4260-9abd-06a6f9690645.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.302 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed0c38b-63ca-491b-b06b-9b87c6497e1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.303 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-189eab69-7772-4260-9abd-06a6f9690645
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/189eab69-7772-4260-9abd-06a6f9690645.pid.haproxy
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 189eab69-7772-4260-9abd-06a6f9690645
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.304 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-189eab69-7772-4260-9abd-06a6f9690645', 'env', 'PROCESS_TAG=haproxy-189eab69-7772-4260-9abd-06a6f9690645', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/189eab69-7772-4260-9abd-06a6f9690645.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.344 226109 DEBUG nova.compute.manager [req-b4090fc7-f653-436d-8589-9b8c59e028fb req-871f2659-493d-4ca4-befd-25fc55e8aa67 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Received event network-vif-plugged-c080e6e6-5887-437f-958e-1e981cc49f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.345 226109 DEBUG oslo_concurrency.lockutils [req-b4090fc7-f653-436d-8589-9b8c59e028fb req-871f2659-493d-4ca4-befd-25fc55e8aa67 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.345 226109 DEBUG oslo_concurrency.lockutils [req-b4090fc7-f653-436d-8589-9b8c59e028fb req-871f2659-493d-4ca4-befd-25fc55e8aa67 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.346 226109 DEBUG oslo_concurrency.lockutils [req-b4090fc7-f653-436d-8589-9b8c59e028fb req-871f2659-493d-4ca4-befd-25fc55e8aa67 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.346 226109 DEBUG nova.compute.manager [req-b4090fc7-f653-436d-8589-9b8c59e028fb req-871f2659-493d-4ca4-befd-25fc55e8aa67 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Processing event network-vif-plugged-c080e6e6-5887-437f-958e-1e981cc49f14 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.583 226109 DEBUG nova.compute.manager [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.583 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009409.5822504, 1cd6bfb7-50f5-47c3-ab52-d36bc3401354 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.584 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] VM Started (Lifecycle Event)
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.590 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.594 226109 INFO nova.virt.libvirt.driver [-] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Instance spawned successfully.
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.594 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:23:29 compute-1 podman[314645]: 2025-12-06 08:23:29.658443029 +0000 UTC m=+0.049211210 container create 14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.675 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.675 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=105, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=104) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.678 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.685 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.689 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.690 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.690 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.691 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.691 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.692 226109 DEBUG nova.virt.libvirt.driver [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:23:29 compute-1 systemd[1]: Started libpod-conmon-14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8.scope.
Dec 06 08:23:29 compute-1 podman[314645]: 2025-12-06 08:23:29.630770927 +0000 UTC m=+0.021539138 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.727 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.728 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009409.5826135, 1cd6bfb7-50f5-47c3-ab52-d36bc3401354 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.728 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] VM Paused (Lifecycle Event)
Dec 06 08:23:29 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:23:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249886831e6962209294defa0ce607af04762c6c54d28702149a4fa2ceb1ea63/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:29 compute-1 podman[314645]: 2025-12-06 08:23:29.747612169 +0000 UTC m=+0.138380360 container init 14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 08:23:29 compute-1 podman[314645]: 2025-12-06 08:23:29.754845344 +0000 UTC m=+0.145613515 container start 14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
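[annotation] The create/init/start events above show the haproxy instance running wrapped in a podman container named after the namespace. A quick programmatic state check (a sketch; assumes the podman CLI from the log is on PATH):

    # Sketch: confirm the wrapper container from the log is running.
    import json
    import subprocess

    name = 'neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645'
    info = json.loads(subprocess.check_output(['podman', 'inspect', name]))
    print(info[0]['State']['Status'])  # 'running' after the start event above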
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.758 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.764 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009409.5904906, 1cd6bfb7-50f5-47c3-ab52-d36bc3401354 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.764 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] VM Resumed (Lifecycle Event)
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.767 226109 INFO nova.compute.manager [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Took 7.95 seconds to spawn the instance on the hypervisor.
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.767 226109 DEBUG nova.compute.manager [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:23:29 compute-1 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[314661]: [NOTICE]   (314665) : New worker (314667) forked
Dec 06 08:23:29 compute-1 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[314661]: [NOTICE]   (314665) : Loading success.
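[annotation] With the worker forked and "Loading success." reported, the listener from the config dumped at 08:23:29.303 (bind 169.254.169.254:80 inside the ovnmeta namespace, backend on the Unix socket /var/lib/neutron/metadata_proxy, X-OVN-Network-ID header added per request) can be probed. A hedged check, assuming pyroute2's NSPopen and curl are available:

    # Sketch: probe the metadata proxy from inside its namespace.
    from subprocess import PIPE
    from pyroute2 import NSPopen

    ns = 'ovnmeta-189eab69-7772-4260-9abd-06a6f9690645'
    proc = NSPopen(ns, ['curl', '-s', '-o', '/dev/null', '-w', '%{http_code}',
                        'http://169.254.169.254/openstack/latest/meta_data.json'],
                   stdout=PIPE)
    # Prints the HTTP status; a probe not originating from an instance IP
    # will likely get 400/404 since the agent maps client IPs to ports.
    print(proc.communicate()[0].decode())
    proc.wait()
    proc.release()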
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.793 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.797 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:23:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:29.808 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.846 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.856 226109 INFO nova.compute.manager [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Took 9.25 seconds to build instance.
Dec 06 08:23:29 compute-1 nova_compute[226101]: 2025-12-06 08:23:29.874 226109 DEBUG oslo_concurrency.lockutils [None req-7fd83e33-8d10-4aaa-a2ae-40b44d77ed8f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:30.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:30 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:23:30.809 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '105'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
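[annotation] The sequence from 08:23:29.675 to 08:23:30.809 is the agent's liveness handshake: SB_Global.nb_cfg ticked from 104 to 105, the agent waited its one-second delay, then acknowledged by writing the new value into its Chassis_Private row. The logged DbSetCommand corresponds to roughly this ovsdbapp call ('sb_api' is an assumed idl connected to the OVN southbound database):

    # Sketch of the Chassis_Private ack shown above.
    sb_api.db_set(
        'Chassis_Private', '03fe054d-d727-4af3-9c5e-92e57505f242',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '105'}),
        if_exists=True,
    ).execute(check_error=True)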
Dec 06 08:23:31 compute-1 ceph-mon[81689]: pgmap v3938: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:23:31 compute-1 nova_compute[226101]: 2025-12-06 08:23:31.297 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:31 compute-1 nova_compute[226101]: 2025-12-06 08:23:31.468 226109 DEBUG nova.compute.manager [req-3bcfba9c-da07-4457-9435-56739d99eb13 req-82944762-ee5e-4092-97c4-9f28a68acae0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Received event network-vif-plugged-c080e6e6-5887-437f-958e-1e981cc49f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:23:31 compute-1 nova_compute[226101]: 2025-12-06 08:23:31.469 226109 DEBUG oslo_concurrency.lockutils [req-3bcfba9c-da07-4457-9435-56739d99eb13 req-82944762-ee5e-4092-97c4-9f28a68acae0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:31 compute-1 nova_compute[226101]: 2025-12-06 08:23:31.470 226109 DEBUG oslo_concurrency.lockutils [req-3bcfba9c-da07-4457-9435-56739d99eb13 req-82944762-ee5e-4092-97c4-9f28a68acae0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:31 compute-1 nova_compute[226101]: 2025-12-06 08:23:31.470 226109 DEBUG oslo_concurrency.lockutils [req-3bcfba9c-da07-4457-9435-56739d99eb13 req-82944762-ee5e-4092-97c4-9f28a68acae0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:31 compute-1 nova_compute[226101]: 2025-12-06 08:23:31.470 226109 DEBUG nova.compute.manager [req-3bcfba9c-da07-4457-9435-56739d99eb13 req-82944762-ee5e-4092-97c4-9f28a68acae0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] No waiting events found dispatching network-vif-plugged-c080e6e6-5887-437f-958e-1e981cc49f14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:23:31 compute-1 nova_compute[226101]: 2025-12-06 08:23:31.470 226109 WARNING nova.compute.manager [req-3bcfba9c-da07-4457-9435-56739d99eb13 req-82944762-ee5e-4092-97c4-9f28a68acae0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Received unexpected event network-vif-plugged-c080e6e6-5887-437f-958e-1e981cc49f14 for instance with vm_state active and task_state None.
Dec 06 08:23:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:32.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:32.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:32 compute-1 sshd-session[314676]: Received disconnect from 154.219.116.39 port 57082:11: Bye Bye [preauth]
Dec 06 08:23:32 compute-1 sshd-session[314676]: Disconnected from authenticating user root 154.219.116.39 port 57082 [preauth]
Dec 06 08:23:33 compute-1 ceph-mon[81689]: pgmap v3939: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 985 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:23:33 compute-1 nova_compute[226101]: 2025-12-06 08:23:33.425 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:34.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:34.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:34 compute-1 NetworkManager[49031]: <info>  [1765009414.9392] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Dec 06 08:23:34 compute-1 NetworkManager[49031]: <info>  [1765009414.9405] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Dec 06 08:23:34 compute-1 nova_compute[226101]: 2025-12-06 08:23:34.938 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:34 compute-1 ovn_controller[130279]: 2025-12-06T08:23:34Z|00864|binding|INFO|Releasing lport 46835515-f38d-4eac-91aa-acd0b877297f from this chassis (sb_readonly=0)
Dec 06 08:23:34 compute-1 nova_compute[226101]: 2025-12-06 08:23:34.990 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:34 compute-1 ovn_controller[130279]: 2025-12-06T08:23:34Z|00865|binding|INFO|Releasing lport 46835515-f38d-4eac-91aa-acd0b877297f from this chassis (sb_readonly=0)
Dec 06 08:23:34 compute-1 nova_compute[226101]: 2025-12-06 08:23:34.995 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:35 compute-1 ceph-mon[81689]: pgmap v3940: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 985 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:23:35 compute-1 sshd-session[314678]: Received disconnect from 124.18.141.70 port 56260:11: Bye Bye [preauth]
Dec 06 08:23:35 compute-1 sshd-session[314678]: Disconnected from authenticating user root 124.18.141.70 port 56260 [preauth]
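[annotation] Interleaved with the OpenStack traffic, sshd-session keeps logging preauth disconnects for user root from unrelated public addresses (154.219.116.39, 124.18.141.70, and 45.120.216.232 below), i.e. routine brute-force noise rather than anything tied to the instance build. A sketch for tallying these straight from the journal, assuming the python-systemd bindings are installed:

    # Sketch: count preauth disconnects per peer address from the journal.
    from collections import Counter
    from systemd import journal

    reader = journal.Reader()
    reader.add_match(SYSLOG_IDENTIFIER='sshd-session')
    hits = Counter()
    for entry in reader:
        msg = entry.get('MESSAGE', '')
        if 'Disconnected from authenticating user' in msg:
            hits[msg.split()[-4]] += 1  # the peer address field
    print(hits.most_common(5))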
Dec 06 08:23:35 compute-1 nova_compute[226101]: 2025-12-06 08:23:35.721 226109 DEBUG nova.compute.manager [req-f56ee882-d6d6-451b-bdfb-5a06555f41e6 req-9bc90576-84c6-4b10-aace-319aa93c06d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Received event network-changed-c080e6e6-5887-437f-958e-1e981cc49f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:23:35 compute-1 nova_compute[226101]: 2025-12-06 08:23:35.721 226109 DEBUG nova.compute.manager [req-f56ee882-d6d6-451b-bdfb-5a06555f41e6 req-9bc90576-84c6-4b10-aace-319aa93c06d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Refreshing instance network info cache due to event network-changed-c080e6e6-5887-437f-958e-1e981cc49f14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:23:35 compute-1 nova_compute[226101]: 2025-12-06 08:23:35.721 226109 DEBUG oslo_concurrency.lockutils [req-f56ee882-d6d6-451b-bdfb-5a06555f41e6 req-9bc90576-84c6-4b10-aace-319aa93c06d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:23:35 compute-1 nova_compute[226101]: 2025-12-06 08:23:35.722 226109 DEBUG oslo_concurrency.lockutils [req-f56ee882-d6d6-451b-bdfb-5a06555f41e6 req-9bc90576-84c6-4b10-aace-319aa93c06d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:23:35 compute-1 nova_compute[226101]: 2025-12-06 08:23:35.722 226109 DEBUG nova.network.neutron [req-f56ee882-d6d6-451b-bdfb-5a06555f41e6 req-9bc90576-84c6-4b10-aace-319aa93c06d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Refreshing network info cache for port c080e6e6-5887-437f-958e-1e981cc49f14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:23:36 compute-1 nova_compute[226101]: 2025-12-06 08:23:36.300 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:36.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:36.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:37 compute-1 nova_compute[226101]: 2025-12-06 08:23:37.130 226109 DEBUG nova.network.neutron [req-f56ee882-d6d6-451b-bdfb-5a06555f41e6 req-9bc90576-84c6-4b10-aace-319aa93c06d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Updated VIF entry in instance network info cache for port c080e6e6-5887-437f-958e-1e981cc49f14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:23:37 compute-1 nova_compute[226101]: 2025-12-06 08:23:37.130 226109 DEBUG nova.network.neutron [req-f56ee882-d6d6-451b-bdfb-5a06555f41e6 req-9bc90576-84c6-4b10-aace-319aa93c06d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Updating instance_info_cache with network_info: [{"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:23:37 compute-1 ceph-mon[81689]: pgmap v3941: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 08:23:37 compute-1 nova_compute[226101]: 2025-12-06 08:23:37.487 226109 DEBUG oslo_concurrency.lockutils [req-f56ee882-d6d6-451b-bdfb-5a06555f41e6 req-9bc90576-84c6-4b10-aace-319aa93c06d1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:23:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:38.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:38.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:38 compute-1 nova_compute[226101]: 2025-12-06 08:23:38.427 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:39 compute-1 ceph-mon[81689]: pgmap v3942: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 350 KiB/s wr, 74 op/s
Dec 06 08:23:40 compute-1 sshd-session[314681]: Received disconnect from 45.120.216.232 port 56250:11: Bye Bye [preauth]
Dec 06 08:23:40 compute-1 sshd-session[314681]: Disconnected from authenticating user root 45.120.216.232 port 56250 [preauth]
Dec 06 08:23:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:40.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:40.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:41 compute-1 nova_compute[226101]: 2025-12-06 08:23:41.303 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:41 compute-1 ceph-mon[81689]: pgmap v3943: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:23:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:42.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:42.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:42 compute-1 nova_compute[226101]: 2025-12-06 08:23:42.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:42 compute-1 nova_compute[226101]: 2025-12-06 08:23:42.745 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:42 compute-1 nova_compute[226101]: 2025-12-06 08:23:42.745 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:42 compute-1 nova_compute[226101]: 2025-12-06 08:23:42.746 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:42 compute-1 nova_compute[226101]: 2025-12-06 08:23:42.746 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:23:42 compute-1 nova_compute[226101]: 2025-12-06 08:23:42.746 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:23:43 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1091127023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.170 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
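[annotation] The resource audit shells out to ceph df (a 0.42s round-trip above, visible to ceph-mon as the client.openstack mon_command) to size the RBD-backed disk pool. Reproducing that probe by hand (a sketch; assumes the same client.openstack keyring and conf path as the logged command):

    # Sketch: re-run the audit's storage probe and read the cluster totals.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print(stats['total_avail_bytes'] / 1024 ** 3)  # ~19 GiB, per the pgmap lines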
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.248 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000da as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.248 226109 DEBUG nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] skipping disk for instance-000000da as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.405 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.407 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3995MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.407 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.408 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.430 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.515 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Instance 1cd6bfb7-50f5-47c3-ab52-d36bc3401354 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.516 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.516 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:23:43 compute-1 ceph-mon[81689]: pgmap v3944: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:23:43 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1091127023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:43 compute-1 nova_compute[226101]: 2025-12-06 08:23:43.616 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:43 compute-1 ovn_controller[130279]: 2025-12-06T08:23:43Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:f5:42 10.100.0.14
Dec 06 08:23:43 compute-1 ovn_controller[130279]: 2025-12-06T08:23:43Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:f5:42 10.100.0.14
Dec 06 08:23:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:23:44 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4095322003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:44 compute-1 nova_compute[226101]: 2025-12-06 08:23:44.051 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:23:44 compute-1 nova_compute[226101]: 2025-12-06 08:23:44.059 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:23:44 compute-1 nova_compute[226101]: 2025-12-06 08:23:44.085 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:23:44 compute-1 nova_compute[226101]: 2025-12-06 08:23:44.121 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:23:44 compute-1 nova_compute[226101]: 2025-12-06 08:23:44.122 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:44.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:44.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:44 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4095322003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:44 compute-1 sudo[314730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:44 compute-1 sudo[314730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:44 compute-1 sudo[314730]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:44 compute-1 sudo[314755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:23:44 compute-1 sudo[314755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:44 compute-1 sudo[314755]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:44 compute-1 sudo[314780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:44 compute-1 sudo[314780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:45 compute-1 sudo[314780]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:45 compute-1 sudo[314805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 08:23:45 compute-1 sudo[314805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:45 compute-1 sshd-session[314706]: Received disconnect from 101.100.194.199 port 45976:11: Bye Bye [preauth]
Dec 06 08:23:45 compute-1 sshd-session[314706]: Disconnected from authenticating user root 101.100.194.199 port 45976 [preauth]
Dec 06 08:23:45 compute-1 sudo[314805]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:45 compute-1 sudo[314851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:45 compute-1 sudo[314851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:45 compute-1 sudo[314851]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:45 compute-1 sshd[164848]: Timeout before authentication for connection from 107.150.106.178 to 38.102.83.204, pid = 313510
Dec 06 08:23:45 compute-1 sudo[314876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:23:45 compute-1 sudo[314876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:45 compute-1 sudo[314876]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:45 compute-1 sudo[314901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:45 compute-1 sudo[314901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:45 compute-1 sudo[314901]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:45 compute-1 ceph-mon[81689]: pgmap v3945: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 994 KiB/s rd, 33 op/s
Dec 06 08:23:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:45 compute-1 sudo[314926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:23:45 compute-1 sudo[314926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:46 compute-1 sudo[314926]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:46 compute-1 sshd[164848]: drop connection #0 from [107.150.106.178]:46124 on [38.102.83.204]:22 penalty: exceeded LoginGraceTime
Dec 06 08:23:46 compute-1 nova_compute[226101]: 2025-12-06 08:23:46.305 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:46.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:46.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:23:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:23:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:23:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:23:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:23:47 compute-1 ceph-mon[81689]: pgmap v3946: 305 pgs: 305 active+clean; 184 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 977 KiB/s wr, 72 op/s
Dec 06 08:23:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:48.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:48 compute-1 nova_compute[226101]: 2025-12-06 08:23:48.432 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:48.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:49 compute-1 ceph-mon[81689]: pgmap v3947: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:50.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:50.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:50 compute-1 ceph-mon[81689]: pgmap v3948: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:51 compute-1 nova_compute[226101]: 2025-12-06 08:23:51.310 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:51 compute-1 sudo[314982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:51 compute-1 sudo[314982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:51 compute-1 sudo[314982]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:51 compute-1 sudo[315007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:23:51 compute-1 sudo[315007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:51 compute-1 sudo[315007]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:52.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:52.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:52 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:53 compute-1 nova_compute[226101]: 2025-12-06 08:23:53.434 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:53 compute-1 sshd-session[315032]: Received disconnect from 14.225.3.79 port 44602:11: Bye Bye [preauth]
Dec 06 08:23:53 compute-1 sshd-session[315032]: Disconnected from authenticating user root 14.225.3.79 port 44602 [preauth]
Dec 06 08:23:53 compute-1 ceph-mon[81689]: pgmap v3949: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:54 compute-1 nova_compute[226101]: 2025-12-06 08:23:54.123 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:54 compute-1 nova_compute[226101]: 2025-12-06 08:23:54.124 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:23:54 compute-1 nova_compute[226101]: 2025-12-06 08:23:54.124 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:23:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:54.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:54.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:54 compute-1 nova_compute[226101]: 2025-12-06 08:23:54.455 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:23:54 compute-1 nova_compute[226101]: 2025-12-06 08:23:54.456 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquired lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:23:54 compute-1 nova_compute[226101]: 2025-12-06 08:23:54.456 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:23:54 compute-1 nova_compute[226101]: 2025-12-06 08:23:54.456 226109 DEBUG nova.objects.instance [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1cd6bfb7-50f5-47c3-ab52-d36bc3401354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:23:54 compute-1 ceph-mon[81689]: pgmap v3950: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:55 compute-1 nova_compute[226101]: 2025-12-06 08:23:55.757 226109 DEBUG nova.network.neutron [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Updating instance_info_cache with network_info: [{"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:23:55 compute-1 nova_compute[226101]: 2025-12-06 08:23:55.784 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Releasing lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:23:55 compute-1 nova_compute[226101]: 2025-12-06 08:23:55.784 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:23:56 compute-1 podman[315035]: 2025-12-06 08:23:56.070110861 +0000 UTC m=+0.055648612 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:23:56 compute-1 podman[315034]: 2025-12-06 08:23:56.081534298 +0000 UTC m=+0.068317792 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 08:23:56 compute-1 podman[315036]: 2025-12-06 08:23:56.108207373 +0000 UTC m=+0.090386384 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:23:56 compute-1 nova_compute[226101]: 2025-12-06 08:23:56.312 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:56.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:56.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:57 compute-1 ceph-mon[81689]: pgmap v3951: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/532581924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:58.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:23:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:58.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:58 compute-1 nova_compute[226101]: 2025-12-06 08:23:58.466 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:59 compute-1 ceph-mon[81689]: pgmap v3952: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 207 KiB/s rd, 1.2 MiB/s wr, 24 op/s
Dec 06 08:24:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1206065533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:00.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:00.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:00 compute-1 sshd-session[315096]: Received disconnect from 186.87.166.141 port 38780:11: Bye Bye [preauth]
Dec 06 08:24:00 compute-1 sshd-session[315096]: Disconnected from authenticating user root 186.87.166.141 port 38780 [preauth]
Dec 06 08:24:00 compute-1 nova_compute[226101]: 2025-12-06 08:24:00.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:00 compute-1 nova_compute[226101]: 2025-12-06 08:24:00.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:01 compute-1 ceph-mon[81689]: pgmap v3953: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Dec 06 08:24:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4088412875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:01 compute-1 nova_compute[226101]: 2025-12-06 08:24:01.314 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:01.701 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:01.702 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:01.702 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:02.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:02.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:02 compute-1 nova_compute[226101]: 2025-12-06 08:24:02.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:02 compute-1 nova_compute[226101]: 2025-12-06 08:24:02.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:24:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:03 compute-1 ceph-mon[81689]: pgmap v3954: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:03 compute-1 nova_compute[226101]: 2025-12-06 08:24:03.469 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/158196637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3774930002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:24:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:04.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:04.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:05 compute-1 ceph-mon[81689]: pgmap v3955: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2615928693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:24:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1585537670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:06 compute-1 nova_compute[226101]: 2025-12-06 08:24:06.317 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:06.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:06.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:06 compute-1 nova_compute[226101]: 2025-12-06 08:24:06.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:06 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:06.687 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=106, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=105) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:24:06 compute-1 nova_compute[226101]: 2025-12-06 08:24:06.688 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:06 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:06.689 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:24:07 compute-1 ceph-mon[81689]: pgmap v3956: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:07 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:07.691 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '106'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:08.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:08.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:08 compute-1 nova_compute[226101]: 2025-12-06 08:24:08.472 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:08 compute-1 ceph-mon[81689]: pgmap v3957: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:09 compute-1 nova_compute[226101]: 2025-12-06 08:24:09.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1369561069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:24:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1369561069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:24:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:10.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:10 compute-1 ceph-mon[81689]: pgmap v3958: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:11 compute-1 nova_compute[226101]: 2025-12-06 08:24:11.321 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:12.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:12.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:12 compute-1 nova_compute[226101]: 2025-12-06 08:24:12.584 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:12 compute-1 nova_compute[226101]: 2025-12-06 08:24:12.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:13 compute-1 ceph-mon[81689]: pgmap v3959: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 08:24:13 compute-1 nova_compute[226101]: 2025-12-06 08:24:13.477 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:14.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:14.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:15 compute-1 ceph-mon[81689]: pgmap v3960: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:24:16 compute-1 nova_compute[226101]: 2025-12-06 08:24:16.326 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:16.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:16.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:17 compute-1 ceph-mon[81689]: pgmap v3961: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 06 08:24:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:18.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:18.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:18 compute-1 nova_compute[226101]: 2025-12-06 08:24:18.525 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:19 compute-1 ceph-mon[81689]: pgmap v3962: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 08:24:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:20.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:20.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:21 compute-1 ceph-mon[81689]: pgmap v3963: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 08:24:21 compute-1 nova_compute[226101]: 2025-12-06 08:24:21.329 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:22.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:22.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:23 compute-1 ceph-mon[81689]: pgmap v3964: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 06 08:24:23 compute-1 nova_compute[226101]: 2025-12-06 08:24:23.527 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:24.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:24.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:24 compute-1 nova_compute[226101]: 2025-12-06 08:24:24.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:24 compute-1 nova_compute[226101]: 2025-12-06 08:24:24.591 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:24:25 compute-1 ceph-mon[81689]: pgmap v3965: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 08:24:25 compute-1 nova_compute[226101]: 2025-12-06 08:24:25.760 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:24:25 compute-1 sshd-session[315100]: Received disconnect from 91.144.158.231 port 39877:11: Bye Bye [preauth]
Dec 06 08:24:25 compute-1 sshd-session[315100]: Disconnected from authenticating user root 91.144.158.231 port 39877 [preauth]
Dec 06 08:24:25 compute-1 sshd-session[315102]: error: kex_exchange_identification: read: Connection reset by peer
Dec 06 08:24:25 compute-1 sshd-session[315102]: Connection reset by 14.103.75.9 port 16294
Dec 06 08:24:26 compute-1 nova_compute[226101]: 2025-12-06 08:24:26.332 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:26.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:26.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:27 compute-1 podman[315104]: 2025-12-06 08:24:27.082734533 +0000 UTC m=+0.057263416 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:24:27 compute-1 podman[315105]: 2025-12-06 08:24:27.110237 +0000 UTC m=+0.083646313 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 08:24:27 compute-1 podman[315103]: 2025-12-06 08:24:27.126219798 +0000 UTC m=+0.093943759 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Dec 06 08:24:27 compute-1 ceph-mon[81689]: pgmap v3966: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:24:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:28 compute-1 nova_compute[226101]: 2025-12-06 08:24:28.114 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:28.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:28 compute-1 nova_compute[226101]: 2025-12-06 08:24:28.531 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:28 compute-1 sshd-session[315167]: Received disconnect from 106.51.92.114 port 50235:11: Bye Bye [preauth]
Dec 06 08:24:28 compute-1 sshd-session[315167]: Disconnected from authenticating user root 106.51.92.114 port 50235 [preauth]
Dec 06 08:24:29 compute-1 ceph-mon[81689]: pgmap v3967: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:24:29 compute-1 ovn_controller[130279]: 2025-12-06T08:24:29Z|00866|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 06 08:24:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:24:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:30 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:30.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:30.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:30 compute-1 nova_compute[226101]: 2025-12-06 08:24:30.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:31 compute-1 nova_compute[226101]: 2025-12-06 08:24:31.334 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:31 compute-1 ceph-mon[81689]: pgmap v3968: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:24:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1010346848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:32.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:24:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:32 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:32.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:33 compute-1 ceph-mon[81689]: pgmap v3969: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.578 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.814 226109 DEBUG nova.compute.manager [req-e92f0f4a-0e36-48b8-a984-888298626b70 req-283d4197-4ed0-4d44-886b-828692d21b6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Received event network-changed-c080e6e6-5887-437f-958e-1e981cc49f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.814 226109 DEBUG nova.compute.manager [req-e92f0f4a-0e36-48b8-a984-888298626b70 req-283d4197-4ed0-4d44-886b-828692d21b6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Refreshing instance network info cache due to event network-changed-c080e6e6-5887-437f-958e-1e981cc49f14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.815 226109 DEBUG oslo_concurrency.lockutils [req-e92f0f4a-0e36-48b8-a984-888298626b70 req-283d4197-4ed0-4d44-886b-828692d21b6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.815 226109 DEBUG oslo_concurrency.lockutils [req-e92f0f4a-0e36-48b8-a984-888298626b70 req-283d4197-4ed0-4d44-886b-828692d21b6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.815 226109 DEBUG nova.network.neutron [req-e92f0f4a-0e36-48b8-a984-888298626b70 req-283d4197-4ed0-4d44-886b-828692d21b6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Refreshing network info cache for port c080e6e6-5887-437f-958e-1e981cc49f14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.898 226109 DEBUG oslo_concurrency.lockutils [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.898 226109 DEBUG oslo_concurrency.lockutils [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.898 226109 DEBUG oslo_concurrency.lockutils [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.898 226109 DEBUG oslo_concurrency.lockutils [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.899 226109 DEBUG oslo_concurrency.lockutils [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.900 226109 INFO nova.compute.manager [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Terminating instance
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.900 226109 DEBUG nova.compute.manager [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:24:33 compute-1 kernel: tapc080e6e6-58 (unregistering): left promiscuous mode
Dec 06 08:24:33 compute-1 NetworkManager[49031]: <info>  [1765009473.9609] device (tapc080e6e6-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:24:33 compute-1 ovn_controller[130279]: 2025-12-06T08:24:33Z|00867|binding|INFO|Releasing lport c080e6e6-5887-437f-958e-1e981cc49f14 from this chassis (sb_readonly=0)
Dec 06 08:24:33 compute-1 ovn_controller[130279]: 2025-12-06T08:24:33Z|00868|binding|INFO|Setting lport c080e6e6-5887-437f-958e-1e981cc49f14 down in Southbound
Dec 06 08:24:33 compute-1 ovn_controller[130279]: 2025-12-06T08:24:33Z|00869|binding|INFO|Removing iface tapc080e6e6-58 ovn-installed in OVS
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.976 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:33.981 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:f5:42 10.100.0.14'], port_security=['fa:16:3e:68:f5:42 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1cd6bfb7-50f5-47c3-ab52-d36bc3401354', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189eab69-7772-4260-9abd-06a6f9690645', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05ba0383-51c9-4f05-9aff-9ce5d118989a 706ebad5-6fe3-4dd3-9d43-a7533ecec0ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daab64e6-4b17-43eb-9bf1-b22faaf53364, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=c080e6e6-5887-437f-958e-1e981cc49f14) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:24:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:33.982 139580 INFO neutron.agent.ovn.metadata.agent [-] Port c080e6e6-5887-437f-958e-1e981cc49f14 in datapath 189eab69-7772-4260-9abd-06a6f9690645 unbound from our chassis
Dec 06 08:24:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:33.984 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 189eab69-7772-4260-9abd-06a6f9690645, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:24:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:33.986 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[567cd4a1-f9f1-4d67-b8b2-bd94e048747a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:33.986 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-189eab69-7772-4260-9abd-06a6f9690645 namespace which is not needed anymore
Dec 06 08:24:33 compute-1 nova_compute[226101]: 2025-12-06 08:24:33.996 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:34 compute-1 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000da.scope: Deactivated successfully.
Dec 06 08:24:34 compute-1 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000da.scope: Consumed 16.057s CPU time.
Dec 06 08:24:34 compute-1 systemd-machined[190302]: Machine qemu-98-instance-000000da terminated.
Dec 06 08:24:34 compute-1 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[314661]: [NOTICE]   (314665) : haproxy version is 2.8.14-c23fe91
Dec 06 08:24:34 compute-1 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[314661]: [NOTICE]   (314665) : path to executable is /usr/sbin/haproxy
Dec 06 08:24:34 compute-1 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[314661]: [WARNING]  (314665) : Exiting Master process...
Dec 06 08:24:34 compute-1 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[314661]: [ALERT]    (314665) : Current worker (314667) exited with code 143 (Terminated)
Dec 06 08:24:34 compute-1 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[314661]: [WARNING]  (314665) : All workers exited. Exiting... (0)
Dec 06 08:24:34 compute-1 systemd[1]: libpod-14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8.scope: Deactivated successfully.
Dec 06 08:24:34 compute-1 podman[315194]: 2025-12-06 08:24:34.127575104 +0000 UTC m=+0.050129404 container died 14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.133 226109 INFO nova.virt.libvirt.driver [-] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Instance destroyed successfully.
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.134 226109 DEBUG nova.objects.instance [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'resources' on Instance uuid 1cd6bfb7-50f5-47c3-ab52-d36bc3401354 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.151 226109 DEBUG nova.virt.libvirt.vif [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-287011916',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-287011916',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=218,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC8BznMB0Xu/ykT7z/N8+gel7XJlQSn0npteW8yVXEIGRqJoEANmDfv68DjMaYfGNX4b0z8xP/ctyWvQ7TYfkeAXD05ZkFtdVnjgdZHjlMKaS2ob4Ytz/egC3eVPS5uFkw==',key_name='tempest-TestSecurityGroupsBasicOps-1814186017',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:23:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-rlepsp0y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:23:29Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=1cd6bfb7-50f5-47c3-ab52-d36bc3401354,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.152 226109 DEBUG nova.network.os_vif_util [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.152 226109 DEBUG nova.network.os_vif_util [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=c080e6e6-5887-437f-958e-1e981cc49f14,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc080e6e6-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.153 226109 DEBUG os_vif [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=c080e6e6-5887-437f-958e-1e981cc49f14,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc080e6e6-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.154 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.154 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc080e6e6-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:34 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8-userdata-shm.mount: Deactivated successfully.
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.157 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-249886831e6962209294defa0ce607af04762c6c54d28702149a4fa2ceb1ea63-merged.mount: Deactivated successfully.
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.160 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.160 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.162 226109 INFO os_vif [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=c080e6e6-5887-437f-958e-1e981cc49f14,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc080e6e6-58')
Dec 06 08:24:34 compute-1 podman[315194]: 2025-12-06 08:24:34.171982285 +0000 UTC m=+0.094536585 container cleanup 14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:24:34 compute-1 systemd[1]: libpod-conmon-14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8.scope: Deactivated successfully.
Dec 06 08:24:34 compute-1 podman[315250]: 2025-12-06 08:24:34.232797476 +0000 UTC m=+0.041816202 container remove 14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:24:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:34.240 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0e0f1db5-dc6e-4d47-b505-7201b0364317]: (4, ('Sat Dec  6 08:24:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645 (14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8)\n14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8\nSat Dec  6 08:24:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645 (14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8)\n14219c35178d85a11c6d7a380bd9436270a90115178c9dd0340b77d12fd8b7d8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:34.242 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[11a22296-a2d1-498e-88bb-8337f10b0be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:34.243 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap189eab69-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:34 compute-1 kernel: tap189eab69-70: left promiscuous mode
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.244 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.254 226109 DEBUG nova.compute.manager [req-fbd6171a-7dce-4275-9f68-5e3920896e7d req-195a5fa8-5843-4e01-bb7e-5766d2257bc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Received event network-vif-unplugged-c080e6e6-5887-437f-958e-1e981cc49f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.254 226109 DEBUG oslo_concurrency.lockutils [req-fbd6171a-7dce-4275-9f68-5e3920896e7d req-195a5fa8-5843-4e01-bb7e-5766d2257bc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.254 226109 DEBUG oslo_concurrency.lockutils [req-fbd6171a-7dce-4275-9f68-5e3920896e7d req-195a5fa8-5843-4e01-bb7e-5766d2257bc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.255 226109 DEBUG oslo_concurrency.lockutils [req-fbd6171a-7dce-4275-9f68-5e3920896e7d req-195a5fa8-5843-4e01-bb7e-5766d2257bc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.255 226109 DEBUG nova.compute.manager [req-fbd6171a-7dce-4275-9f68-5e3920896e7d req-195a5fa8-5843-4e01-bb7e-5766d2257bc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] No waiting events found dispatching network-vif-unplugged-c080e6e6-5887-437f-958e-1e981cc49f14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.255 226109 DEBUG nova.compute.manager [req-fbd6171a-7dce-4275-9f68-5e3920896e7d req-195a5fa8-5843-4e01-bb7e-5766d2257bc0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Received event network-vif-unplugged-c080e6e6-5887-437f-958e-1e981cc49f14 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.257 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:34.262 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3eaea8d1-f12d-4ab1-9d9f-98054f45a50a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:34.284 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8d784e-2b76-4806-a689-2cc49e96975e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:34.285 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[5d70b70b-0c1e-4c04-9aff-ce46beced12c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:34.299 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7ee403-bef0-463f-a5fa-e2bf6d195046]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 968857, 'reachable_time': 27538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315268, 'error': None, 'target': 'ovnmeta-189eab69-7772-4260-9abd-06a6f9690645', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:34 compute-1 systemd[1]: run-netns-ovnmeta\x2d189eab69\x2d7772\x2d4260\x2d9abd\x2d06a6f9690645.mount: Deactivated successfully.
Dec 06 08:24:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:34.302 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-189eab69-7772-4260-9abd-06a6f9690645 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:24:34 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:24:34.303 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[5cab6b29-29d6-496d-b7fe-dd1266149c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:34.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:34.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.531 226109 INFO nova.virt.libvirt.driver [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Deleting instance files /var/lib/nova/instances/1cd6bfb7-50f5-47c3-ab52-d36bc3401354_del
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.532 226109 INFO nova.virt.libvirt.driver [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Deletion of /var/lib/nova/instances/1cd6bfb7-50f5-47c3-ab52-d36bc3401354_del complete
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.599 226109 INFO nova.compute.manager [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Took 0.70 seconds to destroy the instance on the hypervisor.
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.599 226109 DEBUG oslo.service.loopingcall [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.599 226109 DEBUG nova.compute.manager [-] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:24:34 compute-1 nova_compute[226101]: 2025-12-06 08:24:34.600 226109 DEBUG nova.network.neutron [-] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:24:35 compute-1 ceph-mon[81689]: pgmap v3970: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 24 KiB/s wr, 30 op/s
Dec 06 08:24:35 compute-1 nova_compute[226101]: 2025-12-06 08:24:35.445 226109 DEBUG nova.network.neutron [req-e92f0f4a-0e36-48b8-a984-888298626b70 req-283d4197-4ed0-4d44-886b-828692d21b6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Updated VIF entry in instance network info cache for port c080e6e6-5887-437f-958e-1e981cc49f14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:24:35 compute-1 nova_compute[226101]: 2025-12-06 08:24:35.446 226109 DEBUG nova.network.neutron [req-e92f0f4a-0e36-48b8-a984-888298626b70 req-283d4197-4ed0-4d44-886b-828692d21b6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Updating instance_info_cache with network_info: [{"id": "c080e6e6-5887-437f-958e-1e981cc49f14", "address": "fa:16:3e:68:f5:42", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc080e6e6-58", "ovs_interfaceid": "c080e6e6-5887-437f-958e-1e981cc49f14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:24:35 compute-1 nova_compute[226101]: 2025-12-06 08:24:35.637 226109 DEBUG oslo_concurrency.lockutils [req-e92f0f4a-0e36-48b8-a984-888298626b70 req-283d4197-4ed0-4d44-886b-828692d21b6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-1cd6bfb7-50f5-47c3-ab52-d36bc3401354" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:24:35 compute-1 nova_compute[226101]: 2025-12-06 08:24:35.644 226109 DEBUG nova.network.neutron [-] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:24:35 compute-1 nova_compute[226101]: 2025-12-06 08:24:35.665 226109 INFO nova.compute.manager [-] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Took 1.07 seconds to deallocate network for instance.
Dec 06 08:24:35 compute-1 nova_compute[226101]: 2025-12-06 08:24:35.808 226109 DEBUG oslo_concurrency.lockutils [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:35 compute-1 nova_compute[226101]: 2025-12-06 08:24:35.808 226109 DEBUG oslo_concurrency.lockutils [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:35 compute-1 nova_compute[226101]: 2025-12-06 08:24:35.858 226109 DEBUG oslo_concurrency.processutils [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.085 226109 DEBUG nova.compute.manager [req-7b7c4365-992c-4e68-b3f4-520ee8196813 req-58e7c8c5-4dd1-4cf2-a216-08005be76148 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Received event network-vif-deleted-c080e6e6-5887-437f-958e-1e981cc49f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:24:36 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:24:36 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2290888888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.307 226109 DEBUG oslo_concurrency.processutils [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.315 226109 DEBUG nova.compute.provider_tree [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.335 226109 DEBUG nova.scheduler.client.report [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.343 226109 DEBUG nova.compute.manager [req-abca23f0-046e-4616-aab7-feca5419b404 req-bc873b3a-9873-426b-8953-c69b22260fc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Received event network-vif-plugged-c080e6e6-5887-437f-958e-1e981cc49f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.344 226109 DEBUG oslo_concurrency.lockutils [req-abca23f0-046e-4616-aab7-feca5419b404 req-bc873b3a-9873-426b-8953-c69b22260fc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.344 226109 DEBUG oslo_concurrency.lockutils [req-abca23f0-046e-4616-aab7-feca5419b404 req-bc873b3a-9873-426b-8953-c69b22260fc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.344 226109 DEBUG oslo_concurrency.lockutils [req-abca23f0-046e-4616-aab7-feca5419b404 req-bc873b3a-9873-426b-8953-c69b22260fc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.345 226109 DEBUG nova.compute.manager [req-abca23f0-046e-4616-aab7-feca5419b404 req-bc873b3a-9873-426b-8953-c69b22260fc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] No waiting events found dispatching network-vif-plugged-c080e6e6-5887-437f-958e-1e981cc49f14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.345 226109 WARNING nova.compute.manager [req-abca23f0-046e-4616-aab7-feca5419b404 req-bc873b3a-9873-426b-8953-c69b22260fc8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Received unexpected event network-vif-plugged-c080e6e6-5887-437f-958e-1e981cc49f14 for instance with vm_state deleted and task_state None.
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.360 226109 DEBUG oslo_concurrency.lockutils [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.384 226109 INFO nova.scheduler.client.report [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Deleted allocations for instance 1cd6bfb7-50f5-47c3-ab52-d36bc3401354
Dec 06 08:24:36 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2290888888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:36 compute-1 nova_compute[226101]: 2025-12-06 08:24:36.473 226109 DEBUG oslo_concurrency.lockutils [None req-61494f1a-c05d-44c8-8159-0d75455eb17f 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "1cd6bfb7-50f5-47c3-ab52-d36bc3401354" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:36.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:36.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:37 compute-1 ceph-mon[81689]: pgmap v3971: 305 pgs: 305 active+clean; 147 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 25 KiB/s wr, 54 op/s
Dec 06 08:24:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:38.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:38.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:38 compute-1 nova_compute[226101]: 2025-12-06 08:24:38.580 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:38 compute-1 nova_compute[226101]: 2025-12-06 08:24:38.594 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:39 compute-1 nova_compute[226101]: 2025-12-06 08:24:39.158 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:39 compute-1 ceph-mon[81689]: pgmap v3972: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Dec 06 08:24:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:40.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:40.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:41 compute-1 sshd-session[315296]: Received disconnect from 186.96.151.198 port 54602:11: Bye Bye [preauth]
Dec 06 08:24:41 compute-1 sshd-session[315296]: Disconnected from authenticating user root 186.96.151.198 port 54602 [preauth]
Dec 06 08:24:41 compute-1 sshd-session[315293]: Received disconnect from 154.209.4.183 port 50928:11: Bye Bye [preauth]
Dec 06 08:24:41 compute-1 sshd-session[315293]: Disconnected from authenticating user root 154.209.4.183 port 50928 [preauth]
Dec 06 08:24:41 compute-1 ceph-mon[81689]: pgmap v3973: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 55 op/s
Dec 06 08:24:42 compute-1 sshd-session[315295]: Received disconnect from 136.112.8.45 port 47394:11: Bye Bye [preauth]
Dec 06 08:24:42 compute-1 sshd-session[315295]: Disconnected from authenticating user root 136.112.8.45 port 47394 [preauth]
Dec 06 08:24:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:24:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:42.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:42 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:42.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:42 compute-1 nova_compute[226101]: 2025-12-06 08:24:42.677 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:42 compute-1 sshd-session[315299]: Received disconnect from 165.154.55.146 port 49706:11: Bye Bye [preauth]
Dec 06 08:24:42 compute-1 sshd-session[315299]: Disconnected from authenticating user root 165.154.55.146 port 49706 [preauth]
Dec 06 08:24:42 compute-1 nova_compute[226101]: 2025-12-06 08:24:42.788 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:43 compute-1 ceph-mon[81689]: pgmap v3974: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 55 op/s
Dec 06 08:24:43 compute-1 nova_compute[226101]: 2025-12-06 08:24:43.582 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:44 compute-1 nova_compute[226101]: 2025-12-06 08:24:44.160 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:24:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:44.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:44 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:44.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:44 compute-1 nova_compute[226101]: 2025-12-06 08:24:44.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:44 compute-1 nova_compute[226101]: 2025-12-06 08:24:44.611 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:44 compute-1 nova_compute[226101]: 2025-12-06 08:24:44.611 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:44 compute-1 nova_compute[226101]: 2025-12-06 08:24:44.611 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:44 compute-1 nova_compute[226101]: 2025-12-06 08:24:44.612 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:24:44 compute-1 nova_compute[226101]: 2025-12-06 08:24:44.612 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:24:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:24:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/142033772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.059 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.224 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.225 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4280MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.226 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.226 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.305 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.305 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.318 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:24:45 compute-1 ceph-mon[81689]: pgmap v3975: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:24:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/142033772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:45 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:24:45 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/474660219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.715 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.720 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.868 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.912 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:24:45 compute-1 nova_compute[226101]: 2025-12-06 08:24:45.913 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:46.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:46.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/474660219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:47 compute-1 ceph-mon[81689]: pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:24:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:48.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:48.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:48 compute-1 nova_compute[226101]: 2025-12-06 08:24:48.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:48 compute-1 nova_compute[226101]: 2025-12-06 08:24:48.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:24:48 compute-1 nova_compute[226101]: 2025-12-06 08:24:48.623 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:49 compute-1 nova_compute[226101]: 2025-12-06 08:24:49.131 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009474.1305442, 1cd6bfb7-50f5-47c3-ab52-d36bc3401354 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:24:49 compute-1 nova_compute[226101]: 2025-12-06 08:24:49.132 226109 INFO nova.compute.manager [-] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] VM Stopped (Lifecycle Event)
Dec 06 08:24:49 compute-1 nova_compute[226101]: 2025-12-06 08:24:49.162 226109 DEBUG nova.compute.manager [None req-084edb62-011d-4d5c-bce5-afd3b4e1401b - - - - - -] [instance: 1cd6bfb7-50f5-47c3-ab52-d36bc3401354] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:24:49 compute-1 nova_compute[226101]: 2025-12-06 08:24:49.163 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:49 compute-1 ceph-mon[81689]: pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 3 op/s
Dec 06 08:24:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:24:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:50 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:50.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:50.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:51 compute-1 ceph-mon[81689]: pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:52 compute-1 sudo[315349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:52 compute-1 sudo[315349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:52 compute-1 sudo[315349]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:52 compute-1 sudo[315374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:24:52 compute-1 sudo[315374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:52 compute-1 sudo[315374]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:52 compute-1 sudo[315399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:52 compute-1 sudo[315399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:52 compute-1 sudo[315399]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:52 compute-1 sudo[315424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:24:52 compute-1 sudo[315424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:52.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:52.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:52 compute-1 sudo[315424]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:52 compute-1 sshd-session[315347]: Received disconnect from 154.219.116.39 port 57054:11: Bye Bye [preauth]
Dec 06 08:24:52 compute-1 sshd-session[315347]: Disconnected from authenticating user root 154.219.116.39 port 57054 [preauth]
Dec 06 08:24:53 compute-1 ceph-mon[81689]: pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:24:53 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:24:53 compute-1 nova_compute[226101]: 2025-12-06 08:24:53.626 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:54 compute-1 nova_compute[226101]: 2025-12-06 08:24:54.212 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:54.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:54.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:54 compute-1 nova_compute[226101]: 2025-12-06 08:24:54.605 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:54 compute-1 nova_compute[226101]: 2025-12-06 08:24:54.605 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:24:54 compute-1 nova_compute[226101]: 2025-12-06 08:24:54.605 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:24:54 compute-1 nova_compute[226101]: 2025-12-06 08:24:54.640 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:24:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:54 compute-1 ceph-mon[81689]: pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:24:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:24:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:24:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:24:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:24:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:56.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:56.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:57 compute-1 ceph-mon[81689]: pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:58 compute-1 podman[315481]: 2025-12-06 08:24:58.068549847 +0000 UTC m=+0.055159120 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 06 08:24:58 compute-1 podman[315480]: 2025-12-06 08:24:58.080176889 +0000 UTC m=+0.067029378 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 08:24:58 compute-1 podman[315482]: 2025-12-06 08:24:58.104333066 +0000 UTC m=+0.087859656 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:24:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:58.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:24:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:58.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:58 compute-1 nova_compute[226101]: 2025-12-06 08:24:58.628 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:59 compute-1 ceph-mon[81689]: pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:59 compute-1 nova_compute[226101]: 2025-12-06 08:24:59.261 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.796148) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499796200, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 2133, "num_deletes": 252, "total_data_size": 5081900, "memory_usage": 5151576, "flush_reason": "Manual Compaction"}
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499817523, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 1984809, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85494, "largest_seqno": 87622, "table_properties": {"data_size": 1978402, "index_size": 3288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17032, "raw_average_key_size": 21, "raw_value_size": 1964130, "raw_average_value_size": 2439, "num_data_blocks": 146, "num_entries": 805, "num_filter_entries": 805, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009311, "oldest_key_time": 1765009311, "file_creation_time": 1765009499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 21419 microseconds, and 5253 cpu microseconds.
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.817570) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 1984809 bytes OK
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.817594) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.830478) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.830494) EVENT_LOG_v1 {"time_micros": 1765009499830488, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.830513) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 5072489, prev total WAL file size 5072489, number of live WAL files 2.
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.832094) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303238' seq:72057594037927935, type:22 .. '6D6772737461740033323830' seq:0, type:0; will stop at (end)
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(1938KB)], [174(12MB)]
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499832157, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 15429712, "oldest_snapshot_seqno": -1}
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 11650 keys, 12896535 bytes, temperature: kUnknown
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499931232, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 12896535, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12825019, "index_size": 41259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29189, "raw_key_size": 306560, "raw_average_key_size": 26, "raw_value_size": 12625111, "raw_average_value_size": 1083, "num_data_blocks": 1561, "num_entries": 11650, "num_filter_entries": 11650, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765009499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.931533) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 12896535 bytes
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.932888) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.6 rd, 130.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(14.3) write-amplify(6.5) OK, records in: 12087, records dropped: 437 output_compression: NoCompression
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.932902) EVENT_LOG_v1 {"time_micros": 1765009499932895, "job": 112, "event": "compaction_finished", "compaction_time_micros": 99158, "compaction_time_cpu_micros": 36651, "output_level": 6, "num_output_files": 1, "total_output_size": 12896535, "num_input_records": 12087, "num_output_records": 11650, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499933426, "job": 112, "event": "table_file_deletion", "file_number": 176}
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499936066, "job": 112, "event": "table_file_deletion", "file_number": 174}
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.832011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.936256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.936268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.936270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.936271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:24:59 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:24:59.936272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:00.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:00.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:00 compute-1 nova_compute[226101]: 2025-12-06 08:25:00.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:00 compute-1 nova_compute[226101]: 2025-12-06 08:25:00.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:00 compute-1 sshd-session[315539]: Received disconnect from 101.100.194.199 port 56138:11: Bye Bye [preauth]
Dec 06 08:25:00 compute-1 sshd-session[315539]: Disconnected from authenticating user root 101.100.194.199 port 56138 [preauth]
Dec 06 08:25:00 compute-1 sudo[315543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:25:00 compute-1 sudo[315543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:25:00 compute-1 sudo[315543]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:00 compute-1 sudo[315568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:25:00 compute-1 sudo[315568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:25:00 compute-1 sudo[315568]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:01 compute-1 ceph-mon[81689]: pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:25:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:25:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:01.703 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:01.703 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:01.703 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:01 compute-1 sshd-session[315541]: Received disconnect from 45.120.216.232 port 56584:11: Bye Bye [preauth]
Dec 06 08:25:01 compute-1 sshd-session[315541]: Disconnected from authenticating user root 45.120.216.232 port 56584 [preauth]
Dec 06 08:25:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:25:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:02 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:02.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:02.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3681230716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:03 compute-1 sshd-session[315593]: Received disconnect from 124.18.141.70 port 38608:11: Bye Bye [preauth]
Dec 06 08:25:03 compute-1 sshd-session[315593]: Disconnected from authenticating user root 124.18.141.70 port 38608 [preauth]
Dec 06 08:25:03 compute-1 nova_compute[226101]: 2025-12-06 08:25:03.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:03 compute-1 nova_compute[226101]: 2025-12-06 08:25:03.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:25:03 compute-1 nova_compute[226101]: 2025-12-06 08:25:03.681 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:03 compute-1 ceph-mon[81689]: pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/991267142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:03 compute-1 nova_compute[226101]: 2025-12-06 08:25:03.946 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:03 compute-1 nova_compute[226101]: 2025-12-06 08:25:03.946 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:03 compute-1 nova_compute[226101]: 2025-12-06 08:25:03.960 226109 DEBUG nova.compute.manager [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.030 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.030 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.038 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.040 226109 INFO nova.compute.claims [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Claim successful on node compute-1.ctlplane.example.com
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.174 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.263 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:25:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:04 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:25:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2341981030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.646 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.653 226109 DEBUG nova.compute.provider_tree [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.683 226109 DEBUG nova.scheduler.client.report [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.705 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.705 226109 DEBUG nova.compute.manager [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.750 226109 DEBUG nova.compute.manager [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.750 226109 DEBUG nova.network.neutron [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.774 226109 INFO nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.794 226109 DEBUG nova.compute.manager [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:25:04 compute-1 ceph-mon[81689]: pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2341981030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.869 226109 DEBUG nova.compute.manager [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.871 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.871 226109 INFO nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Creating image(s)
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.907 226109 DEBUG nova.storage.rbd_utils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] rbd image 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.933 226109 DEBUG nova.storage.rbd_utils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] rbd image 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.960 226109 DEBUG nova.storage.rbd_utils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] rbd image 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.964 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:04 compute-1 nova_compute[226101]: 2025-12-06 08:25:04.995 226109 DEBUG nova.policy [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ebd2983853994f2684b4bba5a9593dd0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd82b0e57c7a246a7abb3cc909b561513', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.030 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.031 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.032 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.032 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.056 226109 DEBUG nova.storage.rbd_utils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] rbd image 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.060 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.325 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.400 226109 DEBUG nova.storage.rbd_utils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] resizing rbd image 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.490 226109 DEBUG nova.objects.instance [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lazy-loading 'migration_context' on Instance uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.505 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.505 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Ensure instance console log exists: /var/lib/nova/instances/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.506 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.506 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.506 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:05 compute-1 nova_compute[226101]: 2025-12-06 08:25:05.624 226109 DEBUG nova.network.neutron [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Successfully created port: b1839354-6d4c-4442-a3d9-c8bacfc650d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:25:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3901546734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:06.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:25:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:06 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:06.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:06 compute-1 ceph-mon[81689]: pgmap v3986: 305 pgs: 305 active+clean; 125 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 KiB/s rd, 45 KiB/s wr, 2 op/s
Dec 06 08:25:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3298858900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:07 compute-1 nova_compute[226101]: 2025-12-06 08:25:07.293 226109 DEBUG nova.network.neutron [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Successfully updated port: b1839354-6d4c-4442-a3d9-c8bacfc650d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:25:07 compute-1 nova_compute[226101]: 2025-12-06 08:25:07.337 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquiring lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:25:07 compute-1 nova_compute[226101]: 2025-12-06 08:25:07.337 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquired lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:25:07 compute-1 nova_compute[226101]: 2025-12-06 08:25:07.337 226109 DEBUG nova.network.neutron [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:25:07 compute-1 nova_compute[226101]: 2025-12-06 08:25:07.462 226109 DEBUG nova.compute.manager [req-d1d79a53-148d-415a-a9eb-bfef9a13e19e req-950eb1a5-96bf-45ec-b669-f10fb5011749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-changed-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:07 compute-1 nova_compute[226101]: 2025-12-06 08:25:07.463 226109 DEBUG nova.compute.manager [req-d1d79a53-148d-415a-a9eb-bfef9a13e19e req-950eb1a5-96bf-45ec-b669-f10fb5011749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Refreshing instance network info cache due to event network-changed-b1839354-6d4c-4442-a3d9-c8bacfc650d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:25:07 compute-1 nova_compute[226101]: 2025-12-06 08:25:07.463 226109 DEBUG oslo_concurrency.lockutils [req-d1d79a53-148d-415a-a9eb-bfef9a13e19e req-950eb1a5-96bf-45ec-b669-f10fb5011749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:25:07 compute-1 nova_compute[226101]: 2025-12-06 08:25:07.606 226109 DEBUG nova.network.neutron [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:25:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.253 226109 DEBUG nova.network.neutron [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Updating instance_info_cache with network_info: [{"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.423 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Releasing lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.423 226109 DEBUG nova.compute.manager [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Instance network_info: |[{"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.424 226109 DEBUG oslo_concurrency.lockutils [req-d1d79a53-148d-415a-a9eb-bfef9a13e19e req-950eb1a5-96bf-45ec-b669-f10fb5011749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.424 226109 DEBUG nova.network.neutron [req-d1d79a53-148d-415a-a9eb-bfef9a13e19e req-950eb1a5-96bf-45ec-b669-f10fb5011749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Refreshing network info cache for port b1839354-6d4c-4442-a3d9-c8bacfc650d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.426 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Start _get_guest_xml network_info=[{"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.430 226109 WARNING nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.445 226109 DEBUG nova.virt.libvirt.host [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.446 226109 DEBUG nova.virt.libvirt.host [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.453 226109 DEBUG nova.virt.libvirt.host [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.453 226109 DEBUG nova.virt.libvirt.host [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.454 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.455 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.455 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.455 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.456 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.456 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.456 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.456 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.456 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.457 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.457 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.457 226109 DEBUG nova.virt.hardware [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.459 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:08.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:08.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:08 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.684 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:25:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3461588636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:08.999 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.027 226109 DEBUG nova.storage.rbd_utils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] rbd image 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.032 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:25:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3417346296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:25:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:25:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3417346296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:25:09 compute-1 ceph-mon[81689]: pgmap v3987: 305 pgs: 305 active+clean; 144 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 539 KiB/s wr, 25 op/s
Dec 06 08:25:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3461588636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:25:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3417346296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:25:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3417346296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.299 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:25:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/241042467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.549 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.551 226109 DEBUG nova.virt.libvirt.vif [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:25:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1729091669',display_name='tempest-TestServerAdvancedOps-server-1729091669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1729091669',id=220,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d82b0e57c7a246a7abb3cc909b561513',ramdisk_id='',reservation_id='r-hbvamvwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-950012428',owner_user_name='tempest-TestServerAdvancedOps-950012428-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:25:04Z,user_data=None,user_id='ebd2983853994f2684b4bba5a9593dd0',uuid=72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.551 226109 DEBUG nova.network.os_vif_util [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Converting VIF {"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.552 226109 DEBUG nova.network.os_vif_util [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.553 226109 DEBUG nova.objects.instance [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lazy-loading 'pci_devices' on Instance uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.570 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:25:09 compute-1 nova_compute[226101]:   <uuid>72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3</uuid>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   <name>instance-000000dc</name>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <nova:name>tempest-TestServerAdvancedOps-server-1729091669</nova:name>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:25:08</nova:creationTime>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <nova:user uuid="ebd2983853994f2684b4bba5a9593dd0">tempest-TestServerAdvancedOps-950012428-project-member</nova:user>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <nova:project uuid="d82b0e57c7a246a7abb3cc909b561513">tempest-TestServerAdvancedOps-950012428</nova:project>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <nova:port uuid="b1839354-6d4c-4442-a3d9-c8bacfc650d3">
Dec 06 08:25:09 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <system>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <entry name="serial">72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3</entry>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <entry name="uuid">72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3</entry>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     </system>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   <os>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   </os>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   <features>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   </features>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk">
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       </source>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk.config">
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       </source>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:25:09 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:cd:2a:d6"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <target dev="tapb1839354-6d"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3/console.log" append="off"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <video>
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     </video>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:25:09 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:25:09 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:25:09 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:25:09 compute-1 nova_compute[226101]: </domain>
Dec 06 08:25:09 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.571 226109 DEBUG nova.compute.manager [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Preparing to wait for external event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.571 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.571 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.572 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.572 226109 DEBUG nova.virt.libvirt.vif [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:25:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1729091669',display_name='tempest-TestServerAdvancedOps-server-1729091669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1729091669',id=220,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d82b0e57c7a246a7abb3cc909b561513',ramdisk_id='',reservation_id='r-hbvamvwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-950012428',owner_user_name='tempest-TestServerAdvancedOps-950012428-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:25:04Z,user_data=None,user_id='ebd2983853994f2684b4bba5a9593dd0',uuid=72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.573 226109 DEBUG nova.network.os_vif_util [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Converting VIF {"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.573 226109 DEBUG nova.network.os_vif_util [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.573 226109 DEBUG os_vif [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.574 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.574 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.575 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.578 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.578 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1839354-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.578 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1839354-6d, col_values=(('external_ids', {'iface-id': 'b1839354-6d4c-4442-a3d9-c8bacfc650d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:2a:d6', 'vm-uuid': '72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.580 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:09 compute-1 NetworkManager[49031]: <info>  [1765009509.5815] manager: (tapb1839354-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.582 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.587 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.588 226109 INFO os_vif [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d')
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.801814) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509801837, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 389, "num_deletes": 259, "total_data_size": 352256, "memory_usage": 361104, "flush_reason": "Manual Compaction"}
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509805665, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 232373, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87627, "largest_seqno": 88011, "table_properties": {"data_size": 230045, "index_size": 427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5756, "raw_average_key_size": 18, "raw_value_size": 225311, "raw_average_value_size": 713, "num_data_blocks": 18, "num_entries": 316, "num_filter_entries": 316, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009500, "oldest_key_time": 1765009500, "file_creation_time": 1765009509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 3897 microseconds, and 996 cpu microseconds.
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.805707) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 232373 bytes OK
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.805725) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.808169) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.808192) EVENT_LOG_v1 {"time_micros": 1765009509808186, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.808218) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 349658, prev total WAL file size 349658, number of live WAL files 2.
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.808862) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323830' seq:72057594037927935, type:22 .. '6C6F676D0033353335' seq:0, type:0; will stop at (end)
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(226KB)], [177(12MB)]
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509808922, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 13128908, "oldest_snapshot_seqno": -1}
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 11436 keys, 13004435 bytes, temperature: kUnknown
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509885726, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 13004435, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12933699, "index_size": 41055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28613, "raw_key_size": 303086, "raw_average_key_size": 26, "raw_value_size": 12736787, "raw_average_value_size": 1113, "num_data_blocks": 1550, "num_entries": 11436, "num_filter_entries": 11436, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765009509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.886049) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13004435 bytes
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.888895) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.7 rd, 169.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.3 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(112.5) write-amplify(56.0) OK, records in: 11966, records dropped: 530 output_compression: NoCompression
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.888927) EVENT_LOG_v1 {"time_micros": 1765009509888914, "job": 114, "event": "compaction_finished", "compaction_time_micros": 76893, "compaction_time_cpu_micros": 29701, "output_level": 6, "num_output_files": 1, "total_output_size": 13004435, "num_input_records": 11966, "num_output_records": 11436, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509889161, "job": 114, "event": "table_file_deletion", "file_number": 179}
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509893445, "job": 114, "event": "table_file_deletion", "file_number": 177}
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.808731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.893529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.893537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.893540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.893544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:09 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:09.893548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:09 compute-1 nova_compute[226101]: 2025-12-06 08:25:09.925 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:10 compute-1 nova_compute[226101]: 2025-12-06 08:25:10.467 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Triggering sync for uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 08:25:10 compute-1 nova_compute[226101]: 2025-12-06 08:25:10.468 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/241042467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:25:10 compute-1 nova_compute[226101]: 2025-12-06 08:25:10.538 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:25:10 compute-1 nova_compute[226101]: 2025-12-06 08:25:10.538 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:25:10 compute-1 nova_compute[226101]: 2025-12-06 08:25:10.538 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] No VIF found with MAC fa:16:3e:cd:2a:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:25:10 compute-1 nova_compute[226101]: 2025-12-06 08:25:10.539 226109 INFO nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Using config drive
Dec 06 08:25:10 compute-1 nova_compute[226101]: 2025-12-06 08:25:10.563 226109 DEBUG nova.storage.rbd_utils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] rbd image 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:25:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:10.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:10.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:11 compute-1 ceph-mon[81689]: pgmap v3988: 305 pgs: 305 active+clean; 144 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 539 KiB/s wr, 25 op/s
Dec 06 08:25:12 compute-1 nova_compute[226101]: 2025-12-06 08:25:12.132 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:12 compute-1 nova_compute[226101]: 2025-12-06 08:25:12.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:12.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:25:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:12 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:12.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:12 compute-1 nova_compute[226101]: 2025-12-06 08:25:12.642 226109 INFO nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Creating config drive at /var/lib/nova/instances/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3/disk.config
Dec 06 08:25:12 compute-1 nova_compute[226101]: 2025-12-06 08:25:12.647 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmmd761t5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:12 compute-1 nova_compute[226101]: 2025-12-06 08:25:12.779 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmmd761t5" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:12 compute-1 nova_compute[226101]: 2025-12-06 08:25:12.809 226109 DEBUG nova.storage.rbd_utils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] rbd image 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:25:12 compute-1 nova_compute[226101]: 2025-12-06 08:25:12.812 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3/disk.config 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:12 compute-1 nova_compute[226101]: 2025-12-06 08:25:12.982 226109 DEBUG oslo_concurrency.processutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3/disk.config 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:12 compute-1 nova_compute[226101]: 2025-12-06 08:25:12.983 226109 INFO nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Deleting local config drive /var/lib/nova/instances/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3/disk.config because it was imported into RBD.
Dec 06 08:25:13 compute-1 kernel: tapb1839354-6d: entered promiscuous mode
Dec 06 08:25:13 compute-1 NetworkManager[49031]: <info>  [1765009513.0440] manager: (tapb1839354-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/408)
Dec 06 08:25:13 compute-1 ovn_controller[130279]: 2025-12-06T08:25:13Z|00870|binding|INFO|Claiming lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 for this chassis.
Dec 06 08:25:13 compute-1 ovn_controller[130279]: 2025-12-06T08:25:13Z|00871|binding|INFO|b1839354-6d4c-4442-a3d9-c8bacfc650d3: Claiming fa:16:3e:cd:2a:d6 10.100.0.12
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.043 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.048 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:13.067 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:2a:d6 10.100.0.12'], port_security=['fa:16:3e:cd:2a:d6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36f50448-aea8-4f70-a635-f51c905b14cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd82b0e57c7a246a7abb3cc909b561513', 'neutron:revision_number': '2', 'neutron:security_group_ids': '304e9f17-cc9f-41ef-bb37-605ef52c4a7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b764814a-f207-4922-b616-1ca7bf2e289f, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b1839354-6d4c-4442-a3d9-c8bacfc650d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:25:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:13.069 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b1839354-6d4c-4442-a3d9-c8bacfc650d3 in datapath 36f50448-aea8-4f70-a635-f51c905b14cb bound to our chassis
Dec 06 08:25:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:13.071 139580 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36f50448-aea8-4f70-a635-f51c905b14cb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 08:25:13 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:13.073 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[340789e5-a69b-474b-909a-288aa47df7a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:25:13 compute-1 systemd-udevd[315917]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:25:13 compute-1 systemd-machined[190302]: New machine qemu-99-instance-000000dc.
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.083 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:13 compute-1 ovn_controller[130279]: 2025-12-06T08:25:13Z|00872|binding|INFO|Setting lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 ovn-installed in OVS
Dec 06 08:25:13 compute-1 ovn_controller[130279]: 2025-12-06T08:25:13Z|00873|binding|INFO|Setting lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 up in Southbound
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.089 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:13 compute-1 NetworkManager[49031]: <info>  [1765009513.0924] device (tapb1839354-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:25:13 compute-1 NetworkManager[49031]: <info>  [1765009513.0935] device (tapb1839354-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:25:13 compute-1 systemd[1]: Started Virtual Machine qemu-99-instance-000000dc.
Dec 06 08:25:13 compute-1 ceph-mon[81689]: pgmap v3989: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.630 226109 DEBUG nova.network.neutron [req-d1d79a53-148d-415a-a9eb-bfef9a13e19e req-950eb1a5-96bf-45ec-b669-f10fb5011749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Updated VIF entry in instance network info cache for port b1839354-6d4c-4442-a3d9-c8bacfc650d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.631 226109 DEBUG nova.network.neutron [req-d1d79a53-148d-415a-a9eb-bfef9a13e19e req-950eb1a5-96bf-45ec-b669-f10fb5011749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Updating instance_info_cache with network_info: [{"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.649 226109 DEBUG oslo_concurrency.lockutils [req-d1d79a53-148d-415a-a9eb-bfef9a13e19e req-950eb1a5-96bf-45ec-b669-f10fb5011749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.728 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.765 226109 DEBUG nova.compute.manager [req-eb93221e-06b3-4bc8-a079-88b19f809397 req-213ca346-6a1d-470b-ba78-5fd25aaad673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.765 226109 DEBUG oslo_concurrency.lockutils [req-eb93221e-06b3-4bc8-a079-88b19f809397 req-213ca346-6a1d-470b-ba78-5fd25aaad673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.766 226109 DEBUG oslo_concurrency.lockutils [req-eb93221e-06b3-4bc8-a079-88b19f809397 req-213ca346-6a1d-470b-ba78-5fd25aaad673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.766 226109 DEBUG oslo_concurrency.lockutils [req-eb93221e-06b3-4bc8-a079-88b19f809397 req-213ca346-6a1d-470b-ba78-5fd25aaad673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.766 226109 DEBUG nova.compute.manager [req-eb93221e-06b3-4bc8-a079-88b19f809397 req-213ca346-6a1d-470b-ba78-5fd25aaad673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Processing event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.788 226109 DEBUG nova.compute.manager [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.789 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009513.7885096, 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.790 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] VM Started (Lifecycle Event)
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.792 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.795 226109 INFO nova.virt.libvirt.driver [-] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Instance spawned successfully.
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.795 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.812 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.817 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.820 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.820 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.820 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.821 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.821 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.822 226109 DEBUG nova.virt.libvirt.driver [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.852 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.852 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009513.7893636, 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.853 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] VM Paused (Lifecycle Event)
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.879 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.882 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009513.7919285, 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.883 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] VM Resumed (Lifecycle Event)
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.889 226109 INFO nova.compute.manager [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Took 9.02 seconds to spawn the instance on the hypervisor.
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.889 226109 DEBUG nova.compute.manager [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.897 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.900 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.919 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.941 226109 INFO nova.compute.manager [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Took 9.94 seconds to build instance.
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.959 226109 DEBUG oslo_concurrency.lockutils [None req-a91caec3-d4a1-4c1e-a1be-6abb9399c25e ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.960 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.960 226109 INFO nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:25:13 compute-1 nova_compute[226101]: 2025-12-06 08:25:13.960 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:14 compute-1 nova_compute[226101]: 2025-12-06 08:25:14.580 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:14.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:25:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:14 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:14.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:15 compute-1 ceph-mon[81689]: pgmap v3990: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:25:15 compute-1 nova_compute[226101]: 2025-12-06 08:25:15.866 226109 DEBUG nova.compute.manager [req-73cc80a3-e03a-4fec-b921-fa13368c347b req-279f8a4d-751b-49c3-97c8-6bb15361dda1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:15 compute-1 nova_compute[226101]: 2025-12-06 08:25:15.867 226109 DEBUG oslo_concurrency.lockutils [req-73cc80a3-e03a-4fec-b921-fa13368c347b req-279f8a4d-751b-49c3-97c8-6bb15361dda1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:15 compute-1 nova_compute[226101]: 2025-12-06 08:25:15.868 226109 DEBUG oslo_concurrency.lockutils [req-73cc80a3-e03a-4fec-b921-fa13368c347b req-279f8a4d-751b-49c3-97c8-6bb15361dda1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:15 compute-1 nova_compute[226101]: 2025-12-06 08:25:15.868 226109 DEBUG oslo_concurrency.lockutils [req-73cc80a3-e03a-4fec-b921-fa13368c347b req-279f8a4d-751b-49c3-97c8-6bb15361dda1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:15 compute-1 nova_compute[226101]: 2025-12-06 08:25:15.869 226109 DEBUG nova.compute.manager [req-73cc80a3-e03a-4fec-b921-fa13368c347b req-279f8a4d-751b-49c3-97c8-6bb15361dda1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] No waiting events found dispatching network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:25:15 compute-1 nova_compute[226101]: 2025-12-06 08:25:15.869 226109 WARNING nova.compute.manager [req-73cc80a3-e03a-4fec-b921-fa13368c347b req-279f8a4d-751b-49c3-97c8-6bb15361dda1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received unexpected event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 for instance with vm_state active and task_state None.
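This network-vif-plugged / "unexpected event" pair is Neutron notifying Nova through the os-server-external-events API after the port came up; because the build had already finished (vm_state active, task_state None), no waiter was registered, so the event is logged as unexpected and dropped. A hedged sketch of the call Neutron's notifier makes (endpoint and token are placeholders):

    import requests

    # Placeholder endpoint/token; mirrors the event received above.
    resp = requests.post(
        'http://nova-api.example.com/v2.1/os-server-external-events',
        headers={'X-Auth-Token': 'TOKEN'},
        json={'events': [{
            'server_uuid': '72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3',
            'name': 'network-vif-plugged',
            'tag': 'b1839354-6d4c-4442-a3d9-c8bacfc650d3',
            'status': 'completed',
        }]})
    resp.raise_for_status()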
Dec 06 08:25:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:16.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:16.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.084 226109 DEBUG nova.objects.instance [None req-3d119af1-922f-4a5b-897c-3ff34021ab2b ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lazy-loading 'pci_devices' on Instance uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.156 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009517.156387, 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.156 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] VM Paused (Lifecycle Event)
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.183 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.187 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.205 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] During sync_power_state the instance has a pending task (suspending). Skip.
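The integers in the "Synchronizing instance power state" line are nova.compute.power_state constants: DB power_state 1 is RUNNING while the hypervisor reports 3, PAUSED, i.e. libvirt has already paused the guest for the suspend while the database still shows it running. Decoded:

    # Constants from nova.compute.power_state (stable upstream values).
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    db_state, vm_state = 1, 3  # values from the log line above
    print(POWER_STATE[db_state], '->', POWER_STATE[vm_state])  # RUNNING -> PAUSED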
Dec 06 08:25:17 compute-1 ceph-mon[81689]: pgmap v3991: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Dec 06 08:25:17 compute-1 kernel: tapb1839354-6d (unregistering): left promiscuous mode
Dec 06 08:25:17 compute-1 NetworkManager[49031]: <info>  [1765009517.7916] device (tapb1839354-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:25:17 compute-1 ovn_controller[130279]: 2025-12-06T08:25:17Z|00874|binding|INFO|Releasing lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 from this chassis (sb_readonly=0)
Dec 06 08:25:17 compute-1 ovn_controller[130279]: 2025-12-06T08:25:17Z|00875|binding|INFO|Setting lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 down in Southbound
Dec 06 08:25:17 compute-1 ovn_controller[130279]: 2025-12-06T08:25:17Z|00876|binding|INFO|Removing iface tapb1839354-6d ovn-installed in OVS
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.802 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.803 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:17.809 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:2a:d6 10.100.0.12'], port_security=['fa:16:3e:cd:2a:d6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36f50448-aea8-4f70-a635-f51c905b14cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd82b0e57c7a246a7abb3cc909b561513', 'neutron:revision_number': '4', 'neutron:security_group_ids': '304e9f17-cc9f-41ef-bb37-605ef52c4a7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b764814a-f207-4922-b616-1ca7bf2e289f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b1839354-6d4c-4442-a3d9-c8bacfc650d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:25:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:17.810 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b1839354-6d4c-4442-a3d9-c8bacfc650d3 in datapath 36f50448-aea8-4f70-a635-f51c905b14cb unbound from our chassis
Dec 06 08:25:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:17.811 139580 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36f50448-aea8-4f70-a635-f51c905b14cb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 08:25:17 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:17.812 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[52f98249-a2e8-4225-aacc-824402f9d394]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.815 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:17 compute-1 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000dc.scope: Deactivated successfully.
Dec 06 08:25:17 compute-1 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000dc.scope: Consumed 4.218s CPU time.
Dec 06 08:25:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:17 compute-1 systemd-machined[190302]: Machine qemu-99-instance-000000dc terminated.
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.980 226109 DEBUG nova.compute.manager [req-6d61e921-0534-4846-867a-29395983bc79 req-3dcd5b89-43c0-4067-9c2d-8ade1d2e0b64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-unplugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.981 226109 DEBUG oslo_concurrency.lockutils [req-6d61e921-0534-4846-867a-29395983bc79 req-3dcd5b89-43c0-4067-9c2d-8ade1d2e0b64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.981 226109 DEBUG oslo_concurrency.lockutils [req-6d61e921-0534-4846-867a-29395983bc79 req-3dcd5b89-43c0-4067-9c2d-8ade1d2e0b64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.981 226109 DEBUG oslo_concurrency.lockutils [req-6d61e921-0534-4846-867a-29395983bc79 req-3dcd5b89-43c0-4067-9c2d-8ade1d2e0b64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.981 226109 DEBUG nova.compute.manager [req-6d61e921-0534-4846-867a-29395983bc79 req-3dcd5b89-43c0-4067-9c2d-8ade1d2e0b64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] No waiting events found dispatching network-vif-unplugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.981 226109 WARNING nova.compute.manager [req-6d61e921-0534-4846-867a-29395983bc79 req-3dcd5b89-43c0-4067-9c2d-8ade1d2e0b64 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received unexpected event network-vif-unplugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 for instance with vm_state active and task_state suspending.
Dec 06 08:25:17 compute-1 nova_compute[226101]: 2025-12-06 08:25:17.994 226109 DEBUG nova.compute.manager [None req-3d119af1-922f-4a5b-897c-3ff34021ab2b ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:18.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:18.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:18 compute-1 nova_compute[226101]: 2025-12-06 08:25:18.729 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:18 compute-1 ceph-mon[81689]: pgmap v3992: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 93 op/s
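The recurring pgmap lines are the monitor's periodic placement-group summary (all 305 PGs active+clean, cluster fill level, client I/O rates). The same one-line summary is available on demand, sketched here via the ceph CLI:

    import subprocess

    # On-demand equivalent of the pgmap summary lines above.
    print(subprocess.run(['ceph', 'pg', 'stat'], capture_output=True,
                         text=True, check=True).stdout)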
Dec 06 08:25:19 compute-1 nova_compute[226101]: 2025-12-06 08:25:19.421 226109 INFO nova.compute.manager [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Resuming
Dec 06 08:25:19 compute-1 nova_compute[226101]: 2025-12-06 08:25:19.422 226109 DEBUG nova.objects.instance [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lazy-loading 'flavor' on Instance uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:25:19 compute-1 nova_compute[226101]: 2025-12-06 08:25:19.466 226109 DEBUG oslo_concurrency.lockutils [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquiring lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:25:19 compute-1 nova_compute[226101]: 2025-12-06 08:25:19.466 226109 DEBUG oslo_concurrency.lockutils [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquired lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:25:19 compute-1 nova_compute[226101]: 2025-12-06 08:25:19.467 226109 DEBUG nova.network.neutron [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:25:19 compute-1 nova_compute[226101]: 2025-12-06 08:25:19.582 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:20 compute-1 nova_compute[226101]: 2025-12-06 08:25:20.060 226109 DEBUG nova.compute.manager [req-35aeccb1-bdcc-4fb3-8769-bd0292ed44e4 req-95331a16-6893-4893-b801-ccbdbd9aeca4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:20 compute-1 nova_compute[226101]: 2025-12-06 08:25:20.060 226109 DEBUG oslo_concurrency.lockutils [req-35aeccb1-bdcc-4fb3-8769-bd0292ed44e4 req-95331a16-6893-4893-b801-ccbdbd9aeca4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:20 compute-1 nova_compute[226101]: 2025-12-06 08:25:20.060 226109 DEBUG oslo_concurrency.lockutils [req-35aeccb1-bdcc-4fb3-8769-bd0292ed44e4 req-95331a16-6893-4893-b801-ccbdbd9aeca4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:20 compute-1 nova_compute[226101]: 2025-12-06 08:25:20.061 226109 DEBUG oslo_concurrency.lockutils [req-35aeccb1-bdcc-4fb3-8769-bd0292ed44e4 req-95331a16-6893-4893-b801-ccbdbd9aeca4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:20 compute-1 nova_compute[226101]: 2025-12-06 08:25:20.061 226109 DEBUG nova.compute.manager [req-35aeccb1-bdcc-4fb3-8769-bd0292ed44e4 req-95331a16-6893-4893-b801-ccbdbd9aeca4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] No waiting events found dispatching network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:25:20 compute-1 nova_compute[226101]: 2025-12-06 08:25:20.061 226109 WARNING nova.compute.manager [req-35aeccb1-bdcc-4fb3-8769-bd0292ed44e4 req-95331a16-6893-4893-b801-ccbdbd9aeca4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received unexpected event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 for instance with vm_state suspended and task_state resuming.
Dec 06 08:25:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:20.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:20.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:20 compute-1 ceph-mon[81689]: pgmap v3993: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 70 op/s
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.835504) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520835627, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 372, "num_deletes": 251, "total_data_size": 316236, "memory_usage": 323832, "flush_reason": "Manual Compaction"}
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520839663, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 208199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88016, "largest_seqno": 88383, "table_properties": {"data_size": 205998, "index_size": 364, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5534, "raw_average_key_size": 18, "raw_value_size": 201608, "raw_average_value_size": 676, "num_data_blocks": 16, "num_entries": 298, "num_filter_entries": 298, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009510, "oldest_key_time": 1765009510, "file_creation_time": 1765009520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 4193 microseconds, and 1983 cpu microseconds.
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.839712) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 208199 bytes OK
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.839734) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.841493) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.841525) EVENT_LOG_v1 {"time_micros": 1765009520841515, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.841548) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 313765, prev total WAL file size 313765, number of live WAL files 2.
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.842190) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(203KB)], [180(12MB)]
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520842291, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 13212634, "oldest_snapshot_seqno": -1}
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 11221 keys, 11300773 bytes, temperature: kUnknown
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520946960, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 11300773, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11232937, "index_size": 38667, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28101, "raw_key_size": 299293, "raw_average_key_size": 26, "raw_value_size": 11041187, "raw_average_value_size": 983, "num_data_blocks": 1443, "num_entries": 11221, "num_filter_entries": 11221, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765009520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.948198) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11300773 bytes
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.950041) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.1 rd, 107.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(117.7) write-amplify(54.3) OK, records in: 11734, records dropped: 513 output_compression: NoCompression
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.950078) EVENT_LOG_v1 {"time_micros": 1765009520950061, "job": 116, "event": "compaction_finished", "compaction_time_micros": 104803, "compaction_time_cpu_micros": 54247, "output_level": 6, "num_output_files": 1, "total_output_size": 11300773, "num_input_records": 11734, "num_output_records": 11221, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520951101, "job": 116, "event": "table_file_deletion", "file_number": 182}
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520955598, "job": 116, "event": "table_file_deletion", "file_number": 180}
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.841972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.955820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.955828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.955831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.955834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:20 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:25:20.955837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
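The compaction summary's amplification figures can be re-derived from the EVENT_LOG_v1 numbers above: job 116 merged the 208,199-byte L0 flush (table #182) with the L6 file #180 (input_data_size 13,212,634 in total) into one 11,300,773-byte L6 file (#183):

    # Figures copied from the EVENT_LOG_v1 lines for job 116.
    l0_input = 208_199        # table #182, the fresh L0 flush
    total_input = 13_212_634  # input_data_size: #182 + #180
    output = 11_300_773       # table #183, rewritten L6 file

    print(round(output / l0_input, 1))                  # 54.3  -> write-amplify
    print(round((total_input + output) / l0_input, 1))  # 117.7 -> read-write-amplify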
Dec 06 08:25:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:22.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:22.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:23 compute-1 ceph-mon[81689]: pgmap v3994: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 75 op/s
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.632 226109 DEBUG nova.network.neutron [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Updating instance_info_cache with network_info: [{"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.651 226109 DEBUG oslo_concurrency.lockutils [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Releasing lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.656 226109 DEBUG nova.virt.libvirt.vif [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:25:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1729091669',display_name='tempest-TestServerAdvancedOps-server-1729091669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1729091669',id=220,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:25:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d82b0e57c7a246a7abb3cc909b561513',ramdisk_id='',reservation_id='r-hbvamvwg',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-950012428',owner_user_name='tempest-TestServerAdvancedOps-950012428-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:25:18Z,user_data=None,user_id='ebd2983853994f2684b4bba5a9593dd0',uuid=72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.656 226109 DEBUG nova.network.os_vif_util [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Converting VIF {"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.657 226109 DEBUG nova.network.os_vif_util [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.658 226109 DEBUG os_vif [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.658 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.659 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.659 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.662 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.663 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1839354-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.663 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1839354-6d, col_values=(('external_ids', {'iface-id': 'b1839354-6d4c-4442-a3d9-c8bacfc650d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:2a:d6', 'vm-uuid': '72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.663 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.664 226109 INFO os_vif [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d')
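The two ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand on the Interface row) are os-vif re-plugging the tap into br-int; both commit as "Transaction caused no change" because the port survived the suspend. A CLI equivalent, sketched via subprocess with the values from the log:

    import subprocess

    # ovs-vsctl equivalent of the os-vif plug transactions logged above.
    subprocess.run([
        'ovs-vsctl',
        '--may-exist', 'add-br', 'br-int',
        '--', '--may-exist', 'add-port', 'br-int', 'tapb1839354-6d',
        '--', 'set', 'Interface', 'tapb1839354-6d',
        'external_ids:iface-id=b1839354-6d4c-4442-a3d9-c8bacfc650d3',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:cd:2a:d6',
        'external_ids:vm-uuid=72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3',
    ], check=True)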
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.695 226109 DEBUG nova.objects.instance [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lazy-loading 'numa_topology' on Instance uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.732 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:23 compute-1 kernel: tapb1839354-6d: entered promiscuous mode
Dec 06 08:25:23 compute-1 NetworkManager[49031]: <info>  [1765009523.7722] manager: (tapb1839354-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/409)
Dec 06 08:25:23 compute-1 ovn_controller[130279]: 2025-12-06T08:25:23Z|00877|binding|INFO|Claiming lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 for this chassis.
Dec 06 08:25:23 compute-1 ovn_controller[130279]: 2025-12-06T08:25:23Z|00878|binding|INFO|b1839354-6d4c-4442-a3d9-c8bacfc650d3: Claiming fa:16:3e:cd:2a:d6 10.100.0.12
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.772 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:23.779 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:2a:d6 10.100.0.12'], port_security=['fa:16:3e:cd:2a:d6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36f50448-aea8-4f70-a635-f51c905b14cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd82b0e57c7a246a7abb3cc909b561513', 'neutron:revision_number': '5', 'neutron:security_group_ids': '304e9f17-cc9f-41ef-bb37-605ef52c4a7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b764814a-f207-4922-b616-1ca7bf2e289f, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b1839354-6d4c-4442-a3d9-c8bacfc650d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:25:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:23.781 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b1839354-6d4c-4442-a3d9-c8bacfc650d3 in datapath 36f50448-aea8-4f70-a635-f51c905b14cb bound to our chassis
Dec 06 08:25:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:23.781 139580 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36f50448-aea8-4f70-a635-f51c905b14cb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 08:25:23 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:23.782 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[16f1ff09-cc9f-4c54-b1df-52ee5c722038]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:25:23 compute-1 ovn_controller[130279]: 2025-12-06T08:25:23Z|00879|binding|INFO|Setting lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 up in Southbound
Dec 06 08:25:23 compute-1 ovn_controller[130279]: 2025-12-06T08:25:23Z|00880|binding|INFO|Setting lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 ovn-installed in OVS
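ovn-controller's Claiming / "Setting lport ... up" lines mean the Port_Binding row in the OVN southbound database now carries this chassis, which is exactly the update the metadata agent matched above. The row can be inspected directly (a sketch; assumes ovn-sbctl can reach the southbound DB from this host):

    import subprocess

    # Look up the southbound Port_Binding row claimed above.
    subprocess.run(['ovn-sbctl', 'find', 'Port_Binding',
                    'logical_port=b1839354-6d4c-4442-a3d9-c8bacfc650d3'],
                   check=True)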
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.792 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.794 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:23 compute-1 systemd-udevd[316004]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:25:23 compute-1 nova_compute[226101]: 2025-12-06 08:25:23.800 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:23 compute-1 NetworkManager[49031]: <info>  [1765009523.8117] device (tapb1839354-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:25:23 compute-1 NetworkManager[49031]: <info>  [1765009523.8126] device (tapb1839354-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:25:23 compute-1 systemd-machined[190302]: New machine qemu-100-instance-000000dc.
Dec 06 08:25:23 compute-1 systemd[1]: Started Virtual Machine qemu-100-instance-000000dc.
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.549 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.549 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009524.5489779, 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.550 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] VM Started (Lifecycle Event)
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.575 226109 DEBUG nova.compute.manager [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.575 226109 DEBUG nova.objects.instance [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lazy-loading 'pci_devices' on Instance uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.579 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.583 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.584 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.589 226109 INFO nova.virt.libvirt.driver [-] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Instance running successfully.
Dec 06 08:25:24 compute-1 virtqemud[225710]: argument unsupported: QEMU guest agent is not configured
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.591 226109 DEBUG nova.virt.libvirt.guest [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
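The two agent messages above are benign: after a resume, Nova tries to sync the guest clock through the QEMU guest agent, and this image does not enable one. Enabling it takes an image property plus the agent running inside the guest; a sketch with openstacksdk (treat the exact SDK call as an assumption; the image UUID is the image_ref from this log):

    import openstack

    # Sketch: flag the image so Nova adds the virtio guest-agent channel.
    conn = openstack.connect()
    conn.image.update_image('6efab05d-c7cf-4770-a5c3-c806a2739063',
                            hw_qemu_guest_agent='yes')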
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.592 226109 DEBUG nova.compute.manager [None req-524b744e-7815-4565-bafe-1c4b74ffb527 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:24.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:24.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.614 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] During sync_power_state the instance has a pending task (resuming). Skip.
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.614 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009524.5539885, 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.615 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] VM Resumed (Lifecycle Event)
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.619 226109 DEBUG nova.compute.manager [req-f018f989-f101-48e2-91e0-c07f7b13a93c req-06ffeb72-72dd-4e07-818f-062727b8b3fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.619 226109 DEBUG oslo_concurrency.lockutils [req-f018f989-f101-48e2-91e0-c07f7b13a93c req-06ffeb72-72dd-4e07-818f-062727b8b3fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.620 226109 DEBUG oslo_concurrency.lockutils [req-f018f989-f101-48e2-91e0-c07f7b13a93c req-06ffeb72-72dd-4e07-818f-062727b8b3fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.620 226109 DEBUG oslo_concurrency.lockutils [req-f018f989-f101-48e2-91e0-c07f7b13a93c req-06ffeb72-72dd-4e07-818f-062727b8b3fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.620 226109 DEBUG nova.compute.manager [req-f018f989-f101-48e2-91e0-c07f7b13a93c req-06ffeb72-72dd-4e07-818f-062727b8b3fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] No waiting events found dispatching network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.621 226109 WARNING nova.compute.manager [req-f018f989-f101-48e2-91e0-c07f7b13a93c req-06ffeb72-72dd-4e07-818f-062727b8b3fe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received unexpected event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 for instance with vm_state suspended and task_state resuming.
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.642 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:24 compute-1 nova_compute[226101]: 2025-12-06 08:25:24.646 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:25:25 compute-1 ceph-mon[81689]: pgmap v3995: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:25:25 compute-1 nova_compute[226101]: 2025-12-06 08:25:25.557 226109 DEBUG nova.objects.instance [None req-0088b2f3-9f46-4b1a-ba1b-0f3c9f7fa93d ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lazy-loading 'pci_devices' on Instance uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:25:25 compute-1 nova_compute[226101]: 2025-12-06 08:25:25.579 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009525.5796275, 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:25:25 compute-1 nova_compute[226101]: 2025-12-06 08:25:25.580 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] VM Paused (Lifecycle Event)
Dec 06 08:25:25 compute-1 nova_compute[226101]: 2025-12-06 08:25:25.600 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:25 compute-1 nova_compute[226101]: 2025-12-06 08:25:25.605 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:25:25 compute-1 nova_compute[226101]: 2025-12-06 08:25:25.627 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 06 08:25:26 compute-1 kernel: tapb1839354-6d (unregistering): left promiscuous mode
Dec 06 08:25:26 compute-1 NetworkManager[49031]: <info>  [1765009526.2994] device (tapb1839354-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:25:26 compute-1 ovn_controller[130279]: 2025-12-06T08:25:26Z|00881|binding|INFO|Releasing lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 from this chassis (sb_readonly=0)
Dec 06 08:25:26 compute-1 ovn_controller[130279]: 2025-12-06T08:25:26Z|00882|binding|INFO|Setting lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 down in Southbound
Dec 06 08:25:26 compute-1 ovn_controller[130279]: 2025-12-06T08:25:26Z|00883|binding|INFO|Removing iface tapb1839354-6d ovn-installed in OVS
Dec 06 08:25:26 compute-1 nova_compute[226101]: 2025-12-06 08:25:26.311 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:26.316 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:2a:d6 10.100.0.12'], port_security=['fa:16:3e:cd:2a:d6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36f50448-aea8-4f70-a635-f51c905b14cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd82b0e57c7a246a7abb3cc909b561513', 'neutron:revision_number': '6', 'neutron:security_group_ids': '304e9f17-cc9f-41ef-bb37-605ef52c4a7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b764814a-f207-4922-b616-1ca7bf2e289f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b1839354-6d4c-4442-a3d9-c8bacfc650d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:25:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:26.317 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b1839354-6d4c-4442-a3d9-c8bacfc650d3 in datapath 36f50448-aea8-4f70-a635-f51c905b14cb unbound from our chassis
Dec 06 08:25:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:26.318 139580 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36f50448-aea8-4f70-a635-f51c905b14cb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 08:25:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:26.319 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[3b89ea16-b359-4acf-82d2-6a24c373c971]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:25:26 compute-1 nova_compute[226101]: 2025-12-06 08:25:26.326 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:26 compute-1 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000dc.scope: Deactivated successfully.
Dec 06 08:25:26 compute-1 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000dc.scope: Consumed 1.787s CPU time.
Dec 06 08:25:26 compute-1 systemd-machined[190302]: Machine qemu-100-instance-000000dc terminated.
Dec 06 08:25:26 compute-1 NetworkManager[49031]: <info>  [1765009526.4673] manager: (tapb1839354-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/410)
Dec 06 08:25:26 compute-1 nova_compute[226101]: 2025-12-06 08:25:26.483 226109 DEBUG nova.compute.manager [None req-0088b2f3-9f46-4b1a-ba1b-0f3c9f7fa93d ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:26.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:25:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:26 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:26.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:26 compute-1 nova_compute[226101]: 2025-12-06 08:25:26.716 226109 DEBUG nova.compute.manager [req-3bf917ff-4530-40c8-89b9-e8f32ec1ea20 req-06eacbfa-156f-4631-8185-e8d7ac55d3fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:26 compute-1 nova_compute[226101]: 2025-12-06 08:25:26.716 226109 DEBUG oslo_concurrency.lockutils [req-3bf917ff-4530-40c8-89b9-e8f32ec1ea20 req-06eacbfa-156f-4631-8185-e8d7ac55d3fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:26 compute-1 nova_compute[226101]: 2025-12-06 08:25:26.717 226109 DEBUG oslo_concurrency.lockutils [req-3bf917ff-4530-40c8-89b9-e8f32ec1ea20 req-06eacbfa-156f-4631-8185-e8d7ac55d3fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:26 compute-1 nova_compute[226101]: 2025-12-06 08:25:26.717 226109 DEBUG oslo_concurrency.lockutils [req-3bf917ff-4530-40c8-89b9-e8f32ec1ea20 req-06eacbfa-156f-4631-8185-e8d7ac55d3fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:26 compute-1 nova_compute[226101]: 2025-12-06 08:25:26.717 226109 DEBUG nova.compute.manager [req-3bf917ff-4530-40c8-89b9-e8f32ec1ea20 req-06eacbfa-156f-4631-8185-e8d7ac55d3fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] No waiting events found dispatching network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:25:26 compute-1 nova_compute[226101]: 2025-12-06 08:25:26.718 226109 WARNING nova.compute.manager [req-3bf917ff-4530-40c8-89b9-e8f32ec1ea20 req-06eacbfa-156f-4631-8185-e8d7ac55d3fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received unexpected event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 for instance with vm_state suspended and task_state None.
Dec 06 08:25:27 compute-1 ceph-mon[81689]: pgmap v3996: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Dec 06 08:25:27 compute-1 nova_compute[226101]: 2025-12-06 08:25:27.757 226109 INFO nova.compute.manager [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Resuming
Dec 06 08:25:27 compute-1 nova_compute[226101]: 2025-12-06 08:25:27.758 226109 DEBUG nova.objects.instance [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lazy-loading 'flavor' on Instance uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:25:27 compute-1 nova_compute[226101]: 2025-12-06 08:25:27.792 226109 DEBUG oslo_concurrency.lockutils [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquiring lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:25:27 compute-1 nova_compute[226101]: 2025-12-06 08:25:27.794 226109 DEBUG oslo_concurrency.lockutils [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquired lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:25:27 compute-1 nova_compute[226101]: 2025-12-06 08:25:27.794 226109 DEBUG nova.network.neutron [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:25:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:28.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:28.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.773 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.829 226109 DEBUG nova.compute.manager [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-unplugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.829 226109 DEBUG oslo_concurrency.lockutils [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.829 226109 DEBUG oslo_concurrency.lockutils [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.829 226109 DEBUG oslo_concurrency.lockutils [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.829 226109 DEBUG nova.compute.manager [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] No waiting events found dispatching network-vif-unplugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.830 226109 WARNING nova.compute.manager [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received unexpected event network-vif-unplugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 for instance with vm_state suspended and task_state resuming.
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.830 226109 DEBUG nova.compute.manager [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.830 226109 DEBUG oslo_concurrency.lockutils [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.830 226109 DEBUG oslo_concurrency.lockutils [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.830 226109 DEBUG oslo_concurrency.lockutils [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.830 226109 DEBUG nova.compute.manager [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] No waiting events found dispatching network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.831 226109 WARNING nova.compute.manager [req-24cf330d-47f7-4b40-bd5d-5ce995b16c6e req-7788bb0d-557e-4982-bb45-1c13fbc136cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received unexpected event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 for instance with vm_state suspended and task_state resuming.
Dec 06 08:25:28 compute-1 nova_compute[226101]: 2025-12-06 08:25:28.992 226109 DEBUG nova.network.neutron [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Updating instance_info_cache with network_info: [{"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.007 226109 DEBUG oslo_concurrency.lockutils [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Releasing lock "refresh_cache-72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.011 226109 DEBUG nova.virt.libvirt.vif [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:25:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1729091669',display_name='tempest-TestServerAdvancedOps-server-1729091669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1729091669',id=220,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:25:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d82b0e57c7a246a7abb3cc909b561513',ramdisk_id='',reservation_id='r-hbvamvwg',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-950012428',owner_user_name='tempest-TestServerAdvancedOps-950012428-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:25:26Z,user_data=None,user_id='ebd2983853994f2684b4bba5a9593dd0',uuid=72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.011 226109 DEBUG nova.network.os_vif_util [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Converting VIF {"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.012 226109 DEBUG nova.network.os_vif_util [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.012 226109 DEBUG os_vif [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.012 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.013 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.013 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.015 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.016 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1839354-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.016 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1839354-6d, col_values=(('external_ids', {'iface-id': 'b1839354-6d4c-4442-a3d9-c8bacfc650d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:2a:d6', 'vm-uuid': '72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.017 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.019 226109 INFO os_vif [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d')
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.044 226109 DEBUG nova.objects.instance [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lazy-loading 'numa_topology' on Instance uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:25:29 compute-1 podman[316076]: 2025-12-06 08:25:29.071808061 +0000 UTC m=+0.057366529 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 08:25:29 compute-1 podman[316075]: 2025-12-06 08:25:29.078967003 +0000 UTC m=+0.065276211 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 06 08:25:29 compute-1 podman[316077]: 2025-12-06 08:25:29.103173623 +0000 UTC m=+0.082115014 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 08:25:29 compute-1 kernel: tapb1839354-6d: entered promiscuous mode
Dec 06 08:25:29 compute-1 NetworkManager[49031]: <info>  [1765009529.1078] manager: (tapb1839354-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.108 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:29 compute-1 ovn_controller[130279]: 2025-12-06T08:25:29Z|00884|binding|INFO|Claiming lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 for this chassis.
Dec 06 08:25:29 compute-1 ovn_controller[130279]: 2025-12-06T08:25:29Z|00885|binding|INFO|b1839354-6d4c-4442-a3d9-c8bacfc650d3: Claiming fa:16:3e:cd:2a:d6 10.100.0.12
Dec 06 08:25:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:29.116 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:2a:d6 10.100.0.12'], port_security=['fa:16:3e:cd:2a:d6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36f50448-aea8-4f70-a635-f51c905b14cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd82b0e57c7a246a7abb3cc909b561513', 'neutron:revision_number': '7', 'neutron:security_group_ids': '304e9f17-cc9f-41ef-bb37-605ef52c4a7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b764814a-f207-4922-b616-1ca7bf2e289f, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b1839354-6d4c-4442-a3d9-c8bacfc650d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:25:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:29.117 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b1839354-6d4c-4442-a3d9-c8bacfc650d3 in datapath 36f50448-aea8-4f70-a635-f51c905b14cb bound to our chassis
Dec 06 08:25:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:29.118 139580 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36f50448-aea8-4f70-a635-f51c905b14cb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 08:25:29 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:29.118 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f8893f36-90f8-4fd2-a4fc-4096713dc17e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:25:29 compute-1 ovn_controller[130279]: 2025-12-06T08:25:29Z|00886|binding|INFO|Setting lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 up in Southbound
Dec 06 08:25:29 compute-1 ovn_controller[130279]: 2025-12-06T08:25:29Z|00887|binding|INFO|Setting lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 ovn-installed in OVS
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.124 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.126 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:29 compute-1 systemd-machined[190302]: New machine qemu-101-instance-000000dc.
Dec 06 08:25:29 compute-1 systemd-udevd[316155]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:25:29 compute-1 NetworkManager[49031]: <info>  [1765009529.1547] device (tapb1839354-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:25:29 compute-1 systemd[1]: Started Virtual Machine qemu-101-instance-000000dc.
Dec 06 08:25:29 compute-1 NetworkManager[49031]: <info>  [1765009529.1567] device (tapb1839354-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:25:29 compute-1 ceph-mon[81689]: pgmap v3997: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 676 KiB/s rd, 28 op/s
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.585 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.805 226109 DEBUG nova.virt.libvirt.host [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Removed pending event for 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.806 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009529.8057168, 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.806 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] VM Started (Lifecycle Event)
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.820 226109 DEBUG nova.compute.manager [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.820 226109 DEBUG nova.objects.instance [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lazy-loading 'pci_devices' on Instance uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.833 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.837 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.894 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] During sync_power_state the instance has a pending task (resuming). Skip.
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.894 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009529.8086941, 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.895 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] VM Resumed (Lifecycle Event)
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.897 226109 INFO nova.virt.libvirt.driver [-] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Instance running successfully.
Dec 06 08:25:29 compute-1 virtqemud[225710]: argument unsupported: QEMU guest agent is not configured
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.900 226109 DEBUG nova.virt.libvirt.guest [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.900 226109 DEBUG nova.compute.manager [None req-c40b9ca6-deba-41e8-828f-3f7e11a5a702 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.908 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.911 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:25:29 compute-1 nova_compute[226101]: 2025-12-06 08:25:29.931 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] During sync_power_state the instance has a pending task (resuming). Skip.
Dec 06 08:25:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:30.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:30.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.907 226109 DEBUG nova.compute.manager [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.908 226109 DEBUG oslo_concurrency.lockutils [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.909 226109 DEBUG oslo_concurrency.lockutils [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.909 226109 DEBUG oslo_concurrency.lockutils [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.909 226109 DEBUG nova.compute.manager [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] No waiting events found dispatching network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.909 226109 WARNING nova.compute.manager [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received unexpected event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 for instance with vm_state active and task_state None.
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.909 226109 DEBUG nova.compute.manager [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.910 226109 DEBUG oslo_concurrency.lockutils [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.910 226109 DEBUG oslo_concurrency.lockutils [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.910 226109 DEBUG oslo_concurrency.lockutils [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.910 226109 DEBUG nova.compute.manager [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] No waiting events found dispatching network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:25:30 compute-1 nova_compute[226101]: 2025-12-06 08:25:30.910 226109 WARNING nova.compute.manager [req-d09b4576-eb57-4243-8a16-57d70b56c73f req-08520220-caa4-40b0-a684-ce22303b2fe5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received unexpected event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 for instance with vm_state active and task_state None.
Dec 06 08:25:31 compute-1 nova_compute[226101]: 2025-12-06 08:25:31.911 226109 DEBUG oslo_concurrency.lockutils [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:31 compute-1 nova_compute[226101]: 2025-12-06 08:25:31.912 226109 DEBUG oslo_concurrency.lockutils [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:31 compute-1 nova_compute[226101]: 2025-12-06 08:25:31.912 226109 DEBUG oslo_concurrency.lockutils [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:31 compute-1 nova_compute[226101]: 2025-12-06 08:25:31.912 226109 DEBUG oslo_concurrency.lockutils [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:31 compute-1 nova_compute[226101]: 2025-12-06 08:25:31.913 226109 DEBUG oslo_concurrency.lockutils [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:31 compute-1 nova_compute[226101]: 2025-12-06 08:25:31.915 226109 INFO nova.compute.manager [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Terminating instance
Dec 06 08:25:31 compute-1 nova_compute[226101]: 2025-12-06 08:25:31.917 226109 DEBUG nova.compute.manager [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:25:31 compute-1 ceph-mon[81689]: pgmap v3998: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 143 KiB/s rd, 9 op/s
Dec 06 08:25:32 compute-1 kernel: tapb1839354-6d (unregistering): left promiscuous mode
Dec 06 08:25:32 compute-1 NetworkManager[49031]: <info>  [1765009532.0323] device (tapb1839354-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:25:32 compute-1 ovn_controller[130279]: 2025-12-06T08:25:32Z|00888|binding|INFO|Releasing lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 from this chassis (sb_readonly=0)
Dec 06 08:25:32 compute-1 ovn_controller[130279]: 2025-12-06T08:25:32Z|00889|binding|INFO|Setting lport b1839354-6d4c-4442-a3d9-c8bacfc650d3 down in Southbound
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.045 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:32 compute-1 ovn_controller[130279]: 2025-12-06T08:25:32Z|00890|binding|INFO|Removing iface tapb1839354-6d ovn-installed in OVS
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.047 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:32.053 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:2a:d6 10.100.0.12'], port_security=['fa:16:3e:cd:2a:d6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36f50448-aea8-4f70-a635-f51c905b14cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd82b0e57c7a246a7abb3cc909b561513', 'neutron:revision_number': '8', 'neutron:security_group_ids': '304e9f17-cc9f-41ef-bb37-605ef52c4a7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b764814a-f207-4922-b616-1ca7bf2e289f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=b1839354-6d4c-4442-a3d9-c8bacfc650d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:25:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:32.054 139580 INFO neutron.agent.ovn.metadata.agent [-] Port b1839354-6d4c-4442-a3d9-c8bacfc650d3 in datapath 36f50448-aea8-4f70-a635-f51c905b14cb unbound from our chassis
Dec 06 08:25:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:32.055 139580 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36f50448-aea8-4f70-a635-f51c905b14cb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 08:25:32 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:32.056 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[030b168d-920b-487d-805d-8fc79ee025f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.076 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:32 compute-1 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000dc.scope: Deactivated successfully.
Dec 06 08:25:32 compute-1 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000dc.scope: Consumed 2.773s CPU time.
Dec 06 08:25:32 compute-1 systemd-machined[190302]: Machine qemu-101-instance-000000dc terminated.
Dec 06 08:25:32 compute-1 NetworkManager[49031]: <info>  [1765009532.1415] manager: (tapb1839354-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.158 226109 INFO nova.virt.libvirt.driver [-] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Instance destroyed successfully.
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.159 226109 DEBUG nova.objects.instance [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lazy-loading 'resources' on Instance uuid 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.177 226109 DEBUG nova.virt.libvirt.vif [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:25:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1729091669',display_name='tempest-TestServerAdvancedOps-server-1729091669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1729091669',id=220,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:25:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d82b0e57c7a246a7abb3cc909b561513',ramdisk_id='',reservation_id='r-hbvamvwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-950012428',owner_user_name='tempest-TestServerAdvancedOps-950012428-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:25:29Z,user_data=None,user_id='ebd2983853994f2684b4bba5a9593dd0',uuid=72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.178 226109 DEBUG nova.network.os_vif_util [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Converting VIF {"id": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "address": "fa:16:3e:cd:2a:d6", "network": {"id": "36f50448-aea8-4f70-a635-f51c905b14cb", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-59843899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d82b0e57c7a246a7abb3cc909b561513", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1839354-6d", "ovs_interfaceid": "b1839354-6d4c-4442-a3d9-c8bacfc650d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.180 226109 DEBUG nova.network.os_vif_util [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.180 226109 DEBUG os_vif [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.184 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.185 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1839354-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.187 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.189 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.192 226109 INFO os_vif [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2a:d6,bridge_name='br-int',has_traffic_filtering=True,id=b1839354-6d4c-4442-a3d9-c8bacfc650d3,network=Network(36f50448-aea8-4f70-a635-f51c905b14cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1839354-6d')
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.599 226109 INFO nova.virt.libvirt.driver [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Deleting instance files /var/lib/nova/instances/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_del
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.600 226109 INFO nova.virt.libvirt.driver [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Deletion of /var/lib/nova/instances/72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3_del complete
Dec 06 08:25:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:32.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.655 226109 INFO nova.compute.manager [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Took 0.74 seconds to destroy the instance on the hypervisor.
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.656 226109 DEBUG oslo.service.loopingcall [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.656 226109 DEBUG nova.compute.manager [-] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.657 226109 DEBUG nova.network.neutron [-] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:25:32 compute-1 ceph-mon[81689]: pgmap v3999: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 147 KiB/s rd, 14 op/s
Dec 06 08:25:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.992 226109 DEBUG nova.compute.manager [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-unplugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.993 226109 DEBUG oslo_concurrency.lockutils [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.993 226109 DEBUG oslo_concurrency.lockutils [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.994 226109 DEBUG oslo_concurrency.lockutils [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.994 226109 DEBUG nova.compute.manager [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] No waiting events found dispatching network-vif-unplugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.994 226109 DEBUG nova.compute.manager [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-unplugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.994 226109 DEBUG nova.compute.manager [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.995 226109 DEBUG oslo_concurrency.lockutils [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.995 226109 DEBUG oslo_concurrency.lockutils [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.995 226109 DEBUG oslo_concurrency.lockutils [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.996 226109 DEBUG nova.compute.manager [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] No waiting events found dispatching network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:25:32 compute-1 nova_compute[226101]: 2025-12-06 08:25:32.996 226109 WARNING nova.compute.manager [req-33f69b75-d90b-4b01-908a-479709023807 req-c681301f-f192-4f40-8d78-e4c9abf32bda 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received unexpected event network-vif-plugged-b1839354-6d4c-4442-a3d9-c8bacfc650d3 for instance with vm_state active and task_state deleting.
Dec 06 08:25:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:33.630 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=107, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=106) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:25:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:33.632 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:25:33 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:25:33.632 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '107'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:33 compute-1 nova_compute[226101]: 2025-12-06 08:25:33.632 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:33 compute-1 nova_compute[226101]: 2025-12-06 08:25:33.670 226109 DEBUG nova.network.neutron [-] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:25:33 compute-1 nova_compute[226101]: 2025-12-06 08:25:33.698 226109 INFO nova.compute.manager [-] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Took 1.04 seconds to deallocate network for instance.
Dec 06 08:25:33 compute-1 nova_compute[226101]: 2025-12-06 08:25:33.744 226109 DEBUG oslo_concurrency.lockutils [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:33 compute-1 nova_compute[226101]: 2025-12-06 08:25:33.745 226109 DEBUG oslo_concurrency.lockutils [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:33 compute-1 nova_compute[226101]: 2025-12-06 08:25:33.767 226109 DEBUG nova.compute.manager [req-15c2f0aa-100b-4483-91b3-4ebbf16aaf73 req-88e98797-22fc-444c-b895-e8bf9d9adcf8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Received event network-vif-deleted-b1839354-6d4c-4442-a3d9-c8bacfc650d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:25:33 compute-1 nova_compute[226101]: 2025-12-06 08:25:33.777 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:33 compute-1 nova_compute[226101]: 2025-12-06 08:25:33.804 226109 DEBUG oslo_concurrency.processutils [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:34 compute-1 nova_compute[226101]: 2025-12-06 08:25:34.351 226109 DEBUG oslo_concurrency.processutils [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:34 compute-1 nova_compute[226101]: 2025-12-06 08:25:34.364 226109 DEBUG nova.compute.provider_tree [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:25:34 compute-1 nova_compute[226101]: 2025-12-06 08:25:34.388 226109 DEBUG nova.scheduler.client.report [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:25:34 compute-1 nova_compute[226101]: 2025-12-06 08:25:34.430 226109 DEBUG oslo_concurrency.lockutils [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:34 compute-1 nova_compute[226101]: 2025-12-06 08:25:34.466 226109 INFO nova.scheduler.client.report [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Deleted allocations for instance 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3
Dec 06 08:25:34 compute-1 nova_compute[226101]: 2025-12-06 08:25:34.541 226109 DEBUG oslo_concurrency.lockutils [None req-981e6af0-75e2-446c-84c6-809339d41242 ebd2983853994f2684b4bba5a9593dd0 d82b0e57c7a246a7abb3cc909b561513 - - default default] Lock "72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:34.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:34.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:35 compute-1 ceph-mon[81689]: pgmap v4000: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.5 KiB/s rd, 9 op/s
Dec 06 08:25:35 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3315669685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:35 compute-1 nova_compute[226101]: 2025-12-06 08:25:35.839 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:36.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:36.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:37 compute-1 ceph-mon[81689]: pgmap v4001: 305 pgs: 305 active+clean; 139 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 853 B/s wr, 25 op/s
Dec 06 08:25:37 compute-1 nova_compute[226101]: 2025-12-06 08:25:37.189 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:38.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:38.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:38 compute-1 nova_compute[226101]: 2025-12-06 08:25:38.779 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:39 compute-1 ceph-mon[81689]: pgmap v4002: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Dec 06 08:25:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:40.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:40.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:40 compute-1 sshd-session[316265]: Received disconnect from 186.87.166.141 port 39898:11: Bye Bye [preauth]
Dec 06 08:25:40 compute-1 sshd-session[316265]: Disconnected from authenticating user root 186.87.166.141 port 39898 [preauth]
Dec 06 08:25:41 compute-1 ceph-mon[81689]: pgmap v4003: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 06 08:25:42 compute-1 nova_compute[226101]: 2025-12-06 08:25:42.193 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:42.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:42.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:43 compute-1 sshd-session[316267]: Received disconnect from 14.225.3.79 port 45246:11: Bye Bye [preauth]
Dec 06 08:25:43 compute-1 sshd-session[316267]: Disconnected from authenticating user root 14.225.3.79 port 45246 [preauth]
Dec 06 08:25:43 compute-1 ceph-mon[81689]: pgmap v4004: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 06 08:25:43 compute-1 nova_compute[226101]: 2025-12-06 08:25:43.813 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:44.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:44.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:44 compute-1 ceph-mon[81689]: pgmap v4005: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:25:45 compute-1 sshd-session[316271]: Received disconnect from 91.144.158.231 port 6061:11: Bye Bye [preauth]
Dec 06 08:25:45 compute-1 sshd-session[316271]: Disconnected from authenticating user root 91.144.158.231 port 6061 [preauth]
Dec 06 08:25:45 compute-1 nova_compute[226101]: 2025-12-06 08:25:45.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:45 compute-1 nova_compute[226101]: 2025-12-06 08:25:45.629 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:45 compute-1 nova_compute[226101]: 2025-12-06 08:25:45.629 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:45 compute-1 nova_compute[226101]: 2025-12-06 08:25:45.629 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:45 compute-1 nova_compute[226101]: 2025-12-06 08:25:45.630 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:25:45 compute-1 nova_compute[226101]: 2025-12-06 08:25:45.630 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:45 compute-1 sshd-session[316273]: Received disconnect from 106.51.92.114 port 36788:11: Bye Bye [preauth]
Dec 06 08:25:45 compute-1 sshd-session[316273]: Disconnected from authenticating user root 106.51.92.114 port 36788 [preauth]
Dec 06 08:25:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:25:46 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3770189070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:46 compute-1 nova_compute[226101]: 2025-12-06 08:25:46.085 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:46 compute-1 nova_compute[226101]: 2025-12-06 08:25:46.240 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:25:46 compute-1 nova_compute[226101]: 2025-12-06 08:25:46.241 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4164MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:25:46 compute-1 nova_compute[226101]: 2025-12-06 08:25:46.241 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:46 compute-1 nova_compute[226101]: 2025-12-06 08:25:46.241 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:46 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3770189070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:46.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:46.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.157 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009532.1557727, 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.158 226109 INFO nova.compute.manager [-] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] VM Stopped (Lifecycle Event)
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.197 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.210 226109 DEBUG nova.compute.manager [None req-008454a0-5dae-4c38-bcab-b3408d12954e - - - - - -] [instance: 72f6a78e-261f-4f39-97a5-a6c4bd1c6ea3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.250 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.251 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.285 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:47 compute-1 ceph-mon[81689]: pgmap v4006: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:25:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:25:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/375367801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.710 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.716 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.731 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.752 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:25:47 compute-1 nova_compute[226101]: 2025-12-06 08:25:47.753 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:48.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:48.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:48 compute-1 nova_compute[226101]: 2025-12-06 08:25:48.815 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/375367801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:50.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:50.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:51 compute-1 ceph-mon[81689]: pgmap v4007: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 10 op/s
Dec 06 08:25:51 compute-1 ceph-mon[81689]: pgmap v4008: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:51 compute-1 sshd-session[316320]: Received disconnect from 186.96.151.198 port 55394:11: Bye Bye [preauth]
Dec 06 08:25:51 compute-1 sshd-session[316320]: Disconnected from authenticating user root 186.96.151.198 port 55394 [preauth]
Dec 06 08:25:52 compute-1 nova_compute[226101]: 2025-12-06 08:25:52.201 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:52.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:52.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:53 compute-1 ceph-mon[81689]: pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:53 compute-1 nova_compute[226101]: 2025-12-06 08:25:53.863 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:54.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:54.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:54 compute-1 ceph-mon[81689]: pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:56.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:56.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:56 compute-1 nova_compute[226101]: 2025-12-06 08:25:56.753 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:56 compute-1 nova_compute[226101]: 2025-12-06 08:25:56.753 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:25:56 compute-1 nova_compute[226101]: 2025-12-06 08:25:56.753 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:25:56 compute-1 nova_compute[226101]: 2025-12-06 08:25:56.767 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:25:57 compute-1 nova_compute[226101]: 2025-12-06 08:25:57.239 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:57 compute-1 ceph-mon[81689]: pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:58.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:25:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:58.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:58 compute-1 nova_compute[226101]: 2025-12-06 08:25:58.866 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:59 compute-1 ceph-mon[81689]: pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:00 compute-1 podman[316323]: 2025-12-06 08:26:00.07087789 +0000 UTC m=+0.046225080 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:26:00 compute-1 podman[316322]: 2025-12-06 08:26:00.080343144 +0000 UTC m=+0.058796028 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:26:00 compute-1 podman[316324]: 2025-12-06 08:26:00.156330211 +0000 UTC m=+0.115339483 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:26:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:00.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:00.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:00 compute-1 sudo[316383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:00 compute-1 sudo[316383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:00 compute-1 sudo[316383]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:00 compute-1 sudo[316408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:26:00 compute-1 sudo[316408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:00 compute-1 sudo[316408]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:01 compute-1 sudo[316433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:01 compute-1 sudo[316433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:01 compute-1 sudo[316433]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:01 compute-1 sudo[316458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:26:01 compute-1 sudo[316458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:01 compute-1 sudo[316458]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:01 compute-1 ceph-mon[81689]: pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:26:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:26:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:26:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:26:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:26:01 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:26:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:01.704 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:01.705 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:01.705 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:02 compute-1 nova_compute[226101]: 2025-12-06 08:26:02.242 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:02 compute-1 nova_compute[226101]: 2025-12-06 08:26:02.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:02 compute-1 nova_compute[226101]: 2025-12-06 08:26:02.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:02.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:02.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:02 compute-1 ceph-mon[81689]: pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:03 compute-1 nova_compute[226101]: 2025-12-06 08:26:03.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:03 compute-1 nova_compute[226101]: 2025-12-06 08:26:03.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:26:03 compute-1 nova_compute[226101]: 2025-12-06 08:26:03.867 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1078334676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:04.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:04.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2690275473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:04 compute-1 ceph-mon[81689]: pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3266479852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/382677093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:05 compute-1 nova_compute[226101]: 2025-12-06 08:26:05.921 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "280fb28b-e8b2-40d1-ade2-c117de080ce1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:05 compute-1 nova_compute[226101]: 2025-12-06 08:26:05.921 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:05 compute-1 nova_compute[226101]: 2025-12-06 08:26:05.944 226109 DEBUG nova.compute.manager [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.114 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.115 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.125 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.126 226109 INFO nova.compute.claims [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Claim successful on node compute-1.ctlplane.example.com
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.254 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:06.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:26:06 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1855592645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.789 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.799 226109 DEBUG nova.compute.provider_tree [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.817 226109 DEBUG nova.scheduler.client.report [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.850 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.852 226109 DEBUG nova.compute.manager [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.909 226109 DEBUG nova.compute.manager [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.909 226109 DEBUG nova.network.neutron [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:26:06 compute-1 ceph-mon[81689]: pgmap v4016: 305 pgs: 305 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 975 KiB/s wr, 1 op/s
Dec 06 08:26:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3789081481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1855592645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.936 226109 INFO nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:26:06 compute-1 nova_compute[226101]: 2025-12-06 08:26:06.962 226109 DEBUG nova.compute.manager [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.063 226109 DEBUG nova.compute.manager [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.065 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.066 226109 INFO nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Creating image(s)
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.105 226109 DEBUG nova.storage.rbd_utils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.143 226109 DEBUG nova.storage.rbd_utils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.183 226109 DEBUG nova.storage.rbd_utils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.188 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.251 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.278 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.280 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.282 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.282 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.320 226109 DEBUG nova.storage.rbd_utils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.326 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.613 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.689 226109 DEBUG nova.storage.rbd_utils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] resizing rbd image 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.801 226109 DEBUG nova.network.neutron [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Successfully created port: e0c4d35f-9815-42ef-889d-8547f1c32618 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.812 226109 DEBUG nova.objects.instance [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lazy-loading 'migration_context' on Instance uuid 280fb28b-e8b2-40d1-ade2-c117de080ce1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.827 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.827 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Ensure instance console log exists: /var/lib/nova/instances/280fb28b-e8b2-40d1-ade2-c117de080ce1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.828 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.828 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:07 compute-1 nova_compute[226101]: 2025-12-06 08:26:07.828 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:07 compute-1 sudo[316704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:07 compute-1 sudo[316704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:07 compute-1 sudo[316704]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:07 compute-1 sudo[316729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:26:08 compute-1 sudo[316729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:08 compute-1 sudo[316729]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:08 compute-1 nova_compute[226101]: 2025-12-06 08:26:08.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:08 compute-1 nova_compute[226101]: 2025-12-06 08:26:08.593 226109 DEBUG nova.network.neutron [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Successfully updated port: e0c4d35f-9815-42ef-889d-8547f1c32618 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:26:08 compute-1 nova_compute[226101]: 2025-12-06 08:26:08.615 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "refresh_cache-280fb28b-e8b2-40d1-ade2-c117de080ce1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:26:08 compute-1 nova_compute[226101]: 2025-12-06 08:26:08.615 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquired lock "refresh_cache-280fb28b-e8b2-40d1-ade2-c117de080ce1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:26:08 compute-1 nova_compute[226101]: 2025-12-06 08:26:08.616 226109 DEBUG nova.network.neutron [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:26:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:08.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:08.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:08 compute-1 nova_compute[226101]: 2025-12-06 08:26:08.716 226109 DEBUG nova.compute.manager [req-fa95224e-23df-424a-95e4-27740e80a100 req-dc9785d7-775e-415c-99bf-ec03543ae270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Received event network-changed-e0c4d35f-9815-42ef-889d-8547f1c32618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:08 compute-1 nova_compute[226101]: 2025-12-06 08:26:08.717 226109 DEBUG nova.compute.manager [req-fa95224e-23df-424a-95e4-27740e80a100 req-dc9785d7-775e-415c-99bf-ec03543ae270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Refreshing instance network info cache due to event network-changed-e0c4d35f-9815-42ef-889d-8547f1c32618. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:26:08 compute-1 nova_compute[226101]: 2025-12-06 08:26:08.717 226109 DEBUG oslo_concurrency.lockutils [req-fa95224e-23df-424a-95e4-27740e80a100 req-dc9785d7-775e-415c-99bf-ec03543ae270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-280fb28b-e8b2-40d1-ade2-c117de080ce1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:26:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:26:08 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:26:08 compute-1 ceph-mon[81689]: pgmap v4017: 305 pgs: 305 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Dec 06 08:26:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4278581340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:08 compute-1 nova_compute[226101]: 2025-12-06 08:26:08.869 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:09 compute-1 nova_compute[226101]: 2025-12-06 08:26:09.085 226109 DEBUG nova.network.neutron [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:26:09 compute-1 sshd-session[316754]: Received disconnect from 154.219.116.39 port 39326:11: Bye Bye [preauth]
Dec 06 08:26:09 compute-1 sshd-session[316754]: Disconnected from authenticating user root 154.219.116.39 port 39326 [preauth]
Dec 06 08:26:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3946932057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2751347995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/110990899' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:26:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/110990899' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.660 226109 DEBUG nova.network.neutron [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Updating instance_info_cache with network_info: [{"id": "e0c4d35f-9815-42ef-889d-8547f1c32618", "address": "fa:16:3e:d0:1d:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c4d35f-98", "ovs_interfaceid": "e0c4d35f-9815-42ef-889d-8547f1c32618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:26:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:10.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:10.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.680 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Releasing lock "refresh_cache-280fb28b-e8b2-40d1-ade2-c117de080ce1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.680 226109 DEBUG nova.compute.manager [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Instance network_info: |[{"id": "e0c4d35f-9815-42ef-889d-8547f1c32618", "address": "fa:16:3e:d0:1d:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c4d35f-98", "ovs_interfaceid": "e0c4d35f-9815-42ef-889d-8547f1c32618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.680 226109 DEBUG oslo_concurrency.lockutils [req-fa95224e-23df-424a-95e4-27740e80a100 req-dc9785d7-775e-415c-99bf-ec03543ae270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-280fb28b-e8b2-40d1-ade2-c117de080ce1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.681 226109 DEBUG nova.network.neutron [req-fa95224e-23df-424a-95e4-27740e80a100 req-dc9785d7-775e-415c-99bf-ec03543ae270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Refreshing network info cache for port e0c4d35f-9815-42ef-889d-8547f1c32618 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.683 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Start _get_guest_xml network_info=[{"id": "e0c4d35f-9815-42ef-889d-8547f1c32618", "address": "fa:16:3e:d0:1d:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c4d35f-98", "ovs_interfaceid": "e0c4d35f-9815-42ef-889d-8547f1c32618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'encryption_options': None, 'guest_format': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.688 226109 WARNING nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.694 226109 DEBUG nova.virt.libvirt.host [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.694 226109 DEBUG nova.virt.libvirt.host [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.701 226109 DEBUG nova.virt.libvirt.host [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.701 226109 DEBUG nova.virt.libvirt.host [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.702 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.702 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.703 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.703 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.703 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.703 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.703 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.704 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.704 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.704 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.704 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.704 226109 DEBUG nova.virt.hardware [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:26:10 compute-1 nova_compute[226101]: 2025-12-06 08:26:10.707 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:10 compute-1 ceph-mon[81689]: pgmap v4018: 305 pgs: 305 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Dec 06 08:26:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:26:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3155474753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.158 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.192 226109 DEBUG nova.storage.rbd_utils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.197 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:26:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1140979052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.672 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.674 226109 DEBUG nova.virt.libvirt.vif [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:26:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1049833208',display_name='tempest-TestServerMultinode-server-1049833208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1049833208',id=222,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7102dcd5b58d4dec801a71dacc60eaaf',ramdisk_id='',reservation_id='r-bb6ovva8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1864301627',owner_user_name='tempest-TestServerMultinode-1864301627-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:26:06Z,user_data=None,user_id='4989b6252b64443aaec21b075dbc29d9',uuid=280fb28b-e8b2-40d1-ade2-c117de080ce1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0c4d35f-9815-42ef-889d-8547f1c32618", "address": "fa:16:3e:d0:1d:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c4d35f-98", "ovs_interfaceid": "e0c4d35f-9815-42ef-889d-8547f1c32618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.674 226109 DEBUG nova.network.os_vif_util [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converting VIF {"id": "e0c4d35f-9815-42ef-889d-8547f1c32618", "address": "fa:16:3e:d0:1d:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c4d35f-98", "ovs_interfaceid": "e0c4d35f-9815-42ef-889d-8547f1c32618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.675 226109 DEBUG nova.network.os_vif_util [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:1d:05,bridge_name='br-int',has_traffic_filtering=True,id=e0c4d35f-9815-42ef-889d-8547f1c32618,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c4d35f-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.677 226109 DEBUG nova.objects.instance [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lazy-loading 'pci_devices' on Instance uuid 280fb28b-e8b2-40d1-ade2-c117de080ce1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:26:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3155474753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1140979052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.812 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:26:11 compute-1 nova_compute[226101]:   <uuid>280fb28b-e8b2-40d1-ade2-c117de080ce1</uuid>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   <name>instance-000000de</name>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   <memory>131072</memory>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   <vcpu>1</vcpu>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   <metadata>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <nova:name>tempest-TestServerMultinode-server-1049833208</nova:name>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <nova:creationTime>2025-12-06 08:26:10</nova:creationTime>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <nova:flavor name="m1.nano">
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <nova:memory>128</nova:memory>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <nova:disk>1</nova:disk>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <nova:swap>0</nova:swap>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       </nova:flavor>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <nova:owner>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <nova:user uuid="4989b6252b64443aaec21b075dbc29d9">tempest-TestServerMultinode-1864301627-project-admin</nova:user>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <nova:project uuid="7102dcd5b58d4dec801a71dacc60eaaf">tempest-TestServerMultinode-1864301627</nova:project>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       </nova:owner>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <nova:ports>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <nova:port uuid="e0c4d35f-9815-42ef-889d-8547f1c32618">
Dec 06 08:26:11 compute-1 nova_compute[226101]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         </nova:port>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       </nova:ports>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     </nova:instance>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   </metadata>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   <sysinfo type="smbios">
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <system>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <entry name="serial">280fb28b-e8b2-40d1-ade2-c117de080ce1</entry>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <entry name="uuid">280fb28b-e8b2-40d1-ade2-c117de080ce1</entry>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     </system>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   </sysinfo>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   <os>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <boot dev="hd"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <smbios mode="sysinfo"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   </os>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   <features>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <acpi/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <apic/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <vmcoreinfo/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   </features>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   <clock offset="utc">
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <timer name="hpet" present="no"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   </clock>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   <cpu mode="custom" match="exact">
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <model>Nehalem</model>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   </cpu>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   <devices>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <disk type="network" device="disk">
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/280fb28b-e8b2-40d1-ade2-c117de080ce1_disk">
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       </source>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <target dev="vda" bus="virtio"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <disk type="network" device="cdrom">
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <driver type="raw" cache="none"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <source protocol="rbd" name="vms/280fb28b-e8b2-40d1-ade2-c117de080ce1_disk.config">
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       </source>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <auth username="openstack">
Dec 06 08:26:11 compute-1 nova_compute[226101]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       </auth>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <target dev="sda" bus="sata"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     </disk>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <interface type="ethernet">
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <mac address="fa:16:3e:d0:1d:05"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <mtu size="1442"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <target dev="tape0c4d35f-98"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     </interface>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <serial type="pty">
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <log file="/var/lib/nova/instances/280fb28b-e8b2-40d1-ade2-c117de080ce1/console.log" append="off"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     </serial>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <video>
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <model type="virtio"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     </video>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <input type="tablet" bus="usb"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <rng model="virtio">
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     </rng>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <controller type="usb" index="0"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     <memballoon model="virtio">
Dec 06 08:26:11 compute-1 nova_compute[226101]:       <stats period="10"/>
Dec 06 08:26:11 compute-1 nova_compute[226101]:     </memballoon>
Dec 06 08:26:11 compute-1 nova_compute[226101]:   </devices>
Dec 06 08:26:11 compute-1 nova_compute[226101]: </domain>
Dec 06 08:26:11 compute-1 nova_compute[226101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.813 226109 DEBUG nova.compute.manager [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Preparing to wait for external event network-vif-plugged-e0c4d35f-9815-42ef-889d-8547f1c32618 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.814 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.814 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.814 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.815 226109 DEBUG nova.virt.libvirt.vif [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:26:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1049833208',display_name='tempest-TestServerMultinode-server-1049833208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1049833208',id=222,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7102dcd5b58d4dec801a71dacc60eaaf',ramdisk_id='',reservation_id='r-bb6ovva8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1864301627',owner_user_name='tempest-TestServerMultinode-1864301627-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:26:06Z,user_data=None,user_id='4989b6252b64443aaec21b075dbc29d9',uuid=280fb28b-e8b2-40d1-ade2-c117de080ce1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0c4d35f-9815-42ef-889d-8547f1c32618", "address": "fa:16:3e:d0:1d:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c4d35f-98", "ovs_interfaceid": "e0c4d35f-9815-42ef-889d-8547f1c32618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.816 226109 DEBUG nova.network.os_vif_util [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converting VIF {"id": "e0c4d35f-9815-42ef-889d-8547f1c32618", "address": "fa:16:3e:d0:1d:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c4d35f-98", "ovs_interfaceid": "e0c4d35f-9815-42ef-889d-8547f1c32618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.817 226109 DEBUG nova.network.os_vif_util [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:1d:05,bridge_name='br-int',has_traffic_filtering=True,id=e0c4d35f-9815-42ef-889d-8547f1c32618,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c4d35f-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.817 226109 DEBUG os_vif [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:1d:05,bridge_name='br-int',has_traffic_filtering=True,id=e0c4d35f-9815-42ef-889d-8547f1c32618,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c4d35f-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.819 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.819 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.820 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.824 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.825 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0c4d35f-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.825 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0c4d35f-98, col_values=(('external_ids', {'iface-id': 'e0c4d35f-9815-42ef-889d-8547f1c32618', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d0:1d:05', 'vm-uuid': '280fb28b-e8b2-40d1-ade2-c117de080ce1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.827 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:11 compute-1 NetworkManager[49031]: <info>  [1765009571.8283] manager: (tape0c4d35f-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.829 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.837 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.839 226109 INFO os_vif [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:1d:05,bridge_name='br-int',has_traffic_filtering=True,id=e0c4d35f-9815-42ef-889d-8547f1c32618,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c4d35f-98')
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.957 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.959 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.959 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] No VIF found with MAC fa:16:3e:d0:1d:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:26:11 compute-1 nova_compute[226101]: 2025-12-06 08:26:11.960 226109 INFO nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Using config drive
Dec 06 08:26:12 compute-1 nova_compute[226101]: 2025-12-06 08:26:12.001 226109 DEBUG nova.storage.rbd_utils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:12 compute-1 nova_compute[226101]: 2025-12-06 08:26:12.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:12.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:12 compute-1 ceph-mon[81689]: pgmap v4019: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 5.3 MiB/s wr, 86 op/s
Dec 06 08:26:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:13 compute-1 sshd-session[316821]: Received disconnect from 101.100.194.199 port 47864:11: Bye Bye [preauth]
Dec 06 08:26:13 compute-1 sshd-session[316821]: Disconnected from authenticating user root 101.100.194.199 port 47864 [preauth]
Dec 06 08:26:13 compute-1 sshd-session[316818]: Received disconnect from 136.112.8.45 port 52992:11: Bye Bye [preauth]
Dec 06 08:26:13 compute-1 sshd-session[316818]: Disconnected from authenticating user root 136.112.8.45 port 52992 [preauth]
Dec 06 08:26:13 compute-1 nova_compute[226101]: 2025-12-06 08:26:13.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:13 compute-1 nova_compute[226101]: 2025-12-06 08:26:13.634 226109 INFO nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Creating config drive at /var/lib/nova/instances/280fb28b-e8b2-40d1-ade2-c117de080ce1/disk.config
Dec 06 08:26:13 compute-1 nova_compute[226101]: 2025-12-06 08:26:13.641 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/280fb28b-e8b2-40d1-ade2-c117de080ce1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9t6wek_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:13 compute-1 nova_compute[226101]: 2025-12-06 08:26:13.774 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/280fb28b-e8b2-40d1-ade2-c117de080ce1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9t6wek_" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:13 compute-1 nova_compute[226101]: 2025-12-06 08:26:13.807 226109 DEBUG nova.storage.rbd_utils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:13 compute-1 nova_compute[226101]: 2025-12-06 08:26:13.814 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/280fb28b-e8b2-40d1-ade2-c117de080ce1/disk.config 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:13 compute-1 nova_compute[226101]: 2025-12-06 08:26:13.870 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:13 compute-1 nova_compute[226101]: 2025-12-06 08:26:13.990 226109 DEBUG oslo_concurrency.processutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/280fb28b-e8b2-40d1-ade2-c117de080ce1/disk.config 280fb28b-e8b2-40d1-ade2-c117de080ce1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:13 compute-1 nova_compute[226101]: 2025-12-06 08:26:13.991 226109 INFO nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Deleting local config drive /var/lib/nova/instances/280fb28b-e8b2-40d1-ade2-c117de080ce1/disk.config because it was imported into RBD.
Dec 06 08:26:14 compute-1 kernel: tape0c4d35f-98: entered promiscuous mode
Dec 06 08:26:14 compute-1 NetworkManager[49031]: <info>  [1765009574.0457] manager: (tape0c4d35f-98): new Tun device (/org/freedesktop/NetworkManager/Devices/414)
Dec 06 08:26:14 compute-1 ovn_controller[130279]: 2025-12-06T08:26:14Z|00891|binding|INFO|Claiming lport e0c4d35f-9815-42ef-889d-8547f1c32618 for this chassis.
Dec 06 08:26:14 compute-1 ovn_controller[130279]: 2025-12-06T08:26:14Z|00892|binding|INFO|e0c4d35f-9815-42ef-889d-8547f1c32618: Claiming fa:16:3e:d0:1d:05 10.100.0.11
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.047 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.051 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.054 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.062 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:1d:05 10.100.0.11'], port_security=['fa:16:3e:d0:1d:05 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '280fb28b-e8b2-40d1-ade2-c117de080ce1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7102dcd5b58d4dec801a71dacc60eaaf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '542d1f4a-a306-4d0a-9719-694ed1b1f413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d50a2125-562d-4707-b67c-0f0d40fd3bbc, chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=e0c4d35f-9815-42ef-889d-8547f1c32618) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.063 139580 INFO neutron.agent.ovn.metadata.agent [-] Port e0c4d35f-9815-42ef-889d-8547f1c32618 in datapath 0cfa750a-9a18-4c24-bbf9-75517f0157ee bound to our chassis
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.064 139580 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cfa750a-9a18-4c24-bbf9-75517f0157ee
Dec 06 08:26:14 compute-1 systemd-udevd[316895]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.077 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f92419-d9ae-41f0-8cf8-c7a37064cc71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.078 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0cfa750a-91 in ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.080 229936 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0cfa750a-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.080 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[32067d42-c703-47ff-9201-430bfd50403b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 systemd-machined[190302]: New machine qemu-102-instance-000000de.
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.082 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[72fa9654-3828-4937-8db9-48de1042b63b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 NetworkManager[49031]: <info>  [1765009574.0903] device (tape0c4d35f-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:26:14 compute-1 NetworkManager[49031]: <info>  [1765009574.0912] device (tape0c4d35f-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.095 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[0f435f4e-0f6e-4026-96a2-98a1cfdec23f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 systemd[1]: Started Virtual Machine qemu-102-instance-000000de.
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.120 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.121 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[623f5331-8c95-42d0-a254-8f75ba76b5ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 ovn_controller[130279]: 2025-12-06T08:26:14Z|00893|binding|INFO|Setting lport e0c4d35f-9815-42ef-889d-8547f1c32618 ovn-installed in OVS
Dec 06 08:26:14 compute-1 ovn_controller[130279]: 2025-12-06T08:26:14Z|00894|binding|INFO|Setting lport e0c4d35f-9815-42ef-889d-8547f1c32618 up in Southbound
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.127 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.148 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[67d5cacf-95f4-4161-bf39-675f5002a5c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 systemd-udevd[316899]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.155 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[524824ad-8b93-4d6e-9009-78f0538e5e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 NetworkManager[49031]: <info>  [1765009574.1560] manager: (tap0cfa750a-90): new Veth device (/org/freedesktop/NetworkManager/Devices/415)
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.195 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[5d55b118-4bda-4d45-af66-e6eb0f9185c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.199 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[51fc1ca7-4056-4135-a9d9-6abd0868fc80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 NetworkManager[49031]: <info>  [1765009574.2240] device (tap0cfa750a-90): carrier: link connected
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.234 229991 DEBUG oslo.privsep.daemon [-] privsep: reply[75502ac5-2b33-4919-89c6-de79328097c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.255 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[e45770c0-cb3c-41d0-9c37-895426542ca5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cfa750a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:93:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 985372, 'reachable_time': 22605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316928, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.280 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f8807695-c3cc-41a0-8c8d-2fc37cd71698]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:93e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 985372, 'tstamp': 985372}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316929, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.301 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[66e565db-0a9f-4cc5-ab44-eeea8195063a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cfa750a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:93:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 985372, 'reachable_time': 22605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316930, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.349 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[cd87e786-d390-4d7f-8090-984516021013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.412 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea760a5-f7e3-4e72-9125-b1c61bde72b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.414 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cfa750a-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.414 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.415 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cfa750a-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:14 compute-1 NetworkManager[49031]: <info>  [1765009574.4173] manager: (tap0cfa750a-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Dec 06 08:26:14 compute-1 kernel: tap0cfa750a-90: entered promiscuous mode
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.416 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.420 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cfa750a-90, col_values=(('external_ids', {'iface-id': '9c309117-cb51-4d66-b962-5f9b07ea29e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:14 compute-1 ovn_controller[130279]: 2025-12-06T08:26:14Z|00895|binding|INFO|Releasing lport 9c309117-cb51-4d66-b962-5f9b07ea29e2 from this chassis (sb_readonly=0)
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.444 139580 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0cfa750a-9a18-4c24-bbf9-75517f0157ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0cfa750a-9a18-4c24-bbf9-75517f0157ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.444 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.446 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[63afa9b2-870c-40ab-a826-5a9aeabef3e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.447 139580 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: global
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     log         /dev/log local0 debug
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     log-tag     haproxy-metadata-proxy-0cfa750a-9a18-4c24-bbf9-75517f0157ee
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     user        root
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     group       root
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     maxconn     1024
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     pidfile     /var/lib/neutron/external/pids/0cfa750a-9a18-4c24-bbf9-75517f0157ee.pid.haproxy
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     daemon
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: defaults
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     log global
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     mode http
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     option httplog
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     option dontlognull
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     option http-server-close
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     option forwardfor
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     retries                 3
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     timeout http-request    30s
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     timeout connect         30s
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     timeout client          32s
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     timeout server          32s
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     timeout http-keep-alive 30s
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: listen listener
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     bind 169.254.169.254:80
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:     http-request add-header X-OVN-Network-ID 0cfa750a-9a18-4c24-bbf9-75517f0157ee
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
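[annotation] The config dumped above is what the agent writes for this datapath: haproxy binds the link-local metadata address 169.254.169.254:80 inside the ovnmeta- namespace, forwards to the /var/lib/neutron/metadata_proxy unix socket, and tags each request with the network UUID via X-OVN-Network-ID. A minimal Python sketch that renders an equivalent config (the template and helper are illustrative, not Neutron's actual code):

```python
# Minimal sketch (illustrative, not Neutron's template engine): render a
# haproxy config equivalent to the one dumped above for one network UUID.
CFG_TEMPLATE = """\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-{network_id}
    user        root
    group       root
    maxconn     1024
    pidfile     {pid_path}
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor
    retries                 3
    timeout http-request    30s
    timeout connect         30s
    timeout client          32s
    timeout server          32s
    timeout http-keep-alive 30s

listen listener
    bind 169.254.169.254:80
    server metadata {socket_path}
    http-request add-header X-OVN-Network-ID {network_id}
"""

def render_metadata_proxy_cfg(network_id, state_dir="/var/lib/neutron"):
    # haproxy treats the backend path as a unix-domain socket address.
    return CFG_TEMPLATE.format(
        network_id=network_id,
        pid_path="{}/external/pids/{}.pid.haproxy".format(state_dir, network_id),
        socket_path="{}/metadata_proxy".format(state_dir),
    )

print(render_metadata_proxy_cfg("0cfa750a-9a18-4c24-bbf9-75517f0157ee"))
```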
Dec 06 08:26:14 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:14.448 139580 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'env', 'PROCESS_TAG=haproxy-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0cfa750a-9a18-4c24-bbf9-75517f0157ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
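[annotation] The command logged above launches haproxy inside the network namespace through sudo and neutron-rootwrap. Stripped of that wrapper, the same spawn looks roughly like the following sketch (assumes root privileges and an existing ovnmeta-<uuid> namespace):

```python
# Sketch of the spawn logged above, minus the sudo/rootwrap wrapper.
import subprocess

def spawn_metadata_haproxy(network_id, cfg_path):
    cmd = [
        "ip", "netns", "exec", "ovnmeta-%s" % network_id,
        "env", "PROCESS_TAG=haproxy-%s" % network_id,
        "haproxy", "-f", cfg_path,
    ]
    # The rendered config contains "daemon", so haproxy forks and this
    # call returns once the master process has started its worker.
    subprocess.run(cmd, check=True)
```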
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.452 226109 DEBUG nova.network.neutron [req-fa95224e-23df-424a-95e4-27740e80a100 req-dc9785d7-775e-415c-99bf-ec03543ae270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Updated VIF entry in instance network info cache for port e0c4d35f-9815-42ef-889d-8547f1c32618. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.452 226109 DEBUG nova.network.neutron [req-fa95224e-23df-424a-95e4-27740e80a100 req-dc9785d7-775e-415c-99bf-ec03543ae270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Updating instance_info_cache with network_info: [{"id": "e0c4d35f-9815-42ef-889d-8547f1c32618", "address": "fa:16:3e:d0:1d:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c4d35f-98", "ovs_interfaceid": "e0c4d35f-9815-42ef-889d-8547f1c32618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.471 226109 DEBUG oslo_concurrency.lockutils [req-fa95224e-23df-424a-95e4-27740e80a100 req-dc9785d7-775e-415c-99bf-ec03543ae270 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-280fb28b-e8b2-40d1-ade2-c117de080ce1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
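[annotation] The network_info cache entry logged above is a plain JSON list of VIFs. For illustration, extracting the fixed IPs from such an entry (sketch; field names taken from the logged payload):

```python
# Sketch: pull the fixed IPs out of a cached network_info entry like the
# JSON list logged after "network_info:" above.
import json

def fixed_ips(network_info_json):
    vifs = json.loads(network_info_json)
    return [
        ip["address"]
        for vif in vifs
        for subnet in vif["network"]["subnets"]
        for ip in subnet["ips"]
        if ip["type"] == "fixed"
    ]

# For the entry above this yields ["10.100.0.11"].
```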
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:14.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:14.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:14 compute-1 podman[316964]: 2025-12-06 08:26:14.822148972 +0000 UTC m=+0.050519755 container create a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 08:26:14 compute-1 systemd[1]: Started libpod-conmon-a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7.scope.
Dec 06 08:26:14 compute-1 podman[316964]: 2025-12-06 08:26:14.795163338 +0000 UTC m=+0.023534121 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:26:14 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:26:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655b8cab2c0cf17c003eac9a53452864ba755e1b65d773b4c1a306c241334545/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:14 compute-1 podman[316964]: 2025-12-06 08:26:14.912916636 +0000 UTC m=+0.141287449 container init a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 08:26:14 compute-1 podman[316964]: 2025-12-06 08:26:14.922185424 +0000 UTC m=+0.150556207 container start a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 08:26:14 compute-1 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[317013]: [NOTICE]   (317024) : New worker (317027) forked
Dec 06 08:26:14 compute-1 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[317013]: [NOTICE]   (317024) : Loading success.
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.981 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009574.980678, 280fb28b-e8b2-40d1-ade2-c117de080ce1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:26:14 compute-1 nova_compute[226101]: 2025-12-06 08:26:14.982 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] VM Started (Lifecycle Event)
Dec 06 08:26:15 compute-1 nova_compute[226101]: 2025-12-06 08:26:15.003 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:26:15 compute-1 nova_compute[226101]: 2025-12-06 08:26:15.007 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009574.9809797, 280fb28b-e8b2-40d1-ade2-c117de080ce1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:26:15 compute-1 nova_compute[226101]: 2025-12-06 08:26:15.007 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] VM Paused (Lifecycle Event)
Dec 06 08:26:15 compute-1 nova_compute[226101]: 2025-12-06 08:26:15.199 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:26:15 compute-1 nova_compute[226101]: 2025-12-06 08:26:15.203 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:26:15 compute-1 nova_compute[226101]: 2025-12-06 08:26:15.224 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] During sync_power_state the instance has a pending task (spawning). Skip.
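[annotation] The "Paused" lifecycle event arrives while the instance is still building (task_state spawning), so the power-state sync is skipped. Reduced to the behaviour visible here (an illustrative sketch, not Nova's actual implementation):

```python
# Illustrative reduction of the guard above: a lifecycle-driven
# power-state sync is a no-op while a task such as 'spawning' is pending.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Instance:
    uuid: str
    task_state: Optional[str]
    power_state: int

def sync_power_state(instance, vm_power_state):
    if instance.task_state is not None:
        # Matches "During sync_power_state the instance has a pending
        # task (spawning). Skip." above.
        print("pending task %r; skipping sync" % instance.task_state)
        return
    instance.power_state = vm_power_state

sync_power_state(
    Instance("280fb28b-e8b2-40d1-ade2-c117de080ce1", "spawning", 0),
    3,  # VM power_state reported in the log line above
)
```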
Dec 06 08:26:15 compute-1 ceph-mon[81689]: pgmap v4020: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 5.3 MiB/s wr, 86 op/s
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.196 226109 DEBUG nova.compute.manager [req-35d9d79c-bd4b-443b-8e24-27179445d426 req-5e3026ae-cee1-4d36-98eb-13571ffdaf07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Received event network-vif-plugged-e0c4d35f-9815-42ef-889d-8547f1c32618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.196 226109 DEBUG oslo_concurrency.lockutils [req-35d9d79c-bd4b-443b-8e24-27179445d426 req-5e3026ae-cee1-4d36-98eb-13571ffdaf07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.197 226109 DEBUG oslo_concurrency.lockutils [req-35d9d79c-bd4b-443b-8e24-27179445d426 req-5e3026ae-cee1-4d36-98eb-13571ffdaf07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.197 226109 DEBUG oslo_concurrency.lockutils [req-35d9d79c-bd4b-443b-8e24-27179445d426 req-5e3026ae-cee1-4d36-98eb-13571ffdaf07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.197 226109 DEBUG nova.compute.manager [req-35d9d79c-bd4b-443b-8e24-27179445d426 req-5e3026ae-cee1-4d36-98eb-13571ffdaf07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Processing event network-vif-plugged-e0c4d35f-9815-42ef-889d-8547f1c32618 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.198 226109 DEBUG nova.compute.manager [req-35d9d79c-bd4b-443b-8e24-27179445d426 req-5e3026ae-cee1-4d36-98eb-13571ffdaf07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Received event network-vif-plugged-e0c4d35f-9815-42ef-889d-8547f1c32618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.198 226109 DEBUG oslo_concurrency.lockutils [req-35d9d79c-bd4b-443b-8e24-27179445d426 req-5e3026ae-cee1-4d36-98eb-13571ffdaf07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.198 226109 DEBUG oslo_concurrency.lockutils [req-35d9d79c-bd4b-443b-8e24-27179445d426 req-5e3026ae-cee1-4d36-98eb-13571ffdaf07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.198 226109 DEBUG oslo_concurrency.lockutils [req-35d9d79c-bd4b-443b-8e24-27179445d426 req-5e3026ae-cee1-4d36-98eb-13571ffdaf07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.199 226109 DEBUG nova.compute.manager [req-35d9d79c-bd4b-443b-8e24-27179445d426 req-5e3026ae-cee1-4d36-98eb-13571ffdaf07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] No waiting events found dispatching network-vif-plugged-e0c4d35f-9815-42ef-889d-8547f1c32618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.199 226109 WARNING nova.compute.manager [req-35d9d79c-bd4b-443b-8e24-27179445d426 req-5e3026ae-cee1-4d36-98eb-13571ffdaf07 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Received unexpected event network-vif-plugged-e0c4d35f-9815-42ef-889d-8547f1c32618 for instance with vm_state building and task_state spawning.
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.200 226109 DEBUG nova.compute.manager [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
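[annotation] The exchange above shows the wait/deliver pattern: the build thread registers an expected network-vif-plugged event and blocks until Neutron delivers it; when a duplicate event arrives with nobody waiting, the "Received unexpected event" warning is logged instead. A toy model of that pattern (assumption: this mirrors only the behaviour visible here, not Nova's implementation):

```python
# Toy model of the wait/deliver pattern visible in the log above.
import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}

    def prepare(self, name):
        # Called by the build thread before plugging the VIF.
        with self._lock:
            ev = threading.Event()
            self._events[name] = ev
            return ev

    def deliver(self, name):
        # Called when Neutron reports the event. Returns False when
        # nobody is waiting -- the "Received unexpected event" case.
        with self._lock:
            ev = self._events.pop(name, None)
        if ev is None:
            return False
        ev.set()
        return True

events = InstanceEvents()
waiter = events.prepare("network-vif-plugged-e0c4d35f")
events.deliver("network-vif-plugged-e0c4d35f")  # from the callback side
waiter.wait(timeout=300)
```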
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.205 226109 DEBUG nova.virt.driver [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] Emitting event <LifecycleEvent: 1765009576.203564, 280fb28b-e8b2-40d1-ade2-c117de080ce1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.206 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] VM Resumed (Lifecycle Event)
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.208 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.211 226109 INFO nova.virt.libvirt.driver [-] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Instance spawned successfully.
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.211 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.231 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.241 226109 DEBUG nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.246 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.247 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.247 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.248 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.248 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.249 226109 DEBUG nova.virt.libvirt.driver [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.269 226109 INFO nova.compute.manager [None req-cf330639-f7eb-44f1-bbe7-a5f596ba5e7a - - - - - -] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.300 226109 INFO nova.compute.manager [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Took 9.24 seconds to spawn the instance on the hypervisor.
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.301 226109 DEBUG nova.compute.manager [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.363 226109 INFO nova.compute.manager [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Took 10.29 seconds to build instance.
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.414 226109 DEBUG oslo_concurrency.lockutils [None req-c66f1b9f-f98b-4ecf-8169-e4af10bfe2c4 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
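[annotation] The Acquiring/acquired/released triplets throughout come from oslo.concurrency's lockutils, which serializes build and terminate work per instance UUID. Minimal usage of the same primitive (requires oslo.concurrency installed; the lock name matches the log):

```python
# Minimal use of the lock whose acquire/release shows up repeatedly above.
from oslo_concurrency import lockutils

with lockutils.lock("280fb28b-e8b2-40d1-ade2-c117de080ce1"):
    pass  # build or terminate work for this instance, serialized
```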
Dec 06 08:26:16 compute-1 sshd-session[316940]: Received disconnect from 154.209.4.183 port 48444:11: Bye Bye [preauth]
Dec 06 08:26:16 compute-1 sshd-session[316940]: Disconnected from authenticating user root 154.209.4.183 port 48444 [preauth]
Dec 06 08:26:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:16.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:16.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:16 compute-1 nova_compute[226101]: 2025-12-06 08:26:16.829 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:17 compute-1 ceph-mon[81689]: pgmap v4021: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.3 MiB/s wr, 144 op/s
Dec 06 08:26:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1269390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/920395903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:18.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:18.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:18 compute-1 nova_compute[226101]: 2025-12-06 08:26:18.872 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:19 compute-1 ceph-mon[81689]: pgmap v4022: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 163 op/s
Dec 06 08:26:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:20.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 08:26:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:20.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
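[annotation] The paired beast entries from 192.168.122.100 and .102 repeat every two seconds as anonymous HEAD / requests, which reads like load-balancer health checking against radosgw. An equivalent probe using only the standard library (sketch; the listen port is an assumption, since it does not appear in these lines):

```python
# Stdlib equivalent of the logged "HEAD / HTTP/1.0" probes.
import http.client

def rgw_alive(host, port=8080, timeout=2.0):  # port assumed, not logged
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().status == 200
    except (OSError, http.client.HTTPException):
        return False
    finally:
        conn.close()

print(rgw_alive("compute-1"))
```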
Dec 06 08:26:21 compute-1 ceph-mon[81689]: pgmap v4023: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 151 op/s
Dec 06 08:26:21 compute-1 nova_compute[226101]: 2025-12-06 08:26:21.832 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:22.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:22.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:23 compute-1 ceph-mon[81689]: pgmap v4024: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.8 MiB/s wr, 271 op/s
Dec 06 08:26:23 compute-1 sshd-session[317036]: Received disconnect from 45.120.216.232 port 56920:11: Bye Bye [preauth]
Dec 06 08:26:23 compute-1 sshd-session[317036]: Disconnected from authenticating user root 45.120.216.232 port 56920 [preauth]
Dec 06 08:26:23 compute-1 nova_compute[226101]: 2025-12-06 08:26:23.874 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:24.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:24.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:25 compute-1 ceph-mon[81689]: pgmap v4025: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 25 KiB/s wr, 197 op/s
Dec 06 08:26:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1229264166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.335 226109 DEBUG oslo_concurrency.lockutils [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "280fb28b-e8b2-40d1-ade2-c117de080ce1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.335 226109 DEBUG oslo_concurrency.lockutils [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.335 226109 DEBUG oslo_concurrency.lockutils [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.336 226109 DEBUG oslo_concurrency.lockutils [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.336 226109 DEBUG oslo_concurrency.lockutils [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.337 226109 INFO nova.compute.manager [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Terminating instance
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.338 226109 DEBUG nova.compute.manager [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:26:26 compute-1 kernel: tape0c4d35f-98 (unregistering): left promiscuous mode
Dec 06 08:26:26 compute-1 NetworkManager[49031]: <info>  [1765009586.3773] device (tape0c4d35f-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:26:26 compute-1 ovn_controller[130279]: 2025-12-06T08:26:26Z|00896|binding|INFO|Releasing lport e0c4d35f-9815-42ef-889d-8547f1c32618 from this chassis (sb_readonly=0)
Dec 06 08:26:26 compute-1 ovn_controller[130279]: 2025-12-06T08:26:26Z|00897|binding|INFO|Setting lport e0c4d35f-9815-42ef-889d-8547f1c32618 down in Southbound
Dec 06 08:26:26 compute-1 ovn_controller[130279]: 2025-12-06T08:26:26Z|00898|binding|INFO|Removing iface tape0c4d35f-98 ovn-installed in OVS
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.396 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.400 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:1d:05 10.100.0.11'], port_security=['fa:16:3e:d0:1d:05 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '280fb28b-e8b2-40d1-ade2-c117de080ce1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7102dcd5b58d4dec801a71dacc60eaaf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '542d1f4a-a306-4d0a-9719-694ed1b1f413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d50a2125-562d-4707-b67c-0f0d40fd3bbc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>], logical_port=e0c4d35f-9815-42ef-889d-8547f1c32618) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f2f83fc7880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.402 139580 INFO neutron.agent.ovn.metadata.agent [-] Port e0c4d35f-9815-42ef-889d-8547f1c32618 in datapath 0cfa750a-9a18-4c24-bbf9-75517f0157ee unbound from our chassis
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.403 139580 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cfa750a-9a18-4c24-bbf9-75517f0157ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.404 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[95170ce4-0e18-4a6d-814d-0cbf1dd24923]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.405 139580 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee namespace which is not needed anymore
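[annotation] With the last VIF unbound, the agent tears down the metadata namespace. The CLI equivalent of that cleanup (sketch; the agent itself works through privsep netlink calls, as the replies above suggest, rather than the ip binary):

```python
# CLI equivalent of the namespace teardown logged above (requires root).
import subprocess

subprocess.run(
    ["ip", "netns", "delete", "ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee"],
    check=True,
)
```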
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.416 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:26 compute-1 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000de.scope: Deactivated successfully.
Dec 06 08:26:26 compute-1 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000de.scope: Consumed 11.280s CPU time.
Dec 06 08:26:26 compute-1 systemd-machined[190302]: Machine qemu-102-instance-000000de terminated.
Dec 06 08:26:26 compute-1 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[317013]: [NOTICE]   (317024) : haproxy version is 2.8.14-c23fe91
Dec 06 08:26:26 compute-1 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[317013]: [NOTICE]   (317024) : path to executable is /usr/sbin/haproxy
Dec 06 08:26:26 compute-1 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[317013]: [WARNING]  (317024) : Exiting Master process...
Dec 06 08:26:26 compute-1 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[317013]: [WARNING]  (317024) : Exiting Master process...
Dec 06 08:26:26 compute-1 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[317013]: [ALERT]    (317024) : Current worker (317027) exited with code 143 (Terminated)
Dec 06 08:26:26 compute-1 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[317013]: [WARNING]  (317024) : All workers exited. Exiting... (0)
Dec 06 08:26:26 compute-1 systemd[1]: libpod-a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7.scope: Deactivated successfully.
Dec 06 08:26:26 compute-1 podman[317059]: 2025-12-06 08:26:26.542478724 +0000 UTC m=+0.044851594 container died a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:26:26 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7-userdata-shm.mount: Deactivated successfully.
Dec 06 08:26:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-655b8cab2c0cf17c003eac9a53452864ba755e1b65d773b4c1a306c241334545-merged.mount: Deactivated successfully.
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.573 226109 INFO nova.virt.libvirt.driver [-] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Instance destroyed successfully.
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.574 226109 DEBUG nova.objects.instance [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lazy-loading 'resources' on Instance uuid 280fb28b-e8b2-40d1-ade2-c117de080ce1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:26:26 compute-1 podman[317059]: 2025-12-06 08:26:26.58896742 +0000 UTC m=+0.091340290 container cleanup a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 08:26:26 compute-1 systemd[1]: libpod-conmon-a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7.scope: Deactivated successfully.
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.603 226109 DEBUG nova.virt.libvirt.vif [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:26:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1049833208',display_name='tempest-TestServerMultinode-server-1049833208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1049833208',id=222,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:26:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7102dcd5b58d4dec801a71dacc60eaaf',ramdisk_id='',reservation_id='r-bb6ovva8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1864301627',owner_user_name='tempest-TestServerMultinode-1864301627-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:26:16Z,user_data=None,user_id='4989b6252b64443aaec21b075dbc29d9',uuid=280fb28b-e8b2-40d1-ade2-c117de080ce1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0c4d35f-9815-42ef-889d-8547f1c32618", "address": "fa:16:3e:d0:1d:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c4d35f-98", "ovs_interfaceid": "e0c4d35f-9815-42ef-889d-8547f1c32618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.604 226109 DEBUG nova.network.os_vif_util [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converting VIF {"id": "e0c4d35f-9815-42ef-889d-8547f1c32618", "address": "fa:16:3e:d0:1d:05", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c4d35f-98", "ovs_interfaceid": "e0c4d35f-9815-42ef-889d-8547f1c32618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.605 226109 DEBUG nova.network.os_vif_util [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:1d:05,bridge_name='br-int',has_traffic_filtering=True,id=e0c4d35f-9815-42ef-889d-8547f1c32618,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c4d35f-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.605 226109 DEBUG os_vif [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:1d:05,bridge_name='br-int',has_traffic_filtering=True,id=e0c4d35f-9815-42ef-889d-8547f1c32618,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c4d35f-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.607 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.607 226109 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0c4d35f-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.609 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.612 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.614 226109 INFO os_vif [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:1d:05,bridge_name='br-int',has_traffic_filtering=True,id=e0c4d35f-9815-42ef-889d-8547f1c32618,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c4d35f-98')
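[annotation] The DelPortCommand above removes the instance's tap interface from br-int over OVSDB. The ovs-vsctl equivalent (sketch; ovsdbapp speaks the OVSDB protocol directly instead of shelling out):

```python
# ovs-vsctl equivalent of DelPortCommand(port=tape0c4d35f-98,
# bridge=br-int, if_exists=True) as logged above.
import subprocess

subprocess.run(
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tape0c4d35f-98"],
    check=True,
)
```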
Dec 06 08:26:26 compute-1 podman[317100]: 2025-12-06 08:26:26.650110378 +0000 UTC m=+0.039221522 container remove a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.655 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc61acb-6989-4f12-9133-80a1a25c6d09]: (4, ('Sat Dec  6 08:26:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee (a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7)\na5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7\nSat Dec  6 08:26:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee (a5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7)\na5effd5ab83a5c16a349b4ce05a893b4ebd8f298dbb9ed223563986fb927a4c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
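[annotation] The privsep reply above carries the output of stopping and then deleting the proxy container. The same sequence expressed directly against podman (sketch; the captured output shows the real cleanup runs through a wrapper script):

```python
# Stop-then-delete sequence whose output is embedded in the reply above.
import subprocess

NAME = "neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee"
subprocess.run(["podman", "stop", NAME], check=True)
subprocess.run(["podman", "rm", NAME], check=True)
```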
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.658 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[98104e80-fb8c-4f8b-bb01-07720ef8678d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.659 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cfa750a-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.660 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:26 compute-1 kernel: tap0cfa750a-90: left promiscuous mode
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.673 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.674 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.676 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef96ad4-fca0-47f1-bab9-38fa1230500a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.688 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[9301016f-df6f-43c3-857e-3ac630c42a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.689 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[defd48b4-c03b-4252-a8ab-b4bf69cecd5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:26.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:26.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.705 229936 DEBUG oslo.privsep.daemon [-] privsep: reply[f86dd042-fa03-42a7-aa43-1bd66807239d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 985364, 'reachable_time': 22689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317131, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:26 compute-1 systemd[1]: run-netns-ovnmeta\x2d0cfa750a\x2d9a18\x2d4c24\x2dbbf9\x2d75517f0157ee.mount: Deactivated successfully.
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.709 139694 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:26:26 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:26.710 139694 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c735c6-7388-4c3b-87a2-fe684aeaa92a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.962 226109 INFO nova.virt.libvirt.driver [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Deleting instance files /var/lib/nova/instances/280fb28b-e8b2-40d1-ade2-c117de080ce1_del
Dec 06 08:26:26 compute-1 nova_compute[226101]: 2025-12-06 08:26:26.963 226109 INFO nova.virt.libvirt.driver [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Deletion of /var/lib/nova/instances/280fb28b-e8b2-40d1-ade2-c117de080ce1_del complete
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.037 226109 INFO nova.compute.manager [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Took 0.70 seconds to destroy the instance on the hypervisor.
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.037 226109 DEBUG oslo.service.loopingcall [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.038 226109 DEBUG nova.compute.manager [-] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.038 226109 DEBUG nova.network.neutron [-] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.060 226109 DEBUG nova.compute.manager [req-90b81782-409c-47b7-a37a-8354987f88f4 req-99570531-cdd4-46f1-9001-75777c8cdc51 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Received event network-vif-unplugged-e0c4d35f-9815-42ef-889d-8547f1c32618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.061 226109 DEBUG oslo_concurrency.lockutils [req-90b81782-409c-47b7-a37a-8354987f88f4 req-99570531-cdd4-46f1-9001-75777c8cdc51 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.061 226109 DEBUG oslo_concurrency.lockutils [req-90b81782-409c-47b7-a37a-8354987f88f4 req-99570531-cdd4-46f1-9001-75777c8cdc51 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.062 226109 DEBUG oslo_concurrency.lockutils [req-90b81782-409c-47b7-a37a-8354987f88f4 req-99570531-cdd4-46f1-9001-75777c8cdc51 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.062 226109 DEBUG nova.compute.manager [req-90b81782-409c-47b7-a37a-8354987f88f4 req-99570531-cdd4-46f1-9001-75777c8cdc51 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] No waiting events found dispatching network-vif-unplugged-e0c4d35f-9815-42ef-889d-8547f1c32618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.062 226109 DEBUG nova.compute.manager [req-90b81782-409c-47b7-a37a-8354987f88f4 req-99570531-cdd4-46f1-9001-75777c8cdc51 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Received event network-vif-unplugged-e0c4d35f-9815-42ef-889d-8547f1c32618 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:26:27 compute-1 ceph-mon[81689]: pgmap v4026: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.9 MiB/s wr, 268 op/s
Dec 06 08:26:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.879 226109 DEBUG nova.network.neutron [-] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.907 226109 INFO nova.compute.manager [-] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Took 0.87 seconds to deallocate network for instance.
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.951 226109 DEBUG oslo_concurrency.lockutils [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:27 compute-1 nova_compute[226101]: 2025-12-06 08:26:27.951 226109 DEBUG oslo_concurrency.lockutils [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:28 compute-1 nova_compute[226101]: 2025-12-06 08:26:28.009 226109 DEBUG oslo_concurrency.processutils [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:26:28 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3779432085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:28 compute-1 nova_compute[226101]: 2025-12-06 08:26:28.430 226109 DEBUG oslo_concurrency.processutils [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:28 compute-1 nova_compute[226101]: 2025-12-06 08:26:28.436 226109 DEBUG nova.compute.provider_tree [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:26:28 compute-1 nova_compute[226101]: 2025-12-06 08:26:28.451 226109 DEBUG nova.scheduler.client.report [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:26:28 compute-1 nova_compute[226101]: 2025-12-06 08:26:28.471 226109 DEBUG oslo_concurrency.lockutils [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:28 compute-1 nova_compute[226101]: 2025-12-06 08:26:28.497 226109 INFO nova.scheduler.client.report [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Deleted allocations for instance 280fb28b-e8b2-40d1-ade2-c117de080ce1
Dec 06 08:26:28 compute-1 nova_compute[226101]: 2025-12-06 08:26:28.553 226109 DEBUG oslo_concurrency.lockutils [None req-f1177378-04cd-463e-926d-0572f19c726d 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:28.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:28.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:28 compute-1 nova_compute[226101]: 2025-12-06 08:26:28.881 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:29 compute-1 nova_compute[226101]: 2025-12-06 08:26:29.139 226109 DEBUG nova.compute.manager [req-82afce14-11af-44a5-9de9-ef3de5372746 req-abcdb762-bcb1-4d46-a281-148e96e3c684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Received event network-vif-plugged-e0c4d35f-9815-42ef-889d-8547f1c32618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:29 compute-1 nova_compute[226101]: 2025-12-06 08:26:29.139 226109 DEBUG oslo_concurrency.lockutils [req-82afce14-11af-44a5-9de9-ef3de5372746 req-abcdb762-bcb1-4d46-a281-148e96e3c684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:29 compute-1 nova_compute[226101]: 2025-12-06 08:26:29.140 226109 DEBUG oslo_concurrency.lockutils [req-82afce14-11af-44a5-9de9-ef3de5372746 req-abcdb762-bcb1-4d46-a281-148e96e3c684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:29 compute-1 nova_compute[226101]: 2025-12-06 08:26:29.140 226109 DEBUG oslo_concurrency.lockutils [req-82afce14-11af-44a5-9de9-ef3de5372746 req-abcdb762-bcb1-4d46-a281-148e96e3c684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "280fb28b-e8b2-40d1-ade2-c117de080ce1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:29 compute-1 nova_compute[226101]: 2025-12-06 08:26:29.140 226109 DEBUG nova.compute.manager [req-82afce14-11af-44a5-9de9-ef3de5372746 req-abcdb762-bcb1-4d46-a281-148e96e3c684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] No waiting events found dispatching network-vif-plugged-e0c4d35f-9815-42ef-889d-8547f1c32618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:26:29 compute-1 nova_compute[226101]: 2025-12-06 08:26:29.140 226109 WARNING nova.compute.manager [req-82afce14-11af-44a5-9de9-ef3de5372746 req-abcdb762-bcb1-4d46-a281-148e96e3c684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Received unexpected event network-vif-plugged-e0c4d35f-9815-42ef-889d-8547f1c32618 for instance with vm_state deleted and task_state None.
Dec 06 08:26:29 compute-1 nova_compute[226101]: 2025-12-06 08:26:29.140 226109 DEBUG nova.compute.manager [req-82afce14-11af-44a5-9de9-ef3de5372746 req-abcdb762-bcb1-4d46-a281-148e96e3c684 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Received event network-vif-deleted-e0c4d35f-9815-42ef-889d-8547f1c32618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:29 compute-1 ceph-mon[81689]: pgmap v4027: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Dec 06 08:26:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3779432085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:30.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:30.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:31 compute-1 podman[317159]: 2025-12-06 08:26:31.063879519 +0000 UTC m=+0.049347404 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:26:31 compute-1 podman[317158]: 2025-12-06 08:26:31.068379299 +0000 UTC m=+0.056667269 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 08:26:31 compute-1 podman[317160]: 2025-12-06 08:26:31.095194738 +0000 UTC m=+0.077783866 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 06 08:26:31 compute-1 ceph-mon[81689]: pgmap v4028: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 225 op/s
Dec 06 08:26:31 compute-1 nova_compute[226101]: 2025-12-06 08:26:31.610 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:31 compute-1 sshd-session[317156]: Received disconnect from 124.18.141.70 port 42370:11: Bye Bye [preauth]
Dec 06 08:26:31 compute-1 sshd-session[317156]: Disconnected from authenticating user root 124.18.141.70 port 42370 [preauth]
Dec 06 08:26:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:32.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:33 compute-1 ceph-mon[81689]: pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 272 op/s
Dec 06 08:26:33 compute-1 nova_compute[226101]: 2025-12-06 08:26:33.883 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:34 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/992866140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:34.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:34.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:35 compute-1 ceph-mon[81689]: pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 923 KiB/s rd, 2.1 MiB/s wr, 152 op/s
Dec 06 08:26:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:35.603 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=108, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=107) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:26:35 compute-1 nova_compute[226101]: 2025-12-06 08:26:35.604 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:35 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:35.604 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:26:36 compute-1 nova_compute[226101]: 2025-12-06 08:26:36.612 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:36.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:36.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:37 compute-1 ceph-mon[81689]: pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 930 KiB/s rd, 2.1 MiB/s wr, 160 op/s
Dec 06 08:26:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:38.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:38.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:38 compute-1 nova_compute[226101]: 2025-12-06 08:26:38.919 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:38 compute-1 nova_compute[226101]: 2025-12-06 08:26:38.922 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:39 compute-1 ceph-mon[81689]: pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 132 KiB/s rd, 319 KiB/s wr, 89 op/s
Dec 06 08:26:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:40.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:40.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:41 compute-1 ceph-mon[81689]: pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 15 KiB/s wr, 54 op/s
Dec 06 08:26:41 compute-1 nova_compute[226101]: 2025-12-06 08:26:41.569 226109 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009586.5682302, 280fb28b-e8b2-40d1-ade2-c117de080ce1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:26:41 compute-1 nova_compute[226101]: 2025-12-06 08:26:41.569 226109 INFO nova.compute.manager [-] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] VM Stopped (Lifecycle Event)
Dec 06 08:26:41 compute-1 nova_compute[226101]: 2025-12-06 08:26:41.599 226109 DEBUG nova.compute.manager [None req-46d23051-acb1-4c57-838a-17787016b87a - - - - - -] [instance: 280fb28b-e8b2-40d1-ade2-c117de080ce1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:26:41 compute-1 nova_compute[226101]: 2025-12-06 08:26:41.616 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:42 compute-1 nova_compute[226101]: 2025-12-06 08:26:42.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:42 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:26:42.606 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '108'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:42.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:42.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:43 compute-1 ceph-mon[81689]: pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 15 KiB/s wr, 54 op/s
Dec 06 08:26:43 compute-1 nova_compute[226101]: 2025-12-06 08:26:43.921 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:44.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:44.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:45 compute-1 ceph-mon[81689]: pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Dec 06 08:26:46 compute-1 nova_compute[226101]: 2025-12-06 08:26:46.618 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:46.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:46.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:47 compute-1 ceph-mon[81689]: pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Dec 06 08:26:47 compute-1 nova_compute[226101]: 2025-12-06 08:26:47.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:47 compute-1 nova_compute[226101]: 2025-12-06 08:26:47.623 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:47 compute-1 nova_compute[226101]: 2025-12-06 08:26:47.623 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:47 compute-1 nova_compute[226101]: 2025-12-06 08:26:47.624 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:47 compute-1 nova_compute[226101]: 2025-12-06 08:26:47.624 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:26:47 compute-1 nova_compute[226101]: 2025-12-06 08:26:47.624 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:26:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2275139996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.157 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.317 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.318 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4229MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.318 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.319 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.379 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.380 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.403 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.424 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.425 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.440 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.463 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.482 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2275139996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:48.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:48.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:26:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3575716990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.923 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.927 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.931 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.947 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.967 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:26:48 compute-1 nova_compute[226101]: 2025-12-06 08:26:48.967 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:49 compute-1 sshd-session[317221]: Connection closed by 165.154.55.146 port 37408 [preauth]
Dec 06 08:26:50 compute-1 ceph-mon[81689]: pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3575716990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:50.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:50.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:51 compute-1 ceph-mon[81689]: pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:51 compute-1 nova_compute[226101]: 2025-12-06 08:26:51.622 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:52.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:52.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:53 compute-1 ceph-mon[81689]: pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:53 compute-1 nova_compute[226101]: 2025-12-06 08:26:53.925 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:54.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:54.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:55 compute-1 ceph-mon[81689]: pgmap v4040: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:56 compute-1 nova_compute[226101]: 2025-12-06 08:26:56.677 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:56.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:56.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:56 compute-1 nova_compute[226101]: 2025-12-06 08:26:56.968 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:56 compute-1 nova_compute[226101]: 2025-12-06 08:26:56.968 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:26:56 compute-1 nova_compute[226101]: 2025-12-06 08:26:56.968 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:26:56 compute-1 nova_compute[226101]: 2025-12-06 08:26:56.984 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:26:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:57 compute-1 ceph-mon[81689]: pgmap v4041: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:58.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:26:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:58.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:58 compute-1 sshd-session[317268]: Received disconnect from 186.96.151.198 port 49372:11: Bye Bye [preauth]
Dec 06 08:26:58 compute-1 sshd-session[317268]: Disconnected from authenticating user root 186.96.151.198 port 49372 [preauth]
Dec 06 08:26:58 compute-1 nova_compute[226101]: 2025-12-06 08:26:58.927 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:58 compute-1 ceph-mon[81689]: pgmap v4042: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:00.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:00.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:01 compute-1 ceph-mon[81689]: pgmap v4043: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:01 compute-1 nova_compute[226101]: 2025-12-06 08:27:01.679 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:27:01.704 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:27:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:27:01.705 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:27:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:27:01.705 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:27:02 compute-1 sshd-session[317270]: Received disconnect from 91.144.158.231 port 13516:11: Bye Bye [preauth]
Dec 06 08:27:02 compute-1 sshd-session[317270]: Disconnected from authenticating user root 91.144.158.231 port 13516 [preauth]
Dec 06 08:27:02 compute-1 podman[317275]: 2025-12-06 08:27:02.072253397 +0000 UTC m=+0.050951446 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:27:02 compute-1 podman[317274]: 2025-12-06 08:27:02.08239441 +0000 UTC m=+0.063171766 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:27:02 compute-1 podman[317276]: 2025-12-06 08:27:02.109801674 +0000 UTC m=+0.079289006 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 08:27:02 compute-1 nova_compute[226101]: 2025-12-06 08:27:02.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:02.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:02.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:02 compute-1 sshd-session[317272]: Received disconnect from 106.51.92.114 port 51574:11: Bye Bye [preauth]
Dec 06 08:27:02 compute-1 sshd-session[317272]: Disconnected from authenticating user root 106.51.92.114 port 51574 [preauth]
Dec 06 08:27:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:03 compute-1 ceph-mon[81689]: pgmap v4044: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:03 compute-1 nova_compute[226101]: 2025-12-06 08:27:03.930 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3503660495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:04 compute-1 nova_compute[226101]: 2025-12-06 08:27:04.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:04.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:04.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:05 compute-1 ceph-mon[81689]: pgmap v4045: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2109562262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:05 compute-1 nova_compute[226101]: 2025-12-06 08:27:05.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:05 compute-1 nova_compute[226101]: 2025-12-06 08:27:05.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:27:06 compute-1 nova_compute[226101]: 2025-12-06 08:27:06.682 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:06.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:06.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:07 compute-1 ceph-mon[81689]: pgmap v4046: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1869144123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:08 compute-1 sudo[317338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:08 compute-1 sudo[317338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:08 compute-1 sudo[317338]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:08 compute-1 sudo[317363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:27:08 compute-1 sudo[317363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:08 compute-1 sudo[317363]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:08 compute-1 sudo[317388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:08 compute-1 sudo[317388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:08 compute-1 sudo[317388]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:08 compute-1 sudo[317413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:27:08 compute-1 sudo[317413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:27:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:08.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:08 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:08.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:08 compute-1 ceph-mon[81689]: pgmap v4047: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:08 compute-1 nova_compute[226101]: 2025-12-06 08:27:08.932 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:09 compute-1 sudo[317413]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:09 compute-1 nova_compute[226101]: 2025-12-06 08:27:09.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3898552235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1091282979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:27:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1091282979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:27:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:27:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:27:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:27:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:27:09 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:27:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:10.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:27:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:10 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:10.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:10 compute-1 ceph-mon[81689]: pgmap v4048: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:11 compute-1 nova_compute[226101]: 2025-12-06 08:27:11.684 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:12 compute-1 nova_compute[226101]: 2025-12-06 08:27:12.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:27:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:12.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:12 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:12.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:13 compute-1 ceph-mon[81689]: pgmap v4049: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:13 compute-1 nova_compute[226101]: 2025-12-06 08:27:13.933 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:14 compute-1 ovn_controller[130279]: 2025-12-06T08:27:14Z|00899|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 08:27:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:27:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:14.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:14 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:14.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:15 compute-1 ceph-mon[81689]: pgmap v4050: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:15 compute-1 sudo[317472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:15 compute-1 sudo[317472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:15 compute-1 sudo[317472]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:15 compute-1 nova_compute[226101]: 2025-12-06 08:27:15.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:15 compute-1 nova_compute[226101]: 2025-12-06 08:27:15.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:15 compute-1 sudo[317497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:27:15 compute-1 sudo[317497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:15 compute-1 sudo[317497]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:16 compute-1 nova_compute[226101]: 2025-12-06 08:27:16.686 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:16.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:27:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:16 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:16.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:17 compute-1 ceph-mon[81689]: pgmap v4051: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:18 compute-1 sshd-session[317524]: Received disconnect from 186.87.166.141 port 41008:11: Bye Bye [preauth]
Dec 06 08:27:18 compute-1 sshd-session[317524]: Disconnected from authenticating user root 186.87.166.141 port 41008 [preauth]
Dec 06 08:27:18 compute-1 sshd-session[317522]: Received disconnect from 43.225.159.111 port 43428:11:  [preauth]
Dec 06 08:27:18 compute-1 sshd-session[317522]: Disconnected from authenticating user root 43.225.159.111 port 43428 [preauth]
Dec 06 08:27:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:27:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:18.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:18 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:18.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:18 compute-1 nova_compute[226101]: 2025-12-06 08:27:18.934 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:19 compute-1 ceph-mon[81689]: pgmap v4052: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:27:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:20.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:20 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:20.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:21 compute-1 ceph-mon[81689]: pgmap v4053: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:21 compute-1 nova_compute[226101]: 2025-12-06 08:27:21.688 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:22 compute-1 sshd-session[317526]: Received disconnect from 101.100.194.199 port 46126:11: Bye Bye [preauth]
Dec 06 08:27:22 compute-1 sshd-session[317526]: Disconnected from authenticating user root 101.100.194.199 port 46126 [preauth]
Dec 06 08:27:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:22.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 08:27:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:22.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 08:27:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:23 compute-1 ceph-mon[81689]: pgmap v4054: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:23 compute-1 nova_compute[226101]: 2025-12-06 08:27:23.936 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:24.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:24.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:25 compute-1 ceph-mon[81689]: pgmap v4055: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:26 compute-1 nova_compute[226101]: 2025-12-06 08:27:26.690 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:26 compute-1 ceph-mon[81689]: pgmap v4056: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:26.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:26.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:28.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:28.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:28 compute-1 nova_compute[226101]: 2025-12-06 08:27:28.939 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:29 compute-1 ceph-mon[81689]: pgmap v4057: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:29 compute-1 sshd-session[317528]: Received disconnect from 14.225.3.79 port 45888:11: Bye Bye [preauth]
Dec 06 08:27:29 compute-1 sshd-session[317528]: Disconnected from authenticating user root 14.225.3.79 port 45888 [preauth]
Dec 06 08:27:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:30.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:30.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:31 compute-1 nova_compute[226101]: 2025-12-06 08:27:31.693 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:32 compute-1 ceph-mon[81689]: pgmap v4058: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:32.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:32.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:32 compute-1 ceph-mon[81689]: pgmap v4059: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Dec 06 08:27:32 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:33 compute-1 podman[317530]: 2025-12-06 08:27:33.09114691 +0000 UTC m=+0.070272625 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:27:33 compute-1 podman[317532]: 2025-12-06 08:27:33.100265135 +0000 UTC m=+0.081824425 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 08:27:33 compute-1 podman[317531]: 2025-12-06 08:27:33.100595923 +0000 UTC m=+0.086252922 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:27:33 compute-1 nova_compute[226101]: 2025-12-06 08:27:33.941 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:34.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:34.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:35 compute-1 ceph-mon[81689]: pgmap v4060: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Dec 06 08:27:36 compute-1 nova_compute[226101]: 2025-12-06 08:27:36.696 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:36.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:36.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:37 compute-1 ceph-mon[81689]: pgmap v4061: 305 pgs: 305 active+clean; 145 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.0 MiB/s wr, 26 op/s
Dec 06 08:27:37 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:38 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/891910200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:27:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:38.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:38.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:38 compute-1 nova_compute[226101]: 2025-12-06 08:27:38.942 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:39 compute-1 ceph-mon[81689]: pgmap v4062: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:27:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e429 e429: 3 total, 3 up, 3 in
Dec 06 08:27:40 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e430 e430: 3 total, 3 up, 3 in
Dec 06 08:27:40 compute-1 ceph-mon[81689]: osdmap e429: 3 total, 3 up, 3 in
Dec 06 08:27:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:40.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:40.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:41 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e431 e431: 3 total, 3 up, 3 in
Dec 06 08:27:41 compute-1 ceph-mon[81689]: pgmap v4064: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 49 op/s
Dec 06 08:27:41 compute-1 ceph-mon[81689]: osdmap e430: 3 total, 3 up, 3 in
Dec 06 08:27:41 compute-1 nova_compute[226101]: 2025-12-06 08:27:41.698 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:42 compute-1 sshd[164848]: Timeout before authentication for connection from 107.150.106.178 to 38.102.83.204, pid = 316269
Dec 06 08:27:42 compute-1 ceph-mon[81689]: osdmap e431: 3 total, 3 up, 3 in
Dec 06 08:27:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e432 e432: 3 total, 3 up, 3 in
Dec 06 08:27:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:42.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:42.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:42 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:43 compute-1 ceph-mon[81689]: pgmap v4067: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 MiB/s wr, 35 op/s
Dec 06 08:27:43 compute-1 ceph-mon[81689]: osdmap e432: 3 total, 3 up, 3 in
Dec 06 08:27:43 compute-1 nova_compute[226101]: 2025-12-06 08:27:43.944 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:44 compute-1 sshd-session[317595]: Received disconnect from 45.120.216.232 port 57254:11: Bye Bye [preauth]
Dec 06 08:27:44 compute-1 sshd-session[317595]: Disconnected from authenticating user root 45.120.216.232 port 57254 [preauth]
Dec 06 08:27:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:44.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:44.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:45 compute-1 ceph-mon[81689]: pgmap v4069: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 KiB/s rd, 838 B/s wr, 7 op/s
Dec 06 08:27:45 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1713963602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:27:46 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e433 e433: 3 total, 3 up, 3 in
Dec 06 08:27:46 compute-1 nova_compute[226101]: 2025-12-06 08:27:46.701 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:46.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:46.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:47 compute-1 sshd-session[317597]: Received disconnect from 136.112.8.45 port 39160:11: Bye Bye [preauth]
Dec 06 08:27:47 compute-1 sshd-session[317597]: Disconnected from authenticating user root 136.112.8.45 port 39160 [preauth]
Dec 06 08:27:47 compute-1 ceph-mon[81689]: pgmap v4070: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.8 MiB/s wr, 78 op/s
Dec 06 08:27:47 compute-1 ceph-mon[81689]: osdmap e433: 3 total, 3 up, 3 in
Dec 06 08:27:47 compute-1 nova_compute[226101]: 2025-12-06 08:27:47.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:47 compute-1 nova_compute[226101]: 2025-12-06 08:27:47.628 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:27:47 compute-1 nova_compute[226101]: 2025-12-06 08:27:47.628 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:27:47 compute-1 nova_compute[226101]: 2025-12-06 08:27:47.629 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:27:47 compute-1 nova_compute[226101]: 2025-12-06 08:27:47.629 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:27:47 compute-1 nova_compute[226101]: 2025-12-06 08:27:47.629 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:27:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:27:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3032119297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:48 compute-1 nova_compute[226101]: 2025-12-06 08:27:48.069 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:27:48 compute-1 nova_compute[226101]: 2025-12-06 08:27:48.224 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:27:48 compute-1 nova_compute[226101]: 2025-12-06 08:27:48.226 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4236MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:27:48 compute-1 nova_compute[226101]: 2025-12-06 08:27:48.226 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:27:48 compute-1 nova_compute[226101]: 2025-12-06 08:27:48.227 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:27:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3032119297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:48 compute-1 nova_compute[226101]: 2025-12-06 08:27:48.369 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:27:48 compute-1 nova_compute[226101]: 2025-12-06 08:27:48.369 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:27:48 compute-1 nova_compute[226101]: 2025-12-06 08:27:48.497 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:27:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:27:48.612 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=109, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=108) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:27:48 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:27:48.613 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:27:48 compute-1 nova_compute[226101]: 2025-12-06 08:27:48.614 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:48.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:48.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:48 compute-1 nova_compute[226101]: 2025-12-06 08:27:48.986 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:27:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/394519015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:49 compute-1 nova_compute[226101]: 2025-12-06 08:27:49.016 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:27:49 compute-1 nova_compute[226101]: 2025-12-06 08:27:49.023 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:27:49 compute-1 nova_compute[226101]: 2025-12-06 08:27:49.038 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:27:49 compute-1 nova_compute[226101]: 2025-12-06 08:27:49.040 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:27:49 compute-1 nova_compute[226101]: 2025-12-06 08:27:49.040 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:27:49 compute-1 ceph-mon[81689]: pgmap v4072: 305 pgs: 305 active+clean; 240 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.5 MiB/s wr, 88 op/s
Dec 06 08:27:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/394519015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:49 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:27:49.616 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '109'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:27:49 compute-1 sshd-session[317602]: Received disconnect from 154.209.4.183 port 50870:11: Bye Bye [preauth]
Dec 06 08:27:49 compute-1 sshd-session[317602]: Disconnected from authenticating user root 154.209.4.183 port 50870 [preauth]
Dec 06 08:27:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e434 e434: 3 total, 3 up, 3 in
Dec 06 08:27:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:50.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:50.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:51 compute-1 ceph-mon[81689]: pgmap v4073: 305 pgs: 305 active+clean; 240 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 76 op/s
Dec 06 08:27:51 compute-1 ceph-mon[81689]: osdmap e434: 3 total, 3 up, 3 in
Dec 06 08:27:51 compute-1 nova_compute[226101]: 2025-12-06 08:27:51.704 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:52.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:27:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:52.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:53 compute-1 ceph-mon[81689]: pgmap v4075: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.3 MiB/s wr, 93 op/s
Dec 06 08:27:53 compute-1 nova_compute[226101]: 2025-12-06 08:27:53.988 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:54.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:54.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:54 compute-1 ceph-mon[81689]: pgmap v4076: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 39 op/s
Dec 06 08:27:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2038309903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:27:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3327544377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:27:56 compute-1 nova_compute[226101]: 2025-12-06 08:27:56.708 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:56.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:56.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:57 compute-1 ceph-mon[81689]: pgmap v4077: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 516 KiB/s rd, 536 KiB/s wr, 14 op/s
Dec 06 08:27:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3327544377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:27:57 compute-1 sshd-session[317645]: Received disconnect from 124.18.141.70 port 55732:11: Bye Bye [preauth]
Dec 06 08:27:57 compute-1 sshd-session[317645]: Disconnected from authenticating user root 124.18.141.70 port 55732 [preauth]
Dec 06 08:27:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:58 compute-1 nova_compute[226101]: 2025-12-06 08:27:58.041 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:58 compute-1 nova_compute[226101]: 2025-12-06 08:27:58.041 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:27:58 compute-1 nova_compute[226101]: 2025-12-06 08:27:58.041 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:27:58 compute-1 nova_compute[226101]: 2025-12-06 08:27:58.070 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:27:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:58.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:27:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:58.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:58 compute-1 nova_compute[226101]: 2025-12-06 08:27:58.990 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:00 compute-1 ceph-mon[81689]: pgmap v4078: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 506 KiB/s rd, 525 KiB/s wr, 14 op/s
Dec 06 08:28:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:00.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:00 compute-1 ceph-mon[81689]: pgmap v4079: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 506 KiB/s rd, 525 KiB/s wr, 14 op/s
Dec 06 08:28:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2366344083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:28:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:28:01.705 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:28:01.705 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:28:01.705 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:01 compute-1 nova_compute[226101]: 2025-12-06 08:28:01.712 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:02.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:02.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:03 compute-1 ceph-mon[81689]: pgmap v4080: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 351 B/s wr, 9 op/s
Dec 06 08:28:04 compute-1 nova_compute[226101]: 2025-12-06 08:28:04.038 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:04 compute-1 podman[317648]: 2025-12-06 08:28:04.160051384 +0000 UTC m=+0.087182758 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 08:28:04 compute-1 podman[317647]: 2025-12-06 08:28:04.163735393 +0000 UTC m=+0.096011895 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:28:04 compute-1 podman[317649]: 2025-12-06 08:28:04.190310545 +0000 UTC m=+0.114585853 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:28:04 compute-1 nova_compute[226101]: 2025-12-06 08:28:04.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:04 compute-1 nova_compute[226101]: 2025-12-06 08:28:04.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:04.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:04.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1065690781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:05 compute-1 nova_compute[226101]: 2025-12-06 08:28:05.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:05 compute-1 nova_compute[226101]: 2025-12-06 08:28:05.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:28:05 compute-1 sshd-session[317710]: Received disconnect from 186.96.151.198 port 60766:11: Bye Bye [preauth]
Dec 06 08:28:05 compute-1 sshd-session[317710]: Disconnected from authenticating user root 186.96.151.198 port 60766 [preauth]
Dec 06 08:28:06 compute-1 ceph-mon[81689]: pgmap v4081: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3422550437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:06 compute-1 nova_compute[226101]: 2025-12-06 08:28:06.714 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:06.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:06.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:07 compute-1 ceph-mon[81689]: pgmap v4082: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 54 op/s
Dec 06 08:28:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:08.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:08.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:09 compute-1 nova_compute[226101]: 2025-12-06 08:28:09.039 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:09 compute-1 ceph-mon[81689]: pgmap v4083: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2913063228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1864675316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:09 compute-1 nova_compute[226101]: 2025-12-06 08:28:09.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1310669518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:28:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1310669518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:28:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:10.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:10.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:11 compute-1 ceph-mon[81689]: pgmap v4084: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:11 compute-1 nova_compute[226101]: 2025-12-06 08:28:11.717 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:12 compute-1 nova_compute[226101]: 2025-12-06 08:28:12.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:12.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:12.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:13 compute-1 ceph-mon[81689]: pgmap v4085: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:14 compute-1 nova_compute[226101]: 2025-12-06 08:28:14.042 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:14.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:14.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:15 compute-1 ceph-mon[81689]: pgmap v4086: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:15 compute-1 sudo[317712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:15 compute-1 sudo[317712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:15 compute-1 sudo[317712]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:15 compute-1 sudo[317737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:28:15 compute-1 sudo[317737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:15 compute-1 sudo[317737]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:15 compute-1 sudo[317762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:15 compute-1 sudo[317762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:15 compute-1 sudo[317762]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:16 compute-1 sudo[317787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:28:16 compute-1 sudo[317787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:16 compute-1 sudo[317787]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:16 compute-1 nova_compute[226101]: 2025-12-06 08:28:16.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:16 compute-1 nova_compute[226101]: 2025-12-06 08:28:16.719 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:16.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:16.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:17 compute-1 ceph-mon[81689]: pgmap v4087: 305 pgs: 305 active+clean; 271 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 744 KiB/s wr, 90 op/s
Dec 06 08:28:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:28:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:28:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:28:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:28:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:28:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:28:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:28:17 compute-1 nova_compute[226101]: 2025-12-06 08:28:17.584 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:18.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:18.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:19 compute-1 nova_compute[226101]: 2025-12-06 08:28:19.044 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:19 compute-1 ceph-mon[81689]: pgmap v4088: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 603 KiB/s rd, 1.1 MiB/s wr, 55 op/s
Dec 06 08:28:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:20.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:20.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:21 compute-1 ceph-mon[81689]: pgmap v4089: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 159 KiB/s rd, 1.1 MiB/s wr, 36 op/s
Dec 06 08:28:21 compute-1 sshd-session[317843]: Received disconnect from 106.51.92.114 port 38127:11: Bye Bye [preauth]
Dec 06 08:28:21 compute-1 sshd-session[317843]: Disconnected from authenticating user root 106.51.92.114 port 38127 [preauth]
Dec 06 08:28:21 compute-1 nova_compute[226101]: 2025-12-06 08:28:21.722 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:28:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7205.4 total, 600.0 interval
                                           Cumulative writes: 67K writes, 265K keys, 67K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s
                                           Cumulative WAL: 67K writes, 24K syncs, 2.75 writes per sync, written: 0.25 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3267 writes, 12K keys, 3267 commit groups, 1.0 writes per commit group, ingest: 12.60 MB, 0.02 MB/s
                                           Interval WAL: 3267 writes, 1351 syncs, 2.42 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:28:22 compute-1 sudo[317847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:22 compute-1 sudo[317847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:22 compute-1 sudo[317847]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:22 compute-1 sudo[317872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:28:22 compute-1 sudo[317872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:22 compute-1 sudo[317872]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:22.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:22.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:23 compute-1 ceph-mon[81689]: pgmap v4090: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Dec 06 08:28:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:28:23 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:28:23 compute-1 sshd-session[317845]: Received disconnect from 91.144.158.231 port 16838:11: Bye Bye [preauth]
Dec 06 08:28:23 compute-1 sshd-session[317845]: Disconnected from authenticating user root 91.144.158.231 port 16838 [preauth]
Dec 06 08:28:24 compute-1 nova_compute[226101]: 2025-12-06 08:28:24.081 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:24 compute-1 ceph-mon[81689]: pgmap v4091: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Dec 06 08:28:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/666306502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:24.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:24.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e435 e435: 3 total, 3 up, 3 in
Dec 06 08:28:26 compute-1 nova_compute[226101]: 2025-12-06 08:28:26.724 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:26.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:26.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:28:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/536961981' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:28:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:28:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/536961981' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:28:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e436 e436: 3 total, 3 up, 3 in
Dec 06 08:28:27 compute-1 ceph-mon[81689]: pgmap v4092: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Dec 06 08:28:27 compute-1 ceph-mon[81689]: osdmap e435: 3 total, 3 up, 3 in
Dec 06 08:28:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/536961981' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:28:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/536961981' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:28:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:28 compute-1 ceph-mon[81689]: osdmap e436: 3 total, 3 up, 3 in
Dec 06 08:28:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1100284927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:28:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1100284927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:28:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:28.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:28.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:29 compute-1 nova_compute[226101]: 2025-12-06 08:28:29.084 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:29 compute-1 ceph-mon[81689]: pgmap v4095: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 1.6 MiB/s wr, 72 op/s
Dec 06 08:28:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:30.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:28:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:30 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:30.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:31 compute-1 ceph-mon[81689]: pgmap v4096: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 20 KiB/s wr, 20 op/s
Dec 06 08:28:31 compute-1 nova_compute[226101]: 2025-12-06 08:28:31.727 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:32 compute-1 sshd-session[317899]: Received disconnect from 101.100.194.199 port 47496:11: Bye Bye [preauth]
Dec 06 08:28:32 compute-1 sshd-session[317899]: Disconnected from authenticating user root 101.100.194.199 port 47496 [preauth]
Dec 06 08:28:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:28:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:32 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:32.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:32.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:33 compute-1 ceph-mon[81689]: pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 81 KiB/s rd, 24 KiB/s wr, 115 op/s
Dec 06 08:28:34 compute-1 nova_compute[226101]: 2025-12-06 08:28:34.086 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:34 compute-1 ceph-mon[81689]: pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 95 op/s
Dec 06 08:28:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:28:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:34 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:34.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:34.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:35 compute-1 podman[317901]: 2025-12-06 08:28:35.038364095 +0000 UTC m=+0.068580181 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:28:35 compute-1 podman[317902]: 2025-12-06 08:28:35.053007598 +0000 UTC m=+0.083979264 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:28:35 compute-1 podman[317903]: 2025-12-06 08:28:35.061127815 +0000 UTC m=+0.090981751 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:28:35 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 e437: 3 total, 3 up, 3 in
Dec 06 08:28:36 compute-1 ceph-mon[81689]: osdmap e437: 3 total, 3 up, 3 in
Dec 06 08:28:36 compute-1 nova_compute[226101]: 2025-12-06 08:28:36.730 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:28:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:36 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:36.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:36.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:37 compute-1 ceph-mon[81689]: pgmap v4100: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 4.2 KiB/s wr, 88 op/s
Dec 06 08:28:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:28:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:38.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:38 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:38.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:39 compute-1 nova_compute[226101]: 2025-12-06 08:28:39.088 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:39 compute-1 ceph-mon[81689]: pgmap v4101: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 76 op/s
Dec 06 08:28:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:28:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:40.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:40 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:40.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:41 compute-1 ceph-mon[81689]: pgmap v4102: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 76 op/s
Dec 06 08:28:41 compute-1 nova_compute[226101]: 2025-12-06 08:28:41.732 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:42 compute-1 ceph-mon[81689]: pgmap v4103: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:42.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:42.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:44 compute-1 nova_compute[226101]: 2025-12-06 08:28:44.089 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:44.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:44.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:45 compute-1 ceph-mon[81689]: pgmap v4104: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:46 compute-1 nova_compute[226101]: 2025-12-06 08:28:46.733 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:46.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:46.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:47 compute-1 ceph-mon[81689]: pgmap v4105: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:47 compute-1 nova_compute[226101]: 2025-12-06 08:28:47.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:48 compute-1 nova_compute[226101]: 2025-12-06 08:28:48.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:48 compute-1 nova_compute[226101]: 2025-12-06 08:28:48.747 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:48 compute-1 nova_compute[226101]: 2025-12-06 08:28:48.748 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:48 compute-1 nova_compute[226101]: 2025-12-06 08:28:48.748 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:48 compute-1 nova_compute[226101]: 2025-12-06 08:28:48.749 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:28:48 compute-1 nova_compute[226101]: 2025-12-06 08:28:48.749 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:28:48 compute-1 sshd-session[317962]: Connection reset by 165.154.55.146 port 42892 [preauth]
Dec 06 08:28:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:48.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:48.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:49 compute-1 nova_compute[226101]: 2025-12-06 08:28:49.091 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:28:49 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/642553328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:49 compute-1 nova_compute[226101]: 2025-12-06 08:28:49.188 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:28:49 compute-1 ceph-mon[81689]: pgmap v4106: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/642553328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:49 compute-1 nova_compute[226101]: 2025-12-06 08:28:49.352 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:28:49 compute-1 nova_compute[226101]: 2025-12-06 08:28:49.353 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4239MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:28:49 compute-1 nova_compute[226101]: 2025-12-06 08:28:49.354 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:49 compute-1 nova_compute[226101]: 2025-12-06 08:28:49.354 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:49 compute-1 nova_compute[226101]: 2025-12-06 08:28:49.625 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:28:49 compute-1 nova_compute[226101]: 2025-12-06 08:28:49.626 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:28:49 compute-1 nova_compute[226101]: 2025-12-06 08:28:49.642 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:28:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:28:50 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/478041238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:50 compute-1 nova_compute[226101]: 2025-12-06 08:28:50.112 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:28:50 compute-1 nova_compute[226101]: 2025-12-06 08:28:50.117 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:28:50 compute-1 nova_compute[226101]: 2025-12-06 08:28:50.137 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:28:50 compute-1 nova_compute[226101]: 2025-12-06 08:28:50.141 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:28:50 compute-1 nova_compute[226101]: 2025-12-06 08:28:50.141 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/478041238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:50.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:50.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:51 compute-1 ceph-mon[81689]: pgmap v4107: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:51 compute-1 nova_compute[226101]: 2025-12-06 08:28:51.735 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:52.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:28:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:52 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:52.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:53 compute-1 ceph-mon[81689]: pgmap v4108: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.516774) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733516825, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 2414, "num_deletes": 254, "total_data_size": 5771046, "memory_usage": 5866480, "flush_reason": "Manual Compaction"}
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733612524, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 3771841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88389, "largest_seqno": 90797, "table_properties": {"data_size": 3762129, "index_size": 6141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20097, "raw_average_key_size": 20, "raw_value_size": 3742679, "raw_average_value_size": 3819, "num_data_blocks": 269, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009521, "oldest_key_time": 1765009521, "file_creation_time": 1765009733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 95790 microseconds, and 8334 cpu microseconds.
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.612568) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 3771841 bytes OK
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.612588) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.617664) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.617682) EVENT_LOG_v1 {"time_micros": 1765009733617677, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.617699) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 5760542, prev total WAL file size 5760542, number of live WAL files 2.
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.619015) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(3683KB)], [183(10MB)]
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733619134, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 15072614, "oldest_snapshot_seqno": -1}
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 11678 keys, 13164680 bytes, temperature: kUnknown
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733781769, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 13164680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13092057, "index_size": 42302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29253, "raw_key_size": 309632, "raw_average_key_size": 26, "raw_value_size": 12890591, "raw_average_value_size": 1103, "num_data_blocks": 1592, "num_entries": 11678, "num_filter_entries": 11678, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765009733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.782222) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 13164680 bytes
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.846557) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.6 rd, 80.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.8 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 12201, records dropped: 523 output_compression: NoCompression
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.846609) EVENT_LOG_v1 {"time_micros": 1765009733846591, "job": 118, "event": "compaction_finished", "compaction_time_micros": 162813, "compaction_time_cpu_micros": 33460, "output_level": 6, "num_output_files": 1, "total_output_size": 13164680, "num_input_records": 12201, "num_output_records": 11678, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733847519, "job": 118, "event": "table_file_deletion", "file_number": 185}
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733850211, "job": 118, "event": "table_file_deletion", "file_number": 183}
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.618896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.850262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.850269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.850271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.850273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:53 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:28:53.850276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:54 compute-1 nova_compute[226101]: 2025-12-06 08:28:54.093 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:28:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:54.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:54 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:54.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:55 compute-1 ceph-mon[81689]: pgmap v4109: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:56 compute-1 nova_compute[226101]: 2025-12-06 08:28:56.736 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:56 compute-1 sshd-session[318008]: Received disconnect from 186.87.166.141 port 42122:11: Bye Bye [preauth]
Dec 06 08:28:56 compute-1 sshd-session[318008]: Disconnected from authenticating user root 186.87.166.141 port 42122 [preauth]
Dec 06 08:28:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:28:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:56.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:56 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:56.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:57 compute-1 ceph-mon[81689]: pgmap v4110: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:28:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:28:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:58.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:58 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:58.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:59 compute-1 nova_compute[226101]: 2025-12-06 08:28:59.097 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:59 compute-1 nova_compute[226101]: 2025-12-06 08:28:59.143 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:59 compute-1 nova_compute[226101]: 2025-12-06 08:28:59.143 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:28:59 compute-1 nova_compute[226101]: 2025-12-06 08:28:59.144 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:28:59 compute-1 nova_compute[226101]: 2025-12-06 08:28:59.197 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:28:59 compute-1 ceph-mon[81689]: pgmap v4111: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:00.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:00.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:01 compute-1 ceph-mon[81689]: pgmap v4112: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:29:01.707 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:29:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:29:01.707 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:29:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:29:01.707 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:29:01 compute-1 nova_compute[226101]: 2025-12-06 08:29:01.739 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:02.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:02.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:03 compute-1 ceph-mon[81689]: pgmap v4113: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:04 compute-1 nova_compute[226101]: 2025-12-06 08:29:04.099 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:05.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:05.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:05 compute-1 ceph-mon[81689]: pgmap v4114: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1708173048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:05 compute-1 nova_compute[226101]: 2025-12-06 08:29:05.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:05 compute-1 nova_compute[226101]: 2025-12-06 08:29:05.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:06 compute-1 podman[318010]: 2025-12-06 08:29:06.101406621 +0000 UTC m=+0.068128197 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:29:06 compute-1 podman[318011]: 2025-12-06 08:29:06.108317677 +0000 UTC m=+0.070107412 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:29:06 compute-1 podman[318012]: 2025-12-06 08:29:06.139196445 +0000 UTC m=+0.101177245 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller)
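[annotation] The three podman health_status events come from podman's healthcheck timers; per the config_data shown in each event, every container's test is the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/<name>. A sketch of triggering the same check by hand (podman healthcheck run exits 0 when the configured test passes; container names are taken from the events above):

    import subprocess

    def healthy(name):
        # Runs the container's configured healthcheck once, outside the timer.
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    for name in ("multipathd", "ovn_metadata_agent", "ovn_controller"):
        print(name, "healthy" if healthy(name) else "unhealthy")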
Dec 06 08:29:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3976858743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:06 compute-1 nova_compute[226101]: 2025-12-06 08:29:06.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:06 compute-1 nova_compute[226101]: 2025-12-06 08:29:06.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
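[annotation] The oslo_service.periodic_task DEBUG lines are the compute manager's periodic loop; _reclaim_queued_deletes returns immediately here because reclaim_instance_interval is not positive. A minimal sketch of the same decorator pattern (illustrative manager class, not nova's actual code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            interval = 0  # stand-in for CONF.reclaim_instance_interval
            if interval <= 0:
                return  # the "skipping..." branch logged above

    # A timer would drive Manager().run_periodic_tasks(context) repeatedly,
    # producing "Running periodic task ..." lines like the ones in this journal.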
Dec 06 08:29:06 compute-1 nova_compute[226101]: 2025-12-06 08:29:06.740 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:07.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:07.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:07 compute-1 ceph-mon[81689]: pgmap v4115: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
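[annotation] The _set_new_cache_sizes values are bytes; converting them makes the line easier to compare against the RocksDB stats dump later in this log:

    MiB = 1 << 20
    sizes = {"cache_size": 1020054731, "inc_alloc": 343932928,
             "full_alloc": 348127232, "kv_alloc": 318767104}
    for name, val in sizes.items():
        print(f"{name}: {val / MiB:.1f} MiB")
    # cache_size ~972.8 MiB; kv_alloc = 304.0 MiB exactly, matching the
    # 304.00 MB block-cache capacity in the "DUMPING STATS" section below.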
Dec 06 08:29:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:09.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:09.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:09 compute-1 nova_compute[226101]: 2025-12-06 08:29:09.100 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:09 compute-1 ceph-mon[81689]: pgmap v4116: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3761457245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:29:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3761457245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
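[annotation] These two dispatches are the storage service polling cluster capacity and the volumes pool quota. The CLI equivalents, sketched with the same client id (this assumes the client.openstack keyring is readable on the host where it runs):

    import json, subprocess

    def mon_cmd(*args):
        out = subprocess.check_output(
            ["ceph", "--id", "openstack", "--format", "json", *args])
        return json.loads(out)

    df = mon_cmd("df")                                   # {"prefix": "df"}
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], quota)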
Dec 06 08:29:10 compute-1 sshd-session[318073]: Received disconnect from 45.120.216.232 port 57592:11: Bye Bye [preauth]
Dec 06 08:29:10 compute-1 sshd-session[318073]: Disconnected from authenticating user root 45.120.216.232 port 57592 [preauth]
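[annotation] The recurring "[preauth]" disconnects here and below are failed root logins from scattered internet addresses. A toy triage script that tallies the offending sources from a journal dump on stdin (e.g. journalctl -t sshd-session | python3 tally.py):

    import re, sys
    from collections import Counter

    pat = re.compile(r"Disconnected from authenticating user \S+ (\S+) port \d+")
    hits = Counter(m.group(1) for line in sys.stdin if (m := pat.search(line)))
    for ip, n in hits.most_common(10):
        print(n, ip)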
Dec 06 08:29:10 compute-1 nova_compute[226101]: 2025-12-06 08:29:10.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:11.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:11.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3044378136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:11 compute-1 ceph-mon[81689]: pgmap v4117: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3507224446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:11 compute-1 nova_compute[226101]: 2025-12-06 08:29:11.743 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:13.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:13.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:13 compute-1 ceph-mon[81689]: pgmap v4118: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:14 compute-1 nova_compute[226101]: 2025-12-06 08:29:14.102 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:14 compute-1 sshd-session[318075]: Received disconnect from 186.96.151.198 port 48680:11: Bye Bye [preauth]
Dec 06 08:29:14 compute-1 sshd-session[318075]: Disconnected from authenticating user root 186.96.151.198 port 48680 [preauth]
Dec 06 08:29:14 compute-1 nova_compute[226101]: 2025-12-06 08:29:14.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:15.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:15.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:15 compute-1 ceph-mon[81689]: pgmap v4119: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:16 compute-1 nova_compute[226101]: 2025-12-06 08:29:16.745 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:17.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:17.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:17 compute-1 ceph-mon[81689]: pgmap v4120: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:17 compute-1 nova_compute[226101]: 2025-12-06 08:29:17.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:18 compute-1 sshd-session[318077]: Received disconnect from 14.225.3.79 port 46530:11: Bye Bye [preauth]
Dec 06 08:29:18 compute-1 sshd-session[318077]: Disconnected from authenticating user root 14.225.3.79 port 46530 [preauth]
Dec 06 08:29:18 compute-1 nova_compute[226101]: 2025-12-06 08:29:18.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:19.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:19 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:19.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:19 compute-1 nova_compute[226101]: 2025-12-06 08:29:19.104 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:19 compute-1 ceph-mon[81689]: pgmap v4121: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:21.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:21 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:21.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:21 compute-1 ceph-mon[81689]: pgmap v4122: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:21 compute-1 nova_compute[226101]: 2025-12-06 08:29:21.748 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:22 compute-1 sshd-session[318081]: Received disconnect from 136.112.8.45 port 58478:11: Bye Bye [preauth]
Dec 06 08:29:22 compute-1 sshd-session[318081]: Disconnected from authenticating user root 136.112.8.45 port 58478 [preauth]
Dec 06 08:29:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:23.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:23 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:23.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:23 compute-1 sudo[318085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:23 compute-1 sudo[318085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:23 compute-1 sudo[318085]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:23 compute-1 sshd-session[318079]: Received disconnect from 154.209.4.183 port 45046:11: Bye Bye [preauth]
Dec 06 08:29:23 compute-1 sshd-session[318079]: Disconnected from authenticating user root 154.209.4.183 port 45046 [preauth]
Dec 06 08:29:23 compute-1 sudo[318110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:29:23 compute-1 sudo[318110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:23 compute-1 sudo[318110]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:23 compute-1 sudo[318135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:23 compute-1 sudo[318135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:23 compute-1 sudo[318135]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:23 compute-1 sshd-session[318083]: Received disconnect from 124.18.141.70 port 55834:11: Bye Bye [preauth]
Dec 06 08:29:23 compute-1 sshd-session[318083]: Disconnected from authenticating user root 124.18.141.70 port 55834 [preauth]
Dec 06 08:29:23 compute-1 sudo[318160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:29:23 compute-1 sudo[318160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:23 compute-1 sudo[318160]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:24 compute-1 sudo[318214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:24 compute-1 sudo[318214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:24 compute-1 sudo[318214]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:24 compute-1 sudo[318239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:29:24 compute-1 nova_compute[226101]: 2025-12-06 08:29:24.144 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:24 compute-1 sudo[318239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:24 compute-1 sudo[318239]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:24 compute-1 sudo[318264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:24 compute-1 sudo[318264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:24 compute-1 sudo[318264]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:24 compute-1 sudo[318289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 06 08:29:24 compute-1 sudo[318289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:24 compute-1 ceph-mon[81689]: pgmap v4123: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:24 compute-1 sudo[318289]: pam_unix(sudo:session): session closed for user root
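[annotation] The ceph-admin sudo bursts are the cephadm orchestrator (mgr.compute-0.sfzyix) probing this host over SSH: /bin/true as a sudo check, which python3, then the hash-named cephadm copy for gather-facts and list-networks. A rough re-run of the first probe; the binary path is copied verbatim from the log and is cluster-specific, and the JSON keys printed are assumptions:

    import json, subprocess

    CEPHADM = ("/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    facts = json.loads(subprocess.check_output(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"]))
    print(facts.get("hostname"), facts.get("operating_system"))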
Dec 06 08:29:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:25.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:25 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:25.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:25 compute-1 ceph-mon[81689]: pgmap v4124: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:29:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:29:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:29:25 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
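[annotation] In this burst the mgr refreshes keyrings ("auth get") and a minimal client config from the monitors. The same minimal conf is available from the CLI (admin keyring assumed):

    import subprocess

    conf = subprocess.check_output(["ceph", "config", "generate-minimal-conf"])
    print(conf.decode())  # a [global] section with fsid and mon_host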
Dec 06 08:29:26 compute-1 nova_compute[226101]: 2025-12-06 08:29:26.750 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:27 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:27.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:27.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:27 compute-1 ceph-mon[81689]: pgmap v4125: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:29 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:29.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:29.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:29 compute-1 nova_compute[226101]: 2025-12-06 08:29:29.149 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:29 compute-1 ceph-mon[81689]: pgmap v4126: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:29:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.5 total, 600.0 interval
                                           Cumulative writes: 17K writes, 91K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1555 writes, 7336 keys, 1555 commit groups, 1.0 writes per commit group, ingest: 15.59 MB, 0.03 MB/s
                                           Interval WAL: 1555 writes, 1555 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     18.7      6.08              0.33        59    0.103       0      0       0.0       0.0
                                             L6      1/0   12.55 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.5     49.6     42.7     14.66              1.77        58    0.253    494K    31K       0.0       0.0
                                            Sum      1/0   12.55 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.5     35.1     35.7     20.75              2.10       117    0.177    494K    31K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.4     98.6    100.6      0.70              0.21        10    0.070     59K   2528       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     49.6     42.7     14.66              1.77        58    0.253    494K    31K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     18.8      6.03              0.33        58    0.104       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.111, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.72 GB write, 0.10 MB/s write, 0.71 GB read, 0.10 MB/s read, 20.7 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 79.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000651 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4497,75.66 MB,24.8883%) FilterBlock(117,1.33 MB,0.438565%) IndexBlock(117,2.14 MB,0.704299%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
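[annotation] A quick check that the interval rates in the dump above are self-consistent (1555 WAL writes and 15.59 MB ingested over the 600.0 s interval):

    interval_s = 600.0
    writes, ingest_mb = 1555, 15.59
    print(f"{writes / interval_s:.1f} writes/s")        # ~2.6 writes/s
    print(f"{ingest_mb / interval_s:.2f} MB/s ingest")  # 0.03 MB/s, as reported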
Dec 06 08:29:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:31.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:31.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:31 compute-1 ceph-mon[81689]: pgmap v4127: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:31 compute-1 sudo[318332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:31 compute-1 sudo[318332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:31 compute-1 sudo[318332]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:31 compute-1 sudo[318357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:29:31 compute-1 sudo[318357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:31 compute-1 sudo[318357]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:31 compute-1 nova_compute[226101]: 2025-12-06 08:29:31.753 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:33 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:34 compute-1 nova_compute[226101]: 2025-12-06 08:29:34.149 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:34 compute-1 nova_compute[226101]: 2025-12-06 08:29:34.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:34 compute-1 nova_compute[226101]: 2025-12-06 08:29:34.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:29:34 compute-1 nova_compute[226101]: 2025-12-06 08:29:34.727 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:29:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:35 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:35.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:35.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:35 compute-1 ceph-mon[81689]: pgmap v4128: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:36 compute-1 ceph-mon[81689]: pgmap v4129: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:36 compute-1 nova_compute[226101]: 2025-12-06 08:29:36.794 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:37.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:37 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:37.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:37 compute-1 podman[318385]: 2025-12-06 08:29:37.106029151 +0000 UTC m=+0.074381526 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 08:29:37 compute-1 podman[318386]: 2025-12-06 08:29:37.142310534 +0000 UTC m=+0.108377207 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 08:29:37 compute-1 podman[318384]: 2025-12-06 08:29:37.143744683 +0000 UTC m=+0.114041280 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:29:37 compute-1 ceph-mon[81689]: pgmap v4130: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:39.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:39.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:39 compute-1 nova_compute[226101]: 2025-12-06 08:29:39.150 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:39 compute-1 ceph-mon[81689]: pgmap v4131: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:39 compute-1 nova_compute[226101]: 2025-12-06 08:29:39.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:41.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:41.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:41 compute-1 ceph-mon[81689]: pgmap v4132: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:41 compute-1 nova_compute[226101]: 2025-12-06 08:29:41.796 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:42 compute-1 ceph-mon[81689]: pgmap v4133: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:43.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:43.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:43 compute-1 sshd-session[318450]: Accepted publickey for zuul from 192.168.122.10 port 34448 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 08:29:43 compute-1 systemd-logind[788]: New session 60 of user zuul.
Dec 06 08:29:43 compute-1 systemd[1]: Started Session 60 of User zuul.
Dec 06 08:29:43 compute-1 sshd-session[318450]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 08:29:43 compute-1 sudo[318454]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 06 08:29:43 compute-1 sudo[318454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
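[annotation] The zuul session kicks off diagnostics collection; the command below is copied from the sudo COMMAND line above and only wrapped so it can be scripted (sos writes its tarball under --tmp-dir):

    import subprocess

    subprocess.run(
        "rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && "
        "sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp "
        "-p container,openstack_edpm,system,storage,virt",
        shell=True, check=True)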
Dec 06 08:29:43 compute-1 sshd-session[318382]: Connection closed by 107.150.106.178 port 38590 [preauth]
Dec 06 08:29:43 compute-1 sshd-session[318448]: Received disconnect from 106.51.92.114 port 52915:11: Bye Bye [preauth]
Dec 06 08:29:43 compute-1 sshd-session[318448]: Disconnected from authenticating user root 106.51.92.114 port 52915 [preauth]
Dec 06 08:29:44 compute-1 nova_compute[226101]: 2025-12-06 08:29:44.152 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:44 compute-1 sshd-session[318485]: Received disconnect from 101.100.194.199 port 59502:11: Bye Bye [preauth]
Dec 06 08:29:44 compute-1 sshd-session[318485]: Disconnected from authenticating user root 101.100.194.199 port 59502 [preauth]
Dec 06 08:29:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:45.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:45.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:46 compute-1 ceph-mon[81689]: pgmap v4134: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:46 compute-1 nova_compute[226101]: 2025-12-06 08:29:46.799 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:47.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:47.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:47 compute-1 ceph-mon[81689]: from='client.45353 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:47 compute-1 ceph-mon[81689]: pgmap v4135: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:47 compute-1 ceph-mon[81689]: from='client.45359 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:47 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 06 08:29:47 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/760709345' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:29:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/760709345' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:29:48 compute-1 ceph-mon[81689]: from='client.37110 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:48 compute-1 sshd-session[318705]: Received disconnect from 91.144.158.231 port 34293:11: Bye Bye [preauth]
Dec 06 08:29:48 compute-1 sshd-session[318705]: Disconnected from authenticating user root 91.144.158.231 port 34293 [preauth]
Dec 06 08:29:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:49.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:49.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:49 compute-1 ceph-mon[81689]: from='client.46204 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:49 compute-1 ceph-mon[81689]: from='client.37116 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:49 compute-1 ceph-mon[81689]: pgmap v4136: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:49 compute-1 ceph-mon[81689]: from='client.46216 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3959119430' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:29:49 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3120292559' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:29:49 compute-1 nova_compute[226101]: 2025-12-06 08:29:49.153 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:49 compute-1 nova_compute[226101]: 2025-12-06 08:29:49.884 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:49 compute-1 nova_compute[226101]: 2025-12-06 08:29:49.953 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:29:49 compute-1 nova_compute[226101]: 2025-12-06 08:29:49.953 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:29:49 compute-1 nova_compute[226101]: 2025-12-06 08:29:49.954 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:29:49 compute-1 nova_compute[226101]: 2025-12-06 08:29:49.954 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:29:49 compute-1 nova_compute[226101]: 2025-12-06 08:29:49.954 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:29:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:29:50 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1792754580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:50 compute-1 nova_compute[226101]: 2025-12-06 08:29:50.458 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:29:50 compute-1 nova_compute[226101]: 2025-12-06 08:29:50.646 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:29:50 compute-1 nova_compute[226101]: 2025-12-06 08:29:50.649 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4141MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:29:50 compute-1 nova_compute[226101]: 2025-12-06 08:29:50.649 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:29:50 compute-1 nova_compute[226101]: 2025-12-06 08:29:50.649 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:29:50 compute-1 nova_compute[226101]: 2025-12-06 08:29:50.761 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:29:50 compute-1 nova_compute[226101]: 2025-12-06 08:29:50.762 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:29:50 compute-1 ovs-vsctl[318764]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 06 08:29:50 compute-1 nova_compute[226101]: 2025-12-06 08:29:50.889 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:29:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:51.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:51 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:51.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:29:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/185607013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:51 compute-1 nova_compute[226101]: 2025-12-06 08:29:51.357 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:29:51 compute-1 nova_compute[226101]: 2025-12-06 08:29:51.366 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:29:51 compute-1 ceph-mon[81689]: pgmap v4137: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1792754580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/185607013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:51 compute-1 nova_compute[226101]: 2025-12-06 08:29:51.802 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:51 compute-1 virtqemud[225710]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 06 08:29:51 compute-1 virtqemud[225710]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 06 08:29:51 compute-1 virtqemud[225710]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 06 08:29:52 compute-1 nova_compute[226101]: 2025-12-06 08:29:52.176 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:29:52 compute-1 nova_compute[226101]: 2025-12-06 08:29:52.178 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:29:52 compute-1 nova_compute[226101]: 2025-12-06 08:29:52.178 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:29:52 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: cache status {prefix=cache status} (starting...)
Dec 06 08:29:52 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:52 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: client ls {prefix=client ls} (starting...)
Dec 06 08:29:52 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:52 compute-1 lvm[319127]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 08:29:52 compute-1 lvm[319127]: VG ceph_vg0 finished
Dec 06 08:29:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:53.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:53 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:53.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: damage ls {prefix=damage ls} (starting...)
Dec 06 08:29:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump loads {prefix=dump loads} (starting...)
Dec 06 08:29:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 06 08:29:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 06 08:29:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 06 08:29:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2589679709' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 06 08:29:54 compute-1 ceph-mon[81689]: pgmap v4138: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:54 compute-1 nova_compute[226101]: 2025-12-06 08:29:54.155 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 06 08:29:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:29:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1676675540' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 06 08:29:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 06 08:29:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec 06 08:29:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3727707845' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:29:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: ops {prefix=ops} (starting...)
Dec 06 08:29:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.45386 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.45392 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2589679709' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: pgmap v4139: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.46240 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1676675540' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.46255 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.37143 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/201956814' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.45416 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3727707845' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2728014241' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d70d6f0 =====
Dec 06 08:29:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d70d6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:55 compute-1 radosgw[82404]: beast: 0x7fc73d70d6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec 06 08:29:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2337824083' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec 06 08:29:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1783511862' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 06 08:29:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3402814757' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:55 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: session ls {prefix=session ls} (starting...)
Dec 06 08:29:55 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:29:55 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: status {prefix=status} (starting...)
Dec 06 08:29:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 06 08:29:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3166734127' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.37158 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2740306402' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2337824083' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1783511862' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1262805181' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.46270 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3793706944' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3402814757' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.45455 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.37185 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2945805798' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/863163011' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1809216193' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3166734127' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 06 08:29:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2964182196' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:29:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 06 08:29:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3992149126' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:56 compute-1 nova_compute[226101]: 2025-12-06 08:29:56.804 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 08:29:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/586738722' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec 06 08:29:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/486751230' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:29:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:57.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:57.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec 06 08:29:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2792455046' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.45473 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2943769363' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2980343332' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: pgmap v4140: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.46315 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/291798680' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2964182196' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3992149126' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.46321 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2848158878' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4207179364' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.37230 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/586738722' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/589179532' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/486751230' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1722038647' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4108570152' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 06 08:29:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2100372371' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:29:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec 06 08:29:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1074309655' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec 06 08:29:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/538665449' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.37242 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2792455046' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3902118403' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.45518 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3005104597' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/886029708' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/905335147' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2100372371' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1074309655' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3607517829' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2015455620' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2206128586' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3947966807' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3263639216' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/538665449' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:29:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 06 08:29:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1008488265' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:59.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:29:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:59 compute-1 nova_compute[226101]: 2025-12-06 08:29:59.157 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 06 08:29:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/209309539' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a1e15000/0x0/0x1bfc00000, data 0x60fb73f/0x6319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:10.111743+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437043200 unmapped: 79224832 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b555389800 session 0x55b554abb2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b554b20000 session 0x55b5534e5a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:11.111957+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a1dda000/0x0/0x1bfc00000, data 0x613673f/0x6354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437059584 unmapped: 79208448 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a2768000/0x0/0x1bfc00000, data 0x557873f/0x5796000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b553872800 session 0x55b5522f2b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:12.112118+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437059584 unmapped: 79208448 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:13.112267+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a2999000/0x0/0x1bfc00000, data 0x557872f/0x5795000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437059584 unmapped: 79208448 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:14.112376+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4957339 data_alloc: 251658240 data_used: 49938432
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437067776 unmapped: 79200256 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:15.112548+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437067776 unmapped: 79200256 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:16.112777+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437198848 unmapped: 79069184 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.570928574s of 10.838595390s, submitted: 85
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b5566e3000 session 0x55b55348f0e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b5546cb800 session 0x55b555372960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:17.112933+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b558124c00 session 0x55b55336f860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b561042000 session 0x55b55336e3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437215232 unmapped: 79052800 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b553872800 session 0x55b5553d21e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b554b20000 session 0x55b5558fe780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:18.113153+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437223424 unmapped: 79044608 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a2d60000/0x0/0x1bfc00000, data 0x51b26d9/0x53cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:19.113480+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4923350 data_alloc: 251658240 data_used: 50769920
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437223424 unmapped: 79044608 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:20.113617+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b555389800 session 0x55b5529274a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b553872800 session 0x55b5550e7680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:21.113825+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:22.114033+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a3da8000/0x0/0x1bfc00000, data 0x416d6c9/0x4386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:23.114250+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a3da8000/0x0/0x1bfc00000, data 0x416d6c9/0x4386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:24.114538+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4730932 data_alloc: 234881024 data_used: 42237952
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:25.114772+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a3da8000/0x0/0x1bfc00000, data 0x416d6c9/0x4386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:26.114936+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a3da8000/0x0/0x1bfc00000, data 0x416d6c9/0x4386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:27.115144+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:28.115311+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:29.115470+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4730932 data_alloc: 234881024 data_used: 42237952
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:30.115628+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a3da8000/0x0/0x1bfc00000, data 0x416d6c9/0x4386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:31.115841+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.811237335s of 14.762109756s, submitted: 116
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:32.116008+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a3da8000/0x0/0x1bfc00000, data 0x416d6c9/0x4386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434798592 unmapped: 81469440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:33.116213+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434806784 unmapped: 81461248 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:34.116352+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a3da7000/0x0/0x1bfc00000, data 0x416d6c9/0x4386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4731108 data_alloc: 234881024 data_used: 42237952
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434806784 unmapped: 81461248 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:35.116473+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b552b37800 session 0x55b55336fa40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b55537b800 session 0x55b555372d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434806784 unmapped: 81461248 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:36.116639+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b554b20000 session 0x55b55271fa40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429211648 unmapped: 87056384 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:37.116806+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429211648 unmapped: 87056384 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:38.116998+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429211648 unmapped: 87056384 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e98000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:39.117195+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4555394 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429219840 unmapped: 87048192 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:40.117370+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429277184 unmapped: 86990848 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:41.117603+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429277184 unmapped: 86990848 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:42.117785+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429277184 unmapped: 86990848 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:43.117927+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.269996643s of 11.734186172s, submitted: 140
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429285376 unmapped: 86982656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:44.118079+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4555290 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429285376 unmapped: 86982656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:45.118212+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429293568 unmapped: 86974464 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:46.118563+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429301760 unmapped: 86966272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:47.118808+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429309952 unmapped: 86958080 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:48.118979+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429342720 unmapped: 86925312 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:49.119108+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4555290 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429342720 unmapped: 86925312 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:50.119271+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429359104 unmapped: 86908928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:51.119501+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429359104 unmapped: 86908928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:52.119650+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429359104 unmapped: 86908928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:53.119808+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429359104 unmapped: 86908928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:54.119977+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4555218 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429359104 unmapped: 86908928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:55.120192+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429367296 unmapped: 86900736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:56.120371+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429367296 unmapped: 86900736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:57.120516+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429367296 unmapped: 86900736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:58.120856+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429367296 unmapped: 86900736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:59.121007+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4555218 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429367296 unmapped: 86900736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:00.121160+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429367296 unmapped: 86900736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:01.121359+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429367296 unmapped: 86900736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:02.121481+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429367296 unmapped: 86900736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:03.121759+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429375488 unmapped: 86892544 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:04.121957+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4555218 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429375488 unmapped: 86892544 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:05.122110+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429375488 unmapped: 86892544 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:06.122253+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429375488 unmapped: 86892544 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:07.122490+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429375488 unmapped: 86892544 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:08.122695+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429383680 unmapped: 86884352 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:09.122964+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4555218 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429383680 unmapped: 86884352 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:10.123134+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429383680 unmapped: 86884352 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:11.123352+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429391872 unmapped: 86876160 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:12.123518+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429391872 unmapped: 86876160 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:13.123663+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429391872 unmapped: 86876160 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:14.123861+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4555218 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429391872 unmapped: 86876160 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:15.124027+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429391872 unmapped: 86876160 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:16.124131+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429400064 unmapped: 86867968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:17.124291+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429400064 unmapped: 86867968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:18.124494+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429400064 unmapped: 86867968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:19.124657+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4555218 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429400064 unmapped: 86867968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:20.124816+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429400064 unmapped: 86867968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:21.125013+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429400064 unmapped: 86867968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:22.125169+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429400064 unmapped: 86867968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e99000/0x0/0x1bfc00000, data 0x307e657/0x3295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:23.125322+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429400064 unmapped: 86867968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:24.125402+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4555218 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429400064 unmapped: 86867968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:25.125547+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b558124c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b558124c00 session 0x55b5533f1680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b552b37800 session 0x55b55348f680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b553872800 session 0x55b554aba1e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b554b20000 session 0x55b5550e6d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537b800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.065654755s of 42.362091064s, submitted: 192
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429400064 unmapped: 86867968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:26.125685+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4e98000/0x0/0x1bfc00000, data 0x307e667/0x3296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429400064 unmapped: 86867968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:27.125846+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429416448 unmapped: 86851584 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:28.126048+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438149120 unmapped: 78118912 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:29.126247+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b55537b800 session 0x55b554f232c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561042000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b561042000 session 0x55b5534ae3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b552b37800 session 0x55b55395d0e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b553872800 session 0x55b5550e6960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4628000/0x0/0x1bfc00000, data 0x38ee667/0x3b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b554b20000 session 0x55b5558fe3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4628844 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429424640 unmapped: 86843392 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:30.126395+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4625000/0x0/0x1bfc00000, data 0x38f1667/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429424640 unmapped: 86843392 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4625000/0x0/0x1bfc00000, data 0x38f1667/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:31.126625+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429424640 unmapped: 86843392 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:32.126815+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429424640 unmapped: 86843392 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:33.126977+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4625000/0x0/0x1bfc00000, data 0x38f1667/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429424640 unmapped: 86843392 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:34.127180+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4628844 data_alloc: 234881024 data_used: 35753984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537b800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b55537b800 session 0x55b5550de5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429424640 unmapped: 86843392 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:35.127319+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429432832 unmapped: 86835200 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:36.127465+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5566e3000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b5566e3000 session 0x55b554eded20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429432832 unmapped: 86835200 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:37.127635+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4625000/0x0/0x1bfc00000, data 0x38f1667/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b552b37800 session 0x55b5533ef4a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.456848145s of 11.795546532s, submitted: 16
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429432832 unmapped: 86835200 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b5590ea800 session 0x55b5559d5a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:38.127806+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b553872800 session 0x55b553912d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590ea800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537b800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429432832 unmapped: 86835200 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:39.127968+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4629585 data_alloc: 234881024 data_used: 35762176
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429441024 unmapped: 86827008 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:40.128087+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4624000/0x0/0x1bfc00000, data 0x38f168a/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429580288 unmapped: 86687744 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:41.128253+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429580288 unmapped: 86687744 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:42.128485+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429580288 unmapped: 86687744 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:43.128621+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429580288 unmapped: 86687744 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:44.128766+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4684625 data_alloc: 234881024 data_used: 43479040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429580288 unmapped: 86687744 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a4624000/0x0/0x1bfc00000, data 0x38f168a/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:45.128893+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429580288 unmapped: 86687744 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:46.129074+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429580288 unmapped: 86687744 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:47.129207+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5566e3000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b5566e3000 session 0x55b5539123c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b5546bcc00 session 0x55b5559d5680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bec00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b5546bec00 session 0x55b5527c74a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.715276718s of 10.001116753s, submitted: 21
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b552b37800 session 0x55b554edfe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b553872800 session 0x55b5534e4d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429768704 unmapped: 86499328 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:48.129348+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429768704 unmapped: 86499328 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:49.129531+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4756494 data_alloc: 234881024 data_used: 43483136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429768704 unmapped: 86499328 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:50.129723+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a3cc5000/0x0/0x1bfc00000, data 0x425068a/0x4469000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 429768704 unmapped: 86499328 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:51.129875+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433086464 unmapped: 83181568 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:52.130017+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433086464 unmapped: 83181568 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:53.130152+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433094656 unmapped: 83173376 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:54.130396+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4827196 data_alloc: 234881024 data_used: 43855872
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433094656 unmapped: 83173376 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:55.130644+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b5546bcc00 session 0x55b5550de5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433102848 unmapped: 83165184 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:56.130815+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5566e3000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a34b4000/0x0/0x1bfc00000, data 0x4a6168a/0x4c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433102848 unmapped: 83165184 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:57.130962+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a34b4000/0x0/0x1bfc00000, data 0x4a6168a/0x4c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434380800 unmapped: 81887232 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:58.131135+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.893210411s of 10.627355576s, submitted: 82
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 81068032 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:59.131312+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4895101 data_alloc: 251658240 data_used: 53596160
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 81068032 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:00.131514+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 81068032 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:01.131684+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 81068032 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:02.131845+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 81068032 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:03.131963+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a34b1000/0x0/0x1bfc00000, data 0x4a6468a/0x4c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 81068032 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:04.132109+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4895101 data_alloc: 251658240 data_used: 53596160
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 81068032 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:05.132272+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 81068032 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a34b1000/0x0/0x1bfc00000, data 0x4a6468a/0x4c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:06.132392+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 81068032 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:07.132562+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435200000 unmapped: 81068032 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:08.170314+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a34b1000/0x0/0x1bfc00000, data 0x4a6468a/0x4c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.561195374s of 10.569696426s, submitted: 3
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438222848 unmapped: 78045184 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:09.170451+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4932629 data_alloc: 251658240 data_used: 54104064
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438657024 unmapped: 77611008 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:10.170552+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438657024 unmapped: 77611008 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:11.170693+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438657024 unmapped: 77611008 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:12.170832+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a2fc5000/0x0/0x1bfc00000, data 0x4f5068a/0x5169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438771712 unmapped: 77496320 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:13.170994+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a2fc5000/0x0/0x1bfc00000, data 0x4f5068a/0x5169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438771712 unmapped: 77496320 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:14.171145+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4938957 data_alloc: 251658240 data_used: 54751232
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x4f5168a/0x516a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438771712 unmapped: 77496320 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:15.171315+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438779904 unmapped: 77488128 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b5590ea800 session 0x55b554edf860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b55537b800 session 0x55b553389860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:16.171536+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x4f5168a/0x516a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438779904 unmapped: 77488128 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:17.171726+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438779904 unmapped: 77488128 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x4f5168a/0x516a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:18.171904+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438779904 unmapped: 77488128 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:19.172051+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4938957 data_alloc: 251658240 data_used: 54751232
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438779904 unmapped: 77488128 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:20.172186+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x4f5168a/0x516a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438779904 unmapped: 77488128 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:21.172357+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438779904 unmapped: 77488128 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x4f5168a/0x516a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:22.172507+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438779904 unmapped: 77488128 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:23.172703+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x4f5168a/0x516a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438796288 unmapped: 77471744 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:24.172857+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4938957 data_alloc: 251658240 data_used: 54751232
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438796288 unmapped: 77471744 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:25.173114+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537b800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 ms_handle_reset con 0x55b55537b800 session 0x55b5533f0d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438796288 unmapped: 77471744 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:26.173245+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.420572281s of 17.578294754s, submitted: 60
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 ms_handle_reset con 0x55b552b37800 session 0x55b5534ae960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 ms_handle_reset con 0x55b553872800 session 0x55b5529d0d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 ms_handle_reset con 0x55b5546bcc00 session 0x55b5533efe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438804480 unmapped: 77463552 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:27.173386+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590ea800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 ms_handle_reset con 0x55b5590ea800 session 0x55b5550dfa40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438804480 unmapped: 77463552 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:28.173572+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 ms_handle_reset con 0x55b552b37800 session 0x55b5550de1e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a2fc0000/0x0/0x1bfc00000, data 0x4f53356/0x516d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438804480 unmapped: 77463552 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:29.173912+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4942747 data_alloc: 251658240 data_used: 54763520
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438804480 unmapped: 77463552 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:30.174151+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a2fc0000/0x0/0x1bfc00000, data 0x4f53356/0x516d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438812672 unmapped: 77455360 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:31.174324+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438812672 unmapped: 77455360 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:32.174468+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438812672 unmapped: 77455360 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:33.174630+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a2fc0000/0x0/0x1bfc00000, data 0x4f53356/0x516d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438812672 unmapped: 77455360 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:34.174792+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4943387 data_alloc: 251658240 data_used: 54853632
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438812672 unmapped: 77455360 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:35.174961+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438812672 unmapped: 77455360 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:36.175146+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a2fc0000/0x0/0x1bfc00000, data 0x4f53356/0x516d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438812672 unmapped: 77455360 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:37.175308+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 ms_handle_reset con 0x55b553872800 session 0x55b554abb2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 ms_handle_reset con 0x55b5546bcc00 session 0x55b553499860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438812672 unmapped: 77455360 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:38.175506+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a2fc0000/0x0/0x1bfc00000, data 0x4f53356/0x516d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438812672 unmapped: 77455360 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:39.175711+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4943387 data_alloc: 251658240 data_used: 54853632
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438820864 unmapped: 77447168 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:40.175878+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438820864 unmapped: 77447168 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:41.176117+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.128643036s of 15.134619713s, submitted: 2
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438820864 unmapped: 77447168 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:42.176257+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 heartbeat osd_stat(store_statfs(0x1a2fc0000/0x0/0x1bfc00000, data 0x4f53356/0x516d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438829056 unmapped: 77438976 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:43.176487+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438829056 unmapped: 77438976 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:44.176606+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4944523 data_alloc: 251658240 data_used: 54923264
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438829056 unmapped: 77438976 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:45.176765+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537b800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 ms_handle_reset con 0x55b55537b800 session 0x55b5558fe5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538a800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 ms_handle_reset con 0x55b55538a800 session 0x55b5534e52c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438878208 unmapped: 77389824 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:46.176951+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 ms_handle_reset con 0x55b552b37800 session 0x55b5533ee000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 412 ms_handle_reset con 0x55b553872800 session 0x55b555372000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 412 heartbeat osd_stat(store_statfs(0x1a2fc1000/0x0/0x1bfc00000, data 0x5fdb356/0x516d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438910976 unmapped: 77357056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:47.177123+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537b800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438910976 unmapped: 77357056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:48.177174+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 412 ms_handle_reset con 0x55b5566e3000 session 0x55b5550e6d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 412 ms_handle_reset con 0x55b554b20400 session 0x55b5550e72c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546cc000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437518336 unmapped: 78749696 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:49.177283+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 412 ms_handle_reset con 0x55b5546cc000 session 0x55b554ede000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4821017 data_alloc: 234881024 data_used: 43966464
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437542912 unmapped: 78725120 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:50.177440+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 412 heartbeat osd_stat(store_statfs(0x1a3e09000/0x0/0x1bfc00000, data 0x410a076/0x4325000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:51.177619+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437542912 unmapped: 78725120 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:52.177746+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437542912 unmapped: 78725120 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:53.177917+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437542912 unmapped: 78725120 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:54.178093+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437542912 unmapped: 78725120 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4821017 data_alloc: 234881024 data_used: 43966464
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.095635414s of 13.333178520s, submitted: 83
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:55.178266+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437551104 unmapped: 78716928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:56.178379+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437551104 unmapped: 78716928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3e05000/0x0/0x1bfc00000, data 0x410bc28/0x4328000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:57.178531+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437551104 unmapped: 78716928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:58.178660+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437551104 unmapped: 78716928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3e05000/0x0/0x1bfc00000, data 0x410bc28/0x4328000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:59.178870+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437551104 unmapped: 78716928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4825991 data_alloc: 234881024 data_used: 43995136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:00.179033+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437551104 unmapped: 78716928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:01.179185+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437559296 unmapped: 78708736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3e05000/0x0/0x1bfc00000, data 0x410bc28/0x4328000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:02.179331+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437559296 unmapped: 78708736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:03.179467+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437559296 unmapped: 78708736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:04.179632+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437559296 unmapped: 78708736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4833511 data_alloc: 234881024 data_used: 44875776
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:05.179756+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437559296 unmapped: 78708736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:06.179865+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437559296 unmapped: 78708736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3e05000/0x0/0x1bfc00000, data 0x410bc28/0x4328000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:07.180011+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437559296 unmapped: 78708736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:08.180178+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437559296 unmapped: 78708736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5546bcc00 session 0x55b5550e74a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b55537b800 session 0x55b553426780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:09.180339+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437559296 unmapped: 78708736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.603641510s of 14.612390518s, submitted: 12
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4725645 data_alloc: 234881024 data_used: 44875776
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:10.180502+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437567488 unmapped: 78700544 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:11.180671+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 84287488 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:12.180835+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b55336f680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 84287488 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a4e8f000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:13.180968+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 84287488 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:14.181162+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 84287488 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4596533 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:15.181308+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 84287488 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:16.181491+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 84287488 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a4e8f000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:17.181662+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 84287488 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:18.181983+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 431980544 unmapped: 84287488 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:19.182227+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 431988736 unmapped: 84279296 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4596533 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a4e8f000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:20.182455+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 431996928 unmapped: 84271104 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.626304626s of 10.962327957s, submitted: 28
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b553872800 session 0x55b5522f2b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554b20400 session 0x55b5553d32c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b554f234a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b553872800 session 0x55b55392b860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5546bcc00 session 0x55b55463a3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a4e8f000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:21.182682+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433217536 unmapped: 83050496 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:22.182999+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433217536 unmapped: 83050496 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:23.183212+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433217536 unmapped: 83050496 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:24.183407+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433217536 unmapped: 83050496 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4668235 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:25.183640+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433217536 unmapped: 83050496 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:26.183860+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433217536 unmapped: 83050496 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a4684000/0x0/0x1bfc00000, data 0x388ec57/0x3aaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:27.184054+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433217536 unmapped: 83050496 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537b800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b55537b800 session 0x55b55392d860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:28.184212+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433217536 unmapped: 83050496 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5566e3000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5566e3000 session 0x55b5527c61e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:29.184400+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433217536 unmapped: 83050496 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b552ee4f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b553872800 session 0x55b555373e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4674619 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537b800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:30.184673+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433119232 unmapped: 83148800 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a465e000/0x0/0x1bfc00000, data 0x38b2c8a/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:31.184844+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433029120 unmapped: 83238912 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:32.185119+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 80453632 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:33.185324+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 80453632 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:34.185466+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 80453632 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a465e000/0x0/0x1bfc00000, data 0x38b2c8a/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4733123 data_alloc: 234881024 data_used: 43839488
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:35.185673+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 80453632 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:36.185915+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 80453632 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:37.186056+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 80453632 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a465e000/0x0/0x1bfc00000, data 0x38b2c8a/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:38.186282+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 80453632 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:39.186693+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 80453632 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4733123 data_alloc: 234881024 data_used: 43839488
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:40.186865+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a465e000/0x0/0x1bfc00000, data 0x38b2c8a/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 80453632 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:41.187271+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 80453632 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:42.187550+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435814400 unmapped: 80453632 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.062652588s of 22.274658203s, submitted: 59
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:43.187807+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a465e000/0x0/0x1bfc00000, data 0x38b2c8a/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438517760 unmapped: 77750272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3f0e000/0x0/0x1bfc00000, data 0x3beac8a/0x3e08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:44.188080+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438517760 unmapped: 77750272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9dc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552c9dc00 session 0x55b554abb860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557d7e400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b557d7e400 session 0x55b5529d0b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bac00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5546bac00 session 0x55b553912d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b5559d52c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9dc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552c9dc00 session 0x55b5559d4960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4791649 data_alloc: 234881024 data_used: 45199360
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:45.188261+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438616064 unmapped: 77651968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:46.188467+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438616064 unmapped: 77651968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:47.188621+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3c70000/0x0/0x1bfc00000, data 0x3e8ac8a/0x40a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438616064 unmapped: 77651968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:48.188885+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438616064 unmapped: 77651968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:49.189150+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438616064 unmapped: 77651968 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4796095 data_alloc: 234881024 data_used: 45101056
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:50.189401+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438640640 unmapped: 77627392 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3c70000/0x0/0x1bfc00000, data 0x3e8ac8a/0x40a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:51.189706+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438640640 unmapped: 77627392 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3c57000/0x0/0x1bfc00000, data 0x3ea9c8a/0x40c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:52.189869+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438640640 unmapped: 77627392 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b553872800 session 0x55b5553725a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3c57000/0x0/0x1bfc00000, data 0x3ea9c8a/0x40c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:53.190008+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557d7e400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438640640 unmapped: 77627392 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:54.190372+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438706176 unmapped: 77561856 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.657628059s of 11.943797112s, submitted: 118
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5546bcc00 session 0x55b553427e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b55537b800 session 0x55b55506c960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4797775 data_alloc: 251658240 data_used: 45723648
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:55.190522+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b5522f2b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434479104 unmapped: 81788928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:56.190726+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434479104 unmapped: 81788928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a49b2000/0x0/0x1bfc00000, data 0x314ebf5/0x3369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:57.190905+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434479104 unmapped: 81788928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:58.191119+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434479104 unmapped: 81788928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:59.191284+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434479104 unmapped: 81788928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4626266 data_alloc: 234881024 data_used: 36397056
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:00.191477+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434479104 unmapped: 81788928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a49b2000/0x0/0x1bfc00000, data 0x314ebf5/0x3369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:01.191673+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434479104 unmapped: 81788928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:02.191812+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434479104 unmapped: 81788928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:03.191943+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434479104 unmapped: 81788928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:04.192090+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434479104 unmapped: 81788928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4626746 data_alloc: 234881024 data_used: 36409344
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:05.192269+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434479104 unmapped: 81788928 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.743455887s of 10.950803757s, submitted: 69
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a49b2000/0x0/0x1bfc00000, data 0x314ebf5/0x3369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [0,0,0,0,15,2])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:06.192430+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433987584 unmapped: 82280448 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:07.192571+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435208192 unmapped: 81059840 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:08.192720+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435429376 unmapped: 80838656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:09.192882+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435429376 unmapped: 80838656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a424f000/0x0/0x1bfc00000, data 0x38b4bf5/0x3acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4687600 data_alloc: 234881024 data_used: 36438016
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:10.193064+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435429376 unmapped: 80838656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:11.193251+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435429376 unmapped: 80838656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:12.193476+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435429376 unmapped: 80838656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:13.193717+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435429376 unmapped: 80838656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:14.193882+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435429376 unmapped: 80838656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4686708 data_alloc: 234881024 data_used: 36438016
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:15.194103+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435429376 unmapped: 80838656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a422b000/0x0/0x1bfc00000, data 0x38d8bf5/0x3af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:16.194331+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435429376 unmapped: 80838656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:17.194464+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435429376 unmapped: 80838656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:18.194686+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435429376 unmapped: 80838656 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:19.194927+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.134960175s of 13.690268517s, submitted: 65
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a422b000/0x0/0x1bfc00000, data 0x38d8bf5/0x3af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435437568 unmapped: 80830464 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4686588 data_alloc: 234881024 data_used: 36438016
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:20.195157+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435437568 unmapped: 80830464 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9dc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552c9dc00 session 0x55b55348a1e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b553872800 session 0x55b5559d43c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5546bcc00 session 0x55b5559d4f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba000 session 0x55b55336e3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b55392a780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:21.195397+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435445760 unmapped: 80822272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:22.195670+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3cf7000/0x0/0x1bfc00000, data 0x3e0cbf5/0x4027000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435445760 unmapped: 80822272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3cf7000/0x0/0x1bfc00000, data 0x3e0cbf5/0x4027000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:23.195818+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435445760 unmapped: 80822272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:24.195988+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435445760 unmapped: 80822272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4729062 data_alloc: 234881024 data_used: 36438016
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:25.196128+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435445760 unmapped: 80822272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3cf7000/0x0/0x1bfc00000, data 0x3e0cbf5/0x4027000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:26.196280+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435445760 unmapped: 80822272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9dc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552c9dc00 session 0x55b5534ae3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:27.196506+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b5558ff2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435445760 unmapped: 80822272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b557d7e400 session 0x55b553368960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e42800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e42800 session 0x55b554edf2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b55463be00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b554edfe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:28.196642+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435445760 unmapped: 80822272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:29.196864+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9dc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435445760 unmapped: 80822272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:30.197024+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4735942 data_alloc: 234881024 data_used: 37359616
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435445760 unmapped: 80822272 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3cf7000/0x0/0x1bfc00000, data 0x3e0cbf5/0x4027000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:31.197280+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435871744 unmapped: 80396288 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:32.197447+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435871744 unmapped: 80396288 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:33.197722+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435871744 unmapped: 80396288 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:34.197897+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435871744 unmapped: 80396288 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:35.198116+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759622 data_alloc: 234881024 data_used: 40738816
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435871744 unmapped: 80396288 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3cf7000/0x0/0x1bfc00000, data 0x3e0cbf5/0x4027000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:36.198274+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435871744 unmapped: 80396288 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:37.198503+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435871744 unmapped: 80396288 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:38.198754+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.052833557s of 19.143138885s, submitted: 10
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435871744 unmapped: 80396288 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3cf7000/0x0/0x1bfc00000, data 0x3e0cbf5/0x4027000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:39.198987+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435879936 unmapped: 80388096 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:40.199206+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759270 data_alloc: 234881024 data_used: 40738816
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435879936 unmapped: 80388096 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b553872800 session 0x55b55506cb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5546bcc00 session 0x55b5559d45a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557d7e400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:41.199372+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435879936 unmapped: 80388096 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b557d7e400 session 0x55b554f22780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:42.199554+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432717824 unmapped: 83550208 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:43.199744+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432349184 unmapped: 83918848 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3c5d000/0x0/0x1bfc00000, data 0x3ea6bf5/0x40c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:44.199946+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432300032 unmapped: 83968000 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:45.200124+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4783012 data_alloc: 234881024 data_used: 41570304
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432300032 unmapped: 83968000 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:46.200279+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432300032 unmapped: 83968000 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:47.200479+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432300032 unmapped: 83968000 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:48.200671+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432300032 unmapped: 83968000 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:49.200832+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432300032 unmapped: 83968000 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3c45000/0x0/0x1bfc00000, data 0x3ebdbf5/0x40d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:50.200992+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4783332 data_alloc: 234881024 data_used: 41578496
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432300032 unmapped: 83968000 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:51.201213+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432300032 unmapped: 83968000 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:52.201519+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432300032 unmapped: 83968000 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:53.201735+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432308224 unmapped: 83959808 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:54.201952+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432308224 unmapped: 83959808 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:55.202119+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4784292 data_alloc: 234881024 data_used: 41648128
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3c45000/0x0/0x1bfc00000, data 0x3ebdbf5/0x40d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432324608 unmapped: 83943424 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:56.202373+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432324608 unmapped: 83943424 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:57.202532+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432324608 unmapped: 83943424 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:58.202794+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432324608 unmapped: 83943424 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:59.203005+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432324608 unmapped: 83943424 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3c45000/0x0/0x1bfc00000, data 0x3ebdbf5/0x40d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:00.203208+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4784292 data_alloc: 234881024 data_used: 41648128
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432324608 unmapped: 83943424 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3c45000/0x0/0x1bfc00000, data 0x3ebdbf5/0x40d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:01.203515+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432324608 unmapped: 83943424 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:02.203885+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b55392da40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b553872800 session 0x55b5550dfe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5546bcc00 session 0x55b5550e70e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b55506d2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.135629654s of 24.070560455s, submitted: 75
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432324608 unmapped: 83943424 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b55918a800 session 0x55b55348f0e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b553404780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b553872800 session 0x55b5550e61e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5546bcc00 session 0x55b5553d21e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b5559d5e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:03.204007+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432340992 unmapped: 83927040 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:04.204204+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432340992 unmapped: 83927040 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3bdf000/0x0/0x1bfc00000, data 0x3f23c05/0x413f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:05.204339+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4791838 data_alloc: 234881024 data_used: 41648128
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432340992 unmapped: 83927040 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:06.204571+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432340992 unmapped: 83927040 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:07.204785+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 432340992 unmapped: 83927040 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:08.204956+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554698000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 433758208 unmapped: 82509824 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e41400 session 0x55b552ee5c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:09.205095+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554698000 session 0x55b552ee4000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e41400 session 0x55b5534afc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b5534a4f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a388c000/0x0/0x1bfc00000, data 0x4276c05/0x4492000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b553872800 session 0x55b554edef00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bcc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5546bcc00 session 0x55b553404780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434069504 unmapped: 82198528 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:10.205265+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4832338 data_alloc: 234881024 data_used: 41652224
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a388c000/0x0/0x1bfc00000, data 0x4276c05/0x4492000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434069504 unmapped: 82198528 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:11.205509+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434077696 unmapped: 82190336 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:12.205692+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434077696 unmapped: 82190336 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a388c000/0x0/0x1bfc00000, data 0x4276c05/0x4492000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:13.205844+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434077696 unmapped: 82190336 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:14.206052+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434077696 unmapped: 82190336 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:15.206210+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.618503571s of 12.740406036s, submitted: 37
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4833030 data_alloc: 234881024 data_used: 41783296
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434077696 unmapped: 82190336 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:16.206402+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434077696 unmapped: 82190336 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:17.206611+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554698000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434077696 unmapped: 82190336 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554698000 session 0x55b5534261e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:18.206746+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a388a000/0x0/0x1bfc00000, data 0x4277c05/0x4493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434077696 unmapped: 82190336 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:19.206912+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434077696 unmapped: 82190336 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:20.207071+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4833162 data_alloc: 234881024 data_used: 41783296
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 434503680 unmapped: 81764352 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:21.207252+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 81108992 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:22.207383+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 81108992 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:23.207556+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a388a000/0x0/0x1bfc00000, data 0x4277c05/0x4493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 435159040 unmapped: 81108992 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:24.207679+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438255616 unmapped: 78012416 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:25.207836+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.118734360s of 10.038772583s, submitted: 32
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4892126 data_alloc: 234881024 data_used: 44847104
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438255616 unmapped: 78012416 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:26.207960+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438280192 unmapped: 77987840 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:27.208168+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a3394000/0x0/0x1bfc00000, data 0x4766c05/0x4982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 438280192 unmapped: 77987840 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:28.208296+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a335f000/0x0/0x1bfc00000, data 0x47a3c05/0x49bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,3])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437608448 unmapped: 78659584 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:29.208496+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a335f000/0x0/0x1bfc00000, data 0x47a3c05/0x49bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437608448 unmapped: 78659584 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:30.208646+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4906188 data_alloc: 234881024 data_used: 45010944
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 437608448 unmapped: 78659584 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:31.208797+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 440958976 unmapped: 75309056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:32.208975+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 442507264 unmapped: 73760768 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:33.209131+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a187a000/0x0/0x1bfc00000, data 0x50e6c05/0x5302000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 72802304 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:34.209275+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 443465728 unmapped: 72802304 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:35.209541+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.232456207s of 10.069496155s, submitted: 130
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4997582 data_alloc: 251658240 data_used: 47308800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1735000/0x0/0x1bfc00000, data 0x522bc05/0x5447000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444547072 unmapped: 71720960 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:36.209662+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444776448 unmapped: 71491584 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:37.209804+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b55506d2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b553872800 session 0x55b5529d1a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b558124400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444801024 unmapped: 71467008 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:38.209931+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444817408 unmapped: 71450624 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:39.210066+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444817408 unmapped: 71450624 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:40.210224+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b558124400 session 0x55b5529d01e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4953814 data_alloc: 251658240 data_used: 47042560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444833792 unmapped: 71434240 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1ca9000/0x0/0x1bfc00000, data 0x4cbabf5/0x4ed5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:41.210406+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444833792 unmapped: 71434240 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:42.210612+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444833792 unmapped: 71434240 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:43.210746+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444833792 unmapped: 71434240 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:44.210921+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552c9dc00 session 0x55b5538834a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b554edfc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b5559e2000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444014592 unmapped: 72253440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1ca9000/0x0/0x1bfc00000, data 0x4cbabf5/0x4ed5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:45.211100+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798880 data_alloc: 234881024 data_used: 41181184
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444014592 unmapped: 72253440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:46.211241+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.432186127s of 11.353729248s, submitted: 90
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444014592 unmapped: 72253440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:47.211370+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a2ae4000/0x0/0x1bfc00000, data 0x3e7fbf5/0x409a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444014592 unmapped: 72253440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:48.211569+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a2ac3000/0x0/0x1bfc00000, data 0x3ea0bf5/0x40bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444014592 unmapped: 72253440 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:49.211715+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e41400 session 0x55b5522f2000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b55336e3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9dc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444030976 unmapped: 72237056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552c9dc00 session 0x55b554f22780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:50.211907+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4648094 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444030976 unmapped: 72237056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:51.212138+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444030976 unmapped: 72237056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a38e0000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:52.212295+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a38e0000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444030976 unmapped: 72237056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:53.212489+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444030976 unmapped: 72237056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:54.212680+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444030976 unmapped: 72237056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:55.212878+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4648094 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444030976 unmapped: 72237056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:56.213053+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444030976 unmapped: 72237056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:57.213212+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a38e0000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444030976 unmapped: 72237056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:58.213374+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444030976 unmapped: 72237056 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:59.213530+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444047360 unmapped: 72220672 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:00.213704+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4648094 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444047360 unmapped: 72220672 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:01.213886+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a38e0000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444047360 unmapped: 72220672 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:02.214014+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444047360 unmapped: 72220672 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:03.214202+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444047360 unmapped: 72220672 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:04.214386+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444047360 unmapped: 72220672 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:05.214555+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4648094 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444047360 unmapped: 72220672 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:06.214757+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a38e0000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444047360 unmapped: 72220672 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:07.214978+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444055552 unmapped: 72212480 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:08.215153+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a38e0000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a38e0000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444055552 unmapped: 72212480 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:09.215329+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444055552 unmapped: 72212480 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:10.215463+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4648094 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444055552 unmapped: 72212480 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:11.215620+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444055552 unmapped: 72212480 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:12.215753+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a38e0000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444055552 unmapped: 72212480 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:13.215859+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444055552 unmapped: 72212480 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:14.215981+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b553426780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b5534e50e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b5559d5860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e41400 session 0x55b5534b9680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.761205673s of 27.858575821s, submitted: 38
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b553872800 session 0x55b5559d5680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b5533ef860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b5529fa000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b5559434a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e41400 session 0x55b5553d3a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444211200 unmapped: 72056832 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:15.216115+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4672391 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:16.216362+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:17.216547+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554698000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554698000 session 0x55b5529fab40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b554abb4a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b5533685a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b55271eb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:18.216753+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a36e8000/0x0/0x1bfc00000, data 0x3279c67/0x3496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [0,0,0,0,0,4])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e41400 session 0x55b5534e41e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b558124400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b558124400 session 0x55b5550e63c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b553426b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b5559d4b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b5526ee1e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:19.216898+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:20.217103+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4713972 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b558124400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b558124400 session 0x55b5534b8b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:21.217310+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555a1e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:22.217489+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:23.217641+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a31d7000/0x0/0x1bfc00000, data 0x3789c8a/0x39a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:24.217977+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:25.218131+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4728905 data_alloc: 234881024 data_used: 37613568
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559dd6800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:26.218285+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559dd6800 session 0x55b5534e4780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.985000610s of 12.166028976s, submitted: 49
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444219392 unmapped: 72048640 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:27.218440+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444350464 unmapped: 71917568 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:28.218558+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a31d7000/0x0/0x1bfc00000, data 0x3789c8a/0x39a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444776448 unmapped: 71491584 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:29.218701+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a31d7000/0x0/0x1bfc00000, data 0x3789c8a/0x39a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444776448 unmapped: 71491584 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:30.218850+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4767117 data_alloc: 234881024 data_used: 42958848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444776448 unmapped: 71491584 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:31.219028+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444776448 unmapped: 71491584 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:32.219205+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444776448 unmapped: 71491584 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:33.219359+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 444776448 unmapped: 71491584 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:34.219579+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446373888 unmapped: 69894144 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:35.219718+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a31d7000/0x0/0x1bfc00000, data 0x3789c8a/0x39a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4820569 data_alloc: 234881024 data_used: 43520000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446423040 unmapped: 69844992 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:36.219840+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446431232 unmapped: 69836800 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:37.219971+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446431232 unmapped: 69836800 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:38.220123+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1955000/0x0/0x1bfc00000, data 0x3e6bc8a/0x4089000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446431232 unmapped: 69836800 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.237733841s of 12.434546471s, submitted: 86
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:39.220251+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448823296 unmapped: 67444736 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:40.220397+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4874851 data_alloc: 234881024 data_used: 43900928
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448159744 unmapped: 68108288 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:41.220605+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 68100096 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:42.220775+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1303000/0x0/0x1bfc00000, data 0x44bdc8a/0x46db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 68100096 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:43.220930+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 68100096 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:44.221076+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 68100096 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1303000/0x0/0x1bfc00000, data 0x44bdc8a/0x46db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:45.221316+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4886293 data_alloc: 234881024 data_used: 43896832
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448167936 unmapped: 68100096 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:46.221483+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448176128 unmapped: 68091904 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:47.221634+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1303000/0x0/0x1bfc00000, data 0x44bdc8a/0x46db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448184320 unmapped: 68083712 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:48.221802+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 68075520 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:49.221935+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 68075520 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:50.222092+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.592626572s of 11.460935593s, submitted: 53
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4884797 data_alloc: 234881024 data_used: 43896832
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 68075520 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:51.222263+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 68075520 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:52.222507+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1300000/0x0/0x1bfc00000, data 0x44bfc8a/0x46dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b554edf860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b55506c960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 68075520 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:53.222657+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 68075520 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:54.222790+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b558124400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1301000/0x0/0x1bfc00000, data 0x44bfc8a/0x46dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448192512 unmapped: 68075520 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:55.222902+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4884657 data_alloc: 234881024 data_used: 43933696
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448208896 unmapped: 68059136 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:56.223024+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448208896 unmapped: 68059136 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:57.223147+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448208896 unmapped: 68059136 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:58.223286+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448208896 unmapped: 68059136 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:59.223446+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:00.223616+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448208896 unmapped: 68059136 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1301000/0x0/0x1bfc00000, data 0x44bfc8a/0x46dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4884817 data_alloc: 234881024 data_used: 43937792
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:01.223801+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448217088 unmapped: 68050944 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.892584801s of 10.906824112s, submitted: 4
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:02.223963+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448217088 unmapped: 68050944 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:03.224094+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448225280 unmapped: 68042752 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:04.224232+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448225280 unmapped: 68042752 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:05.224359+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448225280 unmapped: 68042752 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4885001 data_alloc: 234881024 data_used: 43937792
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:06.224492+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a12ff000/0x0/0x1bfc00000, data 0x44c1c8a/0x46df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 448225280 unmapped: 68042752 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a12fd000/0x0/0x1bfc00000, data 0x44c1c8a/0x46df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:07.224609+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449273856 unmapped: 66994176 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:08.224724+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449273856 unmapped: 66994176 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:09.224914+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449273856 unmapped: 66994176 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:10.225149+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a12fb000/0x0/0x1bfc00000, data 0x44c2c8a/0x46e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449273856 unmapped: 66994176 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4892297 data_alloc: 234881024 data_used: 44707840
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:11.225323+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449282048 unmapped: 66985984 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:12.225485+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449290240 unmapped: 66977792 heap: 516268032 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.772482872s of 10.803831100s, submitted: 9
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554e0cc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a12fd000/0x0/0x1bfc00000, data 0x44c2cb3/0x46e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554e0cc00 session 0x55b55348a780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5547f4400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5547f4400 session 0x55b55463a3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557b68000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b557b68000 session 0x55b5550dfc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:13.225681+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b5553d2780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449478656 unmapped: 70467584 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b555372780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:14.225860+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449478656 unmapped: 70467584 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:15.226023+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449478656 unmapped: 70467584 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4965058 data_alloc: 234881024 data_used: 44703744
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:16.226191+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449478656 unmapped: 70467584 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:17.226331+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b5558fed20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b558124400 session 0x55b5559d43c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449486848 unmapped: 70459392 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5547f4400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0abe000/0x0/0x1bfc00000, data 0x4d01cec/0x4f20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5547f4400 session 0x55b55336fc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:18.226473+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449495040 unmapped: 70451200 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:19.226827+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449495040 unmapped: 70451200 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b55392c960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b554abb0e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:20.227018+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449503232 unmapped: 70443008 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4830436 data_alloc: 234881024 data_used: 38432768
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b5534ae960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b558124400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b558124400 session 0x55b5550de000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:21.227184+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 70426624 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554e0cc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bfc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:22.227344+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 449519616 unmapped: 70426624 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:23.227542+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 68354048 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a163c000/0x0/0x1bfc00000, data 0x4182d0f/0x43a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:24.227699+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 68354048 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:25.227953+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 68354048 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4889812 data_alloc: 251658240 data_used: 45871104
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:26.228141+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 68354048 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a163c000/0x0/0x1bfc00000, data 0x4182d0f/0x43a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:27.228290+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 68354048 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:28.228501+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 68354048 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a163c000/0x0/0x1bfc00000, data 0x4182d0f/0x43a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.943431854s of 16.829010010s, submitted: 86
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:29.228659+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 68354048 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a163c000/0x0/0x1bfc00000, data 0x4182d0f/0x43a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:30.228846+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 68354048 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4889992 data_alloc: 251658240 data_used: 45871104
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:31.229038+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 68354048 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a163b000/0x0/0x1bfc00000, data 0x4183d0f/0x43a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:32.229535+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 68354048 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:33.229806+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 68354048 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:34.230126+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454139904 unmapped: 65806336 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:35.230469+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 65740800 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4987890 data_alloc: 251658240 data_used: 47845376
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:36.230727+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0b62000/0x0/0x1bfc00000, data 0x4c5bd0f/0x4e7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 65740800 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:37.230925+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 65740800 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:38.231088+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 65740800 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:39.231349+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 65740800 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:40.231490+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 65740800 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4988050 data_alloc: 251658240 data_used: 47849472
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:41.231852+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 65740800 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0b62000/0x0/0x1bfc00000, data 0x4c5bd0f/0x4e7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:42.232155+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.077936172s of 13.328265190s, submitted: 102
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454213632 unmapped: 65732608 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:43.232456+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454221824 unmapped: 65724416 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0b61000/0x0/0x1bfc00000, data 0x4c5cd0f/0x4e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:44.232673+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454221824 unmapped: 65724416 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:45.232917+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454221824 unmapped: 65724416 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4986246 data_alloc: 251658240 data_used: 47853568
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:46.233186+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454221824 unmapped: 65724416 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:47.233467+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454221824 unmapped: 65724416 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0b61000/0x0/0x1bfc00000, data 0x4c5cd0f/0x4e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:48.233817+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454221824 unmapped: 65724416 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:49.234029+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454221824 unmapped: 65724416 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:50.234503+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454221824 unmapped: 65724416 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4986422 data_alloc: 251658240 data_used: 47853568
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:51.234721+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454230016 unmapped: 65716224 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:52.235157+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6005.4 total, 600.0 interval
                                           Cumulative writes: 60K writes, 237K keys, 60K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.04 MB/s
                                           Cumulative WAL: 60K writes, 21K syncs, 2.80 writes per sync, written: 0.22 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4238 writes, 16K keys, 4238 commit groups, 1.0 writes per commit group, ingest: 17.71 MB, 0.03 MB/s
                                           Interval WAL: 4238 writes, 1708 syncs, 2.48 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      1.01              0.00         1    1.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.61              0.00         1    0.615       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.51              0.00         1    0.508       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.96              0.00         1    0.965       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6005.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b55116b610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
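
The block above is one of RocksDB's periodic statistics dumps, relayed through the OSD log with one "** Compaction Stats [CF] **" section per column family (O-1, O-2, L, P). Note that "occupancy: 18446744073709551615" in the block-cache lines is 2^64-1, i.e. an unsigned counter that was never populated (table_size is 0), not a real occupancy. A minimal sketch for pulling the per-level rows out of such a dump; the regexes are tailored to the exact layout printed above and are an assumption, not a stable RocksDB interface:

    import re

    HEADER = re.compile(r'\*\* Compaction Stats \[(?P<cf>[^\]]+)\] \*\*')
    LEVEL_ROW = re.compile(
        r'^\s*(?P<level>L\d+|Sum|Int)\s+(?P<files>\d+/\d+)\s+'
        r'(?P<size>[\d.]+ [KMGT]?B)\b')

    def parse_compaction_stats(lines):
        """Yield (column_family, level, files, size) from a RocksDB stats dump."""
        cf = None
        for line in lines:
            m = HEADER.search(line)
            if m:
                cf = m.group('cf')   # e.g. 'O-2', 'L', 'P'
                continue
            m = LEVEL_ROW.match(line)
            if m and cf:
                yield cf, m.group('level'), m.group('files'), m.group('size')

Fed the dump above, this yields rows such as ('O-2', 'L0', '1/0', '1.25 KB') followed by the Sum and Int rows for each column family.
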
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454230016 unmapped: 65716224 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.224835396s of 10.239822388s, submitted: 4
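
The _kv_sync_thread line reports how long BlueStore's kv sync thread sat idle within its sampling window and how many transaction batches it submitted; here the thread was busy only (10.239822388 - 10.224835396) / 10.239822388 ≈ 0.15% of the window. A tiny extractor, assuming the exact wording above:

    import re

    KV_SYNC = re.compile(r'_kv_sync_thread utilization: idle (?P<idle>[\d.]+)s '
                         r'of (?P<total>[\d.]+)s, submitted: (?P<n>\d+)')

    def busy_fraction(line):
        """Return the busy fraction of the kv sync thread, or None."""
        m = KV_SYNC.search(line)
        if not m:
            return None
        idle, total = float(m['idle']), float(m['total'])
        return (total - idle) / total   # ~0.0015 for the line above
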
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0b61000/0x0/0x1bfc00000, data 0x4c5dd0f/0x4e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
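
The heartbeat line embeds BlueStore's statfs counters as hex. Judging by store_statfs_t's stream operator in the Ceph sources, the first triple reads as available/internally_reserved/total and the "data" pair as stored/allocated, so this OSD would have 0x1bfc00000 = 7,511,998,464 bytes total with roughly 6.9% used; treat those field names as an assumption to verify against the running release. A decoding sketch under that assumption:

    import re

    # Field names follow store_statfs_t's operator<< in the Ceph sources;
    # verify against the release actually running before relying on them.
    STATFS = re.compile(
        r'store_statfs\(0x(?P<available>[0-9a-f]+)/0x(?P<reserved>[0-9a-f]+)'
        r'/0x(?P<total>[0-9a-f]+), data 0x(?P<stored>[0-9a-f]+)'
        r'/0x(?P<allocated>[0-9a-f]+)')

    def parse_statfs(line):
        m = STATFS.search(line)
        if not m:
            return None
        f = {k: int(v, 16) for k, v in m.groupdict().items()}
        f['used_pct'] = 100.0 * (f['total'] - f['available']) / f['total']
        return f   # used_pct ~6.9 for the heartbeat line above
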
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:53.235330+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454230016 unmapped: 65716224 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:54.235769+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454230016 unmapped: 65716224 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:55.235945+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454230016 unmapped: 65716224 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4986250 data_alloc: 251658240 data_used: 47853568
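
_resize_shards shows the MempoolThread splitting the prioritycache budget across four bins; the four *_alloc values above (1,191,182,336 + 234,881,024 + 1,124,073,472 + 251,658,240 = 2,801,795,072) sum to just under the reported cache_size of 2,845,415,833, leaving about 41.6 MiB of slack. A quick consistency check, with the field order assumed from this line's layout:

    import re

    SHARDS = re.compile(
        r'_resize_shards cache_size: (\d+) kv_alloc: (\d+) kv_used: (\d+) '
        r'kv_onode_alloc: (\d+) kv_onode_used: (\d+) meta_alloc: (\d+) '
        r'meta_used: (\d+) data_alloc: (\d+) data_used: (\d+)')

    def shard_report(line):
        m = SHARDS.search(line)
        if not m:
            return None
        cache, kv_a, _, on_a, _, me_a, _, da_a, _ = map(int, m.groups())
        allocated = kv_a + on_a + me_a + da_a
        return {'cache_mib': cache / 2**20,                 # ~2713.6
                'allocated_mib': allocated / 2**20,         # 2672.0
                'slack_mib': (cache - allocated) / 2**20}   # ~41.6
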
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:56.236232+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0b61000/0x0/0x1bfc00000, data 0x4c5dd0f/0x4e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454238208 unmapped: 65708032 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:57.236541+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0b61000/0x0/0x1bfc00000, data 0x4c5dd0f/0x4e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454238208 unmapped: 65708032 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:58.236883+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0b61000/0x0/0x1bfc00000, data 0x4c5dd0f/0x4e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454238208 unmapped: 65708032 heap: 519946240 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef9000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552ef9000 session 0x55b5527c6000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b555372f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef9000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552ef9000 session 0x55b5534a4d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b5533efe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:59.237072+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b554edfa40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b558124400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b558124400 session 0x55b5553d3c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b554abb2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef9000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552ef9000 session 0x55b552ee4f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b5550e6f00
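
The bursts of handle_auth_request / ms_handle_reset above cycle through a small set of connection pointers (0x55b552ef9000, 0x55b552b37800, ...), so pairing the two message types by pointer shows each incoming challenge being followed by a session reset on the same connection. An illustrative pairing sketch; the addresses are heap pointers and only comparable within one process lifetime:

    import re
    from collections import defaultdict

    CHALLENGE = re.compile(r'handle_auth_request added challenge on (0x[0-9a-f]+)')
    RESET = re.compile(r'ms_handle_reset con (0x[0-9a-f]+) session (0x[0-9a-f]+)')

    def pair_auth_events(lines):
        """Per connection pointer: challenge count and the sessions reset on it."""
        conns = defaultdict(lambda: {'challenges': 0, 'sessions': []})
        for line in lines:
            if (m := CHALLENGE.search(line)):
                conns[m[1]]['challenges'] += 1
            elif (m := RESET.search(line)):
                conns[m[1]]['sessions'].append(m[2])
        return dict(conns)
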
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455335936 unmapped: 68288512 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:00.237328+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455335936 unmapped: 68288512 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5063151 data_alloc: 251658240 data_used: 47853568
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:01.237612+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455335936 unmapped: 68288512 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:02.237847+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a01d9000/0x0/0x1bfc00000, data 0x55e3d81/0x5805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455335936 unmapped: 68288512 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:03.238073+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.529541016s of 10.752375603s, submitted: 36
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455335936 unmapped: 68288512 heap: 523624448 old mem: 2845415833 new mem: 2845415833
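
Every prioritycache tune_memory sample in this capture satisfies mapped + unmapped = heap (here 455,335,936 + 68,288,512 = 523,624,448), i.e. the three figures are the allocator heap split into resident and released-to-OS pages, measured against the 4 GiB osd_memory_target; "old mem"/"new mem" appear to be the derived aggregate cache budget, unchanged throughout. A checker built on that reading (an interpretation of this output, not documented format):

    import re

    TUNE = re.compile(r'tune_memory target: (\d+) mapped: (\d+) unmapped: (\d+) '
                      r'heap: (\d+) old mem: (\d+) new mem: (\d+)')

    def check_tune_memory(line):
        m = TUNE.search(line)
        if not m:
            return None
        target, mapped, unmapped, heap, old, new = map(int, m.groups())
        return {'heap_consistent': mapped + unmapped == heap,  # True for every sample here
                'headroom_bytes': target - mapped,
                'budget_changed': old != new}
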
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:04.238190+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455335936 unmapped: 68288512 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:05.238311+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455335936 unmapped: 68288512 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5062291 data_alloc: 251658240 data_used: 47853568
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:06.238491+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b5533681e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455335936 unmapped: 68288512 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:07.238622+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557b69400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e40400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455344128 unmapped: 68280320 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:08.238777+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a01d7000/0x0/0x1bfc00000, data 0x55e4d81/0x5806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455344128 unmapped: 68280320 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:09.238902+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457637888 unmapped: 65986560 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:10.239019+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554e0cc00 session 0x55b5529d01e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5546bfc00 session 0x55b5529d0d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457637888 unmapped: 65986560 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4915182 data_alloc: 251658240 data_used: 47214592
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b554edfe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:11.239217+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453935104 unmapped: 69689344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a11b1000/0x0/0x1bfc00000, data 0x42cdcec/0x44ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:12.239389+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453935104 unmapped: 69689344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:13.239585+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453935104 unmapped: 69689344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:14.239705+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.413508415s of 10.752321243s, submitted: 65
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453935104 unmapped: 69689344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:15.239836+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453935104 unmapped: 69689344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4913049 data_alloc: 251658240 data_used: 47210496
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:16.239991+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453935104 unmapped: 69689344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:17.240160+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453935104 unmapped: 69689344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a11b1000/0x0/0x1bfc00000, data 0x42cdcec/0x44ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:18.240353+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453935104 unmapped: 69689344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:19.240502+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454090752 unmapped: 69533696 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:20.240659+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455606272 unmapped: 68018176 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e41400 session 0x55b55506da40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b555a1e000 session 0x55b554eded20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4982683 data_alloc: 251658240 data_used: 47812608
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:21.240849+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef9000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552ef9000 session 0x55b5538825a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453263360 unmapped: 70361088 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:22.240986+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a13a8000/0x0/0x1bfc00000, data 0x410bc67/0x4328000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453263360 unmapped: 70361088 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:23.241218+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453263360 unmapped: 70361088 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:24.241373+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453263360 unmapped: 70361088 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a13a8000/0x0/0x1bfc00000, data 0x410bc67/0x4328000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:25.241541+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453263360 unmapped: 70361088 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4898983 data_alloc: 234881024 data_used: 45297664
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:26.241710+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.766187668s of 12.246191025s, submitted: 104
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453263360 unmapped: 70361088 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:27.241843+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453263360 unmapped: 70361088 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:28.242009+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453263360 unmapped: 70361088 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:29.242147+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a13a8000/0x0/0x1bfc00000, data 0x410bc67/0x4328000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453263360 unmapped: 70361088 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:30.242278+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452263936 unmapped: 71360512 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4893243 data_alloc: 234881024 data_used: 45314048
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:31.242477+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452263936 unmapped: 71360512 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:32.242678+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b557b69400 session 0x55b5553d30e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452263936 unmapped: 71360512 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e40400 session 0x55b554f22b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:33.242833+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef9000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a16b5000/0x0/0x1bfc00000, data 0x410cc67/0x4329000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450387968 unmapped: 73236480 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552ef9000 session 0x55b555373e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:34.243000+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b5559434a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bfc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5546bfc00 session 0x55b552ee4000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b555942d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef9000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552ef9000 session 0x55b5558fe000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557b69400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b557b69400 session 0x55b55506c5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e40400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e40400 session 0x55b5558ff2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450560000 unmapped: 73064448 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555a1e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b555a1e000 session 0x55b554edfe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b5533681e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef9000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552ef9000 session 0x55b554edfa40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:35.243155+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450560000 unmapped: 73064448 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4744348 data_alloc: 234881024 data_used: 35782656
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:36.243292+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a2154000/0x0/0x1bfc00000, data 0x366ec05/0x388a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450560000 unmapped: 73064448 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557b69400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.842709541s of 10.721179008s, submitted: 68
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:37.243472+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b557b69400 session 0x55b5527c6000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e40400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451624960 unmapped: 71999488 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:38.243690+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a2153000/0x0/0x1bfc00000, data 0x366ec28/0x388b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451624960 unmapped: 71999488 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a2153000/0x0/0x1bfc00000, data 0x366ec28/0x388b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:39.243878+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:40.244125+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4791890 data_alloc: 234881024 data_used: 41906176
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:41.244305+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:42.244395+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:43.244660+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a2153000/0x0/0x1bfc00000, data 0x366ec28/0x388b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:44.244777+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:45.244919+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4791890 data_alloc: 234881024 data_used: 41906176
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:46.245091+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:47.245284+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:48.245384+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:49.245527+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a2153000/0x0/0x1bfc00000, data 0x366ec28/0x388b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:50.245769+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.269830704s of 13.017261505s, submitted: 11
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452960256 unmapped: 70664192 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4846532 data_alloc: 234881024 data_used: 42459136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:51.246043+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 69419008 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b553426b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554fba400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b554fba400 session 0x55b5550e61e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b55336e5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef9000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552ef9000 session 0x55b55506da40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b5553d2780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:52.246194+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 69419008 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:53.246382+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 69419008 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:54.246533+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1051000/0x0/0x1bfc00000, data 0x4762c28/0x497f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 69419008 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:55.246720+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 69419008 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4932630 data_alloc: 234881024 data_used: 42356736
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:56.246862+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 69419008 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:57.247045+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 69419008 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:58.247186+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454205440 unmapped: 69419008 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557b69400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:59.247368+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b557b69400 session 0x55b5533f0960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453943296 unmapped: 69681152 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:00.247546+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a103a000/0x0/0x1bfc00000, data 0x4786c4b/0x49a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453943296 unmapped: 69681152 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4982044 data_alloc: 251658240 data_used: 49934336
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:01.247722+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455458816 unmapped: 68165632 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.456853867s of 11.784238815s, submitted: 101
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e40400 session 0x55b55336fc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e41400 session 0x55b555942780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:02.247862+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b55506c960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:03.248040+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1cf6000/0x0/0x1bfc00000, data 0x3acbc3b/0x3ce8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:04.248250+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:05.248402+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4853648 data_alloc: 234881024 data_used: 45293568
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:06.248786+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1cf6000/0x0/0x1bfc00000, data 0x3acbc18/0x3ce7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:07.248914+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b556881400 session 0x55b5550e70e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b555389c00 session 0x55b55336e3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:08.249065+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef9000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552ef9000 session 0x55b55506c000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:09.249252+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:10.249378+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4712564 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:11.249628+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:12.249783+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a273f000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:13.249951+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:14.250085+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:15.250253+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a273f000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:16.250509+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4712564 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:17.250687+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:18.250843+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a273f000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:19.251037+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a273f000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:20.251225+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:21.253320+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4712564 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a273f000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:22.253461+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:23.253593+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:24.253836+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a273f000/0x0/0x1bfc00000, data 0x3083bf5/0x329e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:25.253977+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:26.254090+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4712564 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451567616 unmapped: 72056832 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:27.254220+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.185047150s of 25.484613419s, submitted: 95
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b555943c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b555389c00 session 0x55b554f23c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b556881400 session 0x55b554f230e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e41400 session 0x55b554f22f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ee800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5535ee800 session 0x55b554f22d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451575808 unmapped: 72048640 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:28.254472+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451575808 unmapped: 72048640 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a22f3000/0x0/0x1bfc00000, data 0x34d0bf5/0x36eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:29.254602+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451575808 unmapped: 72048640 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:30.254777+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451575808 unmapped: 72048640 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:31.255003+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4746594 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a22f3000/0x0/0x1bfc00000, data 0x34d0bf5/0x36eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451575808 unmapped: 72048640 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:32.255296+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b554abba40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451575808 unmapped: 72048640 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b555389c00 session 0x55b554abbc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:33.255584+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b556881400 session 0x55b554abad20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451575808 unmapped: 72048640 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e41400 session 0x55b554abab40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:34.255761+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557b69400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55ab1d400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451575808 unmapped: 72048640 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:35.255976+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552c9b400 session 0x55b5558fef00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b5558ff0e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b555389c00 session 0x55b553427680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b556881400 session 0x55b5529270e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451584000 unmapped: 72040448 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b559e41400 session 0x55b555943e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:36.256080+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5582a5c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b5582a5c00 session 0x55b555373a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b55348a780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4827626 data_alloc: 234881024 data_used: 40239104
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b555389c00 session 0x55b55336f860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b556881400 session 0x55b5534a54a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a22f3000/0x0/0x1bfc00000, data 0x34d0bf5/0x36eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451592192 unmapped: 72032256 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b557b69400 session 0x55b5533692c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b55ab1d400 session 0x55b5553732c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:37.256247+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a1e07000/0x0/0x1bfc00000, data 0x39bbc05/0x3bd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55ab1d400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.910137177s of 10.008732796s, submitted: 21
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b55ab1d400 session 0x55b554edeb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446824448 unmapped: 76800000 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:38.256483+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446824448 unmapped: 76800000 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:39.256669+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446824448 unmapped: 76800000 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:40.256798+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446824448 unmapped: 76800000 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:41.257057+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4764586 data_alloc: 234881024 data_used: 35778560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a21d1000/0x0/0x1bfc00000, data 0x356ec05/0x378a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446824448 unmapped: 76800000 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:42.257282+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a21d1000/0x0/0x1bfc00000, data 0x356ec05/0x378a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446824448 unmapped: 76800000 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b5534ae780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:43.257434+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a21d1000/0x0/0x1bfc00000, data 0x356ec05/0x378a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446840832 unmapped: 76783616 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:44.257600+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446849024 unmapped: 76775424 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:45.257810+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446857216 unmapped: 76767232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:46.258108+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4803099 data_alloc: 234881024 data_used: 40640512
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446857216 unmapped: 76767232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:47.258313+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446857216 unmapped: 76767232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:48.258442+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446857216 unmapped: 76767232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:49.258582+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a2230000/0x0/0x1bfc00000, data 0x3592c05/0x37ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446857216 unmapped: 76767232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:50.258723+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446857216 unmapped: 76767232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:51.258877+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4803099 data_alloc: 234881024 data_used: 40640512
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446857216 unmapped: 76767232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:52.259022+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446857216 unmapped: 76767232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:53.259164+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446857216 unmapped: 76767232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:54.259304+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a2230000/0x0/0x1bfc00000, data 0x3592c05/0x37ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446857216 unmapped: 76767232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:55.259444+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a2230000/0x0/0x1bfc00000, data 0x3592c05/0x37ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 446857216 unmapped: 76767232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:56.259612+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.699190140s of 18.757667542s, submitted: 17
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4866465 data_alloc: 234881024 data_used: 40673280
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 453533696 unmapped: 70090752 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:57.259735+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0e47000/0x0/0x1bfc00000, data 0x4563c05/0x477f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452501504 unmapped: 71122944 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:58.259935+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452501504 unmapped: 71122944 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:59.260097+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0dab000/0x0/0x1bfc00000, data 0x45ffc05/0x481b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452501504 unmapped: 71122944 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:00.260261+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452501504 unmapped: 71122944 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:01.260492+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4947189 data_alloc: 234881024 data_used: 41816064
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452501504 unmapped: 71122944 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:02.260713+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451837952 unmapped: 71786496 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:03.260850+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451837952 unmapped: 71786496 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:04.260993+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0d92000/0x0/0x1bfc00000, data 0x4620c05/0x483c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451837952 unmapped: 71786496 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:05.261178+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451837952 unmapped: 71786496 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:06.261330+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4939641 data_alloc: 234881024 data_used: 41816064
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451837952 unmapped: 71786496 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:07.261475+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 71778304 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:08.261657+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.517356873s of 12.858249664s, submitted: 130
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 71778304 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:09.261780+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 71778304 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:10.261963+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0d84000/0x0/0x1bfc00000, data 0x462ec05/0x484a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 71778304 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:11.262181+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4939589 data_alloc: 234881024 data_used: 41816064
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 71778304 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:12.262345+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 71778304 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:13.262596+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451846144 unmapped: 71778304 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:14.262783+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 71761920 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:15.262941+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 71761920 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:16.263175+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4939969 data_alloc: 218103808 data_used: 41816064
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0d81000/0x0/0x1bfc00000, data 0x4631c05/0x484d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 71761920 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:17.263472+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 71761920 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:18.263638+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 71761920 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:19.263841+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0d81000/0x0/0x1bfc00000, data 0x4631c05/0x484d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 71761920 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:20.264072+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.593082428s of 11.612238884s, submitted: 5
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 71761920 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:21.264257+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4941497 data_alloc: 218103808 data_used: 41836544
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0d73000/0x0/0x1bfc00000, data 0x463fc05/0x485b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451862528 unmapped: 71761920 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:22.264517+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0d73000/0x0/0x1bfc00000, data 0x463fc05/0x485b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b555389c00 session 0x55b552927a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b556881400 session 0x55b554abba40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 71753728 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:23.264650+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 71753728 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:24.264825+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 71753728 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:25.265032+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0d73000/0x0/0x1bfc00000, data 0x463fc05/0x485b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 71753728 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:26.265252+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4941365 data_alloc: 218103808 data_used: 41836544
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0d73000/0x0/0x1bfc00000, data 0x463fc05/0x485b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557b69400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b557b69400 session 0x55b553883a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 heartbeat osd_stat(store_statfs(0x1a0d73000/0x0/0x1bfc00000, data 0x463fc05/0x485b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451870720 unmapped: 71753728 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:27.265444+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 ms_handle_reset con 0x55b552b37800 session 0x55b5559e2000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b555389c00 session 0x55b5559e21e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b556881400 session 0x55b5559e32c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55ab1d400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b55ab1d400 session 0x55b5559e3680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b559e41400 session 0x55b5534ae960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b552b37800 session 0x55b5553721e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b555389c00 session 0x55b55463af00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b556881400 session 0x55b5533681e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55ab1d400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b55ab1d400 session 0x55b554f22d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 71745536 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:28.265601+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5539d1000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b5539d1000 session 0x55b55392af00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 71745536 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:29.265742+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0cec000/0x0/0x1bfc00000, data 0x46c48d1/0x48e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451878912 unmapped: 71745536 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:30.265872+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451887104 unmapped: 71737344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:31.266031+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4952289 data_alloc: 218103808 data_used: 41930752
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451887104 unmapped: 71737344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:32.266302+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451887104 unmapped: 71737344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:33.266691+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451887104 unmapped: 71737344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:34.266845+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b556881400 session 0x55b5553d3e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451887104 unmapped: 71737344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:35.267051+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 heartbeat osd_stat(store_statfs(0x1a0cec000/0x0/0x1bfc00000, data 0x46c48d1/0x48e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55ab1d400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b55ab1d400 session 0x55b55395c000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b5590eb000 session 0x55b552ee4780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561043400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451887104 unmapped: 71737344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:36.267205+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e6000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b5572e6000 session 0x55b555942d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55722e800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.622607231s of 15.682995796s, submitted: 18
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b55722e800 session 0x55b552ee4000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4954724 data_alloc: 218103808 data_used: 41930752
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e6000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451887104 unmapped: 71737344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:37.267537+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 ms_handle_reset con 0x55b561043400 session 0x55b5559d4000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451895296 unmapped: 71729152 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:38.267663+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0ce8000/0x0/0x1bfc00000, data 0x46c6624/0x48e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451911680 unmapped: 71712768 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:39.267805+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451928064 unmapped: 71696384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:40.267983+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451936256 unmapped: 71688192 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:41.268178+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4961954 data_alloc: 218103808 data_used: 42487808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:42.268352+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451969024 unmapped: 71655424 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0ce8000/0x0/0x1bfc00000, data 0x46c6624/0x48e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:43.268501+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451969024 unmapped: 71655424 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 415 heartbeat osd_stat(store_statfs(0x1a0ce8000/0x0/0x1bfc00000, data 0x46c6624/0x48e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:44.268695+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451969024 unmapped: 71655424 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:45.268831+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 451977216 unmapped: 71647232 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:46.268962+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452018176 unmapped: 71606272 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4980368 data_alloc: 218103808 data_used: 43929600
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:47.269144+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452018176 unmapped: 71606272 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.453063011s of 11.112067223s, submitted: 227
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:48.269326+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452018176 unmapped: 71606272 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:49.269529+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0ce5000/0x0/0x1bfc00000, data 0x46c81d6/0x48e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452034560 unmapped: 71589888 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:50.269744+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452894720 unmapped: 70729728 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:51.269946+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452911104 unmapped: 70713344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5008500 data_alloc: 218103808 data_used: 43941888
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:52.270104+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452911104 unmapped: 70713344 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b5559423c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555389c00 session 0x55b554edfe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0a6b000/0x0/0x1bfc00000, data 0x49421d6/0x4b63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5590eb000 session 0x55b5559e3c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:53.270280+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452935680 unmapped: 70688768 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:54.270532+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452935680 unmapped: 70688768 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:55.270742+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452935680 unmapped: 70688768 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:56.270933+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452943872 unmapped: 70680576 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4777824 data_alloc: 218103808 data_used: 36343808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a202b000/0x0/0x1bfc00000, data 0x33831c6/0x35a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:57.271089+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452943872 unmapped: 70680576 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:58.271287+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452943872 unmapped: 70680576 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:59.271504+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452943872 unmapped: 70680576 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:00.271762+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452943872 unmapped: 70680576 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:01.271960+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452943872 unmapped: 70680576 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4777824 data_alloc: 218103808 data_used: 36343808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:02.272121+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452943872 unmapped: 70680576 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a202b000/0x0/0x1bfc00000, data 0x33831c6/0x35a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.782125473s of 15.513628006s, submitted: 169
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:03.272275+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452952064 unmapped: 70672384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:04.272493+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452952064 unmapped: 70672384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2029000/0x0/0x1bfc00000, data 0x33851c6/0x35a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:05.272663+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452952064 unmapped: 70672384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b556881400 session 0x55b554f22b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5572e6000 session 0x55b5533f0000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:06.272817+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452952064 unmapped: 70672384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4778056 data_alloc: 218103808 data_used: 36343808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:07.273010+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452952064 unmapped: 70672384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:08.273157+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452952064 unmapped: 70672384 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b556881400 session 0x55b555942b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:09.273280+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450560000 unmapped: 73064448 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:10.273504+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450560000 unmapped: 73064448 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:11.273747+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 73056256 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4744485 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:12.273905+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 73056256 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:13.274050+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 73056256 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:14.274218+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 73056256 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:15.274568+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 73056256 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:16.274815+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 73056256 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4744485 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:17.275016+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 73056256 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:18.275269+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450568192 unmapped: 73056256 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:19.275462+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 73048064 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:20.275670+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 73048064 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:21.275877+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 73048064 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4744485 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:22.276045+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 73048064 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:23.276226+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 73048064 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:24.276780+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 73048064 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:25.276933+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 73048064 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:26.277181+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450576384 unmapped: 73048064 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4744485 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:27.277383+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73039872 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:28.277553+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73039872 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:29.277737+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73039872 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:30.277926+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73039872 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:31.278109+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73039872 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4744485 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:32.278333+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73039872 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:33.278470+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73039872 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:34.278644+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450584576 unmapped: 73039872 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:35.278804+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450609152 unmapped: 73015296 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:36.278965+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450609152 unmapped: 73015296 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4744485 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:37.279096+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450609152 unmapped: 73015296 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.785320282s of 34.854682922s, submitted: 24
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b5539125a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555389c00 session 0x55b55348e780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5590eb000 session 0x55b554f230e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:38.279234+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450658304 unmapped: 72966144 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b5527c7680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555389c00 session 0x55b5553730e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:39.279460+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1e8f000/0x0/0x1bfc00000, data 0x35201f5/0x373f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450658304 unmapped: 72966144 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:40.279658+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450658304 unmapped: 72966144 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:41.279903+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450666496 unmapped: 72957952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4783448 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:42.280067+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450666496 unmapped: 72957952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:43.280201+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 72949760 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1e8f000/0x0/0x1bfc00000, data 0x35201f5/0x373f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:44.280399+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 72949760 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1e8f000/0x0/0x1bfc00000, data 0x35201f5/0x373f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:45.280672+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 72949760 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:46.280860+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 72949760 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4783448 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1e8f000/0x0/0x1bfc00000, data 0x35201f5/0x373f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:47.281003+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 72949760 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:48.281156+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 72949760 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:49.281312+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 72949760 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1e8f000/0x0/0x1bfc00000, data 0x35201f5/0x373f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:50.281970+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450674688 unmapped: 72949760 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:51.282587+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450682880 unmapped: 72941568 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4783448 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.212303162s of 14.301129341s, submitted: 31
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b556881400 session 0x55b55395d680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:52.282727+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450691072 unmapped: 72933376 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e6000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:53.282959+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450691072 unmapped: 72933376 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:54.283249+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:55.283510+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1e6b000/0x0/0x1bfc00000, data 0x35441f5/0x3763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:56.283735+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4813737 data_alloc: 218103808 data_used: 39481344
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1e6b000/0x0/0x1bfc00000, data 0x35441f5/0x3763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:57.283862+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:58.283986+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:59.284220+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:00.284383+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:01.284642+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4813737 data_alloc: 218103808 data_used: 39481344
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:02.284816+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1e6b000/0x0/0x1bfc00000, data 0x35441f5/0x3763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:03.285009+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1e6b000/0x0/0x1bfc00000, data 0x35441f5/0x3763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:04.285135+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.924534798s of 12.951892853s, submitted: 7
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:05.285273+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 450699264 unmapped: 72925184 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a17ae000/0x0/0x1bfc00000, data 0x3c011f5/0x3e20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,15])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:06.285405+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4871899 data_alloc: 218103808 data_used: 39751680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:07.285549+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:08.285714+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:09.285853+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:10.286034+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:11.286300+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4875689 data_alloc: 218103808 data_used: 40005632
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a178d000/0x0/0x1bfc00000, data 0x3c211f5/0x3e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:12.286510+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:13.286726+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a178d000/0x0/0x1bfc00000, data 0x3c211f5/0x3e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:14.286864+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a178b000/0x0/0x1bfc00000, data 0x3c221f5/0x3e41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:15.287009+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:16.287182+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a178b000/0x0/0x1bfc00000, data 0x3c221f5/0x3e41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.207130432s of 11.716340065s, submitted: 45
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4875925 data_alloc: 218103808 data_used: 40005632
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561043400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b561043400 session 0x55b5522f3860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55ab1d400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55ab1d400 session 0x55b5553d3a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b552926b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:17.287389+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555389c00 session 0x55b5553d2d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556881400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452714496 unmapped: 70909952 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5572e6000 session 0x55b553404b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:18.287640+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5590eb000 session 0x55b55395d2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b556881400 session 0x55b55336eb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b553427860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555389c00 session 0x55b5534e41e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e6000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452755456 unmapped: 70868992 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5572e6000 session 0x55b5533efc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5590eb000 session 0x55b5534ae1e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:19.289256+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0d3c000/0x0/0x1bfc00000, data 0x4672205/0x4892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452755456 unmapped: 70868992 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0d3c000/0x0/0x1bfc00000, data 0x4672205/0x4892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:20.289486+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452755456 unmapped: 70868992 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:21.289745+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452755456 unmapped: 70868992 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4958283 data_alloc: 218103808 data_used: 40009728
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:22.290164+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452755456 unmapped: 70868992 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561043400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b561043400 session 0x55b55392da40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:23.290310+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0d3c000/0x0/0x1bfc00000, data 0x4672205/0x4892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 70860800 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b5534ae960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:24.290475+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 70860800 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555389c00 session 0x55b5529d1a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e6000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5590eb000 session 0x55b5534054a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:25.290694+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452763648 unmapped: 70860800 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5572e6000 session 0x55b5529d0d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:26.290934+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bd800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9d800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452919296 unmapped: 70705152 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4962572 data_alloc: 218103808 data_used: 40017920
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.051296234s of 10.221862793s, submitted: 27
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:27.291131+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 452919296 unmapped: 70705152 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:28.291279+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454533120 unmapped: 69091328 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0d18000/0x0/0x1bfc00000, data 0x4696205/0x48b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0d18000/0x0/0x1bfc00000, data 0x4696205/0x48b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:29.291529+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454533120 unmapped: 69091328 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:30.291750+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454533120 unmapped: 69091328 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:31.292006+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454541312 unmapped: 69083136 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5038732 data_alloc: 234881024 data_used: 50692096
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0d18000/0x0/0x1bfc00000, data 0x4696205/0x48b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:32.292202+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0d18000/0x0/0x1bfc00000, data 0x4696205/0x48b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454541312 unmapped: 69083136 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:33.292393+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0d18000/0x0/0x1bfc00000, data 0x4696205/0x48b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454541312 unmapped: 69083136 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:34.292627+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454541312 unmapped: 69083136 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:35.292806+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454541312 unmapped: 69083136 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0d18000/0x0/0x1bfc00000, data 0x4696205/0x48b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:36.292963+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454541312 unmapped: 69083136 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5038732 data_alloc: 234881024 data_used: 50692096
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:37.293181+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454541312 unmapped: 69083136 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:38.293360+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0d18000/0x0/0x1bfc00000, data 0x4696205/0x48b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454541312 unmapped: 69083136 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:39.293572+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454549504 unmapped: 69074944 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.885848045s of 12.885848999s, submitted: 0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:40.293772+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0d18000/0x0/0x1bfc00000, data 0x4696205/0x48b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 454549504 unmapped: 69074944 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:41.293946+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 455491584 unmapped: 68132864 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5078544 data_alloc: 234881024 data_used: 50720768
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:42.294110+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456163328 unmapped: 67461120 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:43.294321+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457244672 unmapped: 66379776 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b558125c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b558125c00 session 0x55b5550e6780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b553405a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555389c00 session 0x55b5559e21e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e6000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5572e6000 session 0x55b5534e4b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a062f000/0x0/0x1bfc00000, data 0x4d77205/0x4f97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,1,0,0,2])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:44.294471+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457547776 unmapped: 66076672 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:45.294621+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457965568 unmapped: 65658880 heap: 523624448 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:46.294746+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461135872 unmapped: 66691072 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5175941 data_alloc: 234881024 data_used: 52535296
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:47.294919+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458039296 unmapped: 69787648 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:48.295056+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5590eb000 session 0x55b555943680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b553427680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b5533681e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458047488 unmapped: 69779456 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:49.295197+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x19ff83000/0x0/0x1bfc00000, data 0x542b205/0x564b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b5534b83c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555389c00 session 0x55b5529fa000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458047488 unmapped: 69779456 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x19ff7d000/0x0/0x1bfc00000, data 0x5431205/0x5651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:50.295350+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 1.530821681s of 10.320066452s, submitted: 90
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x19ff7d000/0x0/0x1bfc00000, data 0x5431205/0x5651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458047488 unmapped: 69779456 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5546bd800 session 0x55b55463ba40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:51.295590+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552c9d800 session 0x55b55392c780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 69763072 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5170643 data_alloc: 234881024 data_used: 52523008
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:52.295715+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 69763072 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:53.295889+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 69730304 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:54.296046+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b55463bc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0ae9000/0x0/0x1bfc00000, data 0x48c4228/0x4ae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458096640 unmapped: 69730304 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bd800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:55.296191+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 69722112 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:56.296360+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 69189632 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5093070 data_alloc: 234881024 data_used: 53641216
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:57.296529+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0aea000/0x0/0x1bfc00000, data 0x48c41c6/0x4ae4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 69189632 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:58.296679+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0aea000/0x0/0x1bfc00000, data 0x48c41c6/0x4ae4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 69189632 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0aea000/0x0/0x1bfc00000, data 0x48c41c6/0x4ae4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:59.296907+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 69189632 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:00.297177+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 69189632 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:01.297397+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 69189632 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5093246 data_alloc: 234881024 data_used: 53641216
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:02.297657+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 69189632 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0aea000/0x0/0x1bfc00000, data 0x48c41c6/0x4ae4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,2])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:03.297846+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 69189632 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:04.298040+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0aea000/0x0/0x1bfc00000, data 0x48c41c6/0x4ae4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,2])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 69189632 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:05.298487+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458637312 unmapped: 69189632 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:06.298627+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.120208740s, txc = 0x55b55279c000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 10.120072365s
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 10.120072365s
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 0.576829076s of 16.125360489s, submitted: 47
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.509048462s, txc = 0x55b554619500
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 67354624 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5138152 data_alloc: 234881024 data_used: 53579776
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:07.298785+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464027648 unmapped: 63799296 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:08.298917+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461299712 unmapped: 66527232 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:09.299100+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b553389680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461783040 unmapped: 66043904 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a03f9000/0x0/0x1bfc00000, data 0x4fb51c6/0x51d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:10.299252+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461791232 unmapped: 66035712 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:11.299584+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461791232 unmapped: 66035712 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5165876 data_alloc: 234881024 data_used: 53792768
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:12.299796+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a013e000/0x0/0x1bfc00000, data 0x52681c6/0x5488000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461119488 unmapped: 66707456 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:13.299966+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462168064 unmapped: 65658880 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:14.300160+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a013c000/0x0/0x1bfc00000, data 0x52711c6/0x5491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462168064 unmapped: 65658880 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:15.300494+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462176256 unmapped: 65650688 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:16.300683+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a013b000/0x0/0x1bfc00000, data 0x52731c6/0x5493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462176256 unmapped: 65650688 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a013b000/0x0/0x1bfc00000, data 0x52731c6/0x5493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5166212 data_alloc: 234881024 data_used: 53854208
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:17.300842+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462176256 unmapped: 65650688 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:18.301006+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462176256 unmapped: 65650688 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:19.301184+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462176256 unmapped: 65650688 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:20.301345+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.646385670s of 14.323198318s, submitted: 82
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462176256 unmapped: 65650688 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:21.301623+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a013b000/0x0/0x1bfc00000, data 0x52731c6/0x5493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462176256 unmapped: 65650688 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5166020 data_alloc: 234881024 data_used: 53850112
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:22.301857+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a013b000/0x0/0x1bfc00000, data 0x52731c6/0x5493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462176256 unmapped: 65650688 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:23.302000+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462176256 unmapped: 65650688 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:24.302222+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a013b000/0x0/0x1bfc00000, data 0x52731c6/0x5493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462176256 unmapped: 65650688 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:25.302373+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462176256 unmapped: 65650688 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a013b000/0x0/0x1bfc00000, data 0x52731c6/0x5493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:26.302526+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5546bd800 session 0x55b553913860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555389c00 session 0x55b55463b4a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e6000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462184448 unmapped: 65642496 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5002778 data_alloc: 234881024 data_used: 48234496
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:27.302674+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5572e6000 session 0x55b554eded20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459948032 unmapped: 67878912 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:28.302812+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1145000/0x0/0x1bfc00000, data 0x42691a3/0x4488000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459948032 unmapped: 67878912 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1145000/0x0/0x1bfc00000, data 0x42691a3/0x4488000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:29.302968+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459948032 unmapped: 67878912 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:30.303114+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459948032 unmapped: 67878912 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:31.303286+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459948032 unmapped: 67878912 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5001978 data_alloc: 234881024 data_used: 48234496
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:32.303452+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b5559d50e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.537157059s of 11.748882294s, submitted: 45
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b559e41800 session 0x55b5534052c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459948032 unmapped: 67878912 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:33.303584+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459948032 unmapped: 67878912 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:34.303752+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1145000/0x0/0x1bfc00000, data 0x42691a3/0x4488000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456736768 unmapped: 71090176 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b552ee4780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:35.303963+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456744960 unmapped: 71081984 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2327000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:36.304162+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456744960 unmapped: 71081984 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4777002 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:37.304484+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456744960 unmapped: 71081984 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:38.304661+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2327000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456744960 unmapped: 71081984 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:39.304827+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456744960 unmapped: 71081984 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:40.305032+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456744960 unmapped: 71081984 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:41.305255+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b5559e2d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bd800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5546bd800 session 0x55b5550df680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b552ee52c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b5553d2f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458334208 unmapped: 69492736 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b5559d5860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4797934 data_alloc: 218103808 data_used: 35807232
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b559e41800 session 0x55b55506cd20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:42.305367+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555389c00 session 0x55b5559434a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b55392d680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b5553734a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456761344 unmapped: 71065600 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:43.305516+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456761344 unmapped: 71065600 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:44.305676+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a218a000/0x0/0x1bfc00000, data 0x32251a3/0x3444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456761344 unmapped: 71065600 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:45.305813+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456761344 unmapped: 71065600 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:46.306012+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a218a000/0x0/0x1bfc00000, data 0x32251a3/0x3444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456761344 unmapped: 71065600 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:47.306178+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4797934 data_alloc: 218103808 data_used: 35807232
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b55506cb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456761344 unmapped: 71065600 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b559e41800 session 0x55b55392b4a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:48.306543+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456761344 unmapped: 71065600 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:49.306794+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5590eb000 session 0x55b55336e960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.473749161s of 16.901857376s, submitted: 46
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b5559430e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456761344 unmapped: 71065600 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:50.306923+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456761344 unmapped: 71065600 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:51.307114+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456695808 unmapped: 71131136 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:52.307270+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4803347 data_alloc: 218103808 data_used: 36368384
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2189000/0x0/0x1bfc00000, data 0x32251c6/0x3445000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456695808 unmapped: 71131136 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:53.307460+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456695808 unmapped: 71131136 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:54.307693+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456695808 unmapped: 71131136 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:55.307867+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456695808 unmapped: 71131136 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:56.308091+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456695808 unmapped: 71131136 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:57.308293+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4803347 data_alloc: 218103808 data_used: 36368384
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456695808 unmapped: 71131136 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:58.308496+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2189000/0x0/0x1bfc00000, data 0x32251c6/0x3445000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2189000/0x0/0x1bfc00000, data 0x32251c6/0x3445000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456695808 unmapped: 71131136 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:59.308645+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456704000 unmapped: 71122944 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:00.308813+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456704000 unmapped: 71122944 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:01.308988+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456704000 unmapped: 71122944 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:02.309157+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4803347 data_alloc: 218103808 data_used: 36368384
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 456704000 unmapped: 71122944 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:03.309347+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.699629784s of 13.932987213s, submitted: 4
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457637888 unmapped: 70189056 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1cda000/0x0/0x1bfc00000, data 0x36d41c6/0x38f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:04.309485+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457736192 unmapped: 70090752 heap: 527826944 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1c9b000/0x0/0x1bfc00000, data 0x370c1c6/0x392c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:05.309616+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b559e41800 session 0x55b5559432c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561043000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b561043000 session 0x55b554f234a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55918a000 session 0x55b55463b0e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559dd0c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458063872 unmapped: 78159872 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b559dd0c00 session 0x55b5533ef2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b5558feb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:06.309765+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457875456 unmapped: 78348288 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:07.309908+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4907467 data_alloc: 218103808 data_used: 36626432
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457875456 unmapped: 78348288 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:08.310108+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55918a000 session 0x55b5558ff860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457875456 unmapped: 78348288 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:09.310261+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14bc000/0x0/0x1bfc00000, data 0x3ef1228/0x4112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
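[annotation] The heartbeat's store_statfs fields are hex byte counts. Reading the first triple as available / internally reserved / total (the order BlueStore's store_statfs_t printer uses, though treat the mapping as an assumption here), 0x1bfc00000 is ~7.0 GiB of raw capacity per OSD, which squares with the 21 GiB cluster total in the pgmap line further down this capture. A decoding sketch:

```python
import re

STATFS_RE = re.compile(
    r"store_statfs\((0x[0-9a-f]+)/(0x[0-9a-f]+)/(0x[0-9a-f]+), "
    r"data (0x[0-9a-f]+)/(0x[0-9a-f]+)"
)

def gib(v):
    return int(v, 16) / 2**30

def decode_statfs(line):
    """Assumed field order: available/internally_reserved/total, data stored/allocated."""
    m = STATFS_RE.search(line)
    if not m:
        return None
    avail, reserved, total, stored, allocated = (gib(g) for g in m.groups())
    return {"avail_gib": avail, "total_gib": total, "used_gib": total - avail,
            "data_stored_gib": stored, "data_allocated_gib": allocated}

sample = ("store_statfs(0x1a14bc000/0x0/0x1bfc00000, data 0x3ef1228/0x4112000, "
          "compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6)")
print(decode_statfs(sample))   # ~6.52 GiB available of ~7.00 GiB total
```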
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b559e41800 session 0x55b5550e7e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457875456 unmapped: 78348288 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:10.310404+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457875456 unmapped: 78348288 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561043000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b561043000 session 0x55b5533f0960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537bc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:11.310580+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55537bc00 session 0x55b55463a3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14bc000/0x0/0x1bfc00000, data 0x3ef1228/0x4112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458080256 unmapped: 78143488 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:12.310702+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4915811 data_alloc: 218103808 data_used: 36626432
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458080256 unmapped: 78143488 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b5559d43c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b55506c5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:13.310876+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 78200832 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:14.311059+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 78200832 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:15.311215+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 78200832 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:16.311402+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 78200832 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:17.311579+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4973891 data_alloc: 218103808 data_used: 44756992
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1496000/0x0/0x1bfc00000, data 0x3f1525b/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 78200832 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:18.311756+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 78200832 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561043000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:19.311942+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 78200832 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:20.312106+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1496000/0x0/0x1bfc00000, data 0x3f1525b/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 78200832 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:21.312296+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
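[annotation] _send_mon_message names its target with a messenger address of the form v2:IP:PORT/NONCE: v2 marks the msgr2 protocol (3300 is its conventional monitor port) and the trailing /0 is the per-process nonce. A hedged parser for that shape, nothing Ceph-specific assumed beyond the format in the line above:

```python
import re

ADDR_RE = re.compile(r"(v[12]):(\d+\.\d+\.\d+\.\d+):(\d+)/(\d+)")

def parse_entity_addr(text):
    """Split a Ceph messenger address like 'v2:192.168.122.100:3300/0'."""
    m = ADDR_RE.search(text)
    if not m:
        return None
    proto, ip, port, nonce = m.groups()
    return {"proto": proto, "ip": ip, "port": int(port), "nonce": int(nonce)}

print(parse_entity_addr("mon.compute-0 at v2:192.168.122.100:3300/0"))
```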
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 78200832 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:22.312499+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4974051 data_alloc: 218103808 data_used: 44761088
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 78200832 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:23.312706+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1496000/0x0/0x1bfc00000, data 0x3f1525b/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 458022912 unmapped: 78200832 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b559e41800 session 0x55b5534e4b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b561043000 session 0x55b5553d2960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:24.312893+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555388c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.518518448s of 21.228094101s, submitted: 101
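[annotation] The _kv_sync_thread utilization line profiles BlueStore's key-value sync thread: idle 20.5 s of a 21.2 s window with 101 batches submitted, i.e. ~3.3 % busy at ~4.8 batches/s, which is an essentially quiet OSD. The arithmetic, using the format exactly as printed:

```python
import re

KV_RE = re.compile(
    r"_kv_sync_thread utilization: idle ([\d.]+)s of ([\d.]+)s, submitted: (\d+)"
)

def kv_busy(line):
    m = KV_RE.search(line)
    idle, window, batches = float(m.group(1)), float(m.group(2)), int(m.group(3))
    busy_pct = 100.0 * (1 - idle / window)
    return busy_pct, batches / window   # % busy, batches flushed per second

line = ("bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: "
        "idle 20.518518448s of 21.228094101s, submitted: 101")
pct, rate = kv_busy(line)
print(f"kv sync thread {pct:.1f}% busy, {rate:.1f} batches/s")  # 3.3% busy, 4.8/s
```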
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555388c00 session 0x55b553389680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461045760 unmapped: 75177984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:25.313081+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1b2a000/0x0/0x1bfc00000, data 0x388224b/0x3aa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461062144 unmapped: 75161600 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:26.313235+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:27.313365+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4984507 data_alloc: 218103808 data_used: 44277760
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:28.313555+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a13c6000/0x0/0x1bfc00000, data 0x3fe6228/0x4207000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:29.313709+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:30.313908+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:31.314180+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:32.314371+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977675 data_alloc: 218103808 data_used: 44281856
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:33.314647+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:34.314826+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a13c4000/0x0/0x1bfc00000, data 0x3fe9228/0x420a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:35.315083+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:36.315259+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:37.363190+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.487427711s of 12.788675308s, submitted: 109
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977511 data_alloc: 218103808 data_used: 44281856
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b553426b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55918a000 session 0x55b55348ad20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461078528 unmapped: 75145216 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:38.363521+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457228288 unmapped: 78995456 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:39.363677+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b55336e960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457228288 unmapped: 78995456 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2302000/0x0/0x1bfc00000, data 0x30ad1b6/0x32cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:40.363828+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457228288 unmapped: 78995456 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:41.364262+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457228288 unmapped: 78995456 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:42.364782+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457228288 unmapped: 78995456 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:43.364934+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457228288 unmapped: 78995456 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:44.365130+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457228288 unmapped: 78995456 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:45.365293+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457236480 unmapped: 78987264 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:46.365447+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457236480 unmapped: 78987264 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:47.365578+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457236480 unmapped: 78987264 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:48.365760+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457244672 unmapped: 78979072 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:49.365961+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457244672 unmapped: 78979072 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:50.366107+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457244672 unmapped: 78979072 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:51.366332+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457244672 unmapped: 78979072 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:52.366503+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457244672 unmapped: 78979072 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:53.366688+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457244672 unmapped: 78979072 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:54.366904+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 78970880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:55.367126+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 78970880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:56.367278+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 78970880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:57.367487+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 78970880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:58.367631+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 78970880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:59.367770+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457261056 unmapped: 78962688 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:00.368319+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457261056 unmapped: 78962688 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:01.368517+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457261056 unmapped: 78962688 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.46366 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
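[annotation] Interleaved with the OSD noise, ceph-mon now logs audit "dispatch" entries: each records the caller (from= / entity=) and the command as a JSON array. The burst of insights, orch host ls, orch device ls, mgr dump and log last commands below suggests a diagnostics collection pass running against the cluster. A sketch that pulls the entity and command prefix back out of one such line:

```python
import json
import re

AUDIT_RE = re.compile(r"from='([^']*)' entity='([^']*)' cmd=(\[.*\]): dispatch")

def parse_audit(line):
    """Pull the caller and command out of a mon audit 'dispatch' line."""
    m = AUDIT_RE.search(line)
    if not m:
        return None
    source, entity, cmd_json = m.groups()
    cmd = json.loads(cmd_json)[0]        # the mon logs the command as a JSON array
    return entity, cmd.get("prefix"), cmd.get("target")

line = ("from='client.46366 -' entity='client.admin' "
        'cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch')
print(parse_audit(line))   # ('client.admin', 'insights', ['mon-mgr', ''])
```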
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:02.368699+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:03.368853+0000)
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.45539 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-mon[81689]: pgmap v4141: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
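[annotation] The pgmap summary is the cluster-wide rollup: 305 PGs all active+clean, 1.6 GiB of 21 GiB raw used. That agrees with the per-OSD heartbeats above (three OSDs, each reporting ~6.5 GiB available of ~7 GiB). A parsing sketch, assuming the human-readable units as printed:

```python
import re

PGMAP_RE = re.compile(
    r"pgmap v(\d+): (\d+) pgs: .*?; ([\d.]+) (\w+) data, "
    r"([\d.]+) (\w+) used, ([\d.]+) (\w+) / ([\d.]+) (\w+) avail"
)
UNIT = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def parse_pgmap(line):
    m = PGMAP_RE.search(line)
    v, pgs = int(m.group(1)), int(m.group(2))
    data = float(m.group(3)) * UNIT[m.group(4)]
    used = float(m.group(5)) * UNIT[m.group(6)]
    avail = float(m.group(7)) * UNIT[m.group(8)]
    total = float(m.group(9)) * UNIT[m.group(10)]
    return v, pgs, data, used, avail, total

line = ("pgmap v4141: 305 pgs: 305 active+clean; 120 MiB data, "
        "1.6 GiB used, 19 GiB / 21 GiB avail")
v, pgs, data, used, avail, total = parse_pgmap(line)
print(f"pgmap v{v}: {pgs} PGs, {100 * used / total:.1f}% raw used")  # ~7.6%
```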
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:04.369010+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3474173825' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:05.369191+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.37290 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.45551 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:06.369392+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1373369456' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.46402 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:07.369637+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1008488265' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1281743882' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/230021667' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:08.369957+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4289010433' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/209309539' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:09.370085+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2701106949' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:10.370274+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:11.370514+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.868698120s of 34.334308624s, submitted: 68
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:12.370723+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:13.370897+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b5553730e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:14.371041+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:15.371175+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:16.371305+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:17.371487+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555388c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555388c00 session 0x55b55348e780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b5539125a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b555942b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b5533f0000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:18.371676+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457293824 unmapped: 78929920 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55918a000 session 0x55b5559e3c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:19.371803+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b559e41800 session 0x55b5559423c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b552ee4000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b555942d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b55506cf00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:20.371955+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:21.372120+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d96000/0x0/0x1bfc00000, data 0x36191a3/0x3838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:22.372322+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4841992 data_alloc: 218103808 data_used: 35803136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d96000/0x0/0x1bfc00000, data 0x36191a3/0x3838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:23.372489+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55918a000 session 0x55b5533efc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:24.372709+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b559e41800 session 0x55b553913860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b5558ff2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:25.372904+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55918a000 session 0x55b5559430e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:26.373082+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561043000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.688621521s of 14.739874840s, submitted: 6
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b561043000 session 0x55b5538832c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d95000/0x0/0x1bfc00000, data 0x36191b3/0x3839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:27.373278+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4884950 data_alloc: 218103808 data_used: 41611264
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:28.373439+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:29.373617+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:30.373762+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:31.373917+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d95000/0x0/0x1bfc00000, data 0x36191b3/0x3839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:32.374134+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4884950 data_alloc: 218103808 data_used: 41611264
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5590eb400 session 0x55b553427a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:33.374280+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:34.374405+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d95000/0x0/0x1bfc00000, data 0x36191b3/0x3839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d95000/0x0/0x1bfc00000, data 0x36191b3/0x3839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55484a400 session 0x55b5559e3680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:35.374530+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457105408 unmapped: 79118336 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:36.374674+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457105408 unmapped: 79118336 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.754782677s of 10.905808449s, submitted: 6
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:37.374897+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 78954496 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4949594 data_alloc: 218103808 data_used: 41611264
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14f3000/0x0/0x1bfc00000, data 0x3ebc1a3/0x40db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:38.375058+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459907072 unmapped: 76316672 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14b9000/0x0/0x1bfc00000, data 0x3ef01a3/0x410f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:39.375258+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14b9000/0x0/0x1bfc00000, data 0x3ef01a3/0x410f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459915264 unmapped: 76308480 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55484a400 session 0x55b553427680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14ba000/0x0/0x1bfc00000, data 0x3ef51a3/0x4114000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [1,0,0,0,0,0,0,0,0,6])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:40.375403+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459923456 unmapped: 76300288 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:41.375628+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459931648 unmapped: 76292096 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a149e000/0x0/0x1bfc00000, data 0x3f091a3/0x4128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:42.375770+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459931648 unmapped: 76292096 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969106 data_alloc: 218103808 data_used: 42409984
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:43.375884+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459931648 unmapped: 76292096 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a149e000/0x0/0x1bfc00000, data 0x3f091a3/0x4128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b554edf680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:44.376081+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5590eb400 session 0x55b553a18d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:45.376258+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55918a000 session 0x55b5529fa000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:46.376392+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:47.376579+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4961826 data_alloc: 234881024 data_used: 42418176
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:48.376772+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561043000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14a6000/0x0/0x1bfc00000, data 0x3f091a3/0x4128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.959997177s of 11.655179024s, submitted: 77
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b561043000 session 0x55b55395c000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:49.376927+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55484a400 session 0x55b552ee52c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460455936 unmapped: 75767808 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:50.377056+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460455936 unmapped: 75767808 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0b8a000/0x0/0x1bfc00000, data 0x48251a3/0x4a44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:51.377258+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460455936 unmapped: 75767808 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:52.377407+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460464128 unmapped: 75759616 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5032630 data_alloc: 234881024 data_used: 42418176
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:53.377581+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460464128 unmapped: 75759616 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0b86000/0x0/0x1bfc00000, data 0x4826e6f/0x4a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:54.377741+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:55.377918+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:56.378055+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:57.378215+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5036100 data_alloc: 234881024 data_used: 42426368
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0b86000/0x0/0x1bfc00000, data 0x4826e6f/0x4a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:58.378344+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:59.378566+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:00.378763+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:01.378954+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b554a5fc00 session 0x55b5553721e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5590eb400 session 0x55b55392b680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55918a000 session 0x55b554f22f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b559e41c00 session 0x55b5534054a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460480512 unmapped: 75743232 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.486028671s of 12.642086029s, submitted: 20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55484a400 session 0x55b5538834a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5590eb400 session 0x55b5558fe000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b554a5fc00 session 0x55b554abb2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55918a000 session 0x55b5550e7860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b559e41c00 session 0x55b5534b90e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55484a400 session 0x55b5559e2780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b554a5fc00 session 0x55b5534b8d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:02.379091+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 74506240 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077199 data_alloc: 234881024 data_used: 42426368
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:03.379227+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 74506240 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a07a3000/0x0/0x1bfc00000, data 0x4c0ae6f/0x4e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:04.379361+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 74506240 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:05.379560+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 74506240 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5590eb400 session 0x55b55506c000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:06.379670+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55918a000 session 0x55b553427a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 74506240 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5572c2000 session 0x55b5538832c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55484a400 session 0x55b5559430e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:07.379789+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461725696 unmapped: 74498048 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5079037 data_alloc: 234881024 data_used: 42426368
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5572c2000 session 0x55b5550e61e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:08.379983+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b554a5fc00 session 0x55b553913860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461725696 unmapped: 74498048 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5590eb400 session 0x55b55506cf00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561042c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a07a2000/0x0/0x1bfc00000, data 0x4c0ae7f/0x4e2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55918a000 session 0x55b555942d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:09.380160+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461873152 unmapped: 74350592 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:10.380325+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462757888 unmapped: 73465856 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:11.380469+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 69500928 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:12.380588+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 69500928 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179075 data_alloc: 251658240 data_used: 55762944
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:13.380766+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 69500928 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55484a400 session 0x55b5559e3c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b554a5fc00 session 0x55b555942b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:14.380922+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 69500928 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.814599037s of 13.027617455s, submitted: 27
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a077d000/0x0/0x1bfc00000, data 0x4c2ee8f/0x4e51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:15.381076+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:16.381305+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:17.381449+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5572c2000 session 0x55b553883c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5174905 data_alloc: 251658240 data_used: 55758848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a07a2000/0x0/0x1bfc00000, data 0x4c0ae7f/0x4e2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:18.381634+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:19.381807+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5590eb400 session 0x55b5533f03c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555388400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:20.381990+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b555388400 session 0x55b555942000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:21.382143+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471252992 unmapped: 64970752 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55484a400 session 0x55b55348a780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:22.382357+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471252992 unmapped: 64970752 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5282655 data_alloc: 251658240 data_used: 55771136
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:23.382518+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x19f952000/0x0/0x1bfc00000, data 0x5a5ae7f/0x5c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471539712 unmapped: 64684032 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:24.382679+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 418 ms_handle_reset con 0x55b554a5fc00 session 0x55b554abbe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 66625536 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe4b000/0x0/0x1bfc00000, data 0x514fb9f/0x5372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:25.382816+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 66625536 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:26.383008+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 66625536 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:27.383179+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 66625536 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.622128487s of 13.045692444s, submitted: 141
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 418 ms_handle_reset con 0x55b5572c2000 session 0x55b553883680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5173925 data_alloc: 251658240 data_used: 50388992
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 418 ms_handle_reset con 0x55b5590eb400 session 0x55b554f225a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:28.383328+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471384064 unmapped: 64839680 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:29.383503+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471384064 unmapped: 64839680 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 418 ms_handle_reset con 0x55b55537a400 session 0x55b553405680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:30.383630+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x19fe4c000/0x0/0x1bfc00000, data 0x514fb9f/0x5372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471384064 unmapped: 64839680 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:31.383791+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 65847296 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:32.383943+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 65847296 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55484a400 session 0x55b5559434a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5178847 data_alloc: 251658240 data_used: 50532352
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b554a5fc00 session 0x55b554abb2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:33.384099+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55537a400 session 0x55b5559e21e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 470425600 unmapped: 65798144 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:34.384251+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55933e000 session 0x55b554ede960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b561042c00 session 0x55b5550ded20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 470425600 unmapped: 65798144 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:35.384435+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55484a400 session 0x55b555373680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466092032 unmapped: 70131712 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0c1c000/0x0/0x1bfc00000, data 0x437f741/0x45a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:36.384653+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466092032 unmapped: 70131712 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:37.384801+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466092032 unmapped: 70131712 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5034519 data_alloc: 234881024 data_used: 44539904
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b554b20000 session 0x55b553369860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0c1c000/0x0/0x1bfc00000, data 0x437f741/0x45a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:38.384989+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 70123520 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:39.415497+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466108416 unmapped: 70115328 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:40.415975+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0c1c000/0x0/0x1bfc00000, data 0x437f741/0x45a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466108416 unmapped: 70115328 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:41.416475+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b552b37800 session 0x55b5559421e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55391f800 session 0x55b5527c7c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466108416 unmapped: 70115328 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.848965645s of 14.181877136s, submitted: 79
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0c1c000/0x0/0x1bfc00000, data 0x437f741/0x45a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b554b20000 session 0x55b55506cb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:42.416873+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460152832 unmapped: 76070912 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4876269 data_alloc: 234881024 data_used: 37863424
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:43.417251+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460152832 unmapped: 76070912 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a9d000/0x0/0x1bfc00000, data 0x34ff731/0x3721000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:44.417570+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55537a400 session 0x55b5533694a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460152832 unmapped: 76070912 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b552b37800 session 0x55b55395cf00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:45.417935+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460152832 unmapped: 76070912 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:46.418229+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460152832 unmapped: 76070912 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55391f800 session 0x55b55463a3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a9d000/0x0/0x1bfc00000, data 0x34ff731/0x3721000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:47.418466+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55484a400 session 0x55b554f23a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4881593 data_alloc: 234881024 data_used: 37863424
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:48.418641+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:49.418782+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:50.418902+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:51.419116+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:52.419322+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4913433 data_alloc: 234881024 data_used: 41832448
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:53.419533+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:54.419772+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:55.419957+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:56.420149+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:57.420366+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4913433 data_alloc: 234881024 data_used: 41832448
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:58.420565+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:59.420737+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:00.420891+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.183296204s of 19.240226746s, submitted: 20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:01.421069+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462659584 unmapped: 73564160 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:02.421215+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462921728 unmapped: 73302016 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4986619 data_alloc: 234881024 data_used: 42168320
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:03.421377+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:04.421529+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a013c000/0x0/0x1bfc00000, data 0x3ca9741/0x3ecc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:05.421896+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:06.422054+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:07.422207+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4997317 data_alloc: 234881024 data_used: 41979904
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:08.422361+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:09.422544+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464125952 unmapped: 72097792 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:10.422712+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a013c000/0x0/0x1bfc00000, data 0x3ca9741/0x3ecc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464125952 unmapped: 72097792 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:11.422884+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.834084511s of 10.579282761s, submitted: 75
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55933e000 session 0x55b5550e7c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b5572c2000 session 0x55b5534afc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463462400 unmapped: 84836352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b552b37800 session 0x55b5527c74a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55391f800 session 0x55b5550e7680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55484a400 session 0x55b5559d4000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:12.423052+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463462400 unmapped: 84836352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5098196 data_alloc: 234881024 data_used: 41979904
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:13.423255+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x19f387000/0x0/0x1bfc00000, data 0x4a737a3/0x4c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463462400 unmapped: 84836352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:14.423395+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463462400 unmapped: 84836352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:15.423546+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463470592 unmapped: 84828160 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:16.423694+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 420 handle_osd_map epochs [420,420], i have 420, src has [1,420]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 420 ms_handle_reset con 0x55b55933e000 session 0x55b5559e2960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 420 heartbeat osd_stat(store_statfs(0x19f387000/0x0/0x1bfc00000, data 0x4a737a3/0x4c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463478784 unmapped: 84819968 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:17.424101+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b5590eb400 session 0x55b5534af4a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463486976 unmapped: 84811776 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5106645 data_alloc: 234881024 data_used: 41988096
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:18.424328+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463486976 unmapped: 84811776 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:19.424522+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b552b37800 session 0x55b5529fbe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463495168 unmapped: 84803584 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:20.424764+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463495168 unmapped: 84803584 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:21.424955+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f37f000/0x0/0x1bfc00000, data 0x4a7717c/0x4c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 81174528 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:22.425150+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467156992 unmapped: 81141760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5208026 data_alloc: 251658240 data_used: 56258560
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:23.425522+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467156992 unmapped: 81141760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:24.425817+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f37f000/0x0/0x1bfc00000, data 0x4a7717c/0x4c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467156992 unmapped: 81141760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:25.426021+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467156992 unmapped: 81141760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:26.426215+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467156992 unmapped: 81141760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.357582092s of 15.588162422s, submitted: 51
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b552b36c00 session 0x55b55348af00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b55933e000 session 0x55b5527c7e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:27.426401+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f37e000/0x0/0x1bfc00000, data 0x4a7718c/0x4ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467165184 unmapped: 81133568 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5209806 data_alloc: 251658240 data_used: 56262656
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:28.426872+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467173376 unmapped: 81125376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:29.427145+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f37e000/0x0/0x1bfc00000, data 0x4a7718c/0x4ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467173376 unmapped: 81125376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:30.427326+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467173376 unmapped: 81125376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:31.427515+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467173376 unmapped: 81125376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:32.427668+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471130112 unmapped: 77168640 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f37e000/0x0/0x1bfc00000, data 0x4a7718c/0x4ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5262878 data_alloc: 251658240 data_used: 57511936
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:33.427948+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557e64c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b557e64c00 session 0x55b5559430e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473554944 unmapped: 74743808 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bf800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:34.428078+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473620480 unmapped: 74678272 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:35.428216+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 74596352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:36.428404+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc59000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 74596352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:37.428602+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 74596352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5278240 data_alloc: 251658240 data_used: 58134528
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:38.428814+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 74596352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.711530685s of 11.976253510s, submitted: 89
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:39.429023+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc59000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:40.429255+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc61000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:41.429547+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc61000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:42.429742+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5269088 data_alloc: 251658240 data_used: 58146816
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:43.429900+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc61000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:44.430052+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc61000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:45.430197+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc61000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:46.430373+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:47.430540+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555a1ec00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473161728 unmapped: 75137024 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5282034 data_alloc: 251658240 data_used: 59121664
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:48.430701+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473194496 unmapped: 75104256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.841229439s of 10.072642326s, submitted: 3
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:49.430838+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473235456 unmapped: 75063296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:50.430989+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473432064 unmapped: 74866688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b555a1ec00 session 0x55b5550e6000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b552b36c00 session 0x55b5559d5e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b552b37800 session 0x55b555372000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557e64c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b557e64c00 session 0x55b554edf860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b55933e000 session 0x55b554652f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:51.431186+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19db93000/0x0/0x1bfc00000, data 0x50f118c/0x52eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473432064 unmapped: 74866688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:52.431527+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6605.4 total, 600.0 interval
                                           Cumulative writes: 64K writes, 252K keys, 64K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
                                           Cumulative WAL: 64K writes, 23K syncs, 2.77 writes per sync, written: 0.24 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4108 writes, 15K keys, 4108 commit groups, 1.0 writes per commit group, ingest: 16.42 MB, 0.03 MB/s
                                           Interval WAL: 4108 writes, 1697 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473432064 unmapped: 74866688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5343182 data_alloc: 251658240 data_used: 59523072
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:53.431710+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:54.431922+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d69e000/0x0/0x1bfc00000, data 0x55e618c/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:55.432121+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:56.432379+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:57.432510+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5343182 data_alloc: 251658240 data_used: 59523072
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:58.432693+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d69e000/0x0/0x1bfc00000, data 0x55e618c/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:59.432833+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:00.433071+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:01.433260+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d69e000/0x0/0x1bfc00000, data 0x55e618c/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:02.433401+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9a800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.267633438s of 13.161028862s, submitted: 30
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b552c9a800 session 0x55b55392be00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5345218 data_alloc: 251658240 data_used: 59523072
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:03.433659+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:04.433795+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475643904 unmapped: 72654848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d67a000/0x0/0x1bfc00000, data 0x560a18c/0x5804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:05.434064+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475717632 unmapped: 72581120 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:06.434192+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475717632 unmapped: 72581120 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d67a000/0x0/0x1bfc00000, data 0x560a18c/0x5804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:07.434376+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475717632 unmapped: 72581120 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5408066 data_alloc: 268435456 data_used: 64700416
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:08.434681+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 70508544 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:09.434918+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d44f000/0x0/0x1bfc00000, data 0x583518c/0x5a2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 70508544 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:10.435123+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 70508544 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:11.435467+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b5572c2800 session 0x55b553913860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b5546bf800 session 0x55b554abb0e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 70508544 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d44f000/0x0/0x1bfc00000, data 0x583518c/0x5a2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557e64c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b557e64c00 session 0x55b554652b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:12.435670+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475512832 unmapped: 72785920 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5400946 data_alloc: 268435456 data_used: 64700416
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:13.435839+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b55933e000 session 0x55b55506da40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933f400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475512832 unmapped: 72785920 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.608623505s of 11.626667023s, submitted: 10
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b55933f400 session 0x55b55506cd20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:14.436054+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475529216 unmapped: 72769536 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bf800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b5546bf800 session 0x55b555943860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b5572c2800 session 0x55b5550e72c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:15.436211+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479780864 unmapped: 68517888 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b554b20000 session 0x55b5529d05a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b55537a400 session 0x55b554edf680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:16.436785+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557e64c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b557e64c00 session 0x55b554abb4a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480083968 unmapped: 68214784 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 422 heartbeat osd_stat(store_statfs(0x19c9c0000/0x0/0x1bfc00000, data 0x6295e8c/0x64be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:17.437770+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482394112 unmapped: 65904640 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5497931 data_alloc: 268435456 data_used: 66588672
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:18.438067+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482394112 unmapped: 65904640 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bf800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b5546bf800 session 0x55b5553d23c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b554b20000 session 0x55b5550dfc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:19.439060+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:20.439254+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:21.439977+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 422 heartbeat osd_stat(store_statfs(0x19d589000/0x0/0x1bfc00000, data 0x56c4e8c/0x58ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:22.440200+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5347449 data_alloc: 251658240 data_used: 59408384
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:23.440614+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:24.440881+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.341896057s of 10.820668221s, submitted: 181
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 423 ms_handle_reset con 0x55b552b36c00 session 0x55b55463a1e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 423 ms_handle_reset con 0x55b552b37800 session 0x55b55463b4a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 423 heartbeat osd_stat(store_statfs(0x19d589000/0x0/0x1bfc00000, data 0x56c4e8c/0x58ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:25.441003+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 423 ms_handle_reset con 0x55b55537a400 session 0x55b553404000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478912512 unmapped: 69386240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:26.441153+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478912512 unmapped: 69386240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:27.441289+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478912512 unmapped: 69386240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:28.441481+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5149829 data_alloc: 251658240 data_used: 52277248
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478912512 unmapped: 69386240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:29.441663+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 423 heartbeat osd_stat(store_statfs(0x19e878000/0x0/0x1bfc00000, data 0x43dca3e/0x4606000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478912512 unmapped: 69386240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 424 ms_handle_reset con 0x55b552b36c00 session 0x55b5534a4f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:30.441887+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479961088 unmapped: 68337664 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:31.442088+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479969280 unmapped: 68329472 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:32.442619+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479969280 unmapped: 68329472 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 424 ms_handle_reset con 0x55b55391f800 session 0x55b5522f2960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 424 ms_handle_reset con 0x55b55484a400 session 0x55b5553723c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:33.442783+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5151459 data_alloc: 251658240 data_used: 52285440
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 424 heartbeat osd_stat(store_statfs(0x19e875000/0x0/0x1bfc00000, data 0x43de740/0x4608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 424 ms_handle_reset con 0x55b552b37800 session 0x55b55336fa40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:34.443579+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:35.443976+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:36.444349+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:37.444706+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f87a000/0x0/0x1bfc00000, data 0x309926d/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:38.444877+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4896597 data_alloc: 234881024 data_used: 36200448
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:39.445085+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:40.445386+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475430912 unmapped: 72867840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:41.445910+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f87a000/0x0/0x1bfc00000, data 0x309926d/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475430912 unmapped: 72867840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:42.446148+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bf800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.086919785s of 18.053037643s, submitted: 115
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b5546bf800 session 0x55b5534a5860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475430912 unmapped: 72867840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b36c00 session 0x55b5533890e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:43.446297+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4892645 data_alloc: 234881024 data_used: 36855808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475340800 unmapped: 72957952 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:44.446492+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475340800 unmapped: 72957952 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b37800 session 0x55b553388d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:45.446716+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475340800 unmapped: 72957952 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:46.447011+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fbba000/0x0/0x1bfc00000, data 0x30992df/0x32c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475340800 unmapped: 72957952 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:47.447215+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55391f800 session 0x55b55336f860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:48.447378+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4895358 data_alloc: 234881024 data_used: 36859904
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55484a400 session 0x55b5553d21e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b554b20000 session 0x55b553404b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b36c00 session 0x55b55506c000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:49.447576+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:50.447768+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:51.447981+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:52.448209+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7eb000/0x0/0x1bfc00000, data 0x346927d/0x3693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:53.448402+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925088 data_alloc: 234881024 data_used: 36855808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475373568 unmapped: 72925184 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:54.448681+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475373568 unmapped: 72925184 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:55.448881+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:56.449042+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7eb000/0x0/0x1bfc00000, data 0x346927d/0x3693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:57.449189+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:58.449368+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925088 data_alloc: 234881024 data_used: 36855808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:59.449502+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:00.449704+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7eb000/0x0/0x1bfc00000, data 0x346927d/0x3693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:01.449925+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:02.450102+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:03.450272+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925088 data_alloc: 234881024 data_used: 36855808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:04.450406+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7eb000/0x0/0x1bfc00000, data 0x346927d/0x3693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:05.450601+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:06.450734+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b37800 session 0x55b5559e30e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:07.450914+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55391f800 session 0x55b55392ba40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:08.451095+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925088 data_alloc: 234881024 data_used: 36855808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55484a400 session 0x55b5522f23c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:09.451229+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.342035294s of 26.467067719s, submitted: 28
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b5572c2800 session 0x55b553369680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:10.451402+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7ea000/0x0/0x1bfc00000, data 0x346928d/0x3694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:11.452167+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:12.452355+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:13.452519+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4954926 data_alloc: 234881024 data_used: 40792064
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:14.452669+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7ea000/0x0/0x1bfc00000, data 0x346928d/0x3694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:15.452802+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7ea000/0x0/0x1bfc00000, data 0x346928d/0x3694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,0,0,0,0,1,4])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:16.452955+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55391f800 session 0x55b55395d0e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55484a400 session 0x55b5553d2b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55933e000 session 0x55b553405680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b553873c00 session 0x55b554f23a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55579cc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55579cc00 session 0x55b5534af4a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f13b000/0x0/0x1bfc00000, data 0x3b1828d/0x3d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:17.453060+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:18.453210+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5011995 data_alloc: 234881024 data_used: 40792064
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:19.453454+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f13b000/0x0/0x1bfc00000, data 0x3b1828d/0x3d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:20.453660+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f13b000/0x0/0x1bfc00000, data 0x3b1828d/0x3d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f13b000/0x0/0x1bfc00000, data 0x3b1828d/0x3d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:21.453838+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.285288811s of 12.952371597s, submitted: 23
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:22.453953+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eeec000/0x0/0x1bfc00000, data 0x3d6728d/0x3f92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 71925760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:23.454091+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eeec000/0x0/0x1bfc00000, data 0x3d6728d/0x3f92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5032441 data_alloc: 234881024 data_used: 41152512
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 71925760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:24.454213+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 71925760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:25.454382+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b553873c00 session 0x55b5559423c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 71925760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:26.454578+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 71925760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:27.454746+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475529216 unmapped: 72769536 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:28.454855+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5089453 data_alloc: 251658240 data_used: 47894528
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19ee9d000/0x0/0x1bfc00000, data 0x3dae28d/0x3fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:29.455062+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19ee9d000/0x0/0x1bfc00000, data 0x3dae28d/0x3fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:30.455197+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:31.455487+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:32.455648+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:33.455810+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5089453 data_alloc: 251658240 data_used: 47894528
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:34.455970+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19ee9d000/0x0/0x1bfc00000, data 0x3dae28d/0x3fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19ee9d000/0x0/0x1bfc00000, data 0x3dae28d/0x3fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:35.456083+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:36.456250+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.175427437s of 14.262315750s, submitted: 32
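The "_kv_sync_thread utilization" lines such as the one above summarize how busy BlueStore's key/value sync thread was over each reporting window: here it sat idle for 14.175 s of 14.262 s while submitting 32 transactions, so the OSD is nearly quiescent. A small sketch, using the same placeholder log path, that turns each report into a busy fraction:

    import re

    KV_RE = re.compile(
        r"_kv_sync_thread utilization: idle ([\d.]+)s of ([\d.]+)s, submitted: (\d+)"
    )

    def kv_busy_fractions(path):
        """Yield (busy_fraction, submitted) per _kv_sync_thread report."""
        with open(path) as fh:
            for line in fh:
                m = KV_RE.search(line)
                if m:
                    idle = float(m.group(1))
                    total = float(m.group(2))
                    yield 1.0 - idle / total, int(m.group(3))

    # Against the figures in this window: idle 14.175s of 14.262s -> ~0.6% busy,
    # and the busiest report (idle 21.633s of 24.363s) is still only ~11% busy.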
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476291072 unmapped: 72007680 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:37.456393+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b36c00 session 0x55b554eded20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b37800 session 0x55b5526efc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eea5000/0x0/0x1bfc00000, data 0x3dae28d/0x3fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 71999488 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:38.456551+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5088621 data_alloc: 251658240 data_used: 47878144
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 71999488 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:39.456903+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55933e000 session 0x55b5553d3c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479723520 unmapped: 68575232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:40.457057+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e4a9000/0x0/0x1bfc00000, data 0x47a527d/0x49cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,11,1])
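Each heartbeat line embeds a store_statfs tuple in hex. Reading the first triple as available/reserved/total bytes (that ordering matches BlueStore's store_statfs printer, but it is an assumption, since the log line itself does not label those fields), 0x1bfc00000 is a ~7.0 GiB store with about 6.47 GiB (0x19e4a9000) still available at this point, and the "data" pair is stored versus allocated bytes. A hedged parsing sketch under those assumptions:

    import re

    STATFS_RE = re.compile(
        r"store_statfs\(0x([0-9a-f]+)/0x([0-9a-f]+)/0x([0-9a-f]+), "
        r"data 0x([0-9a-f]+)/0x([0-9a-f]+)"
    )

    def heartbeat_space(path):
        """Yield (available, total, data_stored, data_allocated) in bytes.

        Field order is assumed from BlueStore's store_statfs printer
        (available/reserved/total, then stored/allocated); the log line
        itself does not label the first triple.
        """
        with open(path) as fh:
            for line in fh:
                m = STATFS_RE.search(line)
                if m:
                    avail, _reserved, total, stored, alloc = (
                        int(g, 16) for g in m.groups()
                    )
                    yield avail, total, stored, alloc

    # 0x19e4a9000 / 0x1bfc00000 -> about 6.47 of 7.00 GiB still available here.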
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480108544 unmapped: 68190208 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:41.457266+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 68321280 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:42.457497+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:43.457654+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179248 data_alloc: 251658240 data_used: 49754112
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:44.457903+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:45.458102+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e48d000/0x0/0x1bfc00000, data 0x47bf27d/0x49e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:46.458279+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:47.458487+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:48.458647+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e48d000/0x0/0x1bfc00000, data 0x47bf27d/0x49e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179568 data_alloc: 251658240 data_used: 49762304
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:49.458878+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:50.459021+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 67231744 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:51.459270+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 67231744 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:52.459456+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 67231744 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:53.459966+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179568 data_alloc: 251658240 data_used: 49762304
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e48d000/0x0/0x1bfc00000, data 0x47bf27d/0x49e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 67231744 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:54.460143+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 67231744 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:55.460294+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 67223552 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:56.460545+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 67223552 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:57.460722+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 67223552 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:58.460859+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179888 data_alloc: 251658240 data_used: 49770496
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e48d000/0x0/0x1bfc00000, data 0x47bf27d/0x49e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481083392 unmapped: 67215360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:59.461052+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e48d000/0x0/0x1bfc00000, data 0x47bf27d/0x49e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481083392 unmapped: 67215360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:00.461223+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.632940292s of 24.362520218s, submitted: 111
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55391f800 session 0x55b5559425a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55484a400 session 0x55b5534ae1e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481083392 unmapped: 67215360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:01.461374+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b36c00 session 0x55b5550e74a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:02.461493+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:03.461614+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4985776 data_alloc: 234881024 data_used: 40955904
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:04.461744+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:05.461810+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f555000/0x0/0x1bfc00000, data 0x36ff27d/0x3929000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:06.461999+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:07.462159+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f555000/0x0/0x1bfc00000, data 0x36ff27d/0x3929000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:08.462282+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b37800 session 0x55b5550def00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4985776 data_alloc: 234881024 data_used: 40955904
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b553873c00 session 0x55b554edfa40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:09.462482+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f555000/0x0/0x1bfc00000, data 0x36ff27d/0x3929000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:10.462702+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55933e000 session 0x55b553426000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b36c00 session 0x55b5550df2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:11.462942+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.036637306s of 10.409495354s, submitted: 42
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:12.463103+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:13.463285+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992192 data_alloc: 234881024 data_used: 41013248
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:14.463468+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:15.463620+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:16.463766+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:17.463918+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:18.464072+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992192 data_alloc: 234881024 data_used: 41013248
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:19.464204+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:20.464325+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:21.464517+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:22.464635+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:23.464767+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992992 data_alloc: 234881024 data_used: 41033728
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.889412880s of 12.891948700s, submitted: 1
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:24.464888+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:25.465027+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:26.465154+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:27.465326+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:28.465504+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4997344 data_alloc: 234881024 data_used: 41099264
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:29.465717+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:30.465872+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:31.466041+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:32.466275+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:33.466376+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4997344 data_alloc: 234881024 data_used: 41099264
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:34.466492+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:35.466632+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:36.466858+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:37.467017+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55933e000 session 0x55b5550e6f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.681811333s of 13.694669724s, submitted: 15
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b55484a400 session 0x55b5539123c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:38.467168+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478707712 unmapped: 69591040 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5004238 data_alloc: 234881024 data_used: 41791488
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:39.467311+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478715904 unmapped: 69582848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e7000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556880c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b556880c00 session 0x55b55395cf00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b55395d2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55467d800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b55467d800 session 0x55b55395c960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b554f23860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5572e7000 session 0x55b5553d32c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b5522f2b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:40.467443+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478863360 unmapped: 69435392 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b55484a400 session 0x55b5522f2000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556880c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b556880c00 session 0x55b55348f0e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b553404f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b553404960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b55484a400 session 0x55b553405e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:41.467713+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478896128 unmapped: 69402624 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:42.467832+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478904320 unmapped: 69394432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:43.467925+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478904320 unmapped: 69394432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5065770 data_alloc: 234881024 data_used: 41791488
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:44.468146+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478904320 unmapped: 69394432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:45.468365+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478920704 unmapped: 69378048 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:46.468547+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478969856 unmapped: 69328896 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:47.468688+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478969856 unmapped: 69328896 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.289855003s of 10.216338158s, submitted: 213
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:48.468801+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478969856 unmapped: 69328896 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5065770 data_alloc: 234881024 data_used: 41791488
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:49.469024+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478969856 unmapped: 69328896 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:50.469155+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:51.469351+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:52.469492+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:53.469683+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5065770 data_alloc: 234881024 data_used: 41791488
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:54.469842+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:55.470039+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e7000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5572e7000 session 0x55b553426f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:56.470168+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b55484b400 session 0x55b55506cf00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:57.470314+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:58.470446+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5065770 data_alloc: 234881024 data_used: 41791488
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b5533890e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b5553d23c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:59.473120+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e7000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:00.473270+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 68321280 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:01.473473+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 68321280 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:02.473685+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 68321280 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:03.473810+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 68321280 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5110410 data_alloc: 251658240 data_used: 47935488
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.608759880s of 15.859127045s, submitted: 95
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5590eac00 session 0x55b553405e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:04.473975+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 68313088 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e43c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:05.474096+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480002048 unmapped: 68296704 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:06.474220+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:07.474393+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:08.474473+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5124223 data_alloc: 251658240 data_used: 47972352
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:09.474593+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee2a000/0x0/0x1bfc00000, data 0x3e5ffaf/0x4054000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:10.474722+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:11.474893+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:12.475054+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 64290816 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19e10d000/0x0/0x1bfc00000, data 0x4b6efaf/0x4d63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:13.475192+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484859904 unmapped: 63438848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5243865 data_alloc: 251658240 data_used: 49958912
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:14.475269+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484859904 unmapped: 63438848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:15.475435+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484859904 unmapped: 63438848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:16.475596+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484859904 unmapped: 63438848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19e0d3000/0x0/0x1bfc00000, data 0x4bb0faf/0x4da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.874275208s of 13.158001900s, submitted: 154
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:17.475774+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484990976 unmapped: 63307776 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:18.475950+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 63299584 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258465 data_alloc: 251658240 data_used: 50102272
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:19.476071+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 63299584 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:20.476200+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485007360 unmapped: 63291392 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df13000/0x0/0x1bfc00000, data 0x4d76faf/0x4f6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:21.476342+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df13000/0x0/0x1bfc00000, data 0x4d76faf/0x4f6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:22.476494+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:23.476634+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5259373 data_alloc: 251658240 data_used: 50102272
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:24.476778+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df13000/0x0/0x1bfc00000, data 0x4d76faf/0x4f6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:25.476918+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:26.477057+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:27.477217+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:28.477347+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5259921 data_alloc: 251658240 data_used: 50110464
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df12000/0x0/0x1bfc00000, data 0x4d77faf/0x4f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:29.477498+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df12000/0x0/0x1bfc00000, data 0x4d77faf/0x4f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:30.477655+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:31.477837+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:32.478011+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df12000/0x0/0x1bfc00000, data 0x4d77faf/0x4f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:33.478155+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5260561 data_alloc: 251658240 data_used: 50126848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:34.478289+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552ef8400 session 0x55b5559e2960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b559e43c00 session 0x55b55395d2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.206138611s of 17.766921997s, submitted: 5
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df12000/0x0/0x1bfc00000, data 0x4d77faf/0x4f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:35.478533+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b553388d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df12000/0x0/0x1bfc00000, data 0x4d77faf/0x4f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485023744 unmapped: 63275008 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552ef8400 session 0x55b5558fe960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b554652f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5590eac00 session 0x55b5550dfe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e43c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b559e43c00 session 0x55b5529fa000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:36.478748+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b5559d4780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552ef8400 session 0x55b5558ff2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b55392b680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5590eac00 session 0x55b552ee52c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554e0c000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b554e0c000 session 0x55b5559423c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:37.479027+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19dc7e000/0x0/0x1bfc00000, data 0x500af9c/0x51ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:38.479160+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5284477 data_alloc: 251658240 data_used: 50122752
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:39.479259+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:40.479497+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b554aba1e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:41.479671+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552ef8400 session 0x55b553404960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19de43000/0x0/0x1bfc00000, data 0x4e46f9c/0x503b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:42.479830+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:43.480022+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b5558ffe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485056512 unmapped: 63242240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5268413 data_alloc: 251658240 data_used: 50044928
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 427 ms_handle_reset con 0x55b5590eac00 session 0x55b554f230e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:44.480146+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55467e800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485081088 unmapped: 63217664 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.754717827s of 10.119092941s, submitted: 62
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 427 ms_handle_reset con 0x55b55467e800 session 0x55b55348af00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:45.480318+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 427 heartbeat osd_stat(store_statfs(0x19de48000/0x0/0x1bfc00000, data 0x4e03cbc/0x5034000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485089280 unmapped: 63209472 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 427 ms_handle_reset con 0x55b552b37800 session 0x55b55271ef00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 427 ms_handle_reset con 0x55b553873c00 session 0x55b552927a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:46.480442+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485113856 unmapped: 63184896 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 427 ms_handle_reset con 0x55b5535ef000 session 0x55b55392a960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:47.480633+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:48.480782+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5292442 data_alloc: 251658240 data_used: 52748288
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:49.480902+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19de26000/0x0/0x1bfc00000, data 0x4e27cac/0x5057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:50.481023+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:51.481201+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:52.481367+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19de23000/0x0/0x1bfc00000, data 0x4e2985e/0x505a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:53.481511+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5297240 data_alloc: 251658240 data_used: 52822016
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:54.481650+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19de23000/0x0/0x1bfc00000, data 0x4e2985e/0x505a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:55.481802+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:56.482019+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5590eac00 session 0x55b55392be00
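[editor's note] The handle_auth_request / ms_handle_reset pairs here are peers reopening connections: the messenger issues a fresh cephx challenge per incoming connection, and ms_handle_reset fires as the old session is torn down. Purely as a schematic of the challenge-response idea (this is NOT the real cephx wire format; every name below is illustrative):

    # Schematic challenge-response: server sends a random nonce per connection,
    # client proves key possession by returning a keyed MAC over it.
    import hashlib, hmac, os

    shared_key = os.urandom(32)        # stand-in for the rotating service secret

    def server_challenge() -> bytes:
        return os.urandom(8)           # "handle_auth_request added challenge on 0x..."

    def client_proof(key: bytes, challenge: bytes) -> bytes:
        return hmac.new(key, challenge, hashlib.sha256).digest()

    ch = server_challenge()
    proof = client_proof(shared_key, ch)                            # client side
    ok = hmac.compare_digest(proof, client_proof(shared_key, ch))   # server side
    print("session accepted" if ok else "ms_handle_reset")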
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557231400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:57.482217+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.247345924s of 12.328977585s, submitted: 46
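[editor's note] The _kv_sync_thread utilization line converts directly into a duty cycle: the RocksDB commit thread was idle 11.25 s of a 12.33 s window while flushing 46 batches. The derived per-batch figure below is computed here, not reported by Ceph:

    # Quick arithmetic on the _kv_sync_thread line above.
    idle, window, submitted = 11.247345924, 12.328977585, 46
    busy = window - idle
    print("busy %.1f%% of window, %.1f ms per submitted batch"
          % (100 * busy / window, 1000 * busy / submitted))
    # -> busy 8.8% of window, 23.5 ms per submitted batch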
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485531648 unmapped: 62767104 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b557231400 session 0x55b55392a5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:58.482534+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 490012672 unmapped: 58286080 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d876000/0x0/0x1bfc00000, data 0x4fc884e/0x51f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,10,2])
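[editor's note] This heartbeat is the only one in the window with a non-empty op hist. Assuming pow2_hist_t semantics (bucket i counts values with i significant bits, i.e. roughly the range [2^(i-1), 2^i); the time unit is not printed in the line, so it is left symbolic here), the trailing 10 and 2 decode as:

    # Sketch: expand the power-of-two op-age histogram from the heartbeat line.
    hist = [0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 2]
    for i, count in enumerate(hist):
        if count:
            lo = 0 if i == 0 else 2 ** (i - 1)
            print("%d ops aged in [%d, %d) units" % (count, lo, 2 ** i))
    # -> 10 ops aged in [256, 512) units
    # ->  2 ops aged in [512, 1024) units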
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5281142 data_alloc: 251658240 data_used: 47751168
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:59.482727+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485605376 unmapped: 62693376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:00.482905+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485621760 unmapped: 62676992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:01.483148+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485629952 unmapped: 62668800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:02.483290+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d865000/0x0/0x1bfc00000, data 0x4fd884e/0x5208000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:03.483441+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5279842 data_alloc: 251658240 data_used: 47960064
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:04.483576+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:05.483735+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:06.483892+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:07.484196+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:08.484356+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d865000/0x0/0x1bfc00000, data 0x4fd884e/0x5208000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5279842 data_alloc: 251658240 data_used: 47960064
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:09.484513+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.823444366s of 12.734890938s, submitted: 62
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5559e2000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b553426b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:10.484659+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485654528 unmapped: 62644224 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:11.484869+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485654528 unmapped: 62644224 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:12.485021+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485654528 unmapped: 62644224 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5558fef00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:13.485181+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e30e000/0x0/0x1bfc00000, data 0x453183e/0x4760000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5167756 data_alloc: 234881024 data_used: 45006848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:14.485303+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:15.485450+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:16.485569+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:17.485782+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:18.485954+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x450d81b/0x473b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5167756 data_alloc: 234881024 data_used: 45006848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:19.486121+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:20.486261+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x450d81b/0x473b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55484a400 session 0x55b5559d5860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5572e7000 session 0x55b55506cf00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.102567673s of 10.957017899s, submitted: 59
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5559e32c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:21.486496+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485679104 unmapped: 62619648 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:22.486631+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485679104 unmapped: 62619648 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:23.486750+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:24.486923+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:25.487093+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:26.487240+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:27.487386+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:28.487559+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:29.487678+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:30.487811+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:31.487982+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 62603264 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:32.488116+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 62603264 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:33.488272+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 62603264 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:34.488400+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 62603264 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:35.488633+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 62595072 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:36.488873+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 62595072 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:37.489056+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 62595072 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:38.489251+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 62595072 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:39.489526+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 62586880 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:40.489673+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:41.489921+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:42.490086+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:43.490225+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:44.490365+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:45.490494+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:46.490642+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:47.490822+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 62570496 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:48.491003+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 62570496 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:49.491152+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 62570496 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:50.491347+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 62570496 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:51.491551+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534a4d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b55463ba40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55484a400 session 0x55b5534e4000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 62570496 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b55271eb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.911609650s of 30.949949265s, submitted: 21
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b552927c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534af4a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b5533685a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:52.491670+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b55392c960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55484a400 session 0x55b555943c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 73064448 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:53.491796+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 73064448 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5028950 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ed22000/0x0/0x1bfc00000, data 0x3b1e81b/0x3d4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:54.491945+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 73064448 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:55.492093+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b555373a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485744640 unmapped: 73056256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534a4d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:56.492275+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485744640 unmapped: 73056256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ed22000/0x0/0x1bfc00000, data 0x3b1e81b/0x3d4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:57.492490+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485744640 unmapped: 73056256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:58.492644+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b5559e32c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b55506cf00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485892096 unmapped: 72908800 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5033528 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:59.492751+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485892096 unmapped: 72908800 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:00.492870+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:01.493027+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecfd000/0x0/0x1bfc00000, data 0x3b4282b/0x3d71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:02.493196+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:03.493331+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5111768 data_alloc: 251658240 data_used: 47833088
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:04.493514+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:05.493637+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:06.493784+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecfd000/0x0/0x1bfc00000, data 0x3b4282b/0x3d71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:07.493919+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:08.494065+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5111768 data_alloc: 251658240 data_used: 47833088
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:09.494229+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:10.494347+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecfd000/0x0/0x1bfc00000, data 0x3b4282b/0x3d71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.233640671s of 19.318712234s, submitted: 9
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:11.494546+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:12.494669+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:13.494822+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5170232 data_alloc: 251658240 data_used: 48013312
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:14.495194+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:15.495465+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:16.496455+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e637000/0x0/0x1bfc00000, data 0x420882b/0x4437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:17.496660+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:18.496921+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e637000/0x0/0x1bfc00000, data 0x420882b/0x4437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5170232 data_alloc: 251658240 data_used: 48013312
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:19.497057+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e637000/0x0/0x1bfc00000, data 0x420882b/0x4437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:20.497216+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:21.497390+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:22.497580+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:23.497731+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e637000/0x0/0x1bfc00000, data 0x420882b/0x4437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5170232 data_alloc: 251658240 data_used: 48013312
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:24.497921+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:25.498165+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:26.498478+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:27.498617+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e637000/0x0/0x1bfc00000, data 0x420882b/0x4437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559dd8000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b559dd8000 session 0x55b5550e70e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b551f4e5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5539125a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b5553d2f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.500871658s of 16.613708496s, submitted: 41
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b553388d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5582a5400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5582a5400 session 0x55b5529fbe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b55348b4a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5550e7680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:28.498726+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b553404000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5194787 data_alloc: 251658240 data_used: 48013312
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:29.498852+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:30.499007+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:31.499205+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e330000/0x0/0x1bfc00000, data 0x450e83b/0x473e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:32.499340+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:33.499518+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5194787 data_alloc: 251658240 data_used: 48013312
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:34.499657+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:35.499817+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b55463a3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492920832 unmapped: 65880064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484bc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:36.499951+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e30c000/0x0/0x1bfc00000, data 0x453283b/0x4762000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492920832 unmapped: 65880064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:37.500160+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:38.500549+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:39.500945+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5220027 data_alloc: 251658240 data_used: 51191808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:40.501360+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:41.501622+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:42.501956+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e30c000/0x0/0x1bfc00000, data 0x453283b/0x4762000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:43.502178+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.162872314s of 16.235620499s, submitted: 16
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:44.502343+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5220335 data_alloc: 251658240 data_used: 51191808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:45.502524+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e30a000/0x0/0x1bfc00000, data 0x453383b/0x4763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:46.502725+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:47.502885+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:48.503049+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493862912 unmapped: 64937984 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:49.503257+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5319061 data_alloc: 251658240 data_used: 51576832
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d6de000/0x0/0x1bfc00000, data 0x515a83b/0x538a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:50.503509+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:51.503725+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d6a3000/0x0/0x1bfc00000, data 0x518d83b/0x53bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:52.503877+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:53.504029+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d6a3000/0x0/0x1bfc00000, data 0x518d83b/0x53bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:54.504276+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5329285 data_alloc: 251658240 data_used: 51396608
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.729024887s of 10.971504211s, submitted: 91
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:55.504470+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:56.504664+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d28f000/0x0/0x1bfc00000, data 0x518f83b/0x53bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d5af9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55484bc00 session 0x55b55336e780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5534aef00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:57.504787+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5550e7a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:58.504952+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f676000/0x0/0x1bfc00000, data 0x420982b/0x4438000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:59.505194+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5178639 data_alloc: 251658240 data_used: 48013312
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:00.505469+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:01.505704+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:02.505844+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f676000/0x0/0x1bfc00000, data 0x420982b/0x4438000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:03.506026+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5558fef00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5590eac00 session 0x55b5559e2000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:04.506145+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5538834a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:05.506306+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:06.506492+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:07.506677+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:08.506872+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:09.506985+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:10.507116+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:11.507277+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:12.507445+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:13.507564+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:14.507700+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:15.507854+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:16.507987+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:17.508145+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:18.508279+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:19.508394+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:20.508569+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:21.508730+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:22.508852+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:23.509062+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:24.509216+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:25.509358+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:26.509488+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:27.509637+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:28.509792+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:29.510006+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:30.510161+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 78438400 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:31.510370+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 78438400 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:32.510578+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 78438400 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:33.510794+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 78438400 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:34.510935+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5534ae780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534e5a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5559e21e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b551f4fa40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.365959167s of 39.855525970s, submitted: 54
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5590eac00 session 0x55b5550df2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b554abb860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b554f225a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 78430208 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5558fe5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b552ee4000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:35.511054+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 78430208 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:36.511192+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 78430208 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075c000/0x0/0x1bfc00000, data 0x312481b/0x3352000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:37.511368+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 78430208 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:38.511575+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b5529fa1e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:39.511777+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4981270 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:40.511925+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5559d4000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:41.512144+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075c000/0x0/0x1bfc00000, data 0x312481b/0x3352000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:42.512285+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5559e3a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5529d01e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:43.512510+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:44.512647+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4990358 data_alloc: 234881024 data_used: 37367808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:45.512908+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:46.513187+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:47.513377+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5534054a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.563242912s of 12.624855042s, submitted: 15
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b554f22780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:48.513598+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:49.513743+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4990226 data_alloc: 234881024 data_used: 37367808
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:50.513871+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:51.514107+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:52.514351+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:53.514577+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:54.514773+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4990134 data_alloc: 234881024 data_used: 37371904
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:55.514930+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480395264 unmapped: 78405632 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5526ee000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b555372000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:56.515100+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:57.515292+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:58.515496+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:59.515626+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.851888657s of 11.862312317s, submitted: 3
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4990294 data_alloc: 234881024 data_used: 37376000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:00.515793+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:01.516031+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b554ede000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b55348f680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484bc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:02.516222+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55484bc00 session 0x55b5534aeb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:03.516520+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:04.516798+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:05.517034+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:06.517508+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:07.517787+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:08.518089+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:09.518408+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:10.518657+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:11.518917+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:12.519080+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:13.519290+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:14.519454+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:15.519597+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:16.519757+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:17.519878+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:18.520069+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-12-06T08:25:19.520204+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _finish_auth 0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:20.291509+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:20.520408+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:21.520672+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:22.520808+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:23.520960+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:24.521105+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:25.521262+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:26.521409+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:27.521590+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:28.521741+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:29.521933+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:30.522115+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:31.522298+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:32.522589+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:33.522741+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:34.522897+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.667713165s of 35.853076935s, submitted: 37
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:35.523034+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486121472 unmapped: 76357632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b55271eb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5529d1c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5559421e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5534270e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546cc000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5546cc000 session 0x55b55336fc20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:36.523184+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480567296 unmapped: 81911808 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19fc76000/0x0/0x1bfc00000, data 0x3c0b80b/0x3e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:37.523314+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b553404960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b553405e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480583680 unmapped: 81895424 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5558ff860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b55392a5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556882800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b556882800 session 0x55b5559434a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3da000/0x0/0x1bfc00000, data 0x44a780b/0x46d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5550df860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:38.523516+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480583680 unmapped: 81895424 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534b8d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5559d50e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5529d1860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933f000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55933f000 session 0x55b5559d4f00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5558fe960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:39.523661+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5558fe3c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5529d1860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480788480 unmapped: 81690624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5534b8d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ba000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5546ba000 session 0x55b5550df860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5558ff860
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b55392a5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5162126 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b553404960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:40.523830+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480788480 unmapped: 81690624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:41.524016+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480788480 unmapped: 81690624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f090000/0x0/0x1bfc00000, data 0x47ef82b/0x4a1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:42.524239+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482213888 unmapped: 80265216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:43.524477+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f090000/0x0/0x1bfc00000, data 0x47ef82b/0x4a1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482213888 unmapped: 80265216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e43400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b559e43400 session 0x55b55392c960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:44.524632+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482369536 unmapped: 80109568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5241412 data_alloc: 251658240 data_used: 47656960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:45.524767+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482369536 unmapped: 80109568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:46.524904+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482369536 unmapped: 80109568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:47.525021+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f06c000/0x0/0x1bfc00000, data 0x481382b/0x4a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484114432 unmapped: 78364672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:48.525191+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.800157547s of 13.137754440s, submitted: 45
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5546ca000 session 0x55b552ee4780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484114432 unmapped: 78364672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:49.525352+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484114432 unmapped: 78364672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f048000/0x0/0x1bfc00000, data 0x483782b/0x4a66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5308328 data_alloc: 251658240 data_used: 55156736
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:50.525492+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485416960 unmapped: 77062144 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:51.525648+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485416960 unmapped: 77062144 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:52.525781+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485416960 unmapped: 77062144 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f048000/0x0/0x1bfc00000, data 0x483782b/0x4a66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,3,3,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5534b9680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534af4a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:53.525919+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b553405a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 75759616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:54.526062+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 75759616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5347188 data_alloc: 251658240 data_used: 55115776
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:55.526181+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 75759616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553872000 session 0x55b5534a4d20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552c9bc00 session 0x55b554abb680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:56.526320+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 75726848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552c9bc00 session 0x55b55463af00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ebad000/0x0/0x1bfc00000, data 0x4cd281b/0x4f00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:57.526483+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 76251136 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:58.526616+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 76251136 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:59.526755+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5559421e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5546ca400 session 0x55b5529d1c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 76251136 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5219927 data_alloc: 251658240 data_used: 46284800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.214290619s of 11.467612267s, submitted: 101
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5558ffe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:00.526930+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:01.527129+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:02.527314+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:03.527482+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:04.527613+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:05.527745+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:06.527905+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:07.528043+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:08.528177+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:09.528317+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:10.528467+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:11.528635+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:12.528790+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:13.528953+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:14.529589+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:15.529793+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:16.529972+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:17.530106+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:18.530307+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:19.530494+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:20.530636+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:21.530790+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:22.530896+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:23.531036+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:24.531184+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:25.531368+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:26.531616+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:27.531785+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:28.531911+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:29.532060+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:30.532232+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:31.532394+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:32.532565+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:33.532682+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:34.532802+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:35.532885+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:36.532996+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:37.533137+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:38.533286+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:39.533393+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:40.533572+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:41.533759+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:42.533932+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:43.534161+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:44.534299+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:45.534491+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:46.534657+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:47.534831+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:48.535010+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:49.535151+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:50.535313+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:51.535521+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:52.535681+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:53.535820+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:54.535997+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:55.536120+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481910784 unmapped: 80568320 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:56.536265+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481910784 unmapped: 80568320 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:57.536531+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481910784 unmapped: 80568320 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:58.536678+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481910784 unmapped: 80568320 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.952594757s of 58.991859436s, submitted: 17
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5533681e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b55392ba40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:59.536799+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482541568 unmapped: 79937536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4999730 data_alloc: 234881024 data_used: 36880384
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:00.536944+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552c9bc00 session 0x55b5533efe00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482541568 unmapped: 79937536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:01.537102+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482541568 unmapped: 79937536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:02.537266+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482541568 unmapped: 79937536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5546ca400 session 0x55b5522f3e00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:03.537403+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482549760 unmapped: 79929344 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5550deb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:04.537572+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553872000 session 0x55b5534261e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482557952 unmapped: 79921152 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5083435 data_alloc: 234881024 data_used: 36880384
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:05.537705+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482557952 unmapped: 79921152 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:06.537828+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482557952 unmapped: 79921152 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:07.537966+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482557952 unmapped: 79921152 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19fd31000/0x0/0x1bfc00000, data 0x3b5080b/0x3d7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:08.538159+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482557952 unmapped: 79921152 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.000549316s of 10.172869682s, submitted: 41
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:09.538314+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482369536 unmapped: 80109568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5091959 data_alloc: 234881024 data_used: 36888576
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 430 ms_handle_reset con 0x55b552b36c00 session 0x55b55506cb40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:10.538518+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482385920 unmapped: 80093184 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 431 heartbeat osd_stat(store_statfs(0x19fd27000/0x0/0x1bfc00000, data 0x3b5421a/0x3d85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:11.538689+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 431 ms_handle_reset con 0x55b5546ca400 session 0x55b55392a780
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482328576 unmapped: 80150528 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 432 ms_handle_reset con 0x55b552c9bc00 session 0x55b553883680
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:12.538835+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482336768 unmapped: 80142336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:13.539012+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 432 ms_handle_reset con 0x55b55538b400 session 0x55b55395cf00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482336768 unmapped: 80142336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:14.539158+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482336768 unmapped: 80142336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5153117 data_alloc: 234881024 data_used: 36892672
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e43400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 432 ms_handle_reset con 0x55b559e43400 session 0x55b5550de000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:15.539298+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 432 ms_handle_reset con 0x55b552b36c00 session 0x55b554aba5a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482549760 unmapped: 79929344 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 432 heartbeat osd_stat(store_statfs(0x19ef60000/0x0/0x1bfc00000, data 0x4919c4c/0x4b4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 433 ms_handle_reset con 0x55b552c9bc00 session 0x55b55395d2c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 433 ms_handle_reset con 0x55b5546ca400 session 0x55b554f23a40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:16.539497+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 433 ms_handle_reset con 0x55b553873c00 session 0x55b5559434a0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482566144 unmapped: 79912960 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:17.539730+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482566144 unmapped: 79912960 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 433 heartbeat osd_stat(store_statfs(0x19eb58000/0x0/0x1bfc00000, data 0x4d1e996/0x4f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:18.539885+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482566144 unmapped: 79912960 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:19.540078+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482574336 unmapped: 79904768 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5242599 data_alloc: 234881024 data_used: 36904960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.926185608s of 11.230964661s, submitted: 72
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:20.540301+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:21.540503+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:22.540690+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:23.540834+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:24.541024+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5244245 data_alloc: 234881024 data_used: 36904960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:25.541188+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:26.541392+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:27.541684+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:28.541847+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:29.542099+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b55538b400 session 0x55b5553732c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5244245 data_alloc: 234881024 data_used: 36904960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:30.542305+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b552b36c00 session 0x55b55395c960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:31.542551+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b552c9bc00 session 0x55b5550e63c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b553873c00 session 0x55b5558fe000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559dd1c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.029902458s of 12.038014412s, submitted: 10
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:32.542734+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:33.542979+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481034240 unmapped: 81444864 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:34.543228+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5327897 data_alloc: 234881024 data_used: 44769280
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:35.543682+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:36.543821+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:37.543968+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:38.544103+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:39.544243+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5327897 data_alloc: 234881024 data_used: 44769280
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:40.544387+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:41.544630+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:42.544800+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:43.544986+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481345536 unmapped: 81133568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:44.545145+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481345536 unmapped: 81133568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.649188042s of 12.812686920s, submitted: 1
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5328013 data_alloc: 234881024 data_used: 44781568
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,1])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:45.545309+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 483876864 unmapped: 78602240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19e2c9000/0x0/0x1bfc00000, data 0x5930548/0x57e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:46.545474+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484933632 unmapped: 77545472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:47.545683+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484941824 unmapped: 77537280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:48.545845+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484941824 unmapped: 77537280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:49.545997+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484941824 unmapped: 77537280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5427449 data_alloc: 234881024 data_used: 44855296
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:50.546164+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484941824 unmapped: 77537280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:51.546372+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484810752 unmapped: 77668352 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19e2a2000/0x0/0x1bfc00000, data 0x5957548/0x580c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7205.4 total, 600.0 interval
                                           Cumulative writes: 67K writes, 265K keys, 67K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s
                                           Cumulative WAL: 67K writes, 24K syncs, 2.75 writes per sync, written: 0.25 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3267 writes, 12K keys, 3267 commit groups, 1.0 writes per commit group, ingest: 12.60 MB, 0.02 MB/s
                                           Interval WAL: 3267 writes, 1351 syncs, 2.42 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b5546ca400 session 0x55b5553d2b40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:52.546517+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b559dd1c00 session 0x55b55348fa40
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19e2a2000/0x0/0x1bfc00000, data 0x5957548/0x580c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b552b36c00 session 0x55b55348e1e0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 77660160 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:53.546719+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 77660160 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:54.546860+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 77660160 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5424449 data_alloc: 234881024 data_used: 44859392
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:55.547027+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 77660160 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.548606873s of 10.875352859s, submitted: 89
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:56.547199+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 435 ms_handle_reset con 0x55b552c9bc00 session 0x55b5533683c0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 435 ms_handle_reset con 0x55b5546ca400 session 0x55b5559d4960
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478429184 unmapped: 84049920 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:57.547329+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 436 ms_handle_reset con 0x55b553873c00 session 0x55b5534e4000
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 436 ms_handle_reset con 0x55b555389000 session 0x55b5553d3c20
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:58.547511+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 436 heartbeat osd_stat(store_statfs(0x1a07c9000/0x0/0x1bfc00000, data 0x30ad05a/0x32e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:59.547672+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5052791 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:00.547829+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:01.548027+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:02.548170+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 436 heartbeat osd_stat(store_statfs(0x1a07c9000/0x0/0x1bfc00000, data 0x30ad05a/0x32e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:03.548349+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:04.548499+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5052791 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:05.548653+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:06.548792+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:07.548994+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 84549632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:08.549155+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 84549632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:09.549335+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 84541440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:10.549504+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 84541440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:11.549755+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 84541440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:12.549921+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 84541440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:13.550106+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477945856 unmapped: 84533248 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:14.550322+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477945856 unmapped: 84533248 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:15.550507+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:16.550721+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:17.550908+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:18.551069+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:19.551218+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:20.551365+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:21.551588+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:22.551789+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:23.551945+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:24.552084+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:25.552279+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:26.552444+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:27.552631+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:28.552815+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:29.552983+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:30.553157+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:31.553357+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:32.553509+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:33.553712+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:34.553896+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:35.554036+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:36.554317+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:37.554478+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:38.554615+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 84484096 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:39.554790+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:40.554993+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:41.555195+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:42.555383+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:43.555562+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:44.555707+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:45.555824+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:46.555999+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:47.556271+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:48.556469+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:49.556681+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: mgrc ms_handle_reset ms_handle_reset con 0x55b55ab1f800
Dec 06 08:29:59 compute-1 ceph-osd[79002]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/798720280
Dec 06 08:29:59 compute-1 ceph-osd[79002]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/798720280,v1:192.168.122.100:6801/798720280]
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: get_auth_request con 0x55b555389000 auth_method 0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: mgrc handle_mgr_configure stats_period=5
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:50.556845+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:51.557166+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:52.557374+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:53.557574+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:54.557756+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:55.557903+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:56.558080+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:57.558283+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:58.558446+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:59.558607+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:00.558752+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:01.559004+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:02.559211+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 84451328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:03.559387+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 84451328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:04.559664+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 84451328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:05.559920+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 84451328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:06.560098+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478044160 unmapped: 84434944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:07.560324+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478044160 unmapped: 84434944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:08.560520+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478044160 unmapped: 84434944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:09.560747+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478044160 unmapped: 84434944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:10.560896+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:11.561180+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:12.561369+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:13.561599+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:14.561900+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:15.562068+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:16.562282+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:17.562472+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:18.562648+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:19.562872+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:20.563116+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:21.563533+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:22.563800+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:23.564040+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:24.564229+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:25.564466+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:29:59 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:29:59 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478142464 unmapped: 84336640 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:26.564617+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: do_command 'config diff' '{prefix=config diff}'
Dec 06 08:29:59 compute-1 ceph-osd[79002]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 06 08:29:59 compute-1 ceph-osd[79002]: do_command 'config show' '{prefix=config show}'
Dec 06 08:29:59 compute-1 ceph-osd[79002]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 06 08:29:59 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:29:59 compute-1 ceph-osd[79002]: do_command 'counter dump' '{prefix=counter dump}'
Dec 06 08:29:59 compute-1 ceph-osd[79002]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 06 08:29:59 compute-1 ceph-osd[79002]: do_command 'counter schema' '{prefix=counter schema}'
Dec 06 08:29:59 compute-1 ceph-osd[79002]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:27.564817+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 84516864 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:29:59 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:28.565037+0000)
Dec 06 08:29:59 compute-1 ceph-osd[79002]: do_command 'log dump' '{prefix=log dump}'
Dec 06 08:29:59 compute-1 nova_compute[226101]: 2025-12-06 08:29:59.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:59 compute-1 nova_compute[226101]: 2025-12-06 08:29:59.591 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:29:59 compute-1 nova_compute[226101]: 2025-12-06 08:29:59.591 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:29:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 06 08:29:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/614316164' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:29:59 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:30:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 08:30:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1991419766' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.45566 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.46420 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.45581 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.45587 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.46435 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3078248732' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.37329 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/614316164' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3810903548' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/447824310' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1991419766' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1118942847' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2329781491' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:30:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 06 08:30:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4281710221' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:30:00 compute-1 crontab[320383]: (root) LIST (root)
Dec 06 08:30:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec 06 08:30:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1543744436' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:30:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:01.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:01.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:01 compute-1 nova_compute[226101]: 2025-12-06 08:30:01.333 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:30:01 compute-1 nova_compute[226101]: 2025-12-06 08:30:01.333 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:01 compute-1 nova_compute[226101]: 2025-12-06 08:30:01.333 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:30:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:30:01.708 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:30:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:30:01.709 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:30:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:30:01.709 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:30:01 compute-1 nova_compute[226101]: 2025-12-06 08:30:01.806 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.45599 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.46447 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.37341 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.45617 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.46468 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: pgmap v4142: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.37359 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4281710221' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.45629 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3661017523' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.46480 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2986239348' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1543744436' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3073358271' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/770934710' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3224398132' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:30:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec 06 08:30:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/202662870' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec 06 08:30:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1970116714' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec 06 08:30:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1705572258' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec 06 08:30:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2156604952' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.37371 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.45650 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.46501 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.37389 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.45662 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.46513 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.37407 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.46528 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/202662870' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.37422 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2197532046' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.45695 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: pgmap v4143: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.37428 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1970116714' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3704065491' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1705572258' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.46561 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2156604952' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3224827068' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:30:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2894406726' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec 06 08:30:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3960838425' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:03.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:03.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec 06 08:30:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3059804696' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec 06 08:30:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3894160054' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec 06 08:30:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2413139653' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec 06 08:30:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/811302128' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec 06 08:30:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2941519675' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3960838425' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.37455 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3059804696' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1188637612' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/471531764' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3894160054' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2413139653' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2314498896' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1646325394' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3163222241' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4005305835' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/811302128' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3892246700' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:30:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2941519675' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:04 compute-1 systemd[1]: Starting Hostname Service...
Dec 06 08:30:04 compute-1 nova_compute[226101]: 2025-12-06 08:30:04.159 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:04 compute-1 systemd[1]: Started Hostname Service.
Dec 06 08:30:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec 06 08:30:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3156275921' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:30:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Dec 06 08:30:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2252398494' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:30:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Dec 06 08:30:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/564105370' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:30:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 06 08:30:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/859904933' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:30:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Dec 06 08:30:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3241546565' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: pgmap v4144: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3035065875' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1241005051' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3156275921' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3036098789' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2252398494' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/564105370' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1800079149' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1902888652' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/841291817' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2084276204' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/859904933' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3241546565' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1062355183' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:30:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:05.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:05.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Dec 06 08:30:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/164530540' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Dec 06 08:30:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3810317471' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-1 ceph-mgr[82049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3072314910' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/164530540' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3605788513' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2820554929' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3810317471' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/552085105' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1504983881' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/420795564' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3112676870' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.45815 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.45821 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3515512720' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1730310572' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2766860470' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/871280326' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-1 nova_compute[226101]: 2025-12-06 08:30:06.808 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Dec 06 08:30:06 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1512476254' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:30:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:07.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:07 compute-1 nova_compute[226101]: 2025-12-06 08:30:07.102 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:07 compute-1 nova_compute[226101]: 2025-12-06 08:30:07.103 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:07 compute-1 nova_compute[226101]: 2025-12-06 08:30:07.103 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:07 compute-1 nova_compute[226101]: 2025-12-06 08:30:07.104 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.45827 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.45833 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3375895849' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: pgmap v4145: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.37581 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.37587 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3724135901' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.45848 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.46705 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.46711 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.37596 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.46717 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1512476254' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Dec 06 08:30:07 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/836512384' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:30:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 06 08:30:07 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2298280578' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:08 compute-1 podman[321445]: 2025-12-06 08:30:08.087053786 +0000 UTC m=+0.068688983 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:30:08 compute-1 podman[321446]: 2025-12-06 08:30:08.117231066 +0000 UTC m=+0.094786574 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 08:30:08 compute-1 podman[321447]: 2025-12-06 08:30:08.149326667 +0000 UTC m=+0.126873225 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.45857 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.46723 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.46735 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1607896249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.37620 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.45872 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.46753 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/836512384' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/449367434' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.37638 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.45884 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3933450453' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3106095360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2298280578' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4031716560' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:30:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Dec 06 08:30:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1507927948' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:09.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:09.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-1 nova_compute[226101]: 2025-12-06 08:30:09.161 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.46765 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.37662 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.45896 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.46780 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1735822335' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: pgmap v4146: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.37671 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1507927948' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/72862526' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.46792 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.37689 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/707535975' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2170862911' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3596632528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3831316851' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3831316851' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3797364698' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='client.46804 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3831316851' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3831316851' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3797364698' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Dec 06 08:30:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3180610648' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:30:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Dec 06 08:30:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3066853454' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:30:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:11.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:11.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:11 compute-1 ceph-mon[81689]: from='client.45983 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:11 compute-1 ceph-mon[81689]: pgmap v4147: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2092924722' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:30:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/98021121' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:30:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3180610648' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:30:11 compute-1 ceph-mon[81689]: from='client.46918 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:11 compute-1 ceph-mon[81689]: from='client.37797 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4294752078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3066853454' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:30:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1089499906' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:30:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1527899473' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:30:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Dec 06 08:30:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1386011340' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:30:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Dec 06 08:30:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1301984105' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:30:11 compute-1 nova_compute[226101]: 2025-12-06 08:30:11.853 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1386011340' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:30:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1853012714' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:30:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1056357027' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:30:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1456843590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1301984105' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:30:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/570365904' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:30:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2186325669' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:30:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3692227316' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:30:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Dec 06 08:30:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/289311643' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:30:12 compute-1 nova_compute[226101]: 2025-12-06 08:30:12.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Dec 06 08:30:13 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4122255845' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:30:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:13.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:13.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:13 compute-1 ceph-mon[81689]: from='client.46040 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:13 compute-1 ceph-mon[81689]: pgmap v4148: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2466723271' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:30:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/289311643' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:30:13 compute-1 ceph-mon[81689]: from='client.37830 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:13 compute-1 ceph-mon[81689]: from='client.46963 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4122255845' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:30:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3681133396' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:30:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3237105517' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:30:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Dec 06 08:30:13 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3333165368' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 08:30:14 compute-1 nova_compute[226101]: 2025-12-06 08:30:14.162 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:14 compute-1 ceph-mon[81689]: from='client.46064 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2735476104' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:30:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2157891544' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:30:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3333165368' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 08:30:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1500227487' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 08:30:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:15.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:15.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:15 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Dec 06 08:30:15 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4036621259' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 08:30:15 compute-1 nova_compute[226101]: 2025-12-06 08:30:15.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:15 compute-1 ceph-mon[81689]: from='client.46993 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-1 ceph-mon[81689]: from='client.37854 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-1 ceph-mon[81689]: pgmap v4149: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:15 compute-1 ceph-mon[81689]: from='client.46076 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4020119241' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 08:30:15 compute-1 ceph-mon[81689]: from='client.47005 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-1 ceph-mon[81689]: from='client.37866 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4036621259' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 08:30:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2243827790' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 08:30:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3745472718' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 08:30:15 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/483461817' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 08:30:16 compute-1 sshd-session[319959]: error: kex_exchange_identification: read: Connection reset by peer
Dec 06 08:30:16 compute-1 sshd-session[319959]: Connection reset by 14.103.75.9 port 21906
Dec 06 08:30:16 compute-1 ceph-mon[81689]: from='client.46085 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-1 ceph-mon[81689]: from='client.47011 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-1 ceph-mon[81689]: from='client.46088 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3844968379' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 08:30:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2132293271' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 08:30:16 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Dec 06 08:30:16 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3220172303' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 08:30:16 compute-1 nova_compute[226101]: 2025-12-06 08:30:16.854 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:16 compute-1 ovs-appctl[323144]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 06 08:30:16 compute-1 ovs-appctl[323151]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 06 08:30:17 compute-1 ovs-appctl[323161]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 06 08:30:17 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Dec 06 08:30:17 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/347281288' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 06 08:30:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:17.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:17.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:17 compute-1 nova_compute[226101]: 2025-12-06 08:30:17.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.46106 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.47035 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-1 ceph-mon[81689]: pgmap v4150: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.37896 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.46115 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.47044 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.37905 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3220172303' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2410749592' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1986429874' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/347281288' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1891305318' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 06 08:30:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2293411759' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 06 08:30:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Dec 06 08:30:18 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1164476600' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 08:30:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:19.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:19.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:19 compute-1 nova_compute[226101]: 2025-12-06 08:30:19.164 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Dec 06 08:30:19 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/650511931' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:19 compute-1 nova_compute[226101]: 2025-12-06 08:30:19.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 06 08:30:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3335022702' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:21.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:21.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Dec 06 08:30:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3768634476' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 08:30:21 compute-1 nova_compute[226101]: 2025-12-06 08:30:21.856 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:21 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Dec 06 08:30:21 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3375488095' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:22 compute-1 ceph-mon[81689]: from='client.46142 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:22 compute-1 ceph-mon[81689]: from='client.47071 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4062746758' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:30:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2634980621' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:30:22 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Dec 06 08:30:22 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3695961694' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:23.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:23.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Dec 06 08:30:23 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/416997609' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.46148 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.37938 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.47077 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: pgmap v4151: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.37947 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1164476600' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/554704831' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3572935360' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/650511931' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2117150299' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.47107 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.46178 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3573622413' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1565949019' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/496890426' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: pgmap v4152: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4171845927' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.37977 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2596404359' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3335022702' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3768634476' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3375488095' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: pgmap v4153: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/370689176' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4179480912' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3762760187' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3695961694' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/779822408' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-1 sshd[164848]: Timeout before authentication for connection from 14.103.118.136 to 38.102.83.204, pid = 317897
Dec 06 08:30:24 compute-1 nova_compute[226101]: 2025-12-06 08:30:24.209 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:24 compute-1 sshd-session[324412]: Received disconnect from 186.96.151.198 port 45156:11: Bye Bye [preauth]
Dec 06 08:30:24 compute-1 sshd-session[324412]: Disconnected from authenticating user root 186.96.151.198 port 45156 [preauth]
Dec 06 08:30:24 compute-1 ceph-mon[81689]: from='client.47155 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-1 ceph-mon[81689]: from='client.46211 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3674858668' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/958197419' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/416997609' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-1 ceph-mon[81689]: from='client.38019 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-1 ceph-mon[81689]: pgmap v4154: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:25.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:25.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:25 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Dec 06 08:30:25 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4124238822' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3599598298' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3655934544' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4124238822' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4251283217' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-1 ceph-mon[81689]: from='client.47176 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Dec 06 08:30:26 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3979588669' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-1 nova_compute[226101]: 2025-12-06 08:30:26.859 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:27.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:27.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:27 compute-1 ceph-mon[81689]: from='client.46232 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-1 ceph-mon[81689]: from='client.38043 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/328269644' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-1 ceph-mon[81689]: pgmap v4155: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/782571674' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3979588669' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-1 ceph-mon[81689]: from='client.38055 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-1 ceph-mon[81689]: from='client.47197 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Dec 06 08:30:27 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/61962800' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Dec 06 08:30:28 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1952695033' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-1 ceph-mon[81689]: from='client.46250 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-1 ceph-mon[81689]: from='client.47206 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-1 ceph-mon[81689]: from='client.38064 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-1 ceph-mon[81689]: from='client.46256 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2530206311' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/776452059' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/61962800' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3573223227' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3419545571' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1952695033' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:29.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:29.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:29 compute-1 nova_compute[226101]: 2025-12-06 08:30:29.211 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 06 08:30:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/348320463' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-1 ceph-mon[81689]: from='client.47236 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-1 ceph-mon[81689]: pgmap v4156: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:29 compute-1 ceph-mon[81689]: from='client.38091 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-1 ceph-mon[81689]: from='client.47245 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-1 ceph-mon[81689]: from='client.46283 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-1 ceph-mon[81689]: from='client.38103 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3317472033' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1117232887' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3249964146' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/348320463' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-1 virtqemud[225710]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 06 08:30:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Dec 06 08:30:29 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/638693939' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-1 systemd[1]: Starting Time & Date Service...
Dec 06 08:30:30 compute-1 systemd[1]: Started Time & Date Service.
Dec 06 08:30:30 compute-1 ceph-mon[81689]: from='client.46301 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/939267410' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-1 ceph-mon[81689]: from='client.47281 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/638693939' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3466208961' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1655787077' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:31.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 06 08:30:31 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2817429348' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-1 sudo[325240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:31 compute-1 sudo[325240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:31 compute-1 sudo[325240]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:31 compute-1 sudo[325271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:30:31 compute-1 sudo[325271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:31 compute-1 sudo[325271]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:31 compute-1 sudo[325301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:31 compute-1 sudo[325301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:31 compute-1 sudo[325301]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:31 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Dec 06 08:30:31 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/563215958' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-1 sudo[325328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:30:31 compute-1 sudo[325328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:31 compute-1 nova_compute[226101]: 2025-12-06 08:30:31.863 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:31 compute-1 ceph-mon[81689]: from='client.38136 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-1 ceph-mon[81689]: from='client.47287 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-1 ceph-mon[81689]: from='client.38142 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-1 ceph-mon[81689]: pgmap v4157: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:31 compute-1 ceph-mon[81689]: from='client.46328 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-1 ceph-mon[81689]: from='client.46337 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/222777010' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3615114752' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2817429348' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/563215958' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:32 compute-1 sudo[325328]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:33 compute-1 ceph-mon[81689]: pgmap v4158: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:33.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:33.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:33 compute-1 sshd-session[325384]: Received disconnect from 186.87.166.141 port 43236:11: Bye Bye [preauth]
Dec 06 08:30:33 compute-1 sshd-session[325384]: Disconnected from authenticating user root 186.87.166.141 port 43236 [preauth]
Dec 06 08:30:34 compute-1 nova_compute[226101]: 2025-12-06 08:30:34.217 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:30:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:30:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:30:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:30:34 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:30:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:35.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:35.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:35 compute-1 ceph-mon[81689]: pgmap v4159: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:36 compute-1 nova_compute[226101]: 2025-12-06 08:30:36.868 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:37 compute-1 sshd-session[325386]: Received disconnect from 45.120.216.232 port 57932:11: Bye Bye [preauth]
Dec 06 08:30:37 compute-1 sshd-session[325386]: Disconnected from authenticating user root 45.120.216.232 port 57932 [preauth]
Dec 06 08:30:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:37.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:37.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:37 compute-1 ceph-mon[81689]: pgmap v4160: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:39 compute-1 podman[325389]: 2025-12-06 08:30:39.091345587 +0000 UTC m=+0.068081308 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:30:39 compute-1 podman[325390]: 2025-12-06 08:30:39.126860419 +0000 UTC m=+0.102868520 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 06 08:30:39 compute-1 podman[325388]: 2025-12-06 08:30:39.136133999 +0000 UTC m=+0.110645880 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 08:30:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:39.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:39.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:39 compute-1 nova_compute[226101]: 2025-12-06 08:30:39.273 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:39 compute-1 ceph-mon[81689]: pgmap v4161: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:41.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:41.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:41 compute-1 ceph-mon[81689]: pgmap v4162: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:41 compute-1 sudo[325450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:41 compute-1 sudo[325450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:41 compute-1 sudo[325450]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:41 compute-1 sudo[325475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:30:41 compute-1 sudo[325475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:41 compute-1 sudo[325475]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:41 compute-1 nova_compute[226101]: 2025-12-06 08:30:41.870 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:42 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:42 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:43.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:43.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:43 compute-1 ceph-mon[81689]: pgmap v4163: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:44 compute-1 nova_compute[226101]: 2025-12-06 08:30:44.276 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:45.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:45.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:46 compute-1 ceph-mon[81689]: pgmap v4164: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:46 compute-1 nova_compute[226101]: 2025-12-06 08:30:46.872 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:47.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:47.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:48 compute-1 ceph-mon[81689]: pgmap v4165: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:49.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:49.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:49 compute-1 nova_compute[226101]: 2025-12-06 08:30:49.278 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:49 compute-1 ceph-mon[81689]: pgmap v4166: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:49 compute-1 nova_compute[226101]: 2025-12-06 08:30:49.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:49 compute-1 nova_compute[226101]: 2025-12-06 08:30:49.604 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:49 compute-1 nova_compute[226101]: 2025-12-06 08:30:49.635 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:30:49 compute-1 nova_compute[226101]: 2025-12-06 08:30:49.636 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:30:49 compute-1 nova_compute[226101]: 2025-12-06 08:30:49.636 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:30:49 compute-1 nova_compute[226101]: 2025-12-06 08:30:49.637 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:30:49 compute-1 nova_compute[226101]: 2025-12-06 08:30:49.637 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:30:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:30:50 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3827008471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.113 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.300 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.302 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4103MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.302 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.302 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.378 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.378 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.408 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:30:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3827008471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:50 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:30:50 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2064156881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.906 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.913 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.935 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.937 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:30:50 compute-1 nova_compute[226101]: 2025-12-06 08:30:50.937 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:30:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:51.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:51 compute-1 ceph-mon[81689]: pgmap v4167: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2064156881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:51 compute-1 nova_compute[226101]: 2025-12-06 08:30:51.873 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:52 compute-1 sshd-session[325546]: Received disconnect from 124.18.141.70 port 34416:11: Bye Bye [preauth]
Dec 06 08:30:52 compute-1 sshd-session[325546]: Disconnected from authenticating user root 124.18.141.70 port 34416 [preauth]
Dec 06 08:30:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:53.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:53 compute-1 ceph-mon[81689]: pgmap v4168: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:54 compute-1 nova_compute[226101]: 2025-12-06 08:30:54.282 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:54 compute-1 sshd-session[325500]: Connection closed by 165.154.55.146 port 53008 [preauth]
Dec 06 08:30:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:55.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:55.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:55 compute-1 sshd-session[325548]: Received disconnect from 101.100.194.199 port 35198:11: Bye Bye [preauth]
Dec 06 08:30:55 compute-1 sshd-session[325548]: Disconnected from authenticating user root 101.100.194.199 port 35198 [preauth]
Dec 06 08:30:55 compute-1 ceph-mon[81689]: pgmap v4169: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:56 compute-1 sshd-session[325550]: Received disconnect from 154.209.4.183 port 53990:11: Bye Bye [preauth]
Dec 06 08:30:56 compute-1 sshd-session[325550]: Disconnected from authenticating user root 154.209.4.183 port 53990 [preauth]
Dec 06 08:30:56 compute-1 sshd-session[325552]: Received disconnect from 136.112.8.45 port 45822:11: Bye Bye [preauth]
Dec 06 08:30:56 compute-1 sshd-session[325552]: Disconnected from authenticating user root 136.112.8.45 port 45822 [preauth]
Dec 06 08:30:56 compute-1 nova_compute[226101]: 2025-12-06 08:30:56.921 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:57.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:57 compute-1 ceph-mon[81689]: pgmap v4170: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:57.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:58 compute-1 nova_compute[226101]: 2025-12-06 08:30:58.923 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:58 compute-1 nova_compute[226101]: 2025-12-06 08:30:58.924 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:30:58 compute-1 nova_compute[226101]: 2025-12-06 08:30:58.924 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:30:58 compute-1 nova_compute[226101]: 2025-12-06 08:30:58.941 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:30:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:59.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:30:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:59.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:59 compute-1 nova_compute[226101]: 2025-12-06 08:30:59.286 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:59 compute-1 ceph-mon[81689]: pgmap v4171: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:00 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 08:31:00 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 08:31:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:01.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:01.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:01 compute-1 ceph-mon[81689]: pgmap v4172: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:31:01.709 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:31:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:31:01.710 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:31:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:31:01.710 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:31:01 compute-1 nova_compute[226101]: 2025-12-06 08:31:01.922 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:02 compute-1 ceph-mon[81689]: pgmap v4173: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:03.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:03.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:04 compute-1 nova_compute[226101]: 2025-12-06 08:31:04.287 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:05.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:05.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:05 compute-1 ceph-mon[81689]: pgmap v4174: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:05 compute-1 sshd-session[325558]: Received disconnect from 106.51.92.114 port 39470:11: Bye Bye [preauth]
Dec 06 08:31:05 compute-1 sshd-session[325558]: Disconnected from authenticating user root 106.51.92.114 port 39470 [preauth]
Dec 06 08:31:06 compute-1 nova_compute[226101]: 2025-12-06 08:31:06.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:06 compute-1 nova_compute[226101]: 2025-12-06 08:31:06.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:31:06 compute-1 nova_compute[226101]: 2025-12-06 08:31:06.925 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:07.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:07.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:07 compute-1 ceph-mon[81689]: pgmap v4175: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:07 compute-1 nova_compute[226101]: 2025-12-06 08:31:07.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:07 compute-1 nova_compute[226101]: 2025-12-06 08:31:07.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3295295733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:09.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:09.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:09 compute-1 nova_compute[226101]: 2025-12-06 08:31:09.289 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:09 compute-1 ceph-mon[81689]: pgmap v4176: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2136885558' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:31:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2136885558' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:31:09 compute-1 sshd-session[325560]: Received disconnect from 14.225.3.79 port 47172:11: Bye Bye [preauth]
Dec 06 08:31:09 compute-1 sshd-session[325560]: Disconnected from authenticating user root 14.225.3.79 port 47172 [preauth]
Dec 06 08:31:10 compute-1 podman[325563]: 2025-12-06 08:31:10.079526315 +0000 UTC m=+0.056756543 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:31:10 compute-1 podman[325562]: 2025-12-06 08:31:10.104769302 +0000 UTC m=+0.081711672 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 06 08:31:10 compute-1 podman[325564]: 2025-12-06 08:31:10.135479446 +0000 UTC m=+0.105836509 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:31:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/612683571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3684053451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:11.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:11.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:11 compute-1 ceph-mon[81689]: pgmap v4177: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 08:31:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2578691889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:11 compute-1 nova_compute[226101]: 2025-12-06 08:31:11.928 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:12 compute-1 sshd-session[325621]: Received disconnect from 91.144.158.231 port 32335:11: Bye Bye [preauth]
Dec 06 08:31:12 compute-1 sshd-session[325621]: Disconnected from authenticating user root 91.144.158.231 port 32335 [preauth]
Dec 06 08:31:12 compute-1 ceph-mon[81689]: pgmap v4178: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec 06 08:31:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:13.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:13.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:13 compute-1 nova_compute[226101]: 2025-12-06 08:31:13.595 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:14 compute-1 nova_compute[226101]: 2025-12-06 08:31:14.292 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:15.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:15.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:15 compute-1 ceph-mon[81689]: pgmap v4179: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec 06 08:31:16 compute-1 nova_compute[226101]: 2025-12-06 08:31:16.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:16 compute-1 nova_compute[226101]: 2025-12-06 08:31:16.930 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:17.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:17 compute-1 nova_compute[226101]: 2025-12-06 08:31:17.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:17 compute-1 sudo[318454]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:17 compute-1 sshd-session[318453]: Received disconnect from 192.168.122.10 port 34448:11: disconnected by user
Dec 06 08:31:17 compute-1 sshd-session[318453]: Disconnected from user zuul 192.168.122.10 port 34448
Dec 06 08:31:17 compute-1 sshd-session[318450]: pam_unix(sshd:session): session closed for user zuul
Dec 06 08:31:17 compute-1 systemd[1]: session-60.scope: Deactivated successfully.
Dec 06 08:31:17 compute-1 systemd[1]: session-60.scope: Consumed 2min 53.762s CPU time, 914.5M memory peak, read 373.8M from disk, written 322.9M to disk.
Dec 06 08:31:17 compute-1 systemd-logind[788]: Session 60 logged out. Waiting for processes to exit.
Dec 06 08:31:17 compute-1 systemd-logind[788]: Removed session 60.
Dec 06 08:31:17 compute-1 ceph-mon[81689]: pgmap v4180: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Dec 06 08:31:17 compute-1 sshd-session[325624]: Accepted publickey for zuul from 192.168.122.10 port 56158 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 08:31:17 compute-1 systemd-logind[788]: New session 61 of user zuul.
Dec 06 08:31:17 compute-1 systemd[1]: Started Session 61 of User zuul.
Dec 06 08:31:17 compute-1 sshd-session[325624]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 08:31:17 compute-1 sudo[325628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-1-2025-12-06-aliytbm.tar.xz
Dec 06 08:31:17 compute-1 sudo[325628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 08:31:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:18 compute-1 sudo[325628]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:18 compute-1 sshd-session[325627]: Received disconnect from 192.168.122.10 port 56158:11: disconnected by user
Dec 06 08:31:18 compute-1 sshd-session[325627]: Disconnected from user zuul 192.168.122.10 port 56158
Dec 06 08:31:18 compute-1 sshd-session[325624]: pam_unix(sshd:session): session closed for user zuul
Dec 06 08:31:18 compute-1 systemd-logind[788]: Session 61 logged out. Waiting for processes to exit.
Dec 06 08:31:18 compute-1 systemd[1]: session-61.scope: Deactivated successfully.
Dec 06 08:31:18 compute-1 systemd-logind[788]: Removed session 61.
Dec 06 08:31:18 compute-1 sshd-session[325653]: Accepted publickey for zuul from 192.168.122.10 port 56168 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 08:31:18 compute-1 systemd-logind[788]: New session 62 of user zuul.
Dec 06 08:31:18 compute-1 systemd[1]: Started Session 62 of User zuul.
Dec 06 08:31:18 compute-1 sshd-session[325653]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 08:31:18 compute-1 sudo[325657]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Dec 06 08:31:18 compute-1 sudo[325657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 08:31:18 compute-1 sudo[325657]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:18 compute-1 sshd-session[325656]: Received disconnect from 192.168.122.10 port 56168:11: disconnected by user
Dec 06 08:31:18 compute-1 sshd-session[325656]: Disconnected from user zuul 192.168.122.10 port 56168
Dec 06 08:31:18 compute-1 sshd-session[325653]: pam_unix(sshd:session): session closed for user zuul
Dec 06 08:31:18 compute-1 systemd[1]: session-62.scope: Deactivated successfully.
Dec 06 08:31:18 compute-1 systemd-logind[788]: Session 62 logged out. Waiting for processes to exit.
Dec 06 08:31:18 compute-1 systemd-logind[788]: Removed session 62.
Dec 06 08:31:19 compute-1 ceph-mon[81689]: pgmap v4181: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Dec 06 08:31:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:19.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:19 compute-1 nova_compute[226101]: 2025-12-06 08:31:19.294 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:20 compute-1 nova_compute[226101]: 2025-12-06 08:31:20.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:21.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:21 compute-1 ceph-mon[81689]: pgmap v4182: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Dec 06 08:31:21 compute-1 nova_compute[226101]: 2025-12-06 08:31:21.932 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:23.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:23 compute-1 ceph-mon[81689]: pgmap v4183: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 08:31:24 compute-1 nova_compute[226101]: 2025-12-06 08:31:24.295 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:25.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:25.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:26 compute-1 ceph-mon[81689]: pgmap v4184: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 115 op/s
Dec 06 08:31:26 compute-1 nova_compute[226101]: 2025-12-06 08:31:26.934 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:27.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:27.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:27 compute-1 ceph-mon[81689]: pgmap v4185: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Dec 06 08:31:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:28 compute-1 sshd-session[325682]: Invalid user xiqiao from 150.95.85.24 port 55560
Dec 06 08:31:28 compute-1 sshd-session[325682]: Received disconnect from 150.95.85.24 port 55560:11:  [preauth]
Dec 06 08:31:28 compute-1 sshd-session[325682]: Disconnected from invalid user xiqiao 150.95.85.24 port 55560 [preauth]
Dec 06 08:31:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:29.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:29.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:29 compute-1 nova_compute[226101]: 2025-12-06 08:31:29.296 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:29 compute-1 ceph-mon[81689]: pgmap v4186: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 0 B/s wr, 95 op/s
Dec 06 08:31:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:31.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:31.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:31 compute-1 ceph-mon[81689]: pgmap v4187: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 0 B/s wr, 102 op/s
Dec 06 08:31:31 compute-1 nova_compute[226101]: 2025-12-06 08:31:31.938 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:33.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:33.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:33 compute-1 ceph-mon[81689]: pgmap v4188: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Dec 06 08:31:34 compute-1 sshd-session[325684]: Received disconnect from 186.96.151.198 port 50966:11: Bye Bye [preauth]
Dec 06 08:31:34 compute-1 sshd-session[325684]: Disconnected from authenticating user root 186.96.151.198 port 50966 [preauth]
Dec 06 08:31:34 compute-1 nova_compute[226101]: 2025-12-06 08:31:34.300 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:35 compute-1 ceph-mon[81689]: pgmap v4189: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 81 op/s
Dec 06 08:31:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:35.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:35.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:36 compute-1 nova_compute[226101]: 2025-12-06 08:31:36.939 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:37.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:37.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:37 compute-1 ceph-mon[81689]: pgmap v4190: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 81 op/s
Dec 06 08:31:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:39.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:39.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:39 compute-1 nova_compute[226101]: 2025-12-06 08:31:39.302 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:39 compute-1 ceph-mon[81689]: pgmap v4191: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Dec 06 08:31:41 compute-1 podman[325687]: 2025-12-06 08:31:41.095759827 +0000 UTC m=+0.072674850 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:31:41 compute-1 podman[325686]: 2025-12-06 08:31:41.109873236 +0000 UTC m=+0.086330716 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 08:31:41 compute-1 podman[325688]: 2025-12-06 08:31:41.13833734 +0000 UTC m=+0.113800634 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:31:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:41.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:41.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:41 compute-1 ceph-mon[81689]: pgmap v4192: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Dec 06 08:31:41 compute-1 sudo[325748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:41 compute-1 sudo[325748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:41 compute-1 sudo[325748]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:41 compute-1 sudo[325773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:31:41 compute-1 sudo[325773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:41 compute-1 sudo[325773]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:41 compute-1 nova_compute[226101]: 2025-12-06 08:31:41.941 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:41 compute-1 sudo[325798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:41 compute-1 sudo[325798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:41 compute-1 sudo[325798]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:42 compute-1 sudo[325823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 08:31:42 compute-1 sudo[325823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:42 compute-1 podman[325920]: 2025-12-06 08:31:42.599549159 +0000 UTC m=+0.078816495 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 08:31:42 compute-1 podman[325920]: 2025-12-06 08:31:42.732026393 +0000 UTC m=+0.211293689 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:31:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:43.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:43 compute-1 sudo[325823]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:43.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:44 compute-1 ceph-mon[81689]: pgmap v4193: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 06 08:31:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:44 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:44 compute-1 nova_compute[226101]: 2025-12-06 08:31:44.303 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:44 compute-1 sudo[326045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:44 compute-1 sudo[326045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:44 compute-1 sudo[326045]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:44 compute-1 sudo[326070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:31:44 compute-1 sudo[326070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:44 compute-1 sudo[326070]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:44 compute-1 sudo[326095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:44 compute-1 sudo[326095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:44 compute-1 sudo[326095]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:44 compute-1 sudo[326120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:31:44 compute-1 sudo[326120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:45 compute-1 sudo[326120]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:45 compute-1 sudo[326177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:45 compute-1 sudo[326177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:45 compute-1 sudo[326177]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:45 compute-1 sudo[326202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:31:45 compute-1 sudo[326202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:45 compute-1 sudo[326202]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:45 compute-1 ceph-mon[81689]: pgmap v4194: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:45 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:45.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:45.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:45 compute-1 sudo[326227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:45 compute-1 sudo[326227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:45 compute-1 sudo[326227]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:45 compute-1 sudo[326252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 08:31:45 compute-1 sudo[326252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:45 compute-1 podman[326316]: 2025-12-06 08:31:45.674672413 +0000 UTC m=+0.048345048 container create 56a82e0464ca1ef8e497b507418ae3d9e02bddacbb68fcd9fcad0e94d5f6c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:31:45 compute-1 systemd[1]: Started libpod-conmon-56a82e0464ca1ef8e497b507418ae3d9e02bddacbb68fcd9fcad0e94d5f6c014.scope.
Dec 06 08:31:45 compute-1 podman[326316]: 2025-12-06 08:31:45.652659293 +0000 UTC m=+0.026331938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:31:45 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:31:45 compute-1 podman[326316]: 2025-12-06 08:31:45.774374698 +0000 UTC m=+0.148047333 container init 56a82e0464ca1ef8e497b507418ae3d9e02bddacbb68fcd9fcad0e94d5f6c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mirzakhani, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:31:45 compute-1 podman[326316]: 2025-12-06 08:31:45.782140186 +0000 UTC m=+0.155812801 container start 56a82e0464ca1ef8e497b507418ae3d9e02bddacbb68fcd9fcad0e94d5f6c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mirzakhani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:31:45 compute-1 podman[326316]: 2025-12-06 08:31:45.787828548 +0000 UTC m=+0.161501183 container attach 56a82e0464ca1ef8e497b507418ae3d9e02bddacbb68fcd9fcad0e94d5f6c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mirzakhani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:31:45 compute-1 suspicious_mirzakhani[326333]: 167 167
Dec 06 08:31:45 compute-1 systemd[1]: libpod-56a82e0464ca1ef8e497b507418ae3d9e02bddacbb68fcd9fcad0e94d5f6c014.scope: Deactivated successfully.
Dec 06 08:31:45 compute-1 podman[326316]: 2025-12-06 08:31:45.79310646 +0000 UTC m=+0.166779075 container died 56a82e0464ca1ef8e497b507418ae3d9e02bddacbb68fcd9fcad0e94d5f6c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:31:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-64cc9f068490951e30609e3db237eaf278fbe69f452a97033b199d5d945b9324-merged.mount: Deactivated successfully.
Dec 06 08:31:45 compute-1 podman[326316]: 2025-12-06 08:31:45.840203324 +0000 UTC m=+0.213875939 container remove 56a82e0464ca1ef8e497b507418ae3d9e02bddacbb68fcd9fcad0e94d5f6c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mirzakhani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 08:31:45 compute-1 systemd[1]: libpod-conmon-56a82e0464ca1ef8e497b507418ae3d9e02bddacbb68fcd9fcad0e94d5f6c014.scope: Deactivated successfully.
Dec 06 08:31:46 compute-1 podman[326358]: 2025-12-06 08:31:46.02606014 +0000 UTC m=+0.047378512 container create 1ad59d7ad802500b8fa0fe8a5fed095f4775bb6c0bc482105dfabe0c118530b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:31:46 compute-1 systemd[1]: Started libpod-conmon-1ad59d7ad802500b8fa0fe8a5fed095f4775bb6c0bc482105dfabe0c118530b6.scope.
Dec 06 08:31:46 compute-1 systemd[1]: Started libcrun container.
Dec 06 08:31:46 compute-1 podman[326358]: 2025-12-06 08:31:46.006491085 +0000 UTC m=+0.027809467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:31:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21fae61a977e189fc1685b36a67343f192f9d5916c78adda53213c4b0ab6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21fae61a977e189fc1685b36a67343f192f9d5916c78adda53213c4b0ab6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21fae61a977e189fc1685b36a67343f192f9d5916c78adda53213c4b0ab6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21fae61a977e189fc1685b36a67343f192f9d5916c78adda53213c4b0ab6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:46 compute-1 podman[326358]: 2025-12-06 08:31:46.12258747 +0000 UTC m=+0.143905852 container init 1ad59d7ad802500b8fa0fe8a5fed095f4775bb6c0bc482105dfabe0c118530b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 08:31:46 compute-1 podman[326358]: 2025-12-06 08:31:46.132275868 +0000 UTC m=+0.153594230 container start 1ad59d7ad802500b8fa0fe8a5fed095f4775bb6c0bc482105dfabe0c118530b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Dec 06 08:31:46 compute-1 podman[326358]: 2025-12-06 08:31:46.135700181 +0000 UTC m=+0.157018573 container attach 1ad59d7ad802500b8fa0fe8a5fed095f4775bb6c0bc482105dfabe0c118530b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 08:31:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:46 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:46 compute-1 nova_compute[226101]: 2025-12-06 08:31:46.943 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:47.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:47.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]: [
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:     {
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:         "available": false,
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:         "ceph_device": false,
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:         "lsm_data": {},
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:         "lvs": [],
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:         "path": "/dev/sr0",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:         "rejected_reasons": [
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "Insufficient space (<5GB)",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "Has a FileSystem"
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:         ],
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:         "sys_api": {
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "actuators": null,
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "device_nodes": "sr0",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "devname": "sr0",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "human_readable_size": "482.00 KB",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "id_bus": "ata",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "model": "QEMU DVD-ROM",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "nr_requests": "2",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "parent": "/dev/sr0",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "partitions": {},
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "path": "/dev/sr0",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "removable": "1",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "rev": "2.5+",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "ro": "0",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "rotational": "1",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "sas_address": "",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "sas_device_handle": "",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "scheduler_mode": "mq-deadline",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "sectors": 0,
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "sectorsize": "2048",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "size": 493568.0,
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "support_discard": "2048",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "type": "disk",
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:             "vendor": "QEMU"
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:         }
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]:     }
Dec 06 08:31:47 compute-1 frosty_ardinghelli[326374]: ]
Dec 06 08:31:47 compute-1 systemd[1]: libpod-1ad59d7ad802500b8fa0fe8a5fed095f4775bb6c0bc482105dfabe0c118530b6.scope: Deactivated successfully.
Dec 06 08:31:47 compute-1 podman[326358]: 2025-12-06 08:31:47.429750415 +0000 UTC m=+1.451068817 container died 1ad59d7ad802500b8fa0fe8a5fed095f4775bb6c0bc482105dfabe0c118530b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:31:47 compute-1 systemd[1]: libpod-1ad59d7ad802500b8fa0fe8a5fed095f4775bb6c0bc482105dfabe0c118530b6.scope: Consumed 1.322s CPU time.
Dec 06 08:31:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-9fb21fae61a977e189fc1685b36a67343f192f9d5916c78adda53213c4b0ab6d-merged.mount: Deactivated successfully.
Dec 06 08:31:47 compute-1 podman[326358]: 2025-12-06 08:31:47.489596711 +0000 UTC m=+1.510915073 container remove 1ad59d7ad802500b8fa0fe8a5fed095f4775bb6c0bc482105dfabe0c118530b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:31:47 compute-1 systemd[1]: libpod-conmon-1ad59d7ad802500b8fa0fe8a5fed095f4775bb6c0bc482105dfabe0c118530b6.scope: Deactivated successfully.
Dec 06 08:31:47 compute-1 sudo[326252]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:47 compute-1 ceph-mon[81689]: pgmap v4195: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:31:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:31:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:31:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:31:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:31:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:49.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:49.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:49 compute-1 nova_compute[226101]: 2025-12-06 08:31:49.305 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:49 compute-1 ceph-mon[81689]: pgmap v4196: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:50 compute-1 nova_compute[226101]: 2025-12-06 08:31:50.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:50 compute-1 nova_compute[226101]: 2025-12-06 08:31:50.637 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:31:50 compute-1 nova_compute[226101]: 2025-12-06 08:31:50.638 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:31:50 compute-1 nova_compute[226101]: 2025-12-06 08:31:50.638 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:31:50 compute-1 nova_compute[226101]: 2025-12-06 08:31:50.639 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:31:50 compute-1 nova_compute[226101]: 2025-12-06 08:31:50.639 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:31:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:31:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3564539784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.111 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:31:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:51.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:51.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.282 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.283 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4201MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.284 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.284 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.363 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.363 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.385 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.420 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.421 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.438 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.469 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.493 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:31:51 compute-1 ceph-mon[81689]: pgmap v4197: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:51 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3564539784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:51 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:31:51 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2614627506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.936 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.943 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.946 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.964 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.966 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:31:51 compute-1 nova_compute[226101]: 2025-12-06 08:31:51.966 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:31:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2614627506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:52 compute-1 ceph-mon[81689]: pgmap v4198: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:53.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:53.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:53 compute-1 sudo[327748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:53 compute-1 sudo[327748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:53 compute-1 sudo[327748]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:53 compute-1 sudo[327773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:31:53 compute-1 sudo[327773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:53 compute-1 sudo[327773]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:54 compute-1 nova_compute[226101]: 2025-12-06 08:31:54.307 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:54 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:55.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:55.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:55 compute-1 ceph-mon[81689]: pgmap v4199: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:56 compute-1 nova_compute[226101]: 2025-12-06 08:31:56.947 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:57.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:57.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:57 compute-1 ceph-mon[81689]: pgmap v4200: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:58 compute-1 ceph-mon[81689]: pgmap v4201: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:59.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:31:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:59.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:59 compute-1 nova_compute[226101]: 2025-12-06 08:31:59.311 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:00 compute-1 nova_compute[226101]: 2025-12-06 08:32:00.966 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:00 compute-1 nova_compute[226101]: 2025-12-06 08:32:00.967 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:32:00 compute-1 nova_compute[226101]: 2025-12-06 08:32:00.967 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:32:00 compute-1 nova_compute[226101]: 2025-12-06 08:32:00.996 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:32:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:01.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:01.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:01 compute-1 ceph-mon[81689]: pgmap v4202: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:01 compute-1 sshd-session[327798]: Received disconnect from 45.120.216.232 port 58268:11: Bye Bye [preauth]
Dec 06 08:32:01 compute-1 sshd-session[327798]: Disconnected from authenticating user root 45.120.216.232 port 58268 [preauth]
Dec 06 08:32:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:32:01.710 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:32:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:32:01.711 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:32:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:32:01.711 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:32:01 compute-1 nova_compute[226101]: 2025-12-06 08:32:01.950 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:03.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:03.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:03 compute-1 ceph-mon[81689]: pgmap v4203: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:04 compute-1 nova_compute[226101]: 2025-12-06 08:32:04.312 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:04 compute-1 ceph-mon[81689]: pgmap v4204: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:05.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:05.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:05 compute-1 sshd-session[327800]: Received disconnect from 101.100.194.199 port 56622:11: Bye Bye [preauth]
Dec 06 08:32:05 compute-1 sshd-session[327800]: Disconnected from authenticating user root 101.100.194.199 port 56622 [preauth]
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.192475) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926192599, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 2613, "num_deletes": 257, "total_data_size": 5997628, "memory_usage": 6088096, "flush_reason": "Manual Compaction"}
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926226955, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 3884538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90802, "largest_seqno": 93410, "table_properties": {"data_size": 3873046, "index_size": 7153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 28458, "raw_average_key_size": 22, "raw_value_size": 3848520, "raw_average_value_size": 2976, "num_data_blocks": 308, "num_entries": 1293, "num_filter_entries": 1293, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009734, "oldest_key_time": 1765009734, "file_creation_time": 1765009926, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 34520 microseconds, and 17079 cpu microseconds.
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.227025) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 3884538 bytes OK
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.227049) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.229146) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.229162) EVENT_LOG_v1 {"time_micros": 1765009926229157, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.229179) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 5985176, prev total WAL file size 5985457, number of live WAL files 2.
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.230697) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353334' seq:72057594037927935, type:22 .. '6C6F676D0033373837' seq:0, type:0; will stop at (end)
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(3793KB)], [186(12MB)]
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926230764, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 17049218, "oldest_snapshot_seqno": -1}
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 12440 keys, 16896110 bytes, temperature: kUnknown
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926381639, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 16896110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16815019, "index_size": 48914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 328661, "raw_average_key_size": 26, "raw_value_size": 16596886, "raw_average_value_size": 1334, "num_data_blocks": 1868, "num_entries": 12440, "num_filter_entries": 12440, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765009926, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.381889) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 16896110 bytes
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.383019) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.0 rd, 111.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.6 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 12971, records dropped: 531 output_compression: NoCompression
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.383036) EVENT_LOG_v1 {"time_micros": 1765009926383028, "job": 120, "event": "compaction_finished", "compaction_time_micros": 150943, "compaction_time_cpu_micros": 64471, "output_level": 6, "num_output_files": 1, "total_output_size": 16896110, "num_input_records": 12971, "num_output_records": 12440, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926383836, "job": 120, "event": "table_file_deletion", "file_number": 188}
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926386059, "job": 120, "event": "table_file_deletion", "file_number": 186}
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.230603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.386132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.386137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.386139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.386140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:06 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:06.386142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:06 compute-1 nova_compute[226101]: 2025-12-06 08:32:06.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:06 compute-1 nova_compute[226101]: 2025-12-06 08:32:06.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:32:06 compute-1 nova_compute[226101]: 2025-12-06 08:32:06.952 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:07 compute-1 ceph-mon[81689]: pgmap v4205: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:07.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:07.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:07 compute-1 nova_compute[226101]: 2025-12-06 08:32:07.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:07 compute-1 nova_compute[226101]: 2025-12-06 08:32:07.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:09.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:09.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:09 compute-1 nova_compute[226101]: 2025-12-06 08:32:09.314 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:09 compute-1 ceph-mon[81689]: pgmap v4206: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1861015745' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:32:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1861015745' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:32:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1069405486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:10 compute-1 sshd-session[327802]: Received disconnect from 186.87.166.141 port 44350:11: Bye Bye [preauth]
Dec 06 08:32:10 compute-1 sshd-session[327802]: Disconnected from authenticating user root 186.87.166.141 port 44350 [preauth]
Dec 06 08:32:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2625983309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:11.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:11.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:11 compute-1 ceph-mon[81689]: pgmap v4207: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:11 compute-1 nova_compute[226101]: 2025-12-06 08:32:11.955 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:12 compute-1 podman[327804]: 2025-12-06 08:32:12.090283347 +0000 UTC m=+0.073783030 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 08:32:12 compute-1 podman[327805]: 2025-12-06 08:32:12.108484035 +0000 UTC m=+0.085180056 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 08:32:12 compute-1 podman[327806]: 2025-12-06 08:32:12.127864445 +0000 UTC m=+0.106459266 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 08:32:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:13 compute-1 ceph-mon[81689]: pgmap v4208: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2621224397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:13.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:13.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3688201196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:14 compute-1 nova_compute[226101]: 2025-12-06 08:32:14.315 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:15.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:15 compute-1 ceph-mon[81689]: pgmap v4209: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:15.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:15 compute-1 nova_compute[226101]: 2025-12-06 08:32:15.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:16 compute-1 sshd-session[327869]: Received disconnect from 124.18.141.70 port 38432:11: Bye Bye [preauth]
Dec 06 08:32:16 compute-1 sshd-session[327869]: Disconnected from authenticating user root 124.18.141.70 port 38432 [preauth]
Dec 06 08:32:16 compute-1 nova_compute[226101]: 2025-12-06 08:32:16.957 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:17.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:17 compute-1 ceph-mon[81689]: pgmap v4210: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:17.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:17 compute-1 nova_compute[226101]: 2025-12-06 08:32:17.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:18 compute-1 nova_compute[226101]: 2025-12-06 08:32:18.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.600883) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938600932, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 384, "num_deletes": 251, "total_data_size": 367228, "memory_usage": 375256, "flush_reason": "Manual Compaction"}
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938604331, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 241741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93416, "largest_seqno": 93794, "table_properties": {"data_size": 239516, "index_size": 388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5636, "raw_average_key_size": 18, "raw_value_size": 235083, "raw_average_value_size": 778, "num_data_blocks": 17, "num_entries": 302, "num_filter_entries": 302, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009926, "oldest_key_time": 1765009926, "file_creation_time": 1765009938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 3466 microseconds, and 1192 cpu microseconds.
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.604359) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 241741 bytes OK
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.604371) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.606005) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.606021) EVENT_LOG_v1 {"time_micros": 1765009938606016, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.606038) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 364709, prev total WAL file size 364709, number of live WAL files 2.
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.606465) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(236KB)], [189(16MB)]
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938606502, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 17137851, "oldest_snapshot_seqno": -1}
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 12232 keys, 15213867 bytes, temperature: kUnknown
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938719626, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 15213867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15135602, "index_size": 46600, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30597, "raw_key_size": 325043, "raw_average_key_size": 26, "raw_value_size": 14922501, "raw_average_value_size": 1219, "num_data_blocks": 1764, "num_entries": 12232, "num_filter_entries": 12232, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765009938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.720052) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 15213867 bytes
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.721278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.3 rd, 134.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 16.1 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(133.8) write-amplify(62.9) OK, records in: 12742, records dropped: 510 output_compression: NoCompression
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.721303) EVENT_LOG_v1 {"time_micros": 1765009938721290, "job": 122, "event": "compaction_finished", "compaction_time_micros": 113262, "compaction_time_cpu_micros": 40352, "output_level": 6, "num_output_files": 1, "total_output_size": 15213867, "num_input_records": 12742, "num_output_records": 12232, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938721569, "job": 122, "event": "table_file_deletion", "file_number": 191}
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938726484, "job": 122, "event": "table_file_deletion", "file_number": 189}
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.606388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.726634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.726641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.726643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.726644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:18 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:32:18.726645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:19.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:19 compute-1 nova_compute[226101]: 2025-12-06 08:32:19.319 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:19.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:19 compute-1 ceph-mon[81689]: pgmap v4211: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:20 compute-1 nova_compute[226101]: 2025-12-06 08:32:20.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.003000079s ======
Dec 06 08:32:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:21.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Dec 06 08:32:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:21.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:21 compute-1 ceph-mon[81689]: pgmap v4212: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:21 compute-1 nova_compute[226101]: 2025-12-06 08:32:21.987 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:23.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:23.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:23 compute-1 ceph-mon[81689]: pgmap v4213: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:24 compute-1 nova_compute[226101]: 2025-12-06 08:32:24.355 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:25.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:25.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:25 compute-1 sshd-session[327875]: Received disconnect from 106.51.92.114 port 54256:11: Bye Bye [preauth]
Dec 06 08:32:25 compute-1 sshd-session[327875]: Disconnected from authenticating user root 106.51.92.114 port 54256 [preauth]
Dec 06 08:32:25 compute-1 ceph-mon[81689]: pgmap v4214: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:25 compute-1 sshd-session[327873]: Received disconnect from 154.209.4.183 port 32822:11: Bye Bye [preauth]
Dec 06 08:32:25 compute-1 sshd-session[327873]: Disconnected from authenticating user root 154.209.4.183 port 32822 [preauth]
Dec 06 08:32:26 compute-1 nova_compute[226101]: 2025-12-06 08:32:26.989 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:27.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:27.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:27 compute-1 ceph-mon[81689]: pgmap v4215: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:29.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:29.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:29 compute-1 nova_compute[226101]: 2025-12-06 08:32:29.356 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:29 compute-1 ceph-mon[81689]: pgmap v4216: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:30 compute-1 sshd-session[327877]: Received disconnect from 136.112.8.45 port 58940:11: Bye Bye [preauth]
Dec 06 08:32:30 compute-1 sshd-session[327877]: Disconnected from authenticating user root 136.112.8.45 port 58940 [preauth]
Dec 06 08:32:31 compute-1 ceph-mon[81689]: pgmap v4217: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:31.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:31 compute-1 nova_compute[226101]: 2025-12-06 08:32:31.993 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:32 compute-1 sshd-session[327871]: ssh_dispatch_run_fatal: Connection from 14.103.118.136 port 57278: Connection timed out [preauth]
Dec 06 08:32:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:33.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:33.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:33 compute-1 ceph-mon[81689]: pgmap v4218: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:34 compute-1 nova_compute[226101]: 2025-12-06 08:32:34.357 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:34 compute-1 sshd-session[327879]: Received disconnect from 91.144.158.231 port 47797:11: Bye Bye [preauth]
Dec 06 08:32:34 compute-1 sshd-session[327879]: Disconnected from authenticating user root 91.144.158.231 port 47797 [preauth]
Dec 06 08:32:35 compute-1 ceph-mon[81689]: pgmap v4219: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:35.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:35.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:36 compute-1 nova_compute[226101]: 2025-12-06 08:32:36.995 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:37.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:37.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:37 compute-1 ceph-mon[81689]: pgmap v4220: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:39.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:39.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:39 compute-1 nova_compute[226101]: 2025-12-06 08:32:39.359 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:39 compute-1 ceph-mon[81689]: pgmap v4221: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:41.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:41.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:41 compute-1 sshd-session[327881]: Received disconnect from 186.96.151.198 port 50990:11: Bye Bye [preauth]
Dec 06 08:32:41 compute-1 sshd-session[327881]: Disconnected from authenticating user root 186.96.151.198 port 50990 [preauth]
Dec 06 08:32:41 compute-1 ceph-mon[81689]: pgmap v4222: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:41 compute-1 nova_compute[226101]: 2025-12-06 08:32:41.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:42 compute-1 ceph-mon[81689]: pgmap v4223: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:43 compute-1 podman[327884]: 2025-12-06 08:32:43.068611701 +0000 UTC m=+0.051290808 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 08:32:43 compute-1 podman[327883]: 2025-12-06 08:32:43.076569144 +0000 UTC m=+0.061875840 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 06 08:32:43 compute-1 podman[327885]: 2025-12-06 08:32:43.108256234 +0000 UTC m=+0.088132866 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 08:32:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:43.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:43.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:44 compute-1 nova_compute[226101]: 2025-12-06 08:32:44.360 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:45.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:45.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:45 compute-1 ceph-mon[81689]: pgmap v4224: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:47 compute-1 nova_compute[226101]: 2025-12-06 08:32:46.999 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:47 compute-1 ceph-mon[81689]: pgmap v4225: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:47.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:47.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:49.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:49 compute-1 ceph-mon[81689]: pgmap v4226: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:49 compute-1 nova_compute[226101]: 2025-12-06 08:32:49.362 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:49.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:51.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:51.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:51 compute-1 nova_compute[226101]: 2025-12-06 08:32:51.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:51 compute-1 nova_compute[226101]: 2025-12-06 08:32:51.634 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:32:51 compute-1 nova_compute[226101]: 2025-12-06 08:32:51.635 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:32:51 compute-1 nova_compute[226101]: 2025-12-06 08:32:51.635 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:32:51 compute-1 nova_compute[226101]: 2025-12-06 08:32:51.635 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:32:51 compute-1 nova_compute[226101]: 2025-12-06 08:32:51.636 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:32:51 compute-1 ceph-mon[81689]: pgmap v4227: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:52 compute-1 nova_compute[226101]: 2025-12-06 08:32:52.001 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:32:52 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2631169994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:52 compute-1 nova_compute[226101]: 2025-12-06 08:32:52.279 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:32:52 compute-1 nova_compute[226101]: 2025-12-06 08:32:52.466 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:32:52 compute-1 nova_compute[226101]: 2025-12-06 08:32:52.468 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4221MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:32:52 compute-1 nova_compute[226101]: 2025-12-06 08:32:52.469 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:32:52 compute-1 nova_compute[226101]: 2025-12-06 08:32:52.469 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:32:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:53.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:53.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:53 compute-1 sudo[327965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:53 compute-1 sudo[327965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:53 compute-1 sudo[327965]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:53 compute-1 sudo[327990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:32:53 compute-1 sudo[327990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:53 compute-1 sudo[327990]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:53 compute-1 sudo[328015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:53 compute-1 sudo[328015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:53 compute-1 sudo[328015]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:53 compute-1 sudo[328040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:32:53 compute-1 sudo[328040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2631169994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:54 compute-1 ceph-mon[81689]: pgmap v4228: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:54 compute-1 nova_compute[226101]: 2025-12-06 08:32:54.149 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:32:54 compute-1 nova_compute[226101]: 2025-12-06 08:32:54.150 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:32:54 compute-1 sudo[328040]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:54 compute-1 nova_compute[226101]: 2025-12-06 08:32:54.364 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:54 compute-1 nova_compute[226101]: 2025-12-06 08:32:54.448 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:32:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:32:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1018717692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:54 compute-1 nova_compute[226101]: 2025-12-06 08:32:54.932 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:32:54 compute-1 nova_compute[226101]: 2025-12-06 08:32:54.942 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:32:55 compute-1 ceph-mon[81689]: pgmap v4229: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:32:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:32:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:32:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:32:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:32:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:32:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1018717692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:55 compute-1 nova_compute[226101]: 2025-12-06 08:32:55.106 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:32:55 compute-1 nova_compute[226101]: 2025-12-06 08:32:55.109 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:32:55 compute-1 nova_compute[226101]: 2025-12-06 08:32:55.110 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:32:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:55.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:55.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:55 compute-1 sshd-session[327941]: Connection closed by 165.154.55.146 port 35738 [preauth]
Dec 06 08:32:56 compute-1 nova_compute[226101]: 2025-12-06 08:32:56.104 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:57 compute-1 nova_compute[226101]: 2025-12-06 08:32:57.004 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:57.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:57.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:58 compute-1 ceph-mon[81689]: pgmap v4230: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:59 compute-1 ceph-mon[81689]: pgmap v4231: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:59.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:59 compute-1 nova_compute[226101]: 2025-12-06 08:32:59.377 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:32:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:59.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:59 compute-1 sshd-session[328119]: Received disconnect from 14.225.3.79 port 47814:11: Bye Bye [preauth]
Dec 06 08:32:59 compute-1 sshd-session[328119]: Disconnected from authenticating user root 14.225.3.79 port 47814 [preauth]
Dec 06 08:33:00 compute-1 sshd-session[328121]: Connection reset by authenticating user root 45.135.232.92 port 58874 [preauth]
Dec 06 08:33:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:01.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:01.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:01 compute-1 nova_compute[226101]: 2025-12-06 08:33:01.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:01 compute-1 nova_compute[226101]: 2025-12-06 08:33:01.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:33:01 compute-1 nova_compute[226101]: 2025-12-06 08:33:01.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:33:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:33:01.711 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:33:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:33:01.712 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:33:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:33:01.712 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:33:01 compute-1 nova_compute[226101]: 2025-12-06 08:33:01.952 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:33:02 compute-1 nova_compute[226101]: 2025-12-06 08:33:02.006 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:02 compute-1 sshd-session[328123]: Invalid user user1 from 45.135.232.92 port 58884
Dec 06 08:33:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:03 compute-1 sshd-session[328123]: Connection reset by invalid user user1 45.135.232.92 port 58884 [preauth]
Dec 06 08:33:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:03.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:03 compute-1 ceph-mon[81689]: pgmap v4232: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:03.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:04 compute-1 nova_compute[226101]: 2025-12-06 08:33:04.378 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:04 compute-1 sudo[328127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:33:04 compute-1 sudo[328127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:33:04 compute-1 sudo[328127]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:05 compute-1 ceph-mon[81689]: pgmap v4233: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:05 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:33:05 compute-1 sudo[328152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:33:05 compute-1 sudo[328152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:33:05 compute-1 sudo[328152]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:05.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:05.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:05 compute-1 sshd-session[328125]: Connection reset by authenticating user root 45.135.232.92 port 58896 [preauth]
Dec 06 08:33:06 compute-1 ceph-mon[81689]: pgmap v4234: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:33:07 compute-1 nova_compute[226101]: 2025-12-06 08:33:07.009 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:07.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:07.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:07 compute-1 nova_compute[226101]: 2025-12-06 08:33:07.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:08 compute-1 ceph-mon[81689]: pgmap v4235: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:08 compute-1 nova_compute[226101]: 2025-12-06 08:33:08.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:08 compute-1 nova_compute[226101]: 2025-12-06 08:33:08.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:33:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:09.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:09 compute-1 nova_compute[226101]: 2025-12-06 08:33:09.380 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:09.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:09 compute-1 nova_compute[226101]: 2025-12-06 08:33:09.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:10 compute-1 sshd-session[328177]: Invalid user admin from 45.135.232.92 port 49024
Dec 06 08:33:11 compute-1 sshd-session[328177]: Connection reset by invalid user admin 45.135.232.92 port 49024 [preauth]
Dec 06 08:33:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:11.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:11.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:12 compute-1 nova_compute[226101]: 2025-12-06 08:33:12.011 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:12 compute-1 ceph-mon[81689]: pgmap v4236: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:12 compute-1 sshd-session[328179]: Invalid user admin from 45.135.232.92 port 49040
Dec 06 08:33:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:13 compute-1 sshd-session[328179]: Connection reset by invalid user admin 45.135.232.92 port 49040 [preauth]
Dec 06 08:33:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:13.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:13.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:13 compute-1 ceph-mon[81689]: pgmap v4237: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:13 compute-1 ceph-mon[81689]: pgmap v4238: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4113851892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:33:13 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4113851892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:33:14 compute-1 podman[328181]: 2025-12-06 08:33:14.129340356 +0000 UTC m=+0.105310856 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:33:14 compute-1 podman[328182]: 2025-12-06 08:33:14.165686201 +0000 UTC m=+0.127914472 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:33:14 compute-1 podman[328183]: 2025-12-06 08:33:14.221298363 +0000 UTC m=+0.190969134 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 08:33:14 compute-1 nova_compute[226101]: 2025-12-06 08:33:14.382 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3772122857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1660156342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:15.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:15.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:15 compute-1 sshd-session[328237]: Received disconnect from 101.100.194.199 port 34306:11: Bye Bye [preauth]
Dec 06 08:33:15 compute-1 sshd-session[328237]: Disconnected from authenticating user root 101.100.194.199 port 34306 [preauth]
Dec 06 08:33:16 compute-1 ceph-mon[81689]: pgmap v4239: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:17 compute-1 nova_compute[226101]: 2025-12-06 08:33:17.013 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:17.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:17.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:17 compute-1 nova_compute[226101]: 2025-12-06 08:33:17.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:17 compute-1 nova_compute[226101]: 2025-12-06 08:33:17.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:17 compute-1 ceph-mon[81689]: pgmap v4240: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4202559805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2460955246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:18 compute-1 nova_compute[226101]: 2025-12-06 08:33:18.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:19.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:19 compute-1 nova_compute[226101]: 2025-12-06 08:33:19.384 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:19.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:19 compute-1 ceph-mon[81689]: pgmap v4241: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:20 compute-1 nova_compute[226101]: 2025-12-06 08:33:20.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:21.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:21 compute-1 ceph-mon[81689]: pgmap v4242: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:22 compute-1 nova_compute[226101]: 2025-12-06 08:33:22.016 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:23.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:23.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:23 compute-1 ceph-mon[81689]: pgmap v4243: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:24 compute-1 nova_compute[226101]: 2025-12-06 08:33:24.387 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:25.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:25.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:25 compute-1 ceph-mon[81689]: pgmap v4244: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.495401) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006495483, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 820, "num_deletes": 250, "total_data_size": 1612752, "memory_usage": 1639728, "flush_reason": "Manual Compaction"}
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006501614, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 681489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93799, "largest_seqno": 94614, "table_properties": {"data_size": 678239, "index_size": 1093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8768, "raw_average_key_size": 20, "raw_value_size": 671349, "raw_average_value_size": 1590, "num_data_blocks": 49, "num_entries": 422, "num_filter_entries": 422, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009939, "oldest_key_time": 1765009939, "file_creation_time": 1765010006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 6217 microseconds, and 2460 cpu microseconds.
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.501648) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 681489 bytes OK
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.501663) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.503210) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.503221) EVENT_LOG_v1 {"time_micros": 1765010006503218, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.503235) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 1608538, prev total WAL file size 1608538, number of live WAL files 2.
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.503898) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323739' seq:72057594037927935, type:22 .. '6D6772737461740033353330' seq:0, type:0; will stop at (end)
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(665KB)], [192(14MB)]
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006504005, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 15895356, "oldest_snapshot_seqno": -1}
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 12167 keys, 12431108 bytes, temperature: kUnknown
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006595498, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 12431108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12357266, "index_size": 42305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 323907, "raw_average_key_size": 26, "raw_value_size": 12149273, "raw_average_value_size": 998, "num_data_blocks": 1586, "num_entries": 12167, "num_filter_entries": 12167, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765010006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.595838) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 12431108 bytes
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.601019) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.6 rd, 135.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(41.6) write-amplify(18.2) OK, records in: 12654, records dropped: 487 output_compression: NoCompression
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.601051) EVENT_LOG_v1 {"time_micros": 1765010006601037, "job": 124, "event": "compaction_finished", "compaction_time_micros": 91584, "compaction_time_cpu_micros": 38859, "output_level": 6, "num_output_files": 1, "total_output_size": 12431108, "num_input_records": 12654, "num_output_records": 12167, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
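[editor's note] The throughput and amplification figures in the JOB 124 summary can be reproduced from the numbers already logged: input_data_size and compaction_time_micros from the compaction_started/compaction_finished events, the 665 KB L0 input from the start summary. A worked sketch (the 665 KB figure is rounded in the log, so the last digit of the amplification can drift):

```python
# Sketch: reproduce RocksDB's logged rd/wr MB/s and amplification for
# JOB 124 from the EVENT_LOG_v1 fields above. bytes/microsecond == MB/s.
input_bytes = 15_895_356        # input_data_size (L0 + L6 inputs)
output_bytes = 12_431_108       # total_output_size
elapsed_us = 91_584             # compaction_time_micros
l0_input_bytes = 665 * 1024     # file 194 as logged (rounded)

read_mb_s = input_bytes / elapsed_us
write_mb_s = output_bytes / elapsed_us
write_amp = output_bytes / l0_input_bytes
rw_amp = (input_bytes + output_bytes) / l0_input_bytes

print(f"rd {read_mb_s:.1f} MB/s, wr {write_mb_s:.1f} MB/s")  # 173.6 / 135.7
print(f"write-amplify {write_amp:.1f}")                      # ~18.2 as logged
print(f"read-write-amplify {rw_amp:.1f}")                    # 41.6
```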
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006601490, "job": 124, "event": "table_file_deletion", "file_number": 194}
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006607048, "job": 124, "event": "table_file_deletion", "file_number": 192}
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.503765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.607218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.607225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.607227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.607229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:26 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:33:26.607231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:27 compute-1 nova_compute[226101]: 2025-12-06 08:33:27.019 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:27 compute-1 sshd-session[328243]: Received disconnect from 45.120.216.232 port 58606:11: Bye Bye [preauth]
Dec 06 08:33:27 compute-1 sshd-session[328243]: Disconnected from authenticating user root 45.120.216.232 port 58606 [preauth]
Dec 06 08:33:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:27.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:27.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
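[editor's note] The radosgw "beast:" lines above recur throughout this log (they look like haproxy-style health probes, HEAD / every two seconds from .100 and .102). A sketch that splits the access-log format into fields; the pattern is fitted to these lines only, not to the full beast format:

```python
import re

# Sketch: parse the beast access-log lines emitted by radosgw above.
# Assumed layout, from these lines only: pointer, client, user,
# [timestamp], "request", status, bytes, then latency at the end.
BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous '
        '[06/Dec/2025:08:33:27.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group("client"), m.group("status"), m.group("latency"))
```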
Dec 06 08:33:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:28 compute-1 ceph-mon[81689]: pgmap v4245: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:29 compute-1 ceph-mon[81689]: pgmap v4246: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
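[editor's note] The pgmap summaries above tick up one version roughly every two seconds for the rest of this section. A sketch that pulls the fields out of one of them; the regex is fitted to the summary shape seen here:

```python
import re

# Sketch: split the recurring ceph-mon pgmap summary into fields.
PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

m = PGMAP.search("pgmap v4246: 305 pgs: 305 active+clean; "
                 "120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail")
print(m.group("ver"), m.group("states"), m.group("avail"))
```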
Dec 06 08:33:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:29.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:29 compute-1 nova_compute[226101]: 2025-12-06 08:33:29.388 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:29.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:31 compute-1 ceph-mon[81689]: pgmap v4247: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:31.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:31.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:32 compute-1 nova_compute[226101]: 2025-12-06 08:33:32.021 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:33 compute-1 ceph-mon[81689]: pgmap v4248: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:33.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:33.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:34 compute-1 nova_compute[226101]: 2025-12-06 08:33:34.390 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:35 compute-1 ceph-mon[81689]: pgmap v4249: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:35.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:35.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:37 compute-1 nova_compute[226101]: 2025-12-06 08:33:37.023 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:37.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:37.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:37 compute-1 ceph-mon[81689]: pgmap v4250: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:39 compute-1 nova_compute[226101]: 2025-12-06 08:33:39.392 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:39.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:39.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:39 compute-1 ceph-mon[81689]: pgmap v4251: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:41 compute-1 ceph-mon[81689]: pgmap v4252: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:41.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:41.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:42 compute-1 nova_compute[226101]: 2025-12-06 08:33:42.025 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:42 compute-1 sshd-session[328246]: Received disconnect from 124.18.141.70 port 51862:11: Bye Bye [preauth]
Dec 06 08:33:42 compute-1 sshd-session[328246]: Disconnected from authenticating user root 124.18.141.70 port 51862 [preauth]
Dec 06 08:33:43 compute-1 ceph-mon[81689]: pgmap v4253: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:43 compute-1 sshd-session[328248]: Received disconnect from 106.51.92.114 port 40809:11: Bye Bye [preauth]
Dec 06 08:33:43 compute-1 sshd-session[328248]: Disconnected from authenticating user root 106.51.92.114 port 40809 [preauth]
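[editor's note] The sshd-session pairs scattered through this section are failed root logins from rotating source addresses, i.e. routine brute-force noise. A sketch that tallies them per source IP, e.g. to feed a blocklist; `lines` stands in for reading the journal, and the addresses are copied from the entries above:

```python
import re
from collections import Counter

# Sketch: count "Disconnected from authenticating user ..." preauth
# lines by source address.
DISC = re.compile(r"Disconnected from authenticating user (\S+) (\S+) port (\d+)")

lines = [
    "Disconnected from authenticating user root 45.120.216.232 port 58606 [preauth]",
    "Disconnected from authenticating user root 124.18.141.70 port 51862 [preauth]",
    "Disconnected from authenticating user root 106.51.92.114 port 40809 [preauth]",
]
hits = Counter(m.group(2) for l in lines if (m := DISC.search(l)))
print(hits.most_common())
```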
Dec 06 08:33:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:43.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:43.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:44 compute-1 nova_compute[226101]: 2025-12-06 08:33:44.394 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:45 compute-1 podman[328251]: 2025-12-06 08:33:45.07072081 +0000 UTC m=+0.052442338 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:33:45 compute-1 podman[328250]: 2025-12-06 08:33:45.105838162 +0000 UTC m=+0.089995674 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:33:45 compute-1 podman[328252]: 2025-12-06 08:33:45.13630621 +0000 UTC m=+0.114851502 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
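[editor's note] The three podman events above are periodic healthcheck results (health_status=healthy, health_failing_streak=0) for the ovn_metadata_agent, multipathd, and ovn_controller containers. The current status can also be read back with podman inspect; a sketch, assuming the Docker-compatible inspect layout that recent podman emits (older releases may nest this differently):

```python
import json
import subprocess

# Sketch: read the health status of one of the containers logged above.
out = subprocess.run(
    ["podman", "inspect", "ovn_metadata_agent"],
    capture_output=True, text=True, check=True,
).stdout
state = json.loads(out)[0]["State"]
print(state.get("Health", {}).get("Status"))  # e.g. "healthy"
```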
Dec 06 08:33:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:45.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:45.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:45 compute-1 ceph-mon[81689]: pgmap v4254: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:46 compute-1 sshd-session[328313]: Received disconnect from 186.87.166.141 port 45460:11: Bye Bye [preauth]
Dec 06 08:33:46 compute-1 sshd-session[328313]: Disconnected from authenticating user root 186.87.166.141 port 45460 [preauth]
Dec 06 08:33:47 compute-1 nova_compute[226101]: 2025-12-06 08:33:47.027 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:47 compute-1 ceph-mon[81689]: pgmap v4255: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:47.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:47.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:49 compute-1 nova_compute[226101]: 2025-12-06 08:33:49.396 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:49.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:49 compute-1 ceph-mon[81689]: pgmap v4256: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:49.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:51 compute-1 sshd-session[328315]: Received disconnect from 186.96.151.198 port 43076:11: Bye Bye [preauth]
Dec 06 08:33:51 compute-1 sshd-session[328315]: Disconnected from authenticating user root 186.96.151.198 port 43076 [preauth]
Dec 06 08:33:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:51.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:51 compute-1 ceph-mon[81689]: pgmap v4257: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:51.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:51 compute-1 nova_compute[226101]: 2025-12-06 08:33:51.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:51 compute-1 nova_compute[226101]: 2025-12-06 08:33:51.643 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:33:51 compute-1 nova_compute[226101]: 2025-12-06 08:33:51.643 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:33:51 compute-1 nova_compute[226101]: 2025-12-06 08:33:51.643 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:33:51 compute-1 nova_compute[226101]: 2025-12-06 08:33:51.643 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:33:51 compute-1 nova_compute[226101]: 2025-12-06 08:33:51.644 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:33:52 compute-1 nova_compute[226101]: 2025-12-06 08:33:52.029 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:52 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:33:52 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/760969643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:52 compute-1 nova_compute[226101]: 2025-12-06 08:33:52.065 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
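[editor's note] Nova's resource audit shells out to the exact command logged above to size the Ceph-backed disk pool. A sketch reproducing that call and reading the cluster totals; the "stats" keys follow the ceph df JSON schema (adjust for your release), and on this cluster total_avail_bytes corresponds to the ~19 GiB the pgmap lines report:

```python
import json
import subprocess

# Sketch: run the same `ceph df` invocation nova logs above and read
# the cluster-wide totals from its JSON output.
cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
df = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                               check=True).stdout)
stats = df["stats"]
print(stats["total_avail_bytes"] / 2**30, "GiB avail")
```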
Dec 06 08:33:52 compute-1 nova_compute[226101]: 2025-12-06 08:33:52.250 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:33:52 compute-1 nova_compute[226101]: 2025-12-06 08:33:52.251 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4231MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:33:52 compute-1 nova_compute[226101]: 2025-12-06 08:33:52.251 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:33:52 compute-1 nova_compute[226101]: 2025-12-06 08:33:52.252 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:33:52 compute-1 nova_compute[226101]: 2025-12-06 08:33:52.578 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:33:52 compute-1 nova_compute[226101]: 2025-12-06 08:33:52.579 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:33:52 compute-1 nova_compute[226101]: 2025-12-06 08:33:52.618 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:33:52 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/760969643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:33:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3201809378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:53 compute-1 nova_compute[226101]: 2025-12-06 08:33:53.078 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:33:53 compute-1 nova_compute[226101]: 2025-12-06 08:33:53.082 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:33:53 compute-1 nova_compute[226101]: 2025-12-06 08:33:53.106 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
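[editor's note] The inventory record above determines what the placement service will actually schedule: capacity = (total - reserved) * allocation_ratio per resource class. A worked sketch using the values from the log line:

```python
# Sketch: schedulable capacity implied by the inventory nova reported.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, f"{cap:g}")  # VCPU 32, MEMORY_MB 7168, DISK_GB 17.1
```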
Dec 06 08:33:53 compute-1 nova_compute[226101]: 2025-12-06 08:33:53.107 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:33:53 compute-1 nova_compute[226101]: 2025-12-06 08:33:53.108 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:33:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:53.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:53.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:54 compute-1 ceph-mon[81689]: pgmap v4258: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:54 compute-1 nova_compute[226101]: 2025-12-06 08:33:54.397 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3201809378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:55 compute-1 ceph-mon[81689]: pgmap v4259: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:55.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:55.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:55 compute-1 sshd-session[328361]: Received disconnect from 154.209.4.183 port 41510:11: Bye Bye [preauth]
Dec 06 08:33:55 compute-1 sshd-session[328361]: Disconnected from authenticating user root 154.209.4.183 port 41510 [preauth]
Dec 06 08:33:57 compute-1 nova_compute[226101]: 2025-12-06 08:33:57.031 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:57 compute-1 ceph-mon[81689]: pgmap v4260: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:57.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:57.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:57 compute-1 sshd-session[328363]: Received disconnect from 91.144.158.231 port 53623:11: Bye Bye [preauth]
Dec 06 08:33:57 compute-1 sshd-session[328363]: Disconnected from authenticating user root 91.144.158.231 port 53623 [preauth]
Dec 06 08:33:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:59 compute-1 ceph-mon[81689]: pgmap v4261: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:59 compute-1 nova_compute[226101]: 2025-12-06 08:33:59.400 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:59.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:33:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:59.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:01 compute-1 ceph-mon[81689]: pgmap v4262: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:01.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:01.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:34:01.713 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:34:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:34:01.713 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:34:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:34:01.714 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:34:02 compute-1 nova_compute[226101]: 2025-12-06 08:34:02.034 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:03 compute-1 nova_compute[226101]: 2025-12-06 08:34:03.108 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:03 compute-1 nova_compute[226101]: 2025-12-06 08:34:03.108 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:34:03 compute-1 nova_compute[226101]: 2025-12-06 08:34:03.108 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:34:03 compute-1 nova_compute[226101]: 2025-12-06 08:34:03.158 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:34:03 compute-1 ceph-mon[81689]: pgmap v4263: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:03.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:03.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:04 compute-1 nova_compute[226101]: 2025-12-06 08:34:04.401 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:05 compute-1 sudo[328365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:05 compute-1 sudo[328365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:05 compute-1 sudo[328365]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:05 compute-1 sudo[328390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:34:05 compute-1 sudo[328390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:05 compute-1 sudo[328390]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:05.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:05 compute-1 ceph-mon[81689]: pgmap v4264: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:05 compute-1 sudo[328415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:05 compute-1 sudo[328415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:05 compute-1 sudo[328415]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:05.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:05 compute-1 sudo[328440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 08:34:05 compute-1 sudo[328440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:05 compute-1 sudo[328440]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:05 compute-1 sudo[328487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:05 compute-1 sudo[328487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:05 compute-1 sudo[328487]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:05 compute-1 sudo[328512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:34:05 compute-1 sudo[328512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:05 compute-1 sudo[328512]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:06 compute-1 sudo[328537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:06 compute-1 sudo[328537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:06 compute-1 sudo[328537]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:06 compute-1 sudo[328562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:34:06 compute-1 sudo[328562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:06 compute-1 sudo[328562]: pam_unix(sudo:session): session closed for user root
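[editor's note] The sudo bursts above are cephadm's mgr probing this host over ssh as ceph-admin: /bin/true as a connectivity check, `which python3`, then the hashed copy of the cephadm binary running check-host and gather-facts. The same probe can be repeated by hand; path and --timeout value are copied verbatim from the log:

```python
import subprocess

# Sketch: rerun the check-host probe cephadm's orchestrator just ran.
cephadm = ("/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/"
           "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
subprocess.run(["sudo", "/bin/python3", cephadm,
                "--timeout", "895", "check-host"], check=True)
```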
Dec 06 08:34:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:34:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:34:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:34:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:34:06 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:34:06 compute-1 sshd-session[328444]: Received disconnect from 136.112.8.45 port 52506:11: Bye Bye [preauth]
Dec 06 08:34:06 compute-1 sshd-session[328444]: Disconnected from authenticating user root 136.112.8.45 port 52506 [preauth]
Dec 06 08:34:07 compute-1 nova_compute[226101]: 2025-12-06 08:34:07.058 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:07.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:07.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:07 compute-1 ceph-mon[81689]: pgmap v4265: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:08 compute-1 ceph-mon[81689]: pgmap v4266: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:34:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2526053359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:34:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:34:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2526053359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:34:09 compute-1 nova_compute[226101]: 2025-12-06 08:34:09.437 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:09.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:09.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:09 compute-1 nova_compute[226101]: 2025-12-06 08:34:09.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2526053359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:34:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2526053359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:34:10 compute-1 nova_compute[226101]: 2025-12-06 08:34:10.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:10 compute-1 nova_compute[226101]: 2025-12-06 08:34:10.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:10 compute-1 nova_compute[226101]: 2025-12-06 08:34:10.591 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:34:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:11.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:11.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:11 compute-1 ceph-mon[81689]: pgmap v4267: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:12 compute-1 nova_compute[226101]: 2025-12-06 08:34:12.061 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:12 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2055977561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:13.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:13.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:13 compute-1 sudo[328618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:13 compute-1 sudo[328618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:13 compute-1 sudo[328618]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:13 compute-1 sudo[328643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:34:13 compute-1 sudo[328643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:13 compute-1 sudo[328643]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:14 compute-1 ceph-mon[81689]: pgmap v4268: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2453581408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:14 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:14 compute-1 sshd-session[328245]: Connection closed by 107.150.106.178 port 33568 [preauth]
Dec 06 08:34:14 compute-1 nova_compute[226101]: 2025-12-06 08:34:14.440 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:15 compute-1 ceph-mon[81689]: pgmap v4269: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:15.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:15.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:16 compute-1 podman[328669]: 2025-12-06 08:34:16.100163355 +0000 UTC m=+0.072824865 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 08:34:16 compute-1 podman[328670]: 2025-12-06 08:34:16.125000001 +0000 UTC m=+0.089212734 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 08:34:16 compute-1 podman[328671]: 2025-12-06 08:34:16.149227811 +0000 UTC m=+0.110343271 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 06 08:34:17 compute-1 nova_compute[226101]: 2025-12-06 08:34:17.063 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:17.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:17.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:17 compute-1 nova_compute[226101]: 2025-12-06 08:34:17.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:17 compute-1 ceph-mon[81689]: pgmap v4270: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3168349565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/578781008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:18 compute-1 nova_compute[226101]: 2025-12-06 08:34:18.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:19 compute-1 nova_compute[226101]: 2025-12-06 08:34:19.471 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:19.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:19 compute-1 nova_compute[226101]: 2025-12-06 08:34:19.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:19 compute-1 ceph-mon[81689]: pgmap v4271: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:21.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:21.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:21 compute-1 ceph-mon[81689]: pgmap v4272: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:22 compute-1 nova_compute[226101]: 2025-12-06 08:34:22.065 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:22 compute-1 nova_compute[226101]: 2025-12-06 08:34:22.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:23.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:23.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:24 compute-1 nova_compute[226101]: 2025-12-06 08:34:24.521 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:24 compute-1 ceph-mon[81689]: pgmap v4273: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:25.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:25.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:25 compute-1 sshd-session[328737]: Received disconnect from 101.100.194.199 port 54580:11: Bye Bye [preauth]
Dec 06 08:34:25 compute-1 sshd-session[328737]: Disconnected from authenticating user root 101.100.194.199 port 54580 [preauth]
Dec 06 08:34:25 compute-1 ceph-mon[81689]: pgmap v4274: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:27 compute-1 nova_compute[226101]: 2025-12-06 08:34:27.067 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:27 compute-1 ceph-mon[81689]: pgmap v4275: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:27.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:27.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:29 compute-1 ceph-mon[81689]: pgmap v4276: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:29.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:29 compute-1 nova_compute[226101]: 2025-12-06 08:34:29.523 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:29.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:31 compute-1 ceph-mon[81689]: pgmap v4277: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:31.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:31.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:32 compute-1 nova_compute[226101]: 2025-12-06 08:34:32.069 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:33.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:33.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:33 compute-1 ceph-mon[81689]: pgmap v4278: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:34 compute-1 nova_compute[226101]: 2025-12-06 08:34:34.525 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:35 compute-1 ceph-mon[81689]: pgmap v4279: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:35.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:37 compute-1 nova_compute[226101]: 2025-12-06 08:34:37.098 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:37 compute-1 ceph-mon[81689]: pgmap v4280: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:37.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:37.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:39.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:39 compute-1 nova_compute[226101]: 2025-12-06 08:34:39.527 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:39.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:39 compute-1 nova_compute[226101]: 2025-12-06 08:34:39.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:39 compute-1 nova_compute[226101]: 2025-12-06 08:34:39.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:34:39 compute-1 nova_compute[226101]: 2025-12-06 08:34:39.606 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:34:39 compute-1 ceph-mon[81689]: pgmap v4281: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:40 compute-1 nova_compute[226101]: 2025-12-06 08:34:40.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:41.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:41.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:41 compute-1 ceph-mon[81689]: pgmap v4282: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:42 compute-1 nova_compute[226101]: 2025-12-06 08:34:42.100 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:42 compute-1 ceph-mon[81689]: pgmap v4283: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:43.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:43.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:44 compute-1 nova_compute[226101]: 2025-12-06 08:34:44.530 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:45.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:45.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:46 compute-1 ceph-mon[81689]: pgmap v4284: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:47 compute-1 podman[328741]: 2025-12-06 08:34:47.072231102 +0000 UTC m=+0.051872492 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:34:47 compute-1 podman[328740]: 2025-12-06 08:34:47.084288426 +0000 UTC m=+0.068851269 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 06 08:34:47 compute-1 nova_compute[226101]: 2025-12-06 08:34:47.101 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:47 compute-1 podman[328742]: 2025-12-06 08:34:47.104521838 +0000 UTC m=+0.081976100 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 08:34:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:47.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:47.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:48 compute-1 ceph-mon[81689]: pgmap v4285: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:49.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:49 compute-1 nova_compute[226101]: 2025-12-06 08:34:49.532 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:49.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:50 compute-1 ceph-mon[81689]: pgmap v4286: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:51 compute-1 ceph-mon[81689]: pgmap v4287: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:51.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:51.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:51 compute-1 sshd-session[328802]: Received disconnect from 45.120.216.232 port 58944:11: Bye Bye [preauth]
Dec 06 08:34:51 compute-1 sshd-session[328802]: Disconnected from authenticating user root 45.120.216.232 port 58944 [preauth]
Dec 06 08:34:52 compute-1 nova_compute[226101]: 2025-12-06 08:34:52.103 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:52 compute-1 sshd-session[328804]: Received disconnect from 14.225.3.79 port 48458:11: Bye Bye [preauth]
Dec 06 08:34:52 compute-1 sshd-session[328804]: Disconnected from authenticating user root 14.225.3.79 port 48458 [preauth]
Dec 06 08:34:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:53.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:53.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:53 compute-1 nova_compute[226101]: 2025-12-06 08:34:53.668 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:53 compute-1 ceph-mon[81689]: pgmap v4288: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:54 compute-1 nova_compute[226101]: 2025-12-06 08:34:54.594 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:34:54 compute-1 nova_compute[226101]: 2025-12-06 08:34:54.595 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:34:54 compute-1 nova_compute[226101]: 2025-12-06 08:34:54.595 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:34:54 compute-1 nova_compute[226101]: 2025-12-06 08:34:54.595 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:34:54 compute-1 nova_compute[226101]: 2025-12-06 08:34:54.595 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:34:54 compute-1 nova_compute[226101]: 2025-12-06 08:34:54.620 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:54 compute-1 ceph-mon[81689]: pgmap v4289: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:34:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1331996013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:55 compute-1 nova_compute[226101]: 2025-12-06 08:34:55.286 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:34:55 compute-1 nova_compute[226101]: 2025-12-06 08:34:55.443 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:34:55 compute-1 nova_compute[226101]: 2025-12-06 08:34:55.445 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4232MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:34:55 compute-1 nova_compute[226101]: 2025-12-06 08:34:55.445 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:34:55 compute-1 nova_compute[226101]: 2025-12-06 08:34:55.445 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:34:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:55.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:55 compute-1 nova_compute[226101]: 2025-12-06 08:34:55.515 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:34:55 compute-1 nova_compute[226101]: 2025-12-06 08:34:55.516 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:34:55 compute-1 nova_compute[226101]: 2025-12-06 08:34:55.538 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:34:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:55.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:34:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2914100732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:55 compute-1 nova_compute[226101]: 2025-12-06 08:34:55.957 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:34:55 compute-1 nova_compute[226101]: 2025-12-06 08:34:55.962 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:34:56 compute-1 nova_compute[226101]: 2025-12-06 08:34:56.198 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:34:56 compute-1 nova_compute[226101]: 2025-12-06 08:34:56.200 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:34:56 compute-1 nova_compute[226101]: 2025-12-06 08:34:56.200 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:34:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1331996013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2914100732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:57 compute-1 nova_compute[226101]: 2025-12-06 08:34:57.105 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:57.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:57.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:59 compute-1 ceph-mon[81689]: pgmap v4290: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:59.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:34:59 compute-1 nova_compute[226101]: 2025-12-06 08:34:59.670 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:59.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:00 compute-1 nova_compute[226101]: 2025-12-06 08:35:00.115 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:00 compute-1 ceph-mon[81689]: pgmap v4291: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:01.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:01 compute-1 sshd-session[328850]: Received disconnect from 106.51.92.114 port 55595:11: Bye Bye [preauth]
Dec 06 08:35:01 compute-1 sshd-session[328850]: Disconnected from authenticating user root 106.51.92.114 port 55595 [preauth]
Dec 06 08:35:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:01.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:35:01.714 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:35:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:35:01.715 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:35:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:35:01.715 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:35:02 compute-1 nova_compute[226101]: 2025-12-06 08:35:02.108 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:02 compute-1 ceph-mon[81689]: pgmap v4292: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:02 compute-1 sshd-session[328852]: Received disconnect from 186.96.151.198 port 55734:11: Bye Bye [preauth]
Dec 06 08:35:02 compute-1 sshd-session[328852]: Disconnected from authenticating user root 186.96.151.198 port 55734 [preauth]
Dec 06 08:35:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:03.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:03 compute-1 nova_compute[226101]: 2025-12-06 08:35:03.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:03 compute-1 nova_compute[226101]: 2025-12-06 08:35:03.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:35:03 compute-1 nova_compute[226101]: 2025-12-06 08:35:03.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:35:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:03.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:03 compute-1 nova_compute[226101]: 2025-12-06 08:35:03.775 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:35:03 compute-1 ceph-mon[81689]: pgmap v4293: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:04 compute-1 nova_compute[226101]: 2025-12-06 08:35:04.672 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:05.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:05 compute-1 nova_compute[226101]: 2025-12-06 08:35:05.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:05 compute-1 nova_compute[226101]: 2025-12-06 08:35:05.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:35:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:05.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:07 compute-1 nova_compute[226101]: 2025-12-06 08:35:07.113 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:07.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:07 compute-1 ceph-mon[81689]: pgmap v4294: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:07.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:08 compute-1 sshd-session[328854]: Received disconnect from 124.18.141.70 port 44992:11: Bye Bye [preauth]
Dec 06 08:35:08 compute-1 sshd-session[328854]: Disconnected from authenticating user root 124.18.141.70 port 44992 [preauth]
Dec 06 08:35:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:08 compute-1 ceph-mon[81689]: pgmap v4295: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:09.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:09 compute-1 nova_compute[226101]: 2025-12-06 08:35:09.673 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:09.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:10 compute-1 nova_compute[226101]: 2025-12-06 08:35:10.636 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:10 compute-1 nova_compute[226101]: 2025-12-06 08:35:10.636 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:10 compute-1 nova_compute[226101]: 2025-12-06 08:35:10.636 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
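
    [annotation] The "CONF.reclaim_instance_interval <= 0, skipping" message above means deferred (soft) delete is disabled on this node: the _reclaim_queued_deletes periodic task exits immediately. The behaviour is governed by a single nova.conf option; a minimal sketch reading the effective value with oslo.config (config-file path assumed, option registration mirrors nova's default of 0):

        # In nova.conf, a value > 0 (seconds) enables deferred delete:
        #   [DEFAULT]
        #   reclaim_instance_interval = 0
        from oslo_config import cfg

        conf = cfg.ConfigOpts()
        conf.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])
        conf(['--config-file', '/etc/nova/nova.conf'])
        print(conf.reclaim_instance_interval)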
Dec 06 08:35:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:11.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:11 compute-1 nova_compute[226101]: 2025-12-06 08:35:11.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:11 compute-1 ceph-mon[81689]: pgmap v4296: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:11.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:12 compute-1 nova_compute[226101]: 2025-12-06 08:35:12.116 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:13.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:13.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:13 compute-1 sudo[328856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:13 compute-1 sudo[328856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:13 compute-1 sudo[328856]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:14 compute-1 sudo[328881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:35:14 compute-1 sudo[328881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:14 compute-1 sudo[328881]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:14 compute-1 sudo[328906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:14 compute-1 sudo[328906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:14 compute-1 sudo[328906]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:14 compute-1 sudo[328931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:35:14 compute-1 sudo[328931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:14 compute-1 nova_compute[226101]: 2025-12-06 08:35:14.349 226109 DEBUG oslo_concurrency.processutils [None req-973963d8-dc59-4fec-ad5a-82e5018c1182 05522e78304c4c4eb4be044936c2fa3e 5ed95c9b17ee4dcb83395850789304e6 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:35:14 compute-1 nova_compute[226101]: 2025-12-06 08:35:14.443 226109 DEBUG oslo_concurrency.processutils [None req-973963d8-dc59-4fec-ad5a-82e5018c1182 05522e78304c4c4eb4be044936c2fa3e 5ed95c9b17ee4dcb83395850789304e6 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:35:14 compute-1 nova_compute[226101]: 2025-12-06 08:35:14.675 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:14 compute-1 sudo[328931]: pam_unix(sudo:session): session closed for user root
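
    [annotation] The sudo burst above (pids 328856/328881/328906/328931) is cephadm's SSH orchestration probing this host: /bin/true as a sudo smoke test, `which python3` to locate an interpreter, then the fsid-scoped cephadm copy run under python3 with `gather-facts` to inventory the host. A hedged re-creation of the probe sequence (paths illustrative; the hash-named script is the per-cluster copy under /var/lib/ceph/<fsid>/ as logged):

        import subprocess

        # Same probe order as the sudo records above.
        for cmd in (
            ['sudo', '/bin/true'],              # can we escalate at all?
            ['sudo', '/bin/which', 'python3'],  # locate an interpreter
        ):
            subprocess.run(cmd, check=True)

        # cephadm itself then runs under the located interpreter, e.g.:
        #   sudo /bin/python3 /var/lib/ceph/<fsid>/cephadm.<digest> \
        #       --timeout 895 gather-facts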
Dec 06 08:35:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2342233228' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:35:14 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2342233228' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:35:14 compute-1 ceph-mon[81689]: pgmap v4297: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
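
    [annotation] The two dispatch records above show client.openstack (the Cinder/Nova Ceph identity) issuing JSON mon commands: a cluster-wide `df` and a quota read on the "volumes" pool. The same calls can be reproduced with the librados Python binding; a minimal sketch, assuming a reachable cluster and a keyring for client.openstack:

        import json
        import rados

        # Issue the two mon commands seen dispatched above, as client.openstack.
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
        cluster.connect()
        for cmd in (
            {"prefix": "df", "format": "json"},
            {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"},
        ):
            ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd["prefix"], '->', ret, outbuf[:80])
        cluster.shutdown()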
Dec 06 08:35:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:15.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:15.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:16 compute-1 sshd-session[328989]: Received disconnect from 91.144.158.231 port 29348:11: Bye Bye [preauth]
Dec 06 08:35:16 compute-1 sshd-session[328989]: Disconnected from authenticating user root 91.144.158.231 port 29348 [preauth]
Dec 06 08:35:16 compute-1 ceph-mon[81689]: pgmap v4298: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3295919848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:16 compute-1 ceph-mon[81689]: pgmap v4299: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:35:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:35:16 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2806479779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:17 compute-1 nova_compute[226101]: 2025-12-06 08:35:17.118 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:17.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:17 compute-1 nova_compute[226101]: 2025-12-06 08:35:17.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:17.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:18 compute-1 podman[328992]: 2025-12-06 08:35:18.133265484 +0000 UTC m=+0.111229604 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 08:35:18 compute-1 podman[328991]: 2025-12-06 08:35:18.134002124 +0000 UTC m=+0.110314430 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:35:18 compute-1 podman[328993]: 2025-12-06 08:35:18.145805001 +0000 UTC m=+0.122646672 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
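
    [annotation] The three podman records above are periodic container healthcheck events: each reports health_status=healthy with failing streak 0, driven by the 'healthcheck' stanza in config_data (test: /openstack/healthcheck). The same check can be triggered on demand; a sketch using podman's healthcheck subcommand, with the container names taken from these records:

        import subprocess

        # 'podman healthcheck run <name>' exits 0 when the container's
        # configured test (here /openstack/healthcheck) passes.
        for name in ('ovn_metadata_agent', 'multipathd', 'ovn_controller'):
            r = subprocess.run(['podman', 'healthcheck', 'run', name])
            print(name, 'healthy' if r.returncode == 0 else f'unhealthy rc={r.returncode}')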
Dec 06 08:35:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:18 compute-1 ceph-mon[81689]: pgmap v4300: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:19.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:19 compute-1 nova_compute[226101]: 2025-12-06 08:35:19.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:19 compute-1 nova_compute[226101]: 2025-12-06 08:35:19.676 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:19.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3215890770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:20 compute-1 ceph-mon[81689]: pgmap v4301: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:20 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/766304778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:21.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:21 compute-1 sshd-session[329056]: Received disconnect from 186.87.166.141 port 46576:11: Bye Bye [preauth]
Dec 06 08:35:21 compute-1 sshd-session[329056]: Disconnected from authenticating user root 186.87.166.141 port 46576 [preauth]
Dec 06 08:35:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:35:21.581 139580 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=110, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=109) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:35:21 compute-1 nova_compute[226101]: 2025-12-06 08:35:21.581 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:21 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:35:21.582 139580 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:35:21 compute-1 nova_compute[226101]: 2025-12-06 08:35:21.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:21 compute-1 ceph-mon[81689]: pgmap v4302: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:21.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:22 compute-1 nova_compute[226101]: 2025-12-06 08:35:22.121 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:23.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:23.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:24 compute-1 ceph-mon[81689]: pgmap v4303: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:35:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:35:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:35:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:35:24 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:35:24 compute-1 nova_compute[226101]: 2025-12-06 08:35:24.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:24 compute-1 nova_compute[226101]: 2025-12-06 08:35:24.748 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:25 compute-1 ceph-mon[81689]: pgmap v4304: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:25.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:25.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:26 compute-1 sshd-session[329058]: Received disconnect from 154.209.4.183 port 58770:11: Bye Bye [preauth]
Dec 06 08:35:26 compute-1 sshd-session[329058]: Disconnected from authenticating user root 154.209.4.183 port 58770 [preauth]
Dec 06 08:35:27 compute-1 nova_compute[226101]: 2025-12-06 08:35:27.199 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:27.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:27 compute-1 ceph-mon[81689]: pgmap v4305: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:29 compute-1 ceph-mon[81689]: pgmap v4306: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:29 compute-1 sudo[329060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:29 compute-1 sudo[329060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:29 compute-1 sudo[329060]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:29 compute-1 sudo[329085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:35:29 compute-1 sudo[329085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:29.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:29 compute-1 sudo[329085]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:29.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:29 compute-1 nova_compute[226101]: 2025-12-06 08:35:29.755 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:30 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:31.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:31 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:35:31.584 139580 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=03fe054d-d727-4af3-9c5e-92e57505f242, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '110'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
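
    [annotation] This transaction closes the loop opened at 08:35:21: after the SB_Global nb_cfg bump to 110 and the agent's deliberate 10-second delay, the metadata agent acknowledges the config by writing 'neutron:ovn-metadata-sb-cfg': '110' into its Chassis_Private row. A sketch of the equivalent write via ovsdbapp, under stated assumptions (southbound endpoint is illustrative; the record UUID is the one logged; the logged command also passed if_exists=True, supported on newer ovsdbapp releases):

        from ovsdbapp.backend.ovs_idl import connection
        from ovsdbapp.schema.ovn_southbound import impl_idl

        # Endpoint assumed; real deployments often use ssl: with certs.
        conn = connection.Connection(
            idl=connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642',
                                                'OVN_Southbound'),
            timeout=10)
        sb = impl_idl.OvnSbApiIdlImpl(conn)
        sb.db_set('Chassis_Private', '03fe054d-d727-4af3-9c5e-92e57505f242',
                  ('external_ids', {'neutron:ovn-metadata-sb-cfg': '110'})
                  ).execute(check_error=True)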
Dec 06 08:35:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:31.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:32 compute-1 nova_compute[226101]: 2025-12-06 08:35:32.201 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:32 compute-1 ceph-mon[81689]: pgmap v4307: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:33.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:33.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:34 compute-1 ceph-mon[81689]: pgmap v4308: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:34 compute-1 nova_compute[226101]: 2025-12-06 08:35:34.753 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:35 compute-1 ceph-mon[81689]: pgmap v4309: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:35.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:35.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:36 compute-1 nova_compute[226101]: 2025-12-06 08:35:36.925 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:37 compute-1 nova_compute[226101]: 2025-12-06 08:35:37.226 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:37 compute-1 ceph-mon[81689]: pgmap v4310: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:37.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:37 compute-1 sshd-session[329110]: Connection closed by 107.150.106.178 port 59288 [preauth]
Dec 06 08:35:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:37.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:38 compute-1 sshd-session[329112]: Received disconnect from 101.100.194.199 port 34994:11: Bye Bye [preauth]
Dec 06 08:35:38 compute-1 sshd-session[329112]: Disconnected from authenticating user root 101.100.194.199 port 34994 [preauth]
Dec 06 08:35:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:39.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:39 compute-1 ceph-mon[81689]: pgmap v4311: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:39.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:39 compute-1 nova_compute[226101]: 2025-12-06 08:35:39.755 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:41.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:41.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:41 compute-1 ceph-mon[81689]: pgmap v4312: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:42 compute-1 nova_compute[226101]: 2025-12-06 08:35:42.267 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:43.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:43.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:43 compute-1 ceph-mon[81689]: pgmap v4313: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:44 compute-1 sshd-session[329114]: Connection closed by 14.103.75.9 port 22824 [preauth]
Dec 06 08:35:44 compute-1 sshd-session[329115]: Received disconnect from 136.112.8.45 port 49464:11: Bye Bye [preauth]
Dec 06 08:35:44 compute-1 sshd-session[329115]: Disconnected from authenticating user root 136.112.8.45 port 49464 [preauth]
Dec 06 08:35:44 compute-1 nova_compute[226101]: 2025-12-06 08:35:44.795 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:45.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:45.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:46 compute-1 ceph-mon[81689]: pgmap v4314: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:47 compute-1 nova_compute[226101]: 2025-12-06 08:35:47.313 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:47 compute-1 ceph-mon[81689]: pgmap v4315: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:47.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:47.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:49 compute-1 podman[329118]: 2025-12-06 08:35:49.083472202 +0000 UTC m=+0.068073146 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 08:35:49 compute-1 podman[329119]: 2025-12-06 08:35:49.108208846 +0000 UTC m=+0.083840460 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:35:49 compute-1 podman[329120]: 2025-12-06 08:35:49.108284448 +0000 UTC m=+0.087257781 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:35:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:49.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:49 compute-1 ceph-mon[81689]: pgmap v4316: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:49 compute-1 nova_compute[226101]: 2025-12-06 08:35:49.796 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:51.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:51 compute-1 ceph-mon[81689]: pgmap v4317: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:52 compute-1 nova_compute[226101]: 2025-12-06 08:35:52.315 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:53.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:53 compute-1 nova_compute[226101]: 2025-12-06 08:35:53.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:53 compute-1 nova_compute[226101]: 2025-12-06 08:35:53.736 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:35:53 compute-1 nova_compute[226101]: 2025-12-06 08:35:53.736 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:35:53 compute-1 nova_compute[226101]: 2025-12-06 08:35:53.736 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:35:53 compute-1 nova_compute[226101]: 2025-12-06 08:35:53.737 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:35:53 compute-1 nova_compute[226101]: 2025-12-06 08:35:53.737 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:35:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:53.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:53 compute-1 ceph-mon[81689]: pgmap v4318: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:35:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/490789410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.171 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.344 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.346 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4216MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.346 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.346 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.426 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.427 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.443 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:35:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/490789410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.843 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:35:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/436894287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.881 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.887 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.906 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.908 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:35:54 compute-1 nova_compute[226101]: 2025-12-06 08:35:54.908 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:35:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:55.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:55.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:55 compute-1 ceph-mon[81689]: pgmap v4319: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/436894287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:56 compute-1 ceph-mon[81689]: pgmap v4320: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:57 compute-1 nova_compute[226101]: 2025-12-06 08:35:57.412 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:57.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:57.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:59.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:35:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:59.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:59 compute-1 nova_compute[226101]: 2025-12-06 08:35:59.845 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:59 compute-1 ceph-mon[81689]: pgmap v4321: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:00 compute-1 ceph-mon[81689]: pgmap v4322: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:01.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:36:01.715 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:36:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:36:01.716 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:36:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:36:01.716 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:36:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:01.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:02 compute-1 nova_compute[226101]: 2025-12-06 08:36:02.458 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:03.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:03 compute-1 ceph-mon[81689]: pgmap v4323: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:03.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:04 compute-1 nova_compute[226101]: 2025-12-06 08:36:04.885 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:04 compute-1 ceph-mon[81689]: pgmap v4324: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:05.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:05.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:05 compute-1 nova_compute[226101]: 2025-12-06 08:36:05.909 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:05 compute-1 nova_compute[226101]: 2025-12-06 08:36:05.910 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:36:05 compute-1 nova_compute[226101]: 2025-12-06 08:36:05.910 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:36:05 compute-1 nova_compute[226101]: 2025-12-06 08:36:05.927 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:36:07 compute-1 ceph-mon[81689]: pgmap v4325: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:07 compute-1 nova_compute[226101]: 2025-12-06 08:36:07.460 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:07.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:07.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:09 compute-1 ceph-mon[81689]: pgmap v4326: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4294410134' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:36:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4294410134' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:36:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:09.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:09.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:09 compute-1 nova_compute[226101]: 2025-12-06 08:36:09.887 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:10 compute-1 nova_compute[226101]: 2025-12-06 08:36:10.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:11 compute-1 nova_compute[226101]: 2025-12-06 08:36:11.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:11 compute-1 nova_compute[226101]: 2025-12-06 08:36:11.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:11 compute-1 nova_compute[226101]: 2025-12-06 08:36:11.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:36:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:11.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:11.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:11 compute-1 ceph-mon[81689]: pgmap v4327: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:12 compute-1 nova_compute[226101]: 2025-12-06 08:36:12.489 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.236764) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173236810, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 1807, "num_deletes": 251, "total_data_size": 4446641, "memory_usage": 4499904, "flush_reason": "Manual Compaction"}
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Dec 06 08:36:13 compute-1 ceph-mon[81689]: pgmap v4328: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173262766, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 2901434, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94619, "largest_seqno": 96421, "table_properties": {"data_size": 2893790, "index_size": 4586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15560, "raw_average_key_size": 20, "raw_value_size": 2878691, "raw_average_value_size": 3738, "num_data_blocks": 201, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010006, "oldest_key_time": 1765010006, "file_creation_time": 1765010173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 26101 microseconds, and 8451 cpu microseconds.
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.262850) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 2901434 bytes OK
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.262906) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.266500) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.266515) EVENT_LOG_v1 {"time_micros": 1765010173266510, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.266531) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 4438553, prev total WAL file size 4438553, number of live WAL files 2.
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.267610) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(2833KB)], [195(11MB)]
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173267679, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 15332542, "oldest_snapshot_seqno": -1}
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 12418 keys, 13372369 bytes, temperature: kUnknown
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173375523, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 13372369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13296043, "index_size": 44149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 329789, "raw_average_key_size": 26, "raw_value_size": 13082904, "raw_average_value_size": 1053, "num_data_blocks": 1660, "num_entries": 12418, "num_filter_entries": 12418, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765010173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.376022) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 13372369 bytes
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.377785) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.8 rd, 123.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 11.9 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(9.9) write-amplify(4.6) OK, records in: 12937, records dropped: 519 output_compression: NoCompression
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.377813) EVENT_LOG_v1 {"time_micros": 1765010173377800, "job": 126, "event": "compaction_finished", "compaction_time_micros": 108128, "compaction_time_cpu_micros": 41871, "output_level": 6, "num_output_files": 1, "total_output_size": 13372369, "num_input_records": 12937, "num_output_records": 12418, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173379310, "job": 126, "event": "table_file_deletion", "file_number": 197}
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173382452, "job": 126, "event": "table_file_deletion", "file_number": 195}
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.267491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.382681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.382686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.382688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.382690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:36:13.382691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:13.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:13.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:14 compute-1 nova_compute[226101]: 2025-12-06 08:36:14.890 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:15 compute-1 ceph-mon[81689]: pgmap v4329: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:15.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:15.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:16 compute-1 sshd-session[329223]: Received disconnect from 45.120.216.232 port 59282:11: Bye Bye [preauth]
Dec 06 08:36:16 compute-1 sshd-session[329223]: Disconnected from authenticating user root 45.120.216.232 port 59282 [preauth]
Dec 06 08:36:16 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3432423705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:17 compute-1 nova_compute[226101]: 2025-12-06 08:36:17.491 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:17 compute-1 nova_compute[226101]: 2025-12-06 08:36:17.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:17.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:17.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:17 compute-1 sshd-session[329227]: Received disconnect from 186.96.151.198 port 40448:11: Bye Bye [preauth]
Dec 06 08:36:17 compute-1 sshd-session[329227]: Disconnected from authenticating user root 186.96.151.198 port 40448 [preauth]
Dec 06 08:36:17 compute-1 ceph-mon[81689]: pgmap v4330: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:17 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/4080446913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:19 compute-1 ceph-mon[81689]: pgmap v4331: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4153081065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:19.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:19.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:19 compute-1 nova_compute[226101]: 2025-12-06 08:36:19.892 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:20 compute-1 podman[329229]: 2025-12-06 08:36:20.060548813 +0000 UTC m=+0.048208264 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:36:20 compute-1 podman[329230]: 2025-12-06 08:36:20.077570869 +0000 UTC m=+0.057629476 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 06 08:36:20 compute-1 podman[329231]: 2025-12-06 08:36:20.142668936 +0000 UTC m=+0.120537964 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 08:36:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3293839874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:21 compute-1 ceph-mon[81689]: pgmap v4332: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:21 compute-1 nova_compute[226101]: 2025-12-06 08:36:21.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:21 compute-1 nova_compute[226101]: 2025-12-06 08:36:21.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:21.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:21.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:22 compute-1 nova_compute[226101]: 2025-12-06 08:36:22.494 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:23 compute-1 ceph-mon[81689]: pgmap v4333: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:23.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:23.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:24 compute-1 sshd-session[329291]: Received disconnect from 106.51.92.114 port 42150:11: Bye Bye [preauth]
Dec 06 08:36:24 compute-1 sshd-session[329291]: Disconnected from authenticating user root 106.51.92.114 port 42150 [preauth]
Dec 06 08:36:24 compute-1 nova_compute[226101]: 2025-12-06 08:36:24.893 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:25.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:25 compute-1 ceph-mon[81689]: pgmap v4334: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:25.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:26 compute-1 sshd-session[329225]: ssh_dispatch_run_fatal: Connection from 14.103.118.136 port 32876: Connection timed out [preauth]
Dec 06 08:36:26 compute-1 nova_compute[226101]: 2025-12-06 08:36:26.584 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:27 compute-1 nova_compute[226101]: 2025-12-06 08:36:27.496 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:27 compute-1 ceph-mon[81689]: pgmap v4335: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:27.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:27.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:29.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:29 compute-1 ceph-mon[81689]: pgmap v4336: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:29 compute-1 sudo[329293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:29 compute-1 sudo[329293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:29 compute-1 sudo[329293]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:29 compute-1 sudo[329318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:36:29 compute-1 sudo[329318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:29 compute-1 sudo[329318]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:29.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:29 compute-1 sudo[329343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:29 compute-1 sudo[329343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:29 compute-1 sudo[329343]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:29 compute-1 nova_compute[226101]: 2025-12-06 08:36:29.894 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:29 compute-1 sudo[329368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:36:29 compute-1 sudo[329368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:30 compute-1 sudo[329368]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:36:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:36:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:36:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:36:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:36:31 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:36:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:31.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:31.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:32 compute-1 nova_compute[226101]: 2025-12-06 08:36:32.544 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:32 compute-1 sshd-session[329424]: Received disconnect from 124.18.141.70 port 46390:11: Bye Bye [preauth]
Dec 06 08:36:32 compute-1 sshd-session[329424]: Disconnected from authenticating user root 124.18.141.70 port 46390 [preauth]
Dec 06 08:36:32 compute-1 ceph-mon[81689]: pgmap v4337: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:33.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:33.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:34 compute-1 ceph-mon[81689]: pgmap v4338: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:34 compute-1 nova_compute[226101]: 2025-12-06 08:36:34.896 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:35 compute-1 ceph-mon[81689]: pgmap v4339: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:35.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:35.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:37 compute-1 sshd-session[328739]: error: kex_exchange_identification: read: Connection reset by peer
Dec 06 08:36:37 compute-1 sshd-session[328739]: Connection reset by 165.154.55.146 port 50450
Dec 06 08:36:37 compute-1 nova_compute[226101]: 2025-12-06 08:36:37.547 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:37.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:37.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:37 compute-1 ceph-mon[81689]: pgmap v4340: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:38 compute-1 sudo[329426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:38 compute-1 sudo[329426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:38 compute-1 sudo[329426]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:38 compute-1 sudo[329451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:36:38 compute-1 sudo[329451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:38 compute-1 sudo[329451]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:36:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:36:39 compute-1 ceph-mon[81689]: pgmap v4341: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:39.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:39.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:39 compute-1 nova_compute[226101]: 2025-12-06 08:36:39.898 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:40 compute-1 sshd-session[329476]: Received disconnect from 91.144.158.231 port 60655:11: Bye Bye [preauth]
Dec 06 08:36:40 compute-1 sshd-session[329476]: Disconnected from authenticating user root 91.144.158.231 port 60655 [preauth]
Dec 06 08:36:41 compute-1 ceph-mon[81689]: pgmap v4342: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:41.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:41.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:42 compute-1 nova_compute[226101]: 2025-12-06 08:36:42.548 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:42 compute-1 ceph-mon[81689]: pgmap v4343: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:43.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:43.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:44 compute-1 nova_compute[226101]: 2025-12-06 08:36:44.902 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:45.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:45.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:46 compute-1 ceph-mon[81689]: pgmap v4344: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:47 compute-1 sshd-session[329478]: Received disconnect from 14.225.3.79 port 49102:11: Bye Bye [preauth]
Dec 06 08:36:47 compute-1 sshd-session[329478]: Disconnected from authenticating user root 14.225.3.79 port 49102 [preauth]
Dec 06 08:36:47 compute-1 nova_compute[226101]: 2025-12-06 08:36:47.549 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:47.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:47.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:48 compute-1 ceph-mon[81689]: pgmap v4345: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:49.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:49.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:49 compute-1 nova_compute[226101]: 2025-12-06 08:36:49.903 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:50 compute-1 ceph-mon[81689]: pgmap v4346: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:51 compute-1 podman[329483]: 2025-12-06 08:36:51.067615788 +0000 UTC m=+0.052979792 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 08:36:51 compute-1 podman[329482]: 2025-12-06 08:36:51.071259526 +0000 UTC m=+0.058393317 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:36:51 compute-1 podman[329484]: 2025-12-06 08:36:51.114336131 +0000 UTC m=+0.098068311 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 08:36:51 compute-1 ceph-mon[81689]: pgmap v4347: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:51.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:51.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:52 compute-1 nova_compute[226101]: 2025-12-06 08:36:52.553 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:53.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:53.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:54 compute-1 ceph-mon[81689]: pgmap v4348: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:54 compute-1 nova_compute[226101]: 2025-12-06 08:36:54.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:54 compute-1 nova_compute[226101]: 2025-12-06 08:36:54.622 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:36:54 compute-1 nova_compute[226101]: 2025-12-06 08:36:54.622 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:36:54 compute-1 nova_compute[226101]: 2025-12-06 08:36:54.622 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:36:54 compute-1 nova_compute[226101]: 2025-12-06 08:36:54.622 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:36:54 compute-1 nova_compute[226101]: 2025-12-06 08:36:54.622 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:36:54 compute-1 nova_compute[226101]: 2025-12-06 08:36:54.903 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:36:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2972193943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.036 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:36:55 compute-1 ceph-mon[81689]: pgmap v4349: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2972193943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.190 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.191 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4237MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.192 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.192 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.267 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.267 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.287 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:36:55 compute-1 sshd-session[329545]: Received disconnect from 101.100.194.199 port 50316:11: Bye Bye [preauth]
Dec 06 08:36:55 compute-1 sshd-session[329545]: Disconnected from authenticating user root 101.100.194.199 port 50316 [preauth]
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.480 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.481 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.498 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.537 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.556 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:36:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:55.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:55.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:36:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1615628550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.977 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:36:55 compute-1 nova_compute[226101]: 2025-12-06 08:36:55.983 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:36:56 compute-1 nova_compute[226101]: 2025-12-06 08:36:56.004 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:36:56 compute-1 nova_compute[226101]: 2025-12-06 08:36:56.006 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:36:56 compute-1 nova_compute[226101]: 2025-12-06 08:36:56.006 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:36:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1615628550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:57 compute-1 ceph-mon[81689]: pgmap v4350: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:57 compute-1 nova_compute[226101]: 2025-12-06 08:36:57.555 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:57.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:57.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:59 compute-1 ceph-mon[81689]: pgmap v4351: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:59 compute-1 sshd-session[329480]: Connection closed by 165.154.55.146 port 42744 [preauth]
Dec 06 08:36:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:59.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:59 compute-1 sshd-session[329593]: Received disconnect from 186.87.166.141 port 47694:11: Bye Bye [preauth]
Dec 06 08:36:59 compute-1 sshd-session[329593]: Disconnected from authenticating user root 186.87.166.141 port 47694 [preauth]
Dec 06 08:36:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:36:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:59.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:59 compute-1 nova_compute[226101]: 2025-12-06 08:36:59.940 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:00 compute-1 nova_compute[226101]: 2025-12-06 08:37:00.000 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:01 compute-1 ceph-mon[81689]: pgmap v4352: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:01.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:37:01.716 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:37:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:37:01.717 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:37:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:37:01.717 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:37:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:01.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:02 compute-1 nova_compute[226101]: 2025-12-06 08:37:02.558 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:03 compute-1 ceph-mon[81689]: pgmap v4353: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:03.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:03.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:04 compute-1 nova_compute[226101]: 2025-12-06 08:37:04.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:04 compute-1 nova_compute[226101]: 2025-12-06 08:37:04.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:37:04 compute-1 nova_compute[226101]: 2025-12-06 08:37:04.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:37:04 compute-1 nova_compute[226101]: 2025-12-06 08:37:04.748 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:37:04 compute-1 nova_compute[226101]: 2025-12-06 08:37:04.942 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:05 compute-1 ceph-mon[81689]: pgmap v4354: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:05.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:05.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:07 compute-1 nova_compute[226101]: 2025-12-06 08:37:07.561 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:07.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:07 compute-1 ceph-mon[81689]: pgmap v4355: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:07.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:09 compute-1 ceph-mon[81689]: pgmap v4356: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2873814189' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:37:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2873814189' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:37:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:09.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:09.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:09 compute-1 nova_compute[226101]: 2025-12-06 08:37:09.943 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:10 compute-1 ceph-mon[81689]: pgmap v4357: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:11 compute-1 nova_compute[226101]: 2025-12-06 08:37:11.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:11.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:11.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:12 compute-1 nova_compute[226101]: 2025-12-06 08:37:12.563 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:12 compute-1 nova_compute[226101]: 2025-12-06 08:37:12.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:13 compute-1 ceph-mon[81689]: pgmap v4358: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:13 compute-1 nova_compute[226101]: 2025-12-06 08:37:13.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:13 compute-1 nova_compute[226101]: 2025-12-06 08:37:13.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:37:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:13.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:13.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:14 compute-1 nova_compute[226101]: 2025-12-06 08:37:14.946 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:15 compute-1 ceph-mon[81689]: pgmap v4359: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:15.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:15.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:17 compute-1 ceph-mon[81689]: pgmap v4360: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:17 compute-1 nova_compute[226101]: 2025-12-06 08:37:17.566 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:17.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:17.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:18 compute-1 nova_compute[226101]: 2025-12-06 08:37:18.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:18 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2483443150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:19.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:19 compute-1 ceph-mon[81689]: pgmap v4361: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:19 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1232078863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:19.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:19 compute-1 nova_compute[226101]: 2025-12-06 08:37:19.948 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/943641337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:21 compute-1 sshd-session[329595]: Received disconnect from 136.112.8.45 port 47422:11: Bye Bye [preauth]
Dec 06 08:37:21 compute-1 sshd-session[329595]: Disconnected from authenticating user root 136.112.8.45 port 47422 [preauth]
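
Note: the `sshd-session` preauth disconnects in this capture (136.112.8.45 here, later 186.96.151.198, 45.120.216.232, 106.51.92.114, and others) are failed root-login attempts from public addresses, i.e. routine Internet-wide SSH scanning; nothing in these lines indicates a successful authentication. A small sketch for tallying offenders from a journal dump, assuming lines shaped like the ones above:

    import re
    from collections import Counter

    FAIL_RE = re.compile(
        r'Disconnected from authenticating user (\S+) '
        r'(\d+\.\d+\.\d+\.\d+) port (\d+)')

    def top_offenders(lines, n=10):
        hits = Counter()
        for line in lines:
            m = FAIL_RE.search(line)
            if m:
                user, ip, _port = m.groups()
                hits[(ip, user)] += 1
        return hits.most_common(n)
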
Dec 06 08:37:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:21.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:21.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:21 compute-1 ceph-mon[81689]: pgmap v4362: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1486169835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:22 compute-1 podman[329598]: 2025-12-06 08:37:22.083642243 +0000 UTC m=+0.061589073 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:37:22 compute-1 podman[329597]: 2025-12-06 08:37:22.094592967 +0000 UTC m=+0.071111569 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:37:22 compute-1 podman[329599]: 2025-12-06 08:37:22.137880039 +0000 UTC m=+0.111635336 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
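
Note: the three podman `health_status` events above are the periodic container health checks declared in each container's config_data (`healthcheck` mounts /var/lib/openstack/healthchecks/<name> and runs /openstack/healthcheck inside the container); all three report healthy with a zero failing streak. The same status can be read back from podman directly; a sketch, with the caveat that the exact JSON field name varies across podman releases:

    import json
    import subprocess

    def container_health(name):
        # 'podman inspect' emits a JSON array; on recent podman the health
        # state lives under .State.Health (older releases used .State.Healthcheck).
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(out)[0]["State"]
        return state.get("Health", state.get("Healthcheck", {})).get("Status", "unknown")

    print(container_health("ovn_metadata_agent"))
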
Dec 06 08:37:22 compute-1 nova_compute[226101]: 2025-12-06 08:37:22.568 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:22 compute-1 nova_compute[226101]: 2025-12-06 08:37:22.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:23 compute-1 ceph-mon[81689]: pgmap v4363: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:23 compute-1 nova_compute[226101]: 2025-12-06 08:37:23.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:23.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:23.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:24 compute-1 nova_compute[226101]: 2025-12-06 08:37:24.949 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:25 compute-1 ceph-mon[81689]: pgmap v4364: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:25.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:25.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:27 compute-1 nova_compute[226101]: 2025-12-06 08:37:27.571 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:27 compute-1 ceph-mon[81689]: pgmap v4365: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:27.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:27.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:28 compute-1 nova_compute[226101]: 2025-12-06 08:37:28.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:29 compute-1 ceph-mon[81689]: pgmap v4366: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:29.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:29.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:29 compute-1 nova_compute[226101]: 2025-12-06 08:37:29.952 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:31 compute-1 ceph-mon[81689]: pgmap v4367: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:31.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:31.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:31 compute-1 sshd-session[329658]: Received disconnect from 186.96.151.198 port 37236:11: Bye Bye [preauth]
Dec 06 08:37:31 compute-1 sshd-session[329658]: Disconnected from authenticating user root 186.96.151.198 port 37236 [preauth]
Dec 06 08:37:32 compute-1 nova_compute[226101]: 2025-12-06 08:37:32.573 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:33.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:33 compute-1 ceph-mon[81689]: pgmap v4368: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:33.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:34 compute-1 nova_compute[226101]: 2025-12-06 08:37:34.954 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:35 compute-1 ceph-mon[81689]: pgmap v4369: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:35.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:35.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:37 compute-1 ceph-mon[81689]: pgmap v4370: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:37 compute-1 nova_compute[226101]: 2025-12-06 08:37:37.575 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:37.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:37.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:38 compute-1 sudo[329660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:38 compute-1 sudo[329660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:38 compute-1 sudo[329660]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:38 compute-1 sudo[329685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:37:38 compute-1 sudo[329685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:38 compute-1 sudo[329685]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:38 compute-1 sudo[329710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:38 compute-1 sudo[329710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:38 compute-1 sudo[329710]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:38 compute-1 sudo[329735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:37:38 compute-1 sudo[329735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:39 compute-1 sudo[329735]: pam_unix(sudo:session): session closed for user root
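
Note: this sudo burst from ceph-admin is cephadm's host check-in: `/bin/true` as a sudo probe, `which python3` to locate an interpreter, then the per-cluster copy of the cephadm binary running `gather-facts` with an 895 s timeout, which collects host inventory (hostname, CPUs, memory, disks, NICs) for the orchestrator. The same collection can be reproduced by hand; a sketch using the binary path from the log, with field names as in recent cephadm releases:

    import json
    import subprocess

    # Per-cluster cephadm copy under /var/lib/ceph/<fsid>/, path as logged above.
    CEPHADM = ("/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # 'gather-facts' prints a single JSON document of host facts on stdout.
    facts = json.loads(subprocess.run(
        ["sudo", CEPHADM, "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True).stdout)
    print(facts.get("hostname"), facts.get("memory_total_kb"))

The mgr audit entries that follow (auth get, osd tree, config generate-minimal-conf) are the other half of the same reconciliation pass, issued by the cephadm mgr module on compute-0.
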
Dec 06 08:37:39 compute-1 ceph-mon[81689]: pgmap v4371: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:37:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:37:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:37:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:37:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:37:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:37:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:39.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:39.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:39 compute-1 nova_compute[226101]: 2025-12-06 08:37:39.956 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:40 compute-1 sshd-session[329774]: Received disconnect from 45.120.216.232 port 59618:11: Bye Bye [preauth]
Dec 06 08:37:40 compute-1 sshd-session[329774]: Disconnected from authenticating user root 45.120.216.232 port 59618 [preauth]
Dec 06 08:37:40 compute-1 sshd-session[329795]: banner exchange: Connection from 172.208.25.111 port 55960: invalid format
Dec 06 08:37:41 compute-1 ceph-mon[81689]: pgmap v4372: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:41.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:41.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:42 compute-1 nova_compute[226101]: 2025-12-06 08:37:42.582 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:43 compute-1 sshd-session[329796]: Received disconnect from 106.51.92.114 port 56937:11: Bye Bye [preauth]
Dec 06 08:37:43 compute-1 sshd-session[329796]: Disconnected from authenticating user root 106.51.92.114 port 56937 [preauth]
Dec 06 08:37:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:43.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:43.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:44 compute-1 ceph-mon[81689]: pgmap v4373: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:44 compute-1 nova_compute[226101]: 2025-12-06 08:37:44.959 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:45 compute-1 ceph-mon[81689]: pgmap v4374: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:45.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:45.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:46 compute-1 sudo[329798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:46 compute-1 sudo[329798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:46 compute-1 sudo[329798]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:46 compute-1 sudo[329823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:37:46 compute-1 sudo[329823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:46 compute-1 sudo[329823]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:47 compute-1 nova_compute[226101]: 2025-12-06 08:37:47.584 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:47.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:37:47 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:37:47 compute-1 ceph-mon[81689]: pgmap v4375: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:47.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:48 compute-1 ceph-mon[81689]: pgmap v4376: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:49.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:49.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:49 compute-1 nova_compute[226101]: 2025-12-06 08:37:49.960 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:50 compute-1 sshd-session[329793]: Connection closed by 172.208.25.111 port 55958 [preauth]
Dec 06 08:37:51 compute-1 ceph-mon[81689]: pgmap v4377: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:51.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:51.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:52 compute-1 nova_compute[226101]: 2025-12-06 08:37:52.586 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:53 compute-1 podman[329848]: 2025-12-06 08:37:53.074821103 +0000 UTC m=+0.057976816 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 06 08:37:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:53 compute-1 podman[329849]: 2025-12-06 08:37:53.096235158 +0000 UTC m=+0.080039149 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:37:53 compute-1 podman[329850]: 2025-12-06 08:37:53.135150191 +0000 UTC m=+0.117633416 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 06 08:37:53 compute-1 ceph-mon[81689]: pgmap v4378: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:53.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:53.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:54 compute-1 nova_compute[226101]: 2025-12-06 08:37:54.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:54 compute-1 nova_compute[226101]: 2025-12-06 08:37:54.848 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:37:54 compute-1 nova_compute[226101]: 2025-12-06 08:37:54.849 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:37:54 compute-1 nova_compute[226101]: 2025-12-06 08:37:54.849 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:37:54 compute-1 nova_compute[226101]: 2025-12-06 08:37:54.849 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:37:54 compute-1 nova_compute[226101]: 2025-12-06 08:37:54.849 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:37:54 compute-1 nova_compute[226101]: 2025-12-06 08:37:54.960 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:37:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/350442536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:55 compute-1 nova_compute[226101]: 2025-12-06 08:37:55.289 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:37:55 compute-1 nova_compute[226101]: 2025-12-06 08:37:55.447 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:37:55 compute-1 nova_compute[226101]: 2025-12-06 08:37:55.448 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4229MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:37:55 compute-1 nova_compute[226101]: 2025-12-06 08:37:55.448 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:37:55 compute-1 nova_compute[226101]: 2025-12-06 08:37:55.448 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:37:55 compute-1 ceph-mon[81689]: pgmap v4379: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/350442536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:55 compute-1 nova_compute[226101]: 2025-12-06 08:37:55.644 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:37:55 compute-1 nova_compute[226101]: 2025-12-06 08:37:55.645 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:37:55 compute-1 nova_compute[226101]: 2025-12-06 08:37:55.685 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:37:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:55.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:55.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:37:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/751184748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:56 compute-1 nova_compute[226101]: 2025-12-06 08:37:56.109 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:37:56 compute-1 nova_compute[226101]: 2025-12-06 08:37:56.113 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:37:56 compute-1 nova_compute[226101]: 2025-12-06 08:37:56.135 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:37:56 compute-1 nova_compute[226101]: 2025-12-06 08:37:56.137 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:37:56 compute-1 nova_compute[226101]: 2025-12-06 08:37:56.137 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:37:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/751184748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
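
Note: the sequence from 08:37:54 to 08:37:56 is one full pass of nova-compute's update_available_resource periodic task: it serializes on the "compute_resources" lock, audits hypervisor resources, and, because ephemeral storage is backed by RBD, shells out twice to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (each run shows up as a `client.openstack` df dispatch at the monitor), then reports unchanged inventory to placement and releases the lock after 0.689 s. Nova runs the command through oslo.concurrency; a minimal sketch of the same call, assuming the client id and conf path from the log:

    import json
    from oslo_concurrency import processutils

    # Same command the resource tracker logs above; returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    # Cluster-wide free space as reported by the mons, in GiB.
    print(round(stats['total_avail_bytes'] / 1024 ** 3, 2), 'GiB available')
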
Dec 06 08:37:57 compute-1 nova_compute[226101]: 2025-12-06 08:37:57.587 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:57 compute-1 ceph-mon[81689]: pgmap v4380: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:57.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:57.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:59 compute-1 ceph-mon[81689]: pgmap v4381: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:59 compute-1 sshd-session[329956]: Received disconnect from 91.144.158.231 port 15793:11: Bye Bye [preauth]
Dec 06 08:37:59 compute-1 sshd-session[329956]: Disconnected from authenticating user root 91.144.158.231 port 15793 [preauth]
Dec 06 08:37:59 compute-1 sshd-session[329954]: Received disconnect from 124.18.141.70 port 39112:11: Bye Bye [preauth]
Dec 06 08:37:59 compute-1 sshd-session[329954]: Disconnected from authenticating user root 124.18.141.70 port 39112 [preauth]
Dec 06 08:37:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:59.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:59 compute-1 nova_compute[226101]: 2025-12-06 08:37:59.961 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:37:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:59.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:01 compute-1 ceph-mon[81689]: pgmap v4382: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:38:01.717 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:38:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:38:01.718 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:38:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:38:01.718 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
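
Note: this ovn_metadata_agent trio is neutron's ProcessMonitor heartbeat: on each interval it takes the `_check_child_processes` lock, verifies its spawned haproxy children are still alive, and releases the lock in well under a millisecond (waited 0.000s / held 0.000s). The acquire/release pairs are emitted by oslo.concurrency's synchronized decorator at DEBUG level; a minimal sketch of that pattern (the function name here is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_children():
        # Runs with the named lock held; with DEBUG logging enabled,
        # lockutils logs the "acquired ... waited" / "released ... held"
        # pairs seen in the journal above.
        pass

    check_children()
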
Dec 06 08:38:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:38:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:01.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:38:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:01.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:02 compute-1 nova_compute[226101]: 2025-12-06 08:38:02.629 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:03 compute-1 ceph-mon[81689]: pgmap v4383: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:03.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:03.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:04 compute-1 nova_compute[226101]: 2025-12-06 08:38:04.962 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:05 compute-1 ceph-mon[81689]: pgmap v4384: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:05.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:05.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:07 compute-1 nova_compute[226101]: 2025-12-06 08:38:07.138 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:07 compute-1 nova_compute[226101]: 2025-12-06 08:38:07.138 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:38:07 compute-1 nova_compute[226101]: 2025-12-06 08:38:07.138 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:38:07 compute-1 nova_compute[226101]: 2025-12-06 08:38:07.168 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:38:07 compute-1 ceph-mon[81689]: pgmap v4385: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:07 compute-1 nova_compute[226101]: 2025-12-06 08:38:07.632 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:07.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:07.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:09 compute-1 sshd-session[329958]: Received disconnect from 101.100.194.199 port 48590:11: Bye Bye [preauth]
Dec 06 08:38:09 compute-1 sshd-session[329958]: Disconnected from authenticating user root 101.100.194.199 port 48590 [preauth]
Dec 06 08:38:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:09 compute-1 nova_compute[226101]: 2025-12-06 08:38:09.963 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:09.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:10 compute-1 ceph-mon[81689]: pgmap v4386: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2004746509' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:38:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2004746509' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:38:11 compute-1 ceph-mon[81689]: pgmap v4387: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:11.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:11.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:12 compute-1 nova_compute[226101]: 2025-12-06 08:38:12.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:12 compute-1 nova_compute[226101]: 2025-12-06 08:38:12.634 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:13 compute-1 ceph-mon[81689]: pgmap v4388: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:13 compute-1 nova_compute[226101]: 2025-12-06 08:38:13.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:13.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:14 compute-1 nova_compute[226101]: 2025-12-06 08:38:14.964 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:15 compute-1 nova_compute[226101]: 2025-12-06 08:38:15.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:15 compute-1 nova_compute[226101]: 2025-12-06 08:38:15.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:38:15 compute-1 ceph-mon[81689]: pgmap v4389: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:15.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:15.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:17 compute-1 ceph-mon[81689]: pgmap v4390: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:17 compute-1 nova_compute[226101]: 2025-12-06 08:38:17.637 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:17.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:17.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:19.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:19 compute-1 nova_compute[226101]: 2025-12-06 08:38:19.966 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:20.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:20 compute-1 nova_compute[226101]: 2025-12-06 08:38:20.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:20 compute-1 ceph-mon[81689]: pgmap v4391: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:20 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/62786821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1095114067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:21 compute-1 ceph-mon[81689]: pgmap v4392: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1308302036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:21.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:22.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:38:22 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7805.4 total, 600.0 interval
                                           Cumulative writes: 68K writes, 266K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.03 MB/s
                                           Cumulative WAL: 68K writes, 25K syncs, 2.74 writes per sync, written: 0.25 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 883 writes, 1887 keys, 883 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s
                                           Interval WAL: 883 writes, 423 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:38:22 compute-1 nova_compute[226101]: 2025-12-06 08:38:22.640 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2898152300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:23.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:24.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:24 compute-1 podman[329961]: 2025-12-06 08:38:24.084370744 +0000 UTC m=+0.057224036 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:38:24 compute-1 podman[329960]: 2025-12-06 08:38:24.089328107 +0000 UTC m=+0.068514879 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 08:38:24 compute-1 podman[329962]: 2025-12-06 08:38:24.14235878 +0000 UTC m=+0.104487525 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:38:24 compute-1 ceph-mon[81689]: pgmap v4393: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:24 compute-1 nova_compute[226101]: 2025-12-06 08:38:24.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:24 compute-1 nova_compute[226101]: 2025-12-06 08:38:24.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:24 compute-1 nova_compute[226101]: 2025-12-06 08:38:24.968 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:25.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:25 compute-1 ceph-mon[81689]: pgmap v4394: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:26.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:27 compute-1 ceph-mon[81689]: pgmap v4395: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:27 compute-1 nova_compute[226101]: 2025-12-06 08:38:27.643 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:27.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:28.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:28 compute-1 nova_compute[226101]: 2025-12-06 08:38:28.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:29 compute-1 ceph-mon[81689]: pgmap v4396: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:29.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:29 compute-1 nova_compute[226101]: 2025-12-06 08:38:29.969 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:30.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:30 compute-1 sshd-session[330026]: Received disconnect from 154.209.4.183 port 38194:11: Bye Bye [preauth]
Dec 06 08:38:30 compute-1 sshd-session[330026]: Disconnected from authenticating user root 154.209.4.183 port 38194 [preauth]
Dec 06 08:38:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:31.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:32.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:32 compute-1 nova_compute[226101]: 2025-12-06 08:38:32.646 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:32 compute-1 ceph-mon[81689]: pgmap v4397: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:33.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:34.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:34 compute-1 nova_compute[226101]: 2025-12-06 08:38:34.971 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:35 compute-1 ceph-mon[81689]: pgmap v4398: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:35.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:36.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:36 compute-1 ceph-mon[81689]: pgmap v4399: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:37 compute-1 ceph-mon[81689]: pgmap v4400: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:37 compute-1 sshd-session[330028]: Received disconnect from 186.87.166.141 port 48810:11: Bye Bye [preauth]
Dec 06 08:38:37 compute-1 sshd-session[330028]: Disconnected from authenticating user root 186.87.166.141 port 48810 [preauth]
Dec 06 08:38:37 compute-1 nova_compute[226101]: 2025-12-06 08:38:37.648 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:37.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:38.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:39.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:39 compute-1 ceph-mon[81689]: pgmap v4401: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:39 compute-1 nova_compute[226101]: 2025-12-06 08:38:39.974 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:40.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:40 compute-1 sshd-session[330030]: Received disconnect from 14.225.3.79 port 49744:11: Bye Bye [preauth]
Dec 06 08:38:40 compute-1 sshd-session[330030]: Disconnected from authenticating user root 14.225.3.79 port 49744 [preauth]
Dec 06 08:38:41 compute-1 ceph-mon[81689]: pgmap v4402: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:41.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:42.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:42 compute-1 nova_compute[226101]: 2025-12-06 08:38:42.651 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:43 compute-1 ceph-mon[81689]: pgmap v4403: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:43 compute-1 sshd-session[330032]: Received disconnect from 186.96.151.198 port 52364:11: Bye Bye [preauth]
Dec 06 08:38:43 compute-1 sshd-session[330032]: Disconnected from authenticating user root 186.96.151.198 port 52364 [preauth]
Dec 06 08:38:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:43.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:44.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:44 compute-1 nova_compute[226101]: 2025-12-06 08:38:44.976 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:45.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:45 compute-1 ceph-mon[81689]: pgmap v4404: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:46.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:46 compute-1 sudo[330034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:46 compute-1 sudo[330034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:46 compute-1 sudo[330034]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:46 compute-1 sudo[330059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:38:46 compute-1 sudo[330059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:46 compute-1 sudo[330059]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:46 compute-1 sudo[330084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:46 compute-1 sudo[330084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:46 compute-1 sudo[330084]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:46 compute-1 sudo[330109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:38:46 compute-1 sudo[330109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:47 compute-1 ceph-mon[81689]: pgmap v4405: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:47 compute-1 sudo[330109]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:47 compute-1 nova_compute[226101]: 2025-12-06 08:38:47.653 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:47.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:48.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:38:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:38:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:38:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:38:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:38:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:38:48 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:38:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:49 compute-1 ceph-mon[81689]: pgmap v4406: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:49.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:49 compute-1 nova_compute[226101]: 2025-12-06 08:38:49.977 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:50.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:51.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:52 compute-1 ceph-mon[81689]: pgmap v4407: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:52.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:52 compute-1 nova_compute[226101]: 2025-12-06 08:38:52.655 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:53 compute-1 ceph-mon[81689]: pgmap v4408: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:53.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:54.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:54 compute-1 sshd-session[330167]: Received disconnect from 136.112.8.45 port 38718:11: Bye Bye [preauth]
Dec 06 08:38:54 compute-1 sshd-session[330167]: Disconnected from authenticating user root 136.112.8.45 port 38718 [preauth]
Dec 06 08:38:54 compute-1 nova_compute[226101]: 2025-12-06 08:38:54.978 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:55 compute-1 podman[330170]: 2025-12-06 08:38:55.087162255 +0000 UTC m=+0.066369111 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 08:38:55 compute-1 podman[330169]: 2025-12-06 08:38:55.122584286 +0000 UTC m=+0.099860770 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:38:55 compute-1 podman[330171]: 2025-12-06 08:38:55.15331711 +0000 UTC m=+0.132669959 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:38:55 compute-1 sudo[330225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:55 compute-1 sudo[330225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:55 compute-1 sudo[330225]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:55 compute-1 sudo[330250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:38:55 compute-1 sudo[330250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:55 compute-1 sudo[330250]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:55 compute-1 ceph-mon[81689]: pgmap v4409: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:38:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:38:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:55.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:56.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:56 compute-1 nova_compute[226101]: 2025-12-06 08:38:56.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:56 compute-1 nova_compute[226101]: 2025-12-06 08:38:56.634 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:38:56 compute-1 nova_compute[226101]: 2025-12-06 08:38:56.635 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:38:56 compute-1 nova_compute[226101]: 2025-12-06 08:38:56.635 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:38:56 compute-1 nova_compute[226101]: 2025-12-06 08:38:56.635 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:38:56 compute-1 nova_compute[226101]: 2025-12-06 08:38:56.635 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:38:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:38:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1865931276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.085 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.264 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.266 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4211MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.266 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.266 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.353 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.353 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.374 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.658 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:38:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/490959542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.823 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
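The exact command nova just ran is recorded verbatim in the log line above. A minimal standalone sketch that runs the same command and extracts the cluster totals (the JSON field names under "stats" are as emitted by recent Ceph releases; treat them as an assumption on older clusters):

    import json
    import subprocess

    # Command string copied verbatim from the nova_compute log entry above.
    CMD = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]

    def cluster_capacity():
        """Return (total, used, avail) in bytes from `ceph df -f json`."""
        out = subprocess.run(CMD, capture_output=True, check=True, text=True)
        stats = json.loads(out.stdout)["stats"]
        # Newer Ceph reports total_used_raw_bytes; older releases use
        # total_used_bytes, hence the fallback.
        used = stats.get("total_used_raw_bytes", stats.get("total_used_bytes"))
        return stats["total_bytes"], used, stats["total_avail_bytes"]

    if __name__ == "__main__":
        total, used, avail = cluster_capacity()
        print(f"{used / 2**30:.1f} GiB used, {avail / 2**30:.1f} GiB avail "
              f"of {total / 2**30:.1f} GiB")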
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.828 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.868 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
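Worked arithmetic for the inventory just reported: Placement treats usable capacity as (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7168 MB of RAM, and about 17 GB of disk. A tiny check using the exact values from the log line above:

    # Effective capacity Placement derives from the logged inventory:
    # (total - reserved) * allocation_ratio per resource class.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1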
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.870 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:38:57 compute-1 nova_compute[226101]: 2025-12-06 08:38:57.870 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
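The acquire/waited/held trace above is oslo.concurrency's standard lock logging. A simplified sketch of the pattern behind it, serializing resource-tracker updates under a named lock (the "compute_resources" name is taken from the log and lockutils.synchronized is real oslo.concurrency API; the function body is illustrative, not nova's actual code):

    from oslo_concurrency import lockutils

    # The periodic task and RPC handlers both touch the tracker, so nova
    # guards updates with one named lock; fair=True preserves FIFO order
    # among waiters.
    @lockutils.synchronized("compute_resources", fair=True)
    def _update_available_resource():
        ...  # recompute the hypervisor view and push inventory to placement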
Dec 06 08:38:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:57.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:58.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
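The recurring anonymous "HEAD / HTTP/1.0" entries from 192.168.122.100 and 192.168.122.102 every couple of seconds look like load-balancer health probes against the RGW beast frontend. A minimal way to reproduce one probe; the target host and port are assumptions, since the log does not record the listening address:

    import http.client

    # Hypothetical endpoint: adjust to wherever radosgw actually listens.
    conn = http.client.HTTPConnection("192.168.122.101", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # expect 200, matching http_status=200 in the log
    conn.close()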
Dec 06 08:38:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:58 compute-1 ceph-mon[81689]: pgmap v4410: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1865931276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:59 compute-1 sshd[164848]: Timeout before authentication for connection from 154.209.4.183 to 38.102.83.204, pid = 329591
Dec 06 08:38:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/490959542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:59 compute-1 ceph-mon[81689]: pgmap v4411: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:38:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:59.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:59 compute-1 nova_compute[226101]: 2025-12-06 08:38:59.981 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:00.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:00 compute-1 sshd-session[330319]: Received disconnect from 106.51.92.114 port 43489:11: Bye Bye [preauth]
Dec 06 08:39:00 compute-1 sshd-session[330319]: Disconnected from authenticating user root 106.51.92.114 port 43489 [preauth]
Dec 06 08:39:00 compute-1 sshd-session[330321]: Received disconnect from 45.120.216.232 port 59952:11: Bye Bye [preauth]
Dec 06 08:39:00 compute-1 sshd-session[330321]: Disconnected from authenticating user root 45.120.216.232 port 59952 [preauth]
Dec 06 08:39:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:39:01.719 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:39:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:39:01.719 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:39:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:39:01.719 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:39:01 compute-1 ceph-mon[81689]: pgmap v4412: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:01.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:02.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:02 compute-1 sshd-session[330165]: Connection closed by 165.154.55.146 port 52288 [preauth]
Dec 06 08:39:02 compute-1 nova_compute[226101]: 2025-12-06 08:39:02.660 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:03 compute-1 ceph-mon[81689]: pgmap v4413: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:03 compute-1 nova_compute[226101]: 2025-12-06 08:39:03.864 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:03.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:04.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:04 compute-1 nova_compute[226101]: 2025-12-06 08:39:04.983 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:05.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:06.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:07 compute-1 ceph-mon[81689]: pgmap v4414: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:07 compute-1 nova_compute[226101]: 2025-12-06 08:39:07.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:07 compute-1 nova_compute[226101]: 2025-12-06 08:39:07.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:39:07 compute-1 nova_compute[226101]: 2025-12-06 08:39:07.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:39:07 compute-1 nova_compute[226101]: 2025-12-06 08:39:07.661 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:07.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:07 compute-1 nova_compute[226101]: 2025-12-06 08:39:07.930 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:39:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:08.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:08 compute-1 ceph-mon[81689]: pgmap v4415: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:39:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/869045702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:39:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:39:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/869045702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
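The audit entries show the client pairing a df call with an osd pool get-quota call against the volumes pool, the usual capacity-plus-quota check. The same quota query from the CLI, sketched below; the credentials reuse the client.openstack identity seen in the log, and the JSON field names are an assumption to verify on your release:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True)
    quota = json.loads(out.stdout)
    # A value of 0 conventionally means "no quota set".
    print(quota.get("quota_max_bytes"), quota.get("quota_max_objects"))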
Dec 06 08:39:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:09.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:09 compute-1 nova_compute[226101]: 2025-12-06 08:39:09.985 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:10 compute-1 ceph-mon[81689]: pgmap v4416: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/869045702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:39:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/869045702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:39:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:10.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:11 compute-1 ceph-mon[81689]: pgmap v4417: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:11.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:12 compute-1 nova_compute[226101]: 2025-12-06 08:39:12.663 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:12.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:13 compute-1 ceph-mon[81689]: pgmap v4418: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:13 compute-1 nova_compute[226101]: 2025-12-06 08:39:13.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:13.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:14 compute-1 nova_compute[226101]: 2025-12-06 08:39:14.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:14.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:14 compute-1 nova_compute[226101]: 2025-12-06 08:39:14.989 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:15 compute-1 ceph-mon[81689]: pgmap v4419: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:15.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:16 compute-1 nova_compute[226101]: 2025-12-06 08:39:16.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:16 compute-1 nova_compute[226101]: 2025-12-06 08:39:16.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:39:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:16.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:16 compute-1 ceph-mon[81689]: pgmap v4420: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:17 compute-1 nova_compute[226101]: 2025-12-06 08:39:17.666 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:17.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:18 compute-1 sshd-session[330324]: Received disconnect from 91.144.158.231 port 12622:11: Bye Bye [preauth]
Dec 06 08:39:18 compute-1 sshd-session[330324]: Disconnected from authenticating user root 91.144.158.231 port 12622 [preauth]
Dec 06 08:39:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:18.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:19.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:19 compute-1 nova_compute[226101]: 2025-12-06 08:39:19.991 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:20 compute-1 ceph-mon[81689]: pgmap v4421: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:20.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:21 compute-1 nova_compute[226101]: 2025-12-06 08:39:21.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:21 compute-1 ceph-mon[81689]: pgmap v4422: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3790213319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/509942672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3261544347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:21.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:22 compute-1 nova_compute[226101]: 2025-12-06 08:39:22.668 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3223365794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:22.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:23 compute-1 sshd-session[330326]: Received disconnect from 101.100.194.199 port 47056:11: Bye Bye [preauth]
Dec 06 08:39:23 compute-1 sshd-session[330326]: Disconnected from authenticating user root 101.100.194.199 port 47056 [preauth]
Dec 06 08:39:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:23 compute-1 ceph-mon[81689]: pgmap v4423: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:23.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:24 compute-1 nova_compute[226101]: 2025-12-06 08:39:24.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:24.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:24 compute-1 sshd-session[330328]: Received disconnect from 124.18.141.70 port 43398:11: Bye Bye [preauth]
Dec 06 08:39:24 compute-1 sshd-session[330328]: Disconnected from authenticating user root 124.18.141.70 port 43398 [preauth]
Dec 06 08:39:24 compute-1 nova_compute[226101]: 2025-12-06 08:39:24.995 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:25 compute-1 ceph-mon[81689]: pgmap v4424: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:25.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:26 compute-1 podman[330330]: 2025-12-06 08:39:26.0743554 +0000 UTC m=+0.057730830 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 06 08:39:26 compute-1 podman[330331]: 2025-12-06 08:39:26.090808741 +0000 UTC m=+0.069824514 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:39:26 compute-1 podman[330332]: 2025-12-06 08:39:26.133107056 +0000 UTC m=+0.113676271 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
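The three health_status=healthy events above come from podman's periodic healthcheck timers for the containers named in the log, each configured with the test /openstack/healthcheck. The same probe can be run on demand; an exit code of 0 means the configured test passed (container names taken from the log, loop itself illustrative):

    import subprocess

    for name in ("multipathd", "ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")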
Dec 06 08:39:26 compute-1 nova_compute[226101]: 2025-12-06 08:39:26.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:26.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:27 compute-1 ceph-mon[81689]: pgmap v4425: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:27 compute-1 nova_compute[226101]: 2025-12-06 08:39:27.671 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:27.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:28.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:29 compute-1 ceph-mon[81689]: pgmap v4426: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:29 compute-1 nova_compute[226101]: 2025-12-06 08:39:29.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:29.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:29 compute-1 nova_compute[226101]: 2025-12-06 08:39:29.994 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:39:30 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.5 total, 600.0 interval
                                           Cumulative writes: 19K writes, 98K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
                                           Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.20 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1472 writes, 7081 keys, 1472 commit groups, 1.0 writes per commit group, ingest: 15.69 MB, 0.03 MB/s
                                           Interval WAL: 1472 writes, 1472 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     19.7      6.15              0.36        63    0.098       0      0       0.0       0.0
                                             L6      1/0   12.75 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.6     52.2     45.0     15.13              1.96        62    0.244    546K    33K       0.0       0.0
                                            Sum      1/0   12.75 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.6     37.1     37.7     21.28              2.32       125    0.170    546K    33K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.5    116.8    117.1      0.53              0.21         8    0.067     51K   2047       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0     52.2     45.0     15.13              1.96        62    0.244    546K    33K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     19.8      6.10              0.36        62    0.098       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7800.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.118, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.78 GB write, 0.10 MB/s write, 0.77 GB read, 0.10 MB/s read, 21.3 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558b75ad91f0#2 capacity: 304.00 MB usage: 86.93 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000549 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5374,83.12 MB,27.3433%) FilterBlock(125,1.47 MB,0.482092%) IndexBlock(125,2.34 MB,0.770684%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
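Sanity check on the interval figures in the RocksDB dump above: 15.69 MB ingested over the 600.0 s reporting interval reproduces the logged 0.03 MB/s rate:

    # Values copied from the "Interval writes" line of the dump.
    ingest_mb, interval_s = 15.69, 600.0
    print(f"{ingest_mb / interval_s:.2f} MB/s")   # 0.03 MB/s, as logged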
Dec 06 08:39:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:30.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:30 compute-1 sshd[164848]: Timeout before authentication for connection from 107.150.106.178 to 38.102.83.204, pid = 329656
Dec 06 08:39:31 compute-1 ceph-mon[81689]: pgmap v4427: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:31.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:32 compute-1 nova_compute[226101]: 2025-12-06 08:39:32.672 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:32.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:33 compute-1 ceph-mon[81689]: pgmap v4428: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:33.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:34.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:34 compute-1 nova_compute[226101]: 2025-12-06 08:39:34.997 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:35 compute-1 ceph-mon[81689]: pgmap v4429: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:36.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:37 compute-1 nova_compute[226101]: 2025-12-06 08:39:37.676 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:37 compute-1 ceph-mon[81689]: pgmap v4430: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:37.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:38.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:39 compute-1 ceph-mon[81689]: pgmap v4431: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:39.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:39 compute-1 nova_compute[226101]: 2025-12-06 08:39:39.999 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:40.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:41 compute-1 ceph-mon[81689]: pgmap v4432: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:41.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:42 compute-1 nova_compute[226101]: 2025-12-06 08:39:42.679 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:42.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:43 compute-1 ceph-mon[81689]: pgmap v4433: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:43.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:45 compute-1 nova_compute[226101]: 2025-12-06 08:39:45.000 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:45 compute-1 nova_compute[226101]: 2025-12-06 08:39:45.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:45 compute-1 nova_compute[226101]: 2025-12-06 08:39:45.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:39:45 compute-1 nova_compute[226101]: 2025-12-06 08:39:45.608 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:39:45 compute-1 ceph-mon[81689]: pgmap v4434: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:45.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:46.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:47 compute-1 nova_compute[226101]: 2025-12-06 08:39:47.682 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:47 compute-1 ceph-mon[81689]: pgmap v4435: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:47.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:48.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:49 compute-1 nova_compute[226101]: 2025-12-06 08:39:49.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:49 compute-1 ceph-mon[81689]: pgmap v4436: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:49.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:50 compute-1 nova_compute[226101]: 2025-12-06 08:39:50.001 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:50.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:51.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:52 compute-1 ceph-mon[81689]: pgmap v4437: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:52 compute-1 nova_compute[226101]: 2025-12-06 08:39:52.684 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:52.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:53 compute-1 ceph-mon[81689]: pgmap v4438: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:53.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:54.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:55 compute-1 nova_compute[226101]: 2025-12-06 08:39:55.003 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:55 compute-1 sudo[330394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:55 compute-1 sudo[330394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:55 compute-1 sudo[330394]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:55 compute-1 sudo[330419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:39:55 compute-1 sudo[330419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:55 compute-1 sudo[330419]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:55 compute-1 sudo[330444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:55 compute-1 sudo[330444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:55 compute-1 sudo[330444]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:55.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:55 compute-1 sudo[330469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:39:55 compute-1 sudo[330469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:56 compute-1 sudo[330469]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:56 compute-1 ceph-mon[81689]: pgmap v4439: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:56 compute-1 nova_compute[226101]: 2025-12-06 08:39:56.870 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:56 compute-1 nova_compute[226101]: 2025-12-06 08:39:56.907 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:39:56 compute-1 nova_compute[226101]: 2025-12-06 08:39:56.908 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:39:56 compute-1 nova_compute[226101]: 2025-12-06 08:39:56.908 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:39:56 compute-1 nova_compute[226101]: 2025-12-06 08:39:56.909 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:39:56 compute-1 nova_compute[226101]: 2025-12-06 08:39:56.909 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:39:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:56.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:57 compute-1 podman[330526]: 2025-12-06 08:39:57.078196648 +0000 UTC m=+0.059665082 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:39:57 compute-1 podman[330525]: 2025-12-06 08:39:57.078598189 +0000 UTC m=+0.059824095 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 08:39:57 compute-1 podman[330527]: 2025-12-06 08:39:57.126601147 +0000 UTC m=+0.107685550 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:39:57 compute-1 ceph-mon[81689]: pgmap v4440: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:39:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:39:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:39:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:39:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:39:57 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:39:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:39:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3263683812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:57 compute-1 nova_compute[226101]: 2025-12-06 08:39:57.444 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:39:57 compute-1 nova_compute[226101]: 2025-12-06 08:39:57.599 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:39:57 compute-1 nova_compute[226101]: 2025-12-06 08:39:57.600 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4228MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:39:57 compute-1 nova_compute[226101]: 2025-12-06 08:39:57.600 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:39:57 compute-1 nova_compute[226101]: 2025-12-06 08:39:57.600 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:39:57 compute-1 nova_compute[226101]: 2025-12-06 08:39:57.686 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:57 compute-1 nova_compute[226101]: 2025-12-06 08:39:57.826 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:39:57 compute-1 nova_compute[226101]: 2025-12-06 08:39:57.826 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:39:57 compute-1 nova_compute[226101]: 2025-12-06 08:39:57.863 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:39:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:57.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.002479) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010398002549, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 2308, "num_deletes": 258, "total_data_size": 5724127, "memory_usage": 5802976, "flush_reason": "Manual Compaction"}
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010398032567, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 3754820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 96426, "largest_seqno": 98729, "table_properties": {"data_size": 3745477, "index_size": 5900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18572, "raw_average_key_size": 19, "raw_value_size": 3726930, "raw_average_value_size": 3998, "num_data_blocks": 260, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010174, "oldest_key_time": 1765010174, "file_creation_time": 1765010398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 30139 microseconds, and 7307 cpu microseconds.
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.032624) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 3754820 bytes OK
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.032648) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.034449) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.034464) EVENT_LOG_v1 {"time_micros": 1765010398034458, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.034481) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 5714097, prev total WAL file size 5714378, number of live WAL files 2.
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.035874) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373836' seq:72057594037927935, type:22 .. '6C6F676D0034303430' seq:0, type:0; will stop at (end)
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(3666KB)], [198(12MB)]
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010398035931, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 17127189, "oldest_snapshot_seqno": -1}
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 12821 keys, 17003607 bytes, temperature: kUnknown
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010398256105, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 17003607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16920933, "index_size": 49537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32069, "raw_key_size": 339114, "raw_average_key_size": 26, "raw_value_size": 16696831, "raw_average_value_size": 1302, "num_data_blocks": 1891, "num_entries": 12821, "num_filter_entries": 12821, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765010398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.256358) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 17003607 bytes
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.258491) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 77.8 rd, 77.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.8 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(9.1) write-amplify(4.5) OK, records in: 13350, records dropped: 529 output_compression: NoCompression
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.258511) EVENT_LOG_v1 {"time_micros": 1765010398258501, "job": 128, "event": "compaction_finished", "compaction_time_micros": 220236, "compaction_time_cpu_micros": 38713, "output_level": 6, "num_output_files": 1, "total_output_size": 17003607, "num_input_records": 13350, "num_output_records": 12821, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010398259264, "job": 128, "event": "table_file_deletion", "file_number": 200}
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010398261555, "job": 128, "event": "table_file_deletion", "file_number": 198}
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.035749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.261591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.261595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.261597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.261599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:58 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:39:58.261601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:39:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/409984988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:58 compute-1 nova_compute[226101]: 2025-12-06 08:39:58.318 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:39:58 compute-1 nova_compute[226101]: 2025-12-06 08:39:58.325 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:39:58 compute-1 nova_compute[226101]: 2025-12-06 08:39:58.342 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:39:58 compute-1 nova_compute[226101]: 2025-12-06 08:39:58.344 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:39:58 compute-1 nova_compute[226101]: 2025-12-06 08:39:58.344 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:39:58 compute-1 sshd[164848]: drop connection #0 from [154.209.4.183]:36066 on [38.102.83.204]:22 penalty: exceeded LoginGraceTime
Dec 06 08:39:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3263683812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/409984988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:58.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:59 compute-1 ceph-mon[81689]: pgmap v4441: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:39:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:59.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:00 compute-1 nova_compute[226101]: 2025-12-06 08:40:00.005 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:00.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:01 compute-1 ceph-mon[81689]: overall HEALTH_OK
Dec 06 08:40:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:40:01.720 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:40:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:40:01.721 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:40:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:40:01.721 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:40:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:01.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:02 compute-1 ceph-mon[81689]: pgmap v4442: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.288976) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402289250, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 307, "num_deletes": 251, "total_data_size": 122934, "memory_usage": 129864, "flush_reason": "Manual Compaction"}
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402293705, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 80391, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 98730, "largest_seqno": 99036, "table_properties": {"data_size": 78459, "index_size": 159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5111, "raw_average_key_size": 18, "raw_value_size": 74592, "raw_average_value_size": 269, "num_data_blocks": 7, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010398, "oldest_key_time": 1765010398, "file_creation_time": 1765010402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 4792 microseconds, and 2146 cpu microseconds.
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.293781) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 80391 bytes OK
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.293815) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.295797) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.295824) EVENT_LOG_v1 {"time_micros": 1765010402295815, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.295855) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 120714, prev total WAL file size 120714, number of live WAL files 2.
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.296368) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(78KB)], [201(16MB)]
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402296406, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 17083998, "oldest_snapshot_seqno": -1}
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 12588 keys, 14953644 bytes, temperature: kUnknown
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402427229, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 14953644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14874621, "index_size": 46462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31493, "raw_key_size": 334971, "raw_average_key_size": 26, "raw_value_size": 14656842, "raw_average_value_size": 1164, "num_data_blocks": 1752, "num_entries": 12588, "num_filter_entries": 12588, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765010402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.427633) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 14953644 bytes
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.430090) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.5 rd, 114.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 16.2 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(398.5) write-amplify(186.0) OK, records in: 13098, records dropped: 510 output_compression: NoCompression
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.430122) EVENT_LOG_v1 {"time_micros": 1765010402430109, "job": 130, "event": "compaction_finished", "compaction_time_micros": 130945, "compaction_time_cpu_micros": 34164, "output_level": 6, "num_output_files": 1, "total_output_size": 14953644, "num_input_records": 13098, "num_output_records": 12588, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402430954, "job": 130, "event": "table_file_deletion", "file_number": 203}
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402436614, "job": 130, "event": "table_file_deletion", "file_number": 201}
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.296325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.436745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.436752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.436755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.436758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:40:02.436761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-1 nova_compute[226101]: 2025-12-06 08:40:02.688 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:02.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:03 compute-1 sudo[330627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:40:03 compute-1 sudo[330627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:03 compute-1 sudo[330627]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:03 compute-1 sudo[330652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:40:03 compute-1 sudo[330652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:03 compute-1 sudo[330652]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:03 compute-1 ceph-mon[81689]: pgmap v4443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:40:03 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:40:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:03.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:04.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:05 compute-1 nova_compute[226101]: 2025-12-06 08:40:05.006 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:05 compute-1 ceph-mon[81689]: pgmap v4444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:05.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:06 compute-1 nova_compute[226101]: 2025-12-06 08:40:06.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:06 compute-1 nova_compute[226101]: 2025-12-06 08:40:06.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:40:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:06.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:07 compute-1 nova_compute[226101]: 2025-12-06 08:40:07.691 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:07.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:08 compute-1 ceph-mon[81689]: pgmap v4445: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:08.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:09 compute-1 ceph-mon[81689]: pgmap v4446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3340009868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:40:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3340009868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:40:09 compute-1 nova_compute[226101]: 2025-12-06 08:40:09.604 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:09 compute-1 nova_compute[226101]: 2025-12-06 08:40:09.605 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:40:09 compute-1 nova_compute[226101]: 2025-12-06 08:40:09.605 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:40:09 compute-1 nova_compute[226101]: 2025-12-06 08:40:09.639 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:40:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:09.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:10 compute-1 nova_compute[226101]: 2025-12-06 08:40:10.009 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:10.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:11.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:12 compute-1 ceph-mon[81689]: pgmap v4447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:12 compute-1 nova_compute[226101]: 2025-12-06 08:40:12.693 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:12.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:13 compute-1 ceph-mon[81689]: pgmap v4448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:13 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:13.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:14 compute-1 nova_compute[226101]: 2025-12-06 08:40:14.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:14 compute-1 nova_compute[226101]: 2025-12-06 08:40:14.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:14.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:15 compute-1 nova_compute[226101]: 2025-12-06 08:40:15.010 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:15 compute-1 sshd-session[330677]: Received disconnect from 14.103.118.136 port 36216:11: Bye Bye [preauth]
Dec 06 08:40:15 compute-1 sshd-session[330677]: Disconnected from authenticating user root 14.103.118.136 port 36216 [preauth]
Dec 06 08:40:15 compute-1 ceph-mon[81689]: pgmap v4449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:15 compute-1 sshd-session[330681]: Received disconnect from 186.87.166.141 port 49930:11: Bye Bye [preauth]
Dec 06 08:40:15 compute-1 sshd-session[330681]: Disconnected from authenticating user root 186.87.166.141 port 49930 [preauth]
Dec 06 08:40:15 compute-1 sshd-session[330679]: Received disconnect from 106.51.92.114 port 58275:11: Bye Bye [preauth]
Dec 06 08:40:15 compute-1 sshd-session[330679]: Disconnected from authenticating user root 106.51.92.114 port 58275 [preauth]
Dec 06 08:40:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:15.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:16.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:17 compute-1 ceph-mon[81689]: pgmap v4450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:17 compute-1 nova_compute[226101]: 2025-12-06 08:40:17.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:17 compute-1 nova_compute[226101]: 2025-12-06 08:40:17.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:40:17 compute-1 nova_compute[226101]: 2025-12-06 08:40:17.697 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:17.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:18 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:18.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:19 compute-1 ceph-mon[81689]: pgmap v4451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:19.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:20 compute-1 nova_compute[226101]: 2025-12-06 08:40:20.012 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:20.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:21 compute-1 sshd-session[330683]: Received disconnect from 45.120.216.232 port 60286:11: Bye Bye [preauth]
Dec 06 08:40:21 compute-1 sshd-session[330683]: Disconnected from authenticating user root 45.120.216.232 port 60286 [preauth]
Dec 06 08:40:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:22.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:22 compute-1 ceph-mon[81689]: pgmap v4452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:22 compute-1 nova_compute[226101]: 2025-12-06 08:40:22.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:22 compute-1 nova_compute[226101]: 2025-12-06 08:40:22.700 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:22.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3187170638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2298997144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2073063480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:23 compute-1 ceph-mon[81689]: pgmap v4453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1651602309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:23 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:24.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:24.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:25 compute-1 nova_compute[226101]: 2025-12-06 08:40:25.032 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:25 compute-1 ceph-mon[81689]: pgmap v4454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:26.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:26 compute-1 nova_compute[226101]: 2025-12-06 08:40:26.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:26.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:27 compute-1 ceph-mon[81689]: pgmap v4455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:27 compute-1 nova_compute[226101]: 2025-12-06 08:40:27.702 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:28.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:28 compute-1 podman[330686]: 2025-12-06 08:40:28.066021987 +0000 UTC m=+0.050887006 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:40:28 compute-1 podman[330685]: 2025-12-06 08:40:28.071211016 +0000 UTC m=+0.056454586 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:40:28 compute-1 podman[330687]: 2025-12-06 08:40:28.095708583 +0000 UTC m=+0.076227146 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:40:28 compute-1 nova_compute[226101]: 2025-12-06 08:40:28.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:28 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:28.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:29 compute-1 ceph-mon[81689]: pgmap v4456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:30.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:30 compute-1 nova_compute[226101]: 2025-12-06 08:40:30.080 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:30 compute-1 nova_compute[226101]: 2025-12-06 08:40:30.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:30.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:31 compute-1 sshd-session[330744]: Received disconnect from 14.225.3.79 port 50388:11: Bye Bye [preauth]
Dec 06 08:40:31 compute-1 sshd-session[330744]: Disconnected from authenticating user root 14.225.3.79 port 50388 [preauth]
Dec 06 08:40:31 compute-1 ceph-mon[81689]: pgmap v4457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:32.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:32 compute-1 nova_compute[226101]: 2025-12-06 08:40:32.705 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:32.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:33 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:33 compute-1 ceph-mon[81689]: pgmap v4458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:34.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:34.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:35 compute-1 nova_compute[226101]: 2025-12-06 08:40:35.080 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:35 compute-1 ceph-mon[81689]: pgmap v4459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:36.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:36.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:37 compute-1 nova_compute[226101]: 2025-12-06 08:40:37.707 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:37 compute-1 ceph-mon[81689]: pgmap v4460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:38.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:38 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:38.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:39 compute-1 ceph-mon[81689]: pgmap v4461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:40.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:40 compute-1 nova_compute[226101]: 2025-12-06 08:40:40.083 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:41 compute-1 sshd-session[330746]: Received disconnect from 91.144.158.231 port 28403:11: Bye Bye [preauth]
Dec 06 08:40:41 compute-1 sshd-session[330746]: Disconnected from authenticating user root 91.144.158.231 port 28403 [preauth]
Dec 06 08:40:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:42.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:42 compute-1 ceph-mon[81689]: pgmap v4462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:42 compute-1 nova_compute[226101]: 2025-12-06 08:40:42.709 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:42.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:43 compute-1 ceph-mon[81689]: pgmap v4463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:43 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:44.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:44.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:45 compute-1 nova_compute[226101]: 2025-12-06 08:40:45.085 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:45 compute-1 ceph-mon[81689]: pgmap v4464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:46.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:46.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:47 compute-1 nova_compute[226101]: 2025-12-06 08:40:47.711 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:48.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:48 compute-1 ceph-mon[81689]: pgmap v4465: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:48.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:50.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:50 compute-1 nova_compute[226101]: 2025-12-06 08:40:50.085 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:50 compute-1 ceph-mon[81689]: pgmap v4466: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:50 compute-1 sshd-session[330748]: Received disconnect from 124.18.141.70 port 48840:11: Bye Bye [preauth]
Dec 06 08:40:50 compute-1 sshd-session[330748]: Disconnected from authenticating user root 124.18.141.70 port 48840 [preauth]
Dec 06 08:40:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:51 compute-1 ceph-mon[81689]: pgmap v4467: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:52.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:52 compute-1 nova_compute[226101]: 2025-12-06 08:40:52.714 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:52.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:53 compute-1 ceph-mon[81689]: pgmap v4468: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:54.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:55 compute-1 nova_compute[226101]: 2025-12-06 08:40:55.086 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:55 compute-1 ceph-mon[81689]: pgmap v4469: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:56.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:57.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:57 compute-1 nova_compute[226101]: 2025-12-06 08:40:57.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:57 compute-1 nova_compute[226101]: 2025-12-06 08:40:57.717 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:57 compute-1 nova_compute[226101]: 2025-12-06 08:40:57.800 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:40:57 compute-1 nova_compute[226101]: 2025-12-06 08:40:57.800 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:40:57 compute-1 nova_compute[226101]: 2025-12-06 08:40:57.800 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:40:57 compute-1 nova_compute[226101]: 2025-12-06 08:40:57.800 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:40:57 compute-1 nova_compute[226101]: 2025-12-06 08:40:57.801 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:40:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:58.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:40:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1709071470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:58 compute-1 nova_compute[226101]: 2025-12-06 08:40:58.222 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:40:58 compute-1 nova_compute[226101]: 2025-12-06 08:40:58.366 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:40:58 compute-1 nova_compute[226101]: 2025-12-06 08:40:58.367 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4240MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:40:58 compute-1 nova_compute[226101]: 2025-12-06 08:40:58.367 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:40:58 compute-1 nova_compute[226101]: 2025-12-06 08:40:58.367 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:40:58 compute-1 nova_compute[226101]: 2025-12-06 08:40:58.908 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:40:58 compute-1 nova_compute[226101]: 2025-12-06 08:40:58.909 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:40:58 compute-1 nova_compute[226101]: 2025-12-06 08:40:58.926 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:40:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:40:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:59.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:59 compute-1 ceph-mon[81689]: pgmap v4470: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:59 compute-1 podman[330775]: 2025-12-06 08:40:59.078205428 +0000 UTC m=+0.063988218 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:40:59 compute-1 podman[330776]: 2025-12-06 08:40:59.078452215 +0000 UTC m=+0.058284374 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:40:59 compute-1 podman[330777]: 2025-12-06 08:40:59.122456006 +0000 UTC m=+0.091587599 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 08:40:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:40:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4092294334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:59 compute-1 nova_compute[226101]: 2025-12-06 08:40:59.380 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:40:59 compute-1 nova_compute[226101]: 2025-12-06 08:40:59.385 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:40:59 compute-1 nova_compute[226101]: 2025-12-06 08:40:59.423 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:40:59 compute-1 nova_compute[226101]: 2025-12-06 08:40:59.425 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:40:59 compute-1 nova_compute[226101]: 2025-12-06 08:40:59.425 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:41:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:00.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:00 compute-1 nova_compute[226101]: 2025-12-06 08:41:00.087 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1709071470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:00 compute-1 ceph-mon[81689]: pgmap v4471: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4092294334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:01.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:41:01.721 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:41:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:41:01.721 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:41:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:41:01.721 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:41:01 compute-1 sshd-session[330750]: Connection closed by 165.154.55.146 port 46828 [preauth]
Dec 06 08:41:02 compute-1 ceph-mon[81689]: pgmap v4472: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:02.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:02 compute-1 nova_compute[226101]: 2025-12-06 08:41:02.719 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:03.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:03 compute-1 sudo[330861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:03 compute-1 sudo[330861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:03 compute-1 sudo[330861]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:03 compute-1 sudo[330886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:41:03 compute-1 sudo[330886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:03 compute-1 sudo[330886]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:03 compute-1 sudo[330911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:03 compute-1 sudo[330911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:03 compute-1 sudo[330911]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:03 compute-1 ceph-mon[81689]: pgmap v4473: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:03 compute-1 sudo[330936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:41:03 compute-1 sudo[330936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:04.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:04 compute-1 sudo[330936]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:05.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:05 compute-1 nova_compute[226101]: 2025-12-06 08:41:05.089 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:05 compute-1 nova_compute[226101]: 2025-12-06 08:41:05.419 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:05 compute-1 ceph-mon[81689]: pgmap v4474: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:06.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:41:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:07.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:41:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:41:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:41:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:41:07 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:41:07 compute-1 nova_compute[226101]: 2025-12-06 08:41:07.722 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:08.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:08 compute-1 ceph-mon[81689]: pgmap v4475: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:09.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:09 compute-1 ceph-mon[81689]: pgmap v4476: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:09 compute-1 nova_compute[226101]: 2025-12-06 08:41:09.601 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:09 compute-1 nova_compute[226101]: 2025-12-06 08:41:09.601 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:41:09 compute-1 nova_compute[226101]: 2025-12-06 08:41:09.601 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:41:09 compute-1 nova_compute[226101]: 2025-12-06 08:41:09.641 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:41:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:41:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/166007238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:41:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:41:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/166007238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:41:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:10.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:10 compute-1 nova_compute[226101]: 2025-12-06 08:41:10.090 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/166007238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:41:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/166007238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:41:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:11.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:11 compute-1 ceph-mon[81689]: pgmap v4477: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 08:41:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:12.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:12 compute-1 nova_compute[226101]: 2025-12-06 08:41:12.724 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:13.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:13 compute-1 ceph-mon[81689]: pgmap v4478: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 08:41:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:14.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:14 compute-1 nova_compute[226101]: 2025-12-06 08:41:14.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:15.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:15 compute-1 nova_compute[226101]: 2025-12-06 08:41:15.092 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:15 compute-1 sudo[330993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:15 compute-1 sudo[330993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:15 compute-1 sudo[330993]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:15 compute-1 sudo[331018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:41:15 compute-1 sudo[331018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:15 compute-1 sudo[331018]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:15 compute-1 ceph-mon[81689]: pgmap v4479: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 08:41:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:15 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:16.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:16 compute-1 nova_compute[226101]: 2025-12-06 08:41:16.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:17.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:17 compute-1 nova_compute[226101]: 2025-12-06 08:41:17.726 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:18.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:18 compute-1 ceph-mon[81689]: pgmap v4480: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 06 08:41:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:19.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:19 compute-1 ceph-mon[81689]: pgmap v4481: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 06 08:41:19 compute-1 nova_compute[226101]: 2025-12-06 08:41:19.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:19 compute-1 nova_compute[226101]: 2025-12-06 08:41:19.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:41:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:20.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:20 compute-1 nova_compute[226101]: 2025-12-06 08:41:20.093 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:21.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:21 compute-1 ceph-mon[81689]: pgmap v4482: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Dec 06 08:41:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:41:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:22.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4159804911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:22 compute-1 nova_compute[226101]: 2025-12-06 08:41:22.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:22 compute-1 nova_compute[226101]: 2025-12-06 08:41:22.728 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:23.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:23 compute-1 ceph-mon[81689]: pgmap v4483: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Dec 06 08:41:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1737536969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:24.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3839346878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3380316587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:25.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:25 compute-1 nova_compute[226101]: 2025-12-06 08:41:25.094 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:25 compute-1 ceph-mon[81689]: pgmap v4484: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Dec 06 08:41:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:26.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:26 compute-1 nova_compute[226101]: 2025-12-06 08:41:26.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:27.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:27 compute-1 nova_compute[226101]: 2025-12-06 08:41:27.731 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:27 compute-1 ceph-mon[81689]: pgmap v4485: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Dec 06 08:41:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:28.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:28 compute-1 nova_compute[226101]: 2025-12-06 08:41:28.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:29.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:29 compute-1 ceph-mon[81689]: pgmap v4486: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Dec 06 08:41:30 compute-1 podman[331043]: 2025-12-06 08:41:30.076034004 +0000 UTC m=+0.058214902 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 08:41:30 compute-1 podman[331044]: 2025-12-06 08:41:30.077692118 +0000 UTC m=+0.056097275 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 08:41:30 compute-1 nova_compute[226101]: 2025-12-06 08:41:30.096 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:30.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:30 compute-1 podman[331045]: 2025-12-06 08:41:30.127195457 +0000 UTC m=+0.106729325 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 08:41:30 compute-1 nova_compute[226101]: 2025-12-06 08:41:30.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:31.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:31 compute-1 ceph-mon[81689]: pgmap v4487: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Dec 06 08:41:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:32.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:32 compute-1 sshd-session[331108]: Received disconnect from 154.209.4.183 port 59260:11: Bye Bye [preauth]
Dec 06 08:41:32 compute-1 sshd-session[331108]: Disconnected from authenticating user root 154.209.4.183 port 59260 [preauth]
Dec 06 08:41:32 compute-1 nova_compute[226101]: 2025-12-06 08:41:32.780 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:32 compute-1 sshd-session[331110]: Received disconnect from 106.51.92.114 port 44828:11: Bye Bye [preauth]
Dec 06 08:41:32 compute-1 sshd-session[331110]: Disconnected from authenticating user root 106.51.92.114 port 44828 [preauth]
Dec 06 08:41:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:33 compute-1 ceph-mon[81689]: pgmap v4488: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s
Dec 06 08:41:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:34.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:35.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:35 compute-1 nova_compute[226101]: 2025-12-06 08:41:35.097 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:35 compute-1 ceph-mon[81689]: pgmap v4489: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:41:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:36.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:37.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:37 compute-1 nova_compute[226101]: 2025-12-06 08:41:37.782 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:37 compute-1 ceph-mon[81689]: pgmap v4490: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:41:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:41:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:38.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:39.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:39 compute-1 ceph-mon[81689]: pgmap v4491: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:40 compute-1 nova_compute[226101]: 2025-12-06 08:41:40.099 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:40.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:41.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:41 compute-1 ceph-mon[81689]: pgmap v4492: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:42.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:42 compute-1 nova_compute[226101]: 2025-12-06 08:41:42.786 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:43.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:43 compute-1 sshd-session[331112]: Received disconnect from 45.120.216.232 port 60622:11: Bye Bye [preauth]
Dec 06 08:41:43 compute-1 sshd-session[331112]: Disconnected from authenticating user root 45.120.216.232 port 60622 [preauth]
Dec 06 08:41:43 compute-1 ceph-mon[81689]: pgmap v4493: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:44.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:45.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:45 compute-1 nova_compute[226101]: 2025-12-06 08:41:45.101 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:46.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:47.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:47 compute-1 ceph-mon[81689]: pgmap v4494: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:47 compute-1 nova_compute[226101]: 2025-12-06 08:41:47.788 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:48 compute-1 ceph-mon[81689]: pgmap v4495: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:49.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:49 compute-1 ceph-mon[81689]: pgmap v4496: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:50 compute-1 nova_compute[226101]: 2025-12-06 08:41:50.101 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:50.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:51.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:51 compute-1 ceph-mon[81689]: pgmap v4497: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:52.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #205. Immutable memtables: 0.
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.491477) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 205
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512491514, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 1271, "num_deletes": 251, "total_data_size": 2920639, "memory_usage": 2960232, "flush_reason": "Manual Compaction"}
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #206: started
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512500252, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 206, "file_size": 1156403, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 99041, "largest_seqno": 100307, "table_properties": {"data_size": 1152133, "index_size": 1793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11184, "raw_average_key_size": 20, "raw_value_size": 1142950, "raw_average_value_size": 2116, "num_data_blocks": 81, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010402, "oldest_key_time": 1765010402, "file_creation_time": 1765010512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 8851 microseconds, and 4012 cpu microseconds.
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.500327) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #206: 1156403 bytes OK
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.500346) [db/memtable_list.cc:519] [default] Level-0 commit table #206 started
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.502500) [db/memtable_list.cc:722] [default] Level-0 commit table #206: memtable #1 done
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.502512) EVENT_LOG_v1 {"time_micros": 1765010512502508, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.502526) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 2914683, prev total WAL file size 2914683, number of live WAL files 2.
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000202.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.503357) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353239' seq:72057594037927935, type:22 .. '6D6772737461740033373831' seq:0, type:0; will stop at (end)
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [206(1129KB)], [204(14MB)]
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512503435, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [206], "files_L6": [204], "score": -1, "input_data_size": 16110047, "oldest_snapshot_seqno": -1}
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #207: 12662 keys, 13009075 bytes, temperature: kUnknown
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512596973, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 207, "file_size": 13009075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12932889, "index_size": 43433, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31685, "raw_key_size": 336667, "raw_average_key_size": 26, "raw_value_size": 12716894, "raw_average_value_size": 1004, "num_data_blocks": 1629, "num_entries": 12662, "num_filter_entries": 12662, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765010512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 207, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.597236) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 13009075 bytes
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.598838) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.1 rd, 139.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 14.3 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(25.2) write-amplify(11.2) OK, records in: 13128, records dropped: 466 output_compression: NoCompression
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.598858) EVENT_LOG_v1 {"time_micros": 1765010512598849, "job": 132, "event": "compaction_finished", "compaction_time_micros": 93612, "compaction_time_cpu_micros": 34135, "output_level": 6, "num_output_files": 1, "total_output_size": 13009075, "num_input_records": 13128, "num_output_records": 12662, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512599188, "job": 132, "event": "table_file_deletion", "file_number": 206}
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000204.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512602514, "job": 132, "event": "table_file_deletion", "file_number": 204}
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.503267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.602593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.602599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.602604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.602607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:52 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:41:52.602609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:52 compute-1 nova_compute[226101]: 2025-12-06 08:41:52.791 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:53 compute-1 ceph-mon[81689]: pgmap v4498: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:54 compute-1 sshd-session[331114]: Received disconnect from 186.87.166.141 port 51032:11: Bye Bye [preauth]
Dec 06 08:41:54 compute-1 sshd-session[331114]: Disconnected from authenticating user root 186.87.166.141 port 51032 [preauth]
Dec 06 08:41:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:54.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:55.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:55 compute-1 nova_compute[226101]: 2025-12-06 08:41:55.104 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:55 compute-1 ceph-mon[81689]: pgmap v4499: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:56.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:57.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:57 compute-1 ceph-mon[81689]: pgmap v4500: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:57 compute-1 nova_compute[226101]: 2025-12-06 08:41:57.793 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:58.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:58 compute-1 nova_compute[226101]: 2025-12-06 08:41:58.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:58 compute-1 nova_compute[226101]: 2025-12-06 08:41:58.659 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:41:58 compute-1 nova_compute[226101]: 2025-12-06 08:41:58.660 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:41:58 compute-1 nova_compute[226101]: 2025-12-06 08:41:58.660 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:41:58 compute-1 nova_compute[226101]: 2025-12-06 08:41:58.660 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:41:58 compute-1 nova_compute[226101]: 2025-12-06 08:41:58.661 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:41:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:41:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:59.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:41:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2386261853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.129 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.303 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.304 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4250MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.305 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.305 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.385 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.385 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.543 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.578 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.578 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.600 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.629 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:41:59 compute-1 nova_compute[226101]: 2025-12-06 08:41:59.647 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:41:59 compute-1 ceph-mon[81689]: pgmap v4501: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:59 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2386261853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:42:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1094671400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:00 compute-1 nova_compute[226101]: 2025-12-06 08:42:00.059 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:42:00 compute-1 nova_compute[226101]: 2025-12-06 08:42:00.064 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:42:00 compute-1 nova_compute[226101]: 2025-12-06 08:42:00.081 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:42:00 compute-1 nova_compute[226101]: 2025-12-06 08:42:00.085 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:42:00 compute-1 nova_compute[226101]: 2025-12-06 08:42:00.085 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:42:00 compute-1 nova_compute[226101]: 2025-12-06 08:42:00.140 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:00.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:01 compute-1 podman[331161]: 2025-12-06 08:42:01.067575865 +0000 UTC m=+0.053139237 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:42:01 compute-1 podman[331160]: 2025-12-06 08:42:01.073212566 +0000 UTC m=+0.061161372 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 08:42:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1094671400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:01.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:01 compute-1 podman[331162]: 2025-12-06 08:42:01.09833625 +0000 UTC m=+0.078628341 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:42:01.721 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:42:01.722 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:42:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:42:01.722 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:42:02 compute-1 ceph-mon[81689]: pgmap v4502: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:02.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:02 compute-1 sshd-session[331224]: Received disconnect from 91.144.158.231 port 4320:11: Bye Bye [preauth]
Dec 06 08:42:02 compute-1 sshd-session[331224]: Disconnected from authenticating user root 91.144.158.231 port 4320 [preauth]
Dec 06 08:42:02 compute-1 nova_compute[226101]: 2025-12-06 08:42:02.795 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:03 compute-1 ceph-mon[81689]: pgmap v4503: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:04.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:05.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:05 compute-1 nova_compute[226101]: 2025-12-06 08:42:05.141 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:05 compute-1 ceph-mon[81689]: pgmap v4504: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:06.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:07.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:07 compute-1 ceph-mon[81689]: pgmap v4505: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:07 compute-1 nova_compute[226101]: 2025-12-06 08:42:07.798 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:08.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:09.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:42:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/391879808' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:42:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:42:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/391879808' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:42:09 compute-1 ceph-mon[81689]: pgmap v4506: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/391879808' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:42:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/391879808' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:42:10 compute-1 nova_compute[226101]: 2025-12-06 08:42:10.143 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:10.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:11.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:11 compute-1 ceph-mon[81689]: pgmap v4507: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:12 compute-1 nova_compute[226101]: 2025-12-06 08:42:12.086 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:12 compute-1 nova_compute[226101]: 2025-12-06 08:42:12.087 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:42:12 compute-1 nova_compute[226101]: 2025-12-06 08:42:12.087 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:42:12 compute-1 nova_compute[226101]: 2025-12-06 08:42:12.110 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:42:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:12.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:12 compute-1 nova_compute[226101]: 2025-12-06 08:42:12.801 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:13.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:13 compute-1 ceph-mon[81689]: pgmap v4508: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:14.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:15.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:15 compute-1 nova_compute[226101]: 2025-12-06 08:42:15.144 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:15 compute-1 nova_compute[226101]: 2025-12-06 08:42:15.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:15 compute-1 ceph-mon[81689]: pgmap v4509: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:15 compute-1 sudo[331226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:15 compute-1 sudo[331226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:15 compute-1 sudo[331226]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:15 compute-1 sudo[331251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:42:15 compute-1 sudo[331251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:15 compute-1 sudo[331251]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:15 compute-1 sudo[331276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:15 compute-1 sudo[331276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:15 compute-1 sudo[331276]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:15 compute-1 sudo[331301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 08:42:15 compute-1 sudo[331301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:16.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:16 compute-1 podman[331396]: 2025-12-06 08:42:16.500380549 +0000 UTC m=+0.077906921 container exec 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:42:16 compute-1 nova_compute[226101]: 2025-12-06 08:42:16.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:16 compute-1 podman[331396]: 2025-12-06 08:42:16.627026116 +0000 UTC m=+0.204552518 container exec_died 23be104115800eec2d46a871ae6c2d15b12eccd73458eddfa8729ed52d8d1644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-1, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:42:17 compute-1 sudo[331301]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:17.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:17 compute-1 sudo[331519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:17 compute-1 sudo[331519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:17 compute-1 ceph-mon[81689]: pgmap v4510: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:17 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:17 compute-1 sudo[331519]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:17 compute-1 sudo[331546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:42:17 compute-1 sudo[331546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:17 compute-1 sudo[331546]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:17 compute-1 sudo[331571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:17 compute-1 sudo[331571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:17 compute-1 sudo[331571]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:17 compute-1 sudo[331596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:42:17 compute-1 sudo[331596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:17 compute-1 nova_compute[226101]: 2025-12-06 08:42:17.801 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:17 compute-1 sudo[331596]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:18.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:42:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:42:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:42:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:42:18 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:42:18 compute-1 sshd-session[331544]: Received disconnect from 124.18.141.70 port 55668:11: Bye Bye [preauth]
Dec 06 08:42:18 compute-1 sshd-session[331544]: Disconnected from authenticating user root 124.18.141.70 port 55668 [preauth]
Dec 06 08:42:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:19.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:19 compute-1 ceph-mon[81689]: pgmap v4511: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:19 compute-1 nova_compute[226101]: 2025-12-06 08:42:19.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:19 compute-1 nova_compute[226101]: 2025-12-06 08:42:19.591 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:42:20 compute-1 nova_compute[226101]: 2025-12-06 08:42:20.147 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:20.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:21.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:21 compute-1 ceph-mon[81689]: pgmap v4512: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:22.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:22 compute-1 nova_compute[226101]: 2025-12-06 08:42:22.804 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:23.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:23 compute-1 sshd-session[331653]: Received disconnect from 14.225.3.79 port 51032:11: Bye Bye [preauth]
Dec 06 08:42:23 compute-1 sshd-session[331653]: Disconnected from authenticating user root 14.225.3.79 port 51032 [preauth]
Dec 06 08:42:23 compute-1 ceph-mon[81689]: pgmap v4513: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:24 compute-1 nova_compute[226101]: 2025-12-06 08:42:24.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1943936127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3825190703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:25.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:25 compute-1 nova_compute[226101]: 2025-12-06 08:42:25.148 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:25 compute-1 sudo[331655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:25 compute-1 sudo[331655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:25 compute-1 sudo[331655]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:25 compute-1 sudo[331680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:42:25 compute-1 sudo[331680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:25 compute-1 sudo[331680]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:26 compute-1 ceph-mon[81689]: pgmap v4514: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3695553771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2688271195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:26.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:27.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:27 compute-1 nova_compute[226101]: 2025-12-06 08:42:27.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:27 compute-1 nova_compute[226101]: 2025-12-06 08:42:27.836 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:28 compute-1 ceph-mon[81689]: pgmap v4515: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:28.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:29 compute-1 ceph-mon[81689]: pgmap v4516: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:29.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:30 compute-1 nova_compute[226101]: 2025-12-06 08:42:30.150 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:30.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:30 compute-1 nova_compute[226101]: 2025-12-06 08:42:30.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:31.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:31 compute-1 nova_compute[226101]: 2025-12-06 08:42:31.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:31 compute-1 ceph-mon[81689]: pgmap v4517: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:32 compute-1 podman[331706]: 2025-12-06 08:42:32.086647221 +0000 UTC m=+0.065697783 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 06 08:42:32 compute-1 podman[331705]: 2025-12-06 08:42:32.105199699 +0000 UTC m=+0.083876971 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:42:32 compute-1 podman[331707]: 2025-12-06 08:42:32.115295339 +0000 UTC m=+0.094133866 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 08:42:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:32.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:32 compute-1 nova_compute[226101]: 2025-12-06 08:42:32.838 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:33.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:34 compute-1 ceph-mon[81689]: pgmap v4518: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:34.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:35 compute-1 ceph-mon[81689]: pgmap v4519: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:35.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:35 compute-1 nova_compute[226101]: 2025-12-06 08:42:35.153 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:36.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:37.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:37 compute-1 ceph-mon[81689]: pgmap v4520: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:37 compute-1 nova_compute[226101]: 2025-12-06 08:42:37.841 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:38.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:39.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:39 compute-1 ceph-mon[81689]: pgmap v4521: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:40 compute-1 nova_compute[226101]: 2025-12-06 08:42:40.153 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:40.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:41.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:41 compute-1 ceph-mon[81689]: pgmap v4522: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:42.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:42 compute-1 nova_compute[226101]: 2025-12-06 08:42:42.843 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:43.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:44.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:44 compute-1 ceph-mon[81689]: pgmap v4523: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:45 compute-1 nova_compute[226101]: 2025-12-06 08:42:45.157 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:45.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:46.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:46 compute-1 ceph-mon[81689]: pgmap v4524: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:47.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:47 compute-1 ceph-mon[81689]: pgmap v4525: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:47 compute-1 nova_compute[226101]: 2025-12-06 08:42:47.846 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:48.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:49.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:49 compute-1 ceph-mon[81689]: pgmap v4526: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:50 compute-1 nova_compute[226101]: 2025-12-06 08:42:50.161 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:50.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:51.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:51 compute-1 ceph-mon[81689]: pgmap v4527: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:52.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:52 compute-1 nova_compute[226101]: 2025-12-06 08:42:52.848 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:53.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:53 compute-1 sshd-session[331774]: Received disconnect from 165.154.55.146 port 32952:11: Bye Bye [preauth]
Dec 06 08:42:53 compute-1 sshd-session[331774]: Disconnected from authenticating user root 165.154.55.146 port 32952 [preauth]
Dec 06 08:42:53 compute-1 ceph-mon[81689]: pgmap v4528: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:54.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:55.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:55 compute-1 nova_compute[226101]: 2025-12-06 08:42:55.203 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:55 compute-1 ceph-mon[81689]: pgmap v4529: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:56.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:57.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:57 compute-1 ceph-mon[81689]: pgmap v4530: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:57 compute-1 sshd-session[331776]: Received disconnect from 106.51.92.114 port 59616:11: Bye Bye [preauth]
Dec 06 08:42:57 compute-1 sshd-session[331776]: Disconnected from authenticating user root 106.51.92.114 port 59616 [preauth]
Dec 06 08:42:57 compute-1 nova_compute[226101]: 2025-12-06 08:42:57.850 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:58.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:42:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:59.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:59 compute-1 nova_compute[226101]: 2025-12-06 08:42:59.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:59 compute-1 nova_compute[226101]: 2025-12-06 08:42:59.615 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:42:59 compute-1 nova_compute[226101]: 2025-12-06 08:42:59.615 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:42:59 compute-1 nova_compute[226101]: 2025-12-06 08:42:59.615 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:42:59 compute-1 nova_compute[226101]: 2025-12-06 08:42:59.616 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:42:59 compute-1 nova_compute[226101]: 2025-12-06 08:42:59.616 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:42:59 compute-1 ceph-mon[81689]: pgmap v4531: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:43:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2862912728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.043 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.186 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.188 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4219MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.188 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.188 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.204 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:00.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.271 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.272 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.290 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:43:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2862912728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:43:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/608419154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.716 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.723 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.744 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.746 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:43:00 compute-1 nova_compute[226101]: 2025-12-06 08:43:00.746 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:43:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:01.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:01 compute-1 ceph-mon[81689]: pgmap v4532: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/608419154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:43:01.723 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:43:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:43:01.723 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:43:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:43:01.723 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:43:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:02.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:02 compute-1 nova_compute[226101]: 2025-12-06 08:43:02.853 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:03 compute-1 podman[331823]: 2025-12-06 08:43:03.073361309 +0000 UTC m=+0.051333259 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:43:03 compute-1 podman[331822]: 2025-12-06 08:43:03.078182468 +0000 UTC m=+0.056154308 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec 06 08:43:03 compute-1 podman[331824]: 2025-12-06 08:43:03.09947983 +0000 UTC m=+0.072893727 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 08:43:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:03.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:03 compute-1 ceph-mon[81689]: pgmap v4533: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:04.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:04 compute-1 nova_compute[226101]: 2025-12-06 08:43:04.741 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:05 compute-1 nova_compute[226101]: 2025-12-06 08:43:05.207 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:05.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:05 compute-1 ceph-mon[81689]: pgmap v4534: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:06 compute-1 sshd-session[331885]: Received disconnect from 45.120.216.232 port 60958:11: Bye Bye [preauth]
Dec 06 08:43:06 compute-1 sshd-session[331885]: Disconnected from authenticating user root 45.120.216.232 port 60958 [preauth]
Dec 06 08:43:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:06.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:07.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:07 compute-1 sshd-session[331887]: Received disconnect from 154.209.4.183 port 35384:11: Bye Bye [preauth]
Dec 06 08:43:07 compute-1 sshd-session[331887]: Disconnected from authenticating user root 154.209.4.183 port 35384 [preauth]
Dec 06 08:43:07 compute-1 ceph-mon[81689]: pgmap v4535: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:07 compute-1 nova_compute[226101]: 2025-12-06 08:43:07.855 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:08.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:09.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:09 compute-1 ceph-mon[81689]: pgmap v4536: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3926080530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:43:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3926080530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:43:10 compute-1 nova_compute[226101]: 2025-12-06 08:43:10.209 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:10.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:10 compute-1 nova_compute[226101]: 2025-12-06 08:43:10.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:10 compute-1 nova_compute[226101]: 2025-12-06 08:43:10.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:43:10 compute-1 nova_compute[226101]: 2025-12-06 08:43:10.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:43:10 compute-1 nova_compute[226101]: 2025-12-06 08:43:10.611 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:43:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:11.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:12 compute-1 ceph-mon[81689]: pgmap v4537: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:12.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:12 compute-1 nova_compute[226101]: 2025-12-06 08:43:12.867 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:13.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:13 compute-1 ceph-mon[81689]: pgmap v4538: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:14.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:15 compute-1 nova_compute[226101]: 2025-12-06 08:43:15.210 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:15.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:15 compute-1 ceph-mon[81689]: pgmap v4539: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:16.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:16 compute-1 nova_compute[226101]: 2025-12-06 08:43:16.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:17.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:17 compute-1 nova_compute[226101]: 2025-12-06 08:43:17.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:17 compute-1 ceph-mon[81689]: pgmap v4540: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:17 compute-1 nova_compute[226101]: 2025-12-06 08:43:17.870 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:18.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:19.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:19 compute-1 ceph-mon[81689]: pgmap v4541: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:20 compute-1 nova_compute[226101]: 2025-12-06 08:43:20.213 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:20.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:20 compute-1 nova_compute[226101]: 2025-12-06 08:43:20.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:20 compute-1 nova_compute[226101]: 2025-12-06 08:43:20.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:43:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:21.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:21 compute-1 ceph-mon[81689]: pgmap v4542: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2520334797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:22.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1253960128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2003389108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:22 compute-1 nova_compute[226101]: 2025-12-06 08:43:22.918 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:23 compute-1 sshd-session[331889]: Received disconnect from 91.144.158.231 port 44806:11: Bye Bye [preauth]
Dec 06 08:43:23 compute-1 sshd-session[331889]: Disconnected from authenticating user root 91.144.158.231 port 44806 [preauth]
Dec 06 08:43:23 compute-1 ceph-mon[81689]: pgmap v4543: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2416721712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:24.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:24 compute-1 sshd-session[331891]: Received disconnect from 107.150.106.178 port 49244:11: Bye Bye [preauth]
Dec 06 08:43:24 compute-1 sshd-session[331891]: Disconnected from authenticating user root 107.150.106.178 port 49244 [preauth]
Dec 06 08:43:25 compute-1 nova_compute[226101]: 2025-12-06 08:43:25.217 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:25.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:25 compute-1 sudo[331893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:25 compute-1 sudo[331893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:25 compute-1 sudo[331893]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:25 compute-1 sudo[331918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:43:25 compute-1 sudo[331918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:25 compute-1 sudo[331918]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:25 compute-1 sudo[331943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:25 compute-1 sudo[331943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:25 compute-1 sudo[331943]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:25 compute-1 nova_compute[226101]: 2025-12-06 08:43:25.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:25 compute-1 sudo[331968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:43:25 compute-1 sudo[331968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:25 compute-1 ceph-mon[81689]: pgmap v4544: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:26 compute-1 sudo[331968]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:26.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:43:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:43:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:43:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:43:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:43:26 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:43:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:27 compute-1 ceph-mon[81689]: pgmap v4545: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:27 compute-1 nova_compute[226101]: 2025-12-06 08:43:27.920 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:28.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:29 compute-1 nova_compute[226101]: 2025-12-06 08:43:29.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:29 compute-1 ceph-mon[81689]: pgmap v4546: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:30 compute-1 nova_compute[226101]: 2025-12-06 08:43:30.218 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:30.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:31 compute-1 nova_compute[226101]: 2025-12-06 08:43:31.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:31 compute-1 nova_compute[226101]: 2025-12-06 08:43:31.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:31 compute-1 ceph-mon[81689]: pgmap v4547: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:32 compute-1 sudo[332024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:32 compute-1 sudo[332024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:32 compute-1 sudo[332024]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:32 compute-1 sudo[332049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:43:32 compute-1 sudo[332049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:32 compute-1 sudo[332049]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:32.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:43:32 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:43:32 compute-1 nova_compute[226101]: 2025-12-06 08:43:32.961 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:33.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:33 compute-1 ceph-mon[81689]: pgmap v4548: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:34 compute-1 podman[332075]: 2025-12-06 08:43:34.087252438 +0000 UTC m=+0.058891500 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 08:43:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:34 compute-1 podman[332074]: 2025-12-06 08:43:34.093446705 +0000 UTC m=+0.066003621 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 08:43:34 compute-1 podman[332076]: 2025-12-06 08:43:34.111186631 +0000 UTC m=+0.081200639 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 08:43:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:34.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:35 compute-1 nova_compute[226101]: 2025-12-06 08:43:35.229 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:35.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:36 compute-1 ceph-mon[81689]: pgmap v4549: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:36 compute-1 sshd-session[332139]: Received disconnect from 186.87.166.141 port 52154:11: Bye Bye [preauth]
Dec 06 08:43:36 compute-1 sshd-session[332139]: Disconnected from authenticating user root 186.87.166.141 port 52154 [preauth]
Dec 06 08:43:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:36.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:37 compute-1 nova_compute[226101]: 2025-12-06 08:43:37.962 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:38 compute-1 ceph-mon[81689]: pgmap v4550: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:38.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:39 compute-1 ceph-mon[81689]: pgmap v4551: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:39.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:40 compute-1 nova_compute[226101]: 2025-12-06 08:43:40.231 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:40.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:41.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:41 compute-1 ceph-mon[81689]: pgmap v4552: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:42.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:42 compute-1 nova_compute[226101]: 2025-12-06 08:43:42.964 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:43:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:43:43 compute-1 ceph-mon[81689]: pgmap v4553: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #208. Immutable memtables: 0.
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.724972) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 208
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623725034, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 1337, "num_deletes": 251, "total_data_size": 3036734, "memory_usage": 3072840, "flush_reason": "Manual Compaction"}
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #209: started
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623739697, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 209, "file_size": 1981695, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 100312, "largest_seqno": 101644, "table_properties": {"data_size": 1975905, "index_size": 3120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12323, "raw_average_key_size": 19, "raw_value_size": 1964333, "raw_average_value_size": 3183, "num_data_blocks": 138, "num_entries": 617, "num_filter_entries": 617, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010513, "oldest_key_time": 1765010513, "file_creation_time": 1765010623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 14776 microseconds, and 4659 cpu microseconds.
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.739744) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #209: 1981695 bytes OK
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.739769) [db/memtable_list.cc:519] [default] Level-0 commit table #209 started
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.741510) [db/memtable_list.cc:722] [default] Level-0 commit table #209: memtable #1 done
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.741522) EVENT_LOG_v1 {"time_micros": 1765010623741518, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.741538) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 3030484, prev total WAL file size 3030484, number of live WAL files 2.
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000205.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.742249) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [209(1935KB)], [207(12MB)]
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623742291, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [209], "files_L6": [207], "score": -1, "input_data_size": 14990770, "oldest_snapshot_seqno": -1}
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #210: 12762 keys, 12944888 bytes, temperature: kUnknown
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623810364, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 210, "file_size": 12944888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12868128, "index_size": 43751, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31941, "raw_key_size": 339420, "raw_average_key_size": 26, "raw_value_size": 12650508, "raw_average_value_size": 991, "num_data_blocks": 1637, "num_entries": 12762, "num_filter_entries": 12762, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765010623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 210, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.810623) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12944888 bytes
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.811852) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.9 rd, 189.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.4 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(14.1) write-amplify(6.5) OK, records in: 13279, records dropped: 517 output_compression: NoCompression
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.811867) EVENT_LOG_v1 {"time_micros": 1765010623811860, "job": 134, "event": "compaction_finished", "compaction_time_micros": 68165, "compaction_time_cpu_micros": 31308, "output_level": 6, "num_output_files": 1, "total_output_size": 12944888, "num_input_records": 13279, "num_output_records": 12762, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623812271, "job": 134, "event": "table_file_deletion", "file_number": 209}
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000207.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623814365, "job": 134, "event": "table_file_deletion", "file_number": 207}
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.742174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.814441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.814447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.814448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.814450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:43 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:43:43.814451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:44.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:45 compute-1 nova_compute[226101]: 2025-12-06 08:43:45.232 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:45.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:45 compute-1 ceph-mon[81689]: pgmap v4554: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:45 compute-1 sshd-session[332141]: Received disconnect from 124.18.141.70 port 50324:11: Bye Bye [preauth]
Dec 06 08:43:45 compute-1 sshd-session[332141]: Disconnected from authenticating user root 124.18.141.70 port 50324 [preauth]
Dec 06 08:43:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:46.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:47.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:47 compute-1 ceph-mon[81689]: pgmap v4555: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:47 compute-1 nova_compute[226101]: 2025-12-06 08:43:47.967 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:48.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:49.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:49 compute-1 ceph-mon[81689]: pgmap v4556: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:50 compute-1 nova_compute[226101]: 2025-12-06 08:43:50.234 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:50.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:51.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:52 compute-1 ceph-mon[81689]: pgmap v4557: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:52.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:52 compute-1 nova_compute[226101]: 2025-12-06 08:43:52.969 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:53.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:54 compute-1 ceph-mon[81689]: pgmap v4558: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:54.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:55 compute-1 nova_compute[226101]: 2025-12-06 08:43:55.236 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:55.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:56 compute-1 ceph-mon[81689]: pgmap v4559: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:56.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:57.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:57 compute-1 nova_compute[226101]: 2025-12-06 08:43:57.971 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:58 compute-1 ceph-mon[81689]: pgmap v4560: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:58.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:43:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:59.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:59 compute-1 nova_compute[226101]: 2025-12-06 08:43:59.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:59 compute-1 nova_compute[226101]: 2025-12-06 08:43:59.677 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:43:59 compute-1 nova_compute[226101]: 2025-12-06 08:43:59.677 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:43:59 compute-1 nova_compute[226101]: 2025-12-06 08:43:59.677 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:43:59 compute-1 nova_compute[226101]: 2025-12-06 08:43:59.677 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:43:59 compute-1 nova_compute[226101]: 2025-12-06 08:43:59.678 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:44:00 compute-1 ceph-mon[81689]: pgmap v4561: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:44:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2339240083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.123 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.237 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.270 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.271 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4235MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.271 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.272 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:44:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:00.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.342 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.342 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.460 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:44:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:44:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4115477988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.873 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.880 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.908 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.910 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:44:00 compute-1 nova_compute[226101]: 2025-12-06 08:44:00.911 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:44:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2339240083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:01 compute-1 ceph-mon[81689]: pgmap v4562: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4115477988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:44:01.724 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:44:01.725 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:44:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:44:01.725 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:44:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:02.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:03 compute-1 nova_compute[226101]: 2025-12-06 08:44:02.999 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:04 compute-1 ceph-mon[81689]: pgmap v4563: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:04.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:05 compute-1 podman[332187]: 2025-12-06 08:44:05.094314495 +0000 UTC m=+0.073802141 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:44:05 compute-1 podman[332188]: 2025-12-06 08:44:05.106055579 +0000 UTC m=+0.085821492 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:44:05 compute-1 podman[332189]: 2025-12-06 08:44:05.117228159 +0000 UTC m=+0.092031529 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 06 08:44:05 compute-1 nova_compute[226101]: 2025-12-06 08:44:05.239 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:05.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:05 compute-1 ceph-mon[81689]: pgmap v4564: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:06.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:07.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:07 compute-1 ceph-mon[81689]: pgmap v4565: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:08 compute-1 nova_compute[226101]: 2025-12-06 08:44:08.043 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:08.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:09.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:10 compute-1 ceph-mon[81689]: pgmap v4566: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2208837905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:44:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/2208837905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:44:10 compute-1 nova_compute[226101]: 2025-12-06 08:44:10.241 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:10.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:11.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:12 compute-1 ceph-mon[81689]: pgmap v4567: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:12.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:12 compute-1 nova_compute[226101]: 2025-12-06 08:44:12.911 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:12 compute-1 nova_compute[226101]: 2025-12-06 08:44:12.911 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:44:12 compute-1 nova_compute[226101]: 2025-12-06 08:44:12.911 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:44:12 compute-1 nova_compute[226101]: 2025-12-06 08:44:12.962 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:44:13 compute-1 nova_compute[226101]: 2025-12-06 08:44:13.045 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:13.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:14 compute-1 ceph-mon[81689]: pgmap v4568: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:14 compute-1 sshd-session[332249]: Received disconnect from 14.225.3.79 port 51676:11: Bye Bye [preauth]
Dec 06 08:44:14 compute-1 sshd-session[332249]: Disconnected from authenticating user root 14.225.3.79 port 51676 [preauth]
Dec 06 08:44:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:14.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:15 compute-1 nova_compute[226101]: 2025-12-06 08:44:15.243 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:15.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:16 compute-1 ceph-mon[81689]: pgmap v4569: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:16.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:17 compute-1 ceph-mon[81689]: pgmap v4570: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:17.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:18 compute-1 nova_compute[226101]: 2025-12-06 08:44:18.047 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:18.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:18 compute-1 nova_compute[226101]: 2025-12-06 08:44:18.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:18 compute-1 nova_compute[226101]: 2025-12-06 08:44:18.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:19.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:19 compute-1 ceph-mon[81689]: pgmap v4571: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:20 compute-1 nova_compute[226101]: 2025-12-06 08:44:20.246 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:20.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:20 compute-1 sshd-session[332253]: Received disconnect from 106.51.92.114 port 46171:11: Bye Bye [preauth]
Dec 06 08:44:20 compute-1 sshd-session[332253]: Disconnected from authenticating user root 106.51.92.114 port 46171 [preauth]
Dec 06 08:44:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:21.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:21 compute-1 nova_compute[226101]: 2025-12-06 08:44:21.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:21 compute-1 nova_compute[226101]: 2025-12-06 08:44:21.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:44:21 compute-1 ceph-mon[81689]: pgmap v4572: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:21 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3146289939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:22.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:22 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/591882036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:23 compute-1 nova_compute[226101]: 2025-12-06 08:44:23.050 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:23.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:23 compute-1 ceph-mon[81689]: pgmap v4573: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:23 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/721875890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:24.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3529626307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:25 compute-1 nova_compute[226101]: 2025-12-06 08:44:25.249 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:25.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:25 compute-1 nova_compute[226101]: 2025-12-06 08:44:25.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:25 compute-1 ceph-mon[81689]: pgmap v4574: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:26.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:27.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:28 compute-1 ceph-mon[81689]: pgmap v4575: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:28 compute-1 nova_compute[226101]: 2025-12-06 08:44:28.052 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:28.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:29 compute-1 sshd-session[332255]: Received disconnect from 45.120.216.232 port 33062:11: Bye Bye [preauth]
Dec 06 08:44:29 compute-1 sshd-session[332255]: Disconnected from authenticating user root 45.120.216.232 port 33062 [preauth]
Dec 06 08:44:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:29.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:30 compute-1 ceph-mon[81689]: pgmap v4576: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:30 compute-1 nova_compute[226101]: 2025-12-06 08:44:30.252 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:30.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:30 compute-1 nova_compute[226101]: 2025-12-06 08:44:30.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:31 compute-1 ceph-mon[81689]: pgmap v4577: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:31.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:32 compute-1 sudo[332257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:32 compute-1 sudo[332257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:32 compute-1 sudo[332257]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:32.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:32 compute-1 sudo[332282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:44:32 compute-1 sudo[332282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:32 compute-1 sudo[332282]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:32 compute-1 sudo[332307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:32 compute-1 sudo[332307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:32 compute-1 sudo[332307]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:32 compute-1 sudo[332332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 08:44:32 compute-1 nova_compute[226101]: 2025-12-06 08:44:32.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:32 compute-1 sudo[332332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:32 compute-1 sudo[332332]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:32 compute-1 sudo[332376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:32 compute-1 sudo[332376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:32 compute-1 sudo[332376]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:33 compute-1 sudo[332401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:44:33 compute-1 sudo[332401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:33 compute-1 sudo[332401]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:33 compute-1 nova_compute[226101]: 2025-12-06 08:44:33.055 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:33 compute-1 sudo[332426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:33 compute-1 sudo[332426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:33 compute-1 sudo[332426]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:33 compute-1 sudo[332451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:44:33 compute-1 sudo[332451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:44:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:33.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:44:33 compute-1 nova_compute[226101]: 2025-12-06 08:44:33.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:33 compute-1 sudo[332451]: pam_unix(sudo:session): session closed for user root
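
Note: the sudo bursts above are the Ceph orchestrator (mgr on compute-0) probing this host over SSH as ceph-admin: /bin/true as a connectivity check, which python3, then the hashed copy of the cephadm binary with check-host and gather-facts. gather-facts prints host facts as JSON; a sketch of running the same probe locally (the binary path and --timeout are copied from this log; the JSON keys are assumptions about cephadm's documented output):

    import json, subprocess

    # Sketch: run the same cephadm probe the orchestrator runs above.
    CEPHADM = ('/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/'
               'cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d')
    facts = json.loads(subprocess.check_output(
        ['sudo', 'python3', CEPHADM, '--timeout', '895', 'gather-facts']))
    print(facts.get('hostname'), facts.get('memory_total_kb'))
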
Dec 06 08:44:33 compute-1 ceph-mon[81689]: pgmap v4578: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:44:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:44:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:44:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:44:33 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
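
Note: the mgr-originated dispatches above are the mon audit trail: each forwarded command is logged with its origin address, entity, and the command as JSON. A sketch that extracts the command prefixes from such lines (the cmd=[...] layout is taken from this log):

    import json, re

    # Sketch: extract command JSON from ceph-mon audit lines like those above.
    CMD = re.compile(r'cmd=(\[.*\]): dispatch')

    line = ("from='mgr.14132 192.168.122.100:0/3880098189' "
            "entity='mgr.compute-0.sfzyix' "
            'cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch')
    m = CMD.search(line)
    print([c['prefix'] for c in json.loads(m.group(1))])   # ['auth get']
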
Dec 06 08:44:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:44:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:34.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:44:35 compute-1 nova_compute[226101]: 2025-12-06 08:44:35.253 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:35.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:35 compute-1 ceph-mon[81689]: pgmap v4579: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:36 compute-1 podman[332506]: 2025-12-06 08:44:36.071944412 +0000 UTC m=+0.055958052 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Dec 06 08:44:36 compute-1 podman[332507]: 2025-12-06 08:44:36.072113516 +0000 UTC m=+0.050321040 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 08:44:36 compute-1 podman[332508]: 2025-12-06 08:44:36.15015697 +0000 UTC m=+0.123359091 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:44:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:36.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:37.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:37 compute-1 ceph-mon[81689]: pgmap v4580: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:38 compute-1 nova_compute[226101]: 2025-12-06 08:44:38.086 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:38.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:39.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:39 compute-1 sudo[332568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:39 compute-1 sudo[332568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:39 compute-1 sudo[332568]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:39 compute-1 sudo[332593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:44:39 compute-1 sudo[332593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:39 compute-1 sudo[332593]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:39 compute-1 ceph-mon[81689]: pgmap v4581: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:39 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:40 compute-1 nova_compute[226101]: 2025-12-06 08:44:40.256 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:40.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:41.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:41 compute-1 ceph-mon[81689]: pgmap v4582: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:42 compute-1 sshd-session[332618]: Received disconnect from 154.209.4.183 port 54694:11: Bye Bye [preauth]
Dec 06 08:44:42 compute-1 sshd-session[332618]: Disconnected from authenticating user root 154.209.4.183 port 54694 [preauth]
Dec 06 08:44:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:42.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:43 compute-1 nova_compute[226101]: 2025-12-06 08:44:43.088 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:43 compute-1 ceph-mon[81689]: pgmap v4583: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:43.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:44.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:45 compute-1 nova_compute[226101]: 2025-12-06 08:44:45.257 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:45.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:45 compute-1 sshd-session[332620]: Received disconnect from 91.144.158.231 port 29008:11: Bye Bye [preauth]
Dec 06 08:44:45 compute-1 sshd-session[332620]: Disconnected from authenticating user root 91.144.158.231 port 29008 [preauth]
Dec 06 08:44:45 compute-1 ceph-mon[81689]: pgmap v4584: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:46.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:47.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:47 compute-1 ceph-mon[81689]: pgmap v4585: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:48 compute-1 nova_compute[226101]: 2025-12-06 08:44:48.090 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:48.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:49.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:49 compute-1 ceph-mon[81689]: pgmap v4586: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:50 compute-1 nova_compute[226101]: 2025-12-06 08:44:50.259 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:50.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:51.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:51 compute-1 ceph-mon[81689]: pgmap v4587: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:52.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:53 compute-1 nova_compute[226101]: 2025-12-06 08:44:53.092 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:53.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:53 compute-1 ceph-mon[81689]: pgmap v4588: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:53 compute-1 sshd-session[332622]: Connection reset by authenticating user root 91.202.233.33 port 47534 [preauth]
Dec 06 08:44:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:54.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:55 compute-1 nova_compute[226101]: 2025-12-06 08:44:55.260 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:55.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:55 compute-1 nova_compute[226101]: 2025-12-06 08:44:55.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:55 compute-1 ceph-mon[81689]: pgmap v4589: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:56.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:56 compute-1 sshd-session[332626]: Connection reset by authenticating user root 91.202.233.33 port 43302 [preauth]
Dec 06 08:44:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:57.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:57 compute-1 ceph-mon[81689]: pgmap v4590: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:58 compute-1 nova_compute[226101]: 2025-12-06 08:44:58.136 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:58.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:58 compute-1 sshd-session[332628]: Connection reset by authenticating user root 45.140.17.124 port 37056 [preauth]
Dec 06 08:44:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:44:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:59.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:59 compute-1 sshd-session[332630]: Connection reset by authenticating user root 91.202.233.33 port 43322 [preauth]
Dec 06 08:44:59 compute-1 ceph-mon[81689]: pgmap v4591: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:00 compute-1 nova_compute[226101]: 2025-12-06 08:45:00.261 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:00.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:00 compute-1 nova_compute[226101]: 2025-12-06 08:45:00.668 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:00 compute-1 nova_compute[226101]: 2025-12-06 08:45:00.727 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:45:00 compute-1 nova_compute[226101]: 2025-12-06 08:45:00.728 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:45:00 compute-1 nova_compute[226101]: 2025-12-06 08:45:00.728 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:45:00 compute-1 nova_compute[226101]: 2025-12-06 08:45:00.728 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:45:00 compute-1 nova_compute[226101]: 2025-12-06 08:45:00.728 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:45:01 compute-1 sshd-session[332632]: Connection reset by authenticating user root 45.140.17.124 port 37078 [preauth]
Dec 06 08:45:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:45:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/289764488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:01 compute-1 nova_compute[226101]: 2025-12-06 08:45:01.204 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
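The resource tracker audit shells out to the Ceph CLI rather than using librados, and the monitor's audit channel records the dispatch on the lines above. A minimal sketch of reproducing the call and reading the cluster totals, assuming the client.openstack keyring referenced by --id is readable:

    import json
    import subprocess

    # The exact command logged by oslo_concurrency.processutils above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # "ceph df" reports totals in bytes; convert to GiB to compare with the
    # pgmap lines above (1.6 GiB used, 19 GiB / 21 GiB avail).
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"total {stats['total_bytes'] / gib:.1f} GiB, "
          f"avail {stats['total_avail_bytes'] / gib:.1f} GiB")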
Dec 06 08:45:01 compute-1 sshd-session[332634]: Invalid user 12345 from 91.202.233.33 port 43336
Dec 06 08:45:01 compute-1 nova_compute[226101]: 2025-12-06 08:45:01.389 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:45:01 compute-1 nova_compute[226101]: 2025-12-06 08:45:01.391 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4230MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:45:01 compute-1 nova_compute[226101]: 2025-12-06 08:45:01.391 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:45:01 compute-1 nova_compute[226101]: 2025-12-06 08:45:01.391 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:45:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:01.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:01 compute-1 nova_compute[226101]: 2025-12-06 08:45:01.533 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:45:01 compute-1 nova_compute[226101]: 2025-12-06 08:45:01.533 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:45:01 compute-1 nova_compute[226101]: 2025-12-06 08:45:01.571 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:45:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:45:01.725 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:45:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:45:01.726 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:45:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:45:01.726 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:45:01 compute-1 ceph-mon[81689]: pgmap v4592: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/289764488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:01 compute-1 sshd-session[332634]: Connection reset by invalid user 12345 91.202.233.33 port 43336 [preauth]
Dec 06 08:45:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:45:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2883025854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:02 compute-1 nova_compute[226101]: 2025-12-06 08:45:02.021 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:45:02 compute-1 nova_compute[226101]: 2025-12-06 08:45:02.028 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:45:02 compute-1 nova_compute[226101]: 2025-12-06 08:45:02.053 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:45:02 compute-1 nova_compute[226101]: 2025-12-06 08:45:02.054 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:45:02 compute-1 nova_compute[226101]: 2025-12-06 08:45:02.055 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:45:02 compute-1 nova_compute[226101]: 2025-12-06 08:45:02.055 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:02 compute-1 nova_compute[226101]: 2025-12-06 08:45:02.055 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:45:02 compute-1 nova_compute[226101]: 2025-12-06 08:45:02.080 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
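The inventory dict logged at 08:45:02.053 is what this node reports to placement, and the schedulable capacity placement derives from it is (total - reserved) * allocation_ratio per resource class. A quick check against the values above:

    # Values copied from the set_inventory_for_provider line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1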
Dec 06 08:45:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:02.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2883025854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:03 compute-1 nova_compute[226101]: 2025-12-06 08:45:03.139 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:03 compute-1 sshd-session[332624]: Connection closed by 165.154.55.146 port 53090 [preauth]
Dec 06 08:45:03 compute-1 sshd-session[332658]: Invalid user 1 from 45.140.17.124 port 37102
Dec 06 08:45:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:03.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:03 compute-1 sshd-session[332680]: Invalid user postgres from 91.202.233.33 port 50214
Dec 06 08:45:03 compute-1 ceph-mon[81689]: pgmap v4593: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:03 compute-1 sshd-session[332658]: Connection reset by invalid user 1 45.140.17.124 port 37102 [preauth]
Dec 06 08:45:03 compute-1 sshd-session[332680]: Connection reset by invalid user postgres 91.202.233.33 port 50214 [preauth]
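The sshd-session traffic from 91.202.233.33 and 45.140.17.124 is a routine brute-force pattern: root plus dictionary usernames (12345, 1, postgres, admin, www), each connection reset at preauth. A small sketch for tallying such attempts per source address, assuming the line format shown here:

    import re
    import sys
    from collections import Counter

    # Matches both variants seen above:
    #   "Connection reset by authenticating user root 91.202.233.33 port 47534 [preauth]"
    #   "Connection reset by invalid user postgres 91.202.233.33 port 50214 [preauth]"
    pat = re.compile(
        r"sshd-session\[\d+\]: Connection reset by "
        r"(?:authenticating|invalid) user \S+ "
        r"(\d{1,3}(?:\.\d{1,3}){3}) port \d+ \[preauth\]"
    )

    hits = Counter(m.group(1) for m in map(pat.search, sys.stdin) if m)
    for ip, n in hits.most_common():
        print(f"{ip}\t{n} failed preauth attempts")

Fed with, for example, journalctl -t sshd-session --no-pager piped into the script.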
Dec 06 08:45:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:04.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:05 compute-1 nova_compute[226101]: 2025-12-06 08:45:05.262 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:05.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:05 compute-1 ceph-mgr[82049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 08:45:05 compute-1 ceph-mon[81689]: pgmap v4594: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:06.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:06 compute-1 sshd-session[332684]: Invalid user admin from 45.140.17.124 port 21892
Dec 06 08:45:06 compute-1 podman[332687]: 2025-12-06 08:45:06.600667618 +0000 UTC m=+0.045566044 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 08:45:06 compute-1 podman[332686]: 2025-12-06 08:45:06.611283602 +0000 UTC m=+0.060631377 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:45:06 compute-1 podman[332688]: 2025-12-06 08:45:06.639393076 +0000 UTC m=+0.078821745 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
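The three health_status=healthy records above are podman's periodic healthchecks for ovn_metadata_agent, multipathd, and ovn_controller, driven by the /openstack/healthcheck test mounted into each container per its config_data. The same state can be queried on demand; a sketch, assuming a podman recent enough to expose it under .State.Health (older releases used .State.Healthcheck):

    import json
    import subprocess

    for name in ("ovn_metadata_agent", "multipathd", "ovn_controller"):
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(out)[0]["State"]
        # Field name differs across podman versions; try both.
        health = state.get("Health") or state.get("Healthcheck") or {}
        print(name, health.get("Status", "unknown"))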
Dec 06 08:45:07 compute-1 sshd-session[332684]: Connection reset by invalid user admin 45.140.17.124 port 21892 [preauth]
Dec 06 08:45:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:07.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:07 compute-1 ceph-mon[81689]: pgmap v4595: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:08 compute-1 nova_compute[226101]: 2025-12-06 08:45:08.141 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:08.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:09 compute-1 sshd-session[332751]: Invalid user www from 45.140.17.124 port 21904
Dec 06 08:45:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:09.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:09 compute-1 sshd-session[332751]: Connection reset by invalid user www 45.140.17.124 port 21904 [preauth]
Dec 06 08:45:09 compute-1 ceph-mon[81689]: pgmap v4596: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3111797188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:45:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/3111797188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:45:09 compute-1 nova_compute[226101]: 2025-12-06 08:45:09.994 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:10 compute-1 nova_compute[226101]: 2025-12-06 08:45:10.265 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:10.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:11.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:11 compute-1 nova_compute[226101]: 2025-12-06 08:45:11.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:11 compute-1 nova_compute[226101]: 2025-12-06 08:45:11.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:45:11 compute-1 nova_compute[226101]: 2025-12-06 08:45:11.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:45:11 compute-1 nova_compute[226101]: 2025-12-06 08:45:11.640 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:45:11 compute-1 ceph-mon[81689]: pgmap v4597: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:12.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:12 compute-1 nova_compute[226101]: 2025-12-06 08:45:12.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:12 compute-1 nova_compute[226101]: 2025-12-06 08:45:12.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:45:13 compute-1 nova_compute[226101]: 2025-12-06 08:45:13.141 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:13.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:13 compute-1 ceph-mon[81689]: pgmap v4598: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:14 compute-1 sshd-session[332753]: Received disconnect from 124.18.141.70 port 46506:11: Bye Bye [preauth]
Dec 06 08:45:14 compute-1 sshd-session[332753]: Disconnected from authenticating user root 124.18.141.70 port 46506 [preauth]
Dec 06 08:45:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:14.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:14 compute-1 sshd-session[332755]: Received disconnect from 186.87.166.141 port 53278:11: Bye Bye [preauth]
Dec 06 08:45:14 compute-1 sshd-session[332755]: Disconnected from authenticating user root 186.87.166.141 port 53278 [preauth]
Dec 06 08:45:15 compute-1 nova_compute[226101]: 2025-12-06 08:45:15.267 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:15.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:15 compute-1 ceph-mon[81689]: pgmap v4599: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:16.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:17.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:18 compute-1 ceph-mon[81689]: pgmap v4600: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:18 compute-1 nova_compute[226101]: 2025-12-06 08:45:18.143 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:18.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:18 compute-1 nova_compute[226101]: 2025-12-06 08:45:18.608 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:19.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:20 compute-1 ceph-mon[81689]: pgmap v4601: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:20 compute-1 nova_compute[226101]: 2025-12-06 08:45:20.268 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:20.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:20 compute-1 nova_compute[226101]: 2025-12-06 08:45:20.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:21.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:21 compute-1 nova_compute[226101]: 2025-12-06 08:45:21.588 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:21 compute-1 nova_compute[226101]: 2025-12-06 08:45:21.589 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:45:22 compute-1 ceph-mon[81689]: pgmap v4602: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:22.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:23 compute-1 nova_compute[226101]: 2025-12-06 08:45:23.145 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:23.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:24 compute-1 ceph-mon[81689]: pgmap v4603: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:24 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3671994025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:24.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/632855294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1430437182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:25 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2771911123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:25 compute-1 nova_compute[226101]: 2025-12-06 08:45:25.270 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:25.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:25 compute-1 nova_compute[226101]: 2025-12-06 08:45:25.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:26 compute-1 ceph-mon[81689]: pgmap v4604: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:26 compute-1 sshd-session[332757]: Connection closed by 107.150.106.178 port 46730 [preauth]
Dec 06 08:45:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:26.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:27 compute-1 ceph-mon[81689]: pgmap v4605: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:27.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:28 compute-1 nova_compute[226101]: 2025-12-06 08:45:28.148 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:28.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:29.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:29 compute-1 ceph-mon[81689]: pgmap v4606: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:30 compute-1 nova_compute[226101]: 2025-12-06 08:45:30.273 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:30.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:30 compute-1 nova_compute[226101]: 2025-12-06 08:45:30.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:31.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:31 compute-1 ceph-mon[81689]: pgmap v4607: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:32.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:33 compute-1 nova_compute[226101]: 2025-12-06 08:45:33.150 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:33.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:33 compute-1 nova_compute[226101]: 2025-12-06 08:45:33.582 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:33 compute-1 ceph-mon[81689]: pgmap v4608: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:34.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:35 compute-1 nova_compute[226101]: 2025-12-06 08:45:35.275 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:35.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:35 compute-1 nova_compute[226101]: 2025-12-06 08:45:35.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:35 compute-1 ceph-mon[81689]: pgmap v4609: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:36.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:36 compute-1 nova_compute[226101]: 2025-12-06 08:45:36.924 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:37 compute-1 podman[332759]: 2025-12-06 08:45:37.110573058 +0000 UTC m=+0.099060322 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:45:37 compute-1 podman[332761]: 2025-12-06 08:45:37.143308091 +0000 UTC m=+0.124202769 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 06 08:45:37 compute-1 podman[332760]: 2025-12-06 08:45:37.14444318 +0000 UTC m=+0.124873836 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:45:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:37.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:38 compute-1 ceph-mon[81689]: pgmap v4610: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:38 compute-1 nova_compute[226101]: 2025-12-06 08:45:38.202 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:38.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:39.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:39 compute-1 sudo[332825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:39 compute-1 sudo[332825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:39 compute-1 sudo[332825]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:39 compute-1 sudo[332850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:45:39 compute-1 sudo[332850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:39 compute-1 sudo[332850]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:40 compute-1 sudo[332875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:40 compute-1 sudo[332875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:40 compute-1 sudo[332875]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:40 compute-1 ceph-mon[81689]: pgmap v4611: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:40 compute-1 sudo[332900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:45:40 compute-1 sudo[332900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:40 compute-1 nova_compute[226101]: 2025-12-06 08:45:40.276 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:40.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:40 compute-1 sshd-session[332823]: Received disconnect from 106.51.92.114 port 60958:11: Bye Bye [preauth]
Dec 06 08:45:40 compute-1 sshd-session[332823]: Disconnected from authenticating user root 106.51.92.114 port 60958 [preauth]
Dec 06 08:45:40 compute-1 sudo[332900]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:45:41 compute-1 ceph-mon[81689]: pgmap v4612: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:41 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:45:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:41.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:42.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:43 compute-1 ceph-mon[81689]: pgmap v4613: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:45:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:45:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:45:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:45:43 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:45:43 compute-1 nova_compute[226101]: 2025-12-06 08:45:43.204 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:43.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:45:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:44.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:45:45 compute-1 nova_compute[226101]: 2025-12-06 08:45:45.305 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:45.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:45 compute-1 ceph-mon[81689]: pgmap v4614: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:46.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:47.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:47 compute-1 ceph-mon[81689]: pgmap v4615: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:48 compute-1 nova_compute[226101]: 2025-12-06 08:45:48.207 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:48.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:48 compute-1 sudo[332957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:48 compute-1 sudo[332957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:48 compute-1 sudo[332957]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:48 compute-1 sudo[332982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:45:48 compute-1 sudo[332982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:48 compute-1 sudo[332982]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:49 compute-1 ceph-mon[81689]: pgmap v4616: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:49.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:50 compute-1 nova_compute[226101]: 2025-12-06 08:45:50.307 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:50.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:51.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:52 compute-1 ceph-mon[81689]: pgmap v4617: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:52.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:53 compute-1 sshd-session[333007]: Received disconnect from 45.120.216.232 port 33398:11: Bye Bye [preauth]
Dec 06 08:45:53 compute-1 sshd-session[333007]: Disconnected from authenticating user root 45.120.216.232 port 33398 [preauth]
Dec 06 08:45:53 compute-1 nova_compute[226101]: 2025-12-06 08:45:53.209 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:53.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:54 compute-1 ceph-mon[81689]: pgmap v4618: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:54.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:55 compute-1 nova_compute[226101]: 2025-12-06 08:45:55.358 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:55.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:56 compute-1 ceph-mon[81689]: pgmap v4619: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:45:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:56.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:45:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:57.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:58 compute-1 ceph-mon[81689]: pgmap v4620: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:58 compute-1 nova_compute[226101]: 2025-12-06 08:45:58.212 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:58.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:45:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:59.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:00 compute-1 ceph-mon[81689]: pgmap v4621: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:00 compute-1 nova_compute[226101]: 2025-12-06 08:46:00.367 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:00.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:01.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:01 compute-1 nova_compute[226101]: 2025-12-06 08:46:01.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:01 compute-1 nova_compute[226101]: 2025-12-06 08:46:01.620 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:46:01 compute-1 nova_compute[226101]: 2025-12-06 08:46:01.620 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:46:01 compute-1 nova_compute[226101]: 2025-12-06 08:46:01.620 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:46:01 compute-1 nova_compute[226101]: 2025-12-06 08:46:01.621 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:46:01 compute-1 nova_compute[226101]: 2025-12-06 08:46:01.621 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:46:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:46:01.726 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:46:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:46:01.726 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:46:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:46:01.726 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:46:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:46:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1869584584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.045 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:46:02 compute-1 ceph-mon[81689]: pgmap v4622: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1869584584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.199 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.200 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4222MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.200 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.200 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.271 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.271 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.330 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:46:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:02.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:46:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1875903531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.779 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.785 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.801 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.802 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:46:02 compute-1 nova_compute[226101]: 2025-12-06 08:46:02.803 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:46:03 compute-1 nova_compute[226101]: 2025-12-06 08:46:03.214 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:03 compute-1 ceph-mon[81689]: pgmap v4623: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1875903531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:46:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:03.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:46:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:04.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:05 compute-1 sshd-session[333053]: Received disconnect from 14.225.3.79 port 52320:11: Bye Bye [preauth]
Dec 06 08:46:05 compute-1 sshd-session[333053]: Disconnected from authenticating user root 14.225.3.79 port 52320 [preauth]
Dec 06 08:46:05 compute-1 nova_compute[226101]: 2025-12-06 08:46:05.368 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:05.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:06 compute-1 ceph-mon[81689]: pgmap v4624: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:06.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:07 compute-1 sshd-session[333055]: Received disconnect from 91.144.158.231 port 8704:11: Bye Bye [preauth]
Dec 06 08:46:07 compute-1 sshd-session[333055]: Disconnected from authenticating user root 91.144.158.231 port 8704 [preauth]
Dec 06 08:46:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:07.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:07 compute-1 ceph-mon[81689]: pgmap v4625: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:08 compute-1 podman[333058]: 2025-12-06 08:46:08.07288765 +0000 UTC m=+0.057334996 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 06 08:46:08 compute-1 podman[333057]: 2025-12-06 08:46:08.104620776 +0000 UTC m=+0.091900178 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Dec 06 08:46:08 compute-1 podman[333059]: 2025-12-06 08:46:08.105260353 +0000 UTC m=+0.087492619 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:46:08 compute-1 nova_compute[226101]: 2025-12-06 08:46:08.216 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:08.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:09.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:46:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1800815749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:46:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:46:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1800815749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:46:09 compute-1 ceph-mon[81689]: pgmap v4626: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1800815749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:46:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/1800815749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:46:10 compute-1 nova_compute[226101]: 2025-12-06 08:46:10.371 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:10.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:11.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:11 compute-1 ceph-mon[81689]: pgmap v4627: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:12.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:13 compute-1 nova_compute[226101]: 2025-12-06 08:46:13.217 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:13 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:13 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:13 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:13.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:13 compute-1 nova_compute[226101]: 2025-12-06 08:46:13.804 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:13 compute-1 nova_compute[226101]: 2025-12-06 08:46:13.804 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:46:13 compute-1 nova_compute[226101]: 2025-12-06 08:46:13.805 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:46:13 compute-1 nova_compute[226101]: 2025-12-06 08:46:13.841 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:46:13 compute-1 ceph-mon[81689]: pgmap v4628: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:14 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:14 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:14 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:14 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:14.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:15 compute-1 nova_compute[226101]: 2025-12-06 08:46:15.373 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:15 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:15 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:15 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:15.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:15 compute-1 ceph-mon[81689]: pgmap v4629: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:15 compute-1 sshd[164848]: Timeout before authentication for connection from 14.103.118.136 to 38.102.83.204, pid = 332251
Dec 06 08:46:16 compute-1 sshd-session[333123]: Received disconnect from 154.209.4.183 port 56624:11: Bye Bye [preauth]
Dec 06 08:46:16 compute-1 sshd-session[333123]: Disconnected from authenticating user root 154.209.4.183 port 56624 [preauth]
Dec 06 08:46:16 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:16 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:16 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:16.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:17 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:17 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:17 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:17.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:17 compute-1 ceph-mon[81689]: pgmap v4630: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:18 compute-1 nova_compute[226101]: 2025-12-06 08:46:18.219 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:18 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:18 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:18 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:18.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:19 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:19 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:19 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:19 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:19.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:19 compute-1 ceph-mon[81689]: pgmap v4631: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:20 compute-1 nova_compute[226101]: 2025-12-06 08:46:20.375 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:20 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:20 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:20 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:20.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:20 compute-1 nova_compute[226101]: 2025-12-06 08:46:20.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:20 compute-1 nova_compute[226101]: 2025-12-06 08:46:20.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:21 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:21 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:21 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:21.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:22 compute-1 ceph-mon[81689]: pgmap v4632: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:22 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:22 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:22 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:22.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:23 compute-1 nova_compute[226101]: 2025-12-06 08:46:23.222 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:23 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:23 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:23 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:23.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:23 compute-1 nova_compute[226101]: 2025-12-06 08:46:23.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:23 compute-1 nova_compute[226101]: 2025-12-06 08:46:23.590 226109 DEBUG nova.compute.manager [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:46:24 compute-1 ceph-mon[81689]: pgmap v4633: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:24 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:24 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:24 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:46:24 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:24.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:46:25 compute-1 nova_compute[226101]: 2025-12-06 08:46:25.416 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:25 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:25 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:25 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:25.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:26 compute-1 ceph-mon[81689]: pgmap v4634: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:26 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/75782707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:26 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:26 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:26 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:26.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1659106783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3420502171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:27 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/768853376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:27 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:27 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:46:27 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:27.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:46:27 compute-1 nova_compute[226101]: 2025-12-06 08:46:27.591 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:28 compute-1 ceph-mon[81689]: pgmap v4635: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:28 compute-1 nova_compute[226101]: 2025-12-06 08:46:28.225 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:28 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:28 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:28 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:28.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:29 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:29 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:29 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:29 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:29.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:30 compute-1 ceph-mon[81689]: pgmap v4636: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:30 compute-1 nova_compute[226101]: 2025-12-06 08:46:30.418 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:30 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:30 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:46:30 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:30.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:46:30 compute-1 nova_compute[226101]: 2025-12-06 08:46:30.590 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:31 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:31 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:31 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:31.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:32 compute-1 ceph-mon[81689]: pgmap v4637: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:32 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:32 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:32 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:32.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:33 compute-1 ceph-mon[81689]: pgmap v4638: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:33 compute-1 nova_compute[226101]: 2025-12-06 08:46:33.228 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:33 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:33 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:33 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:33.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:33 compute-1 nova_compute[226101]: 2025-12-06 08:46:33.583 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:34 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:34 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:34 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:34 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:34.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:35 compute-1 nova_compute[226101]: 2025-12-06 08:46:35.421 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:35 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:35 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:35 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:35.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:35 compute-1 nova_compute[226101]: 2025-12-06 08:46:35.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:35 compute-1 ceph-mon[81689]: pgmap v4639: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:36 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:36 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 08:46:36 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:36.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 08:46:37 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:37 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:37 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:37.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:37 compute-1 ceph-mon[81689]: pgmap v4640: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:38 compute-1 nova_compute[226101]: 2025-12-06 08:46:38.229 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:38 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:38 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:46:38 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:38.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:46:39 compute-1 podman[333126]: 2025-12-06 08:46:39.070669759 +0000 UTC m=+0.052937647 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 08:46:39 compute-1 podman[333125]: 2025-12-06 08:46:39.095259882 +0000 UTC m=+0.082945997 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 08:46:39 compute-1 podman[333127]: 2025-12-06 08:46:39.126474584 +0000 UTC m=+0.103826439 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:46:39 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:39 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:39 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:39 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:39.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:39 compute-1 ceph-mon[81689]: pgmap v4641: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:40 compute-1 nova_compute[226101]: 2025-12-06 08:46:40.422 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:40 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:40 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:40 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:40.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:41 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:41 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:41 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:41.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:41 compute-1 ceph-mon[81689]: pgmap v4642: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:42 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:42 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:42 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:42.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:43 compute-1 nova_compute[226101]: 2025-12-06 08:46:43.231 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:43 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:43 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:43 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:43.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:43 compute-1 ceph-mon[81689]: pgmap v4643: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:44 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:44 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:44 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:44 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:44.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:44 compute-1 sshd-session[333189]: Accepted publickey for zuul from 192.168.122.10 port 38770 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 08:46:44 compute-1 systemd-logind[788]: New session 63 of user zuul.
Dec 06 08:46:44 compute-1 systemd[1]: Started Session 63 of User zuul.
Dec 06 08:46:44 compute-1 sshd-session[333189]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 08:46:44 compute-1 sudo[333193]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 06 08:46:44 compute-1 sudo[333193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 08:46:45 compute-1 ceph-mon[81689]: pgmap v4644: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:45 compute-1 nova_compute[226101]: 2025-12-06 08:46:45.423 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:45 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:45 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:45 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:45.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:46 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:46 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:46 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:46.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:47 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:47 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:47 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:47.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:47 compute-1 ceph-mon[81689]: pgmap v4645: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:48 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 06 08:46:48 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1260620969' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:46:48 compute-1 nova_compute[226101]: 2025-12-06 08:46:48.233 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:48 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:48 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:48 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:48.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:48 compute-1 ceph-mon[81689]: from='client.46769 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:48 compute-1 ceph-mon[81689]: from='client.46775 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:48 compute-1 ceph-mon[81689]: from='client.38556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1260620969' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:46:48 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1767899615' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:46:48 compute-1 sudo[333447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:48 compute-1 sudo[333447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:48 compute-1 sudo[333447]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:48 compute-1 sudo[333472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:46:48 compute-1 sudo[333472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:48 compute-1 sudo[333472]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:48 compute-1 sudo[333497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:48 compute-1 sudo[333497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:48 compute-1 sudo[333497]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:48 compute-1 sudo[333522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:46:48 compute-1 sudo[333522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:49 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:49 compute-1 sudo[333522]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:49 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:49 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:49 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:49.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:49 compute-1 ceph-mon[81689]: from='client.38562 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:49 compute-1 ceph-mon[81689]: pgmap v4646: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:46:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:46:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:46:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:46:49 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:50 compute-1 nova_compute[226101]: 2025-12-06 08:46:50.429 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:50 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:50 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:46:50 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:50.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:46:50 compute-1 ceph-mon[81689]: from='client.47743 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:50 compute-1 ceph-mon[81689]: from='client.47749 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:50 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/584181407' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:46:50 compute-1 ovs-vsctl[333608]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 06 08:46:51 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:51 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:51 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:51.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:51 compute-1 virtqemud[225710]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 06 08:46:51 compute-1 virtqemud[225710]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 06 08:46:51 compute-1 virtqemud[225710]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 06 08:46:51 compute-1 ceph-mon[81689]: pgmap v4647: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:52 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: cache status {prefix=cache status} (starting...)
Dec 06 08:46:52 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:52 compute-1 lvm[333925]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 08:46:52 compute-1 lvm[333925]: VG ceph_vg0 finished
Dec 06 08:46:52 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: client ls {prefix=client ls} (starting...)
Dec 06 08:46:52 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:52 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:52 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:52 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:52.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: damage ls {prefix=damage ls} (starting...)
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump loads {prefix=dump loads} (starting...)
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 06 08:46:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2292196295' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:53 compute-1 nova_compute[226101]: 2025-12-06 08:46:53.236 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:53 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:53 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:53 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:53.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:46:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2772597569' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:53 compute-1 sshd-session[334032]: Received disconnect from 186.87.166.141 port 54394:11: Bye Bye [preauth]
Dec 06 08:46:53 compute-1 sshd-session[334032]: Disconnected from authenticating user root 186.87.166.141 port 54394 [preauth]
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:53 compute-1 ceph-mon[81689]: pgmap v4648: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:53 compute-1 ceph-mon[81689]: from='client.46787 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2292196295' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:53 compute-1 ceph-mon[81689]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:53 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2772597569' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 06 08:46:53 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:53 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec 06 08:46:53 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2599282265' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 06 08:46:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec 06 08:46:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/290536756' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: ops {prefix=ops} (starting...)
Dec 06 08:46:54 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec 06 08:46:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1880989875' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:46:54 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:54 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:54 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:54.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:54 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 06 08:46:54 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1962631154' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mon[81689]: from='client.38574 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mon[81689]: from='client.38580 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mon[81689]: from='client.46823 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/603513567' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2599282265' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mon[81689]: from='client.38598 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/520104312' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/290536756' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1880989875' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/251249151' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:46:54 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1962631154' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: session ls {prefix=session ls} (starting...)
Dec 06 08:46:55 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt Can't run that command on an inactive MDS!
Dec 06 08:46:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 06 08:46:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2165939020' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mds[83729]: mds.cephfs.compute-1.vsxbzt asok_command: status {prefix=status} (starting...)
Dec 06 08:46:55 compute-1 nova_compute[226101]: 2025-12-06 08:46:55.430 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:55 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:55 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:55 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:55.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 06 08:46:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3694447404' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 06 08:46:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1406254907' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:55 compute-1 sudo[334378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:55 compute-1 sudo[334378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:55 compute-1 sudo[334378]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:55 compute-1 sudo[334409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:46:55 compute-1 sudo[334409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:55 compute-1 sudo[334409]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:55 compute-1 ceph-mon[81689]: pgmap v4649: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.38631 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.46856 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2165939020' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/795653726' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2238563466' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2466793297' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3694447404' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1406254907' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3973645874' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1438137494' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:55 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 08:46:55 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1433611718' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec 06 08:46:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/114419710' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec 06 08:46:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1382535926' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:46:56 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:56 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:56 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:56.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 06 08:46:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/451094596' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.47800 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.46868 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.47815 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.38664 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1433611718' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.38670 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/114419710' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1607452064' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3553370529' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1382535926' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3755701381' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/161953549' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/613699475' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3388070215' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/451094596' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:46:56 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec 06 08:46:56 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1188300132' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec 06 08:46:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2713819647' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:46:57 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:57 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:57 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:57.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #211. Immutable memtables: 0.
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.593549) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 211
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817593602, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 2135, "num_deletes": 251, "total_data_size": 5270790, "memory_usage": 5339872, "flush_reason": "Manual Compaction"}
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #212: started
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817611583, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 212, "file_size": 3445968, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 101650, "largest_seqno": 103779, "table_properties": {"data_size": 3437078, "index_size": 5511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17756, "raw_average_key_size": 19, "raw_value_size": 3419286, "raw_average_value_size": 3761, "num_data_blocks": 241, "num_entries": 909, "num_filter_entries": 909, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010623, "oldest_key_time": 1765010623, "file_creation_time": 1765010817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 18271 microseconds, and 6450 cpu microseconds.
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.611815) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #212: 3445968 bytes OK
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.611885) [db/memtable_list.cc:519] [default] Level-0 commit table #212 started
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.613841) [db/memtable_list.cc:722] [default] Level-0 commit table #212: memtable #1 done
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.613863) EVENT_LOG_v1 {"time_micros": 1765010817613855, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.613891) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 5261294, prev total WAL file size 5261294, number of live WAL files 2.
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000208.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.615802) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600353030' seq:72057594037927935, type:22 .. '6B7600373532' seq:0, type:0; will stop at (end)
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [212(3365KB)], [210(12MB)]
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817615881, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [212], "files_L6": [210], "score": -1, "input_data_size": 16390856, "oldest_snapshot_seqno": -1}
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #213: 13152 keys, 15263452 bytes, temperature: kUnknown
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817718950, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 213, "file_size": 15263452, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15182176, "index_size": 47307, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32901, "raw_key_size": 349509, "raw_average_key_size": 26, "raw_value_size": 14955506, "raw_average_value_size": 1137, "num_data_blocks": 1775, "num_entries": 13152, "num_filter_entries": 13152, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002569, "oldest_key_time": 0, "file_creation_time": 1765010817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c52d74fd-e915-42a6-9fe8-e89fb2ec4bf8", "db_session_id": "SLV0S33CGVISHGWW623C", "orig_file_number": 213, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.719275) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 15263452 bytes
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.720488) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.9 rd, 148.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 12.3 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(9.2) write-amplify(4.4) OK, records in: 13671, records dropped: 519 output_compression: NoCompression
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.720508) EVENT_LOG_v1 {"time_micros": 1765010817720498, "job": 136, "event": "compaction_finished", "compaction_time_micros": 103161, "compaction_time_cpu_micros": 32927, "output_level": 6, "num_output_files": 1, "total_output_size": 15263452, "num_input_records": 13671, "num_output_records": 13152, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817721677, "job": 136, "event": "table_file_deletion", "file_number": 212}
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000210.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817723589, "job": 136, "event": "table_file_deletion", "file_number": 210}
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.615693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.723635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.723640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.723641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.723643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-1 ceph-mon[81689]: rocksdb: (Original Log Time 2025/12/06-08:46:57.723644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.47845 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.46913 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: pgmap v4650: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/608501659' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1188300132' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.47878 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1142785465' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1827320869' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2713819647' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/887353704' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/360114230' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/831227999' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2119058871' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:57 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 06 08:46:57 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2387806077' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:58 compute-1 nova_compute[226101]: 2025-12-06 08:46:58.240 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 06 08:46:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2200948100' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:58 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:58 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:58 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:58.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457244672 unmapped: 78979072 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:53.366688+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457244672 unmapped: 78979072 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:54.366904+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 78970880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:55.367126+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 78970880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:56.367278+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 78970880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:57.367487+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 78970880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:58.367631+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457252864 unmapped: 78970880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:59.367770+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457261056 unmapped: 78962688 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:00.368319+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457261056 unmapped: 78962688 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:01.368517+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457261056 unmapped: 78962688 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:02.368699+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:03.368853+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:04.369010+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:05.369191+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:06.369392+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:07.369637+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:08.369957+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:09.370085+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:10.370274+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457277440 unmapped: 78946304 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:11.370514+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.868698120s of 34.334308624s, submitted: 68
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:12.370723+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:13.370897+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b5553730e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:14.371041+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:15.371175+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:16.371305+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:17.371487+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457285632 unmapped: 78938112 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555388c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b555388c00 session 0x55b55348e780
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b5539125a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798262 data_alloc: 218103808 data_used: 35803136
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b555942b40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b5533f0000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a2326000/0x0/0x1bfc00000, data 0x3089193/0x32a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:18.371676+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457293824 unmapped: 78929920 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55918a000 session 0x55b5559e3c20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:19.371803+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b559e41800 session 0x55b5559423c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b552b37800 session 0x55b552ee4000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55391f800 session 0x55b555942d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b55506cf00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:20.371955+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:21.372120+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d96000/0x0/0x1bfc00000, data 0x36191a3/0x3838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:22.372322+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4841992 data_alloc: 218103808 data_used: 35803136
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d96000/0x0/0x1bfc00000, data 0x36191a3/0x3838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d96000/0x0/0x1bfc00000, data 0x36191a3/0x3838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:23.372489+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55918a000 session 0x55b5533efc20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:24.372709+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b559e41800 session 0x55b553913860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b5558ff2c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:25.372904+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457302016 unmapped: 78921728 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55918a000 session 0x55b5559430e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:26.373082+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561043000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.688621521s of 14.739874840s, submitted: 6
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b561043000 session 0x55b5538832c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d95000/0x0/0x1bfc00000, data 0x36191b3/0x3839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:27.373278+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4884950 data_alloc: 218103808 data_used: 41611264
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:28.373439+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:29.373617+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:30.373762+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:31.373917+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d95000/0x0/0x1bfc00000, data 0x36191b3/0x3839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:32.374134+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4884950 data_alloc: 218103808 data_used: 41611264
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5590eb400 session 0x55b553427a40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:33.374280+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:34.374405+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d95000/0x0/0x1bfc00000, data 0x36191b3/0x3839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457097216 unmapped: 79126528 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a1d95000/0x0/0x1bfc00000, data 0x36191b3/0x3839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55484a400 session 0x55b5559e3680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:35.374530+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457105408 unmapped: 79118336 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:36.374674+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457105408 unmapped: 79118336 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.754782677s of 10.905808449s, submitted: 6
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:37.374897+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 457269248 unmapped: 78954496 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4949594 data_alloc: 218103808 data_used: 41611264
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14f3000/0x0/0x1bfc00000, data 0x3ebc1a3/0x40db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:38.375058+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459907072 unmapped: 76316672 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14b9000/0x0/0x1bfc00000, data 0x3ef01a3/0x410f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:39.375258+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14b9000/0x0/0x1bfc00000, data 0x3ef01a3/0x410f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459915264 unmapped: 76308480 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55484a400 session 0x55b553427680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14ba000/0x0/0x1bfc00000, data 0x3ef51a3/0x4114000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [1,0,0,0,0,0,0,0,0,6])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:40.375403+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459923456 unmapped: 76300288 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:41.375628+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459931648 unmapped: 76292096 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a149e000/0x0/0x1bfc00000, data 0x3f091a3/0x4128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:42.375770+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459931648 unmapped: 76292096 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4969106 data_alloc: 218103808 data_used: 42409984
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:43.375884+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 459931648 unmapped: 76292096 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a149e000/0x0/0x1bfc00000, data 0x3f091a3/0x4128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b554a5fc00 session 0x55b554edf680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:44.376081+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b5590eb400 session 0x55b553a18d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:45.376258+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55918a000 session 0x55b5529fa000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:46.376392+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:47.376579+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4961826 data_alloc: 234881024 data_used: 42418176
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:48.376772+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561043000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a14a6000/0x0/0x1bfc00000, data 0x3f091a3/0x4128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.959997177s of 11.655179024s, submitted: 77
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b561043000 session 0x55b55395c000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:49.376927+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 ms_handle_reset con 0x55b55484a400 session 0x55b552ee52c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460455936 unmapped: 75767808 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:50.377056+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460455936 unmapped: 75767808 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 heartbeat osd_stat(store_statfs(0x1a0b8a000/0x0/0x1bfc00000, data 0x48251a3/0x4a44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:51.377258+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460455936 unmapped: 75767808 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:52.377407+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460464128 unmapped: 75759616 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5032630 data_alloc: 234881024 data_used: 42418176
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:53.377581+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460464128 unmapped: 75759616 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0b86000/0x0/0x1bfc00000, data 0x4826e6f/0x4a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:54.377741+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:55.377918+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:56.378055+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:57.378215+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5036100 data_alloc: 234881024 data_used: 42426368
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a0b86000/0x0/0x1bfc00000, data 0x4826e6f/0x4a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:58.378344+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:59.378566+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:00.378763+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 75751424 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:01.378954+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b554a5fc00 session 0x55b5553721e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5590eb400 session 0x55b55392b680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55918a000 session 0x55b554f22f00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b559e41c00 session 0x55b5534054a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460480512 unmapped: 75743232 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.486028671s of 12.642086029s, submitted: 20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55484a400 session 0x55b5538834a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5590eb400 session 0x55b5558fe000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b554a5fc00 session 0x55b554abb2c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55918a000 session 0x55b5550e7860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e41c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b559e41c00 session 0x55b5534b90e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55484a400 session 0x55b5559e2780
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b554a5fc00 session 0x55b5534b8d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:02.379091+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 74506240 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5077199 data_alloc: 234881024 data_used: 42426368
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:03.379227+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 74506240 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a07a3000/0x0/0x1bfc00000, data 0x4c0ae6f/0x4e2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:04.379361+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 74506240 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:05.379560+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 74506240 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5590eb400 session 0x55b55506c000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:06.379670+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55918a000 session 0x55b553427a40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461717504 unmapped: 74506240 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5572c2000 session 0x55b5538832c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55484a400 session 0x55b5559430e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:07.379789+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461725696 unmapped: 74498048 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5079037 data_alloc: 234881024 data_used: 42426368
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5572c2000 session 0x55b5550e61e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:08.379983+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b554a5fc00 session 0x55b553913860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461725696 unmapped: 74498048 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5590eb400 session 0x55b55506cf00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55918a000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b561042c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a07a2000/0x0/0x1bfc00000, data 0x4c0ae7f/0x4e2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55918a000 session 0x55b555942d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:09.380160+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 461873152 unmapped: 74350592 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:10.380325+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462757888 unmapped: 73465856 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:11.380469+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 69500928 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:12.380588+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 69500928 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179075 data_alloc: 251658240 data_used: 55762944
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:13.380766+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 69500928 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55484a400 session 0x55b5559e3c20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b554a5fc00 session 0x55b555942b40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:14.380922+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 69500928 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.814599037s of 13.027617455s, submitted: 27
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a077d000/0x0/0x1bfc00000, data 0x4c2ee8f/0x4e51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:15.381076+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:16.381305+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:17.381449+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5572c2000 session 0x55b553883c20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5174905 data_alloc: 251658240 data_used: 55758848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x1a07a2000/0x0/0x1bfc00000, data 0x4c0ae7f/0x4e2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:18.381634+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:19.381807+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b5590eb400 session 0x55b5533f03c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555388400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:20.381990+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 69492736 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b555388400 session 0x55b555942000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:21.382143+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471252992 unmapped: 64970752 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 ms_handle_reset con 0x55b55484a400 session 0x55b55348a780
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:22.382357+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471252992 unmapped: 64970752 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5282655 data_alloc: 251658240 data_used: 55771136
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:23.382518+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 heartbeat osd_stat(store_statfs(0x19f952000/0x0/0x1bfc00000, data 0x5a5ae7f/0x5c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471539712 unmapped: 64684032 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:24.382679+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 418 ms_handle_reset con 0x55b554a5fc00 session 0x55b554abbe00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 66625536 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 418 heartbeat osd_stat(store_statfs(0x19fe4b000/0x0/0x1bfc00000, data 0x514fb9f/0x5372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:25.382816+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 66625536 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:26.383008+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 66625536 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:27.383179+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 469598208 unmapped: 66625536 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.622128487s of 13.045692444s, submitted: 141
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 418 ms_handle_reset con 0x55b5572c2000 session 0x55b553883680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5173925 data_alloc: 251658240 data_used: 50388992
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 418 ms_handle_reset con 0x55b5590eb400 session 0x55b554f225a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:28.383328+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471384064 unmapped: 64839680 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:29.383503+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471384064 unmapped: 64839680 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 418 ms_handle_reset con 0x55b55537a400 session 0x55b553405680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:30.383630+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x19fe4c000/0x0/0x1bfc00000, data 0x514fb9f/0x5372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471384064 unmapped: 64839680 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:31.383791+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 65847296 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:32.383943+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 65847296 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55484a400 session 0x55b5559434a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5178847 data_alloc: 251658240 data_used: 50532352
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b554a5fc00 session 0x55b554abb2c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:33.384099+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55537a400 session 0x55b5559e21e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 470425600 unmapped: 65798144 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:34.384251+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55933e000 session 0x55b554ede960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b561042c00 session 0x55b5550ded20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 470425600 unmapped: 65798144 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:35.384435+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55484a400 session 0x55b555373680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466092032 unmapped: 70131712 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0c1c000/0x0/0x1bfc00000, data 0x437f741/0x45a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:36.384653+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466092032 unmapped: 70131712 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:37.384801+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466092032 unmapped: 70131712 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5034519 data_alloc: 234881024 data_used: 44539904
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b554b20000 session 0x55b553369860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554a5fc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0c1c000/0x0/0x1bfc00000, data 0x437f741/0x45a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:38.384989+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466100224 unmapped: 70123520 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:39.415497+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466108416 unmapped: 70115328 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:40.415975+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0c1c000/0x0/0x1bfc00000, data 0x437f741/0x45a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466108416 unmapped: 70115328 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:41.416475+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b552b37800 session 0x55b5559421e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55391f800 session 0x55b5527c7c20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 466108416 unmapped: 70115328 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.848965645s of 14.181877136s, submitted: 79
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a0c1c000/0x0/0x1bfc00000, data 0x437f741/0x45a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b554b20000 session 0x55b55506cb40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:42.416873+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460152832 unmapped: 76070912 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4876269 data_alloc: 234881024 data_used: 37863424
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:43.417251+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460152832 unmapped: 76070912 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a9d000/0x0/0x1bfc00000, data 0x34ff731/0x3721000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:44.417570+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55537a400 session 0x55b5533694a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460152832 unmapped: 76070912 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b552b37800 session 0x55b55395cf00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:45.417935+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460152832 unmapped: 76070912 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:46.418229+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460152832 unmapped: 76070912 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55391f800 session 0x55b55463a3c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a9d000/0x0/0x1bfc00000, data 0x34ff731/0x3721000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:47.418466+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55484a400 session 0x55b554f23a40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4881593 data_alloc: 234881024 data_used: 37863424
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:48.418641+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:49.418782+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:50.418902+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:51.419116+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:52.419322+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4913433 data_alloc: 234881024 data_used: 41832448
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:53.419533+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:54.419772+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:55.419957+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:56.420149+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:57.420366+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4913433 data_alloc: 234881024 data_used: 41832448
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:58.420565+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:59.420737+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:00.420891+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 460324864 unmapped: 75898880 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.183296204s of 19.240226746s, submitted: 20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:01.421069+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a1a72000/0x0/0x1bfc00000, data 0x3529741/0x374c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462659584 unmapped: 73564160 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:02.421215+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 462921728 unmapped: 73302016 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4986619 data_alloc: 234881024 data_used: 42168320
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:03.421377+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:04.421529+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a013c000/0x0/0x1bfc00000, data 0x3ca9741/0x3ecc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:05.421896+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:06.422054+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:07.422207+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4997317 data_alloc: 234881024 data_used: 41979904
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:08.422361+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464117760 unmapped: 72105984 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:09.422544+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464125952 unmapped: 72097792 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:10.422712+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x1a013c000/0x0/0x1bfc00000, data 0x3ca9741/0x3ecc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 464125952 unmapped: 72097792 heap: 536223744 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:11.422884+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.834084511s of 10.579282761s, submitted: 75
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55933e000 session 0x55b5550e7c20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b5572c2000 session 0x55b5534afc20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463462400 unmapped: 84836352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b552b37800 session 0x55b5527c74a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55391f800 session 0x55b5550e7680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 ms_handle_reset con 0x55b55484a400 session 0x55b5559d4000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:12.423052+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463462400 unmapped: 84836352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5098196 data_alloc: 234881024 data_used: 41979904
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:13.423255+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 heartbeat osd_stat(store_statfs(0x19f387000/0x0/0x1bfc00000, data 0x4a737a3/0x4c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463462400 unmapped: 84836352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:14.423395+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463462400 unmapped: 84836352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:15.423546+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463470592 unmapped: 84828160 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:16.423694+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 420 handle_osd_map epochs [420,420], i have 420, src has [1,420]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 420 ms_handle_reset con 0x55b55933e000 session 0x55b5559e2960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 420 heartbeat osd_stat(store_statfs(0x19f387000/0x0/0x1bfc00000, data 0x4a737a3/0x4c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463478784 unmapped: 84819968 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eb400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:17.424101+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b5590eb400 session 0x55b5534af4a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463486976 unmapped: 84811776 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5106645 data_alloc: 234881024 data_used: 41988096
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:18.424328+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463486976 unmapped: 84811776 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:19.424522+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b552b37800 session 0x55b5529fbe00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463495168 unmapped: 84803584 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:20.424764+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 463495168 unmapped: 84803584 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:21.424955+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f37f000/0x0/0x1bfc00000, data 0x4a7717c/0x4c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 81174528 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:22.425150+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467156992 unmapped: 81141760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5208026 data_alloc: 251658240 data_used: 56258560
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:23.425522+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467156992 unmapped: 81141760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:24.425817+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f37f000/0x0/0x1bfc00000, data 0x4a7717c/0x4c9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467156992 unmapped: 81141760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:25.426021+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467156992 unmapped: 81141760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:26.426215+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467156992 unmapped: 81141760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.357582092s of 15.588162422s, submitted: 51
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b552b36c00 session 0x55b55348af00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b55933e000 session 0x55b5527c7e00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:27.426401+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f37e000/0x0/0x1bfc00000, data 0x4a7718c/0x4ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467165184 unmapped: 81133568 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5209806 data_alloc: 251658240 data_used: 56262656
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:28.426872+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467173376 unmapped: 81125376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:29.427145+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f37e000/0x0/0x1bfc00000, data 0x4a7718c/0x4ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467173376 unmapped: 81125376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:30.427326+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467173376 unmapped: 81125376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:31.427515+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 467173376 unmapped: 81125376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:32.427668+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 471130112 unmapped: 77168640 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19f37e000/0x0/0x1bfc00000, data 0x4a7718c/0x4ca0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5262878 data_alloc: 251658240 data_used: 57511936
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:33.427948+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557e64c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b557e64c00 session 0x55b5559430e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473554944 unmapped: 74743808 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bf800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:34.428078+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473620480 unmapped: 74678272 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:35.428216+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 74596352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:36.428404+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc59000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 74596352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:37.428602+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 74596352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5278240 data_alloc: 251658240 data_used: 58134528
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:38.428814+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 74596352 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.711530685s of 11.976253510s, submitted: 89
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:39.429023+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc59000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:40.429255+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc61000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:41.429547+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc61000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:42.429742+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5269088 data_alloc: 251658240 data_used: 58146816
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:43.429900+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc61000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:44.430052+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc61000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:45.430197+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19dc61000/0x0/0x1bfc00000, data 0x4ff418c/0x521d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:46.430373+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473030656 unmapped: 75268096 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:47.430540+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555a1ec00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473161728 unmapped: 75137024 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5282034 data_alloc: 251658240 data_used: 59121664
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:48.430701+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473194496 unmapped: 75104256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.841229439s of 10.072642326s, submitted: 3
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:49.430838+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473235456 unmapped: 75063296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:50.430989+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473432064 unmapped: 74866688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b555a1ec00 session 0x55b5550e6000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b552b36c00 session 0x55b5559d5e00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b552b37800 session 0x55b555372000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557e64c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b557e64c00 session 0x55b554edf860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b55933e000 session 0x55b554652f00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:51.431186+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19db93000/0x0/0x1bfc00000, data 0x50f118c/0x52eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473432064 unmapped: 74866688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:52.431527+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6605.4 total, 600.0 interval
                                           Cumulative writes: 64K writes, 252K keys, 64K commit groups, 1.0 writes per commit group, ingest: 0.24 GB, 0.04 MB/s
                                           Cumulative WAL: 64K writes, 23K syncs, 2.77 writes per sync, written: 0.24 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4108 writes, 15K keys, 4108 commit groups, 1.0 writes per commit group, ingest: 16.42 MB, 0.03 MB/s
                                           Interval WAL: 4108 writes, 1697 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473432064 unmapped: 74866688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5343182 data_alloc: 251658240 data_used: 59523072
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:53.431710+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:54.431922+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d69e000/0x0/0x1bfc00000, data 0x55e618c/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:55.432121+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:56.432379+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:57.432510+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5343182 data_alloc: 251658240 data_used: 59523072
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:58.432693+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d69e000/0x0/0x1bfc00000, data 0x55e618c/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:59.432833+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:00.433071+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:01.433260+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d69e000/0x0/0x1bfc00000, data 0x55e618c/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:02.433401+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9a800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.267633438s of 13.161028862s, submitted: 30
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b552c9a800 session 0x55b55392be00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5345218 data_alloc: 251658240 data_used: 59523072
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:03.433659+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 473456640 unmapped: 74842112 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:04.433795+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475643904 unmapped: 72654848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d67a000/0x0/0x1bfc00000, data 0x560a18c/0x5804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:05.434064+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475717632 unmapped: 72581120 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:06.434192+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475717632 unmapped: 72581120 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d67a000/0x0/0x1bfc00000, data 0x560a18c/0x5804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:07.434376+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475717632 unmapped: 72581120 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5408066 data_alloc: 268435456 data_used: 64700416
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:08.434681+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 70508544 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:09.434918+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d44f000/0x0/0x1bfc00000, data 0x583518c/0x5a2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 70508544 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:10.435123+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 70508544 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:11.435467+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b5572c2800 session 0x55b553913860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b5546bf800 session 0x55b554abb0e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 70508544 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 heartbeat osd_stat(store_statfs(0x19d44f000/0x0/0x1bfc00000, data 0x583518c/0x5a2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557e64c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b557e64c00 session 0x55b554652b40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:12.435670+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475512832 unmapped: 72785920 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5400946 data_alloc: 268435456 data_used: 64700416
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:13.435839+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b55933e000 session 0x55b55506da40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933f400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475512832 unmapped: 72785920 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.608623505s of 11.626667023s, submitted: 10
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b55933f400 session 0x55b55506cd20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:14.436054+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475529216 unmapped: 72769536 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bf800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 ms_handle_reset con 0x55b5546bf800 session 0x55b555943860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b5572c2800 session 0x55b5550e72c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:15.436211+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479780864 unmapped: 68517888 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b554b20000 session 0x55b5529d05a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b55537a400 session 0x55b554edf680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:16.436785+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557e64c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b557e64c00 session 0x55b554abb4a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480083968 unmapped: 68214784 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 422 heartbeat osd_stat(store_statfs(0x19c9c0000/0x0/0x1bfc00000, data 0x6295e8c/0x64be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:17.437770+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482394112 unmapped: 65904640 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5497931 data_alloc: 268435456 data_used: 66588672
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:18.438067+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482394112 unmapped: 65904640 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bf800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b5546bf800 session 0x55b5553d23c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 422 ms_handle_reset con 0x55b554b20000 session 0x55b5550dfc20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:19.439060+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:20.439254+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:21.439977+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 422 heartbeat osd_stat(store_statfs(0x19d589000/0x0/0x1bfc00000, data 0x56c4e8c/0x58ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:22.440200+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5347449 data_alloc: 251658240 data_used: 59408384
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:23.440614+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:24.440881+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482410496 unmapped: 65888256 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.341896057s of 10.820668221s, submitted: 181
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 423 ms_handle_reset con 0x55b552b36c00 session 0x55b55463a1e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 423 ms_handle_reset con 0x55b552b37800 session 0x55b55463b4a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55537a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 423 heartbeat osd_stat(store_statfs(0x19d589000/0x0/0x1bfc00000, data 0x56c4e8c/0x58ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:25.441003+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 423 ms_handle_reset con 0x55b55537a400 session 0x55b553404000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478912512 unmapped: 69386240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:26.441153+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478912512 unmapped: 69386240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:27.441289+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478912512 unmapped: 69386240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:28.441481+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5149829 data_alloc: 251658240 data_used: 52277248
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478912512 unmapped: 69386240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:29.441663+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 423 heartbeat osd_stat(store_statfs(0x19e878000/0x0/0x1bfc00000, data 0x43dca3e/0x4606000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478912512 unmapped: 69386240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 424 ms_handle_reset con 0x55b552b36c00 session 0x55b5534a4f00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:30.441887+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479961088 unmapped: 68337664 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:31.442088+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479969280 unmapped: 68329472 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:32.442619+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479969280 unmapped: 68329472 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 424 ms_handle_reset con 0x55b55391f800 session 0x55b5522f2960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 424 ms_handle_reset con 0x55b55484a400 session 0x55b5553723c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:33.442783+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5151459 data_alloc: 251658240 data_used: 52285440
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 424 heartbeat osd_stat(store_statfs(0x19e875000/0x0/0x1bfc00000, data 0x43de740/0x4608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 424 ms_handle_reset con 0x55b552b37800 session 0x55b55336fa40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:34.443579+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:35.443976+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:36.444349+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:37.444706+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f87a000/0x0/0x1bfc00000, data 0x309926d/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:38.444877+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4896597 data_alloc: 234881024 data_used: 36200448
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:39.445085+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475422720 unmapped: 72876032 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:40.445386+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475430912 unmapped: 72867840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:41.445910+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f87a000/0x0/0x1bfc00000, data 0x309926d/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475430912 unmapped: 72867840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:42.446148+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546bf800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.086919785s of 18.053037643s, submitted: 115
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b5546bf800 session 0x55b5534a5860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475430912 unmapped: 72867840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b36c00 session 0x55b5533890e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:43.446297+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4892645 data_alloc: 234881024 data_used: 36855808
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475340800 unmapped: 72957952 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:44.446492+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475340800 unmapped: 72957952 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b37800 session 0x55b553388d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:45.446716+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475340800 unmapped: 72957952 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:46.447011+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19fbba000/0x0/0x1bfc00000, data 0x30992df/0x32c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475340800 unmapped: 72957952 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:47.447215+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55391f800 session 0x55b55336f860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:48.447378+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4895358 data_alloc: 234881024 data_used: 36859904
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55484a400 session 0x55b5553d21e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554b20000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b554b20000 session 0x55b553404b40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b36c00 session 0x55b55506c000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:49.447576+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:50.447768+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:51.447981+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:52.448209+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7eb000/0x0/0x1bfc00000, data 0x346927d/0x3693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475365376 unmapped: 72933376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:53.448402+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925088 data_alloc: 234881024 data_used: 36855808
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475373568 unmapped: 72925184 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:54.448681+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475373568 unmapped: 72925184 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:55.448881+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:56.449042+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7eb000/0x0/0x1bfc00000, data 0x346927d/0x3693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:57.449189+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:58.449368+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925088 data_alloc: 234881024 data_used: 36855808
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:59.449502+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:00.449704+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7eb000/0x0/0x1bfc00000, data 0x346927d/0x3693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:01.449925+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:02.450102+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475381760 unmapped: 72916992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:03.450272+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925088 data_alloc: 234881024 data_used: 36855808
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:04.450406+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7eb000/0x0/0x1bfc00000, data 0x346927d/0x3693000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:05.450601+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:06.450734+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b37800 session 0x55b5559e30e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:07.450914+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55391f800 session 0x55b55392ba40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:08.451095+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4925088 data_alloc: 234881024 data_used: 36855808
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55484a400 session 0x55b5522f23c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:09.451229+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572c2800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.342035294s of 26.467067719s, submitted: 28
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b5572c2800 session 0x55b553369680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:10.451402+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7ea000/0x0/0x1bfc00000, data 0x346928d/0x3694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:11.452167+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:12.452355+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:13.452519+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4954926 data_alloc: 234881024 data_used: 40792064
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:14.452669+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7ea000/0x0/0x1bfc00000, data 0x346928d/0x3694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:15.452802+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 72908800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f7ea000/0x0/0x1bfc00000, data 0x346928d/0x3694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,0,0,0,0,1,4])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:16.452955+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55391f800 session 0x55b55395d0e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55484a400 session 0x55b5553d2b40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55933e000 session 0x55b553405680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b553873c00 session 0x55b554f23a40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55579cc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55579cc00 session 0x55b5534af4a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f13b000/0x0/0x1bfc00000, data 0x3b1828d/0x3d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:17.453060+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:18.453210+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5011995 data_alloc: 234881024 data_used: 40792064
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:19.453454+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f13b000/0x0/0x1bfc00000, data 0x3b1828d/0x3d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:20.453660+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f13b000/0x0/0x1bfc00000, data 0x3b1828d/0x3d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f13b000/0x0/0x1bfc00000, data 0x3b1828d/0x3d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:21.453838+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475652096 unmapped: 72646656 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.285288811s of 12.952371597s, submitted: 23
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:22.453953+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eeec000/0x0/0x1bfc00000, data 0x3d6728d/0x3f92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 71925760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:23.454091+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eeec000/0x0/0x1bfc00000, data 0x3d6728d/0x3f92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5032441 data_alloc: 234881024 data_used: 41152512
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 71925760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:24.454213+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 71925760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:25.454382+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b553873c00 session 0x55b5559423c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 71925760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:26.454578+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55391f800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476372992 unmapped: 71925760 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:27.454746+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 475529216 unmapped: 72769536 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:28.454855+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5089453 data_alloc: 251658240 data_used: 47894528
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19ee9d000/0x0/0x1bfc00000, data 0x3dae28d/0x3fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:29.455062+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19ee9d000/0x0/0x1bfc00000, data 0x3dae28d/0x3fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:30.455197+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:31.455487+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:32.455648+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:33.455810+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5089453 data_alloc: 251658240 data_used: 47894528
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:34.455970+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19ee9d000/0x0/0x1bfc00000, data 0x3dae28d/0x3fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19ee9d000/0x0/0x1bfc00000, data 0x3dae28d/0x3fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:35.456083+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476422144 unmapped: 71876608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:36.456250+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.175427437s of 14.262315750s, submitted: 32
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476291072 unmapped: 72007680 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:37.456393+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b36c00 session 0x55b554eded20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b37800 session 0x55b5526efc20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19eea5000/0x0/0x1bfc00000, data 0x3dae28d/0x3fd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 71999488 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:38.456551+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5088621 data_alloc: 251658240 data_used: 47878144
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 476299264 unmapped: 71999488 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:39.456903+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55933e000 session 0x55b5553d3c20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479723520 unmapped: 68575232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:40.457057+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e4a9000/0x0/0x1bfc00000, data 0x47a527d/0x49cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,11,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480108544 unmapped: 68190208 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:41.457266+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 68321280 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:42.457497+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:43.457654+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179248 data_alloc: 251658240 data_used: 49754112
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:44.457903+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:45.458102+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e48d000/0x0/0x1bfc00000, data 0x47bf27d/0x49e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:46.458279+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:47.458487+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:48.458647+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e48d000/0x0/0x1bfc00000, data 0x47bf27d/0x49e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179568 data_alloc: 251658240 data_used: 49762304
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:49.458878+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 67239936 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:50.459021+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 67231744 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:51.459270+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 67231744 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:52.459456+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 67231744 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:53.459966+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179568 data_alloc: 251658240 data_used: 49762304
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e48d000/0x0/0x1bfc00000, data 0x47bf27d/0x49e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 67231744 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:54.460143+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 67231744 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:55.460294+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 67223552 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:56.460545+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 67223552 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:57.460722+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 67223552 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:58.460859+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179888 data_alloc: 251658240 data_used: 49770496
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e48d000/0x0/0x1bfc00000, data 0x47bf27d/0x49e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481083392 unmapped: 67215360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:59.461052+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19e48d000/0x0/0x1bfc00000, data 0x47bf27d/0x49e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481083392 unmapped: 67215360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:00.461223+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.632940292s of 24.362520218s, submitted: 111
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55391f800 session 0x55b5559425a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55484a400 session 0x55b5534ae1e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481083392 unmapped: 67215360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:01.461374+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b36c00 session 0x55b5550e74a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:02.461493+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:03.461614+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4985776 data_alloc: 234881024 data_used: 40955904
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:04.461744+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:05.461810+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f555000/0x0/0x1bfc00000, data 0x36ff27d/0x3929000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:06.461999+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:07.462159+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f555000/0x0/0x1bfc00000, data 0x36ff27d/0x3929000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:08.462282+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b37800 session 0x55b5550def00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4985776 data_alloc: 234881024 data_used: 40955904
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b553873c00 session 0x55b554edfa40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:09.462482+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f555000/0x0/0x1bfc00000, data 0x36ff27d/0x3929000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:10.462702+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55933e000 session 0x55b553426000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b552b36c00 session 0x55b5550df2c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:11.462942+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.036637306s of 10.409495354s, submitted: 42
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:12.463103+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:13.463285+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992192 data_alloc: 234881024 data_used: 41013248
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:14.463468+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:15.463620+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:16.463766+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:17.463918+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:18.464072+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992192 data_alloc: 234881024 data_used: 41013248
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:19.464204+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:20.464325+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:21.464517+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:22.464635+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:23.464767+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478666752 unmapped: 69632000 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992992 data_alloc: 234881024 data_used: 41033728
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.889412880s of 12.891948700s, submitted: 1
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:24.464888+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:25.465027+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:26.465154+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:27.465326+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:28.465504+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4997344 data_alloc: 234881024 data_used: 41099264
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:29.465717+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:30.465872+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478691328 unmapped: 69607424 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:31.466041+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:32.466275+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:33.466376+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4997344 data_alloc: 234881024 data_used: 41099264
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:34.466492+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:35.466632+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:36.466858+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:37.467017+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478699520 unmapped: 69599232 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 heartbeat osd_stat(store_statfs(0x19f553000/0x0/0x1bfc00000, data 0x36ff2b0/0x392b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933e000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 ms_handle_reset con 0x55b55933e000 session 0x55b5550e6f00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.681811333s of 13.694669724s, submitted: 15
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b55484a400 session 0x55b5539123c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:38.467168+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478707712 unmapped: 69591040 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5004238 data_alloc: 234881024 data_used: 41791488
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:39.467311+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478715904 unmapped: 69582848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e7000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556880c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b556880c00 session 0x55b55395cf00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b55395d2c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55467d800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b55467d800 session 0x55b55395c960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b554f23860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5572e7000 session 0x55b5553d32c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b5522f2b40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:40.467443+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478863360 unmapped: 69435392 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b55484a400 session 0x55b5522f2000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556880c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b556880c00 session 0x55b55348f0e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b553404f00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b553404960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b55484a400 session 0x55b553405e00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:41.467713+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478896128 unmapped: 69402624 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:42.467832+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478904320 unmapped: 69394432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:43.467925+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478904320 unmapped: 69394432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5065770 data_alloc: 234881024 data_used: 41791488
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:44.468146+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478904320 unmapped: 69394432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:45.468365+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478920704 unmapped: 69378048 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:46.468547+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478969856 unmapped: 69328896 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:47.468688+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478969856 unmapped: 69328896 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.289855003s of 10.216338158s, submitted: 213
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:48.468801+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478969856 unmapped: 69328896 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5065770 data_alloc: 234881024 data_used: 41791488
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:49.469024+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478969856 unmapped: 69328896 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:50.469155+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:51.469351+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:52.469492+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:53.469683+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5065770 data_alloc: 234881024 data_used: 41791488
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:54.469842+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:55.470039+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e7000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5572e7000 session 0x55b553426f00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:56.470168+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b55484b400 session 0x55b55506cf00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:57.470314+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:58.470446+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5065770 data_alloc: 234881024 data_used: 41791488
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b5533890e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b5553d23c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:59.473120+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480059392 unmapped: 68239360 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5572e7000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:00.473270+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 68321280 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:01.473473+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 68321280 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:02.473685+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 68321280 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:03.473810+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 68321280 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5110410 data_alloc: 251658240 data_used: 47935488
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.608759880s of 15.859127045s, submitted: 95
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5590eac00 session 0x55b553405e00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:04.473975+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479985664 unmapped: 68313088 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e43c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee36000/0x0/0x1bfc00000, data 0x3e19f8c/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:05.474096+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480002048 unmapped: 68296704 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:06.474220+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:07.474393+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:08.474473+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5124223 data_alloc: 251658240 data_used: 47972352
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:09.474593+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19ee2a000/0x0/0x1bfc00000, data 0x3e5ffaf/0x4054000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:10.474722+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:11.474893+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 68288512 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:12.475054+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 64290816 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19e10d000/0x0/0x1bfc00000, data 0x4b6efaf/0x4d63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:13.475192+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484859904 unmapped: 63438848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5243865 data_alloc: 251658240 data_used: 49958912
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:14.475269+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484859904 unmapped: 63438848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:15.475435+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484859904 unmapped: 63438848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:16.475596+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484859904 unmapped: 63438848 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19e0d3000/0x0/0x1bfc00000, data 0x4bb0faf/0x4da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.874275208s of 13.158001900s, submitted: 154
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:17.475774+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484990976 unmapped: 63307776 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:18.475950+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 63299584 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258465 data_alloc: 251658240 data_used: 50102272
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:19.476071+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 63299584 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:20.476200+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485007360 unmapped: 63291392 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df13000/0x0/0x1bfc00000, data 0x4d76faf/0x4f6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:21.476342+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df13000/0x0/0x1bfc00000, data 0x4d76faf/0x4f6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:22.476494+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:23.476634+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5259373 data_alloc: 251658240 data_used: 50102272
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:24.476778+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df13000/0x0/0x1bfc00000, data 0x4d76faf/0x4f6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:25.476918+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:26.477057+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:27.477217+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:28.477347+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5259921 data_alloc: 251658240 data_used: 50110464
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df12000/0x0/0x1bfc00000, data 0x4d77faf/0x4f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:29.477498+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df12000/0x0/0x1bfc00000, data 0x4d77faf/0x4f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:30.477655+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:31.477837+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:32.478011+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df12000/0x0/0x1bfc00000, data 0x4d77faf/0x4f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:33.478155+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5260561 data_alloc: 251658240 data_used: 50126848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:34.478289+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485015552 unmapped: 63283200 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552ef8400 session 0x55b5559e2960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b559e43c00 session 0x55b55395d2c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.206138611s of 17.766921997s, submitted: 5
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df12000/0x0/0x1bfc00000, data 0x4d77faf/0x4f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:35.478533+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b553388d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19df12000/0x0/0x1bfc00000, data 0x4d77faf/0x4f6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485023744 unmapped: 63275008 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552ef8400 session 0x55b5558fe960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b554652f00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5590eac00 session 0x55b5550dfe00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e43c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b559e43c00 session 0x55b5529fa000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:36.478748+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b5559d4780
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552ef8400 session 0x55b5558ff2c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b55392b680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5590eac00 session 0x55b552ee52c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b554e0c000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b554e0c000 session 0x55b5559423c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:37.479027+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19dc7e000/0x0/0x1bfc00000, data 0x500af9c/0x51ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:38.479160+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5284477 data_alloc: 251658240 data_used: 50122752
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:39.479259+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:40.479497+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552b36c00 session 0x55b554aba1e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:41.479671+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b552ef8400 session 0x55b553404960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 heartbeat osd_stat(store_statfs(0x19de43000/0x0/0x1bfc00000, data 0x4e46f9c/0x503b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:42.479830+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 63250432 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:43.480022+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 ms_handle_reset con 0x55b5535ef000 session 0x55b5558ffe00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485056512 unmapped: 63242240 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5268413 data_alloc: 251658240 data_used: 50044928
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 427 ms_handle_reset con 0x55b5590eac00 session 0x55b554f230e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:44.480146+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55467e800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485081088 unmapped: 63217664 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.754717827s of 10.119092941s, submitted: 62
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 427 ms_handle_reset con 0x55b55467e800 session 0x55b55348af00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:45.480318+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 427 heartbeat osd_stat(store_statfs(0x19de48000/0x0/0x1bfc00000, data 0x4e03cbc/0x5034000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485089280 unmapped: 63209472 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 427 ms_handle_reset con 0x55b552b37800 session 0x55b55271ef00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 427 ms_handle_reset con 0x55b553873c00 session 0x55b552927a40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:46.480442+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485113856 unmapped: 63184896 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 427 ms_handle_reset con 0x55b5535ef000 session 0x55b55392a960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:47.480633+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:48.480782+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5292442 data_alloc: 251658240 data_used: 52748288
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:49.480902+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19de26000/0x0/0x1bfc00000, data 0x4e27cac/0x5057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:50.481023+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:51.481201+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:52.481367+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19de23000/0x0/0x1bfc00000, data 0x4e2985e/0x505a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:53.481511+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5297240 data_alloc: 251658240 data_used: 52822016
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:54.481650+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19de23000/0x0/0x1bfc00000, data 0x4e2985e/0x505a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1cd7f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:55.481802+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:56.482019+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485523456 unmapped: 62775296 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5590eac00 session 0x55b55392be00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b557231400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:57.482217+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.247345924s of 12.328977585s, submitted: 46
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485531648 unmapped: 62767104 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b557231400 session 0x55b55392a5a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:58.482534+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 490012672 unmapped: 58286080 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d876000/0x0/0x1bfc00000, data 0x4fc884e/0x51f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,10,2])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5281142 data_alloc: 251658240 data_used: 47751168
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:59.482727+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485605376 unmapped: 62693376 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:00.482905+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485621760 unmapped: 62676992 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:01.483148+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485629952 unmapped: 62668800 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:02.483290+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d865000/0x0/0x1bfc00000, data 0x4fd884e/0x5208000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:03.483441+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5279842 data_alloc: 251658240 data_used: 47960064
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:04.483576+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:05.483735+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:06.483892+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:07.484196+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:08.484356+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d865000/0x0/0x1bfc00000, data 0x4fd884e/0x5208000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5279842 data_alloc: 251658240 data_used: 47960064
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:09.484513+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485638144 unmapped: 62660608 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.823444366s of 12.734890938s, submitted: 62
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5559e2000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b553426b40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:10.484659+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485654528 unmapped: 62644224 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:11.484869+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485654528 unmapped: 62644224 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:12.485021+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485654528 unmapped: 62644224 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5558fef00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:13.485181+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e30e000/0x0/0x1bfc00000, data 0x453183e/0x4760000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5167756 data_alloc: 234881024 data_used: 45006848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:14.485303+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:15.485450+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:16.485569+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:17.485782+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:18.485954+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x450d81b/0x473b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5167756 data_alloc: 234881024 data_used: 45006848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:19.486121+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:20.486261+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x450d81b/0x473b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55484a400 session 0x55b5559d5860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5572e7000 session 0x55b55506cf00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485670912 unmapped: 62627840 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.102567673s of 10.957017899s, submitted: 59
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5559e32c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:21.486496+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485679104 unmapped: 62619648 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:22.486631+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485679104 unmapped: 62619648 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:23.486750+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:24.486923+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:25.487093+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:26.487240+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:27.487386+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:28.487559+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:29.487678+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:30.487811+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 62611456 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:31.487982+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 62603264 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:32.488116+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 62603264 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:33.488272+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 62603264 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:34.488400+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 62603264 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:35.488633+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 62595072 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:36.488873+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 62595072 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:37.489056+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 62595072 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:38.489251+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 62595072 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:39.489526+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 62586880 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:40.489673+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:41.489921+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:42.490086+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:43.490225+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:44.490365+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:45.490494+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:46.490642+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 62578688 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:47.490822+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 62570496 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:48.491003+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 62570496 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4950720 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:49.491152+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f7a3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 62570496 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:50.491347+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 62570496 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:51.491551+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534a4d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b55463ba40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55484a400 session 0x55b5534e4000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 62570496 heap: 548298752 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b55271eb40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.911609650s of 30.949949265s, submitted: 21
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b552927c20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534af4a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b5533685a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:52.491670+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b55392c960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484a400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55484a400 session 0x55b555943c20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 73064448 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:53.491796+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 73064448 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5028950 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ed22000/0x0/0x1bfc00000, data 0x3b1e81b/0x3d4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:54.491945+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 73064448 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:55.492093+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b555373a40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485744640 unmapped: 73056256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534a4d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:56.492275+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485744640 unmapped: 73056256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ed22000/0x0/0x1bfc00000, data 0x3b1e81b/0x3d4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:57.492490+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485744640 unmapped: 73056256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:58.492644+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b5559e32c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b55506cf00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485892096 unmapped: 72908800 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5033528 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:59.492751+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485892096 unmapped: 72908800 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:00.492870+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:01.493027+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecfd000/0x0/0x1bfc00000, data 0x3b4282b/0x3d71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:02.493196+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:03.493331+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5111768 data_alloc: 251658240 data_used: 47833088
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:04.493514+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:05.493637+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:06.493784+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecfd000/0x0/0x1bfc00000, data 0x3b4282b/0x3d71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:07.493919+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:08.494065+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5111768 data_alloc: 251658240 data_used: 47833088
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:09.494229+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:10.494347+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 491307008 unmapped: 67493888 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ecfd000/0x0/0x1bfc00000, data 0x3b4282b/0x3d71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.233640671s of 19.318712234s, submitted: 9
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:11.494546+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:12.494669+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:13.494822+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5170232 data_alloc: 251658240 data_used: 48013312
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:14.495194+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:15.495465+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:16.496455+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e637000/0x0/0x1bfc00000, data 0x420882b/0x4437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:17.496660+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:18.496921+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e637000/0x0/0x1bfc00000, data 0x420882b/0x4437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5170232 data_alloc: 251658240 data_used: 48013312
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:19.497057+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e637000/0x0/0x1bfc00000, data 0x420882b/0x4437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:20.497216+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:21.497390+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:22.497580+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:23.497731+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e637000/0x0/0x1bfc00000, data 0x420882b/0x4437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5170232 data_alloc: 251658240 data_used: 48013312
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:24.497921+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:25.498165+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:26.498478+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:27.498617+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e637000/0x0/0x1bfc00000, data 0x420882b/0x4437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559dd8000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b559dd8000 session 0x55b5550e70e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b551f4e5a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5539125a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b5553d2f00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.500871658s of 16.613708496s, submitted: 41
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b553388d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5582a5400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5582a5400 session 0x55b5529fbe00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b55348b4a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5550e7680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:28.498726+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b553404000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5194787 data_alloc: 251658240 data_used: 48013312
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:29.498852+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:30.499007+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:31.499205+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e330000/0x0/0x1bfc00000, data 0x450e83b/0x473e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:32.499340+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:33.499518+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5194787 data_alloc: 251658240 data_used: 48013312
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:34.499657+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492912640 unmapped: 65888256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:35.499817+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b55463a3c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492920832 unmapped: 65880064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484bc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:36.499951+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e30c000/0x0/0x1bfc00000, data 0x453283b/0x4762000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 492920832 unmapped: 65880064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:37.500160+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:38.500549+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:39.500945+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5220027 data_alloc: 251658240 data_used: 51191808
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:40.501360+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:41.501622+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:42.501956+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e30c000/0x0/0x1bfc00000, data 0x453283b/0x4762000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:43.502178+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.162872314s of 16.235620499s, submitted: 16
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:44.502343+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5220335 data_alloc: 251658240 data_used: 51191808
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:45.502524+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19e30a000/0x0/0x1bfc00000, data 0x453383b/0x4763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:46.502725+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:47.502885+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493035520 unmapped: 65765376 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:48.503049+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 493862912 unmapped: 64937984 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:49.503257+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5319061 data_alloc: 251658240 data_used: 51576832
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d6de000/0x0/0x1bfc00000, data 0x515a83b/0x538a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:50.503509+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:51.503725+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d6a3000/0x0/0x1bfc00000, data 0x518d83b/0x53bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:52.503877+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:53.504029+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d6a3000/0x0/0x1bfc00000, data 0x518d83b/0x53bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d18f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:54.504276+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5329285 data_alloc: 251658240 data_used: 51396608
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.729024887s of 10.971504211s, submitted: 91
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:55.504470+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:56.504664+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19d28f000/0x0/0x1bfc00000, data 0x518f83b/0x53bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d5af9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55484bc00 session 0x55b55336e780
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5534aef00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494960640 unmapped: 63840256 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:57.504787+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5550e7a40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:58.504952+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f676000/0x0/0x1bfc00000, data 0x420982b/0x4438000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:59.505194+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5178639 data_alloc: 251658240 data_used: 48013312
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:00.505469+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:01.505704+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:02.505844+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f676000/0x0/0x1bfc00000, data 0x420982b/0x4438000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 494968832 unmapped: 63832064 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:03.506026+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5558fef00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5590eac00 session 0x55b5559e2000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:04.506145+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5538834a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:05.506306+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:06.506492+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:07.506677+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:08.506872+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:09.506985+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:10.507116+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:11.507277+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:12.507445+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:13.507564+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:14.507700+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:15.507854+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:16.507987+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:17.508145+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:18.508279+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:19.508394+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:20.508569+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:21.508730+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:22.508852+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480346112 unmapped: 78454784 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:23.509062+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:24.509216+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:25.509358+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:26.509488+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:27.509637+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:28.509792+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:29.510006+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 78446592 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:30.510161+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 78438400 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:31.510370+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 78438400 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:32.510578+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 78438400 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:33.510794+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 78438400 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:34.510935+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4971374 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5534ae780
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534e5a40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5559e21e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b551f4fa40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5590eac00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.365959167s of 39.855525970s, submitted: 54
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5590eac00 session 0x55b5550df2c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b554abb860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b554f225a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 78430208 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5558fe5a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b552ee4000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:35.511054+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 78430208 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:36.511192+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 78430208 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075c000/0x0/0x1bfc00000, data 0x312481b/0x3352000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:37.511368+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 78430208 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:38.511575+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552ef8400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552ef8400 session 0x55b5529fa1e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:39.511777+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4981270 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:40.511925+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5559d4000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:41.512144+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075c000/0x0/0x1bfc00000, data 0x312481b/0x3352000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:42.512285+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5559e3a40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5529d01e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:43.512510+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5535ef000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:44.512647+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4990358 data_alloc: 234881024 data_used: 37367808
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:45.512908+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:46.513187+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:47.513377+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5534054a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.563242912s of 12.624855042s, submitted: 15
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5535ef000 session 0x55b554f22780
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:48.513598+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:49.513743+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4990226 data_alloc: 234881024 data_used: 37367808
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:50.513871+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:51.514107+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:52.514351+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:53.514577+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:54.514773+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480378880 unmapped: 78422016 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4990134 data_alloc: 234881024 data_used: 37371904
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:55.514930+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480395264 unmapped: 78405632 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5526ee000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b555372000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:56.515100+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:57.515292+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:58.515496+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:59.515626+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.851888657s of 11.862312317s, submitted: 3
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4990294 data_alloc: 234881024 data_used: 37376000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:00.515793+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a075a000/0x0/0x1bfc00000, data 0x312484e/0x3354000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:01.516031+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480403456 unmapped: 78397440 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b554ede000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b55348f680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55484bc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:02.516222+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55484bc00 session 0x55b5534aeb40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:03.516520+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:04.516798+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:05.517034+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:06.517508+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:07.517787+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:08.518089+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:09.518408+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:10.518657+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 78389248 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:11.518917+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:12.519080+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:13.519290+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:14.519454+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:15.519597+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:16.519757+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:17.519878+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:18.520069+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480419840 unmapped: 78381056 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-12-06T08:25:19.520204+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _finish_auth 0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:20.291509+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:20.520408+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:21.520672+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:22.520808+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:23.520960+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:24.521105+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:25.521262+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:26.521409+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 78372864 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:27.521590+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:28.521741+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:29.521933+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:30.522115+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:31.522298+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e2000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:32.522589+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:33.522741+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:34.522897+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480436224 unmapped: 78364672 heap: 558800896 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4977071 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.667713165s of 35.853076935s, submitted: 37
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:35.523034+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486121472 unmapped: 76357632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b55271eb40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5529d1c20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5559421e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5534270e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546cc000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5546cc000 session 0x55b55336fc20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:36.523184+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480567296 unmapped: 81911808 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19fc76000/0x0/0x1bfc00000, data 0x3c0b80b/0x3e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:37.523314+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b553404960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b553405e00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480583680 unmapped: 81895424 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5558ff860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b55392a5a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b556882800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b556882800 session 0x55b5559434a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f3da000/0x0/0x1bfc00000, data 0x44a780b/0x46d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5550df860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:38.523516+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480583680 unmapped: 81895424 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534b8d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5559d50e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5529d1860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55933f000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55933f000 session 0x55b5559d4f00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5558fe960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:39.523661+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5558fe3c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b5529d1860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480788480 unmapped: 81690624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5534b8d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ba000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5546ba000 session 0x55b5550df860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5558ff860
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b55392a5a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5162126 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b553404960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:40.523830+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480788480 unmapped: 81690624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:41.524016+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480788480 unmapped: 81690624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f090000/0x0/0x1bfc00000, data 0x47ef82b/0x4a1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:42.524239+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482213888 unmapped: 80265216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:43.524477+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f090000/0x0/0x1bfc00000, data 0x47ef82b/0x4a1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482213888 unmapped: 80265216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e43400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b559e43400 session 0x55b55392c960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:44.524632+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482369536 unmapped: 80109568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5241412 data_alloc: 251658240 data_used: 47656960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:45.524767+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482369536 unmapped: 80109568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:46.524904+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482369536 unmapped: 80109568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:47.525021+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f06c000/0x0/0x1bfc00000, data 0x481382b/0x4a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484114432 unmapped: 78364672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:48.525191+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.800157547s of 13.137754440s, submitted: 45
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5546ca000 session 0x55b552ee4780
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484114432 unmapped: 78364672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:49.525352+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484114432 unmapped: 78364672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f048000/0x0/0x1bfc00000, data 0x483782b/0x4a66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5308328 data_alloc: 251658240 data_used: 55156736
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:50.525492+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485416960 unmapped: 77062144 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:51.525648+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485416960 unmapped: 77062144 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:52.525781+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 485416960 unmapped: 77062144 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19f048000/0x0/0x1bfc00000, data 0x483782b/0x4a66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,3,3,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5534b9680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5534af4a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:53.525919+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553873c00 session 0x55b553405a40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 75759616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:54.526062+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 75759616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5347188 data_alloc: 251658240 data_used: 55115776
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:55.526181+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 75759616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553872000 session 0x55b5534a4d20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552c9bc00 session 0x55b554abb680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:56.526320+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 75726848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552c9bc00 session 0x55b55463af00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19ebad000/0x0/0x1bfc00000, data 0x4cd281b/0x4f00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:57.526483+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 76251136 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:58.526616+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 76251136 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:59.526755+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5559421e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5546ca400 session 0x55b5529d1c20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 76251136 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5219927 data_alloc: 251658240 data_used: 46284800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.214290619s of 11.467612267s, submitted: 101
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b5558ffe00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:00.526930+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:01.527129+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:02.527314+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:03.527482+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:04.527613+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:05.527745+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:06.527905+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:07.528043+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:08.528177+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:09.528317+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:10.528467+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:11.528635+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:12.528790+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:13.528953+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:14.529589+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:15.529793+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:16.529972+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:17.530106+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:18.530307+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:19.530494+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:20.530636+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:21.530790+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:22.530896+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:23.531036+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:24.531184+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:25.531368+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:26.531616+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:27.531785+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:28.531911+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:29.532060+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:30.532232+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:31.532394+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:32.532565+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:33.532682+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:34.532802+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:35.532885+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:36.532996+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:37.533137+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:38.533286+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:39.533393+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:40.533572+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:41.533759+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:42.533932+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:43.534161+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:44.534299+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:45.534491+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:46.534657+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:47.534831+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:48.535010+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:49.535151+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:50.535313+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:51.535521+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:52.535681+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:53.535820+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:54.535997+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481902592 unmapped: 80576512 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4995570 data_alloc: 234881024 data_used: 36814848
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:55.536120+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481910784 unmapped: 80568320 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:56.536265+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481910784 unmapped: 80568320 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:57.536531+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481910784 unmapped: 80568320 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:58.536678+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481910784 unmapped: 80568320 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b37800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.952594757s of 58.991859436s, submitted: 17
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b37800 session 0x55b5533681e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552b36c00 session 0x55b55392ba40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:59.536799+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482541568 unmapped: 79937536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4999730 data_alloc: 234881024 data_used: 36880384
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:00.536944+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b552c9bc00 session 0x55b5533efe00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482541568 unmapped: 79937536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:01.537102+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482541568 unmapped: 79937536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:02.537266+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x1a07e3000/0x0/0x1bfc00000, data 0x309e80b/0x32cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482541568 unmapped: 79937536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b5546ca400 session 0x55b5522f3e00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:03.537403+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482549760 unmapped: 79929344 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b55538b400 session 0x55b5550deb40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553872000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:04.537572+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 ms_handle_reset con 0x55b553872000 session 0x55b5534261e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482557952 unmapped: 79921152 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5083435 data_alloc: 234881024 data_used: 36880384
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:05.537705+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482557952 unmapped: 79921152 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:06.537828+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482557952 unmapped: 79921152 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:07.537966+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482557952 unmapped: 79921152 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 heartbeat osd_stat(store_statfs(0x19fd31000/0x0/0x1bfc00000, data 0x3b5080b/0x3d7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:08.538159+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482557952 unmapped: 79921152 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.000549316s of 10.172869682s, submitted: 41
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:09.538314+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482369536 unmapped: 80109568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5091959 data_alloc: 234881024 data_used: 36888576
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 430 ms_handle_reset con 0x55b552b36c00 session 0x55b55506cb40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:10.538518+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482385920 unmapped: 80093184 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 431 heartbeat osd_stat(store_statfs(0x19fd27000/0x0/0x1bfc00000, data 0x3b5421a/0x3d85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:11.538689+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 431 ms_handle_reset con 0x55b5546ca400 session 0x55b55392a780
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482328576 unmapped: 80150528 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 432 ms_handle_reset con 0x55b552c9bc00 session 0x55b553883680
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:12.538835+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482336768 unmapped: 80142336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:13.539012+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 432 ms_handle_reset con 0x55b55538b400 session 0x55b55395cf00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482336768 unmapped: 80142336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:14.539158+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482336768 unmapped: 80142336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5153117 data_alloc: 234881024 data_used: 36892672
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559e43400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 432 ms_handle_reset con 0x55b559e43400 session 0x55b5550de000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:15.539298+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 432 ms_handle_reset con 0x55b552b36c00 session 0x55b554aba5a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482549760 unmapped: 79929344 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 432 heartbeat osd_stat(store_statfs(0x19ef60000/0x0/0x1bfc00000, data 0x4919c4c/0x4b4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _renew_subs
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 433 ms_handle_reset con 0x55b552c9bc00 session 0x55b55395d2c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 433 ms_handle_reset con 0x55b5546ca400 session 0x55b554f23a40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:16.539497+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 433 ms_handle_reset con 0x55b553873c00 session 0x55b5559434a0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482566144 unmapped: 79912960 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:17.539730+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482566144 unmapped: 79912960 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 433 heartbeat osd_stat(store_statfs(0x19eb58000/0x0/0x1bfc00000, data 0x4d1e996/0x4f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:18.539885+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482566144 unmapped: 79912960 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:19.540078+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482574336 unmapped: 79904768 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5242599 data_alloc: 234881024 data_used: 36904960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.926185608s of 11.230964661s, submitted: 72
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:20.540301+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:21.540503+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:22.540690+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:23.540834+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:24.541024+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5244245 data_alloc: 234881024 data_used: 36904960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:25.541188+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:26.541392+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482582528 unmapped: 79896576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:27.541684+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:28.541847+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:29.542099+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b55538b400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b55538b400 session 0x55b5553732c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5244245 data_alloc: 234881024 data_used: 36904960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:30.542305+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b552b36c00 session 0x55b55395c960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:31.542551+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b552c9bc00 session 0x55b5550e63c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b553873c00 session 0x55b5558fe000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b559dd1c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.029902458s of 12.038014412s, submitted: 10
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:32.542734+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 79880192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:33.542979+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481034240 unmapped: 81444864 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:34.543228+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5327897 data_alloc: 234881024 data_used: 44769280
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:35.543682+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:36.543821+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:37.543968+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:38.544103+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:39.544243+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5327897 data_alloc: 234881024 data_used: 44769280
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:40.544387+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:41.544630+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:42.544800+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481337344 unmapped: 81141760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:43.544986+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481345536 unmapped: 81133568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:44.545145+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481345536 unmapped: 81133568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.649188042s of 12.812686920s, submitted: 1
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5328013 data_alloc: 234881024 data_used: 44781568
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19eb55000/0x0/0x1bfc00000, data 0x4d20548/0x4f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:45.545309+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 483876864 unmapped: 78602240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19e2c9000/0x0/0x1bfc00000, data 0x5930548/0x57e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:46.545474+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484933632 unmapped: 77545472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:47.545683+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484941824 unmapped: 77537280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:48.545845+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484941824 unmapped: 77537280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:49.545997+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484941824 unmapped: 77537280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5427449 data_alloc: 234881024 data_used: 44855296
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:50.546164+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484941824 unmapped: 77537280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:51.546372+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484810752 unmapped: 77668352 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19e2a2000/0x0/0x1bfc00000, data 0x5957548/0x580c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7205.4 total, 600.0 interval
                                           Cumulative writes: 67K writes, 265K keys, 67K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.04 MB/s
                                           Cumulative WAL: 67K writes, 24K syncs, 2.75 writes per sync, written: 0.25 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3267 writes, 12K keys, 3267 commit groups, 1.0 writes per commit group, ingest: 12.60 MB, 0.02 MB/s
                                           Interval WAL: 3267 writes, 1351 syncs, 2.42 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b5546ca400 session 0x55b5553d2b40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:52.546517+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b559dd1c00 session 0x55b55348fa40
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 heartbeat osd_stat(store_statfs(0x19e2a2000/0x0/0x1bfc00000, data 0x5957548/0x580c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 ms_handle_reset con 0x55b552b36c00 session 0x55b55348e1e0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 77660160 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:53.546719+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 77660160 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:54.546860+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 77660160 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5424449 data_alloc: 234881024 data_used: 44859392
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:55.547027+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 77660160 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552c9bc00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.548606873s of 10.875352859s, submitted: 89
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b553873c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:56.547199+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 435 ms_handle_reset con 0x55b552c9bc00 session 0x55b5533683c0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b5546ca400
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 435 ms_handle_reset con 0x55b5546ca400 session 0x55b5559d4960
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478429184 unmapped: 84049920 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:57.547329+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b555389000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 436 ms_handle_reset con 0x55b553873c00 session 0x55b5534e4000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 436 ms_handle_reset con 0x55b555389000 session 0x55b5553d3c20
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:58.547511+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 436 heartbeat osd_stat(store_statfs(0x1a07c9000/0x0/0x1bfc00000, data 0x30ad05a/0x32e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:59.547672+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5052791 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:00.547829+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:01.548027+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:02.548170+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 436 heartbeat osd_stat(store_statfs(0x1a07c9000/0x0/0x1bfc00000, data 0x30ad05a/0x32e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:03.548349+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:04.548499+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5052791 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:05.548653+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:06.548792+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:07.548994+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 84549632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:08.549155+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 84549632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:09.549335+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 84541440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:10.549504+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 84541440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:11.549755+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 84541440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:12.549921+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 84541440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:13.550106+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477945856 unmapped: 84533248 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:14.550322+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477945856 unmapped: 84533248 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:15.550507+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:16.550721+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:17.550908+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:18.551069+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:19.551218+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:20.551365+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:21.551588+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477954048 unmapped: 84525056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:22.551789+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:23.551945+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:24.552084+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:25.552279+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:26.552444+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:27.552631+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:28.552815+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:29.552983+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:30.553157+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:31.553357+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:32.553509+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:33.553712+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:34.553896+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:35.554036+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:36.554317+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:37.554478+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477986816 unmapped: 84492288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:38.554615+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477995008 unmapped: 84484096 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:39.554790+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:40.554993+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:41.555195+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:42.555383+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:43.555562+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:44.555707+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:45.555824+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478003200 unmapped: 84475904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:46.555999+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:47.556271+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:48.556469+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:49.556681+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: mgrc ms_handle_reset ms_handle_reset con 0x55b55ab1f800
Dec 06 08:46:58 compute-1 ceph-osd[79002]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/798720280
Dec 06 08:46:58 compute-1 ceph-osd[79002]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/798720280,v1:192.168.122.100:6801/798720280]
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: get_auth_request con 0x55b555389000 auth_method 0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: mgrc handle_mgr_configure stats_period=5
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:50.556845+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:51.557166+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:52.557374+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:53.557574+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478011392 unmapped: 84467712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:54.557756+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:55.557903+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:56.558080+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:57.558283+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:58.558446+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:59.558607+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:00.558752+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:01.559004+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478019584 unmapped: 84459520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:02.559211+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 84451328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:03.559387+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 84451328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:04.559664+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 84451328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:05.559920+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478027776 unmapped: 84451328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:06.560098+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478044160 unmapped: 84434944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:07.560324+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478044160 unmapped: 84434944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:08.560520+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478044160 unmapped: 84434944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:09.560747+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478044160 unmapped: 84434944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:10.560896+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:11.561180+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:12.561369+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:13.561599+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:14.561900+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:15.562068+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:16.562282+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:17.562472+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 84426752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:18.562648+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:19.562872+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:20.563116+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:21.563533+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:22.563800+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:23.564040+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:24.564229+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 84418560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:25.564466+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478142464 unmapped: 84336640 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:26.564617+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'config diff' '{prefix=config diff}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'config show' '{prefix=config show}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'counter dump' '{prefix=counter dump}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'counter schema' '{prefix=counter schema}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477970432 unmapped: 84508672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:27.564817+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477962240 unmapped: 84516864 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:28.565037+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'log dump' '{prefix=log dump}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 488865792 unmapped: 73613312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:29.566550+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'perf dump' '{prefix=perf dump}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'perf schema' '{prefix=perf schema}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477626368 unmapped: 84852736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:30.566734+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477626368 unmapped: 84852736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:31.566954+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477626368 unmapped: 84852736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:32.567148+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477626368 unmapped: 84852736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:33.567333+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477626368 unmapped: 84852736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:34.567468+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 84844544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:35.567653+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 84844544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:36.567845+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 84844544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:37.568053+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 84844544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:38.568354+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 84844544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:39.568467+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 84844544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:40.568636+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 84844544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:41.568840+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477634560 unmapped: 84844544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:42.569079+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477667328 unmapped: 84811776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:43.569257+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477667328 unmapped: 84811776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:44.569462+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477667328 unmapped: 84811776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:45.569668+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477667328 unmapped: 84811776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:46.569889+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477667328 unmapped: 84811776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:47.570068+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477667328 unmapped: 84811776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:48.570234+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477667328 unmapped: 84811776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:49.570458+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477667328 unmapped: 84811776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:50.570637+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:51.570857+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477683712 unmapped: 84795392 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:52.571039+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477691904 unmapped: 84787200 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:53.571195+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477691904 unmapped: 84787200 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:54.571378+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477691904 unmapped: 84787200 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:55.571551+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477691904 unmapped: 84787200 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:56.571694+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477691904 unmapped: 84787200 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:57.571819+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477691904 unmapped: 84787200 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:58.571950+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477691904 unmapped: 84787200 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:59.572107+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477700096 unmapped: 84779008 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:00.572275+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477708288 unmapped: 84770816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:01.572476+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477708288 unmapped: 84770816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:02.572615+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477708288 unmapped: 84770816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:03.572777+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477708288 unmapped: 84770816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:04.572946+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477708288 unmapped: 84770816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:05.573119+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477708288 unmapped: 84770816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:06.573298+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477708288 unmapped: 84770816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:07.573550+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 84762624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:08.573731+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 84762624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:09.573882+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 84762624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:10.574033+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 84762624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:11.574263+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 84762624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:12.574467+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 84762624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:13.574595+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 84762624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:14.574741+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477716480 unmapped: 84762624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:15.574905+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477749248 unmapped: 84729856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:16.575084+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477749248 unmapped: 84729856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:17.575262+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477749248 unmapped: 84729856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:18.575460+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477749248 unmapped: 84729856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:19.575600+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477757440 unmapped: 84721664 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:20.575749+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477757440 unmapped: 84721664 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:21.575914+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477757440 unmapped: 84721664 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:22.576077+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477757440 unmapped: 84721664 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:23.576226+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477773824 unmapped: 84705280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:24.576503+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477773824 unmapped: 84705280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:25.576638+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477773824 unmapped: 84705280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:26.576836+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477773824 unmapped: 84705280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:27.577062+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477773824 unmapped: 84705280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:28.577270+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477773824 unmapped: 84705280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:29.577485+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477773824 unmapped: 84705280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:30.577626+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477773824 unmapped: 84705280 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055765 data_alloc: 234881024 data_used: 33239040
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:31.577806+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 84688896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:32.577983+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 84688896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:33.578141+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 84688896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:34.578339+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 84688896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c6000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:35.578512+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 84688896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 159.212417603s of 159.564025879s, submitted: 123
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33243136
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:36.578669+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 84688896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:37.578823+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 84688896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:38.578948+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477790208 unmapped: 84688896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:39.579140+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477798400 unmapped: 84680704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:40.579279+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477798400 unmapped: 84680704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:41.579457+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477855744 unmapped: 84623360 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:42.579646+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477863936 unmapped: 84615168 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:43.579797+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477880320 unmapped: 84598784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:44.579918+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477880320 unmapped: 84598784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:45.580082+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.852506638s of 10.005393028s, submitted: 144
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477880320 unmapped: 84598784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:46.580249+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 84590592 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:47.580396+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477880320 unmapped: 84598784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:48.580546+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477880320 unmapped: 84598784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:49.580733+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477880320 unmapped: 84598784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:50.580848+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477904896 unmapped: 84574208 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055117 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [0,0,0,1])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:51.580973+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477913088 unmapped: 84566016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:52.581135+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477921280 unmapped: 84557824 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:53.581291+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 84541440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:54.581511+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478986240 unmapped: 83492864 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:55.581655+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 478994432 unmapped: 83484672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.743370056s of 10.503570557s, submitted: 140
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:56.581829+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479002624 unmapped: 83476480 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a07c7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c14f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:57.581982+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479027200 unmapped: 83451904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:58.582161+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479027200 unmapped: 83451904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:59.582314+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:00.582557+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:01.582827+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:02.583029+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:03.583255+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:04.583495+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:05.583760+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:06.583982+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:07.584238+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:08.584453+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:09.584620+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:10.584769+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479043584 unmapped: 83435520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:11.585050+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479051776 unmapped: 83427328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:12.585215+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479051776 unmapped: 83427328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:13.585516+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479051776 unmapped: 83427328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:14.585696+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479051776 unmapped: 83427328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:15.585869+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479059968 unmapped: 83419136 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:16.586100+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479059968 unmapped: 83419136 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:17.586316+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479059968 unmapped: 83419136 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:18.586526+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479068160 unmapped: 83410944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:19.586749+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479076352 unmapped: 83402752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:20.586948+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479076352 unmapped: 83402752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:21.587269+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479076352 unmapped: 83402752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:22.587568+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479076352 unmapped: 83402752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:23.587837+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479076352 unmapped: 83402752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:24.588055+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479076352 unmapped: 83402752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:25.588258+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479076352 unmapped: 83402752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:26.588521+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479076352 unmapped: 83402752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:27.588680+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479084544 unmapped: 83394560 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:28.588883+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479092736 unmapped: 83386368 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:29.589045+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479092736 unmapped: 83386368 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:30.589264+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479092736 unmapped: 83386368 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:31.589502+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479092736 unmapped: 83386368 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:32.589670+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479092736 unmapped: 83386368 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:33.589861+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479092736 unmapped: 83386368 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:34.590109+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479092736 unmapped: 83386368 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:35.590327+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479109120 unmapped: 83369984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:36.590544+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479109120 unmapped: 83369984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:37.590731+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479109120 unmapped: 83369984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 ms_handle_reset con 0x55b554a5fc00 session 0x55b55395c000
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: handle_auth_request added challenge on 0x55b552b36c00
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:38.590926+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479109120 unmapped: 83369984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:39.591078+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479109120 unmapped: 83369984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:40.591300+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479109120 unmapped: 83369984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:41.591535+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479109120 unmapped: 83369984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:42.591696+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479109120 unmapped: 83369984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:43.591902+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479117312 unmapped: 83361792 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:44.592037+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479117312 unmapped: 83361792 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:45.592174+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479125504 unmapped: 83353600 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:46.592382+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479125504 unmapped: 83353600 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:47.592492+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479125504 unmapped: 83353600 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:48.592631+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479125504 unmapped: 83353600 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:49.592868+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479125504 unmapped: 83353600 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:50.593056+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479125504 unmapped: 83353600 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:51.593247+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 83337216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:52.593393+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 83337216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:53.593718+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 83337216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:54.593869+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 83337216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:55.594008+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 83337216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:56.594236+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 83337216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:57.594510+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 83337216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:58.594742+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479141888 unmapped: 83337216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:59.594940+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479150080 unmapped: 83329024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:00.595148+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479150080 unmapped: 83329024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:01.595530+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479150080 unmapped: 83329024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:02.595765+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479150080 unmapped: 83329024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:03.596051+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479150080 unmapped: 83329024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:04.596258+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479158272 unmapped: 83320832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:05.596534+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479158272 unmapped: 83320832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:06.596681+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479158272 unmapped: 83320832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:07.596847+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479174656 unmapped: 83304448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:08.597040+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479174656 unmapped: 83304448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:09.597242+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479174656 unmapped: 83304448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:10.597403+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479174656 unmapped: 83304448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:11.597646+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479174656 unmapped: 83304448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:12.597839+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479174656 unmapped: 83304448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:13.598017+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479174656 unmapped: 83304448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:14.598201+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479174656 unmapped: 83304448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:15.598438+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 83296256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:16.598647+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 83296256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:17.598835+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 83296256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:18.598993+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 83296256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:19.599185+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 83296256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:20.599364+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 83296256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:21.599628+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 83296256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:22.599855+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 83296256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:23.600026+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479191040 unmapped: 83288064 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:24.600337+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479191040 unmapped: 83288064 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:25.600507+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479191040 unmapped: 83288064 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:26.600674+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479191040 unmapped: 83288064 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:27.600866+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479191040 unmapped: 83288064 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:28.601793+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479191040 unmapped: 83288064 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:29.601999+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479191040 unmapped: 83288064 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:30.602148+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479191040 unmapped: 83288064 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:31.602343+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479199232 unmapped: 83279872 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:32.602570+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479199232 unmapped: 83279872 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:33.602710+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479199232 unmapped: 83279872 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:34.602909+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479199232 unmapped: 83279872 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:35.603082+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479199232 unmapped: 83279872 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:36.603253+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479199232 unmapped: 83279872 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:37.603492+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479199232 unmapped: 83279872 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:38.603729+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479199232 unmapped: 83279872 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:39.603885+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479207424 unmapped: 83271680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:40.604051+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479215616 unmapped: 83263488 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:41.604351+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479215616 unmapped: 83263488 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:42.604562+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479215616 unmapped: 83263488 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:43.604758+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479223808 unmapped: 83255296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:44.604949+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479223808 unmapped: 83255296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:45.605104+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479223808 unmapped: 83255296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:46.605274+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 83247104 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:47.605469+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 83247104 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:48.605756+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 83247104 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:49.605934+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 83247104 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:50.606154+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 83247104 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:51.606503+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 83247104 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:52.606720+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 83247104 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:53.606913+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 83247104 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:54.607197+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479248384 unmapped: 83230720 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:55.607393+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479248384 unmapped: 83230720 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:56.607617+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479248384 unmapped: 83230720 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:57.607775+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479248384 unmapped: 83230720 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:58.607974+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479248384 unmapped: 83230720 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:59.608161+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479248384 unmapped: 83230720 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:00.608360+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479248384 unmapped: 83230720 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:01.608776+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479248384 unmapped: 83230720 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:02.608955+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479256576 unmapped: 83222528 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:03.609163+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479264768 unmapped: 83214336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:04.609317+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479264768 unmapped: 83214336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:05.609514+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479264768 unmapped: 83214336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:06.609660+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479264768 unmapped: 83214336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:07.609818+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479264768 unmapped: 83214336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:08.610010+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479264768 unmapped: 83214336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:09.610162+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479264768 unmapped: 83214336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:10.610306+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 83189760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:11.610487+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 83189760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:12.610628+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 83189760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:13.610792+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 83189760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:14.611002+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 83189760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:15.611141+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 83189760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:16.611336+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 83189760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:17.611497+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479289344 unmapped: 83189760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:18.611703+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 83181568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:19.611906+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 83181568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:20.612036+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 83181568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:21.612268+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 83181568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:22.612495+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 83181568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:23.612642+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 83181568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:24.612780+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 83181568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:25.612990+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479297536 unmapped: 83181568 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:26.613175+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479313920 unmapped: 83165184 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:27.613359+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479313920 unmapped: 83165184 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:28.613526+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479313920 unmapped: 83165184 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:29.613660+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479313920 unmapped: 83165184 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:30.613822+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479313920 unmapped: 83165184 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:31.614026+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479313920 unmapped: 83165184 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:32.614226+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479313920 unmapped: 83165184 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:33.614385+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479313920 unmapped: 83165184 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:34.614549+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479330304 unmapped: 83148800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:35.614736+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479330304 unmapped: 83148800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:36.614875+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479330304 unmapped: 83148800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:37.615103+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479330304 unmapped: 83148800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:38.615404+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479330304 unmapped: 83148800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:39.615723+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479330304 unmapped: 83148800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:40.615935+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479330304 unmapped: 83148800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:41.616144+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479330304 unmapped: 83148800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:42.616343+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479346688 unmapped: 83132416 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:43.616536+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479346688 unmapped: 83132416 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:44.616784+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 83124224 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:45.617058+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 83124224 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:46.617236+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 83124224 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:47.617403+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 83124224 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:48.617641+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 83124224 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:49.617841+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479354880 unmapped: 83124224 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:50.617988+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479371264 unmapped: 83107840 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:51.618234+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479371264 unmapped: 83107840 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:52.618398+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479371264 unmapped: 83107840 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:53.618619+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479371264 unmapped: 83107840 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:54.618837+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479371264 unmapped: 83107840 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:55.618983+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479371264 unmapped: 83107840 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:56.619217+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479371264 unmapped: 83107840 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:57.619443+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479371264 unmapped: 83107840 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:58.619621+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479395840 unmapped: 83083264 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:59.619788+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479404032 unmapped: 83075072 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:00.619957+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479412224 unmapped: 83066880 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:01.620157+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479412224 unmapped: 83066880 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:02.620334+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479412224 unmapped: 83066880 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:03.620505+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479412224 unmapped: 83066880 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:04.620709+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479412224 unmapped: 83066880 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:05.620890+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479412224 unmapped: 83066880 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:06.621146+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:07.621374+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:08.621552+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:09.621701+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:10.621881+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:11.622110+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:12.622376+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:13.622574+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:14.622751+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:15.622934+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:16.623117+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:17.623297+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:18.623490+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:19.623628+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479428608 unmapped: 83050496 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:20.623786+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479436800 unmapped: 83042304 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:21.623989+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479436800 unmapped: 83042304 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:22.624115+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479444992 unmapped: 83034112 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:23.624360+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479444992 unmapped: 83034112 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:24.624498+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479444992 unmapped: 83034112 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:25.624696+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479444992 unmapped: 83034112 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:26.624828+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479444992 unmapped: 83034112 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:27.624984+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479444992 unmapped: 83034112 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:28.625132+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479444992 unmapped: 83034112 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:29.625301+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479453184 unmapped: 83025920 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:30.625495+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479469568 unmapped: 83009536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:31.625670+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479469568 unmapped: 83009536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:32.625857+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479469568 unmapped: 83009536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:33.626016+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:34.626149+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479469568 unmapped: 83009536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:35.626294+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479469568 unmapped: 83009536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:36.626482+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479469568 unmapped: 83009536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:37.626626+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479469568 unmapped: 83009536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:38.626779+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479469568 unmapped: 83009536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:39.626946+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479494144 unmapped: 82984960 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:40.627076+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479494144 unmapped: 82984960 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:41.627265+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479494144 unmapped: 82984960 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:42.627469+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479494144 unmapped: 82984960 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:43.627689+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479494144 unmapped: 82984960 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:44.627833+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479502336 unmapped: 82976768 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:45.627987+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479502336 unmapped: 82976768 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:46.628176+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479502336 unmapped: 82976768 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:47.628338+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479510528 unmapped: 82968576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:48.628531+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479510528 unmapped: 82968576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:49.628805+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479510528 unmapped: 82968576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:50.628959+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479510528 unmapped: 82968576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:51.629128+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479510528 unmapped: 82968576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:52.629308+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479510528 unmapped: 82968576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:53.629517+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479510528 unmapped: 82968576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:54.629734+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479510528 unmapped: 82968576 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:55.629912+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479526912 unmapped: 82952192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:56.630126+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479526912 unmapped: 82952192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:57.630280+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479526912 unmapped: 82952192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:58.630448+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479526912 unmapped: 82952192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:59.630680+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479526912 unmapped: 82952192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:00.630866+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479526912 unmapped: 82952192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:01.631078+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479526912 unmapped: 82952192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:02.631251+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479526912 unmapped: 82952192 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:03.631410+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479535104 unmapped: 82944000 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:04.631572+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479535104 unmapped: 82944000 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:05.631730+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479551488 unmapped: 82927616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:06.632123+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479551488 unmapped: 82927616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:07.632269+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479551488 unmapped: 82927616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:08.632391+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479551488 unmapped: 82927616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:09.632520+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479551488 unmapped: 82927616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:10.632665+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479551488 unmapped: 82927616 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:11.632843+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479559680 unmapped: 82919424 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:12.632982+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479559680 unmapped: 82919424 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:13.633118+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479559680 unmapped: 82919424 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:14.633309+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479559680 unmapped: 82919424 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:15.633474+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479559680 unmapped: 82919424 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:16.633605+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479559680 unmapped: 82919424 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:17.633764+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479559680 unmapped: 82919424 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:18.633904+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479567872 unmapped: 82911232 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:19.634078+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479584256 unmapped: 82894848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:20.634237+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479584256 unmapped: 82894848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:21.634395+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479592448 unmapped: 82886656 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:22.634577+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479592448 unmapped: 82886656 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:23.634707+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479600640 unmapped: 82878464 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:24.634842+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479600640 unmapped: 82878464 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:25.635113+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479600640 unmapped: 82878464 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:26.635344+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479600640 unmapped: 82878464 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:27.635513+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479617024 unmapped: 82862080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:28.635694+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479617024 unmapped: 82862080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:29.635871+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479617024 unmapped: 82862080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:30.636009+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479617024 unmapped: 82862080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:31.636128+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479617024 unmapped: 82862080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:32.636253+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479617024 unmapped: 82862080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:33.636473+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479617024 unmapped: 82862080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:34.636586+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479617024 unmapped: 82862080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:35.636755+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479625216 unmapped: 82853888 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:36.636914+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479625216 unmapped: 82853888 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:37.637097+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479625216 unmapped: 82853888 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:38.637281+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479625216 unmapped: 82853888 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:39.637465+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479625216 unmapped: 82853888 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:40.637623+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479641600 unmapped: 82837504 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:41.637836+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479641600 unmapped: 82837504 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:42.637999+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479641600 unmapped: 82837504 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:43.638157+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479649792 unmapped: 82829312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:44.638294+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479649792 unmapped: 82829312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:45.638525+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479649792 unmapped: 82829312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:46.638703+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479649792 unmapped: 82829312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:47.638871+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479649792 unmapped: 82829312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:48.639045+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479649792 unmapped: 82829312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:49.639183+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479649792 unmapped: 82829312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:50.639318+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479657984 unmapped: 82821120 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:51.639499+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479674368 unmapped: 82804736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:52.639641+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479674368 unmapped: 82804736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:53.639769+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479674368 unmapped: 82804736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:54.639998+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479674368 unmapped: 82804736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:55.640128+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479674368 unmapped: 82804736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:56.640297+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479674368 unmapped: 82804736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:57.640453+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479674368 unmapped: 82804736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:58.640571+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479674368 unmapped: 82804736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:59.640725+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479690752 unmapped: 82788352 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:00.640893+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479690752 unmapped: 82788352 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:01.641251+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479690752 unmapped: 82788352 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:02.641543+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479690752 unmapped: 82788352 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:03.641700+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479707136 unmapped: 82771968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:04.641856+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479707136 unmapped: 82771968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:05.642027+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479707136 unmapped: 82771968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:06.642234+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479707136 unmapped: 82771968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:07.642364+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479715328 unmapped: 82763776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:08.642540+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479715328 unmapped: 82763776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:09.642737+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479715328 unmapped: 82763776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:10.642894+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479715328 unmapped: 82763776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:11.643055+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479715328 unmapped: 82763776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:12.643225+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479715328 unmapped: 82763776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:13.643378+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479715328 unmapped: 82763776 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:14.643500+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479723520 unmapped: 82755584 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:15.643622+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479731712 unmapped: 82747392 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:16.643792+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479731712 unmapped: 82747392 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:17.643958+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479731712 unmapped: 82747392 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:18.644175+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479731712 unmapped: 82747392 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:19.644327+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479731712 unmapped: 82747392 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:20.644537+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479731712 unmapped: 82747392 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:21.644778+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479731712 unmapped: 82747392 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:22.644941+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479731712 unmapped: 82747392 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:23.645089+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479756288 unmapped: 82722816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:24.645247+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479756288 unmapped: 82722816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:25.645438+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479756288 unmapped: 82722816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:26.645636+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479756288 unmapped: 82722816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:27.645776+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479756288 unmapped: 82722816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:28.645918+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479756288 unmapped: 82722816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:29.646634+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479756288 unmapped: 82722816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:30.646802+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479756288 unmapped: 82722816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:31.646985+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479772672 unmapped: 82706432 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:32.647178+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479772672 unmapped: 82706432 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:33.647321+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479780864 unmapped: 82698240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:34.647476+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479780864 unmapped: 82698240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:35.647609+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479780864 unmapped: 82698240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:36.647742+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479780864 unmapped: 82698240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:37.647924+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479780864 unmapped: 82698240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:38.648084+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479789056 unmapped: 82690048 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:39.648236+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479797248 unmapped: 82681856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:40.648377+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479797248 unmapped: 82681856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:41.648552+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479797248 unmapped: 82681856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:42.648666+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479797248 unmapped: 82681856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:43.648871+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479797248 unmapped: 82681856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:44.649015+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479797248 unmapped: 82681856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:45.649185+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479805440 unmapped: 82673664 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:46.649351+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479805440 unmapped: 82673664 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:47.649529+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479813632 unmapped: 82665472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:48.649671+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479813632 unmapped: 82665472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:49.649818+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479813632 unmapped: 82665472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:50.649963+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479813632 unmapped: 82665472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:51.650191+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479813632 unmapped: 82665472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:52.650462+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479813632 unmapped: 82665472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:53.650616+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479813632 unmapped: 82665472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:54.650754+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479813632 unmapped: 82665472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:55.650883+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479838208 unmapped: 82640896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:56.651029+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479838208 unmapped: 82640896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:57.651161+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479838208 unmapped: 82640896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:58.651345+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479838208 unmapped: 82640896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:59.651576+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479838208 unmapped: 82640896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:00.651768+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479838208 unmapped: 82640896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:01.651937+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479838208 unmapped: 82640896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:02.652117+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479838208 unmapped: 82640896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:03.652306+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479838208 unmapped: 82640896 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:04.652532+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479846400 unmapped: 82632704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:05.652693+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479846400 unmapped: 82632704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:06.652830+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479846400 unmapped: 82632704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:07.652981+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479846400 unmapped: 82632704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:08.653112+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479846400 unmapped: 82632704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:09.653272+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479846400 unmapped: 82632704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:10.653449+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479870976 unmapped: 82608128 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:11.653648+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479870976 unmapped: 82608128 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:12.653813+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479870976 unmapped: 82608128 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:13.654546+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479870976 unmapped: 82608128 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:14.654678+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479870976 unmapped: 82608128 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:15.654853+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479870976 unmapped: 82608128 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:16.655006+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479870976 unmapped: 82608128 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:17.655117+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479870976 unmapped: 82608128 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:18.655279+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479879168 unmapped: 82599936 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:19.655393+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479879168 unmapped: 82599936 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:20.655537+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479879168 unmapped: 82599936 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:21.655665+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479887360 unmapped: 82591744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:22.655765+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479887360 unmapped: 82591744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:23.655989+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479887360 unmapped: 82591744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:24.656171+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479887360 unmapped: 82591744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:25.656327+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479895552 unmapped: 82583552 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:26.656491+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479903744 unmapped: 82575360 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:27.656704+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479903744 unmapped: 82575360 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:28.656845+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479903744 unmapped: 82575360 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:29.657081+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479903744 unmapped: 82575360 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:30.657246+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479903744 unmapped: 82575360 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:31.657509+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479903744 unmapped: 82575360 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:32.657705+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479903744 unmapped: 82575360 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:33.657872+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479903744 unmapped: 82575360 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:34.658027+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479928320 unmapped: 82550784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:35.658276+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479928320 unmapped: 82550784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:36.658561+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479928320 unmapped: 82550784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:37.658730+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479928320 unmapped: 82550784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:38.658848+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479928320 unmapped: 82550784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:39.658992+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479928320 unmapped: 82550784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:40.659160+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479928320 unmapped: 82550784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:41.659355+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479928320 unmapped: 82550784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:42.659508+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479944704 unmapped: 82534400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:43.659639+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479944704 unmapped: 82534400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:44.659785+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479944704 unmapped: 82534400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:45.660031+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479944704 unmapped: 82534400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:46.660242+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479944704 unmapped: 82534400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:47.660480+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479944704 unmapped: 82534400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:48.660652+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479944704 unmapped: 82534400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:49.661069+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479961088 unmapped: 82518016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:50.661382+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 82501632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:51.661645+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 82501632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7805.4 total, 600.0 interval
                                           Cumulative writes: 68K writes, 266K keys, 68K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.03 MB/s
                                           Cumulative WAL: 68K writes, 25K syncs, 2.74 writes per sync, written: 0.25 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 883 writes, 1887 keys, 883 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s
                                           Interval WAL: 883 writes, 423 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:52.661794+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 82501632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:53.661980+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 82501632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:54.662160+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 82501632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:55.662351+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 82501632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:56.662497+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 82501632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:57.662658+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479977472 unmapped: 82501632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:58.662805+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479993856 unmapped: 82485248 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:59.662931+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 479993856 unmapped: 82485248 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:00.663126+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480002048 unmapped: 82477056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:01.663344+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480002048 unmapped: 82477056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:02.663496+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480002048 unmapped: 82477056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:03.663750+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480002048 unmapped: 82477056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:04.663961+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480002048 unmapped: 82477056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:05.664117+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480002048 unmapped: 82477056 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:06.664298+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 82468864 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:07.664478+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480010240 unmapped: 82468864 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:08.664633+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 82460672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:09.664824+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 82460672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:10.665017+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 82460672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:11.665227+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 82460672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:12.665372+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 82460672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:13.665531+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480018432 unmapped: 82460672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:14.665721+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480034816 unmapped: 82444288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:15.665880+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480034816 unmapped: 82444288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:16.666030+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480034816 unmapped: 82444288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:17.666167+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480034816 unmapped: 82444288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:18.666376+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480034816 unmapped: 82444288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:19.666556+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480034816 unmapped: 82444288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:20.666704+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480034816 unmapped: 82444288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:21.666873+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480034816 unmapped: 82444288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:22.667028+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 82436096 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:23.667206+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 82436096 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:24.667531+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 82436096 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:25.667780+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 82436096 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:26.667985+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 82436096 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:27.668206+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 82436096 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:28.668348+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480043008 unmapped: 82436096 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:29.668520+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480051200 unmapped: 82427904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:30.668691+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480067584 unmapped: 82411520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:31.668843+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480067584 unmapped: 82411520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:32.669007+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480067584 unmapped: 82411520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:33.669180+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480067584 unmapped: 82411520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:34.669456+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480067584 unmapped: 82411520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:35.669587+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480075776 unmapped: 82403328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:36.669753+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480075776 unmapped: 82403328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:37.669911+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:38.670112+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480075776 unmapped: 82403328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:39.670328+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480092160 unmapped: 82386944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:40.670506+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480092160 unmapped: 82386944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:41.670701+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480092160 unmapped: 82386944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:42.670865+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480092160 unmapped: 82386944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:43.671049+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480092160 unmapped: 82386944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:44.671184+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480092160 unmapped: 82386944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:45.671389+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480092160 unmapped: 82386944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:46.671667+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480092160 unmapped: 82386944 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:47.671826+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480116736 unmapped: 82362368 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:48.672027+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 82354176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:49.672234+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 82354176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:50.672407+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 82354176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:51.672599+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 82354176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:52.672765+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 82354176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:53.672981+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 82354176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:54.673144+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480124928 unmapped: 82354176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:55.673313+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480133120 unmapped: 82345984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:56.673497+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480133120 unmapped: 82345984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:57.673624+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480141312 unmapped: 82337792 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 234881024 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:58.673785+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480141312 unmapped: 82337792 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:59.673956+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480141312 unmapped: 82337792 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:00.674180+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480141312 unmapped: 82337792 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:01.674381+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480141312 unmapped: 82337792 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:02.674485+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480141312 unmapped: 82337792 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:03.674659+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480165888 unmapped: 82313216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:04.674802+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480165888 unmapped: 82313216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:05.675038+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480165888 unmapped: 82313216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:06.675310+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480165888 unmapped: 82313216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:07.675479+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480165888 unmapped: 82313216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:08.675729+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480165888 unmapped: 82313216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:09.675984+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480165888 unmapped: 82313216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:10.676168+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480165888 unmapped: 82313216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:11.676381+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 82296832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:12.676573+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 82296832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:13.676722+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 82296832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:14.676871+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 82296832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:15.677008+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 82296832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:16.677147+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 82296832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:17.677313+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 82296832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:18.677478+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480182272 unmapped: 82296832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:19.677633+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 82280448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:20.677833+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 82280448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:21.678016+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 82280448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:22.678176+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 82280448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:23.678367+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 82280448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:24.678644+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 82280448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:25.678813+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 82280448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:26.679009+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480198656 unmapped: 82280448 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:27.679193+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480231424 unmapped: 82247680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:28.679342+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480231424 unmapped: 82247680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:29.679539+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480231424 unmapped: 82247680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:30.679759+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480231424 unmapped: 82247680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:31.679982+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480231424 unmapped: 82247680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:32.680116+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480231424 unmapped: 82247680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:33.680238+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480231424 unmapped: 82247680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:34.680466+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480231424 unmapped: 82247680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:35.680662+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 82239488 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:36.681734+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 82239488 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:37.681933+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 82239488 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:38.682134+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 82239488 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:39.682283+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 82239488 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 06 08:46:58 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/364251483' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:40.682477+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480239616 unmapped: 82239488 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:41.682695+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480247808 unmapped: 82231296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:42.682873+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480247808 unmapped: 82231296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:43.683053+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480256000 unmapped: 82223104 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:44.683202+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480256000 unmapped: 82223104 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:45.683354+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480256000 unmapped: 82223104 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:46.683503+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480264192 unmapped: 82214912 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:47.683670+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480264192 unmapped: 82214912 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:48.683857+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480264192 unmapped: 82214912 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:49.684054+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480264192 unmapped: 82214912 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:50.684212+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480272384 unmapped: 82206720 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:51.684402+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 82198528 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:52.684627+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 82198528 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:53.684828+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 82198528 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:54.684977+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 82198528 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:55.685136+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480280576 unmapped: 82198528 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:56.685322+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480288768 unmapped: 82190336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:57.685500+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480288768 unmapped: 82190336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:58.685683+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480288768 unmapped: 82190336 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:59.685914+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 82165760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:00.686139+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 82165760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:01.686475+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 82165760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:02.686654+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 82165760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:03.686894+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 82165760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:04.687148+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 82165760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:05.687327+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 82165760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:06.687493+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480313344 unmapped: 82165760 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:07.687659+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480329728 unmapped: 82149376 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:08.687823+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480329728 unmapped: 82149376 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:09.688056+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480329728 unmapped: 82149376 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:10.688202+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480329728 unmapped: 82149376 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:11.688457+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480329728 unmapped: 82149376 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:12.688635+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480329728 unmapped: 82149376 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:13.688791+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480329728 unmapped: 82149376 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:14.688939+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480329728 unmapped: 82149376 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:15.689083+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 82124800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:16.689214+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 82124800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:17.689438+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 82124800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:18.689592+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 82124800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:19.689735+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 82124800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:20.689906+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 82124800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:21.690095+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 82124800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:22.690217+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480354304 unmapped: 82124800 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:23.690388+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 82116608 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:24.690520+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 82116608 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:25.690668+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 82116608 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:26.690836+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 82116608 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:27.690975+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 82116608 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:28.691118+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 82116608 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:29.691296+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 82116608 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:30.691508+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480362496 unmapped: 82116608 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:31.691680+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 82108416 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:32.691830+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 82108416 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:33.692015+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 82108416 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:34.692230+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480370688 unmapped: 82108416 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:35.692397+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480387072 unmapped: 82092032 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:36.692617+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480387072 unmapped: 82092032 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:37.692784+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480387072 unmapped: 82092032 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:38.692962+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 82067456 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:39.693168+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 582.967895508s of 583.694030762s, submitted: 46
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 82067456 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:40.693314+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 82067456 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:41.693492+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 82067456 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:42.693637+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 82067456 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:43.693810+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480411648 unmapped: 82067456 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:44.693934+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 480428032 unmapped: 82051072 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:45.694056+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481517568 unmapped: 80961536 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:46.694197+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481525760 unmapped: 80953344 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:47.694363+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481533952 unmapped: 80945152 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055117 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:48.694489+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481583104 unmapped: 80896000 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:49.694636+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.382668495s of 10.001228333s, submitted: 252
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:50.694810+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:51.695007+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:52.695212+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:53.695375+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:54.695604+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:55.695880+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:56.696121+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:57.696294+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:58.696493+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:59.696619+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:00.696775+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:01.697003+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:02.697143+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:03.697327+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:04.697483+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:05.697614+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:06.697786+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:07.697929+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:08.698073+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:09.698209+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 80855040 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:10.698389+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 80846848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:11.698659+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 80846848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:12.698807+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 80846848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:13.698944+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 80846848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:14.699088+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 80846848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:15.699237+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 80846848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:16.699387+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 80846848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:17.699570+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 80846848 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:18.699702+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481656832 unmapped: 80822272 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:19.699872+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481656832 unmapped: 80822272 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:20.699998+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481656832 unmapped: 80822272 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:21.700193+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481656832 unmapped: 80822272 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:22.700392+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481656832 unmapped: 80822272 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:23.700600+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481656832 unmapped: 80822272 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:24.700795+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481656832 unmapped: 80822272 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:25.700943+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481656832 unmapped: 80822272 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:26.701237+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481665024 unmapped: 80814080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:27.701404+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481665024 unmapped: 80814080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:28.701706+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481665024 unmapped: 80814080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:29.701858+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481665024 unmapped: 80814080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:30.702087+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481665024 unmapped: 80814080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:31.702321+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481665024 unmapped: 80814080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:32.702509+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481665024 unmapped: 80814080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:33.702656+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481665024 unmapped: 80814080 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:34.702788+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481673216 unmapped: 80805888 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:35.702976+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481673216 unmapped: 80805888 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:36.703163+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481673216 unmapped: 80805888 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:37.703382+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481673216 unmapped: 80805888 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:38.703579+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481681408 unmapped: 80797696 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:39.703712+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481681408 unmapped: 80797696 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:40.703866+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481681408 unmapped: 80797696 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:41.704074+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481681408 unmapped: 80797696 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:42.704213+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481689600 unmapped: 80789504 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:43.704389+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481689600 unmapped: 80789504 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:44.704583+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481689600 unmapped: 80789504 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:45.704730+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481689600 unmapped: 80789504 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:46.705010+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481689600 unmapped: 80789504 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:47.705394+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481689600 unmapped: 80789504 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:48.705887+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481689600 unmapped: 80789504 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:49.706576+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481689600 unmapped: 80789504 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:50.706981+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 80781312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:51.707475+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 80781312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:52.707788+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 80781312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:53.707954+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 80781312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:54.708248+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 80781312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:55.708475+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 80781312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:56.708737+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 80781312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:57.708991+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 80781312 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:58.709229+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481705984 unmapped: 80773120 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:59.709492+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 80764928 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:00.709667+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 80764928 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:01.709899+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 80764928 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:02.710133+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 80764928 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:03.710357+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 80764928 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:04.710591+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 80764928 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:05.710803+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 80764928 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:06.710980+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481722368 unmapped: 80756736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:07.711190+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481722368 unmapped: 80756736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:08.711359+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481722368 unmapped: 80756736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:09.711632+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481722368 unmapped: 80756736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:10.711852+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481722368 unmapped: 80756736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:11.712073+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481722368 unmapped: 80756736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:12.712225+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481722368 unmapped: 80756736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:13.712499+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481722368 unmapped: 80756736 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:14.712709+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481730560 unmapped: 80748544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:15.712868+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481730560 unmapped: 80748544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:16.713061+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481730560 unmapped: 80748544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:17.713241+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481730560 unmapped: 80748544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:18.713472+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481730560 unmapped: 80748544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:19.713625+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481730560 unmapped: 80748544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:20.713788+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481730560 unmapped: 80748544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:21.713992+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481730560 unmapped: 80748544 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:22.714127+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481755136 unmapped: 80723968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:23.714272+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481755136 unmapped: 80723968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:24.714469+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481755136 unmapped: 80723968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:25.714612+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481755136 unmapped: 80723968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:26.714774+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481755136 unmapped: 80723968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:27.714910+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481755136 unmapped: 80723968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:28.715087+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481755136 unmapped: 80723968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:29.715245+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481755136 unmapped: 80723968 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:30.715429+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 80707584 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:31.715659+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 80707584 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:32.715827+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 80707584 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:33.716091+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 80707584 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:34.716308+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 80707584 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:35.716619+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 80707584 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:36.716905+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 80707584 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:37.717134+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481771520 unmapped: 80707584 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:38.717363+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481779712 unmapped: 80699392 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:39.717592+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481779712 unmapped: 80699392 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:40.717884+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481787904 unmapped: 80691200 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:41.718210+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481787904 unmapped: 80691200 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:42.718686+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481787904 unmapped: 80691200 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:43.718923+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481787904 unmapped: 80691200 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:44.719136+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481796096 unmapped: 80683008 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:45.719383+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481796096 unmapped: 80683008 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:46.719713+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481804288 unmapped: 80674816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:47.719975+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481804288 unmapped: 80674816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:48.720221+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481804288 unmapped: 80674816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:49.720502+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481804288 unmapped: 80674816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:50.720717+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481804288 unmapped: 80674816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:51.721157+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481804288 unmapped: 80674816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:52.721341+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481804288 unmapped: 80674816 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets getting new tickets!
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:53.721859+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _finish_auth 0
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:53.723817+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:54.722078+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481812480 unmapped: 80666624 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:55.722286+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481828864 unmapped: 80650240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:56.722533+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481828864 unmapped: 80650240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:57.722770+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481828864 unmapped: 80650240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:58.722975+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481828864 unmapped: 80650240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:59.723190+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481828864 unmapped: 80650240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:00.723387+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481828864 unmapped: 80650240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:01.723760+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481828864 unmapped: 80650240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:02.723966+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481828864 unmapped: 80650240 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:03.724180+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481845248 unmapped: 80633856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:04.724365+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481845248 unmapped: 80633856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:05.724537+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481845248 unmapped: 80633856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:06.724693+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481845248 unmapped: 80633856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:07.724829+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481845248 unmapped: 80633856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:08.724998+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481845248 unmapped: 80633856 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:09.725192+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481853440 unmapped: 80625664 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:10.725465+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481853440 unmapped: 80625664 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:11.725707+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481861632 unmapped: 80617472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:12.725966+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481861632 unmapped: 80617472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:13.726107+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481861632 unmapped: 80617472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:14.726341+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481861632 unmapped: 80617472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:15.726573+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481861632 unmapped: 80617472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:16.726834+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481861632 unmapped: 80617472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:17.727031+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481861632 unmapped: 80617472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:18.727259+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481861632 unmapped: 80617472 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:19.727399+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481878016 unmapped: 80601088 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:20.727600+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481878016 unmapped: 80601088 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:21.727874+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481878016 unmapped: 80601088 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:22.728041+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481878016 unmapped: 80601088 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:23.728262+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481878016 unmapped: 80601088 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:24.728529+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481878016 unmapped: 80601088 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:25.728705+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481878016 unmapped: 80601088 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:26.728949+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481878016 unmapped: 80601088 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:27.729184+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 80584704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:28.729406+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 80584704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:29.729656+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 80584704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:30.729880+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 80584704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:31.730105+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 80584704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:32.730283+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 80584704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:33.730487+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 80584704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:34.730720+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 80584704 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:35.730867+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481910784 unmapped: 80568320 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:36.731115+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481910784 unmapped: 80568320 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:37.731374+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481918976 unmapped: 80560128 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:38.731597+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481918976 unmapped: 80560128 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:39.731813+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481918976 unmapped: 80560128 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:40.732042+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481927168 unmapped: 80551936 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:41.732246+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481927168 unmapped: 80551936 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:42.732475+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481927168 unmapped: 80551936 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:43.732730+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481935360 unmapped: 80543744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:44.732935+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481935360 unmapped: 80543744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:45.733163+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481935360 unmapped: 80543744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:46.733568+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481935360 unmapped: 80543744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:47.733815+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481935360 unmapped: 80543744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:48.733986+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481935360 unmapped: 80543744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:49.734142+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481935360 unmapped: 80543744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:50.734313+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481935360 unmapped: 80543744 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:51.734548+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481959936 unmapped: 80519168 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:52.734742+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481959936 unmapped: 80519168 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:53.734951+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481959936 unmapped: 80519168 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:54.735145+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481959936 unmapped: 80519168 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:55.735323+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481959936 unmapped: 80519168 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:56.735535+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481959936 unmapped: 80519168 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:57.735737+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481959936 unmapped: 80519168 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:58.735936+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481959936 unmapped: 80519168 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:59.736095+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481959936 unmapped: 80519168 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:00.736239+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481968128 unmapped: 80510976 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:01.736442+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481976320 unmapped: 80502784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:02.736607+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481976320 unmapped: 80502784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:03.736820+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481976320 unmapped: 80502784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:04.737116+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481976320 unmapped: 80502784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:05.737291+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481976320 unmapped: 80502784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:06.737468+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481976320 unmapped: 80502784 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:07.737619+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481992704 unmapped: 80486400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:08.737782+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481992704 unmapped: 80486400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:09.737942+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481992704 unmapped: 80486400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:10.738094+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481992704 unmapped: 80486400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:11.738290+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481992704 unmapped: 80486400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:12.738486+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481992704 unmapped: 80486400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:13.738633+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481992704 unmapped: 80486400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:14.738735+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481992704 unmapped: 80486400 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:15.738894+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482009088 unmapped: 80470016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:16.739048+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482009088 unmapped: 80470016 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:17.739245+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482017280 unmapped: 80461824 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:18.739469+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482017280 unmapped: 80461824 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:19.739624+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482017280 unmapped: 80461824 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:20.739754+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482017280 unmapped: 80461824 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:21.739914+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482017280 unmapped: 80461824 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:22.740068+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482017280 unmapped: 80461824 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:23.740215+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482025472 unmapped: 80453632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:24.740406+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482025472 unmapped: 80453632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:25.740604+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482025472 unmapped: 80453632 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:26.740751+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482033664 unmapped: 80445440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:27.740844+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482033664 unmapped: 80445440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:28.740964+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482033664 unmapped: 80445440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:29.741104+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482033664 unmapped: 80445440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:30.741281+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482033664 unmapped: 80445440 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:31.741510+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482041856 unmapped: 80437248 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
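[Annotation] The lone _send_mon_message line shows where all this ticking points: the OSD's monitor session is pinned to mon.compute-0 at its msgr2 endpoint (v2:192.168.122.100:3300). The message type is not named at this log level; given the surrounding ticks it is most likely routine traffic such as a beacon or a log/subscription message rather than anything event-driven.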
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:32.741683+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482041856 unmapped: 80437248 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:33.741833+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482041856 unmapped: 80437248 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:34.741988+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482041856 unmapped: 80437248 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:35.742121+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482058240 unmapped: 80420864 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:36.742257+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482058240 unmapped: 80420864 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:37.742368+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482058240 unmapped: 80420864 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:38.742471+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482058240 unmapped: 80420864 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:39.742624+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482066432 unmapped: 80412672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:40.742756+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482066432 unmapped: 80412672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:41.742972+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482066432 unmapped: 80412672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:42.743114+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482066432 unmapped: 80412672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:43.743289+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482066432 unmapped: 80412672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:44.743503+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482066432 unmapped: 80412672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:45.743619+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482066432 unmapped: 80412672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:46.743702+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482066432 unmapped: 80412672 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:47.743832+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482082816 unmapped: 80396288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:48.743973+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482082816 unmapped: 80396288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:49.744107+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482082816 unmapped: 80396288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:50.744271+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482082816 unmapped: 80396288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:51.744509+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482082816 unmapped: 80396288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:52.744657+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482082816 unmapped: 80396288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:53.744800+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482082816 unmapped: 80396288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:54.744948+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482082816 unmapped: 80396288 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:55.745102+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482099200 unmapped: 80379904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:56.745267+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482099200 unmapped: 80379904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:57.745569+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482099200 unmapped: 80379904 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:58.745759+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482107392 unmapped: 80371712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:59.745915+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482107392 unmapped: 80371712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:00.746090+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482107392 unmapped: 80371712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:01.746318+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482107392 unmapped: 80371712 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:02.746515+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 80363520 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:03.746663+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482123776 unmapped: 80355328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:04.746823+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482123776 unmapped: 80355328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:05.746966+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482123776 unmapped: 80355328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:06.747131+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482123776 unmapped: 80355328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:07.747274+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482123776 unmapped: 80355328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:08.747465+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482123776 unmapped: 80355328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:09.747623+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482123776 unmapped: 80355328 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:10.747769+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482148352 unmapped: 80330752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:11.747926+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482148352 unmapped: 80330752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:12.748083+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482148352 unmapped: 80330752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:13.748288+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482148352 unmapped: 80330752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:14.748481+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482148352 unmapped: 80330752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:15.748668+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482148352 unmapped: 80330752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:16.748797+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482148352 unmapped: 80330752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:17.748937+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482148352 unmapped: 80330752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:18.749088+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482164736 unmapped: 80314368 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:19.749277+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482172928 unmapped: 80306176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:20.749467+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482172928 unmapped: 80306176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:21.749646+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482172928 unmapped: 80306176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:22.749785+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482172928 unmapped: 80306176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:23.749952+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482172928 unmapped: 80306176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:24.750096+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482172928 unmapped: 80306176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:25.750234+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482172928 unmapped: 80306176 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:26.750473+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482181120 unmapped: 80297984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:27.750628+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482181120 unmapped: 80297984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:28.750850+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482181120 unmapped: 80297984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:29.751060+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482181120 unmapped: 80297984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:30.751218+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482181120 unmapped: 80297984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:31.751394+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482181120 unmapped: 80297984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:32.751565+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482181120 unmapped: 80297984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:33.751714+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482181120 unmapped: 80297984 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:34.751868+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482205696 unmapped: 80273408 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:35.752014+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482205696 unmapped: 80273408 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:36.752171+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482205696 unmapped: 80273408 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:37.752356+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482205696 unmapped: 80273408 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:38.752563+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482205696 unmapped: 80273408 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:39.752701+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482205696 unmapped: 80273408 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:40.752927+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482205696 unmapped: 80273408 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:41.753142+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482205696 unmapped: 80273408 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:42.753273+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482213888 unmapped: 80265216 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:43.753462+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482222080 unmapped: 80257024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:44.753640+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482222080 unmapped: 80257024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:45.753782+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482222080 unmapped: 80257024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:46.753929+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482222080 unmapped: 80257024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:47.754077+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482222080 unmapped: 80257024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:48.754241+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482222080 unmapped: 80257024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:49.754377+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482222080 unmapped: 80257024 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:50.754568+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482230272 unmapped: 80248832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:51.754748+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482230272 unmapped: 80248832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:52.754869+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482230272 unmapped: 80248832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:53.755045+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482230272 unmapped: 80248832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:54.755220+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482230272 unmapped: 80248832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:55.755365+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482230272 unmapped: 80248832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:56.755551+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482230272 unmapped: 80248832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:57.755758+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482230272 unmapped: 80248832 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:58.755909+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482254848 unmapped: 80224256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:59.756079+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482254848 unmapped: 80224256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:00.756210+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482254848 unmapped: 80224256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:01.756348+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482254848 unmapped: 80224256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:02.756478+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482254848 unmapped: 80224256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:03.756669+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482254848 unmapped: 80224256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:04.756846+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482254848 unmapped: 80224256 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:05.757004+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482263040 unmapped: 80216064 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:06.757182+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482279424 unmapped: 80199680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:07.757335+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482279424 unmapped: 80199680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:08.757501+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482279424 unmapped: 80199680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:09.757660+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482279424 unmapped: 80199680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:10.757866+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482279424 unmapped: 80199680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:11.758040+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482279424 unmapped: 80199680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:12.758186+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482279424 unmapped: 80199680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:13.758343+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482279424 unmapped: 80199680 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:14.758480+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482295808 unmapped: 80183296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:15.758626+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482295808 unmapped: 80183296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:16.758816+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482295808 unmapped: 80183296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:17.758939+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482295808 unmapped: 80183296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:18.759099+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482295808 unmapped: 80183296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:19.759238+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482295808 unmapped: 80183296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:20.759402+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482295808 unmapped: 80183296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:21.759608+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482295808 unmapped: 80183296 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:22.759822+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482312192 unmapped: 80166912 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:23.759968+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:58 compute-1 ceph-osd[79002]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:58 compute-1 ceph-osd[79002]: bluestore.MempoolThread(0x55b551249b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5055045 data_alloc: 218103808 data_used: 33247232
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482312192 unmapped: 80166912 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:24.760117+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482312192 unmapped: 80166912 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:25.760262+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'config diff' '{prefix=config diff}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'config show' '{prefix=config show}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'counter dump' '{prefix=counter dump}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 80388096 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'counter schema' '{prefix=counter schema}'
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:26.760457+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: osd.1 437 heartbeat osd_stat(store_statfs(0x1a03b7000/0x0/0x1bfc00000, data 0x30aec44/0x32e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c55f9c6), peers [0,2] op hist [])
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 482148352 unmapped: 80330752 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: tick
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_tickets
Dec 06 08:46:58 compute-1 ceph-osd[79002]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:27.760586+0000)
Dec 06 08:46:58 compute-1 ceph-osd[79002]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 80764928 heap: 562479104 old mem: 2845415833 new mem: 2845415833
Dec 06 08:46:58 compute-1 ceph-osd[79002]: do_command 'log dump' '{prefix=log dump}'
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.46937 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.38709 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.47896 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.46955 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.46970 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2387806077' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1431281258' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3589491132' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3276557058' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3233308557' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3987429521' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2200948100' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3733909566' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1567968785' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:46:58 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/364251483' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:59 compute-1 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:46:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 08:46:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2843281197' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:46:59 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:46:59 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:59 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:59.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:59 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 06 08:46:59 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1819590850' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.46985 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.47938 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.47950 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: pgmap v4651: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.38775 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.47000 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/951096703' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3645184560' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3234307305' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2843281197' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1574297695' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2220555448' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1819590850' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3285086935' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:47:00 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec 06 08:47:00 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1027309975' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:47:00 compute-1 crontab[335271]: (root) LIST (root)
Dec 06 08:47:00 compute-1 nova_compute[226101]: 2025-12-06 08:47:00.431 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:47:00 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:00 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:00 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:00.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec 06 08:47:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3794276524' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.38790 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.47012 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.38796 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.47992 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.47024 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.38808 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.48007 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.47039 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1339204430' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1027309975' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.38820 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.48013 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.47051 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3703404667' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/793007941' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: pgmap v4652: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.48028 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.38835 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1156329899' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3099624559' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: from='client.38859 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec 06 08:47:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2157011927' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:47:01 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:01 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:47:01 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:01.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:47:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec 06 08:47:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1542503518' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:47:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:47:01.727 139580 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:47:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:47:01.728 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:47:01 compute-1 ovn_metadata_agent[139575]: 2025-12-06 08:47:01.728 139580 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:47:01 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec 06 08:47:01 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2007981825' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:47:02 compute-1 sshd-session[335343]: Received disconnect from 106.51.92.114 port 47512:11: Bye Bye [preauth]
Dec 06 08:47:02 compute-1 sshd-session[335343]: Disconnected from authenticating user root 106.51.92.114 port 47512 [preauth]
Dec 06 08:47:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec 06 08:47:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1940468244' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec 06 08:47:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4158636158' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.47084 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3794276524' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.47090 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2117309582' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.38874 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.48061 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2157011927' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1542503518' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2316704263' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/99096409' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.48073 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2007981825' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1940468244' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2097908437' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:47:02 compute-1 sshd-session[333645]: Connection reset by 165.154.55.146 port 32806 [preauth]
Dec 06 08:47:02 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:02 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:47:02 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:02.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:47:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec 06 08:47:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2840103552' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec 06 08:47:02 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3017613200' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec 06 08:47:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3584339672' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec 06 08:47:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1343773895' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:47:03 compute-1 nova_compute[226101]: 2025-12-06 08:47:03.242 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2935464987' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.48094 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.38901 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/4158636158' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3673175152' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.48112 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: pgmap v4653: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2840103552' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2803842601' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3017613200' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2303681952' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2845692263' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3584339672' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1343773895' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3048246434' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3063384302' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Dec 06 08:47:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1736339127' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Dec 06 08:47:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3550407173' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:47:03 compute-1 nova_compute[226101]: 2025-12-06 08:47:03.589 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:47:03 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:03 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:03 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:03.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:03 compute-1 nova_compute[226101]: 2025-12-06 08:47:03.629 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:47:03 compute-1 nova_compute[226101]: 2025-12-06 08:47:03.630 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:47:03 compute-1 nova_compute[226101]: 2025-12-06 08:47:03.630 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:47:03 compute-1 nova_compute[226101]: 2025-12-06 08:47:03.630 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:47:03 compute-1 nova_compute[226101]: 2025-12-06 08:47:03.630 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:47:03 compute-1 systemd[1]: Starting Hostname Service...
Dec 06 08:47:03 compute-1 systemd[1]: Started Hostname Service.
Dec 06 08:47:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 06 08:47:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2300073004' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:47:03 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Dec 06 08:47:03 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2655961151' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:47:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3522175721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.081 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:47:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.250 226109 WARNING nova.virt.libvirt.driver [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.252 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4002MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.252 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.252 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:47:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Dec 06 08:47:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/651100828' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Dec 06 08:47:04 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3240636314' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.398 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.399 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.48139 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/180698634' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1736339127' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3550407173' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3812902119' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1311921730' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1767667201' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4246357108' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2300073004' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2655961151' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3674706854' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2991099391' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3522175721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1015597887' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1493012721' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/651100828' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/3240636314' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4072605533' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.545 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing inventories for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.580 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating ProviderTree inventory for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.581 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Updating inventory in ProviderTree for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:47:04 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:04 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:04 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:04.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.602 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing aggregate associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.626 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Refreshing trait associations for resource provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83, traits: HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:47:04 compute-1 nova_compute[226101]: 2025-12-06 08:47:04.647 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:47:05 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:47:05 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/455727909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:47:05 compute-1 nova_compute[226101]: 2025-12-06 08:47:05.091 226109 DEBUG oslo_concurrency.processutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:47:05 compute-1 nova_compute[226101]: 2025-12-06 08:47:05.097 226109 DEBUG nova.compute.provider_tree [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed in ProviderTree for provider: 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:47:05 compute-1 nova_compute[226101]: 2025-12-06 08:47:05.156 226109 DEBUG nova.scheduler.client.report [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Inventory has not changed for provider 466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:47:05 compute-1 nova_compute[226101]: 2025-12-06 08:47:05.158 226109 DEBUG nova.compute.resource_tracker [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:47:05 compute-1 nova_compute[226101]: 2025-12-06 08:47:05.158 226109 DEBUG oslo_concurrency.lockutils [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3104068488' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1280998855' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2684433230' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: pgmap v4654: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.47204 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2338196638' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.47210 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1316993960' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1410012521' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4249903457' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2475362331' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/455727909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1486880162' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3925471560' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-1 nova_compute[226101]: 2025-12-06 08:47:05.432 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:47:05 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:05 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:05 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:05.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Dec 06 08:47:06 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2538820321' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Dec 06 08:47:06 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/28598356' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.47231 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.47225 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.39012 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3960332756' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.47240 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.39018 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3820400097' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1667974750' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.39024 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.47249 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2538820321' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:47:06 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/238822674' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:06 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:06 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:06.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:06 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 06 08:47:06 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/59356258' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Dec 06 08:47:07 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1255464285' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.39030 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.48241 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.47270 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/28598356' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.39045 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.48262 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.48256 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: pgmap v4655: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2133397263' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.47288 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/59356258' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.48274 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.39057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.47303 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1524758473' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/1255464285' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:07 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/4031061109' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:47:07 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:07 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:47:07 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:07.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:47:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:08 compute-1 nova_compute[226101]: 2025-12-06 08:47:08.257 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:47:08 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Dec 06 08:47:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2248560303' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='client.48286 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='client.39069 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2661229246' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='client.48301 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='client.39093 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/396089789' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/1542819250' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='client.48319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='client.39120 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1229308044' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:08 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:08 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:08.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:47:09 compute-1 podman[336643]: 2025-12-06 08:47:09.269052864 +0000 UTC m=+0.063338628 container health_status 69167d871c6c6a0a1573a5822f2189a600a9a248b40d70fa658ff5cc7b416a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:47:09 compute-1 podman[336642]: 2025-12-06 08:47:09.279470424 +0000 UTC m=+0.074767656 container health_status 46208bd2ea655ca2e11e7486130c25584997c86e2f88746367704a4743f1f3f2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:47:09 compute-1 podman[336644]: 2025-12-06 08:47:09.297588503 +0000 UTC m=+0.091465256 container health_status b55bf730497ecac6795d5ce3949daf0b9e2ad72bd7cd125e90105f9e0e4400e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 06 08:47:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Dec 06 08:47:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/481958975' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:09 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:47:09 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:09.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='client.48337 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2248560303' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-1 ceph-mon[81689]: pgmap v4656: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1850885369' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='client.48361 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='client.47384 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/2502847766' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4094367856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.10:0/4094367856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/481958975' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:09 compute-1 ceph-mon[81689]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Dec 06 08:47:09 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/254992418' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:47:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Dec 06 08:47:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/224885905' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:47:10 compute-1 nova_compute[226101]: 2025-12-06 08:47:10.435 226109 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 38 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:47:10 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:10 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:10 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:10.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:10 compute-1 ceph-mon[81689]: from='client.39216 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/254992418' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:47:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/3519464518' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:47:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3265975113' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:47:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/224885905' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:47:10 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/592466088' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:47:10 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Dec 06 08:47:10 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2648013193' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:47:11 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:11 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:11 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:11.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:11 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Dec 06 08:47:11 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2461841317' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:47:11 compute-1 ceph-mon[81689]: from='client.48445 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:11 compute-1 ceph-mon[81689]: pgmap v4657: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/2162878835' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:47:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2648013193' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:47:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/3318357680' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:47:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/1945036162' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:47:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.100:0/246440838' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:47:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.102:0/50389982' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:47:11 compute-1 ceph-mon[81689]: from='client.? 192.168.122.101:0/2461841317' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:47:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Dec 06 08:47:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3140350994' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:47:12 compute-1 nova_compute[226101]: 2025-12-06 08:47:12.152 226109 DEBUG oslo_service.periodic_task [None req-459ed0aa-7812-4dbe-87d1-046464b7c46c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:47:12 compute-1 radosgw[82404]: ====== starting new request req=0x7fc73d78e6f0 =====
Dec 06 08:47:12 compute-1 radosgw[82404]: ====== req done req=0x7fc73d78e6f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:47:12 compute-1 radosgw[82404]: beast: 0x7fc73d78e6f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:12.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:47:12 compute-1 ceph-mon[81689]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Dec 06 08:47:12 compute-1 ceph-mon[81689]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3858655743' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
